[jira] [Commented] (YARN-6982) Potential issue on setting AMContainerSpec#tokenConf to null before app is completed

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146665#comment-16146665
 ] 

Hudson commented on YARN-6982:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12270 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12270/])
YARN-6982. Potential issue on setting AMContainerSpec#tokenConf to null 
(rohithsharmaks: rev 4cae120c619811006b26b9a95680a98732572af6)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java


> Potential issue on setting AMContainerSpec#tokenConf to null before app is 
> completed
> 
>
> Key: YARN-6982
> URL: https://issues.apache.org/jira/browse/YARN-6982
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Manikandan R
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6982.001.patch
>
>
> While reviewing the patch for YARN-65, I found that in many places 
> RMAppImpl#submissionContext sets ContainerLaunchContext#setTokensConf 
> to null, i.e.
> {code}
> // set the memory free
> app.submissionContext.getAMContainerSpec().setTokensConf(null);
> {code}
> This appears to be an issue: if the application is updated (for example by a 
> queue move, or a lifetime or priority change), the submission context is 
> persisted again into the state store. If the RM is restarted after such an 
> update, the restored submission context will have a null tokenConf. This 
> could be a potential issue. 
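The safe ordering described above can be sketched in a self-contained way. This is an illustrative sketch with simplified stand-in types (AMContainerSpec and the states here are not the actual RMAppImpl code): defer clearing tokensConf until the app reaches a final state, so any re-persist of the submission context still carries the real value.

```java
import java.nio.ByteBuffer;

public class TokensConfSketch {
    enum AppState { RUNNING, FINISHED }

    // Stand-in for the AM ContainerLaunchContext holding tokensConf.
    public static class AMContainerSpec {
        private ByteBuffer tokensConf = ByteBuffer.wrap(new byte[]{1, 2, 3});
        public ByteBuffer getTokensConf() { return tokensConf; }
        public void setTokensConf(ByteBuffer b) { tokensConf = b; }
    }

    // Only free the (potentially large) tokensConf once the app is done;
    // clearing it earlier means a later re-persist of the submission
    // context would store null into the state store.
    static void maybeClearTokensConf(AMContainerSpec spec, AppState state) {
        if (state == AppState.FINISHED) {
            spec.setTokensConf(null);
        }
    }

    public static void main(String[] args) {
        AMContainerSpec spec = new AMContainerSpec();
        maybeClearTokensConf(spec, AppState.RUNNING);
        // Still available if the context is persisted again after an update.
        System.out.println(spec.getTokensConf() != null);
        maybeClearTokensConf(spec, AppState.FINISHED);
        System.out.println(spec.getTokensConf() == null);
    }
}
```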



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7033) Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to container

2017-08-29 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146663#comment-16146663
 ] 

Devaraj K commented on YARN-7033:
-

Thanks [~leftnoteasy] and [~sunilg] for the confirmation. I will update the 
patch to revert the enum change. 

> Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to 
> container
> ---
>
> Key: YARN-7033
> URL: https://issues.apache.org/jira/browse/YARN-7033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: YARN-7033-v0.patch, YARN-7033-v1.patch, 
> YARN-7033-v2.patch
>
>
> This JIRA adds the common logic to store resources assigned to a container, 
> such as GPUs (YARN-6620), NUMA (YARN-5764), and FPGAs (YARN-5983), and to 
> recover them upon restart of the NM.






[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146629#comment-16146629
 ] 

Xuan Gong commented on YARN-6877:
-

All the failed test cases pass in my local environment.

> Create an abstract log reader for extendability
> ---
>
> Key: YARN-6877
> URL: https://issues.apache.org/jira/browse/YARN-6877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, 
> YARN-6877-trunk.002.patch, YARN-6877-trunk.003.patch, 
> YARN-6877-trunk.004.patch, YARN-6877-trunk.005.patch, 
> YARN-6877-trunk.006.patch
>
>
> Currently, the TFile log reader is used to read aggregated logs in YARN. We 
> need to add an abstraction layer and pick the correct log reader based on the 
> configuration.






[jira] [Commented] (YARN-7127) Run yarn-native-services branch against trunk

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146622#comment-16146622
 ] 

Jian He commented on YARN-7127:
---

The patch includes the diff between the yarn-native-services branch and trunk.

> Run yarn-native-services branch against trunk
> -
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7127.01.patch
>
>







[jira] [Assigned] (YARN-7127) Run yarn-native-services branch against trunk

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7127:
-

Assignee: Jian He

> Run yarn-native-services branch against trunk
> -
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7127.01.patch
>
>







[jira] [Updated] (YARN-7127) Run yarn-native-services branch against trunk

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7127:
--
Attachment: YARN-7127.01.patch

> Run yarn-native-services branch against trunk
> -
>
> Key: YARN-7127
> URL: https://issues.apache.org/jira/browse/YARN-7127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
> Attachments: YARN-7127.01.patch
>
>







[jira] [Commented] (YARN-7073) Rest API site documentation

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146619#comment-16146619
 ] 

Jian He commented on YARN-7073:
---

Uploaded a new patch with an updated doc and a few changes in ApiServer.

> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch, 
> YARN-7073-yarn-native-services.002.patch, 
> YARN-7073-yarn-native-services.003.patch
>
>
> Commit site documentation for REST API service, generated from the swagger 
> definition as a MD file.






[jira] [Updated] (YARN-7073) Rest API site documentation

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7073:
--
Attachment: YARN-7073-yarn-native-services.003.patch

> Rest API site documentation
> ---
>
> Key: YARN-7073
> URL: https://issues.apache.org/jira/browse/YARN-7073
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, site
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-7073-yarn-native-services.001.patch, 
> YARN-7073-yarn-native-services.002.patch, 
> YARN-7073-yarn-native-services.003.patch
>
>
> Commit site documentation for REST API service, generated from the swagger 
> definition as a MD file.






[jira] [Updated] (YARN-6721) container-executor should have stack checking

2017-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-6721:
---
Target Version/s: 3.0.0-beta1

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)






[jira] [Assigned] (YARN-6721) container-executor should have stack checking

2017-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned YARN-6721:
--

Assignee: Allen Wittenauer  (was: Sunil G)

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: security
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)






[jira] [Updated] (YARN-6721) container-executor should have stack checking

2017-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-6721:
---
Target Version/s:   (was: 2.7.5)

> container-executor should have stack checking
> -
>
> Key: YARN-6721
> URL: https://issues.apache.org/jira/browse/YARN-6721
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, security
>Reporter: Allen Wittenauer
>Assignee: Sunil G
>  Labels: security
>
> As per https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt and 
> given that container-executor is setuid, it should be compiled with stack 
> checking if the compiler supports such features.  (-fstack-check on gcc, 
> -fsanitize=safe-stack on clang, -xcheck=stkovf on "Oracle Solaris Studio", 
> others as we find them, ...)






[jira] [Comment Edited] (YARN-6737) Rename getApplicationAttempt to getCurrentAttempt in AbstractYarnScheduler/CapacityScheduler

2017-08-29 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146586#comment-16146586
 ] 

Tao Yang edited comment on YARN-6737 at 8/30/17 4:33 AM:
-

Uploaded the v1 patch for trunk. Sorry for the late update. 
I have scanned all the usages of AbstractYarnScheduler#getApplicationAttempt 
and CapacityScheduler#getApplicationAttempt and found one potential problem in 
QueuePriorityContainerCandidateSelector#preChecksForMovingReservedContainerToNode.
{code}
FiCaSchedulerApp app =
preemptionContext.getScheduler().getCurrentApplicationAttempt(
reservedContainer.getApplicationAttemptId());
if (!app.getAppSchedulingInfo().canDelayTo(
reservedContainer.getAllocatedSchedulerKey(), ResourceRequest.ANY)) {
  // This is a hard locality request
  return false;
}
{code}
An NPE will occur here if the app no longer exists. I think we can correct it 
by adding a null check for app like this (the outer caller will then skip this 
invalid reservedContainer):
{code}
FiCaSchedulerApp app =
preemptionContext.getScheduler().getCurrentApplicationAttempt(
reservedContainer.getApplicationAttemptId());
if (app == null || !app.getAppSchedulingInfo().canDelayTo(
reservedContainer.getAllocatedSchedulerKey(), ResourceRequest.ANY)) {
  // This is a hard locality request
  return false;
}
{code}
[~sunilg] Please help to review this patch. Thanks!



> Rename getApplicationAttempt to getCurrentAttempt in 
> AbstractYarnScheduler/CapacityScheduler
> 
>
> Key: YARN-6737
> URL: https://issues.apache.org/jira/browse/YARN-6737
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Priority: Minor
> Attachments: YARN-6737.001.patch
>
>
> As discussed in YARN-6714 
> (https://issues.apache.org/jira/browse/YARN-6714?focusedCommentId=16052158&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16052158)
> AbstractYarnScheduler#getApplicationAttempt is inconsistent with its name: it 
> discards the application_attempt_id and always returns the latest attempt. We 
> should: 1) rename it to getCurrentAttempt, 2) change the parameter from 
> attemptId to applicationId, and 3) scan all usages to see if any similar 
> issue could happen.






[jira] [Updated] (YARN-6737) Rename getApplicationAttempt to getCurrentAttempt in AbstractYarnScheduler/CapacityScheduler

2017-08-29 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6737:
---
Attachment: YARN-6737.001.patch

Uploaded the v1 patch for trunk. 
Sorry for the late update. I have scanned all the usages of 
AbstractYarnScheduler#getApplicationAttempt and 
CapacityScheduler#getApplicationAttempt and found one potential problem in 
QueuePriorityContainerCandidateSelector#preChecksForMovingReservedContainerToNode.
{code}
FiCaSchedulerApp app =
preemptionContext.getScheduler().getCurrentApplicationAttempt(
reservedContainer.getApplicationAttemptId());
if (!app.getAppSchedulingInfo().canDelayTo(
reservedContainer.getAllocatedSchedulerKey(), ResourceRequest.ANY)) {
  // This is a hard locality request
  return false;
}
{code}
An NPE will occur here if the app no longer exists. I think we can correct it 
by adding a null check for app like this (the outer caller will then skip this 
invalid reservedContainer):
{code}
FiCaSchedulerApp app =
preemptionContext.getScheduler().getCurrentApplicationAttempt(
reservedContainer.getApplicationAttemptId());
if (app == null || !app.getAppSchedulingInfo().canDelayTo(
reservedContainer.getAllocatedSchedulerKey(), ResourceRequest.ANY)) {
  // This is a hard locality request
  return false;
}
{code}
[~sunilg] Please help to review this patch. Thanks!

> Rename getApplicationAttempt to getCurrentAttempt in 
> AbstractYarnScheduler/CapacityScheduler
> 
>
> Key: YARN-6737
> URL: https://issues.apache.org/jira/browse/YARN-6737
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Priority: Minor
> Attachments: YARN-6737.001.patch
>
>
> As discussed in YARN-6714 
> (https://issues.apache.org/jira/browse/YARN-6714?focusedCommentId=16052158&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16052158)
> AbstractYarnScheduler#getApplicationAttempt is inconsistent with its name: it 
> discards the application_attempt_id and always returns the latest attempt. We 
> should: 1) rename it to getCurrentAttempt, 2) change the parameter from 
> attemptId to applicationId, and 3) scan all usages to see if any similar 
> issue could happen.






[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146566#comment-16146566
 ] 

Hudson commented on YARN-6756:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12268 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12268/])
YARN-6756. ContainerRequest#executionTypeRequest causes NPE. Contributed 
(jianhe: rev 8201ed8009e5f04c49568a8133635d47fcde3989)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientContainerRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java


> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
>Priority: Critical
> Attachments: YARN-6756.01.patch, YARN-6756.02.patch
>
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}
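One way to avoid the NPE, sketched here with simplified stand-in types (these are illustrative, not the real Hadoop classes), is to fall back to a default ExecutionTypeRequest when the caller supplied none, instead of dereferencing the null field:

```java
public class ExecTypeGuardSketch {
    enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

    // Stand-in for the YARN ExecutionTypeRequest record.
    public static class ExecutionTypeRequest {
        private final ExecutionType type;
        ExecutionTypeRequest(ExecutionType t) { type = t; }
        public ExecutionType getExecutionType() { return type; }
        public static ExecutionTypeRequest newInstance() {
            // GUARANTEED is YARN's default execution type.
            return new ExecutionTypeRequest(ExecutionType.GUARANTEED);
        }
    }

    // Null-safe resolution: default rather than NPE when no request was set.
    static ExecutionType resolve(ExecutionTypeRequest execTypeReq) {
        ExecutionTypeRequest req = (execTypeReq != null)
            ? execTypeReq : ExecutionTypeRequest.newInstance();
        return req.getExecutionType();
    }

    public static void main(String[] args) {
        // A request built without an execution type no longer throws.
        System.out.println(resolve(null));
    }
}
```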






[jira] [Created] (YARN-7127) Run yarn-native-services branch against trunk

2017-08-29 Thread Jian He (JIRA)
Jian He created YARN-7127:
-

 Summary: Run yarn-native-services branch against trunk
 Key: YARN-7127
 URL: https://issues.apache.org/jira/browse/YARN-7127
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He









[jira] [Assigned] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He reassigned YARN-7100:
-

Assignee: Eric Yang

> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch, 
> YARN-7100.003.yarn-native-services.patch, 
> YARN-7100.004.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method, 
> introduced in YARN-6903, for converting memory from a string to a long value. 
> However, the method name getMemoryMB introduces new output in the JSON that 
> looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API because the 
> memory_mb property is unknown to the REST API. It may be better to rename the 
> getMemoryMB method to calcMemoryMB so that it is not serialized 
> unintentionally.
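The behavior described here follows JavaBeans conventions: serializers that discover properties by introspection treat get*-named methods as properties, while other names are ignored. A stdlib-only sketch (class names are illustrative, not the real Resource record) using java.beans.Introspector shows why renaming getMemoryMB to calcMemoryMB keeps the extra property out of the output:

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class ResourceSketch {
    // Helper named as a getter: introspection reports an extra "memoryMB"
    // property, which is what leaks into the serialized JSON.
    public static class WithGetter {
        public String getMemory() { return "2048"; }
        public long getMemoryMB() { return 2048L; }
    }

    // Helper renamed: no extra property, so the JSON keeps only the
    // fields the REST API knows about.
    public static class WithCalc {
        public String getMemory() { return "2048"; }
        public long calcMemoryMB() { return 2048L; }
    }

    static boolean hasProperty(Class<?> c, String name)
            throws IntrospectionException {
        for (PropertyDescriptor pd : Introspector
                .getBeanInfo(c, Object.class).getPropertyDescriptors()) {
            if (pd.getName().equals(name)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IntrospectionException {
        System.out.println(hasProperty(WithGetter.class, "memoryMB")); // true
        System.out.println(hasProperty(WithCalc.class, "memoryMB"));   // false
    }
}
```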






[jira] [Commented] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146543#comment-16146543
 ] 

Jian He commented on YARN-7100:
---

Patch committed to the yarn-native-services branch. 
Thanks [~eyang]!

> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch, 
> YARN-7100.003.yarn-native-services.patch, 
> YARN-7100.004.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method, 
> introduced in YARN-6903, for converting memory from a string to a long value. 
> However, the method name getMemoryMB introduces new output in the JSON that 
> looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API because the 
> memory_mb property is unknown to the REST API. It may be better to rename the 
> getMemoryMB method to calcMemoryMB so that it is not serialized 
> unintentionally.






[jira] [Commented] (YARN-7094) Document that server-side graceful decom is currently not recommended

2017-08-29 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146488#comment-16146488
 ] 

Junping Du commented on YARN-7094:
--

Thanks Robert for updating the patch. +1 on 002 patch. Commit pending on 
Jenkins report.

> Document that server-side graceful decom is currently not recommended
> -
>
> Key: YARN-7094
> URL: https://issues.apache.org/jira/browse/YARN-7094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-7094.001.patch, YARN-7094.002.patch
>
>
> Server-side NM graceful decom currently does not work correctly when an RM 
> failover occurs because we don't persist the info in the state store (see 
> YARN-5464).  Given time constraints for Hadoop 3 beta 1, we've decided to 
> document this limitation and recommend client-side NM graceful decom in the 
> meantime if you need this functionality (see [this 
> comment|https://issues.apache.org/jira/browse/YARN-5464?focusedCommentId=16126119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126119]).
>   Once YARN-5464 is done, we can undo this doc change.






[jira] [Commented] (YARN-6868) Add test scope to certain entries in hadoop-yarn-server-resourcemanager pom.xml

2017-08-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146485#comment-16146485
 ] 

Haibo Chen commented on YARN-6868:
--

+1. Will commit it tomorrow.

> Add test scope to certain entries in hadoop-yarn-server-resourcemanager 
> pom.xml
> ---
>
> Key: YARN-6868
> URL: https://issues.apache.org/jira/browse/YARN-6868
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-6868.001.patch
>
>
> The tag
> {noformat}
> <scope>test</scope>
> {noformat}
> is missing from a few entries in the pom.xml for 
> hadoop-yarn-server-resourcemanager.
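For reference, a test-only dependency entry with the scope added looks roughly like the following pom.xml fragment (the artifact shown is illustrative, not necessarily one of the affected entries):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <!-- test scope keeps the jar off the compile/runtime classpath -->
  <scope>test</scope>
</dependency>
```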






[jira] [Commented] (YARN-7071) Add vcores and number of containers in web UI v2 node heat map

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146473#comment-16146473
 ] 

Sunil G commented on YARN-7071:
---

Yes [~templedf] and [~leftnoteasy].
The added container metrics are definitely helpful in some cases, so I am 
perfectly fine with this. Thanks for clearing my doubts.

Also thanks [~ayousufi], the latest changes seem fine and the positioning seems 
better. I will test and update with my feedback.

> Add vcores and number of containers in web UI v2 node heat map
> --
>
> Key: YARN-7071
> URL: https://issues.apache.org/jira/browse/YARN-7071
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: dropdown updated screenshot.png, Screen Shot 2017-08-29 
> at 7.42.26 PM.png, YARN-7071.001.patch, YARN-7071.002.patch
>
>
> Currently, the node heat map displays memory usage per node. This change 
> would add a dropdown to view cpu vcores or number of containers as well.






[jira] [Commented] (YARN-6839) [YARN-3368] Be able to view application / container logs on the new YARN UI.

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146469#comment-16146469
 ] 

Sunil G commented on YARN-6839:
---

Yes. Updated ticket as per this understanding.

> [YARN-3368] Be able to view application / container logs on the new YARN UI.
> 
>
> Key: YARN-6839
> URL: https://issues.apache.org/jira/browse/YARN-6839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>
> Currently, viewing application/container logs redirects to the old UI; we 
> should leverage the new UI's capabilities to provide a better log-viewing 
> experience.






[jira] [Commented] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-29 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146470#comment-16146470
 ] 

Tao Yang commented on YARN-7037:


Thanks [~djp] for review and commit !

> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement out from YARN-6259.
> It's useful to read container logs more efficiently. With the zero-copy 
> approach, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be optimized to (disk --> read buffer --> socket buffer).
> In my local test, the time to copy a 256MB file was reduced from 12 seconds 
> to 2.5 seconds with zero-copy.
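The zero-copy path described above can be sketched with plain JDK APIs: FileChannel.transferTo hands the copy to the kernel (sendfile on Linux), skipping the intermediate user-space buffer. This is an illustrative sketch under stated assumptions, not the actual NMWebServices patch; file names are made up:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySketch {
    // Copy src to dst via FileChannel.transferTo, returning bytes copied.
    static long zeroCopy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                 StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                 StandardOpenOption.TRUNCATE_EXISTING)) {
            long pos = 0;
            long size = in.size();
            // transferTo may copy fewer bytes than requested; loop until done.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
            return pos;
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("container", ".log");
        Path dst = Files.createTempFile("out", ".log");
        Files.write(src, "some container log data".getBytes());
        long copied = zeroCopy(src, dst);
        // Verify the whole file arrived intact.
        System.out.println(copied == Files.size(dst));
    }
}
```

In the NM case the destination would be the servlet response's socket channel rather than a file, which is where the saved NM buffer copy comes from.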






[jira] [Resolved] (YARN-6839) [YARN-3368] Be able to view application / container logs on the new YARN UI.

2017-08-29 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-6839.
---
Resolution: Duplicate

> [YARN-3368] Be able to view application / container logs on the new YARN UI.
> 
>
> Key: YARN-6839
> URL: https://issues.apache.org/jira/browse/YARN-6839
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>
> Currently, viewing application/container logs redirects to the old UI; we 
> should leverage the new UI's capabilities to provide a better log-viewing 
> experience.






[jira] [Commented] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-29 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146467#comment-16146467
 ] 

Abdullah Yousufi commented on YARN-6911:


Thanks for the comments, [~sunilg]. I'll resolve those styling issues and 
upload a new patch.

> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Graph Screenshot.png, Resource Utilization 
> Graph Mock Up.png, YARN-6911.001.patch, YARN-6911.002.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, cpu, etc.) per application using the ATSv2 time series data. Rough 
> mock up attached.






[jira] [Commented] (YARN-7033) Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to container

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146466#comment-16146466
 ] 

Sunil G commented on YARN-7033:
---

Sigh. That's on me. 
I thought this recovery model was only for GPU/NUMA/FPGA. So post YARN-3926, 
any arbitrary new resource could be considered here. [~leftnoteasy]

> Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to 
> container
> ---
>
> Key: YARN-7033
> URL: https://issues.apache.org/jira/browse/YARN-7033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: YARN-7033-v0.patch, YARN-7033-v1.patch, 
> YARN-7033-v2.patch
>
>
> This JIRA adds the common logic to store the resources assigned to a 
> container, such as GPUs (YARN-6620), NUMA (YARN-5764), and FPGAs (YARN-5983), 
> and to recover them upon NM restart.
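
The store-and-recover idea can be sketched as follows. This is a deliberately tiny stand-in for the NM state store (the file format, class, and method names are all invented for illustration; the real patch extends the NM's state-store machinery):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AssignedResourceStoreDemo {
    // Persist one containerId -> assigned device IDs mapping per line.
    static void store(Path db, String containerId, List<String> devices)
            throws IOException {
        Files.write(db, List.of(containerId + "=" + String.join(",", devices)),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Rebuild the full assignment map from the persisted lines.
    static Map<String, List<String>> recover(Path db) throws IOException {
        Map<String, List<String>> out = new HashMap<>();
        for (String line : Files.readAllLines(db)) {
            String[] kv = line.split("=", 2);
            out.put(kv[0], Arrays.asList(kv[1].split(",")));
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path db = Files.createTempFile("nm-state", ".txt");
        store(db, "container_1", List.of("gpu0", "gpu1"));
        // Simulated NM restart: the in-memory map is gone, so the
        // assignments are recovered from the on-disk store.
        System.out.println(recover(db).get("container_1"));
        Files.delete(db);
    }
}
```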






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146462#comment-16146462
 ] 

Abdullah Yousufi commented on YARN-7088:


Sorry, to clarify, I meant the delta between startTime and launchTime vs. the 
delta between submitTime and launchTime. I'm not seeing why pendingTime is 
confusing, because I think of it as referring to the application's pending 
state after it is accepted and before it begins running.
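
The two deltas being compared can be sketched with hypothetical timestamps (the variable names mirror the fields discussed; the millisecond values are made up):

```java
public class AppTimesDemo {
    public static void main(String[] args) {
        long submitTime = 1_000L;  // client submits the app to the RM
        long startTime  = 1_050L;  // RM accepts/registers the app shortly after
        long launchTime = 9_000L;  // scheduler launches the AM container
        // Delta discussed as "pendingTime": wait from submission to AM launch.
        System.out.println(launchTime - submitTime);
        // Wait from acceptance to AM launch; nearly the same number when
        // registration happens right after submission.
        System.out.println(launchTime - startTime);
    }
}
```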

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Commented] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146461#comment-16146461
 ] 

Sunil G commented on YARN-6911:
---

Thanks [~abdullah.yousufi_impala_ff43]
Overall this is a very useful feature; thanks for working on it. Some minor UI 
look-and-feel comments:
I think the labels on the right side for Memory and VCores (change "CPU" to 
"VCores") don't match the UI style. Could you please follow the same font and 
text style there? Also, the color coding is shown as squares; circles may fit 
better with the existing pie charts. Could you please try some of these out and 
upload the option that best matches the overall UI pattern?

> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Graph Screenshot.png, Resource Utilization 
> Graph Mock Up.png, YARN-6911.001.patch, YARN-6911.002.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, cpu, etc.) per application using the ATSv2 time series data. Rough 
> mock up attached.






[jira] [Commented] (YARN-5536) Multiple format support (JSON, etc.) for exclude node file in NM graceful decommission with timeout

2017-08-29 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146459#comment-16146459
 ] 

Robert Kanter commented on YARN-5536:
-

Sounds like we won't have time to implement the "full version".  In that case, 
I think we should go with the second, faster option (remove the XML format); it 
will be safer from a compatibility perspective if we don't have to worry about 
the XML formatting.  The downside (being unable to specify an individual 
timeout per host) sounds like an uncommon use case.  [~djp], what do you think?

> Multiple format support (JSON, etc.) for exclude node file in NM graceful 
> decommission with timeout
> ---
>
> Key: YARN-5536
> URL: https://issues.apache.org/jira/browse/YARN-5536
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Junping Du
>Priority: Blocker
>
> Per the discussion in YARN-4676, we agree that multiple formats (other than 
> XML) should be supported for decommissioning nodes with timeout values.






[jira] [Updated] (YARN-7094) Document that server-side graceful decom is currently not recommended

2017-08-29 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-7094:

Attachment: YARN-7094.002.patch

That's a good point.  We don't want to introduce a temporary thing, have users 
start thinking it's the correct behavior, and then get stuck with something 
backwards-incompatible. :D

The new patch changes the wording to make it clear that this is a known issue.

> Document that server-side graceful decom is currently not recommended
> -
>
> Key: YARN-7094
> URL: https://issues.apache.org/jira/browse/YARN-7094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Affects Versions: 3.0.0-beta1
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Attachments: YARN-7094.001.patch, YARN-7094.002.patch
>
>
> Server-side NM graceful decom currently does not work correctly when an RM 
> failover occurs because we don't persist the info in the state store (see 
> YARN-5464).  Given time constraints for Hadoop 3 beta 1, we've decided to 
> document this limitation and recommend client-side NM graceful decom in the 
> meantime if you need this functionality (see [this 
> comment|https://issues.apache.org/jira/browse/YARN-5464?focusedCommentId=16126119=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16126119]).
>   Once YARN-5464 is done, we can undo this doc change.






[jira] [Commented] (YARN-7022) Improve click interaction in queue nodes tree in Web UI v2

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146454#comment-16146454
 ] 

Sunil G commented on YARN-7022:
---

Yeah, this behavior seems fine enough for now, I think.

> Improve click interaction in queue nodes tree in Web UI v2
> --
>
> Key: YARN-7022
> URL: https://issues.apache.org/jira/browse/YARN-7022
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7022.001.patch
>
>
> Currently, interacting with the tree view in the queues tab of the UI is 
> awkward: you must mouse over to select a queue node and then click to drill 
> down. It would be more intuitive to single-click to select a different queue 
> and double-click to drill down instead.






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146452#comment-16146452
 ] 

Sunil G commented on YARN-7088:
---

bq.same as the difference between when the app is submitted and launched? From 
what I've seen, the application is registered in the RM very soon after it is 
submitted. 

This is not the same. Essentially, when the max-am-percent limit is hit, an 
app can stay in the ACCEPTED state for a long time; how soon it gets an AM 
container depends on the scheduler. Hence "pending" seems like a confusing 
and inaccurate name.

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146450#comment-16146450
 ] 

Hadoop QA commented on YARN-6877:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 314 unchanged - 11 fixed = 322 total (was 325) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 43s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.TestApplicationClientProtocolOnHA |
|   | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.TestRMFailover |
|   | hadoop.yarn.client.cli.TestYarnCLI |
|   | 

[jira] [Comment Edited] (YARN-7022) Improve click interaction in queue nodes tree in Web UI v2

2017-08-29 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145858#comment-16145858
 ] 

Abdullah Yousufi edited comment on YARN-7022 at 8/30/17 1:21 AM:
-

Hey [~sunilg], I just checked and the cursor does change to a hand when you 
hover over a node. Should it do something else in addition?


was (Author: ayousufi):
Hey [~sunilg], I just checked and the cursor does change to a hand with you 
hover over a node. Should it do something else in addition?

> Improve click interaction in queue nodes tree in Web UI v2
> --
>
> Key: YARN-7022
> URL: https://issues.apache.org/jira/browse/YARN-7022
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7022.001.patch
>
>
> Currently, interacting with the tree view in the queues tab of the UI is 
> awkward: you must mouse over to select a queue node and then click to drill 
> down. It would be more intuitive to single-click to select a different queue 
> and double-click to drill down instead.






[jira] [Updated] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-7088:
---
Attachment: YARN-7088.006.patch

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146442#comment-16146442
 ] 

Hadoop QA commented on YARN-7088:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-7088 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884380/YARN-7088.005.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17198/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two separate fields, one for the app's 
> submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Updated] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-7088:
---
Attachment: YARN-7088.005.patch

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Commented] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146434#comment-16146434
 ] 

Hadoop QA commented on YARN-7100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884374/YARN-7100.004.yarn-native-services.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2654854bd545 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 54ad59a |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17196/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17196/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
> 

[jira] [Commented] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146433#comment-16146433
 ] 

Hadoop QA commented on YARN-6911:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6911 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884378/YARN-6911.002.patch |
| Optional Tests |  asflicense  |
| uname | Linux d0b75fa2e39e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cf93d60 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17197/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Graph Screenshot.png, Resource Utilization 
> Graph Mock Up.png, YARN-6911.001.patch, YARN-6911.002.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, cpu, etc.) per application using the ATSv2 time series data. Rough 
> mock up attached.






[jira] [Updated] (YARN-6911) Graph application-level resource utilization in Web UI v2

2017-08-29 Thread Abdullah Yousufi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdullah Yousufi updated YARN-6911:
---
Attachment: YARN-6911.002.patch

New patch with some refactored and cleaned-up code.

> Graph application-level resource utilization in Web UI v2
> -
>
> Key: YARN-6911
> URL: https://issues.apache.org/jira/browse/YARN-6911
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-ui-v2
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: Resource Graph Screenshot.png, Resource Utilization 
> Graph Mock Up.png, YARN-6911.001.patch, YARN-6911.002.patch
>
>
> It would be useful to have a visualization of the resource utilization 
> (memory, cpu, etc.) per application using the ATSv2 time series data. Rough 
> mock up attached.






[jira] [Commented] (YARN-5536) Multiple format support (JSON, etc.) for exclude node file in NM graceful decommission with timeout

2017-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146427#comment-16146427
 ] 

Andrew Wang commented on YARN-5536:
---

[~rkanter] [~djp] what's our move on this JIRA? One of two blockers that 
doesn't have an assignee currently.

> Multiple format support (JSON, etc.) for exclude node file in NM graceful 
> decommission with timeout
> ---
>
> Key: YARN-5536
> URL: https://issues.apache.org/jira/browse/YARN-5536
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful
>Reporter: Junping Du
>Priority: Blocker
>
> Per discussion in YARN-4676, we agree that multiple format (other than xml) 
> should be supported to decommission nodes with timeout values.






[jira] [Updated] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7100:

Attachment: YARN-7100.004.yarn-native-services.patch

Rebased the patch on the current HEAD of the yarn-native-services branch.

> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch, 
> YARN-7100.003.yarn-native-services.patch, 
> YARN-7100.004.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method 
> introduced in YARN-6903 for converting memory from a string to a long value. 
> However, the method name getMemoryMB introduces new output in the JSON that 
> looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API because the 
> memory_mb property is unknown to the REST API. It may be better to rename the 
> getMemoryMB method to calcMemoryMB to avoid the method being serialized 
> unintentionally.






[jira] [Commented] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146383#comment-16146383
 ] 

Hadoop QA commented on YARN-7100:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7100 does not apply to yarn-native-services. Rebase 
required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7100 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884371/YARN-7100.003.yarn-native-services.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17195/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch, 
> YARN-7100.003.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method 
> introduced in YARN-6903 for converting memory from a string to a long value. 
> However, the method name getMemoryMB introduces new output in the JSON that 
> looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API because the 
> memory_mb property is unknown to the REST API. It may be better to rename the 
> getMemoryMB method to calcMemoryMB to avoid the method being serialized 
> unintentionally.






[jira] [Updated] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7100:

Attachment: YARN-7100.003.yarn-native-services.patch

Changed the patch to use the ignore annotation. Thanks for the feedback.
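
For background, getter-driven bean serializers discover properties from public 
{{getXxx()}} methods, which is why an ignore annotation on {{getMemoryMB}} keeps 
the derived value out of the JSON. A toy stand-in for that mechanism (the class, 
the {{@Ignore}} annotation, and the property-name rule here are illustrative 
only, not Jackson or Hadoop code):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Map;
import java.util.TreeMap;

/** Toy stand-in for a getter-driven serializer; not Jackson or Hadoop code. */
public class GetterSerialization {
  @Retention(RetentionPolicy.RUNTIME)
  @interface Ignore {}                 // stand-in for a real ignore annotation

  public static class Resource {
    private String memory = "2048";
    public String getMemory() { return memory; }
    @Ignore                            // without this, memoryMB is serialized too
    public long getMemoryMB() { return Long.parseLong(memory); }
  }

  /** Collect properties the way a getter-driven serializer would. */
  public static Map<String, Object> properties(Object bean) throws Exception {
    Map<String, Object> props = new TreeMap<>();
    for (Method m : bean.getClass().getMethods()) {
      if (m.getName().startsWith("get") && m.getParameterCount() == 0
          && !m.getName().equals("getClass")
          && !m.isAnnotationPresent(Ignore.class)) {
        // "getMemory" -> "memory"
        String name = Character.toLowerCase(m.getName().charAt(3))
            + m.getName().substring(4);
        props.put(name, m.invoke(bean));
      }
    }
    return props;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(properties(new Resource()));  // {memory=2048}
  }
}
```

Removing the annotation from {{getMemoryMB}} makes a second {{memoryMB}} 
property appear, which mirrors how the unwanted {{memory_mb}} field showed up 
in the serialized service definition.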

> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch, 
> YARN-7100.003.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method 
> introduced in YARN-6903 for converting memory from a string to a long value. 
> However, the method name getMemoryMB introduces new output in the JSON that 
> looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API because the 
> memory_mb property is unknown to the REST API. It may be better to rename the 
> getMemoryMB method to calcMemoryMB to avoid the method being serialized 
> unintentionally.






[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2017-08-29 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146378#comment-16146378
 ] 

Daniel Templeton commented on YARN-6830:


Oops, sorry.  Didn't mean to do a drive-by.

What about using {{Matcher.find()}} to iteratively hunt for vars, something 
like:
{code}
Pattern p = Pattern.compile("^(" + Shell.ENV_NAME_REGEX + 
")=(['\"]?)(((?!\\2).)*)\\2(,|$)");
Matcher m = p.matcher(envString);
int start = 0;

while (m.find(start)) {
  String key = m.group(1);
  String value = m.group(3);

  ...

  start = m.end();
}

if (m.end() != envString.length()) {
  // Complain about left-over bits
}
{code}

Just thinking out loud.
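
Two caveats about the sketch above: {{Matcher.find(start)}} resets the matcher, 
so the leading {{^}} anchor can only ever match at index 0 and later entries 
would never be found; and when group 2 captures the empty string (an unquoted 
value), the {{(?!\\2)}} lookahead rejects every character, so non-empty 
unquoted values cannot match. A self-contained variant that sidesteps both by 
using alternation instead of the backreference trick (hypothetical 
{{EnvParser}} class and simplified name regex, not the actual Hadoop patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch only; not the actual Hadoop env-parsing code. */
public class EnvParser {
  private static final String NAME = "[A-Za-z_][A-Za-z0-9_]*";
  // name = ( "dq value" | 'sq value' | bare value ) followed by , or end
  private static final Pattern ENTRY = Pattern.compile(
      "(" + NAME + ")=(\"([^\"]*)\"|'([^']*)'|[^,]*)(,|$)");

  public static Map<String, String> parse(String envString) {
    Map<String, String> env = new LinkedHashMap<>();
    Matcher m = ENTRY.matcher(envString);
    int start = 0;
    // Each successful match must begin exactly where the previous one ended.
    while (start < envString.length() && m.find(start) && m.start() == start) {
      String value = m.group(3) != null ? m.group(3)   // double-quoted
                   : m.group(4) != null ? m.group(4)   // single-quoted
                   : m.group(2);                       // bare value
      env.put(m.group(1), value);
      start = m.end();                                 // resume after delimiter
    }
    if (start != envString.length()) {
      throw new IllegalArgumentException("Unparseable from index " + start);
    }
    return env;
  }

  public static void main(String[] args) {
    Map<String, String> env =
        parse("MODE=bar,IMAGE_NAME=foo,MOUNTS=\"/tmp/foo,/tmp/bar\"");
    System.out.println(env.get("MOUNTS"));   // /tmp/foo,/tmp/bar
  }
}
```

With this shape, the embedded comma in {{MOUNTS}} survives because the quoted 
alternatives are tried before the bare-value fallback.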

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6830.001.patch
>
>
> There are cases where it is necessary to allow quoted string literals 
> within environment variable values when passed via the yarn command line 
> interface.
> For example, consider the following environment variables for an MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a 
> comma-delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma-separated value 
> results in the quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow quoting the comma-separated value (with an escaped 
> double or single quote). This was mentioned on YARN-4595 and will impact 
> YARN-5534 as well.






[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146375#comment-16146375
 ] 

Hudson commented on YARN-7077:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12266 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12266/])
YARN-7077. TestAMSimulator and TestNMSimulator fail. (Contributed by (yufei: 
rev 26fafc359787eae0ef82196000f4a04956b2abaa)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) hadoop-tools/hadoop-sls/src/test/resources/yarn-site.xml


> TestAMSimulator and TestNMSimulator fail
> 
>
> Key: YARN-7077
> URL: https://issues.apache.org/jira/browse/YARN-7077
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7077.001.patch, YARN-7077.002.patch
>
>
> TestAMSimulator and TestNMSimulator are failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class 
> org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77)
> {noformat}






[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146367#comment-16146367
 ] 

Xuan Gong commented on YARN-6877:
-

bq. In LogCLIHelpers.java, I saw we are mixing using err.println() and 
System.err.println(). Both way should works fine but better to keep consistent 
here.

This is because some of the functions take a (PrintStream err) parameter, so 
err.println() is used there. For the functions that do not take a (PrintStream 
err) parameter, we use System.err.println().

Uploaded a new patch addressing the other comments.

> Create an abstract log reader for extendability
> ---
>
> Key: YARN-6877
> URL: https://issues.apache.org/jira/browse/YARN-6877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, 
> YARN-6877-trunk.002.patch, YARN-6877-trunk.003.patch, 
> YARN-6877-trunk.004.patch, YARN-6877-trunk.005.patch, 
> YARN-6877-trunk.006.patch
>
>
> Currently, the TFile log reader is used to read aggregated logs in YARN. We 
> need to add an abstraction layer and pick the correct log reader based on 
> the configuration.






[jira] [Updated] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6877:

Attachment: YARN-6877-trunk.006.patch

> Create an abstract log reader for extendability
> ---
>
> Key: YARN-6877
> URL: https://issues.apache.org/jira/browse/YARN-6877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, 
> YARN-6877-trunk.002.patch, YARN-6877-trunk.003.patch, 
> YARN-6877-trunk.004.patch, YARN-6877-trunk.005.patch, 
> YARN-6877-trunk.006.patch
>
>
> Currently, the TFile log reader is used to read aggregated logs in YARN. We 
> need to add an abstraction layer and pick the correct log reader based on 
> the configuration.






[jira] [Commented] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146345#comment-16146345
 ] 

Hadoop QA commented on YARN-6091:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
34s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6091 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884361/YARN-6091.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 124e5c29ae1d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f59332b |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17193/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17193/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, I found that the 
> AppMaster fails to register with the ResourceManager, but this did not happen 
> on other servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is running 
> normally: some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns nonzero 
> and removes the AMRMToken, the AppMaster registration then fails because the 
> ResourceManager has removed this application's token.

[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail

2017-08-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146343#comment-16146343
 ] 

Yufei Gu commented on YARN-7077:


Thanks for the patch, [~ajisakaa]. Committed to trunk. Need a patch for 
branch-2. 

> TestAMSimulator and TestNMSimulator fail
> 
>
> Key: YARN-7077
> URL: https://issues.apache.org/jira/browse/YARN-7077
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7077.001.patch, YARN-7077.002.patch
>
>
> TestAMSimulator and TestNMSimulator are failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class 
> org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77)
> {noformat}






[jira] [Assigned] (YARN-7126) Create introductory site documentation for YARN native services

2017-08-29 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha reassigned YARN-7126:
---

Assignee: Gour Saha

> Create introductory site documentation for YARN native services
> ---
>
> Key: YARN-7126
> URL: https://issues.apache.org/jira/browse/YARN-7126
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>







[jira] [Created] (YARN-7126) Create introductory site documentation for YARN native services

2017-08-29 Thread Gour Saha (JIRA)
Gour Saha created YARN-7126:
---

 Summary: Create introductory site documentation for YARN native 
services
 Key: YARN-7126
 URL: https://issues.apache.org/jira/browse/YARN-7126
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha









[jira] [Commented] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146323#comment-16146323
 ] 

Hudson commented on YARN-5219:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12265 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12265/])
YARN-5219. When an export var command fails in launch_container.sh, the 
(wangda: rev f59332b97b9a57e3cf1dcdeb47d7838d287100eb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java


> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Hitesh Shah
>Assignee: Sunil G
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-5219.001.patch, YARN-5219.003.patch, 
> YARN-5219.004.patch, YARN-5219.005.patch, YARN-5219.006.patch, 
> YARN-5219.007.patch, YARN-5219-branch-2.001.patch
>
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get set up properly, either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results in confusing error messages if the process fails to 
> launch or, worse, a process that launches but then behaves wrongly if the 
> env var is used to control some behavioral aspect. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\{foo.bar}", which is invalid as var names cannot contain "." in bash. 






[jira] [Commented] (YARN-7077) TestAMSimulator and TestNMSimulator fail

2017-08-29 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146322#comment-16146322
 ] 

Yufei Gu commented on YARN-7077:


The test failures are unrelated. 
+1 for the second patch.

> TestAMSimulator and TestNMSimulator fail
> 
>
> Key: YARN-7077
> URL: https://issues.apache.org/jira/browse/YARN-7077
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-7077.001.patch, YARN-7077.002.patch
>
>
> TestAMSimulator and TestNMSimulator are failing:
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Class 
> org.apache.hadoop.yarn.sls.scheduler.SLSFairScheduler not instance of 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.init(ProportionalCapacityPreemptionPolicy.java:159)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:744)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1140)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:301)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.TestAMSimulator.setup(TestAMSimulator.java:77)
> {noformat}






[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util package

2017-08-29 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146314#comment-16146314
 ] 

Daniel Templeton commented on YARN-7115:


Will commit shortly.

> Move BoundedAppender to org.hadoop.yarn.util package 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch, YARN-7115.02.patch, 
> YARN-7115.03.patch, YARN-7115.04.patch
>
>
> BoundedAppender is a useful util class that could live in the util 
> package






[jira] [Commented] (YARN-7037) Optimize data transfer with zero-copy approach for containerlogs REST API in NMWebServices

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146291#comment-16146291
 ] 

Hudson commented on YARN-7037:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #12264 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12264/])
YARN-7037. Optimize data transfer with zero-copy approach for (junping_du: rev 
ad45d19998c1b0da25754d0016854046731fa623)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogToolUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java


> Optimize data transfer with zero-copy approach for containerlogs REST API in 
> NMWebServices
> --
>
> Key: YARN-7037
> URL: https://issues.apache.org/jira/browse/YARN-7037
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Tao Yang
>Assignee: Tao Yang
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: YARN-7037.001.patch, YARN-7037.branch-2.8.001.patch
>
>
> Split this improvement from YARN-6259.
> It is useful for reading container logs more efficiently. With a zero-copy 
> approach, the data transfer pipeline (disk --> read buffer --> NM buffer --> 
> socket buffer) can be shortened to (disk --> read buffer --> socket buffer).
> In my local test, the time to copy a 256MB file was reduced from 12 seconds 
> to 2.5 seconds with zero-copy.
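
For illustration, the zero-copy idea maps naturally onto java.nio's 
{{FileChannel.transferTo}}, which lets the kernel move bytes to the target 
channel without staging them in a user-space buffer. A minimal sketch 
(hypothetical {{ZeroCopy}} class, not the actual NMWebServices change):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Minimal zero-copy file-to-channel sketch; not the actual Hadoop patch. */
public class ZeroCopy {
  public static long copy(Path src, WritableByteChannel out) throws IOException {
    try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ)) {
      long size = in.size();
      long sent = 0;
      // transferTo may transfer fewer bytes than requested, so loop.
      while (sent < size) {
        sent += in.transferTo(sent, size - sent, out);
      }
      return sent;
    }
  }

  public static void main(String[] args) throws IOException {
    Path src = Files.createTempFile("zc", ".log");
    Files.write(src, "hello container logs".getBytes());
    Path dst = Files.createTempFile("zc", ".out");
    try (FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE)) {
      System.out.println(copy(src, out));   // 20
    }
  }
}
```

In the servlet case the target would be the response's output channel rather 
than a file, but the transfer loop is the same shape.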






[jira] [Updated] (YARN-6091) the AppMaster register failed when use Docker on LinuxContainer

2017-08-29 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-6091:
--
Attachment: YARN-6091.002.patch

Rebasing patch to trunk

> the AppMaster register failed when use Docker on LinuxContainer 
> 
>
> Key: YARN-6091
> URL: https://issues.apache.org/jira/browse/YARN-6091
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager, yarn
>Affects Versions: 2.8.1
> Environment: CentOS
>Reporter: zhengchenyu
>Assignee: Eric Badger
>Priority: Critical
> Attachments: YARN-6091.001.patch, YARN-6091.002.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> On some servers, when I use Docker on LinuxContainer, I found that the 
> AppMaster fails to register with the ResourceManager, but this did not happen 
> on other servers. 
> I found that pclose (in container-executor.c) returns different values on 
> different servers, even though the process launched by popen is running 
> normally: some servers return 0, and others return 13. 
> Because YARN regards the application as failed when pclose returns nonzero 
> and removes the AMRMToken, the AppMaster registration then fails because the 
> ResourceManager has removed this application's token. 
> In container-executor.c, the check is whether the return code is zero, but 
> the pclose man page says that a return value of -1 indicates an error. So I 
> changed the check, which solves this problem. 






[jira] [Commented] (YARN-7033) Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to container

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146283#comment-16146283
 ] 

Hadoop QA commented on YARN-7033:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7033 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884349/YARN-7033-v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4dde1403249f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cc8893e |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/17192/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17192/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17192/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to 
> container
> ---
>
> Key: YARN-7033
> URL: 

[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146271#comment-16146271
 ] 

Jian He commented on YARN-7115:
---

The test failures are not related.

> Move BoundedAppender to org.hadoop.yarn.util pacakge 
> -
>
> Key: YARN-7115
> URL: https://issues.apache.org/jira/browse/YARN-7115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7115.01.patch, YARN-7115.02.patch, 
> YARN-7115.03.patch, YARN-7115.04.patch
>
>
> BoundedAppender is a useful util class which can be present in the util 
> package






[jira] [Commented] (YARN-7088) Fix application start time and add submit time to UIs

2017-08-29 Thread Abdullah Yousufi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146261#comment-16146261
 ] 

Abdullah Yousufi commented on YARN-7088:


Thanks [~sunilg]. Regarding your first point, this is intentional as an 
initial value for launchTime: since it is part of ApplicationFinishData, the 
field will have been set by the time the application has finished.

For the pendingTime name, isn't the difference between app.getStartTime() and 
app.getLaunchTime() essentially the same as the difference between when the 
app is submitted and when it is launched? From what I've seen, the application 
is registered in the RM very soon after it is submitted. Therefore, 
pendingTime describes how long the application was pending before it actually 
began running, which makes sense in that context. Furthermore, its placement 
between the start and launch times in both UIs makes it clear what delta it is 
measuring. Let me know what you think; I'd be curious whether anyone else has 
thoughts on this.

> Fix application start time and add submit time to UIs
> -
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch
>
>
> Currently, the start time in the old and new UIs actually shows the app 
> submission time. There should be two different fields, one for the app's 
> submission and one for its start, as well as the elapsed pending time 
> between the two.






[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146249#comment-16146249
 ] 

Junping Du commented on YARN-6877:
--

btw, I just committed YARN-7037, which may require a bit of rebasing here.

> Create an abstract log reader for extendability
> ---
>
> Key: YARN-6877
> URL: https://issues.apache.org/jira/browse/YARN-6877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, 
> YARN-6877-trunk.002.patch, YARN-6877-trunk.003.patch, 
> YARN-6877-trunk.004.patch, YARN-6877-trunk.005.patch
>
>
> Currently, the TFile log reader is used to read aggregated logs in YARN. We 
> need to add an abstraction layer and pick the correct log reader based on 
> the configuration.






[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146243#comment-16146243
 ] 

Hudson commented on YARN-7010:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12263/])
YARN-7010. Federation: routing REST invocations transparently to (carlo curino: 
rev cc8893edc0b7960e958723c81062986c12f06ade)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/RouterMetrics.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppsInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestRouterWebServiceUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServiceUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockDefaultRequestInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouterMetrics.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java


> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch, YARN-7010.v5.patch
>
>







[jira] [Commented] (YARN-7033) Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to container

2017-08-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146241#comment-16146241
 ] 

Wangda Tan commented on YARN-7033:
--

[~devaraj.k],
Apologies, I missed the last comment from [~sunilg]. I suggest not changing 
the type from String to an enum: with an enum we would need to change code 
whenever a new resource type is added to the system. And after YARN-3926, it 
is possible to support arbitrary resource types and enforce them in a 
customized resource module; an enum doesn't work in that case.

Thoughts? +[~sunilg].

> Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to 
> container
> ---
>
> Key: YARN-7033
> URL: https://issues.apache.org/jira/browse/YARN-7033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: YARN-7033-v0.patch, YARN-7033-v1.patch, 
> YARN-7033-v2.patch
>
>
> This JIRA adds the common logic to store resources assigned to a container, 
> such as GPUs (YARN-6620), NUMA (YARN-5764), and FPGAs (YARN-5983), and to 
> recover them upon NM restart.






[jira] [Commented] (YARN-7100) YARN service api can not reuse json file serialized in hdfs

2017-08-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146236#comment-16146236
 ] 

Gour Saha commented on YARN-7100:
-

We should just mark the method with the @JsonIgnore annotation so that it does 
not get serialized.

> YARN service api can not reuse json file serialized in hdfs
> ---
>
> Key: YARN-7100
> URL: https://issues.apache.org/jira/browse/YARN-7100
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.0.0-beta1
>Reporter: Eric Yang
> Attachments: YARN-7100.001.patch, 
> YARN-7100.002.yarn-native-services.patch
>
>
> org.apache.hadoop.yarn.service.api.records.Resource has a new method 
> introduced in YARN-6903 for casting memory from a string to a long value.  
> However, the method name getMemoryMB introduces a new field in the json 
> output that looks like this:
> {code}
> "resource" : {
>   "uri" : null,
>   "profile" : null,
>   "cpus" : 1,
>   "memory" : "2048",
>   "memory_mb" : 2048
> },
> {code}
> This prevents the file from being resubmitted to the services API, because 
> the memory_mb property is unknown to the REST API.  It may be better to 
> rename the getMemoryMB method to calcMemoryMB to avoid the method being 
> serialized unintentionally.






[jira] [Issue Comment Deleted] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4757:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-4757 does not apply to YARN-4757. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808955/YARN-4757-YARN-4757.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15495/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
>  Labels: oct16-hard
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, 
> YARN-4757.001.patch, YARN-4757.002.patch, YARN-4757- Simplified discovery of 
> services via DNS mechanisms.pdf, YARN-4757-YARN-4757.001.patch, 
> YARN-4757-YARN-4757.002.patch, YARN-4757-YARN-4757.003.patch, 
> YARN-4757-YARN-4757.004.patch, YARN-4757-YARN-4757.005.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service-registry (YARN-913), 
> we also need to simplify access to the registry entries. The existing read 
> mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes 
> it very difficult to wire up existing clients and services. For example, 
> dynamic configuration of a service's dependent endpoints is not easy to 
> implement using the present registry-read mechanisms *without* code changes 
> to existing services.
> A good solution is to expose the registry information through a more generic 
> and widely used discovery mechanism: DNS. Service discovery via DNS uses the 
> well-known DNS interfaces to browse the network for services. YARN-913 in 
> fact discussed such a DNS-based mechanism but left it as a future task. 
> Having the registry information exposed via DNS simplifies the life of 
> services.






[jira] [Issue Comment Deleted] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4757:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} YARN-4757 does not apply to YARN-4757. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808955/YARN-4757-YARN-4757.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17191/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
>  Labels: oct16-hard
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, 
> YARN-4757.001.patch, YARN-4757.002.patch, YARN-4757- Simplified discovery of 
> services via DNS mechanisms.pdf, YARN-4757-YARN-4757.001.patch, 
> YARN-4757-YARN-4757.002.patch, YARN-4757-YARN-4757.003.patch, 
> YARN-4757-YARN-4757.004.patch, YARN-4757-YARN-4757.005.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service-registry (YARN-913), 
> we also need to simplify access to the registry entries. The existing read 
> mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes 
> it very difficult to wire up existing clients and services. For example, 
> dynamic configuration of a service's dependent endpoints is not easy to 
> implement using the present registry-read mechanisms *without* code changes 
> to existing services.
> A good solution is to expose the registry information through a more generic 
> and widely used discovery mechanism: DNS. Service discovery via DNS uses the 
> well-known DNS interfaces to browse the network for services. YARN-913 in 
> fact discussed such a DNS-based mechanism but left it as a future task. 
> Having the registry information exposed via DNS simplifies the life of 
> services.






[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146233#comment-16146233
 ] 

Hadoop QA commented on YARN-7115:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884333/YARN-7115.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux addae8fb309d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 63fc1b0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (YARN-6269) Pull into native services SLIDER-1185 - container/application diagnostics for enhanced debugging

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6269:
--
Parent Issue: YARN-7054  (was: YARN-4793)

> Pull into native services SLIDER-1185 - container/application diagnostics for 
> enhanced debugging
> 
>
> Key: YARN-6269
> URL: https://issues.apache.org/jira/browse/YARN-6269
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
>







[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146217#comment-16146217
 ] 

Hadoop QA commented on YARN-4757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} YARN-4757 does not apply to YARN-4757. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4757 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808955/YARN-4757-YARN-4757.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17191/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
>  Labels: oct16-hard
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, 
> YARN-4757.001.patch, YARN-4757.002.patch, YARN-4757- Simplified discovery of 
> services via DNS mechanisms.pdf, YARN-4757-YARN-4757.001.patch, 
> YARN-4757-YARN-4757.002.patch, YARN-4757-YARN-4757.003.patch, 
> YARN-4757-YARN-4757.004.patch, YARN-4757-YARN-4757.005.patch
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service-registry (YARN-913), 
> we also need to simplify access to the registry entries. The existing read 
> mechanisms of the YARN Service Registry are currently limited to a 
> registry-specific (Java) API and a REST interface. In practice, this makes 
> it very difficult to wire up existing clients and services. For example, 
> dynamic configuration of a service's dependent endpoints is not easy to 
> implement using the present registry-read mechanisms *without* code changes 
> to existing services.
> A good solution is to expose the registry information through a more generic 
> and widely used discovery mechanism: DNS. Service discovery via DNS uses the 
> well-known DNS interfaces to browse the network for services. YARN-913 in 
> fact discussed such a DNS-based mechanism but left it as a future task. 
> Having the registry information exposed via DNS simplifies the life of 
> services.






[jira] [Updated] (YARN-4933) Evaluate parent-slave DNS options to assess deployment options for DNS service

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4933:
--
Parent Issue: YARN-7054  (was: YARN-4793)

> Evaluate parent-slave DNS options to assess deployment options for DNS service
> --
>
> Key: YARN-4933
> URL: https://issues.apache.org/jira/browse/YARN-4933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
>
> Comments on YARN-4757 indicate that, in addition to the primary server to 
> YARN DNS service zone request forwarding implementation currently suggested, 
> it may be appropriate to also offer the ability to configure the DNS service 
> as a master server that can support zone transfers to slaves.  Some other 
> features that are related and should be examined are:
> - DNS NOTIFY
> - AXFR
> - IXFR






[jira] [Commented] (YARN-7117) Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue Mapping

2017-08-29 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146216#comment-16146216
 ] 

Wangda Tan commented on YARN-7117:
--

Thanks [~jlowe], 

Regarding the zero-capacity queue: apologies that I didn't make it clear. In one 
of the use cases we saw, assuming the parent's guaranteed resource is not 
overcommitted, we still want each auto-created leaf queue to have a minimum 
guaranteed resource so it can run its jobs via preemption. Setting the 
guaranteed resource to zero means no SLA for any auto-created queue. I agree 
that setting capacity to 0 is a better solution if there's no SLA requirement.

bq. ... Ripping it out may be tricky depending upon the expectations of the 
user.
This is a valid concern. Instead of deleting the queue, how about stopping it? 
At least the queue is still there. In today's CS, capacities of stopped queues 
are accounted for when we check resource sharing; we should probably exclude the 
shares of stopped queues and fail reactivation (stop -> running) when the parent 
queue's guaranteed resource is overcommitted.

bq. Does the job submission fail since it cannot create the child queue with 
that guarantee or ..?
Yes, this is my original proposal.

bq. I don't have all the details on the specific use cases, but this seems like 
we're going out of our way to essentially emulate what user limits and in-queue 
preemption can already accomplish when users share the same queue.
Actually we thought about this option before; the two approaches address 
different use cases. User limits and the related preemption are more appropriate 
for a mix of batch jobs running in the same queue submitted by different users. 
That's why we use FIFO ordering for apps, and why we allow overcommit of the 
user limit (no hard limit on #running-users in a queue).
Running jobs submitted by different users in different queues better supports 
long-running apps. For example, each user is allowed to run at least 2 Docker 
containers and do whatever they want inside them. A queue can also be operated 
individually and exposes better metrics, UI, etc. to end users.
My plan is to modify as little logic as possible inside the scheduler to support 
auto-queue creation: only add logic to auto-create queues and change the queue 
mapping policy to support a create-new-queue-when-absent flag.

> Capacity Scheduler: Support Auto Creation of Leaf Queues While Doing Queue 
> Mapping
> --
>
> Key: YARN-7117
> URL: https://issues.apache.org/jira/browse/YARN-7117
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Currently Capacity Scheduler doesn't support auto creation of queues when 
> doing queue mapping. We are seeing more and more use cases with complex queue 
> mapping policies configured to handle application-to-queue mapping. 
> The most common use case of CapacityScheduler queue mapping is to create one 
> queue for each user/group. However, updating {{capacity-scheduler.xml}} and 
> running {{RMAdmin:refreshQueues}} is required whenever a new user/group 
> onboards. One option to solve the problem is to automatically create queues 
> when a new user/group arrives.






[jira] [Updated] (YARN-4933) Evaluate parent-slave DNS options to assess deployment options for DNS service

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4933:
--
Parent Issue: YARN-4793  (was: YARN-4757)

> Evaluate parent-slave DNS options to assess deployment options for DNS service
> --
>
> Key: YARN-4933
> URL: https://issues.apache.org/jira/browse/YARN-4933
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
>
> Comments on YARN-4757 indicate that, in addition to the primary server to 
> YARN DNS service zone request forwarding implementation currently suggested, 
> it may be appropriate to also offer the ability to configure the DNS service 
> as a master server that can support zone transfers to slaves.  Some other 
> features that are related and should be examined are:
> - DNS NOTIFY
> - AXFR
> - IXFR






[jira] [Commented] (YARN-7010) Federation: routing REST invocations transparently to multiple RMs (part 2 - getApps)

2017-08-29 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146210#comment-16146210
 ] 

Carlo Curino commented on YARN-7010:


Thanks [~giovanni.fumarola] for the contribution. I fixed the whitespace and 
javadoc clarity issues while committing. Patch committed to trunk.

> Federation: routing REST invocations transparently to multiple RMs (part 2 - 
> getApps)
> -
>
> Key: YARN-7010
> URL: https://issues.apache.org/jira/browse/YARN-7010
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7010.v0.patch, YARN-7010.v1.patch, 
> YARN-7010.v2.patch, YARN-7010.v3.patch, YARN-7010.v4.patch, YARN-7010.v5.patch
>
>







[jira] [Updated] (YARN-7033) Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to container

2017-08-29 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated YARN-7033:

Attachment: YARN-7033-v2.patch

> Add support for NM Recovery of assigned resources(GPU's, NUMA, FPGA's) to 
> container
> ---
>
> Key: YARN-7033
> URL: https://issues.apache.org/jira/browse/YARN-7033
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: YARN-7033-v0.patch, YARN-7033-v1.patch, 
> YARN-7033-v2.patch
>
>
> This JIRA adds the common logic to store the resources assigned to a 
> container, such as GPUs (YARN-6620), NUMA (YARN-5764), and FPGAs (YARN-5983), 
> and to recover them upon restart of the NM.






[jira] [Updated] (YARN-4952) need configuration mechanism for specifying per-host network interface

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-4952:
--
Parent Issue: YARN-7054  (was: YARN-4757)

> need configuration mechanism for specifying per-host network interface
> --
>
> Key: YARN-4952
> URL: https://issues.apache.org/jira/browse/YARN-4952
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
>
> The initial configuration approach for the DNS service specified a 
> bind-address that designated the network interface to which the service 
> should bind its listener port.  However, there is a need to potentially 
> specify multiple DNS service instances (HA approach) and therefore a need to 
> specify bind addresses for each instance (and those interfaces may vary 
> between hosts).  This may take a form similar to the RM HA approach (rm1, rm2).






[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2017-08-29 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146192#comment-16146192
 ] 

Shane Kumpf commented on YARN-6830:
---

[~templedf] - hoping you might have time to take another look. thanks in 
advance.

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6830.001.patch
>
>
> There are cases where it is necessary to allow for quoted string literals 
> within environment variable values when passed via the yarn command line 
> interface.
> For example, consider the follow environment variables for a MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma-separated value 
> results in the quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow for quoting the comma separated value (escaped double 
> or single quote). This was mentioned on YARN-4595 and will impact YARN-5534 
> as well.
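As an illustration of the parsing change the issue asks for, a minimal sketch of 
quote-aware splitting is shown below. This is a hypothetical standalone example, 
not YARN's actual parser; the class and method names are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a comma-delimited env-var spec while honoring single- or
// double-quoted values, so MOUNTS='/tmp/foo,/tmp/bar' stays a single entry.
public class QuotedEnvSplitter {
    public static List<String> split(String spec) {
        List<String> parts = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        char quote = 0;                       // 0 = not inside quotes
        for (char c : spec.toCharArray()) {
            if (quote != 0) {                 // inside quotes: only the matching
                if (c == quote) quote = 0;    // quote character ends the section
                else cur.append(c);           // commas here are kept literally
            } else if (c == '\'' || c == '"') {
                quote = c;                    // open a quoted section
            } else if (c == ',') {
                parts.add(cur.toString());    // unquoted comma ends the entry
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        parts.add(cur.toString());
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("MODE=bar,IMAGE_NAME=foo,MOUNTS='/tmp/foo,/tmp/bar'"));
        // [MODE=bar, IMAGE_NAME=foo, MOUNTS=/tmp/foo,/tmp/bar]
    }
}
```

Note the quote characters are stripped from the resulting value, which avoids the 
"quotes become part of the value" failure mode described above.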






[jira] [Commented] (YARN-7125) Revisit deleteService call once YARN is able to do post clean up for an app

2017-08-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146190#comment-16146190
 ] 

Gour Saha commented on YARN-7125:
-

Revisit once YARN-2261 is done

> Revisit deleteService call once YARN is able to do post clean up for an app
> ---
>
> Key: YARN-7125
> URL: https://issues.apache.org/jira/browse/YARN-7125
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> The deleteService call internally deletes the ZooKeeper root node, HDFS root 
> dir, etc.
> If YARN has a way to perform these post-cleanup steps, the client won't need 
> to do this, and the REST call can return faster.






[jira] [Commented] (YARN-6877) Create an abstract log reader for extendability

2017-08-29 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146181#comment-16146181
 ] 

Junping Du commented on YARN-6877:
--

Thanks [~xgong] for working on the patch. 
The patch is huge; however, it is mostly refactoring of format-binding methods 
from LogCLIHelpers, LogToolUtils and AggregatedLogsBlock into newly created 
extensible classes, so it is still straightforward. Some review comments:

It seems we now have two AggregatedLogsBlock classes, both extending HtmlBlock: 
one under the filecontroller package (new) and the other in the webapp package 
(old):
{noformat}
+@InterfaceAudience.LimitedPrivate({"YARN", "MapReduce"})
+public abstract class AggregatedLogsBlock extends HtmlBlock {
{noformat}
What is the use case for having both AggregatedLogsBlock classes here? Does the 
old one get dropped/deprecated? If not, we may need a different name for the new 
class to avoid confusion.

Some minor comments: 
In TestLogsCLI.java,
{noformat}
-Configuration configuration = new Configuration();
+Configuration configuration = new YarnConfiguration();
{noformat}
Why do we need this change? I think both should work here. Am I missing 
anything?

In many places, the code is missing a space between '}' and catch:
{noformat}
}catch (NumberFormatException ne) {
{noformat}

In LogCLIHelpers.java, I see we are mixing err.println() and 
System.err.println(). Both work fine, but it is better to keep this consistent.

There seems to be no change in AggregatedLogsBlockForTest.java; if so, better to 
remove it from the patch. 

> Create an abstract log reader for extendability
> ---
>
> Key: YARN-6877
> URL: https://issues.apache.org/jira/browse/YARN-6877
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6877-branch-2.001.patch, YARN-6877-trunk.001.patch, 
> YARN-6877-trunk.002.patch, YARN-6877-trunk.003.patch, 
> YARN-6877-trunk.004.patch, YARN-6877-trunk.005.patch
>
>
> Currently, TFile log reader is used to read aggregated log in YARN. We need 
> to add an abstract layer, and pick up the correct log reader based on the 
> configuration.






[jira] [Commented] (YARN-5855) DELETE call sometimes returns success when app is not deleted

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146178#comment-16146178
 ] 

Jian He commented on YARN-5855:
---

Sounds good.
I opened YARN-7125 for the followup.
Closing this.

> DELETE call sometimes returns success when app is not deleted
> -
>
> Key: YARN-5855
> URL: https://issues.apache.org/jira/browse/YARN-5855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>
> Looking into this issue with [~gsaha], we noticed that multiple things can 
> contribute to an app continuing to run after a DELETE call, which consists of 
> a stop and a destroy operation. One problem is that the stop call is 
> asynchronous unless a force flag is set. Without the force flag, a message is 
> sent to the AM and success is returned, and with the flag 
> yarnClient.killRunningApplication is called. (There is also an option to wait 
> for a fixed amount of time for the app to stop before returning, but DELETE 
> is not setting this option and force is preferable in this case.) The other 
> issue is that the destroy operation is attempted in a loop, but if the number 
> of retries is exceeded the call returns a 204 response.
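The forced-stop plus bounded-retry destroy flow described above can be sketched 
as follows. This is a hypothetical illustration, not the actual service code: 
the `Ops` interface, `stopApplication`, `destroyOnce`, and `MAX_RETRIES` are 
invented names. The key point is that exhausted retries should surface a failure 
status rather than a success-like 204.

```java
// Sketch of a DELETE handler that forces a synchronous stop and only
// reports 204 when the destroy actually succeeded.
public class DeleteFlow {
    static final int MAX_RETRIES = 3;

    interface Ops {
        void stopApplication(boolean force);  // force => kill the running app
        boolean destroyOnce();                // true when destroy succeeded
    }

    /** Returns an HTTP-style status code: 204 only on real success. */
    public static int delete(Ops ops) {
        ops.stopApplication(true);            // forced, synchronous stop
        for (int i = 0; i < MAX_RETRIES; i++) {
            if (ops.destroyOnce()) {
                return 204;                   // genuinely deleted
            }
        }
        return 500;                           // retries exhausted: report failure
    }

    public static void main(String[] args) {
        Ops alwaysFails = new Ops() {
            public void stopApplication(boolean force) { }
            public boolean destroyOnce() { return false; }
        };
        System.out.println(delete(alwaysFails)); // prints 500, not a misleading 204
    }
}
```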






[jira] [Resolved] (YARN-5855) DELETE call sometimes returns success when app is not deleted

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-5855.
---
Resolution: Fixed

> DELETE call sometimes returns success when app is not deleted
> -
>
> Key: YARN-5855
> URL: https://issues.apache.org/jira/browse/YARN-5855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>
> Looking into this issue with [~gsaha], we noticed that multiple things can 
> contribute to an app continuing to run after a DELETE call, which consists of 
> a stop and a destroy operation. One problem is that the stop call is 
> asynchronous unless a force flag is set. Without the force flag, a message is 
> sent to the AM and success is returned, and with the flag 
> yarnClient.killRunningApplication is called. (There is also an option to wait 
> for a fixed amount of time for the app to stop before returning, but DELETE 
> is not setting this option and force is preferable in this case.) The other 
> issue is that the destroy operation is attempted in a loop, but if the number 
> of retries is exceeded the call returns a 204 response.






[jira] [Created] (YARN-7125) Revisit deleteService call once YARN is able to do post clean up for an app

2017-08-29 Thread Jian He (JIRA)
Jian He created YARN-7125:
-

 Summary: Revisit deleteService call once YARN is able to do post 
clean up for an app
 Key: YARN-7125
 URL: https://issues.apache.org/jira/browse/YARN-7125
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He


The deleteService call internally deletes the ZooKeeper root node, HDFS root 
dir, etc.
If YARN has a way to perform these post-cleanup steps, the client won't need to 
do this, and the REST call can return faster.






[jira] [Commented] (YARN-7115) Move BoundedAppender to org.hadoop.yarn.util pacakge

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146176#comment-16146176
 ] 

Hadoop QA commented on YARN-7115:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 45s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 37s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.util.TestBoundedAppender |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-7115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12884307/YARN-7115.03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7c16b2e6c3a9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 63fc1b0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17187/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
| unit | 

[jira] [Resolved] (YARN-5791) [YARN Native Service] Build application specific UI on top of data posted to timeline service V.2

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-5791.
---
Resolution: Duplicate

This was done in YARN-6398.

> [YARN Native Service] Build application specific UI on top of data posted to 
> timeline service V.2
> -
>
> Key: YARN-5791
> URL: https://issues.apache.org/jira/browse/YARN-5791
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Gour Saha
>
> As per YARN-5780 we will start getting application-specific data in timeline 
> service v2 for all YARN native services. Exposing a UI for these 
> application-specific data would be very beneficial to application owners.






[jira] [Resolved] (YARN-5780) [YARN-5079] Allowing YARN native services to post data to timeline service V.2

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-5780.
---
Resolution: Duplicate

This was done in YARN-6395.

> [YARN-5079] Allowing YARN native services to post data to timeline service V.2
> --
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>Assignee: Vrushali C
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 






[jira] [Commented] (YARN-6669) Kerberos support for native service AM with the service REST API

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146168#comment-16146168
 ] 

Hadoop QA commented on YARN-6669:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} YARN-6669 does not apply to yarn-native-services. Rebase 
required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6669 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871700/YARN-6669.yarn-native-services.05.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17190/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Kerberos support for native service AM with the service REST API
> 
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>







[jira] [Updated] (YARN-6023) Allow multiple IPs in native services container ServiceRecord

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6023:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Allow multiple IPs in native services container ServiceRecord
> -
>
> Key: YARN-6023
> URL: https://issues.apache.org/jira/browse/YARN-6023
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>
> Currently ProviderUtils.updateServiceRecord sets a single IP as "yarn:ip" in 
> the ServiceRecord, and ignores any additional IPs. The Registry DNS 
> implementation in the YARN-4757 feature branch reads the "yarn:ip" and uses 
> it to create a DNS record. 






[jira] [Updated] (YARN-6744) Recover component information on YARN native services AM restart

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6744:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Recover component information on YARN native services AM restart
> 
>
> Key: YARN-6744
> URL: https://issues.apache.org/jira/browse/YARN-6744
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> The new RoleInstance#Container constructor does not populate all the 
> information needed for a RoleInstance. This is the constructor used when 
> recovering running containers in AppState#addRestartedContainer. We will have 
> to figure out a way to determine this information for a running container.






[jira] [Resolved] (YARN-6397) Support kerberos deployment for yarn-native-service

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-6397.
---
Resolution: Duplicate

> Support kerberos deployment for yarn-native-service
> ---
>
> Key: YARN-6397
> URL: https://issues.apache.org/jira/browse/YARN-6397
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> The current service REST API doesn't support Kerberos deployment. Users 
> should be able to run their service with Kerberos on by specifying keytabs, 
> principal, etc. with the REST API.
> Also, we've found some issues with the current security implementation while 
> testing: e.g. currently the AM cannot talk to ZooKeeper in a kerberized 
> environment. 






[jira] [Updated] (YARN-6669) Kerberos support for native service AM with the service REST API

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6669:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Kerberos support for native service AM with the service REST API
> 
>
> Key: YARN-6669
> URL: https://issues.apache.org/jira/browse/YARN-6669
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-6669.yarn-native-services.01.patch, 
> YARN-6669.yarn-native-services.03.patch, 
> YARN-6669.yarn-native-services.04.patch, 
> YARN-6669.yarn-native-services.05.patch
>
>







[jira] [Resolved] (YARN-6696) SliderAppMaster gets an NPE when creating UnregisterComponentInstance

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-6696.
---
Resolution: Invalid

This code was changed by YARN-6903; closing this.

> SliderAppMaster gets an NPE when creating UnregisterComponentInstance
> -
>
> Key: YARN-6696
> URL: https://issues.apache.org/jira/browse/YARN-6696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> In the onContainersCompleted method of SliderAppMaster, there is some issue 
> with the RoleInstance passed to the UnregisterComponentInstance constructor: 
> java.lang.NullPointerException at 
> org.apache.slider.server.appmaster.actions.UnregisterComponentInstance.(UnregisterComponentInstance.java:38)






[jira] [Updated] (YARN-6601) Allow service to be started as System Services during serviceapi start up

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6601:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Allow service to be started as System Services during serviceapi start up
> -
>
> Key: YARN-6601
> URL: https://issues.apache.org/jira/browse/YARN-6601
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
> Attachments: SystemServices.pdf, 
> YARN-6601-yarn-native-services.001.patch, 
> YARN-6601-yarn-native-services.002.patch, 
> YARN-6601-yarn-native-services.003.patch
>
>
> This is extended from YARN-1593, focusing only on system services. System 
> services are started during daemon boot-up, or can be configured by an admin 
> and started at any point in time. These services have special characteristics 
> which need to be respected; the attached document covers the details of these 
> characteristics. 
> This JIRA focuses on configuring services using a JSON template placed in a 
> shared filesystem. During YARN REST server (native-service-api) start-up, 
> service details are read from the shared location and those services are 
> started. Services that are already configured are skipped, and start-up 
> continues with the remaining services. 






[jira] [Updated] (YARN-6496) Rethink on configurations published into ATSv2

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6496:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Rethink on configurations published into ATSv2
> --
>
> Key: YARN-6496
> URL: https://issues.apache.org/jira/browse/YARN-6496
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The Slider configuration model is a bit different from Hadoop's. Slider has a 
> configuration object with several distinct sections, such as properties, 
> environment, and files, where files carry additional fields such as fileName, 
> fileType, srcFile, and destFile. 
> {code}
> "configuration" : {
>     "properties": {
>         "rohith.test.properties": "inside-properties"
>     },
>     "env": {
>         "NUM_SHARDS": "inside-env"
>     },
>     "files" : [
>     {
>         "type": "HADOOP_XML_TEMPLATE",
>         "src_file": "hdfs://yclouddev/tmp/conf/core-site.xml",
>         "dest_file": "/etc/hadoop/conf/core-site.xml"
>     }]
> }
> {code}
> The timeline entity config is modeled as a flattened map of String to String, 
> which is not flexible enough to satisfy the above use case. We need to 
> rethink how it can be modeled.
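As a rough illustration of the mismatch, flattening the nested Slider-style configuration into the String-to-String map that a timeline entity config currently requires might look like this. The key-prefix convention and the class name are assumptions for illustration, not an agreed YARN scheme:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConfigFlattener {
    // Flatten the nested Slider-style configuration into the flat
    // String->String map a timeline entity config accepts, by encoding the
    // section and list index into the key (prefix scheme is hypothetical).
    public static Map<String, String> flatten(Map<String, String> properties,
                                              Map<String, String> env,
                                              List<Map<String, String>> files) {
        Map<String, String> flat = new LinkedHashMap<>();
        properties.forEach((k, v) -> flat.put("properties." + k, v));
        env.forEach((k, v) -> flat.put("env." + k, v));
        for (int i = 0; i < files.size(); i++) {
            for (Map.Entry<String, String> e : files.get(i).entrySet()) {
                flat.put("files." + i + "." + e.getKey(), e.getValue());
            }
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, String> flat = flatten(
            Map.of("rohith.test.properties", "inside-properties"),
            Map.of("NUM_SHARDS", "inside-env"),
            List.of(Map.of("type", "HADOOP_XML_TEMPLATE",
                           "src_file", "hdfs://yclouddev/tmp/conf/core-site.xml")));
        System.out.println(flat.get("env.NUM_SHARDS")); // prints "inside-env"
    }
}
```

The downside, and the reason the issue asks for a rethink, is that such prefix encoding pushes the structure into key names, which every reader must then parse back out.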






[jira] [Resolved] (YARN-6422) Service REST API should have a way to specify YARN related params

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-6422.
---
Resolution: Duplicate

> Service REST API should have a way to specify YARN related params
> -
>
> Key: YARN-6422
> URL: https://issues.apache.org/jira/browse/YARN-6422
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> Currently, the service REST API doesn't have a way to specify the YARN 
> related params (i.e. all the fields in ApplicationSubmissionContext). It does 
> have a configuration section, but that is intended for the deployed apps. 
> We probably need to create a new config section targeted at the app itself, 
> i.e. the AM.






[jira] [Updated] (YARN-6394) Support specifying YARN related params in the service REST API

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6394:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Support specifying YARN related params in the service REST API
> --
>
> Key: YARN-6394
> URL: https://issues.apache.org/jira/browse/YARN-6394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Jian He
>
> Today a user can specify many YARN parameters (such as 
> LogAggregationContext, i.e. all the fields in ApplicationSubmissionContext) 
> when submitting an app. The service REST API hasn't accounted for those so 
> far. We need a way to allow users to specify those configs when submitting 
> their service app. Basically, we probably need a separation of 
> 1) configs for the AM's own needs, and 
> 2) configs for the deployed services.






[jira] [Updated] (YARN-6422) Service REST API should have a way to specify YARN related params

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6422:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Service REST API should have a way to specify YARN related params
> -
>
> Key: YARN-6422
> URL: https://issues.apache.org/jira/browse/YARN-6422
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> Currently, the service REST API doesn't have a way to specify the YARN 
> related params (i.e. all the fields in ApplicationSubmissionContext). It does 
> have a configuration section, but that is intended for the deployed apps. 
> We probably need to create a new config section targeted at the app itself, 
> i.e. the AM.






[jira] [Updated] (YARN-6397) Support kerberos deployment for yarn-native-service

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6397:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Support kerberos deployment for yarn-native-service
> ---
>
> Key: YARN-6397
> URL: https://issues.apache.org/jira/browse/YARN-6397
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> The current service REST API doesn't support Kerberos deployment. Users 
> should be able to run their service with Kerberos enabled by specifying 
> keytabs, principal, etc. with the REST API. 
> Also, we've found some issues with the current security implementation while 
> testing, e.g. currently the AM cannot talk to ZooKeeper in a kerberized 
> environment. 






[jira] [Commented] (YARN-5855) DELETE call sometimes returns success when app is not deleted

2017-08-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146138#comment-16146138
 ] 

Gour Saha commented on YARN-5855:
-

Yup, and as we discussed, let's leave DELETE as it is for now. No need to 
lower the 10-second wait.

> DELETE call sometimes returns success when app is not deleted
> -
>
> Key: YARN-5855
> URL: https://issues.apache.org/jira/browse/YARN-5855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>
> Looking into this issue with [~gsaha], we noticed that multiple things can 
> contribute to an app continuing to run after a DELETE call, which consists of 
> a stop and a destroy operation. One problem is that the stop call is 
> asynchronous unless a force flag is set. Without the force flag, a message is 
> sent to the AM and success is returned, and with the flag 
> yarnClient.killRunningApplication is called. (There is also an option to wait 
> for a fixed amount of time for the app to stop before returning, but DELETE 
> is not setting this option and force is preferable in this case.) The other 
> issue is that the destroy operation is attempted in a loop, but if the number 
> of retries is exceeded the call returns a 204 response.
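A minimal sketch of the destroy retry loop described above, changed so that exhausting the retries reports failure instead of returning a success-like 204. The method names and the status codes chosen here are illustrative, not the actual Slider/YARN API:

```java
import java.util.function.BooleanSupplier;

public class DestroyWithRetry {
    // Attempt the destroy operation up to maxRetries times; return 204
    // (No Content) only when the destroy actually succeeded, and a 500
    // (Internal Server Error) when all attempts failed, so the client is
    // never told "success" for an app that is still around.
    public static int destroy(BooleanSupplier destroyOnce, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (destroyOnce.getAsBoolean()) {
                return 204; // app really destroyed
            }
        }
        return 500; // retries exhausted: surface the failure
    }

    public static void main(String[] args) {
        System.out.println(destroy(() -> false, 3)); // prints 500
    }
}
```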






[jira] [Updated] (YARN-6393) Create a API class for yarn-native-service user-facing constants

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6393:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Create a API class for yarn-native-service user-facing constants
> 
>
> Key: YARN-6393
> URL: https://issues.apache.org/jira/browse/YARN-6393
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Jian He
>
> Users can use certain constants in the JSON input spec file for later 
> substitution, e.g. if a user specifies $HOSTNAME in the env section of the 
> input file, it will be substituted by the AM with the actual host name. We 
> need to create an API class for these constants and clearly document it.
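A minimal sketch of the kind of API class and substitution step described above; the class name, the constant set, and the substitute helper are assumptions for illustration, not the eventual YARN API:

```java
import java.util.Map;

public final class ServiceApiConstants {
    // A user-facing constant the AM replaces at launch time
    // (hypothetical; the real class would enumerate all supported constants).
    public static final String HOSTNAME = "$HOSTNAME";

    private ServiceApiConstants() { }

    // Replace every known constant in a spec value with its resolved value.
    public static String substitute(String value, Map<String, String> resolved) {
        String out = value;
        for (Map.Entry<String, String> e : resolved.entrySet()) {
            out = out.replace(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. an env entry "export ADDR=$HOSTNAME:8080" in the spec
        System.out.println(substitute(HOSTNAME + ":8080",
            Map.of(HOSTNAME, "node-1"))); // prints "node-1:8080"
    }
}
```

Keeping the constants in one documented class is what lets users rely on them in spec files without guessing which tokens the AM will expand.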






[jira] [Updated] (YARN-6391) Support specifying extra options from yarn-native-service CLI

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6391:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> Support specifying extra options from yarn-native-service CLI
> -
>
> Key: YARN-6391
> URL: https://issues.apache.org/jira/browse/YARN-6391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Jian He
>
> The CLI has been changed to take the same JSON input spec as YARN-4692.
> We should also have a way to allow for substituting individual fields of the 
> JSON spec file from the CLI.






[jira] [Updated] (YARN-6161) YARN support for port allocation

2017-08-29 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6161:
--
Parent Issue: YARN-7054  (was: YARN-5079)

> YARN support for port allocation
> 
>
> Key: YARN-6161
> URL: https://issues.apache.org/jira/browse/YARN-6161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
> Fix For: yarn-native-services
>
>
> Since there is no agent code in YARN native services, we need another 
> mechanism for allocating ports to containers. This is not necessary when 
> running Docker containers, but it will become important when an agent-less, 
> Docker-less provider is introduced.






[jira] [Commented] (YARN-5855) DELETE call sometimes returns success when app is not deleted

2017-08-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146130#comment-16146130
 ] 

Jian He commented on YARN-5855:
---

Talked with Gour: if we have YARN-2261 or YARN-5759, then DELETE can be a 
fast asynchronous call as well.
YARN will take care of deleting the leftover records.
Once that feature is complete, the delete behavior can be revisited.

> DELETE call sometimes returns success when app is not deleted
> -
>
> Key: YARN-5855
> URL: https://issues.apache.org/jira/browse/YARN-5855
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Gour Saha
>
> Looking into this issue with [~gsaha], we noticed that multiple things can 
> contribute to an app continuing to run after a DELETE call, which consists of 
> a stop and a destroy operation. One problem is that the stop call is 
> asynchronous unless a force flag is set. Without the force flag, a message is 
> sent to the AM and success is returned, and with the flag 
> yarnClient.killRunningApplication is called. (There is also an option to wait 
> for a fixed amount of time for the app to stop before returning, but DELETE 
> is not setting this option and force is preferable in this case.) The other 
> issue is that the destroy operation is attempted in a loop, but if the number 
> of retries is exceeded the call returns a 204 response.





