[jira] [Commented] (YARN-8373) RM Received RMFatalEvent of type CRITICAL_THREAD_CRASH

2019-11-26 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983250#comment-16983250
 ] 

Sunil G commented on YARN-8373:
---

Yes [~pbacsko], we need this all the way down to branch-3.1 if possible: trunk -> branch-3.2 -> branch-3.1.

> RM  Received RMFatalEvent of type CRITICAL_THREAD_CRASH
> ---
>
> Key: YARN-8373
> URL: https://issues.apache.org/jira/browse/YARN-8373
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Girish Bhat
>Assignee: Wilfred Spiegelenburg
>Priority: Major
>  Labels: newbie
> Fix For: 3.3.0, 3.2.2
>
> Attachments: YARN-8373-branch-3.1.001.patch, 
> YARN-8373-branch.3.1.001.patch, YARN-8373.001.patch, YARN-8373.002.patch, 
> YARN-8373.003.patch, YARN-8373.004.patch, YARN-8373.005.patch
>
>
>  
>  
> {noformat}
> sudo -u yarn /usr/local/hadoop/latest/bin/yarn version
> Hadoop 2.9.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 756ebc8394e473ac25feac05fa493f6d612e6c50
> Compiled by arsuresh on 2017-11-13T23:15Z
> Compiled with protoc 2.5.0
> From source with checksum 0a76a9a32a5257331741f8d5932f183
> This command was run using /usr/local/hadoop/hadoop-2.9.0/share/hadoop/common/hadoop-common-2.9.0.jar{noformat}
> This is for version 2.9.0 
>  
> {noformat}
> 2018-05-25 05:53:12,742 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, 
> FairSchedulerContinuousScheduling, that exited unexpectedly: 
> java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,743 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Shutting down 
> the resource manager.
> 2018-05-25 05:53:12,749 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: a critical thread, FairSchedulerContinuousScheduling, that exited 
> unexpectedly: java.lang.IllegalArgumentException: Comparison method violates 
> its general contract!
> at java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1454)
> at java.util.Collections.sort(Collections.java:175)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.ClusterNodeTracker.sortedNodeList(ClusterNodeTracker.java:340)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.continuousSchedulingAttempt(FairScheduler.java:907)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler$ContinuousSchedulingThread.run(FairScheduler.java:296)
> 2018-05-25 05:53:12,772 ERROR 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>  ExpiredTokenRemover received java.lang.InterruptedException: sleep 
> interrupted{noformat}
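For background: TimSort throws "Comparison method violates its general 
contract!" when the comparator it is given is not consistent, which can happen 
when the values being compared are mutated while the sort is running; 
continuous scheduling sorts the node list while node resources are updated 
concurrently by heartbeats. Below is a minimal sketch of the hazard and the 
usual mitigation (snapshot the sort key before sorting), using a simplified, 
hypothetical Node type rather than the actual ClusterNodeTracker code:

{code:java}
import java.util.*;

// Hypothetical, simplified node type; another thread may change
// availableMemory at any time, as NM heartbeats do in the real scheduler.
class Node { volatile long availableMemory; }

class SnapshotSort {
  // Mitigation sketch: compare an immutable snapshot of the key taken before
  // sorting, so the ordering cannot change underneath TimSort.
  static List<Node> sortedBySnapshot(List<Node> nodes) {
    Map<Node, Long> snapshot = new IdentityHashMap<>();
    for (Node n : nodes) {
      snapshot.put(n, n.availableMemory);
    }
    List<Node> copy = new ArrayList<>(nodes);
    copy.sort(Comparator.comparingLong(snapshot::get));
    return copy;
  }
}
{code}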






[jira] [Commented] (YARN-9936) Support vector of capacity percentages in Capacity Scheduler configuration

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983247#comment-16983247
 ] 

Szilard Nemeth commented on YARN-9936:
--

Hi [~zsiegl]!
If you don't mind, I can take this over.

> Support vector of capacity percentages in Capacity Scheduler configuration
> --
>
> Key: YARN-9936
> URL: https://issues.apache.org/jira/browse/YARN-9936
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Major
> Attachments: Capacity Scheduler support of “vector of resources 
> percentage”.pdf
>
>
> Currently, the Capacity Scheduler queue configuration supports two ways to 
> set queue capacity.
>  * As a percentage of all available resources, given as a float (e.g. 25.0), 
> meaning 25% of its parent queue's resources for all resource types equally 
> (e.g. 25% of all memory, 25% of all CPU cores, and 25% of all available GPUs 
> in the cluster). The percentages of all queues have to add up to 100%.
>  * As an absolute amount of resources (e.g. 
> memory=4GB,vcores=20,yarn.io/gpu=4). The sum of all resources across the 
> queues has to be less than or equal to all resources in the cluster.
> Apart from these two existing ways, there is demand for setting the capacity 
> percentage of each available resource type separately (e.g. 
> {{memory=20%,vcores=40%,yarn.io/gpu=100%}}); a configuration sketch follows 
> below.
>  At the same time, a similar concept should be supported for queues' 
> maximum-capacity as well.
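For illustration, a hypothetical capacity-scheduler.xml sketch of the proposal 
above; the property names are the existing Capacity Scheduler ones, while the 
vector values (and the maximum-capacity numbers) are made up for this example, 
not a released syntax:

{noformat}
yarn.scheduler.capacity.root.default.capacity = memory=20%,vcores=40%,yarn.io/gpu=100%
yarn.scheduler.capacity.root.default.maximum-capacity = memory=40%,vcores=80%,yarn.io/gpu=100%
{noformat}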






[jira] [Commented] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983238#comment-16983238
 ] 

Prabhu Joseph commented on YARN-9937:
-

Thanks [~snemeth]. I have attached the branch-3.2 and branch-3.1 patches.

> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.1.001.patch, YARN-9937-branch-3.2.001.patch, 
> YARN-9937-branch-3.2.002.patch, YARN-9937-branch-3.2.003.patch
>
>
> Below are the missing queue configs which are not part of the RMWebServices 
> scheduler endpoint:
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime
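For reference, CapacitySchedulerQueueInfo backs the standard RM scheduler REST 
endpoint below; the exact JSON field names the four items above get depend on 
the patch (hostname and port are placeholders):

{noformat}
curl "http://rm-host:8088/ws/v1/cluster/scheduler"
{noformat}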






[jira] [Updated] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9937:

Attachment: YARN-9937-branch-3.1.001.patch

> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.1.001.patch, YARN-9937-branch-3.2.001.patch, 
> YARN-9937-branch-3.2.002.patch, YARN-9937-branch-3.2.003.patch
>
>
> Below are the missing queue configs which are not part of the RMWebServices 
> scheduler endpoint:
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime






[jira] [Updated] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9937:

Attachment: YARN-9937-branch-3.2.003.patch

> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.2.001.patch, YARN-9937-branch-3.2.002.patch, 
> YARN-9937-branch-3.2.003.patch
>
>
> Below are the missing queue configs which are not part of the RMWebServices 
> scheduler endpoint:
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime






[jira] [Commented] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983172#comment-16983172
 ] 

Prabhu Joseph commented on YARN-9290:
-

Thanks [~snemeth]. 

I think this is not required for branch-3.2 and branch-3.1. I will mark this as resolved.

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch, 
> YARN-9290-006.patch, YARN-9290-007.patch, YARN-9290-008.patch, 
> YARN-9290-009.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler: the RM keeps trying to allocateOnNode, logging 
> the exception each time. Such a request is rejected in the case of the 
> placement-processor handler.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.ja
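A minimal sketch of the idea behind the fix: validate the target namespace up 
front so the request is rejected once, instead of the exception being logged on 
every allocation attempt. The class and exception names are taken from the 
stack trace above, but the method itself is illustrative, not the committed 
change:

{code:java}
// Illustrative sketch only. Parse the target namespace once at submission
// time, so an invalid prefix (e.g. "notselfi") rejects the whole
// SchedulingRequest up front.
static void validateTargetNamespace(String namespace) throws YarnException {
  try {
    TargetApplicationsNamespace.parse(namespace);
  } catch (InvalidAllocationTagsQueryException e) {
    throw new YarnException("Rejecting SchedulingRequest: " + e.getMessage(), e);
  }
}
{code}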

[jira] [Commented] (YARN-9052) Replace all MockRM submit method definitions with a builder

2019-11-26 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983146#comment-16983146
 ] 

Sunil G commented on YARN-9052:
---

Thanks [~snemeth], and apologies for the delay.
 # MockRMAppSubmitter is now used to submit an app, and MockRM is no longer 
used for that, is that a correct statement? If that's the case, what is MockRM 
used for now?
 # Please check the warnings given by checkstyle, because there are some 
missing javadoc comment errors.
 # Some naming-related comments in MockRMAppSubmissionData (a usage sketch 
follows below):
 ## withName ==> withAppName
 ## withUnmanaged ==> withUnmanagedAM
 ## withWaitForAccepted ==> withWaitForAppAcceptedState
 ## withPriority ==> withAppPriority (because we also have a queue priority)
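A hypothetical sketch of builder-style submission using the renames proposed 
above; the factory method and parameter types are assumptions, not the 
committed MockRMAppSubmissionData API:

{code:java}
// Hypothetical usage sketch (names follow the proposed renames above).
MockRMAppSubmissionData data =
    MockRMAppSubmissionData.Builder.createWithMemory(1024, rm) // assumed factory
        .withAppName("test-app")
        .withUnmanagedAM(false)
        .withQueue("default")
        .withWaitForAppAcceptedState(true)
        .withAppPriority(Priority.newInstance(0))
        .build();
RMApp app = MockRMAppSubmitter.submit(rm, data);
{code}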

> Replace all MockRM submit method definitions with a builder
> ---
>
> Key: YARN-9052
> URL: https://issues.apache.org/jira/browse/YARN-9052
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: 
> YARN-9052-004withlogs-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs003-justfailed.txt, 
> YARN-9052-testlogs003-patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt,
>  YARN-9052-testlogs004-justfailed.txt, YARN-9052.001.patch, 
> YARN-9052.002.patch, YARN-9052.003.patch, YARN-9052.004.patch, 
> YARN-9052.004.withlogs.patch, YARN-9052.005.patch, YARN-9052.006.patch, 
> YARN-9052.007.patch, YARN-9052.testlogs.002.patch, 
> YARN-9052.testlogs.002.patch, YARN-9052.testlogs.003.patch, 
> YARN-9052.testlogs.patch
>
>
> MockRM has 31 definitions of submitApp, most of them having more than an 
> acceptable number of parameters (ranging from 2 to as many as 22), which 
> makes the code completely unreadable.
> On top of the unreadability, it is very hard to follow what RMApp will be 
> produced for a test, as tests often pass a lot of empty / null values as 
> parameters.






[jira] [Commented] (YARN-8148) Update decimal values for queue capacities shown on queue status CLI

2019-11-26 Thread Sunil G (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983140#comment-16983140
 ] 

Sunil G commented on YARN-8148:
---

Hi [~snemeth]

Since this is CLI output and the change only affects the value, we don't need 
to consider compatibility in this case.

Please go ahead and commit. +1

> Update decimal values for queue capacities shown on queue status CLI
> 
>
> Key: YARN-8148
> URL: https://issues.apache.org/jira/browse/YARN-8148
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-8148-002.patch, YARN-8148.1.patch
>
>
> Capacities are shown with two decimal values in the RM UI as part of YARN-6182. 
> The queue status CLI is still showing one decimal value.
> {code}
> [root@bigdata3 yarn]# yarn queue -status default
> Queue Information : 
> Queue Name : default
>   State : RUNNING
>   Capacity : 69.9%
>   Current Capacity : .0%
>   Maximum Capacity : 70.0%
>   Default Node Label expression : 
>   Accessible Node Labels : *
>   Preemption : enabled
> {code}
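A minimal sketch of the formatting difference in question (illustrative, not 
the actual QueueCLI code):

{code:java}
float capacity = 69.94f;
// One decimal place, as the CLI prints today:
System.out.println(String.format("Capacity : %.1f%%", capacity)); // Capacity : 69.9%
// Two decimal places, matching the RM UI after YARN-6182:
System.out.println(String.format("Capacity : %.2f%%", capacity)); // Capacity : 69.94%
{code}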






[jira] [Commented] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983125#comment-16983125
 ] 

Hadoop QA commented on YARN-9992:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 102 unchanged - 0 fixed = 103 total (was 102) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 
29s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9992 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986851/YARN-9992.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dae86372c6c0 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef950b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25225/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25225/testReport/ |
| Max. process+th

[jira] [Updated] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9992:

Target Version/s: 2.10.1

> Max allocation per queue is zero for custom resource types on RM startup
> 
>
> Key: YARN-9992
> URL: https://issues.apache.org/jira/browse/YARN-9992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9992.001.patch
>
>
> Found an issue where GPU requests on a newly booted RM cannot be scheduled. 
> The RM throws the exception in SchedulerUtils#throwInvalidResourceException:
> {noformat}
> throw new InvalidResourceRequestException(
> "Invalid resource request, requested resource type=[" + reqResourceName
> + "] < 0 or greater than maximum allowed allocation. Requested "
> + "resource=" + reqResource + ", maximum allowed allocation="
> + availableResource
> + ", please note that maximum allowed allocation is calculated "
> + "by scheduler based on maximum resource of registered "
> + "NodeManagers, which might be less than configured "
> + "maximum allocation="
> + ResourceUtils.getResourceTypesMaximumAllocation());{noformat}
> Upon refreshing scheduler (e.g. via refreshQueues), GPU scheduling works 
> again.
> I think the root cause is that upon a scheduler refresh, resource-types.xml is 
> loaded in CapacitySchedulerConfiguration (as part of YARN-7738), so when we 
> call ResourceUtils#fetchMaximumAllocationFromConfig in 
> CapacitySchedulerConfiguration#getMaximumAllocationPerQueue, it's able to 
> fetch the {{yarn.resource-types}} config. But resource-types.xml is not 
> loaded into the conf in CapacityScheduler#initScheduler, so it doesn't find 
> the custom resource when computing max allocations, and the custom resource's 
> max allocation is 0.
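A plausible shape of such a one-liner, assuming it is enough to load 
resource-types.xml into the scheduler's Configuration before the per-queue 
maximum allocations are first computed; this is a sketch and may differ from 
the attached patch:

{code:java}
// Sketch only: make CapacityScheduler#initScheduler see resource-types.xml
// the same way a refreshQueues does, so custom resource types are already
// known when the per-queue maximum allocation is first computed.
private void initScheduler(Configuration configuration) throws IOException {
  ResourceUtils.resetResourceTypes(configuration); // loads resource-types.xml
  // ... existing initialization follows ...
}
{code}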






[jira] [Assigned] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reassigned YARN-9992:
---

Assignee: Jonathan Hung

> Max allocation per queue is zero for custom resource types on RM startup
> 
>
> Key: YARN-9992
> URL: https://issues.apache.org/jira/browse/YARN-9992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9992.001.patch
>
>
> Found an issue where GPU requests on a newly booted RM cannot be scheduled. 
> The RM throws the exception in SchedulerUtils#throwInvalidResourceException:
> {noformat}
> throw new InvalidResourceRequestException(
> "Invalid resource request, requested resource type=[" + reqResourceName
> + "] < 0 or greater than maximum allowed allocation. Requested "
> + "resource=" + reqResource + ", maximum allowed allocation="
> + availableResource
> + ", please note that maximum allowed allocation is calculated "
> + "by scheduler based on maximum resource of registered "
> + "NodeManagers, which might be less than configured "
> + "maximum allocation="
> + ResourceUtils.getResourceTypesMaximumAllocation());{noformat}
> Upon refreshing scheduler (e.g. via refreshQueues), GPU scheduling works 
> again.
> I think the root cause is that upon a scheduler refresh, resource-types.xml is 
> loaded in CapacitySchedulerConfiguration (as part of YARN-7738), so when we 
> call ResourceUtils#fetchMaximumAllocationFromConfig in 
> CapacitySchedulerConfiguration#getMaximumAllocationPerQueue, it's able to 
> fetch the {{yarn.resource-types}} config. But resource-types.xml is not 
> loaded into the conf in CapacityScheduler#initScheduler, so it doesn't find 
> the custom resource when computing max allocations, and the custom resource's 
> max allocation is 0.






[jira] [Commented] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983036#comment-16983036
 ] 

Jonathan Hung commented on YARN-9992:
-

Attached a simple one-liner: [^YARN-9992.001.patch]

> Max allocation per queue is zero for custom resource types on RM startup
> 
>
> Key: YARN-9992
> URL: https://issues.apache.org/jira/browse/YARN-9992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9992.001.patch
>
>
> Found an issue where GPU requests on a newly booted RM cannot be scheduled. 
> The RM throws the exception in SchedulerUtils#throwInvalidResourceException:
> {noformat}
> throw new InvalidResourceRequestException(
> "Invalid resource request, requested resource type=[" + reqResourceName
> + "] < 0 or greater than maximum allowed allocation. Requested "
> + "resource=" + reqResource + ", maximum allowed allocation="
> + availableResource
> + ", please note that maximum allowed allocation is calculated "
> + "by scheduler based on maximum resource of registered "
> + "NodeManagers, which might be less than configured "
> + "maximum allocation="
> + ResourceUtils.getResourceTypesMaximumAllocation());{noformat}
> Upon refreshing scheduler (e.g. via refreshQueues), GPU scheduling works 
> again.
> I think the root cause is that upon a scheduler refresh, resource-types.xml is 
> loaded in CapacitySchedulerConfiguration (as part of YARN-7738), so when we 
> call ResourceUtils#fetchMaximumAllocationFromConfig in 
> CapacitySchedulerConfiguration#getMaximumAllocationPerQueue, it's able to 
> fetch the {{yarn.resource-types}} config. But resource-types.xml is not 
> loaded into the conf in CapacityScheduler#initScheduler, so it doesn't find 
> the custom resource when computing max allocations, and the custom resource's 
> max allocation is 0.






[jira] [Updated] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-9992:

Attachment: YARN-9992.001.patch

> Max allocation per queue is zero for custom resource types on RM startup
> 
>
> Key: YARN-9992
> URL: https://issues.apache.org/jira/browse/YARN-9992
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-9992.001.patch
>
>
> Found an issue where GPU requests on a newly booted RM cannot be scheduled. 
> The RM throws the exception in SchedulerUtils#throwInvalidResourceException:
> {noformat}
> throw new InvalidResourceRequestException(
> "Invalid resource request, requested resource type=[" + reqResourceName
> + "] < 0 or greater than maximum allowed allocation. Requested "
> + "resource=" + reqResource + ", maximum allowed allocation="
> + availableResource
> + ", please note that maximum allowed allocation is calculated "
> + "by scheduler based on maximum resource of registered "
> + "NodeManagers, which might be less than configured "
> + "maximum allocation="
> + ResourceUtils.getResourceTypesMaximumAllocation());{noformat}
> Upon refreshing scheduler (e.g. via refreshQueues), GPU scheduling works 
> again.
> I think the root cause is that upon a scheduler refresh, resource-types.xml is 
> loaded in CapacitySchedulerConfiguration (as part of YARN-7738), so when we 
> call ResourceUtils#fetchMaximumAllocationFromConfig in 
> CapacitySchedulerConfiguration#getMaximumAllocationPerQueue, it's able to 
> fetch the {{yarn.resource-types}} config. But resource-types.xml is not 
> loaded into the conf in CapacityScheduler#initScheduler, so it doesn't find 
> the custom resource when computing max allocations, and the custom resource's 
> max allocation is 0.






[jira] [Created] (YARN-9992) Max allocation per queue is zero for custom resource types on RM startup

2019-11-26 Thread Jonathan Hung (Jira)
Jonathan Hung created YARN-9992:
---

 Summary: Max allocation per queue is zero for custom resource 
types on RM startup
 Key: YARN-9992
 URL: https://issues.apache.org/jira/browse/YARN-9992
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jonathan Hung


Found an issue where GPU requests on a newly booted RM cannot be scheduled. 
The RM throws the exception in SchedulerUtils#throwInvalidResourceException:
{noformat}
throw new InvalidResourceRequestException(
"Invalid resource request, requested resource type=[" + reqResourceName
+ "] < 0 or greater than maximum allowed allocation. Requested "
+ "resource=" + reqResource + ", maximum allowed allocation="
+ availableResource
+ ", please note that maximum allowed allocation is calculated "
+ "by scheduler based on maximum resource of registered "
+ "NodeManagers, which might be less than configured "
+ "maximum allocation="
+ ResourceUtils.getResourceTypesMaximumAllocation());{noformat}
Upon refreshing scheduler (e.g. via refreshQueues), GPU scheduling works again.

I think the root cause is that upon a scheduler refresh, resource-types.xml is 
loaded in CapacitySchedulerConfiguration (as part of YARN-7738), so when we call 
ResourceUtils#fetchMaximumAllocationFromConfig in 
CapacitySchedulerConfiguration#getMaximumAllocationPerQueue, it's able to fetch 
the {{yarn.resource-types}} config. But resource-types.xml is not loaded into 
the conf in CapacityScheduler#initScheduler, so it doesn't find the custom 
resource when computing max allocations, and the custom resource's max 
allocation is 0.






[jira] [Commented] (YARN-9462) TestResourceTrackerService.testNodeRemovalGracefully fails sporadically

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982934#comment-16982934
 ] 

Szilard Nemeth commented on YARN-9462:
--

Hi [~prabhujoseph]!
This is no longer blocked as YARN-9011 got committed.

> TestResourceTrackerService.testNodeRemovalGracefully fails sporadically
> ---
>
> Key: YARN-9462
> URL: https://issues.apache.org/jira/browse/YARN-9462
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, test
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Minor
> Attachments: 
> TestResourceTrackerService.testNodeRemovalGracefully.txt, YARN-9462-001.patch
>
>
> TestResourceTrackerService.testNodeRemovalGracefully fails sporadically
> {code}
> [ERROR] 
> testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
>   Time elapsed: 3.385 s  <<< FAILURE!
> java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalUtilDecomToUntracked(TestResourceTrackerService.java:2318)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalUtil(TestResourceTrackerService.java:2280)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalGracefully(TestResourceTrackerService.java:2133)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}






[jira] [Assigned] (YARN-9128) Use SerializationUtils from apache commons to serialize / deserialize ResourceMappings

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-9128:


Assignee: Adam Antal  (was: Zoltan Siegl)

> Use SerializationUtils from apache commons to serialize / deserialize 
> ResourceMappings
> --
>
> Key: YARN-9128
> URL: https://issues.apache.org/jira/browse/YARN-9128
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Adam Antal
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9128.001.patch, YARN-9128.002.patch, 
> YARN-9128.003.patch, YARN-9128.branch-3.2.001.patch
>
>
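For context, a minimal sketch of what using commons-lang3's SerializationUtils 
here could look like, assuming ResourceMappings.AssignedResources implements 
Serializable (the actual patch may differ):

{code:java}
import org.apache.commons.lang3.SerializationUtils;

// Serialize the assigned resources to bytes for the NM state store.
static byte[] store(ResourceMappings.AssignedResources assigned) {
  return SerializationUtils.serialize(assigned);
}

// Restore the assigned resources on NM recovery.
static ResourceMappings.AssignedResources restore(byte[] bytes) {
  return SerializationUtils.deserialize(bytes);
}
{code}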







[jira] [Assigned] (YARN-3890) FairScheduler should show the scheduler health metrics similar to ones added in CapacityScheduler

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-3890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-3890:


Assignee: Adam Antal  (was: Zoltan Siegl)

> FairScheduler should show the scheduler health metrics similar to ones added 
> in CapacityScheduler
> -
>
> Key: YARN-3890
> URL: https://issues.apache.org/jira/browse/YARN-3890
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Anubhav Dhoot
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-3890.001.patch, YARN-3890.002.patch, 
> YARN-3890.003.patch
>
>
> We should add the information displayed in YARN-3293 to FairScheduler as well, 
> possibly sharing the implementation.






[jira] [Assigned] (YARN-5106) Provide a builder interface for FairScheduler allocations for use in tests

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-5106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-5106:


Assignee: Adam Antal  (was: Zoltan Siegl)

> Provide a builder interface for FairScheduler allocations for use in tests
> --
>
> Key: YARN-5106
> URL: https://issues.apache.org/jira/browse/YARN-5106
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Adam Antal
>Priority: Major
>  Labels: newbie++
> Attachments: YARN-5106-branch-3.1.001.patch, 
> YARN-5106-branch-3.1.001.patch, YARN-5106-branch-3.1.001.patch, 
> YARN-5106-branch-3.1.002.patch, YARN-5106-branch-3.2.001.patch, 
> YARN-5106-branch-3.2.001.patch, YARN-5106-branch-3.2.002.patch, 
> YARN-5106.001.patch, YARN-5106.002.patch, YARN-5106.003.patch, 
> YARN-5106.004.patch, YARN-5106.005.patch, YARN-5106.006.patch, 
> YARN-5106.007.patch, YARN-5106.008.patch, YARN-5106.008.patch, 
> YARN-5106.008.patch, YARN-5106.009.patch, YARN-5106.010.patch, 
> YARN-5106.011.patch, YARN-5106.012.patch
>
>
> Most, if not all, fair scheduler tests create an allocations XML file. Having 
> a helper class that potentially uses a builder would make the tests cleaner. 
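A hypothetical sketch of what such a builder-based helper could look like; 
every name in it is illustrative, not an existing API:

{code:java}
// Hypothetical builder (illustrative only): writes the allocations XML file
// for a FairScheduler test without hand-crafting the XML by string
// concatenation.
AllocationFileBuilder.create()
    .queue("root.queueA", q -> q.weight(2.0f).maxRunningApps(10))
    .queue("root.queueB", q -> q.minResources("1024 mb, 1 vcores"))
    .writeTo(ALLOC_FILE);
{code}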






[jira] [Commented] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982907#comment-16982907
 ] 

Hudson commented on YARN-9290:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17700 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17700/])
YARN-9290. Invalid SchedulingRequest not rejected in Scheduler (snemeth: rev 
ef950b086354c8a02eecd6745f6ab0fe5449f7b0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestMaxRunningAppsEnforcer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSAppAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestSchedulingRequestContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/AppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/Allocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/TestFifoScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestQueueManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAppSchedulingInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java


> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch, 
> YARN-9290-006.patch, YARN-9290-007.patch, YARN-9290-008.patch, 
> YARN-9290-009.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler: the RM keeps trying to allocateOnNode, logging 
> the exception each time. Such a request is rejected in the case of the 
> placement-processor handler.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.ap

[jira] [Updated] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9290:
-
Fix Version/s: 3.3.0

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch, 
> YARN-9290-006.patch, YARN-9290-007.patch, YARN-9290-008.patch, 
> YARN-9290-009.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler: the RM keeps trying to allocateOnNode, logging 
> the exception each time. Such a request is rejected in the case of the 
> placement-processor handler.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1630)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNo

[jira] [Commented] (YARN-9290) Invalid SchedulingRequest not rejected in Scheduler PlacementConstraintsHandler

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982886#comment-16982886
 ] 

Szilard Nemeth commented on YARN-9290:
--

Thanks [~prabhujoseph]!
The latest patch looks good, +1; committed to trunk.
What do you think about backporting this to the older branches (3.2 / 3.1)? 
Thanks.

> Invalid SchedulingRequest not rejected in Scheduler 
> PlacementConstraintsHandler 
> 
>
> Key: YARN-9290
> URL: https://issues.apache.org/jira/browse/YARN-9290
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9290-001.patch, YARN-9290-002.patch, 
> YARN-9290-003.patch, YARN-9290-004.patch, YARN-9290-005.patch, 
> YARN-9290-006.patch, YARN-9290-007.patch, YARN-9290-008.patch, 
> YARN-9290-009.patch
>
>
> A SchedulingRequest with an invalid namespace is not rejected by the Scheduler 
> PlacementConstraintsHandler: the RM keeps trying to allocateOnNode, logging 
> the exception each time. Such a request is rejected in the case of the 
> placement-processor handler.
> {code}
> 2019-02-08 16:51:27,548 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator:
>  Failed to query node cardinality:
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.InvalidAllocationTagsQueryException:
>  Invalid namespace prefix: notselfi, valid values are: 
> all,not-self,app-id,app-tag,self
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.fromString(TargetApplicationsNamespace.java:277)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.TargetApplicationsNamespace.parse(TargetApplicationsNamespace.java:234)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags.createAllocationTags(AllocationTags.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraintExpression(PlacementConstraintsUtil.java:78)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfySingleConstraint(PlacementConstraintsUtil.java:240)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyAndConstraint(PlacementConstraintsUtil.java:272)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:324)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.PlacementConstraintsUtil.canSatisfyConstraints(PlacementConstraintsUtil.java:365)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.checkCardinalityAndPending(SingleConstraintAppPlacementAllocator.java:355)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.placement.SingleConstraintAppPlacementAllocator.precheckNode(SingleConstraintAppPlacementAllocator.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.precheckNode(AppSchedulingInfo.java:779)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.preCheckForNodeCandidateSet(RegularContainerAllocator.java:145)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.allocate(RegularContainerAllocator.java:837)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator.assignContainers(RegularContainerAllocator.java:890)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.ContainerAllocator.assignContainers(ContainerAllocator.java:54)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.assignContainers(FiCaSchedulerApp.java:977)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.assignContainers(LeafQueue.java:1173)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:795)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContaine

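For context, a minimal sketch of the kind of request that exercises this path, assuming 
the standard SchedulingRequest/PlacementConstraints builder API; the allocation tag 
"hbase" and the resource sizing are made up for illustration, and "notselfi" is the 
invalid prefix from the log above:

{code:java}
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceSizing;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
import org.apache.hadoop.yarn.api.resource.PlacementConstraints;

public class InvalidNamespaceRequest {
  public static SchedulingRequest build() {
    // "notselfi" is not a valid namespace prefix (valid values:
    // all, not-self, app-id, app-tag, self). The scheduler-side
    // handler only discovers this when evaluating the constraint
    // per node, hence the repeated warnings above.
    PlacementConstraint constraint = PlacementConstraints
        .targetNotIn(PlacementConstraints.NODE,
            PlacementConstraints.PlacementTargets
                .allocationTagWithNamespace("notselfi", "hbase"))
        .build();
    return SchedulingRequest.newBuilder()
        .allocationRequestId(1L)
        .allocationTags(Collections.singleton("hbase"))
        .executionType(
            ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED))
        .placementConstraintExpression(constraint)
        .resourceSizing(ResourceSizing.newInstance(1,
            Resource.newInstance(1024, 1)))
        .build();
  }
}
{code}
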
[jira] [Commented] (YARN-9362) Code cleanup in TestNMLeveldbStateStoreService

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982873#comment-16982873
 ] 

Hudson commented on YARN-9362:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17699 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17699/])
YARN-9362. Code cleanup in TestNMLeveldbStateStoreService. Contributed 
(snemeth: rev 828ab400eea64ebb628a36cc3d0d53de0bf38934)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/TestNMLeveldbStateStoreService.java


> Code cleanup in TestNMLeveldbStateStoreService
> --
>
> Key: YARN-9362
> URL: https://issues.apache.org/jira/browse/YARN-9362
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Denes Gerencser
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9362.001.patch, YARN-9362.002.patch
>
>
> There are many ways to improve TestNMLeveldbStateStoreService:
> 1. RecoveredContainerState fields are asserted repeatedly. Extracting some 
> simple assertion helper methods would definitely make this more readable 
> (see the sketch after this list).
> 2. The tests are very long and hard to read in general: again, finding 
> methods that could be extracted to avoid code repetition could help.
> 3. You name it.
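
As an illustration of item 1, a minimal sketch of the kind of helper extraction meant 
here, assuming JUnit 4; the accessors on RecoveredContainerState are based on the NM 
state-store API but should be treated as illustrative:

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.RecoveredContainerState;
import org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.RecoveredContainerStatus;

// Hypothetical helper inside TestNMLeveldbStateStoreService: one call
// replaces the block of assertions repeated after each recovery step.
private static void assertRecoveredContainer(RecoveredContainerState state,
    RecoveredContainerStatus expectedStatus, int expectedExitCode,
    boolean expectedKilled) {
  assertEquals("unexpected recovered status",
      expectedStatus, state.getStatus());
  assertEquals("unexpected exit code",
      expectedExitCode, state.getExitCode());
  assertEquals("unexpected killed flag",
      expectedKilled, state.getKilled());
}
{code}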



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982856#comment-16982856
 ] 

Szilard Nemeth commented on YARN-9991:
--

Thanks [~prabhujoseph]

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch, YARN-9991.002.patch
>
>
> There are situations when the real submitting user differs from the user that 
> arrives at YARN. For example, in the case of a Hive application with Hive 
> impersonation turned off, the Hive queries run as the hive user, and the queue 
> mapping is done based on that username. Unfortunately, in this case YARN 
> doesn't have any information about the real user, and there are cases when the 
> customer may want to map these applications to the real submitting user's 
> queue instead of the Hive queue.
> For these cases, if the client passes the username in the application tag, we 
> may read it and use it during the queue mapping, if that user has rights to 
> run on the real user's queue (a minimal client-side sketch follows below).
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  
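
For illustration, a minimal client-side sketch of passing the tag at submission time, 
assuming the standard ApplicationSubmissionContext API; the username "alice" is made up:

{code:java}
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public class RealUserTagger {
  // Attach the real submitting user as an application tag so the RM
  // can use it for queue mapping; "alice" is a made-up username.
  static void tagWithRealUser(ApplicationSubmissionContext appContext) {
    // After this change the expected prefix is "userid=" (was "u=").
    appContext.setApplicationTags(Collections.singleton("userid=alice"));
  }
}
{code}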



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9362) Code cleanup in TestNMLeveldbStateStoreService

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9362:
-
Fix Version/s: 3.3.0

> Code cleanup in TestNMLeveldbStateStoreService
> --
>
> Key: YARN-9362
> URL: https://issues.apache.org/jira/browse/YARN-9362
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Denes Gerencser
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: YARN-9362.001.patch, YARN-9362.002.patch
>
>
> There are many ways to improve TestNMLeveldbStateStoreService:
> 1. RecoveredContainerState fields are asserted repeatedly. Extracting some 
> simple assertion helper methods would definitely make this more readable.
> 2. The tests are very long and hard to read in general: again, finding 
> methods that could be extracted to avoid code repetition could help.
> 3. You name it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982855#comment-16982855
 ] 

Szilard Nemeth commented on YARN-9937:
--

Hi [~prabhujoseph]!
As per our offline discussion, you said we need to backport this down to branch-3.1.
Could you please add a branch-3.1 patch, then?
Thanks.

> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.2.001.patch, YARN-9937-branch-3.2.002.patch
>
>
> Below are the missing queue configs which are not part of the RMWebServices 
> scheduler endpoint (a fetch sketch follows the list).
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime
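
For reference, a minimal sketch of fetching the scheduler endpoint that should expose 
these fields, assuming a Java 11 HTTP client and a hypothetical RM address 
rm-host:8088; the REST path is the standard scheduler endpoint:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchedulerInfoFetcher {
  public static void main(String[] args) throws Exception {
    // The queue configs listed above would appear under
    // schedulerInfo/queues in this JSON response once exposed.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://rm-host:8088/ws/v1/cluster/scheduler"))
        .header("Accept", "application/json")
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body());
  }
}
{code}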



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982854#comment-16982854
 ] 

Hudson commented on YARN-9991:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17698 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17698/])
YARN-9991. Fix Application Tag prefix to userid. Contributed by Szilard 
(prabhujoseph: rev aa7ab2719f745f6e2a5cfbca713bb49865cf52bd)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java


> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch, YARN-9991.002.patch
>
>
> There are situations when the real submitting user differs from the user that 
> arrives at YARN. For example, in the case of a Hive application with Hive 
> impersonation turned off, the Hive queries run as the hive user, and the queue 
> mapping is done based on that username. Unfortunately, in this case YARN 
> doesn't have any information about the real user, and there are cases when the 
> customer may want to map these applications to the real submitting user's 
> queue instead of the Hive queue.
> For these cases, if the client passes the username in the application tag, we 
> may read it and use it during the queue mapping, if that user has rights to 
> run on the real user's queue.
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9362) Code cleanup in TestNMLeveldbStateStoreService

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982851#comment-16982851
 ] 

Szilard Nemeth commented on YARN-9362:
--

Latest patch looks good to me, +1, committed to trunk.
Thanks [~denes.gerencser] for your contribution.
Thanks [~pbacsko] for the review.

> Code cleanup in TestNMLeveldbStateStoreService
> --
>
> Key: YARN-9362
> URL: https://issues.apache.org/jira/browse/YARN-9362
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Denes Gerencser
>Priority: Minor
> Attachments: YARN-9362.001.patch, YARN-9362.002.patch
>
>
> There are many ways to improve TestNMLeveldbStateStoreService:
> 1. RecoveredContainerState fields are asserted repeatedly. Extracting some 
> simple assertion helper methods would definitely make this more readable.
> 2. The tests are very long and hard to read in general: again, finding 
> methods that could be extracted to avoid code repetition could help.
> 3. You name it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982845#comment-16982845
 ] 

Prabhu Joseph commented on YARN-9991:
-

Thank you [~snemeth] for fixing this issue. The latest patch looks good. I have 
committed [^YARN-9991.002.patch] to trunk.

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch, YARN-9991.002.patch
>
>
> There are situations when the real submitting user differs from the user that 
> arrives at YARN. For example, in the case of a Hive application with Hive 
> impersonation turned off, the Hive queries run as the hive user, and the queue 
> mapping is done based on that username. Unfortunately, in this case YARN 
> doesn't have any information about the real user, and there are cases when the 
> customer may want to map these applications to the real submitting user's 
> queue instead of the Hive queue.
> For these cases, if the client passes the username in the application tag, we 
> may read it and use it during the queue mapping, if that user has rights to 
> run on the real user's queue.
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9899) Migration tool that helps to generate CS config based on FS config [Phase 2]

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982843#comment-16982843
 ] 

Hudson commented on YARN-9899:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17697 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17697/])
YARN-9899. Migration tool that help to generate CS config based on FS (snemeth: 
rev 8c9018d5c7830ae8ec85f446985cafbc8a14d688)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestQueuePlacementConverter.java
* (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigArgumentHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/QueuePlacementConverter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverterMain.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigArgumentHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigConverterTestCommons.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigConverterMain.java


> Migration tool that helps to generate CS config based on FS config [Phase 2] 
> 
>
> Key: YARN-9899
> URL: https://issues.apache.org/jira/browse/YARN-9899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9899-001.patch, YARN-9899-002.patch, 
> YARN-9899-003.patch, YARN-9899-004.patch, YARN-9899-005.patch, 
> YARN-9899-006.patch, YARN-9899-007.patch
>
>
> YARN-9699 laid the groundwork for a converter from FS to CS config.
> During the development of the converter, we came up with the following things 
> to fix.
> 1. If we don't specify a mandatory option, we have this stacktrace for 
> example:
>  
> {code:java}
> org.apache.commons.cli.MissingOptionException: Missing required option: o
>  at org.apache.commons.cli.Parser.checkRequiredOptions(Parser.java:299)
>  at org.apache.commons.cli.Parser.parse(Parser.java:231)
>  at org.apache.commons.cli.Parser.parse(Parser.java:85)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigArgumentHandler.parseAndConvert(FSConfigToCSConfigArgumentHandler.java:100)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1572){code}
>  
> We should provide a more concise and meaningful error message (no stacktrace 
> on the CLI; the exception with its stacktrace should be logged to the RM log).
> An explanation of the missing option is also required (see the sketch after 
> this list).
> 2. We may think about how to handle exceptions from commons CLI: 
> MissingArgumentException vs. MissingOptionException
> 3. We need to provide a -h / --help option for the CLI that prints all the 
> possible options / arguments.
> 4. Last but not least: We should move the CLI command to a more reasonable 
> place:
> As YARN-9699 implemented it, the command can be invoked like: 
> {code:java}
> /opt/hadoop/bin/yarn resourcemanager -convert-fs-configuration -y 
> /opt/hadoop/etc/hadoop/yarn-site.xml -f 
> /opt/hadoop/etc/hadoop/fair-scheduler.xml -r 
> ~systest/sample-rules-config.properties -o /tmp/fs-cs-output
> {code}
> This is problematic: if the YARN RM is already running, we have to stop it in 
> order to start it again with the conversion switch.
> 5. Add unit test coverage for {{QueuePlacementConverter}}
> 6. Close some feature gaps.
>  
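
To illustrate items 1-3, a minimal sketch of handling the commons-cli exceptions with a 
concise message and a -h/--help option instead of a raw stacktrace; the option names and 
the "fs2cs" command name are illustrative only, not the converter's actual ones:

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.MissingArgumentException;
import org.apache.commons.cli.MissingOptionException;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ConverterCliSketch {
  public static void main(String[] args) {
    Options options = new Options();
    options.addOption(Option.builder("o").longOpt("output-directory")
        .hasArg().required().desc("Output directory").build());
    options.addOption("h", "help", false, "Print this help");
    HelpFormatter formatter = new HelpFormatter();
    try {
      CommandLine cli = new DefaultParser().parse(options, args);
      if (cli.hasOption("h")) {
        formatter.printHelp("fs2cs", options);
        return;
      }
      // ... run the conversion ...
    } catch (MissingOptionException | MissingArgumentException e) {
      // Concise CLI message; the stacktrace would go to the RM log.
      System.err.println("Error: " + e.getMessage());
      formatter.printHelp("fs2cs", options);
    } catch (ParseException e) {
      System.err.println("Invalid arguments: " + e.getMessage());
    }
  }
}
{code}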



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Commented] (YARN-9899) Migration tool that helps to generate CS config based on FS config [Phase 2]

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982831#comment-16982831
 ] 

Szilard Nemeth commented on YARN-9899:
--

Hi [~pbacsko]!
Latest patch looks good to me, +1, committed to trunk.
Thanks for your contribution!

> Migration tool that helps to generate CS config based on FS config [Phase 2] 
> 
>
> Key: YARN-9899
> URL: https://issues.apache.org/jira/browse/YARN-9899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9899-001.patch, YARN-9899-002.patch, 
> YARN-9899-003.patch, YARN-9899-004.patch, YARN-9899-005.patch, 
> YARN-9899-006.patch, YARN-9899-007.patch
>
>
> YARN-9699 laid the groundwork for a converter from FS to CS config.
> During the development of the converter, we came up with the following things 
> to fix.
> 1. If we don't specify a mandatory option, we have this stacktrace for 
> example:
>  
> {code:java}
> org.apache.commons.cli.MissingOptionException: Missing required option: o
>  at org.apache.commons.cli.Parser.checkRequiredOptions(Parser.java:299)
>  at org.apache.commons.cli.Parser.parse(Parser.java:231)
>  at org.apache.commons.cli.Parser.parse(Parser.java:85)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigArgumentHandler.parseAndConvert(FSConfigToCSConfigArgumentHandler.java:100)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1572){code}
>  
> We should provide a more concise and meaningful error message (no stacktrace 
> on the CLI; the exception with its stacktrace should be logged to the RM log).
> An explanation of the missing option is also required.
> 2. We may think about how to handle exceptions from commons CLI: 
> MissingArgumentException vs. MissingOptionException
> 3. We need to provide a -h / --help option for the CLI that prints all the 
> possible options / arguments.
> 4. Last but not least: We should move the CLI command to a more reasonable 
> place:
> As YARN-9699 implemented it, the command can be invoked like: 
> {code:java}
> /opt/hadoop/bin/yarn resourcemanager -convert-fs-configuration -y 
> /opt/hadoop/etc/hadoop/yarn-site.xml -f 
> /opt/hadoop/etc/hadoop/fair-scheduler.xml -r 
> ~systest/sample-rules-config.properties -o /tmp/fs-cs-output
> {code}
> This is problematic: if the YARN RM is already running, we have to stop it in 
> order to start it again with the conversion switch.
> 5. Add unit test coverage for {{QueuePlacementConverter}}
> 6. Close some feature gaps.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9561) Add C changes for the new RuncContainerRuntime

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982778#comment-16982778
 ] 

Hadoop QA commented on YARN-9561:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
61m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 15m 32s{color} | 
{color:red} root generated 4 new + 22 unchanged - 4 fixed = 26 total (was 26) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 42s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9561 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986713/YARN-9561.015.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux cca9e906ad59 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 52e9ee3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/25224/artifact/out/diff-compile-cc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/25224/artifact/out/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25224/testReport/ |
| Max. process+thread count | 1485 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25224/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add C changes for the new RuncContainerRuntime
> --
>
> Key: YARN-9561
> URL: https://issues.apache.org/jira/browse/YARN-9561
> Project: Hadoop YARN
>

[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982755#comment-16982755
 ] 

Hadoop QA commented on YARN-9991:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 300 unchanged - 0 fixed = 303 total (was 300) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
52s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 82m 
12s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}181m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986816/YARN-9991.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux c5da862813fb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 201

[jira] [Commented] (YARN-9899) Migration tool that helps to generate CS config based on FS config [Phase 2]

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982752#comment-16982752
 ] 

Hadoop QA commented on YARN-9899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 63 unchanged - 1 fixed = 63 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m  0s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m  
1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}268m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.webproxy.TestWebAppProxyServlet 

[jira] [Commented] (YARN-9561) Add C changes for the new RuncContainerRuntime

2019-11-26 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982720#comment-16982720
 ] 

Jim Brennan commented on YARN-9561:
---

Thanks for the update [~ebadger]! I downloaded patch 015 and, after doing a 
clean at the top level, verified that I could build the nodemanager. I then ran 
cetest and test-container-executor, and finally verified that I could build 
from the top level as well.

+1 (non-binding) on patch 015.

 

> Add C changes for the new RuncContainerRuntime
> --
>
> Key: YARN-9561
> URL: https://issues.apache.org/jira/browse/YARN-9561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9561.001.patch, YARN-9561.002.patch, 
> YARN-9561.003.patch, YARN-9561.004.patch, YARN-9561.005.patch, 
> YARN-9561.006.patch, YARN-9561.007.patch, YARN-9561.008.patch, 
> YARN-9561.009.patch, YARN-9561.010.patch, YARN-9561.011.patch, 
> YARN-9561.012.patch, YARN-9561.013.patch, YARN-9561.014.patch, 
> YARN-9561.015.patch
>
>
> This JIRA will be used to add the C changes to the container-executor native 
> binary that are necessary for the new RuncContainerRuntime. There should be 
> no changes to existing code paths. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982712#comment-16982712
 ] 

Hadoop QA commented on YARN-9923:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
33s{color} | {color:green} root generated 0 new + 1868 unchanged - 2 fixed = 
1868 total (was 1870) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 43s{color} | {color:orange} root: The patch generated 13 new + 596 unchanged 
- 52 fixed = 609 total (was 648) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
1s{color} | {color:orange} The patch generated 3 new + 0 unchanged - 0 fixed = 
3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
12s{color} | {color:green} hadoop-co

[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982697#comment-16982697
 ] 

Hadoop QA commented on YARN-9991:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 87 unchanged - 0 fixed = 90 total (was 87) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
17s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9991 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986804/YARN-9991.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux 0aa73ab18ce9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | tru

[jira] [Commented] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982654#comment-16982654
 ] 

Hadoop QA commented on YARN-9607:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
50s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9607 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986814/YARN-9607.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b5abffd9f0b6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 448ffb1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25222/testReport/ |
| Max. process+thread count | 413 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25222/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Auto-configuring rollover-size of IFile format for non-appendable filesystems
> ---

[jira] [Commented] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982632#comment-16982632
 ] 

Hudson commented on YARN-9444:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17695 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17695/])
YARN-9444. YARN API ResourceUtils's getRequestedResourcesFromConfig (snemeth: 
rev 52e9ee39a12ce91b3a545603dcf1103518ad2920)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResourceUtils.java


> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that it only affects resource types 
> whose names contain '.' characters. The root cause is the regexp in the 
> getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, which are 
> yarn.io/gpu and yarn.io/fpga respectively (a small demonstration follows 
> below).
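
A small self-contained demonstration of the mismatch; the relaxed pattern is just one 
possible fix shown for illustration, not necessarily what the patch does:

{code:java}
import java.util.regex.Pattern;

public class ResourceNameRegexDemo {
  public static void main(String[] args) {
    String prefix = "mapreduce.reduce.resource.";
    String key = prefix + "yarn.io/gpu";

    // Original pattern: forbids '.' anywhere after the prefix,
    // so "yarn.io/gpu" never matches.
    Pattern original =
        Pattern.compile("^" + Pattern.quote(prefix) + "[^.]+$");
    // One possible relaxation (illustrative only): allow any
    // non-empty suffix after the prefix.
    Pattern relaxed =
        Pattern.compile("^" + Pattern.quote(prefix) + ".+$");

    System.out.println(original.matcher(key).matches()); // false
    System.out.println(relaxed.matcher(key).matches());  // true
  }
}
{code}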



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982619#comment-16982619
 ] 

Szilard Nemeth commented on YARN-9444:
--

Thanks [~shuzirra] for this contribution, the patch looks good, committed to trunk!
Thanks [~adam.antal] for the review!

[~shuzirra]: Could you please upload patches for branch-3.1 / branch-3.2? Once 
we have a green Jenkins run for them, I will commit those to the appropriate 
branches.
Thanks.

> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that it only affects resource types 
> whose names contain '.' characters. The root cause is the regexp in the 
> getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, which are 
> yarn.io/gpu and yarn.io/fpga respectively.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982598#comment-16982598
 ] 

Szilard Nemeth commented on YARN-9991:
--

Hi [~prabhujoseph]!
Thanks for the comments!
1. Makes sense, added the prefix.
2. Good point, modified the logs accordingly.
3. Not sure what you meant by this.

Uploaded patch002. Please check it!
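
For illustration, a hedged sketch of what the prefixed keys might look like 
(hypothetical names; the final keys depend on the committed patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PlacementConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical keys after adding the suggested "yarn.resourcemanager."
    // prefix; the final names depend on the committed patch.
    conf.setBoolean(
        "yarn.resourcemanager.application-tag-based-placement.enable", true);
    conf.set(
        "yarn.resourcemanager.application-tag-based-placement.username.whitelist",
        "hive");
  }
}
{code}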

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch, YARN-9991.002.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  
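
A minimal sketch of the tag parsing described above, assuming the "userid=" 
prefix; the class and method names are illustrative, not the actual 
RMAppManager code:

{code:java}
import java.util.Optional;
import java.util.Set;

public final class UserIdTagParser {
  // Prefix used by Hive when tagging applications, per the description above.
  private static final String USER_ID_TAG_PREFIX = "userid=";

  // Returns the submitting user extracted from the application tags,
  // if any tag carries the "userid=" prefix.
  public static Optional<String> extractUserId(Set<String> applicationTags) {
    return applicationTags.stream()
        .filter(tag -> tag.startsWith(USER_ID_TAG_PREFIX))
        .map(tag -> tag.substring(USER_ID_TAG_PREFIX.length()))
        .filter(user -> !user.isEmpty())
        .findFirst();
  }
}
{code}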



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Attachment: YARN-9991.002.patch

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch, YARN-9991.002.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems

2019-11-26 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982596#comment-16982596
 ] 

Adam Antal commented on YARN-9607:
--

Rebased & removed trailing whitespace.

Anyone fancy a review please? [~ste...@apache.org], [~sunilg], [~snemeth], 
[~prabhujoseph]

> Auto-configuring rollover-size of IFile format for non-appendable filesystems
> -
>
> Key: YARN-9607
> URL: https://issues.apache.org/jira/browse/YARN-9607
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation, yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9607.001.patch, YARN-9607.002.patch, 
> YARN-9607.003.patch, YARN-9607.004.patch
>
>
> In YARN-9525, we made the IFile format compatible with remote folders using 
> the s3a scheme. In rolling-fashion log aggregation, IFile still fails with 
> the "append is not supported" error message, a known limitation of the 
> format by design. 
> There is a workaround though: by setting the rollover size in the IFile 
> format's configuration, a new aggregated log file is created in each 
> rolling cycle, thus eliminating the append from the process. Setting this 
> config globally would cause performance problems in regular log 
> aggregation, so I suggest enforcing this config to zero if the scheme of 
> the URI is s3a (or any other non-appendable filesystem).
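
A hedged sketch of the auto-configuration idea described above; the 
configuration key and the append-capability check are assumptions for 
illustration, not the exact code from the patch:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class IFileRolloverConfig {
  // s3a is the non-appendable case from YARN-9525; other schemes could be
  // added here. This check is an assumption for illustration.
  private static boolean supportsAppend(URI remoteDir) {
    return !"s3a".equals(remoteDir.getScheme());
  }

  // Force the rollover size to 0 for non-appendable filesystems so every
  // rolling cycle writes a fresh aggregated log file instead of appending.
  public static long effectiveRolloverSize(Configuration conf, URI remoteDir) {
    long configured = conf.getLong(
        "yarn.log-aggregation.ifile.rollover-size", // assumed key
        Long.MAX_VALUE);
    return supportsAppend(remoteDir) ? configured : 0L;
  }
}
{code}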



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9607) Auto-configuring rollover-size of IFile format for non-appendable filesystems

2019-11-26 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-9607:
-
Attachment: YARN-9607.004.patch

> Auto-configuring rollover-size of IFile format for non-appendable filesystems
> -
>
> Key: YARN-9607
> URL: https://issues.apache.org/jira/browse/YARN-9607
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: log-aggregation, yarn
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9607.001.patch, YARN-9607.002.patch, 
> YARN-9607.003.patch, YARN-9607.004.patch
>
>
> In YARN-9525, we made the IFile format compatible with remote folders using 
> the s3a scheme. In rolling-fashion log aggregation, IFile still fails with 
> the "append is not supported" error message, a known limitation of the 
> format by design. 
> There is a workaround though: by setting the rollover size in the IFile 
> format's configuration, a new aggregated log file is created in each 
> rolling cycle, thus eliminating the append from the process. Setting this 
> config globally would cause performance problems in regular log 
> aggregation, so I suggest enforcing this config to zero if the scheme of 
> the URI is s3a (or any other non-appendable filesystem).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982575#comment-16982575
 ] 

Prabhu Joseph edited comment on YARN-9991 at 11/26/19 3:07 PM:
---

[~snemeth] Thanks for the patch. The patch looks good to me. A few minor 
comments related to the old patch.

1. The two configs introduced, *application-tag-based-placement.enable* and 
*application-tag-based-placement.username.whitelist*, are not consistent with 
other YARN configs. Can we add the prefix "*yarn.resourcemanager.*"?

2. There are a few logs in RMAppManager that refer to userId, which may be 
misleading to the user. 
{code:java}
"Application tag based placement is enabled, checking for " +
 "userId in the application tag");

Found userId '{}' in application tag

userId was not found in application tags
{code}
3. If possible, can you also share the reference to userid in Hive for 
validation.

Thanks.


was (Author: prabhu joseph):
[~snemeth] Thanks for the patch. The patch looks good to me. A few minor 
comments related to the old patch.

1. The two configs introduced, *application-tag-based-placement.enable* and 
*application-tag-based-placement.username.whitelist*, are not consistent with 
other YARN configs. Can we add the prefix "*yarn.resourcemanager.*"?

2. There are a few logs in RMAppManager that refer to userId, which may be 
misleading to the user. 
{code:java}
User '{}' is not allowed to do placement based

Found userId '{}' in application tag

userId was not found in application tags
{code}
3. If possible, can you also share the reference to userid in Hive for 
validation.

Thanks.

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982575#comment-16982575
 ] 

Prabhu Joseph commented on YARN-9991:
-

[~snemeth] Thanks for the patch. The patch looks good to me. A few minor 
comments related to the old patch.

1. The two configs introduced, *application-tag-based-placement.enable* and 
*application-tag-based-placement.username.whitelist*, are not consistent with 
other YARN configs. Can we add the prefix "*yarn.resourcemanager.*"?

2. There are a few logs in RMAppManager that refer to userId, which may be 
misleading to the user. 
{code:java}
User '{}' is not allowed to do placement based

Found userId '{}' in application tag

userId was not found in application tags
{code}
3. If possible, can you also share the reference to userid in Hive for 
validation.

Thanks.

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker

2019-11-26 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982573#comment-16982573
 ] 

Adam Antal commented on YARN-9923:
--

Uploaded patchset v5:
- Removed the java-based Docker health checker implementation
- Added support for regularly checking up to 4 scripts.
- Updated the markdown file with the new feature.

TODO:
- Add a test containing the python-based health check

> Introduce HealthReporter interface and implement running Docker daemon checker
> --
>
> Key: YARN-9923
> URL: https://issues.apache.org/jira/browse/YARN-9923
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 3.2.1
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9923.001.patch, YARN-9923.002.patch, 
> YARN-9923.003.patch, YARN-9923.004.patch, YARN-9923.005.patch
>
>
> Currently, if a NodeManager is enabled to allocate Docker containers but 
> the specified binary (docker.binary in the container-executor.cfg) is 
> missing, container allocation fails with the following error message:
> {noformat}
> Container launch fails
> Exit code: 29
> Exception message: Launch container failed
> Shell error output: sh: : No 
> such file or directory
> Could not inspect docker network to get type /usr/bin/docker network inspect 
> host --format='{{.Driver}}'.
> Error constructing docker command, docker error code=-1, error 
> message='Unknown error'
> {noformat}
> I suggest adding a property, say 
> "yarn.nodemanager.runtime.linux.docker.check", with the following options:
> - STARTUP: with this option the NodeManager would not start if the Docker 
> binaries are missing or the Docker daemon is not running (the exception is 
> considered FATAL during startup)
> - RUNTIME: would give a more detailed/user-friendly exception on the 
> NodeManager's side (NM logs) if the Docker binaries are missing or the 
> daemon is not working. This would also prevent further Docker container 
> allocation as long as the binaries do not exist and the Docker daemon is 
> not running.
> - NONE (default): preserves the current behaviour, throwing an exception 
> during container allocation and carrying on with the default retry 
> procedure.
> 
> A new interface called {{HealthChecker}} is introduced, which is used in 
> the {{NodeHealthCheckerService}}. Existing implementations like 
> {{LocalDirsHandlerService}} are modified to implement it, giving a clear 
> abstraction of the node's health. The {{DockerHealthChecker}} implements 
> this new interface.
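
An illustrative sketch of the interface and check modes described above; the 
names follow the description, but the signatures are assumptions, not the 
actual patch:

{code:java}
public interface HealthChecker {
  // STARTUP fails fast at NM start, RUNTIME reports at allocation time,
  // NONE preserves the current behaviour.
  enum CheckMode { STARTUP, RUNTIME, NONE }

  // Human-readable name of the checked component, e.g. "docker".
  String getName();

  // Whether the component (e.g. the Docker daemon) is currently healthy.
  boolean isHealthy();

  // Last failure message, surfaced in the node's health report.
  String getHealthReport();
}
{code}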



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9923) Introduce HealthReporter interface and implement running Docker daemon checker

2019-11-26 Thread Adam Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-9923:
-
Attachment: YARN-9923.005.patch

> Introduce HealthReporter interface and implement running Docker daemon checker
> --
>
> Key: YARN-9923
> URL: https://issues.apache.org/jira/browse/YARN-9923
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 3.2.1
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-9923.001.patch, YARN-9923.002.patch, 
> YARN-9923.003.patch, YARN-9923.004.patch, YARN-9923.005.patch
>
>
> Currently, if a NodeManager is enabled to allocate Docker containers but 
> the specified binary (docker.binary in the container-executor.cfg) is 
> missing, container allocation fails with the following error message:
> {noformat}
> Container launch fails
> Exit code: 29
> Exception message: Launch container failed
> Shell error output: sh: : No 
> such file or directory
> Could not inspect docker network to get type /usr/bin/docker network inspect 
> host --format='{{.Driver}}'.
> Error constructing docker command, docker error code=-1, error 
> message='Unknown error'
> {noformat}
> I suggest adding a property, say 
> "yarn.nodemanager.runtime.linux.docker.check", with the following options:
> - STARTUP: with this option the NodeManager would not start if the Docker 
> binaries are missing or the Docker daemon is not running (the exception is 
> considered FATAL during startup)
> - RUNTIME: would give a more detailed/user-friendly exception on the 
> NodeManager's side (NM logs) if the Docker binaries are missing or the 
> daemon is not working. This would also prevent further Docker container 
> allocation as long as the binaries do not exist and the Docker daemon is 
> not running.
> - NONE (default): preserves the current behaviour, throwing an exception 
> during container allocation and carrying on with the default retry 
> procedure.
> 
> A new interface called {{HealthChecker}} is introduced, which is used in 
> the {{NodeHealthCheckerService}}. Existing implementations like 
> {{LocalDirsHandlerService}} are modified to implement it, giving a clear 
> abstraction of the node's health. The {{DockerHealthChecker}} implements 
> this new interface.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982547#comment-16982547
 ] 

Hadoop QA commented on YARN-9444:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
12s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9444 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986785/YARN-9444.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 209eccc817b3 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 448ffb1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Bui

[jira] [Commented] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982536#comment-16982536
 ] 

Szilard Nemeth commented on YARN-9991:
--

Hi [~sunilg]! Can you please take a look?
Thanks

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Attachment: YARN-9991.001.patch

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9991.001.patch
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Reporter: Szilard Nemeth  (was: Kinga Marton)

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Kinga Marton
>Priority: Major
> Fix For: 3.3.0
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-9991:


Assignee: Szilard Nemeth  (was: Kinga Marton)

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Fix For: 3.3.0
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag: Change prefix to 'userid'

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Summary: Queue mapping based on userid passed through application tag: 
Change prefix to 'userid'  (was: Queue mapping based on userid passed through 
application tag)

> Queue mapping based on userid passed through application tag: Change prefix 
> to 'userid'
> ---
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Fix For: 3.3.0
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Description: 
There are situations when the real submitting user differs from the user that 
arrives at YARN. For example, in the case of a Hive application with Hive 
impersonation turned off, the Hive queries run as the Hive user and the 
mapping is done based on this username. Unfortunately, in this case YARN 
doesn't have any information about the real user, and there are cases when 
the customer may want to map these applications to the real submitting user's 
queue instead of the Hive queue.

For these cases, if the username is passed in the application tag, we can 
read it and use it during queue mapping, provided that user has the rights to 
run on the real user's queue.  

UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
application tags.

 

  was:
There are situations when the real submitting user differs from the user that 
arrives at YARN. For example, in the case of a Hive application with Hive 
impersonation turned off, the Hive queries run as the Hive user and the 
mapping is done based on this username. Unfortunately, in this case YARN 
doesn't have any information about the real user, and there are cases when 
the customer may want to map these applications to the real submitting user's 
queue instead of the Hive queue.

For these cases, if the username is passed in the application tag, we can 
read it and use it during queue mapping, provided that user has the rights to 
run on the real user's queue.  

[~sunilg] please correct me if I missed something.

 


> Queue mapping based on userid passed through application tag
> 
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Fix For: 3.3.0
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> UPDATE REQUIRED: Hive jobs are using "userid=" instead of "u=" for the 
> application tags.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9991) Queue mapping based on userid passed through application tag

2019-11-26 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9991:
-
Summary: Queue mapping based on userid passed through application tag  
(was: CLONE - Queue mapping based on userid passed through application tag)

> Queue mapping based on userid passed through application tag
> 
>
> Key: YARN-9991
> URL: https://issues.apache.org/jira/browse/YARN-9991
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Fix For: 3.3.0
>
>
> There are situations when the real submitting user differs from the user 
> that arrives at YARN. For example, in the case of a Hive application with 
> Hive impersonation turned off, the Hive queries run as the Hive user and 
> the mapping is done based on this username. Unfortunately, in this case 
> YARN doesn't have any information about the real user, and there are cases 
> when the customer may want to map these applications to the real submitting 
> user's queue instead of the Hive queue.
> For these cases, if the username is passed in the application tag, we can 
> read it and use it during queue mapping, provided that user has the rights 
> to run on the real user's queue.  
> [~sunilg] please correct me if I missed something.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9991) CLONE - Queue mapping based on userid passed through application tag

2019-11-26 Thread Szilard Nemeth (Jira)
Szilard Nemeth created YARN-9991:


 Summary: CLONE - Queue mapping based on userid passed through 
application tag
 Key: YARN-9991
 URL: https://issues.apache.org/jira/browse/YARN-9991
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Reporter: Kinga Marton
Assignee: Kinga Marton
 Fix For: 3.3.0


There are situations when the real submitting user differs from the user that 
arrives at YARN. For example, in the case of a Hive application with Hive 
impersonation turned off, the Hive queries run as the Hive user and the 
mapping is done based on this username. Unfortunately, in this case YARN 
doesn't have any information about the real user, and there are cases when 
the customer may want to map these applications to the real submitting user's 
queue instead of the Hive queue.

For these cases, if the username is passed in the application tag, we can 
read it and use it during queue mapping, provided that user has the rights to 
run on the real user's queue.  

[~sunilg] please correct me if I missed something.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982475#comment-16982475
 ] 

Gergely Pollak edited comment on YARN-9444 at 11/26/19 2:01 PM:


Thanks [~adam.antal] for the feedback, I've extracted the "(yarn.io/)?" part 
into a constant, but since the whole regexp relies on the external argument, 
I cannot extract the whole regexp to a constant, and I think extracting the 
"[^.]+$" part wouldn't help much.


was (Author: shuzirra):
Thanks [~adam.antal]  for the feedback, I've extracted the "(yarn.io/)?" part 
into a constant, but since the while regexp is relying on the external 
argument, I cannot extract the whole regexp to a constant and I think 
extracting the "[^.]+$" part wouldn't help much.

> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that this only affects resource 
> types whose names contain '.' characters, and the root cause is the regexp 
> in the getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, yarn.io/gpu 
> and yarn.io/fpga respectively.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9899) Migration tool that help to generate CS config based on FS config [Phase 2]

2019-11-26 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9899:
---
Attachment: YARN-9899-007.patch

> Migration tool that help to generate CS config based on FS config [Phase 2] 
> 
>
> Key: YARN-9899
> URL: https://issues.apache.org/jira/browse/YARN-9899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9899-001.patch, YARN-9899-002.patch, 
> YARN-9899-003.patch, YARN-9899-004.patch, YARN-9899-005.patch, 
> YARN-9899-006.patch, YARN-9899-007.patch
>
>
> YARN-9699 laid down the groundwork for a converter from FS to CS config.
> During the development of the converter, we came up with the following 
> things to fix. 
> 1. If we don't specify a mandatory option, we get a stacktrace like this 
> one:
>  
> {code:java}
> org.apache.commons.cli.MissingOptionException: Missing required option: o
>  at org.apache.commons.cli.Parser.checkRequiredOptions(Parser.java:299)
>  at org.apache.commons.cli.Parser.parse(Parser.java:231)
>  at org.apache.commons.cli.Parser.parse(Parser.java:85)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigArgumentHandler.parseAndConvert(FSConfigToCSConfigArgumentHandler.java:100)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1572){code}
>  
> We should provide a more concise and meaningful error message (no 
> stacktrace on the CLI, but the exception should be logged with its 
> stacktrace to the RM log).
> An explanation of the missing option is also required.
> 2. We may think about how to handle exceptions from commons CLI: 
> MissingArgumentException vs. MissingOptionException
> 3. We need to provide a -h / --help option for the CLI that prints all the 
> possible options / arguments.
> 4. Last but not least: We should move the CLI command to a more reasonable 
> place:
> As YARN-9699 implemented it, the command can be invoked like: 
> {code:java}
> /opt/hadoop/bin/yarn resourcemanager -convert-fs-configuration -y 
> /opt/hadoop/etc/hadoop/yarn-site.xml -f 
> /opt/hadoop/etc/hadoop/fair-scheduler.xml -r 
> ~systest/sample-rules-config.properties -o /tmp/fs-cs-output
> {code}
> This is problematic: if the YARN RM is already running, we need to stop it 
> in order to start the RM again with the conversion switch.
> 5. Add unit test coverage for {{QueuePlacementConverter}}
> 6. Close some feature gaps.
>  
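
A hedged sketch of the friendlier argument handling suggested in points 1-3; 
the tool name "fs2cs" and the class are illustrative (assuming commons-cli 
1.3+), not the actual patch:

{code:java}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.MissingOptionException;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class ConverterCliSketch {
  public static void main(String[] args) {
    Options options = new Options();
    options.addOption("h", "help", false, "print this help text");
    Option output = new Option("o", "output-directory", true,
        "directory for the generated CS configuration");
    output.setRequired(true);
    options.addOption(output);
    try {
      CommandLine cli = new DefaultParser().parse(options, args);
      if (cli.hasOption("h")) {
        new HelpFormatter().printHelp("fs2cs", options);
        return;
      }
      // ... run the conversion using cli.getOptionValue("o") ...
    } catch (MissingOptionException e) {
      // Concise CLI message instead of a stacktrace (point 1); the full
      // exception would go to the RM log instead.
      System.err.println("Missing required option(s): " + e.getMissingOptions());
      new HelpFormatter().printHelp("fs2cs", options);
    } catch (ParseException e) {
      System.err.println("Invalid arguments: " + e.getMessage());
    }
  }
}
{code}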



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982485#comment-16982485
 ] 

Adam Antal commented on YARN-9444:
--

[~shuzirra], thanks!

+1 (non-binding).

> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that this only affects resource 
> types whose names contain '.' characters, and the root cause is the regexp 
> in the getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, yarn.io/gpu 
> and yarn.io/fpga respectively.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982475#comment-16982475
 ] 

Gergely Pollak commented on YARN-9444:
--

Thanks [~adam.antal] for the feedback, I've extracted the "(yarn.io/)?" part 
into a constant, but since the whole regexp relies on the external argument, 
I cannot extract the whole regexp to a constant, and I think extracting the 
"[^.]+$" part wouldn't help much.

> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that this only affects resource 
> types whose names contain '.' characters, and the root cause is the regexp 
> in the getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, yarn.io/gpu 
> and yarn.io/fpga respectively.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9011) Race condition during decommissioning

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982470#comment-16982470
 ] 

Szilard Nemeth commented on YARN-9011:
--

Hi [~pbacsko]!
Patches look good, committed them to their respective branches. Thanks for your 
contribution!

> Race condition during decommissioning
> -
>
> Key: YARN-9011
> URL: https://issues.apache.org/jira/browse/YARN-9011
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.1
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9011-001.patch, YARN-9011-002.patch, 
> YARN-9011-003.patch, YARN-9011-004.patch, YARN-9011-005.patch, 
> YARN-9011-006.patch, YARN-9011-007.patch, YARN-9011-008.patch, 
> YARN-9011-009.patch, YARN-9011-branch-3.1.001.patch, 
> YARN-9011-branch-3.2.001.patch
>
>
> During internal testing, we found a nasty race condition which occurs during 
> decommissioning.
> Node manager, incorrect behaviour:
> {noformat}
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:00:17,634 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: Disallowed NodeManager nodeId: node-6.hostname.com:8041 
> hostname:node-6.hostname.com
> {noformat}
> Node manager, expected behaviour:
> {noformat}
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Received 
> SHUTDOWN signal from Resourcemanager as part of heartbeat, hence shutting 
> down.
> 2018-06-18 21:07:37,377 WARN 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Message from 
> ResourceManager: DECOMMISSIONING  node-6.hostname.com:8041 is ready to be 
> decommissioned
> {noformat}
> Note the two different messages from the RM ("Disallowed NodeManager" vs 
> "DECOMMISSIONING"). The problem is that {{ResourceTrackerService}} can see an 
> inconsistent state of nodes while they're being updated:
> {noformat}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: hostsReader 
> include:{172.26.12.198,node-7.hostname.com,node-2.hostname.com,node-5.hostname.com,172.26.8.205,node-8.hostname.com,172.26.23.76,172.26.22.223,node-6.hostname.com,172.26.9.218,node-4.hostname.com,node-3.hostname.com,172.26.13.167,node-9.hostname.com,172.26.21.221,172.26.10.219}
>  exclude:{node-6.hostname.com}
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.NodesListManager: Gracefully 
> decommission node node-6.hostname.com:8041 with state RUNNING
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: 
> Disallowed NodeManager nodeId: node-6.hostname.com:8041 node: 
> node-6.hostname.com
> 2018-06-18 21:00:17,576 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Put Node 
> node-6.hostname.com:8041 in DECOMMISSIONING.
> 2018-06-18 21:00:17,575 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yarn 
> IP=172.26.22.115OPERATION=refreshNodes  TARGET=AdminService 
> RESULT=SUCCESS
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Preserve 
> original total capability: 
> 2018-06-18 21:00:17,577 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: 
> node-6.hostname.com:8041 Node Transitioned from RUNNING to DECOMMISSIONING
> {noformat}
> When the decommissioning succeeds, there is no output logged from 
> {{ResourceTrackerService}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9899) Migration tool that help to generate CS config based on FS config [Phase 2]

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982464#comment-16982464
 ] 

Szilard Nemeth commented on YARN-9899:
--

Sure [~pbacsko], I agree with your proposal. Please go ahead and change the 
code accordingly.

> Migration tool that help to generate CS config based on FS config [Phase 2] 
> 
>
> Key: YARN-9899
> URL: https://issues.apache.org/jira/browse/YARN-9899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9899-001.patch, YARN-9899-002.patch, 
> YARN-9899-003.patch, YARN-9899-004.patch, YARN-9899-005.patch, 
> YARN-9899-006.patch
>
>
> YARN-9699 laid down the groundwork for a converter from FS to CS config.
> During the development of the converter, we came up with the following 
> things to fix. 
> 1. If we don't specify a mandatory option, we get a stacktrace like this 
> one:
>  
> {code:java}
> org.apache.commons.cli.MissingOptionException: Missing required option: o
>  at org.apache.commons.cli.Parser.checkRequiredOptions(Parser.java:299)
>  at org.apache.commons.cli.Parser.parse(Parser.java:231)
>  at org.apache.commons.cli.Parser.parse(Parser.java:85)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.converter.FSConfigToCSConfigArgumentHandler.parseAndConvert(FSConfigToCSConfigArgumentHandler.java:100)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1572){code}
>  
> We should provide a more concise and meaningful error message (no 
> stacktrace on the CLI, but the exception should be logged with its 
> stacktrace to the RM log).
> An explanation of the missing option is also required.
> 2. We may think about how to handle exceptions from commons CLI: 
> MissingArgumentException vs. MissingOptionException
> 3. We need to provide a -h / --help option for the CLI that prints all the 
> possible options / arguments.
> 4. Last but not least: We should move the CLI command to a more reasonable 
> place:
> As YARN-9699 implemented it, the command can be invoked like: 
> {code:java}
> /opt/hadoop/bin/yarn resourcemanager -convert-fs-configuration -y 
> /opt/hadoop/etc/hadoop/yarn-site.xml -f 
> /opt/hadoop/etc/hadoop/fair-scheduler.xml -r 
> ~systest/sample-rules-config.properties -o /tmp/fs-cs-output
> {code}
> This is problematic: if the YARN RM is already running, we need to stop it 
> in order to start the RM again with the conversion switch.
> 5. Add unit test coverage for {{QueuePlacementConverter}}
> 6. Close some feature gaps.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9444) YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize yarn.io/gpu as a valid resource

2019-11-26 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-9444:
-
Attachment: YARN-9444.003.patch

> YARN API ResourceUtils's getRequestedResourcesFromConfig doesn't recognize 
> yarn.io/gpu as a valid resource
> --
>
> Key: YARN-9444
> URL: https://issues.apache.org/jira/browse/YARN-9444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Minor
> Attachments: YARN-9444.001.patch, YARN-9444.002.patch, 
> YARN-9444.003.patch
>
>
> The original issue was that the jobclient test did not send the requested 
> resource type when it was specified on the command line, e.g.:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-tests.jar sleep 
> -Dmapreduce.reduce.resource.yarn.io/gpu=1  -m 10 -r 1 -mt 9
> {code}
> After some investigation, it turned out that this only affects resource 
> types whose names contain '.' characters, and the root cause is the regexp 
> in the getRequestedResourcesFromConfig method.
> {code:java}
> "^" + Pattern.quote(prefix) + "[^.]+$"
> {code}
> This regexp explicitly forbids any dots in the resource type name, which is 
> inconsistent with the default resource types for GPU and FPGA, yarn.io/gpu 
> and yarn.io/fpga respectively.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982445#comment-16982445
 ] 

Hudson commented on YARN-9937:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17693 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17693/])
YARN-9937. addendum: Add missing queue configs in (snemeth: rev 
448ffb12ecaf5b265d50ef18144950d4904f9ac0)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerQueueInfo.java


> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.2.001.patch, YARN-9937-branch-3.2.002.patch
>
>
> Below are the queue configs that are currently missing from the RMWebServices 
> scheduler endpoint; a sketch of querying that endpoint follows the list.
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime
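> For context, a minimal sketch of querying that endpoint (host, port, and 
> plain HTTP are assumptions; in practice the address comes from 
> yarn.resourcemanager.webapp.address):
> {code:java}
> import java.io.BufferedReader;
> import java.io.InputStreamReader;
> import java.net.HttpURLConnection;
> import java.net.URL;
> 
> public class SchedulerEndpointDemo {
>   public static void main(String[] args) throws Exception {
>     // Placeholder RM address; the REST path is the standard scheduler endpoint.
>     URL url = new URL("http://rm-host:8088/ws/v1/cluster/scheduler");
>     HttpURLConnection conn = (HttpURLConnection) url.openConnection();
>     conn.setRequestProperty("Accept", "application/json");
>     try (BufferedReader in = new BufferedReader(
>         new InputStreamReader(conn.getInputStream()))) {
>       // With this change, the JSON should also expose the four configs above.
>       in.lines().forEach(System.out::println);
>     }
>   }
> }
> {code}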






[jira] [Commented] (YARN-9937) Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo

2019-11-26 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982438#comment-16982438
 ] 

Szilard Nemeth commented on YARN-9937:
--

Hi [~prabhujoseph]!
Addendum patch looks good, committed to trunk!
[~sunilg], [~prabhujoseph]: What about backports to branch-3.2 / branch-3.1? I 
can see that [~prabhujoseph] added those patches but the original patch for 
YARN-9937 is only committed to trunk.

> Add missing queue configs in RMWebService#CapacitySchedulerQueueInfo
> 
>
> Key: YARN-9937
> URL: https://issues.apache.org/jira/browse/YARN-9937
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: Screen Shot 2019-10-28 at 8.54.53 PM.png, 
> YARN-9937-001.patch, YARN-9937-002.patch, YARN-9937-003.patch, 
> YARN-9937-004.patch, YARN-9937-addendum-01.patch, 
> YARN-9937-branch-3.2.001.patch, YARN-9937-branch-3.2.002.patch
>
>
> Below are the queue configs that are currently missing from the RMWebServices 
> scheduler endpoint.
> 1. Maximum Allocation
> 2. Queue ACLs
> 3. Queue Priority
> 4. Application Lifetime






[jira] [Commented] (YARN-9956) Improve connection error message for YARN ApiServerClient

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982320#comment-16982320
 ] 

Hadoop QA commented on YARN-9956:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 4 unchanged - 1 fixed = 5 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 26m 
29s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 54s{color} 
| {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.service.client.TestSecureApiServiceClient |
|   | hadoop.yarn.service.client.TestApiServiceClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9956 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986756/YARN-9956-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f98bce542a7e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c20512 |

[jira] [Commented] (YARN-9990) Testcase fails with "Insufficient configured threads: required=16 < max=10"

2019-11-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982267#comment-16982267
 ] 

Hadoop QA commented on YARN-9990:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
59s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9990 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986743/YARN-9990-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7997a2ed1462 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c20512 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25216/testReport/ |
| Max. process+thread count | 581 (vs. ulimit

[jira] [Resolved] (YARN-9988) Hadoop Native Build fails at hadoop-yarn-server-nodemanager

2019-11-26 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved YARN-9988.
-
Resolution: Won't Fix

> Hadoop Native Build fails at hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-9988
> URL: https://issues.apache.org/jira/browse/YARN-9988
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Priority: Blocker
> Attachments: console
>
>
> The Hadoop native build fails at hadoop-yarn-server-nodemanager. This was 
> observed in YARN-9781.
> {code:java}
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] CMakeFiles/Makefile2:131: recipe for target 
> 'CMakeFiles/container-executor.dir/all' failed
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] CMakeFiles/Makefile2:236: recipe for target 
> 'CMakeFiles/test-container-executor.dir/all' failed
> [WARNING] Linking CXX static library libgtest.a
> [WARNING] /opt/cmake/bin/cmake -P 
> CMakeFiles/gtest.dir/cmake_clean_target.cmake
> [WARNING] /opt/cmake/bin/cmake -E cmake_link_script 
> CMakeFiles/gtest.dir/link.txt --verbose=1
> [WARNING] /usr/bin/ar cq libgtest.a  
> CMakeFiles/gtest.dir/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc.o
> [WARNING] /usr/bin/ranlib libgtest.a
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/CMakeFiles
>   35
> [WARNING] [ 62%] Built target gtest
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] Makefile:76: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/../../../../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.a',
>  needed by 'target/usr/local/bin/test-container-executor'.  Stop.
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/../../../../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.a',
>  needed by 'target/usr/local/bin/container-executor'.  Stop.
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container-executor.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/test-container-executor.dir/all] Error 2
> [WARNING] make: *** [all] Error 2 {code}






[jira] [Commented] (YARN-9988) Hadoop Native Build fails at hadoop-yarn-server-nodemanager

2019-11-26 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16982261#comment-16982261
 ] 

Prabhu Joseph commented on YARN-9988:
-

Thanks [~ebadger]. Will close this Jira.

> Hadoop Native Build fails at hadoop-yarn-server-nodemanager
> ---
>
> Key: YARN-9988
> URL: https://issues.apache.org/jira/browse/YARN-9988
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Priority: Blocker
> Attachments: console
>
>
> The Hadoop native build fails at hadoop-yarn-server-nodemanager. This was 
> observed in YARN-9781.
> {code:java}
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] CMakeFiles/Makefile2:131: recipe for target 
> 'CMakeFiles/container-executor.dir/all' failed
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] CMakeFiles/Makefile2:236: recipe for target 
> 'CMakeFiles/test-container-executor.dir/all' failed
> [WARNING] Linking CXX static library libgtest.a
> [WARNING] /opt/cmake/bin/cmake -P 
> CMakeFiles/gtest.dir/cmake_clean_target.cmake
> [WARNING] /opt/cmake/bin/cmake -E cmake_link_script 
> CMakeFiles/gtest.dir/link.txt --verbose=1
> [WARNING] /usr/bin/ar cq libgtest.a  
> CMakeFiles/gtest.dir/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc.o
> [WARNING] /usr/bin/ranlib libgtest.a
> [WARNING] make[2]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] /opt/cmake/bin/cmake -E cmake_progress_report 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/CMakeFiles
>   35
> [WARNING] [ 62%] Built target gtest
> [WARNING] make[1]: Leaving directory 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native'
> [WARNING] Makefile:76: recipe for target 'all' failed
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/../../../../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.a',
>  needed by 'target/usr/local/bin/test-container-executor'.  Stop.
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[2]: *** No rule to make target 
> '/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/../../../../../hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.a',
>  needed by 'target/usr/local/bin/container-executor'.  Stop.
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container-executor.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/test-container-executor.dir/all] Error 2
> [WARNING] make: *** [all] Error 2 {code}






[jira] [Updated] (YARN-9956) Improve connection error message for YARN ApiServerClient

2019-11-26 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9956:

Attachment: YARN-9956-002.patch

> Improve connection error message for YARN ApiServerClient
> -
>
> Key: YARN-9956
> URL: https://issues.apache.org/jira/browse/YARN-9956
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-9956-001.patch, YARN-9956-002.patch
>
>
> In an HA environment, the yarn.resourcemanager.webapp.address configuration 
> is optional. ApiServiceClient may produce a confusing error message like this:
> {code}
> 19/10/30 20:13:42 INFO client.ApiServiceClient: Fail to connect to: 
> host1.example.com:8090
> 19/10/30 20:13:42 INFO client.ApiServiceClient: Fail to connect to: 
> host2.example.com:8090
> 19/10/30 20:13:42 INFO util.log: Logging initialized @2301ms
> 19/10/30 20:13:42 ERROR client.ApiServiceClient: Error: {}
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - LOOKING_UP_SERVER)
>   at 
> java.security.jgss/sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:771)
>   at 
> java.security.jgss/sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:266)
>   at 
> java.security.jgss/sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:196)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient$1.run(ApiServiceClient.java:125)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient$1.run(ApiServiceClient.java:105)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.generateToken(ApiServiceClient.java:105)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.getApiClient(ApiServiceClient.java:290)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.getApiClient(ApiServiceClient.java:271)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.actionLaunch(ApiServiceClient.java:416)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:589)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:125)
> Caused by: KrbException: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER
>   at 
> java.security.jgss/sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:73)
>   at 
> java.security.jgss/sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
>   at 
> java.security.jgss/sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
>   at 
> java.security.jgss/sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
>   at 
> java.security.jgss/sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
>   at 
> java.security.jgss/sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
>   at 
> java.security.jgss/sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:695)
>   ... 15 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
>   at 
> java.security.jgss/sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
>   at 
> java.security.jgss/sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
>   at 
> java.security.jgss/sun.security.krb5.internal.TGSRep.(TGSRep.java:60)
>   at 
> java.security.jgss/sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:55)
>   ... 21 more
> 19/10/30 20:13:42 ERROR client.ApiServiceClient: Fail to launch application: 
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.getApiClient(ApiServiceClient.java:293)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.getApiClient(ApiServiceClient.java:271)
>   at 
> org.apache.hadoop.yarn.service.client.ApiServiceClient.actionLaunch(ApiServiceClient.java:416)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:589)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:125)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>   at 
> org.apache.hadoop.security.UserGroupIn