[jira] [Commented] (YARN-6069) CORS support in timeline v2
[ https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904653#comment-15904653 ] Rohith Sharma K S commented on YARN-6069: - [~varun_saxena] We are facing an issue in trunk integration with ATSv2. Could we commit to trunk as well? > CORS support in timeline v2 > --- > > Key: YARN-6069 > URL: https://issues.apache.org/jira/browse/YARN-6069 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelinereader >Reporter: Sreenath Somarajapuram >Assignee: Rohith Sharma K S > Fix For: YARN-5355, YARN-5355-branch-2 > > Attachments: YARN-6069-YARN-5355.0001.patch, > YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch, > YARN-6069-YARN-5355.0004.patch, YARN-6069-YARN-5355.0005.patch > > > By default the browser prevents accessing resources from multiple domains. In > most cases the UIs would be loaded from a domain different from that of the > timeline server. Hence, without CORS support, it would be difficult for the > UIs to load data from timeline v2. > YARN-2277 should provide more info on the implementation. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
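The CORS support discussed above is conventionally enabled through Hadoop's CrossOriginFilter, which YARN-2277 introduced for the timeline server. A hedged sketch of the yarn-site.xml settings, assuming the property names follow the ATSv1 pattern; the exact keys for the v2 reader may differ, so verify against your Hadoop version's docs:

```xml
<!-- Assumed property names, modeled on the ATSv1 CORS support from
     YARN-2277; the UI origin below is hypothetical. -->
<property>
  <name>yarn.timeline-service.http-cross-origin.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.http.cross-origin.allowed-origins</name>
  <value>http://ui-host.example.com</value>
</property>
```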
[jira] [Updated] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.
[ https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6304: Description: When the elector elects the RM to become active, AdminService refreshes the following configurations even though the RM is already in the ACTIVE state. # refreshAdminAcls # refreshAll to update the configurations. But ideally these operations need NOT be done, and refreshing configurations on an ACTIVE RM can be skipped. The admin executes the refresh command separately if there are any config changes to be made. was: When elector elects RM to become active, even though RM is already in ACTIVE state AdminService does refresh on following configurations. #. refreshAdminAcls # refreshAll to update the configurations. I think we can skip refreshing configurations on ACTIVE RM. However admin executes refresh command separately which indicates him failure if any. > Skip rm.transitionToActive call to RM if RM is already active. > --- > > Key: YARN-6304 > URL: https://issues.apache.org/jira/browse/YARN-6304 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S > Attachments: YARN-6304.0001.patch > > > When the elector elects the RM to become active, AdminService refreshes the > following configurations even though the RM is already in the ACTIVE state. > # refreshAdminAcls > # refreshAll to update the configurations. > But ideally these operations need NOT be done, and refreshing configurations > on an ACTIVE RM can be skipped. The admin executes the refresh command > separately if there are any config changes to be made.
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904595#comment-15904595 ] Karthik Kambatla commented on YARN-6264: +1. Committing this. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch, > YARN-6264.006.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and the {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This is > especially likely when assigning a container to an AM just after preemption, > since the cluster often has only a single vcore available at that point.
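The rounding issue described in the report can be seen with a few lines of plain Java. This is a minimal sketch of the arithmetic only, not the actual `Resources#multiply` source; rounding up is shown as one possible fix direction, not necessarily the committed one:

```java
// Sketch of the YARN-6264 rounding problem: with 1 available vcore and a
// fractional maxAMShare, truncating the product yields a 0-vcore AM limit,
// so the AM can never be admitted.
public class MaxAMShareRounding {
    // Models the default round-down behavior attributed to Resources#multiply.
    static int multiplyRoundDown(int vcores, double ratio) {
        return (int) (vcores * ratio);
    }

    // One possible fix direction: round up so a nonzero share is never 0.
    static int multiplyRoundUp(int vcores, double ratio) {
        return (int) Math.ceil(vcores * ratio);
    }

    public static void main(String[] args) {
        int clusterVcores = 1;   // e.g. right after preemption
        double maxAMShare = 0.5; // any value strictly between 0 and 1
        System.out.println(multiplyRoundDown(clusterVcores, maxAMShare)); // prints 0
        System.out.println(multiplyRoundUp(clusterVcores, maxAMShare));   // prints 1
    }
}
```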
[jira] [Commented] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption
[ https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904584#comment-15904584 ] Tao Jie commented on YARN-5829: --- Thank you [~miklos.szeg...@cloudera.com] for sharing your thoughts. 1. It is easy to confuse the reservation we are talking about with the current reservation mechanism in the scheduler. IIRC, the purpose of the current reservation is to prevent starvation of requests with large resource demands, while our reservation here is to assign a container on a node to one exact application. 2. I am OK with either 1) reusing/extending the current reservation mechanism or 2) adding separate logic to handle the reservation for preemption. If it is 2), it would be better to find another name to avoid naming confusion. 3. {quote} 2. We also need to be careful with prioritizing reservations. For example, the way it works now is that a reservation takes priority over any other request. What happens if I have a preemption from a lower-priority request but there is demand from a higher-priority application? {quote} In my opinion, the reservation for preemption should have higher priority than the current reservation in allocation. If the starved application that triggered preemption is not satisfied as soon as possible, it will still be starved and will try to preempt more containers. However, a normal application that has reserved containers on nodes would only wait for a while when the resource is allocated to another starved application, and it makes sense that such an application would get higher priority once it becomes starved itself. > FS preemption should reserve a node before considering containers on it for > preemption > -- > > Key: YARN-5829 > URL: https://issues.apache.org/jira/browse/YARN-5829 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Karthik Kambatla >Assignee: Miklos Szegedi > > FS preemption evaluates nodes for preemption, and subsequently preempts > identified containers. If this node is not reserved for a specific > application, any other application could be allocated resources on this node. > Reserving the node for the starved application before preempting containers > would help avoid this.
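The proposal in this thread is to bind freed resources to the starved application by reserving the node before preempting. A toy model of that reserve-then-preempt ordering, in plain Java with illustrative names (none of this is FairScheduler's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the YARN-5829 idea: (1) reserve the node for the starved
// app, (2) preempt the victim's containers, (3) reject allocations from
// any other app until the reservation is fulfilled.
public class ReserveThenPreempt {
    static class Node {
        String reservedFor;                          // null = unreserved
        List<String> containers = new ArrayList<>();
    }

    static void preemptFor(Node node, String starvedApp, String victimApp) {
        node.reservedFor = starvedApp;               // reserve FIRST
        node.containers.removeIf(c -> c.startsWith(victimApp)); // then preempt
    }

    static boolean tryAllocate(Node node, String app) {
        if (node.reservedFor != null && !node.reservedFor.equals(app)) {
            return false;                            // other apps are rejected
        }
        node.containers.add(app + "-container");
        node.reservedFor = null;                     // reservation fulfilled
        return true;
    }

    public static void main(String[] args) {
        Node node = new Node();
        node.containers.add("victim-container-1");
        preemptFor(node, "starved", "victim");
        System.out.println(tryAllocate(node, "other"));   // false
        System.out.println(tryAllocate(node, "starved")); // true
    }
}
```

A real implementation would also need the timeout and release-on-kill handling raised later in the thread.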
[jira] [Commented] (YARN-6318) timeline service schema creator fails if executed from a remote machine
[ https://issues.apache.org/jira/browse/YARN-6318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904481#comment-15904481 ] Rohith Sharma K S commented on YARN-6318: - IIUC, hbase-site.xml is expected to be placed on the HBase client classpath in order to connect to a remote HBase cluster. What more are we expecting in such cases? > timeline service schema creator fails if executed from a remote machine > --- > > Key: YARN-6318 > URL: https://issues.apache.org/jira/browse/YARN-6318 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: 3.0.0-alpha1 >Reporter: Sangjin Lee > Labels: yarn-5355-merge-blocker > > The timeline service schema creator fails if executed from a remote machine > and the remote machine does not have the right {{hbase-site.xml}} file to > talk to that remote HBase cluster.
[jira] [Assigned] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption
[ https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi reassigned YARN-5829: Assignee: Miklos Szegedi (was: Karthik Kambatla) > FS preemption should reserve a node before considering containers on it for > preemption > -- > > Key: YARN-5829 > URL: https://issues.apache.org/jira/browse/YARN-5829 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Karthik Kambatla >Assignee: Miklos Szegedi > > FS preemption evaluates nodes for preemption, and subsequently preempts > identified containers. If this node is not reserved for a specific > application, any other application could be allocated resources on this node. > Reserving the node for the starved application before preempting containers > would help avoid this.
[jira] [Commented] (YARN-5829) FS preemption should reserve a node before considering containers on it for preemption
[ https://issues.apache.org/jira/browse/YARN-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904453#comment-15904453 ] Miklos Szegedi commented on YARN-5829: -- [~Tao Jie], I like the idea that you described in YARN-5636. I did some testing and prototyping on this Jira, and I am open to a wider solution as well. 1. What I have learned so far is that preemption conflicts with the following code: {code} isReservable(capability) && reserve(pendingAsk.getPerAllocationResource(), node, reservedContainer, type, schedulerKey) {code} Basically, the preempted application will have an excess demand and will reserve the lost resources. They will be assigned back to it when they become free. 2. We also need to be careful with prioritizing reservations. For example, the way it works now is that a reservation takes priority over any other request. What happens if I have a preemption from a lower-priority request but there is demand from a higher-priority application? 3. It is a great idea to have a timeout. We also need to take proper release of the reservation into consideration: if the app is killed, all reservations should be released. I do not see such code in the current reservation code for FS. > FS preemption should reserve a node before considering containers on it for > preemption > -- > > Key: YARN-5829 > URL: https://issues.apache.org/jira/browse/YARN-5829 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > > FS preemption evaluates nodes for preemption, and subsequently preempts > identified containers. If this node is not reserved for a specific > application, any other application could be allocated resources on this node. > Reserving the node for the starved application before preempting containers > would help avoid this.
[jira] [Commented] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904434#comment-15904434 ] Daniel Templeton commented on YARN-6300: Yeah, I accidentally pushed it into branch-2, but I made it work. :) > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: Selection_124.png, YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import.
[jira] [Updated] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-6300: --- Fix Version/s: 2.9.0 > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: Selection_124.png, YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import.
[jira] [Commented] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904329#comment-15904329 ] Yuanbo Liu commented on YARN-6300: -- [~templedf] Thanks for your commit! !Selection_124.png! I've seen this patch in branch-2, so I guess I don't need to provide a branch-2 patch anymore, right? > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Fix For: 3.0.0-alpha3 > > Attachments: Selection_124.png, YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import.
[jira] [Updated] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuanbo Liu updated YARN-6300: - Attachment: Selection_124.png > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Fix For: 3.0.0-alpha3 > > Attachments: Selection_124.png, YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import.
[jira] [Comment Edited] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903442#comment-15903442 ] Sunil G edited comment on YARN-2113 at 3/10/17 2:48 AM: Attaching v0 patch for user-limit preemption. This patch is done on top of YARN-2009, where we have already done the intra-queue preemption framework. This patch focuses on adding support for user-limit preemption when any user is under-served and some other users are abusing the limit. Basic test cases are added for the preemption module. However, I will be adding some more test cases on the scheduler side to ensure that the UL computation is also correct. [~leftnoteasy] and [~eepayne], please help to share some early feedback. was (Author: sunilg): Attaching v0 patch for user-limit preemption. This patch is done on top of YARN-2009 where we have already done intra-queue preemption framework. This patch focus on adding support to do user-limit preemption if any is used is under-served, and some other users are abusing the limit. Basic test cases are added from preemption module. However I will be adding some more test cases from scheduler side to ensure that UL computation is also correct. [~leftnoteasy] and [~eepayne], please help to share some early feedback. > Add cross-user preemption within CapacityScheduler's leaf-queue > --- > > Key: YARN-2113 > URL: https://issues.apache.org/jira/browse/YARN-2113 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-2113.v0.patch > > > Preemption today only works across queues and moves around resources across > queues per demand and usage. We should also have user-level preemption within > a queue, to balance capacity across users in a predictable manner.
[jira] [Commented] (YARN-6289) Fail to achieve data locality when running MapReduce and Spark on HDFS
[ https://issues.apache.org/jira/browse/YARN-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904296#comment-15904296 ] Huangkaixuan commented on YARN-6289: Thanks [~leftnoteasy] 1. MR can get the locations of a block through FileSystem.getFileBlockLocations. MR applications usually use FileSystem.getFileBlockLocations to compute splits, but I haven't seen it used in the default YARN scheduling policy (FIFO). 2. All nodes in the experiment are in the same rack and all tasks are rack-local, so rack awareness does not affect the experimental results. 3. The task failed to achieve data locality even though no other job was running on the cluster at the same time. It seems that YARN didn't attempt to allocate containers with data locality in the default scheduling mode. > Fail to achieve data locality when running MapReduce and Spark on HDFS > - > > Key: YARN-6289 > URL: https://issues.apache.org/jira/browse/YARN-6289 > Project: Hadoop YARN > Issue Type: Bug > Components: distributed-scheduling > Environment: Hardware configuration > CPU: 2 x Intel(R) Xeon(R) E5-2620 v2 @ 2.10GHz /15M Cache 6-Core 12-Thread > Memory: 128GB Memory (16x8GB) 1600MHz > Disk: 600GBx2 3.5-inch with RAID-1 > Network bandwidth: 968Mb/s > Software configuration > Spark-1.6.2 Hadoop-2.7.1 >Reporter: Huangkaixuan > Attachments: Hadoop_Spark_Conf.zip, YARN-DataLocality.docx > > > When running a simple wordcount experiment on YARN, I noticed that the task > failed to achieve data locality, even though there is no other job running on > the cluster at the same time. The experiment was done in a 7-node (1 master, > 6 data nodes/node managers) cluster and the input of the wordcount job (both > Spark and MapReduce) is a single-block file in HDFS which is two-way > replicated (replication factor = 2). I ran wordcount on YARN for 10 times. > The results show that only 30% of tasks can achieve data locality, which > seems like the result of a random placement of tasks. The experiment details > are in the attachment, and feel free to reproduce the experiments.
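The reported ~30% locality rate is consistent with the "random placement" reading: with a single-block file replicated on 2 of 6 worker nodes, a locality-oblivious scheduler places the task node-local with probability r/n = 2/6 ≈ 33%. A quick back-of-the-envelope check in plain Java:

```java
// Sanity check of the YARN-6289 numbers: under random placement, the
// chance a task lands on one of the r nodes holding a block replica,
// out of n worker nodes, is simply r/n.
public class RandomPlacementLocality {
    static double nodeLocalProbability(int replicas, int nodes) {
        return (double) replicas / nodes;
    }

    public static void main(String[] args) {
        // 2 replicas, 6 node managers -> ~0.33, close to the observed 3/10.
        System.out.printf("P(node-local) = %.2f%n", nodeLocalProbability(2, 6));
    }
}
```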
[jira] [Commented] (YARN-6301) Fair scheduler docs should explain the meaning of setting a queue's weight to zero
[ https://issues.apache.org/jira/browse/YARN-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904277#comment-15904277 ] Tao Jie commented on YARN-6301: --- Thank you [~templedf]. It is almost clear to me now. One thing I'd like to make clear is that the zero-weight behavior of an "ad hoc queue" only applies among its sibling queues. If a queue under a different parent queue has demand for resources, the "ad hoc queue" can still receive resources through its own parent queue's fair share. If I am wrong, please correct me. > Fair scheduler docs should explain the meaning of setting a queue's weight to > zero > -- > > Key: YARN-6301 > URL: https://issues.apache.org/jira/browse/YARN-6301 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Tao Jie > Labels: docs > Attachments: YARN-6301.001.patch, YARN-6301.002.patch > >
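The sibling-only scope of a zero weight can be sketched with the basic weight-proportional division. This is a deliberately simplified model (the real Fair Scheduler also honors min/max resources and demand): among siblings, a zero-weight queue's steady fair share is 0, but a hierarchy's division happens level by level, so queues under a different parent cannot take that parent's share away.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified weight-proportional fair-share division among SIBLING
// queues, ignoring demand and min/max constraints.
public class WeightedFairShare {
    static Map<String, Double> fairShares(Map<String, Double> weights, double clusterRes) {
        double total = 0;
        for (double w : weights.values()) {
            total += w;
        }
        Map<String, Double> shares = new HashMap<>();
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            // A zero-weight sibling gets a zero steady fair share.
            shares.put(e.getKey(), total == 0 ? 0.0 : clusterRes * e.getValue() / total);
        }
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new HashMap<>();
        weights.put("adhoc", 0.0); // hypothetical zero-weight "ad hoc" queue
        weights.put("a", 1.0);
        weights.put("b", 1.0);
        System.out.println(fairShares(weights, 100.0)); // adhoc=0, a=50, b=50
    }
}
```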
[jira] [Commented] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904269#comment-15904269 ] Daniel Templeton commented on YARN-1047: Committed branch-2 patch. > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-1047.001.patch, YARN-1047.branch-2.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric.
[jira] [Updated] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-1047: --- Fix Version/s: 2.9.0 > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-1047.001.patch, YARN-1047.branch-2.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric.
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904264#comment-15904264 ] Hadoop QA commented on YARN-6264: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 25s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 95m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6264 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857138/YARN-6264.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 423a8ff3d64f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / e96a0b8 | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15222/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15222/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > AM not launched when a single vcore is available on the cluster
[jira] [Commented] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904262#comment-15904262 ] Hudson commented on YARN-1047: -- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11383 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11383/]) YARN-1047. Expose # of pre-emptions as a queue counter (Contributed by (templedf: rev 846a0cd678fba743220f28cef844ac9011a3f934) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerPreemption.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Fix For: 3.0.0-alpha3 > > Attachments: YARN-1047.001.patch, YARN-1047.branch-2.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric.
[jira] [Updated] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-1047: --- Attachment: YARN-1047.branch-2.001.patch > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Fix For: 3.0.0-alpha3 > > Attachments: YARN-1047.001.patch, YARN-1047.branch-2.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric.
[jira] [Created] (YARN-6318) timeline service schema creator fails if executed from a remote machine
Sangjin Lee created YARN-6318: - Summary: timeline service schema creator fails if executed from a remote machine Key: YARN-6318 URL: https://issues.apache.org/jira/browse/YARN-6318 Project: Hadoop YARN Issue Type: Sub-task Components: timelineserver Affects Versions: 3.0.0-alpha1 Reporter: Sangjin Lee The timeline service schema creator fails if executed from a remote machine and the remote machine does not have the right {{hbase-site.xml}} file to talk to that remote HBase cluster.
[jira] [Updated] (YARN-6301) Fair scheduler docs should explain the meaning of setting a queue's weight to zero
[ https://issues.apache.org/jira/browse/YARN-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Jie updated YARN-6301: -- Attachment: YARN-6301.002.patch > Fair scheduler docs should explain the meaning of setting a queue's weight to > zero > -- > > Key: YARN-6301 > URL: https://issues.apache.org/jira/browse/YARN-6301 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Tao Jie > Labels: docs > Attachments: YARN-6301.001.patch, YARN-6301.002.patch
[jira] [Commented] (YARN-6246) Identifying starved apps does not need the scheduler writelock
[ https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904237#comment-15904237 ] Tao Jie commented on YARN-6246: --- Thank you [~kasha] for working on this. It seems to me that the starvation check only applies to leaf queues; could we simply iterate over the leaf queues via {{queueMgr.getLeafQueues()}} rather than taking a top-down approach? That might be more efficient. > Identifying starved apps does not need the scheduler writelock > -- > > Key: YARN-6246 > URL: https://issues.apache.org/jira/browse/YARN-6246 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-6246.001.patch > > > Currently, the starvation checks are done holding the scheduler writelock. We > are probably better off doing this outside.
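Tao Jie's point above — that a top-down walk of the queue tree visits exactly the leaf queues a flat {{queueMgr.getLeafQueues()}} pass would — can be illustrated with a toy queue tree. The class below is a hypothetical stand-in, not the FairScheduler's actual queue types.

```java
import java.util.ArrayList;
import java.util.List;

// Toy queue tree; only leaf queues matter for the starvation check, so a
// maintained flat list of leaves and a top-down traversal cover the same
// queues. Hypothetical sketch, not FairScheduler code.
class QueueSketch {
    final String name;
    final List<QueueSketch> children = new ArrayList<>();

    QueueSketch(String name) { this.name = name; }

    boolean isLeaf() { return children.isEmpty(); }

    // Top-down traversal, acting (here: collecting names) only at leaves.
    String leavesTopDown() {
        if (isLeaf()) {
            return name;
        }
        StringBuilder sb = new StringBuilder();
        for (QueueSketch child : children) {
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(child.leavesTopDown());
        }
        return sb.toString();
    }
}
```

Since both routes end up at the same leaves, iterating a flat leaf list avoids the recursive descent, which is the efficiency the comment suggests.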
[jira] [Commented] (YARN-6314) Potential infinite redirection on YARN log redirection web service
[ https://issues.apache.org/jira/browse/YARN-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904199#comment-15904199 ] Xuan Gong commented on YARN-6314: - The test case failure is not related. > Potential infinite redirection on YARN log redirection web service > -- > > Key: YARN-6314 > URL: https://issues.apache.org/jira/browse/YARN-6314 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6314.1.patch > > > In YARN-6113, we added a re-direct NM web service to get container logs, > which could cause a potential infinite redirection. > It can happen when: > * We call the AHS web service to get a running/finished AM container log for a > running application. > * The AHS web service re-directs the request to the specific NM, given that the > application is still running, and the NM handles the request. > * If the log file we requested has already been aggregated and deleted from the > NM, the NM re-directs the request back to the AHS. > In this case, we would repeat steps 2 and 3 infinitely.
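The bounce described in the steps above (AHS redirects to the NM while the app runs; the NM redirects back once the log is aggregated and deleted) can be simulated in a few lines. The one-shot guard flag here is purely hypothetical — it illustrates how breaking the cycle changes the behavior and is not the actual YARN-6314 fix.

```java
// Hypothetical simulation of the AHS <-> NM redirect cycle described
// above; none of this is real YARN code.
class RedirectLoopSketch {
    // Follow redirects between "AHS" and "NM" until a response is served,
    // returning the number of redirect hops taken (capped at 10).
    static int hopsUntilServed(boolean useGuard) {
        String server = "AHS";
        boolean redirectedOnce = false;
        int hops = 0;
        while (hops < 10) {                 // safety cap for the sketch
            if (server.equals("AHS")) {
                if (useGuard && redirectedOnce) {
                    return hops;            // guard: stop bouncing, serve here
                }
                server = "NM";              // app still running: send to the NM
                redirectedOnce = true;
                hops++;
            } else {
                server = "AHS";             // log aggregated and deleted: back to AHS
                hops++;
            }
        }
        return hops;                        // cycle never terminated on its own
    }
}
```

Without the guard the simulation only stops at the safety cap; with it, the second arrival at the AHS is served instead of re-redirected.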
[jira] [Commented] (YARN-6313) yarn logs cli does not provide logs for a completed container even when the nm address is provided
[ https://issues.apache.org/jira/browse/YARN-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904198#comment-15904198 ] Xuan Gong commented on YARN-6313: - The test case failure is not related. > yarn logs cli does not provide logs for a completed container even when the > nm address is provided > -- > > Key: YARN-6313 > URL: https://issues.apache.org/jira/browse/YARN-6313 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Siddharth Seth >Assignee: Xuan Gong > Attachments: YARN-6313.1.patch > > > Running app. Completed container. > Provide the appId, containerId, nodeId - yarn logs does not return the logs. > Specific use case: Long Running app. One daemon crashed. Logs are not > accessible without shutting down the app.
[jira] [Commented] (YARN-6310) OutputStreams in AggregatedLogFormat.LogWriter can be left open upon exceptions
[ https://issues.apache.org/jira/browse/YARN-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904192#comment-15904192 ] Hadoop QA commented on YARN-6310: -
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 14m 10s | trunk passed |
| +1 | compile | 0m 28s | trunk passed |
| +1 | checkstyle | 0m 21s | trunk passed |
| +1 | mvnsite | 0m 29s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 1m 0s | trunk passed |
| +1 | javadoc | 0m 27s | trunk passed |
| +1 | mvninstall | 0m 31s | the patch passed |
| +1 | compile | 0m 26s | the patch passed |
| +1 | javac | 0m 26s | the patch passed |
| +1 | checkstyle | 0m 21s | the patch passed |
| +1 | mvnsite | 0m 29s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 6s | the patch passed |
| +1 | javadoc | 0m 25s | the patch passed |
| +1 | unit | 2m 34s | hadoop-yarn-common in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 25m 16s | |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6310 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857143/YARN-6310.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux a10212a61311 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e96a0b8 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15223/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15223/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> OutputStreams in AggregatedLogFormat.LogWriter can be left open upon > exceptions > --- > > Key: YARN-6310 > URL: https://issues.apache.org/jira/browse/YARN-6310 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904194#comment-15904194 ] Joep Rottinghuis commented on YARN-5269: After looking at the code I am indeed a little surprised that we're not doing this properly. All the plumbing seems correct on the client side: TimelineV2ClientImpl#putEntities and TimelineV2ClientImpl#putEntitiesAsync correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... with the correct argument. This argument does make it into the params, and on the server side TimelineCollectorWebService#putEntities correctly pulls the async parameter from the REST call. See line 156:
{code}
boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
{code}
However, this is where the problem starts. The web service simply calls TimelineCollector#putEntities and ignores the value of isAsync. It should instead call TimelineCollector#putEntitiesAsync, which is currently not implemented. putEntities should call putEntitiesAsync and then call writer.flush().

The flush on close and the periodic flush are more a matter of avoiding data loss: the flush on close covers the case where sync is never called, and the periodic flush guards against data from slow writers sitting in buffers for a long time, which would expose us to loss if the collector crashes with data in its buffers. The size-based flush is a separate concern, to avoid blowing up the memory footprint. The spooling behavior is also somewhat separate.

We have two separate methods in our API, putEntities and putEntitiesAsync, and they should have different behavior beyond waiting for the request to be sent. I can file a bug separate from this one (which deals with exception handling) to tackle the sync vs. async nature. During the meeting today I was thinking about the HBase writer, which has a flush that definitely blocks until data is flushed to HBase (ignoring the spooling for the moment).
> Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exceptions from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and whether we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync.
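The contract Joep describes — the synchronous putEntities reusing the async path and then blocking on a flush, with the server routing on the parsed isAsync flag instead of ignoring it — can be sketched as follows. This is a minimal stand-in, not the actual TimelineCollector/TimelineCollectorWebService code: the class and its in-memory "store" are hypothetical, and only the method names (putEntities, putEntitiesAsync, flush) and the isAsync parsing mirror what the comment quotes.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the control flow described above; everything here is
// a stand-in, not real timeline service code.
class CollectorSketch {
    private final List<String> buffer = new ArrayList<>();  // pending writes
    private final List<String> store = new ArrayList<>();   // "flushed" writes

    // Async path: buffer the entity and return immediately.
    void putEntitiesAsync(String entity) {
        buffer.add(entity);
    }

    // Sync path: reuse the async path, then block on a flush so the caller
    // knows the data reached the backing store (per the comment above).
    void putEntities(String entity) {
        putEntitiesAsync(entity);
        flush();
    }

    // Stand-in for writer.flush(): drain the buffer to the store.
    void flush() {
        store.addAll(buffer);
        buffer.clear();
    }

    // Server side: route on the parsed async flag instead of ignoring it,
    // using the same parsing the web service already does.
    void dispatch(String entity, String asyncParam) {
        boolean isAsync = asyncParam != null
            && asyncParam.trim().equalsIgnoreCase("true");
        if (isAsync) {
            putEntitiesAsync(entity);
        } else {
            putEntities(entity);
        }
    }

    int stored() { return store.size(); }
    int buffered() { return buffer.size(); }
}
```

Note that in this sketch a synchronous call also flushes any previously buffered async writes, which matches the flush semantics discussed in the comment.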
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904185#comment-15904185 ] Hudson commented on YARN-5669: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11381 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11381/]) YARN-5669. Add support for docker pull command (Contributed by (sidharta: rev e96a0b8c92b46aed7c1f5ccec13abc6c1043edba) * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerPullCommand.java * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerPullCommand.java > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: luhuichun > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-5669.001.patch > > > We need to add docker pull to support Docker image localization. Refer to > YARN-3854 for the details.
[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904172#comment-15904172 ] Hadoop QA commented on YARN-6042: -
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 30s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 1m 58s | Maven dependency ordering for branch |
| +1 | mvninstall | 12m 47s | trunk passed |
| +1 | compile | 13m 30s | trunk passed |
| +1 | checkstyle | 2m 22s | trunk passed |
| +1 | mvnsite | 2m 5s | trunk passed |
| +1 | mvneclipse | 0m 50s | trunk passed |
| +1 | findbugs | 3m 1s | trunk passed |
| +1 | javadoc | 1m 33s | trunk passed |
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 13s | the patch passed |
| +1 | compile | 10m 20s | the patch passed |
| +1 | javac | 10m 20s | the patch passed |
| -0 | checkstyle | 4m 28s | root: The patch generated 1 new + 255 unchanged - 1 fixed = 256 total (was 256) |
| +1 | mvnsite | 3m 52s | the patch passed |
| +1 | mvneclipse | 0m 57s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 3m 12s | the patch passed |
| +1 | javadoc | 1m 40s | the patch passed |
| +1 | unit | 8m 45s | hadoop-common in the patch passed. |
| -1 | unit | 44m 39s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 143m 51s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6042 |
| GITHUB PR | https://github.com/apache/hadoop/pull/193 |
| Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall findbugs checkstyle |
| uname | Linux 24ca36814643 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 822a74f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15219/artifact/patchprocess/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15219/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15219/testReport/ |
| module
[jira] [Commented] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904171#comment-15904171 ] Hadoop QA commented on YARN-1047: -
| (/) *+1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 13m 15s | trunk passed |
| +1 | compile | 0m 31s | trunk passed |
| +1 | checkstyle | 0m 25s | trunk passed |
| +1 | mvnsite | 0m 33s | trunk passed |
| +1 | mvneclipse | 0m 14s | trunk passed |
| +1 | findbugs | 0m 58s | trunk passed |
| +1 | javadoc | 0m 29s | trunk passed |
| +1 | mvninstall | 0m 33s | the patch passed |
| +1 | compile | 0m 28s | the patch passed |
| +1 | javac | 0m 28s | the patch passed |
| -0 | checkstyle | 0m 22s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 1 new + 50 unchanged - 0 fixed = 51 total (was 50) |
| +1 | mvnsite | 0m 30s | the patch passed |
| +1 | mvneclipse | 0m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 4s | the patch passed |
| +1 | javadoc | 0m 18s | the patch passed |
| +1 | unit | 40m 14s | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 62m 1s | |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-1047 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857121/YARN-1047.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 77c747c53280 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 822a74f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15221/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15221/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15221/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL:
[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
[ https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904168#comment-15904168 ] Jonathan Hung commented on YARN-5948: - Thanks, committed to YARN-5734. > Implement MutableConfigurationManager for handling storage into configuration > store > --- > > Key: YARN-5948 > URL: https://issues.apache.org/jira/browse/YARN-5948 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Fix For: YARN-5734 > > Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, > YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, > YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch, > YARN-5948-YARN-5734.007.patch, YARN-5948-YARN-5734.008.patch > > > The MutableConfigurationManager will take REST calls with desired client > configuration changes and call YarnConfigurationStore methods to store these > changes in the backing store.
[jira] [Commented] (YARN-6313) yarn logs cli does not provide logs for a completed container even when the nm address is provided
[ https://issues.apache.org/jira/browse/YARN-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904156#comment-15904156 ] Hadoop QA commented on YARN-6313: -
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 9s | Maven dependency ordering for branch |
| +1 | mvninstall | 12m 54s | trunk passed |
| +1 | compile | 5m 42s | trunk passed |
| +1 | checkstyle | 0m 57s | trunk passed |
| +1 | mvnsite | 1m 13s | trunk passed |
| +1 | mvneclipse | 0m 52s | trunk passed |
| +1 | findbugs | 1m 44s | trunk passed |
| +1 | javadoc | 1m 4s | trunk passed |
| 0 | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 5m 21s | the patch passed |
| +1 | javac | 5m 21s | the patch passed |
| -0 | checkstyle | 1m 7s | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 107 unchanged - 0 fixed = 108 total (was 107) |
| +1 | mvnsite | 1m 12s | the patch passed |
| +1 | mvneclipse | 0m 46s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 1m 59s | the patch passed |
| +1 | javadoc | 1m 2s | the patch passed |
| +1 | unit | 2m 29s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 19m 11s | hadoop-yarn-client in the patch failed. |
| +1 | asflicense | 0m 38s | The patch does not generate ASF License warnings. |
| | | 68m 25s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMClient |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6313 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857118/YARN-6313.1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 3edd62c15046 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 822a74f |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15220/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15220/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15220/test
[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
[ https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904152#comment-15904152 ] Hadoop QA commented on YARN-5948: -
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 31s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 1m 4s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 43s | YARN-5734 passed |
| +1 | compile | 9m 20s | YARN-5734 passed |
| +1 | checkstyle | 1m 24s | YARN-5734 passed |
| +1 | mvnsite | 2m 56s | YARN-5734 passed |
| +1 | mvneclipse | 1m 47s | YARN-5734 passed |
| +1 | findbugs | 4m 39s | YARN-5734 passed |
| +1 | javadoc | 2m 28s | YARN-5734 passed |
| 0 | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 54s | the patch passed |
| +1 | compile | 8m 43s | the patch passed |
| +1 | javac | 8m 43s | the patch passed |
| +1 | checkstyle | 1m 17s | the patch passed |
| +1 | mvnsite | 2m 51s | the patch passed |
| +1 | mvneclipse | 1m 33s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | findbugs | 5m 4s | the patch passed |
| +1 | javadoc | 2m 20s | the patch passed |
| +1 | unit | 0m 51s | hadoop-yarn-api in the patch passed. |
| +1 | unit | 3m 15s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 57m 20s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 1m 36s | The patch does not generate ASF License warnings. |
| | | 140m 3s | |
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5948 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857094/YARN-5948-YARN-5734.008.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 43e84fb2fec6 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-5734 / 01ea2f3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15216/artifact/patchprocess/patch-unit-hadoop-y
[jira] [Updated] (YARN-6310) OutputStreams in AggregatedLogFormat.LogWriter can be left open upon exceptions
[ https://issues.apache.org/jira/browse/YARN-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-6310: - Attachment: YARN-6310.01.patch Uploaded a patch that replaces the explicit close calls with try clauses. > OutputStreams in AggregatedLogFormat.LogWriter can be left open upon > exceptions > --- > > Key: YARN-6310 > URL: https://issues.apache.org/jira/browse/YARN-6310 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-6310.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
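Assuming the try clauses in the patch are try-with-resources, the before/after shape of the fix can be sketched as follows; the class and method names here are illustrative, not the actual AggregatedLogFormat.LogWriter code:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class TryWithResourcesSketch {
    // Before: if write() throws, close() is never reached and the stream
    // (and its file descriptor) leaks.
    static void writeUnsafely(String path, byte[] data) throws IOException {
        OutputStream out = new FileOutputStream(path);
        out.write(data);
        out.close();
    }

    // After: try-with-resources closes the stream even when write() throws.
    static void writeSafely(String path, byte[] data) throws IOException {
        try (OutputStream out = new FileOutputStream(path)) {
            out.write(data);
        }
    }
}
```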
[jira] [Commented] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904139#comment-15904139 ] Daniel Templeton commented on YARN-1047: LGTM. I'll come back and do a more careful review in an hour and presumably commit it. > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Attachments: YARN-1047.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sidharta Seethana updated YARN-5669: Fix Version/s: 3.0.0-alpha3 2.9.0 > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: luhuichun > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-5669.001.patch > > > We need to add docker pull to support Docker image localization. Refer to > YARN-3854 for the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904134#comment-15904134 ] Sidharta Seethana commented on YARN-5669: - Committed to trunk and branch-2. Thanks, [~luhuichun] ! > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: luhuichun > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-5669.001.patch > > > We need to add docker pull to support Docker image localization. Refer to > YARN-3854 for the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904125#comment-15904125 ] Sidharta Seethana commented on YARN-5669: - The javac warning and the checkstyle issue don't seem to be relevant to this specific patch. > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: luhuichun > Attachments: YARN-5669.001.patch > > > We need to add docker pull to support Docker image localization. Refer to > YARN-3854 for the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Attachment: YARN-6264.006.patch > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch, > YARN-6264.006.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and the {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This > can happen frequently when assigning a container to an AM just after > preemption, since it is more likely that only one vcore is left on the > cluster right after preemption. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
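The rounding behavior described in the issue can be reproduced in a few lines; this is an illustrative sketch of the arithmetic, not the actual {{Resources#multiply}} implementation:

```java
// Mimics the effect described in YARN-6264: the product of the cluster
// vcores and maxAMShare is cast down to an integer.
class MaxAmShareRounding {
    static int maxAmVcores(int clusterVcores, double maxAMShare) {
        return (int) (clusterVcores * maxAMShare); // truncates toward zero
    }
}
```

With a single vcore on the cluster (e.g. right after preemption) and any maxAMShare strictly between 0 and 1, the AM is allowed zero vcores, so it can never launch.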
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Attachment: (was: YARN-6264.006.patch) > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and the {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This > can happen frequently when assigning a container to an AM just after > preemption, since it is more likely that only one vcore is left on the > cluster right after preemption. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
[ https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904101#comment-15904101 ] Wangda Tan commented on YARN-5948: -- +1, thanks [~jhung]. > Implement MutableConfigurationManager for handling storage into configuration > store > --- > > Key: YARN-5948 > URL: https://issues.apache.org/jira/browse/YARN-5948 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, > YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, > YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch, > YARN-5948-YARN-5734.007.patch, YARN-5948-YARN-5734.008.patch > > > The MutableConfigurationManager will take REST calls with desired client > configuration changes and call YarnConfigurationStore methods to store these > changes in the backing store. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
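The description above outlines a simple flow: the manager receives desired configuration changes from the REST layer and delegates persistence to the backing store before applying them. A minimal, hypothetical sketch of that flow (the ConfigurationStore interface and all method names are assumptions, not the actual YarnConfigurationStore API):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the backing store; logMutation is an assumed method name.
interface ConfigurationStore {
    void logMutation(Map<String, String> updates);
}

class MutableConfigurationManagerSketch {
    private final ConfigurationStore store;
    private final Map<String, String> current = new HashMap<>();

    MutableConfigurationManagerSketch(ConfigurationStore store) {
        this.store = store;
    }

    // Called with the client's desired changes (e.g. from a REST call).
    void applyChanges(Map<String, String> updates) {
        store.logMutation(updates); // persist first
        current.putAll(updates);    // then apply in memory
    }

    String get(String key) {
        return current.get(key);
    }
}
```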
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Attachment: YARN-6264.006.patch Uploaded patch v6 to fix the unit test failure. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch, > YARN-6264.006.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and the {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This > can happen frequently when assigning a container to an AM just after > preemption, since it is more likely that only one vcore is left on the > cluster right after preemption. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904090#comment-15904090 ] Hadoop QA commented on YARN-6264: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 99m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6264 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857095/YARN-6264.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f439936bd089 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 822a74f | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15217/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15217/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904083#comment-15904083 ] Hadoop QA commented on YARN-6264: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 93m 29s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6264 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857095/YARN-6264.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1b101af58fc8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 822a74f | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15218/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15218/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U:
[jira] [Updated] (YARN-1047) Expose # of pre-emptions as a queue counter
[ https://issues.apache.org/jira/browse/YARN-1047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-1047: --- Attachment: YARN-1047.001.patch Straight-forward patch. > Expose # of pre-emptions as a queue counter > --- > > Key: YARN-1047 > URL: https://issues.apache.org/jira/browse/YARN-1047 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.0.2-alpha >Reporter: Philip Zeyliger >Assignee: Karthik Kambatla > Attachments: YARN-1047.001.patch > > > Since YARN supports pre-empting containers, a given queue should expose the > number of containers it has had pre-empted as a metric. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6288) Refactor AppLogAggregatorImpl#uploadLogsForContainers
[ https://issues.apache.org/jira/browse/YARN-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904039#comment-15904039 ] Haibo Chen commented on YARN-6288: -- Looks like in LogWriter() we create a file and write to it, which does not seem like good practice. Maybe we can pull that part into a LogWriter.initialize() method and then make LogWriter Closeable? > Refactor AppLogAggregatorImpl#uploadLogsForContainers > - > > Key: YARN-6288 > URL: https://issues.apache.org/jira/browse/YARN-6288 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: supportability > Attachments: YARN-6288.01.patch > > > In AppLogAggregatorImpl.java, if an exception occurs in writing a container log > to the remote filesystem, the exception is neither caught nor handled. > https://github.com/apache/hadoop/blob/f59e36b4ce71d3019ab91b136b6d7646316954e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java#L398 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
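The refactor Haibo suggests can be sketched as follows: the constructor does no I/O, file creation moves to an initialize() method, and the writer implements Closeable so callers can use try-with-resources. All names here are illustrative, not the actual AggregatedLogFormat.LogWriter API:

```java
import java.io.Closeable;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class LogWriterSketch implements Closeable {
    private OutputStream out;

    // The constructor is I/O-free, so merely constructing the writer
    // can never leak a stream.
    void initialize(String path) throws IOException {
        out = new FileOutputStream(path);
    }

    void append(byte[] record) throws IOException {
        out.write(record);
    }

    @Override
    public void close() throws IOException {
        if (out != null) {
            out.close();
        }
    }
}
```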
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904034#comment-15904034 ] Hadoop QA commented on YARN-6264: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 29s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 34s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6264 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857092/YARN-6264.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b0cc7b111f8c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 822a74f | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15215/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15215/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/h
[jira] [Updated] (YARN-6313) yarn logs cli does not provide logs for a completed container even when the nm address is provided
[ https://issues.apache.org/jira/browse/YARN-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-6313: Attachment: YARN-6313.1.patch > yarn logs cli does not provide logs for a completed container even when the > nm address is provided > -- > > Key: YARN-6313 > URL: https://issues.apache.org/jira/browse/YARN-6313 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Siddharth Seth >Assignee: Xuan Gong > Attachments: YARN-6313.1.patch > > > Running app. Completed container. > Provide the appId, containerId, nodeId - yarn logs does not return the logs. > Specific use case: Long Running app. One daemon crashed. Logs are not > accessible without shutting down the app. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904025#comment-15904025 ] Haibo Chen commented on YARN-5269: -- Thanks for reminding me of that [~vrushalic]! The time-based flush, in addition to the inherent size-based flush, would definitely alleviate the issue. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15904005#comment-15904005 ] Vrushali C commented on YARN-5269: -- Let me get back to you on the questions [~haibochen]. As such, we have a flush done every 1 min by the writerFlusher defined in TimelineCollectorManager. The default flush interval is 1 min, configurable via the YARN config setting TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS. Also, whenever a per-app collector shuts down, serviceStop invokes a flush before closing the table connection. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
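[Editorial note] The time-based flush described above can be sketched as a scheduled background task. This is a minimal, hypothetical model — class and method names are stand-ins, not the actual TimelineCollectorManager code:

```java
// A minimal sketch (not the actual Hadoop code) of the time-based flush the
// comment describes: a background task invokes the writer's flush() on a
// fixed interval, defaulting to 60 seconds, and a final flush runs on stop.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class WriterFlusherSketch {
  // Mirrors the 1 min default behind TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS.
  static final long DEFAULT_FLUSH_INTERVAL_SECONDS = 60;

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  final AtomicInteger flushCount = new AtomicInteger();

  void start(long intervalSeconds) {
    // flushCount::incrementAndGet stands in for writer.flush().
    scheduler.scheduleAtFixedRate(flushCount::incrementAndGet,
        intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
  }

  void stop() {
    // Mirrors serviceStop(): flush once more before shutting down.
    flushCount.incrementAndGet();
    scheduler.shutdownNow();
  }
}
```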
[jira] [Assigned] (YARN-6284) hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable
[ https://issues.apache.org/jira/browse/YARN-6284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton reassigned YARN-6284: -- Assignee: Laura Adams (was: Daniel Templeton) > hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable > -- > > Key: YARN-6284 > URL: https://issues.apache.org/jira/browse/YARN-6284 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Laura Adams > Labels: newbie > > {code} > // The atomic variable to make sure multiple threads with the same > runnable > // run only once. > private AtomicBoolean hasAlreadyRun = new AtomicBoolean(false); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
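[Editorial note] The fix requested above is small: the {{AtomicBoolean}} reference is never reassigned — only its value changes — so the field can be declared {{final}}. A simplified, hypothetical sketch (not the actual ResourceManager code):

```java
// Hypothetical simplification of ResourceManager.StandByTransitionRunnable.
// The AtomicBoolean reference is never reassigned, only its value is, so
// the field can (and should) be final.
import java.util.concurrent.atomic.AtomicBoolean;

class StandByTransitionSketch implements Runnable {
  // final documents that the reference is immutable and gives safe
  // publication across the threads sharing this runnable.
  private final AtomicBoolean hasAlreadyRun = new AtomicBoolean(false);

  @Override
  public void run() {
    // compareAndSet guarantees only the first caller performs the transition.
    if (hasAlreadyRun.compareAndSet(false, true)) {
      // ... transition to standby (elided) ...
    }
  }

  boolean ran() {
    return hasAlreadyRun.get();
  }
}
```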
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903972#comment-15903972 ] Haibo Chen commented on YARN-5269: -- On the client side, putEntities() is a blocking call and putEntitiesAsync() is a fire-and-forget operation. For each REST request, we check the response code from the server and wrap errors in a generic YarnException. On the server side (TimelineCollectorWebService), even though clients can specify an async flag, this flag is ignored for the time being. Therefore, the guarantees are the same as those provided by HBaseTimelineWriter.write(). IIUC, because BufferedMutator is entirely async, HBaseTimelineWriter.write() does not guarantee anything to clients. Is that correct [~vrushalic]? As I think about this more, it seems that the sync putEntities() only ensures that entities are added to the buffer in BufferedMutator without any problem. How would this change with respect to SpooledBufferedMutator, [~vrushalic], [~jrottinghuis]? > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. 
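[Editorial note] The weak guarantee discussed above — a "synchronous" write that only buffers — can be illustrated with a toy model. This is not HBase's BufferedMutator, just a hypothetical sketch of the same buffering pattern:

```java
// Toy model (not HBase itself) of why a sync putEntities() that ends at
// BufferedMutator.mutate() guarantees only buffering, not durability:
// mutations sit in a client-side buffer until an explicit (or
// size/time-triggered) flush pushes them to the server.
import java.util.ArrayList;
import java.util.List;

class BufferedMutatorSketch {
  private final List<String> buffer = new ArrayList<>();
  // Stands in for what the region server has durably received.
  private final List<String> store = new ArrayList<>();

  void mutate(String put) {
    // Returns immediately; nothing has reached the "server" yet.
    buffer.add(put);
  }

  void flush() {
    // Only now do buffered mutations become visible server-side.
    store.addAll(buffer);
    buffer.clear();
  }

  int persisted() {
    return store.size();
  }
}
```

Any write error surfacing only at flush time is exactly the exception the caller of mutate() never sees — the gap this JIRA is about.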
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903960#comment-15903960 ] Yufei Gu commented on YARN-6042: Thanks [~rchiang] for the review. Right now the default update interval is 0.5s. And we already have {{UPDATE_DEBUG_FREQUENCY}} to output debug message less frequently. Right now {{UPDATE_DEBUG_FREQUENCY}} is 5, which means every 0.5 * 5 s, there is a state dump. I increased it to 25 in patch v10. So FS dumps its state every 12.5s. > Dump scheduler and queue state information into FairScheduler DEBUG log > --- > > Key: YARN-6042 > URL: https://issues.apache.org/jira/browse/YARN-6042 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6042.001.patch, YARN-6042.002.patch, > YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, > YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch, > YARN-6042.009.patch, YARN-6042.010.patch > > > To improve the debugging of scheduler issues it would be a big improvement to > be able to dump the scheduler state into a log on request. > The Dump the scheduler state at a point in time would allow debugging of a > scheduler that is not hung (deadlocked) but also not assigning containers. > Currently we do not have a proper overview of what state the scheduler and > the queues are in and we have to make assumptions or guess > The scheduler and queue state needed would include (not exhaustive): > - instantaneous and steady fair share (app / queue) > - AM share and resources > - weight > - app demand > - application run state (runnable/non runnable) > - last time at fair/min share -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
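[Editorial note] The cadence arithmetic above is: dump interval = update interval × {{UPDATE_DEBUG_FREQUENCY}}, so 500 ms × 25 = 12.5 s. A minimal sketch (names mirror the comment; this is not the FairScheduler source):

```java
// Sketch of the state-dump cadence described in the comment: the scheduler
// dumps state once every UPDATE_DEBUG_FREQUENCY update cycles.
class DumpCadenceSketch {
  static long dumpIntervalMs(long updateIntervalMs, int updateDebugFrequency) {
    return updateIntervalMs * updateDebugFrequency;
  }
}
```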
[jira] [Updated] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6042: --- Attachment: YARN-6042.010.patch > Dump scheduler and queue state information into FairScheduler DEBUG log > --- > > Key: YARN-6042 > URL: https://issues.apache.org/jira/browse/YARN-6042 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6042.001.patch, YARN-6042.002.patch, > YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, > YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch, > YARN-6042.009.patch, YARN-6042.010.patch > > > To improve the debugging of scheduler issues it would be a big improvement to > be able to dump the scheduler state into a log on request. > The Dump the scheduler state at a point in time would allow debugging of a > scheduler that is not hung (deadlocked) but also not assigning containers. > Currently we do not have a proper overview of what state the scheduler and > the queues are in and we have to make assumptions or guess > The scheduler and queue state needed would include (not exhaustive): > - instantaneous and steady fair share (app / queue) > - AM share and resources > - weight > - app demand > - application run state (runnable/non runnable) > - last time at fair/min share -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903540#comment-15903540 ] Haibo Chen edited comment on YARN-5269 at 3/9/17 10:08 PM: --- Based on today's discussion, questions we need to answer are 1) for the synchronous putEntities() API, what do we promise if no error/exception is returned to clients? In what scenarios do we bubble exceptions/errors to clients? 2) similarly for the asynchronous write API This is more to explicate the semantics+guarantees of our write API so that clients will have correct expectations. I'll check the existing code base and share my findings. [~vrushalic], [~jrottinghuis] can chime in on more complicated scenarios where spooled-buffered-mutator is involved. was (Author: haibochen): Based on today's discussion, questions we need to answer are 1) for the synchronous putEntities() API, what do we promise if no error/exception is returned to clients? In what scenarios do we bubble exceptions/errors to clients? 2) similarly for the asynchronous write API This is more to explicate the semantics+guarantees of our write API so that clients will have correct expectations. I'll check the existing code base and share my findings. [~vrushalic], [~jrottinghuis] chime in on more complicated scenarios where spooled-buffered-mutator is involved. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. 
> For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903941#comment-15903941 ] Karthik Kambatla commented on YARN-6264: +1, pending Jenkins. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This > potentially happens frequently when assigning a container to an AM just after > preemption, since it is more likely there is only one vcore on the > cluster just after preemption. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Description: In method {{canRunAppAM()}}, we will get zero vcores if there is only one vcore and {{maxAMShare}} is between 0 and 1, because {{Resources#multiply}} rounds a double down to an integer by default. This potentially happens frequently when assigning a container to an AM just after preemption, since it is more likely there is only one vcore on the cluster just after preemption. (was: In method {{canRunAppAM()}}, we will get a zero vcore if there is only one vcores, and the {{maxAMShare}} is between 0 and 1 because {{Resources#multiply}} round down double to a integer by default. This potentially happens frequently when assigning a container to AM just after preemption happens.) > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get zero vcores if there is only one > vcore and {{maxAMShare}} is between 0 and 1, because > {{Resources#multiply}} rounds a double down to an integer by default. This > potentially happens frequently when assigning a container to an AM just after > preemption, since it is more likely there is only one vcore on the > cluster just after preemption. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
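[Editorial note] The rounding bug in the description above can be shown in two lines: multiplying 1 vcore by a maxAMShare in (0, 1) and truncating yields 0 vcores, so no AM ever fits. The methods below are simplified stand-ins for {{Resources#multiply}}, not the actual Hadoop code:

```java
// Illustration of the YARN-6264 rounding bug and the round-up fix.
class AmShareSketch {
  // Default Resources#multiply behavior: the cast truncates toward zero.
  static int multiplyRoundDown(int vcores, double share) {
    return (int) (vcores * share);
  }

  // Round-up variant: 1 vcore * 0.5 maxAMShare still leaves 1 vcore for the AM.
  static int multiplyRoundUp(int vcores, double share) {
    return (int) Math.ceil(vcores * share);
  }
}
```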
[jira] [Updated] (YARN-6298) Metric preemptCall is not used in new preemption.
[ https://issues.apache.org/jira/browse/YARN-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6298: --- Priority: Blocker (was: Major) > Metric preemptCall is not used in new preemption. > - > > Key: YARN-6298 > URL: https://issues.apache.org/jira/browse/YARN-6298 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Yufei Gu >Priority: Blocker > > Either get rid of it in Hadoop 3 or use it. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6298) Metric preemptCall is not used in new preemption.
[ https://issues.apache.org/jira/browse/YARN-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6298: --- Target Version/s: 3.0.0-beta1 Hadoop Flags: Incompatible change > Metric preemptCall is not used in new preemption. > - > > Key: YARN-6298 > URL: https://issues.apache.org/jira/browse/YARN-6298 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.8.0, 3.0.0-alpha2 >Reporter: Yufei Gu > > Either get rid of it in Hadoop 3 or use it. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Description: In method {{canRunAppAM()}}, we will get a zero vcore if there is only one vcores, and the {{maxAMShare}} is between 0 and 1 because {{Resources#multiply}} round down double to a integer by default. This potentially happens frequently when assigning a container to AM just after preemption happens. (was: In method {{canRunAppAM()}}, we will get a zero vcore if there is only one vcores, and the {{maxAMShare}} is between 0 and 1 because {{Resources#multiply}} round down double to a integer by default. This potentially happens frequently when preemption happens and assign a new container to AM. ) > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get a zero vcore if there is only one > vcores, and the {{maxAMShare}} is between 0 and 1 because > {{Resources#multiply}} round down double to a integer by default. This > potentially happens frequently when assigning a container to AM just after > preemption happens. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903923#comment-15903923 ] Yufei Gu commented on YARN-6264: Hi [~DjvuLee], changed the description to answer your question. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get a zero vcore if there is only one > vcores, and the {{maxAMShare}} is between 0 and 1 because > {{Resources#multiply}} round down double to a integer by default. This > potentially happens frequently when preemption happens and assign a new > container to AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Description: In method {{canRunAppAM()}}, we will get a zero vcore if there is only one vcores, and the {{maxAMShare}} is between 0 and 1 because {{Resources#multiply}} round down double to a integer by default. This potentially happens frequently when preemption happens and assign a new container to AM. (was: In method {{canRunAppAM()}}, we should use policy related resource comparison instead of using {{Resources.fitsIn()}} to determined if the queue has enough resource for the AM. ) > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we will get a zero vcore if there is only one > vcores, and the {{maxAMShare}} is between 0 and 1 because > {{Resources#multiply}} round down double to a integer by default. This > potentially happens frequently when preemption happens and assign a new > container to AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903873#comment-15903873 ] Yufei Gu edited comment on YARN-6264 at 3/9/17 9:27 PM: Thanks [~kasha] for the review. Uploaded patch v4 for your comments. YARN-6317 is the JIRA for removing round-down version of multiply. was (Author: yufeigu): Thanks [~kasha] for the review. Uploaded patch v4 for your comments. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch > > > In method {{canRunAppAM()}}, we should use policy related resource comparison > instead of using {{Resources.fitsIn()}} to determined if the queue has enough > resource for the AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6246) Identifying starved apps does not need the scheduler writelock
[ https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6246: --- Attachment: YARN-6246.001.patch Straightforward patch that keeps the starvation checks in the Update thread, but runs them while holding the scheduler readlock. > Identifying starved apps does not need the scheduler writelock > -- > > Key: YARN-6246 > URL: https://issues.apache.org/jira/browse/YARN-6246 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Attachments: YARN-6246.001.patch > > > Currently, the starvation checks are done holding the scheduler writelock. We > are probably better off doing this outside. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
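[Editorial note] The shape of the change described above — a read-only scan moved from the write lock to the read lock — can be sketched with {{ReentrantReadWriteLock}}. This is a hypothetical skeleton, not the actual FairScheduler code:

```java
// Sketch of the YARN-6246 idea: identifying starved apps only reads scheduler
// state, so it can run under the read lock and no longer block concurrent
// readers; mutations keep taking the write lock.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class StarvationCheckSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<String> apps = new ArrayList<>();

  void addApp(String app) {
    lock.writeLock().lock(); // mutations still take the write lock
    try {
      apps.add(app);
    } finally {
      lock.writeLock().unlock();
    }
  }

  List<String> identifyStarvedApps() {
    lock.readLock().lock(); // read-only scan: the read lock is sufficient
    try {
      List<String> starved = new ArrayList<>();
      for (String app : apps) {
        // Stand-in predicate for the real fair-share starvation test.
        if (app.startsWith("starved")) {
          starved.add(app);
        }
      }
      return starved;
    } finally {
      lock.readLock().unlock();
    }
  }
}
```

Multiple update-thread passes can then scan concurrently with other readers, while writers are still serialized.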
[jira] [Updated] (YARN-6246) Identifying starved apps does not need the scheduler writelock
[ https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6246: --- Description: Currently, the starvation checks are done holding the scheduler writelock. We are probably better off doing this outside. (was: Given the update thread holds the scheduler write-lock, we are probably better of computing starvation and identification of starved apps in a different thread. I am averse to adding a thread that runs on a *configurable timeout*, but maybe we could trigger this thread after every update run, or do this in the update thread but outside of the write-lock. ) > Identifying starved apps does not need the scheduler writelock > -- > > Key: YARN-6246 > URL: https://issues.apache.org/jira/browse/YARN-6246 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > > Currently, the starvation checks are done holding the scheduler writelock. We > are probably better off doing this outside. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6246) Identifying starved apps does not need the scheduler writelock
[ https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6246: --- Summary: Identifying starved apps does not need the scheduler writelock (was: Move app starvation identification out of the update thread) > Identifying starved apps does not need the scheduler writelock > -- > > Key: YARN-6246 > URL: https://issues.apache.org/jira/browse/YARN-6246 > Project: Hadoop YARN > Issue Type: Sub-task > Components: fairscheduler >Affects Versions: 2.9.0 >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > > Given the update thread holds the scheduler write-lock, we are probably > better of computing starvation and identification of starved apps in a > different thread. > I am averse to adding a thread that runs on a *configurable timeout*, but > maybe we could trigger this thread after every update run, or do this in the > update thread but outside of the write-lock. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Attachment: YARN-6264.005.patch v5 changes the assert messages. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch, YARN-6264.005.patch > > > In method {{canRunAppAM()}}, we should use policy related resource comparison > instead of using {{Resources.fitsIn()}} to determined if the queue has enough > resource for the AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
[ https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903900#comment-15903900 ] Jonathan Hung commented on YARN-5948: - Thanks for the review [~leftnoteasy], uploaded a patch addressing these two issues. > Implement MutableConfigurationManager for handling storage into configuration > store > --- > > Key: YARN-5948 > URL: https://issues.apache.org/jira/browse/YARN-5948 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, > YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, > YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch, > YARN-5948-YARN-5734.007.patch, YARN-5948-YARN-5734.008.patch > > > The MutableConfigurationManager will take REST calls with desired client > configuration changes and call YarnConfigurationStore methods to store these > changes in the backing store. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5948) Implement MutableConfigurationManager for handling storage into configuration store
[ https://issues.apache.org/jira/browse/YARN-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-5948: Attachment: YARN-5948-YARN-5734.008.patch > Implement MutableConfigurationManager for handling storage into configuration > store > --- > > Key: YARN-5948 > URL: https://issues.apache.org/jira/browse/YARN-5948 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5948.001.patch, YARN-5948-YARN-5734.002.patch, > YARN-5948-YARN-5734.003.patch, YARN-5948-YARN-5734.004.patch, > YARN-5948-YARN-5734.005.patch, YARN-5948-YARN-5734.006.patch, > YARN-5948-YARN-5734.007.patch, YARN-5948-YARN-5734.008.patch > > > The MutableConfigurationManager will take REST calls with desired client > configuration changes and call YarnConfigurationStore methods to store these > changes in the backing store. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6264: --- Attachment: YARN-6264.004.patch Thanks [~kasha] for the review. Uploaded patch v4 for your comments. > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch, YARN-6264.004.patch > > > In method {{canRunAppAM()}}, we should use policy related resource comparison > instead of using {{Resources.fitsIn()}} to determined if the queue has enough > resource for the AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6317) Get rid of Resources#multiplyAndRoundDown since it duplicates Resources#multiply
[ https://issues.apache.org/jira/browse/YARN-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6317: --- Labels: newbie (was: ) > Get rid of Resources#multiplyAndRoundDown since it duplicates > Resources#multiply > > > Key: YARN-6317 > URL: https://issues.apache.org/jira/browse/YARN-6317 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Yufei Gu >Priority: Minor > Labels: newbie > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6317) Get rid of Resources#multiplyAndRoundDown since it duplicates Resources#multiply
Yufei Gu created YARN-6317: -- Summary: Get rid of Resources#multiplyAndRoundDown since it duplicates Resources#multiply Key: YARN-6317 URL: https://issues.apache.org/jira/browse/YARN-6317 Project: Hadoop YARN Issue Type: Bug Reporter: Yufei Gu Priority: Minor -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903855#comment-15903855 ] Karthik Kambatla commented on YARN-6264: Thanks for following up on this [~yufeigu]. Comments on the patch: # Let us add a comment in FSLeafQueue for why we are using the round-up version instead of the regular one. # There is a round-down version of the multiply which is the same as the regular one. Can we file a follow-up JIRA to get rid of it? # In the tests, can we use assertEquals and add a message for each assert? > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch > > > In method {{canRunAppAM()}}, we should use policy related resource comparison > instead of using {{Resources.fitsIn()}} to determine if the queue has enough > resources for the AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6264) AM not launched when a single vcore is available on the cluster
[ https://issues.apache.org/jira/browse/YARN-6264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla updated YARN-6264: --- Summary: AM not launched when a single vcore is available on the cluster (was: Resource comparison should depends on policy) > AM not launched when a single vcore is available on the cluster > --- > > Key: YARN-6264 > URL: https://issues.apache.org/jira/browse/YARN-6264 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6264.001.patch, YARN-6264.002.patch, > YARN-6264.003.patch > > > In method {{canRunAppAM()}}, we should use policy-related resource comparison > instead of using {{Resources.fitsIn()}} to determine if the queue has enough > resource for the AM. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903848#comment-15903848 ] Hudson commented on YARN-6300: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11380 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11380/]) YARN-6300. NULL_UPDATE_REQUESTS is redundant in TestFairScheduler (templedf: rev 822a74f2ae955ea0893cc02fb36ceb49ceba8014) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6042) Dump scheduler and queue state information into FairScheduler DEBUG log
[ https://issues.apache.org/jira/browse/YARN-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903800#comment-15903800 ] Ray Chiang commented on YARN-6042: -- Looks good so far. Another thing occurs to me. The queue info is being dumped out every 3 seconds on my machine. Can we add a configuration option (log4j or yarn-default.xml) that would lower the output rate of these messages? I think an update every 15-30 seconds would be enough in many cases. > Dump scheduler and queue state information into FairScheduler DEBUG log > --- > > Key: YARN-6042 > URL: https://issues.apache.org/jira/browse/YARN-6042 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6042.001.patch, YARN-6042.002.patch, > YARN-6042.003.patch, YARN-6042.004.patch, YARN-6042.005.patch, > YARN-6042.006.patch, YARN-6042.007.patch, YARN-6042.008.patch, > YARN-6042.009.patch > > > To improve the debugging of scheduler issues it would be a big improvement to > be able to dump the scheduler state into a log on request. > Dumping the scheduler state at a point in time would allow debugging of a > scheduler that is not hung (deadlocked) but also not assigning containers. > Currently we do not have a proper overview of what state the scheduler and > the queues are in, and we have to make assumptions or guess. > The scheduler and queue state needed would include (not exhaustive): > - instantaneous and steady fair share (app / queue) > - AM share and resources > - weight > - app demand > - application run state (runnable/non runnable) > - last time at fair/min share -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
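The rate limiting Ray Chiang suggests can be sketched as a simple interval gate. This is a hypothetical helper, not existing YARN code, and the configuration wiring (log4j vs. yarn-default.xml) is left open as in the comment:

```java
// Hypothetical throttle for periodic state dumps: emit a dump only if a
// configurable interval has elapsed since the previous one.
public class ThrottledDumper {
    private final long intervalMs;
    private long lastDumpMs;

    ThrottledDumper(long intervalMs) {
        this.intervalMs = intervalMs;
        this.lastDumpMs = -intervalMs; // so the very first call dumps
    }

    // Returns true when a dump should be emitted at time nowMs.
    synchronized boolean shouldDump(long nowMs) {
        if (nowMs - lastDumpMs >= intervalMs) {
            lastDumpMs = nowMs;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ThrottledDumper d = new ThrottledDumper(15_000); // e.g. a 15s interval
        System.out.println(d.shouldDump(0));      // first call: dump
        System.out.println(d.shouldDump(3_000));  // a 3s-later update is suppressed
        System.out.println(d.shouldDump(16_000)); // interval elapsed: dump again
    }
}
```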
[jira] [Commented] (YARN-6314) Potential infinite redirection on YARN log redirection web service
[ https://issues.apache.org/jira/browse/YARN-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903671#comment-15903671 ] Hadoop QA commented on YARN-6314: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 34s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 41 unchanged - 0 fixed = 43 total (was 41) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 19s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 50s{color} | {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 50m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-6314 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12857063/YARN-6314.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e5a9b7817d9f 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 385d2cb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15214/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yar
[jira] [Created] (YARN-6316) Provide help information and documentation for TimelineSchemaCreator
Li Lu created YARN-6316: --- Summary: Provide help information and documentation for TimelineSchemaCreator Key: YARN-6316 URL: https://issues.apache.org/jira/browse/YARN-6316 Project: Hadoop YARN Issue Type: Sub-task Reporter: Li Lu Right now there is no help information for the timeline schema creator. We probably want to provide an option to print help. Also, ideally, if users pass in no arguments, we may want to print the help instead of directly creating the tables. This will simplify cluster operations and timeline v2 deployments. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
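The proposed behavior can be sketched in a few lines. This is not the actual TimelineSchemaCreator; the class, option name, and usage text are invented to show the shape of the change: print usage on a help flag or when no arguments are given, rather than creating tables by default.

```java
// Invented CLI sketch of the proposed behavior, not the real schema creator.
public class SchemaCreatorCli {
    static String run(String[] args) {
        // No arguments or an explicit -help: show usage instead of acting.
        if (args.length == 0 || "-help".equals(args[0])) {
            return usage();
        }
        return "creating tables with options: " + String.join(" ", args);
    }

    static String usage() {
        return "Usage: TimelineSchemaCreator [-help] [-create] ... (options invented for illustration)";
    }

    public static void main(String[] args) {
        System.out.println(run(args));
    }
}
```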
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903614#comment-15903614 ] Hadoop QA commented on YARN-5669: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 26s{color} | 
{color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 14 unchanged - 1 fixed = 15 total (was 15) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 35m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5669 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838686/YARN-5669.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ea1ea38ec4ec 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 385d2cb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/15213/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15213/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15213/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YAR
[jira] [Created] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files
Kuhu Shukla created YARN-6315: - Summary: Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files Key: YARN-6315 URL: https://issues.apache.org/jira/browse/YARN-6315 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.7.3, 2.8.1 Reporter: Kuhu Shukla Assignee: Kuhu Shukla We currently check whether a resource is present by making sure that the file exists locally. There can be a case where the LocalizationTracker thinks it has the resource because the file exists, even though its size is 0 or less than the "expected" size of the LocalResource. This JIRA tracks the change to harden the isResourcePresent call to address that case. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
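The hardened check amounts to comparing the on-disk size against the size the LocalResource metadata expects, not just testing existence. This is a hypothetical helper, not the actual LocalResourcesTrackerImpl code:

```java
import java.io.File;

// Hypothetical sketch of the hardened presence check: a resource counts as
// present only if the local file exists AND its size matches the size
// recorded in the LocalResource metadata, so zero-length or truncated
// files trigger re-localization.
public class ResourcePresenceCheck {
    static boolean isResourcePresent(File localFile, long expectedSize) {
        return localFile.exists() && localFile.length() == expectedSize;
    }
}
```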
[jira] [Assigned] (YARN-6284) hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable
[ https://issues.apache.org/jira/browse/YARN-6284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton reassigned YARN-6284: -- Assignee: Daniel Templeton > hasAlreadyRun should be final in ResourceManager.StandByTransitionRunnable > -- > > Key: YARN-6284 > URL: https://issues.apache.org/jira/browse/YARN-6284 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Labels: newbie > > {code} > // The atomic variable to make sure multiple threads with the same > runnable > // run only once. > private AtomicBoolean hasAlreadyRun = new AtomicBoolean(false); > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
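The fix the issue asks for is mechanical: the AtomicBoolean reference never changes, only its contained value does (via compareAndSet), so the field can be declared final. A minimal sketch of the pattern, with a package-private accessor added here purely for illustration:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the run-once guard pattern from the issue. The AtomicBoolean
// object itself is never reassigned, so the field should be final.
public class StandByTransitionDemo implements Runnable {
    // final: only the contained boolean value mutates, never the reference.
    private final AtomicBoolean hasAlreadyRun = new AtomicBoolean(false);

    @Override
    public void run() {
        // Only the first caller wins; subsequent invocations are no-ops.
        if (hasAlreadyRun.compareAndSet(false, true)) {
            System.out.println("transitioning to standby");
        }
    }

    // Accessor added for this sketch only; not part of the original class.
    boolean ran() { return hasAlreadyRun.get(); }
}
```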
[jira] [Commented] (YARN-6300) NULL_UPDATE_REQUESTS is redundant in TestFairScheduler
[ https://issues.apache.org/jira/browse/YARN-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903604#comment-15903604 ] Daniel Templeton commented on YARN-6300: +1 LGTM. I'll commit when I get a chance. > NULL_UPDATE_REQUESTS is redundant in TestFairScheduler > -- > > Key: YARN-6300 > URL: https://issues.apache.org/jira/browse/YARN-6300 > Project: Hadoop YARN > Issue Type: Improvement >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Yuanbo Liu >Priority: Minor > Labels: newbie > Attachments: YARN-6300.001.patch > > > The {{TestFairScheduler.NULL_UPDATE_REQUESTS}} field hides > {{FairSchedulerTestBase.NULL_UPDATE_REQUESTS}}, which has the same value. > The {{NULL_UPDATE_REQUESTS}} field should be removed from > {{TestFairScheduler}}. > While you're at it, maybe also remove the unused import. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6301) Fair scheduler docs should explain the meaning of setting a queue's weight to zero
[ https://issues.apache.org/jira/browse/YARN-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903600#comment-15903600 ] Daniel Templeton commented on YARN-6301: Thanks, [~Tao Jie]. You're on the right track, but I'd like to be a little more explicit. A queue with weight 0 will not receive resources as long as there is demand from any other queue. A weight-0 queue is often called an "ad hoc queue." > Fair scheduler docs should explain the meaning of setting a queue's weight to > zero > -- > > Key: YARN-6301 > URL: https://issues.apache.org/jira/browse/YARN-6301 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Tao Jie > Labels: docs > Attachments: YARN-6301.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
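Daniel's explanation translates directly into the allocation file. A purely illustrative fair-scheduler.xml fragment (queue names invented) showing an ad hoc, weight-0 queue next to a normal queue:

```xml
<?xml version="1.0"?>
<!-- Illustrative fragment: the "adhoc" queue only receives resources when
     no other queue has demand, because its weight is zero. -->
<allocations>
  <queue name="adhoc">
    <weight>0.0</weight>
  </queue>
  <queue name="production">
    <weight>3.0</weight>
  </queue>
</allocations>
```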
[jira] [Assigned] (YARN-6301) Fair scheduler docs should explain the meaning of setting a queue's weight to zero
[ https://issues.apache.org/jira/browse/YARN-6301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton reassigned YARN-6301: -- Assignee: Tao Jie > Fair scheduler docs should explain the meaning of setting a queue's weight to > zero > -- > > Key: YARN-6301 > URL: https://issues.apache.org/jira/browse/YARN-6301 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Tao Jie > Labels: docs > Attachments: YARN-6301.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903590#comment-15903590 ] Hadoop QA commented on YARN-5669: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s{color} | 
{color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 1 new + 14 unchanged - 1 fixed = 15 total (was 15) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | YARN-5669 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838686/YARN-5669.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ccb3ac6f38c7 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 385d2cb | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/15212/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15212/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15212/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YA
[jira] [Updated] (YARN-6314) Potential infinite redirection on YARN log redirection web service
[ https://issues.apache.org/jira/browse/YARN-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-6314: Attachment: YARN-6314.1.patch > Potential infinite redirection on YARN log redirection web service > -- > > Key: YARN-6314 > URL: https://issues.apache.org/jira/browse/YARN-6314 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Xuan Gong >Assignee: Xuan Gong > Attachments: YARN-6314.1.patch > > > In YARN-6113, we added a redirect NM web service to get container logs, > which could cause a potential infinite redirection loop. > It can happen when: > * We call the AHS web service to get a running/finished AM container log for a > running application. > * The AHS web service redirects the request to the specific NM, given that the > application is still running, and the NM handles the request. > * If the log file we requested has already been aggregated and deleted from the > NM, the NM redirects the request back to the AHS. > In this case, steps 2 and 3 would repeat infinitely. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903540#comment-15903540 ] Haibo Chen commented on YARN-5269: -- Based on today's discussion, the questions we need to answer are 1) for the synchronous putEntities() API, what do we promise if no error/exception is returned to clients? In what scenarios do we bubble exceptions/errors up to clients? 2) similarly for the asynchronous write API. This is more to explicate the semantics+guarantees of our write API so that clients will have correct expectations. I'll check the existing code base and share my findings. [~vrushalic], [~jrottinghuis], please chime in on more complicated scenarios where the spooled buffered mutator is involved. > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exception from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method return any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the write and reader call paths and if we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
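The two API shapes under discussion can be sketched as follows. These are not the actual TimelineClient signatures; the methods and the boolean stand-in for backend health are invented to contrast a synchronous put that throws on failure with an asynchronous put that surfaces errors through a future instead of swallowing them:

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

// Invented sketch of the two write-API shapes; not the real TimelineClient.
public class TimelineWriteApiSketch {
    // Synchronous flavor: returning normally means the write was accepted;
    // backend errors bubble up as IOException instead of being swallowed.
    static void putEntities(boolean backendUp) throws IOException {
        if (!backendUp) {
            throw new IOException("backend write failed");
        }
    }

    // Asynchronous flavor: the future carries the error, so callers can
    // observe failures that previously had nowhere to go.
    static CompletableFuture<Void> putEntitiesAsync(boolean backendUp) {
        CompletableFuture<Void> f = new CompletableFuture<>();
        if (backendUp) {
            f.complete(null);
        } else {
            f.completeExceptionally(new IOException("backend write failed"));
        }
        return f;
    }
}
```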
[jira] [Created] (YARN-6314) Potential infinite redirection on YARN log redirection web service
Xuan Gong created YARN-6314: --- Summary: Potential infinite redirection on YARN log redirection web service Key: YARN-6314 URL: https://issues.apache.org/jira/browse/YARN-6314 Project: Hadoop YARN Issue Type: Sub-task Reporter: Xuan Gong Assignee: Xuan Gong In YARN-6113, we added a redirect NM web service to get container logs, which could cause a potential infinite redirection loop. It can happen when: * We call the AHS web service to get a running/finished AM container log for a running application. * The AHS web service redirects the request to the specific NM, given that the application is still running, and the NM handles the request. * If the log file we requested has already been aggregated and deleted from the NM, the NM redirects the request back to the AHS. In this case, steps 2 and 3 would repeat infinitely. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
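One common way to break this kind of redirect cycle is to tag the redirected request so the second hop answers locally instead of bouncing it back. This is a speculative sketch, not the approach taken in the attached patch; the parameter name is invented:

```java
// Speculative sketch of a redirect-loop guard; the parameter name is
// invented and this is not the code from YARN-6314.1.patch.
public class RedirectLoopGuard {
    static final String REDIRECTED_PARAM = "redirected_from_node";

    // Returns the redirect target URL, or null when the request has already
    // been redirected once and must be answered (or failed) locally.
    static String decideRedirect(String query, String target) {
        if (query != null && query.contains(REDIRECTED_PARAM)) {
            return null; // already bounced once: do not redirect again
        }
        return target + "?" + REDIRECTED_PARAM + "=true";
    }
}
```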
[jira] [Commented] (YARN-5669) Add support for Docker pull
[ https://issues.apache.org/jira/browse/YARN-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903519#comment-15903519 ] Sidharta Seethana commented on YARN-5669: - I have launched a commit build again - I'll take a look at this patch once the build is done. > Add support for Docker pull > --- > > Key: YARN-5669 > URL: https://issues.apache.org/jira/browse/YARN-5669 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: luhuichun > Attachments: YARN-5669.001.patch > > > We need to add docker pull to support Docker image localization. Refer to > YARN-3854 for the details. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6307) Refactor FairShareComparator#compare
[ https://issues.apache.org/jira/browse/YARN-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6307: --- Description: The method does three things: check the min share ratio, check the weight ratio, and break ties by submit time and name. These checks are mixed together, which makes the method hard to read and maintain. Additionally, there are potential performance issues; for example, there is no need to calculate the weight ratio every time. was: The method did three things: check the min share ratio, check weight ratio, break tied by submit time and name. They are mixed with each other which is not easy to read and maintenance, poor style. Additionally, there are potential performance issues, like no need to calculate weight ratio every time. > Refactor FairShareComparator#compare > > > Key: YARN-6307 > URL: https://issues.apache.org/jira/browse/YARN-6307 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Yufei Gu >Assignee: Yufei Gu > > The method does three things: check the min share ratio, check the weight > ratio, and break ties by submit time and name. These checks are mixed > together, which makes the method hard to read and maintain. Additionally, > there are potential performance issues; for example, there is no need to > calculate the weight ratio every time. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
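The refactoring direction can be sketched by splitting the three concerns into named steps, evaluating each later comparison only when the earlier ones tie. This is a hypothetical stand-in (the Schedulable fields here are precomputed ratios), not the real FairShareComparator:

```java
import java.util.Comparator;

// Hypothetical refactoring sketch; the Sched class stands in for Schedulable
// and carries precomputed ratios. Later steps run only when earlier steps tie.
public class FairShareComparatorSketch implements Comparator<FairShareComparatorSketch.Sched> {
    static class Sched {
        double minShareRatio;   // usage relative to min share
        double weightRatio;     // usage relative to weight
        long startTime;
        String name;
        Sched(double m, double w, long t, String n) {
            minShareRatio = m; weightRatio = w; startTime = t; name = n;
        }
    }

    @Override
    public int compare(Sched a, Sched b) {
        int res = compareMinShare(a, b);
        if (res == 0) res = compareWeightRatio(a, b); // only evaluated on a tie
        if (res == 0) res = breakTie(a, b);
        return res;
    }

    private int compareMinShare(Sched a, Sched b) {
        return Double.compare(a.minShareRatio, b.minShareRatio);
    }

    private int compareWeightRatio(Sched a, Sched b) {
        return Double.compare(a.weightRatio, b.weightRatio);
    }

    private int breakTie(Sched a, Sched b) {
        int byTime = Long.compare(a.startTime, b.startTime);
        return byTime != 0 ? byTime : a.name.compareTo(b.name);
    }
}
```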
[jira] [Assigned] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.
[ https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen reassigned YARN-5269: Assignee: Haibo Chen (was: Varun Saxena) > Bubble exceptions and errors all the way up the calls, including to clients. > > > Key: YARN-5269 > URL: https://issues.apache.org/jira/browse/YARN-5269 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Joep Rottinghuis >Assignee: Haibo Chen > Labels: YARN-5355, yarn-5355-merge-blocker > > Currently we ignore (swallow) exceptions from the HBase side in many cases > (reads and writes). > Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) > nor the #putEntitiesAsync method returns any value. > For the second drop we may want to consider how we properly bubble up > exceptions throughout the writer and reader call paths, and whether we want to > return a response in putEntities and some future kind of result for > putEntitiesAsync. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6313) yarn logs cli does not provide logs for a completed container even when the nm address is provided
Xuan Gong created YARN-6313: --- Summary: yarn logs cli does not provide logs for a completed container even when the nm address is provided Key: YARN-6313 URL: https://issues.apache.org/jira/browse/YARN-6313 Project: Hadoop YARN Issue Type: Sub-task Reporter: Siddharth Seth Assignee: Xuan Gong For a running app with a completed container, yarn logs does not return the logs even when the appId, containerId, and nodeId are provided. Specific use case: a long-running app where one daemon has crashed; its logs are not accessible without shutting down the app. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6208) Improve the log when FinishAppEvent sent to the NodeManager which didn't run the application
[ https://issues.apache.org/jira/browse/YARN-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903443#comment-15903443 ] Daniel Templeton commented on YARN-6208: It's not like the log message is going to appear so often that we need to worry about optimizing for length. I much prefer having a log message that can be understood without looking at source code. > Improve the log when FinishAppEvent sent to the NodeManager which didn't run > the application > > > Key: YARN-6208 > URL: https://issues.apache.org/jira/browse/YARN-6208 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: newbie, supportability > Attachments: YARN-6208.01.patch > > > When FinishAppEvent of an application is sent to a NodeManager and no > containers of the application ran on that NodeManager, we can see the > following log: > {code} > 2015-12-28 11:59:18,725 WARN > org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: > Event EventType: FINISH_APPLICATION sent to absent application > application_1446103803043_9892 > {code} > YARN-4520 made the log as follows: > {code} > LOG.warn("couldn't find application " + appID + " while processing" > + " FINISH_APPS event"); > {code} > and I'm thinking it can be improved. > * Make the log level WARN instead of INFO > * Add why the NodeManager couldn't find the application. For example, > "because no containers of the application ran on the NodeManager." -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption is inconsistently failing on trunk
[ https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903439#comment-15903439 ] Yufei Gu commented on YARN-6249: +1 (non-binding). [~kasha], wanna take a look? > TestFairSchedulerPreemption is inconsistently failing on trunk > -- > > Key: YARN-6249 > URL: https://issues.apache.org/jira/browse/YARN-6249 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler, resourcemanager >Affects Versions: 2.9.0 >Reporter: Sean Po >Assignee: Tao Jie > Attachments: YARN-6249.001.patch, YARN-6249.002.patch > > > Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. > An example stack trace: > {noformat} > Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec > <<< FAILURE! - in > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption > testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption) > Time elapsed: 10.475 sec <<< FAILURE! > java.lang.AssertionError: Incorrect number of containers on the greedy app > expected:<4> but was:<8> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-2113: -- Attachment: YARN-2113.v0.patch Attaching v0 patch for user-limit preemption. This patch is built on top of YARN-2009, where the intra-queue preemption framework was already added. It focuses on supporting user-limit preemption when a user is under-served because other users are exceeding their limit. Basic test cases are added in the preemption module; I will add some more test cases on the scheduler side to ensure that the user-limit computation is also correct. [~leftnoteasy] and [~eepayne], please help by sharing some early feedback. > Add cross-user preemption within CapacityScheduler's leaf-queue > --- > > Key: YARN-2113 > URL: https://issues.apache.org/jira/browse/YARN-2113 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli > Attachments: YARN-2113.v0.patch > > > Preemption today only works across queues and moves around resources across > queues per demand and usage. We should also have user-level preemption within > a queue, to balance capacity across users in a predictable manner. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6165) Intra-queue preemption occurs even when preemption is turned off for a specific queue.
[ https://issues.apache.org/jira/browse/YARN-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903349#comment-15903349 ] Eric Payne commented on YARN-6165: -- Thanks Jason and Sunil. > Intra-queue preemption occurs even when preemption is turned off for a > specific queue. > -- > > Key: YARN-6165 > URL: https://issues.apache.org/jira/browse/YARN-6165 > Project: Hadoop YARN > Issue Type: Bug > Components: capacity scheduler, scheduler preemption >Affects Versions: 3.0.0-alpha2 >Reporter: Eric Payne >Assignee: Eric Payne > Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3 > > Attachments: YARN-6165.001.patch > > > Intra-queue preemption occurs even when preemption is turned on for the whole > cluster ({{yarn.resourcemanager.scheduler.monitor.enable == true}}) but > turned off for a specific queue > ({{yarn.scheduler.capacity.root.queue1.disable_preemption == true}}). -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5956) Refactor ClientRMService
[ https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15903098#comment-15903098 ] Kai Sasaki commented on YARN-5956: -- [~rohithsharma] [~sunilg] Sorry, I overlooked that; thank you for checking. I updated the patch accordingly. > Refactor ClientRMService > > > Key: YARN-5956 > URL: https://issues.apache.org/jira/browse/YARN-5956 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: YARN-5956.01.patch, YARN-5956.02.patch, > YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, > YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, > YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, > YARN-5956.12.patch, YARN-5956.13.patch, YARN-5956.14.patch > > > Some refactoring can be done in {{ClientRMService}}. > - Remove redundant variable declaration > - Fill in missing javadocs > - Proper variable access modifier > - Fix some typos in method name and exception messages -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5956) Refactor ClientRMService
[ https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated YARN-5956: - Attachment: YARN-5956.14.patch > Refactor ClientRMService > > > Key: YARN-5956 > URL: https://issues.apache.org/jira/browse/YARN-5956 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: YARN-5956.01.patch, YARN-5956.02.patch, > YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, > YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, > YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, > YARN-5956.12.patch, YARN-5956.13.patch, YARN-5956.14.patch > > > Some refactoring can be done in {{ClientRMService}}. > - Remove redundant variable declaration > - Fill in missing javadocs > - Proper variable access modifier > - Fix some typos in method name and exception messages -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6288) Refactor AppLogAggregatorImpl#uploadLogsForContainers
[ https://issues.apache.org/jira/browse/YARN-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902868#comment-15902868 ] Akira Ajisaka commented on YARN-6288: - Thanks [~haibochen] for the comment. I tried to make LogWriter closeable, but createLogWriter() throws IOException, so we cannot use LogWriter for try-with-resources. > Refactor AppLogAggregatorImpl#uploadLogsForContainers > - > > Key: YARN-6288 > URL: https://issues.apache.org/jira/browse/YARN-6288 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Labels: supportability > Attachments: YARN-6288.01.patch > > > In AppLogAggregatorImpl.java, if an exception occurs in writing container log > to remote filesystem, the exception is not caught and ignored. > https://github.com/apache/hadoop/blob/f59e36b4ce71d3019ab91b136b6d7646316954e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java#L398 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
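On the try-with-resources point above: in the Java language itself, a factory method that throws IOException can still appear in a resource header — if it throws, the exception propagates (or is caught by the try's own catch clause) before the body runs, and close() is invoked only when the resource was actually created. A self-contained sketch with hypothetical stand-in types ({{LogWriterLike}} and {{createWriter}} are illustrative assumptions, not Hadoop's actual LogWriter/createLogWriter, which may have other constraints that block this shape):

```java
import java.io.Closeable;
import java.io.IOException;

public class TryWithResourcesSketch {
    // Hypothetical stand-in for a log writer; not the Hadoop LogWriter API.
    static class LogWriterLike implements Closeable {
        final StringBuilder out = new StringBuilder();
        void write(String s) { out.append(s); }
        @Override public void close() { /* release underlying stream */ }
    }

    // Factory that, like createLogWriter(), declares IOException.
    static LogWriterLike createWriter(boolean fail) throws IOException {
        if (fail) {
            throw new IOException("cannot create writer");
        }
        return new LogWriterLike();
    }

    // The throwing factory sits directly in the resource header. When it
    // throws, the body never runs, close() is never called, and the try's
    // own catch clause handles the IOException.
    static String upload(boolean fail) {
        try (LogWriterLike writer = createWriter(fail)) {
            writer.write("container logs");
            return writer.out.toString();
        } catch (IOException e) {
            return "failed: " + e.getMessage();
        }
    }
}
```

This only demonstrates the language behavior under discussion; whether it fits the actual control flow in AppLogAggregatorImpl#uploadLogsForContainers is the question the patch review is working through.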
[jira] [Commented] (YARN-5956) Refactor ClientRMService
[ https://issues.apache.org/jira/browse/YARN-5956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902833#comment-15902833 ] Sunil G commented on YARN-5956: --- Yes [~rohithsharma], thanks for pointing that out. In {{moveApplicationAcrossQueues}} and {{updateApplicationPriority}}, we should pass *application.getApplicationId()* instead of applicationId. > Refactor ClientRMService > > > Key: YARN-5956 > URL: https://issues.apache.org/jira/browse/YARN-5956 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Kai Sasaki >Assignee: Kai Sasaki >Priority: Minor > Attachments: YARN-5956.01.patch, YARN-5956.02.patch, > YARN-5956.03.patch, YARN-5956.04.patch, YARN-5956.05.patch, > YARN-5956.06.patch, YARN-5956.07.patch, YARN-5956.08.patch, > YARN-5956.09.patch, YARN-5956.10.patch, YARN-5956.11.patch, > YARN-5956.12.patch, YARN-5956.13.patch > > > Some refactoring can be done in {{ClientRMService}}. > - Remove redundant variable declaration > - Fill in missing javadocs > - Proper variable access modifier > - Fix some typos in method name and exception messages -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6290) Capacity scheduler page broken
[ https://issues.apache.org/jira/browse/YARN-6290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902828#comment-15902828 ] Léopold Boudard commented on YARN-6290: --- [~seaokcs] Indeed, it's the capacity scheduler that's being used here. > Capacity scheduler page broken > -- > > Key: YARN-6290 > URL: https://issues.apache.org/jira/browse/YARN-6290 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.7.3 >Reporter: Léopold Boudard > > Hello, > I have an issue very similar to > https://issues.apache.org/jira/browse/YARN-3478 > in that I cannot access the scheduler interface in the YARN webapp, > except that the traceback is a bit different (a NullPointerException) and could be > caused by stale queue configuration. > I suspect QueueCapacitiesInfo.java is initializing the info object with a null value > for some reason. > Traceback below: > ``` > 2017-03-06 10:20:00,945 ERROR webapp.Dispatcher > (Dispatcher.java:service(162)) - error handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:153) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at > com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263) > at > com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178) > at > com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900) > at >
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178) > at > com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795) > at > com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163) > at > com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58) > at > com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118) > at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:614) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:573) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.security.http.CrossOriginFilter.doFilter(CrossOriginFilter.java:95) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at >
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at > org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767) > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) > at > org.mortbay.jetty.handl