[jira] [Commented] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294747#comment-15294747
 ] 

Vinod Kumar Vavilapalli commented on YARN-4006:
---

bq. What is the question on this patch - it seems rather simple?
bq. (...) not sure what the issue is or why folks are against having an AltAuth 
option with ATS (...)

The description of the JIRA is not clear enough to understand what the real 
problem is that the patch is addressing ("They do not exactly work" - what 
doesn't work?).

Also, combining the facts that (a) some of us who have been trying to push for 
progress don't know enough about AltKerberos, and (b) the attached patches 
have neither an explanation nor any tests proving that they fix a valid bug, 
we are left to guess what issue is being solved here.

> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Blocker
> Attachments: YARN-4006-branch-trunk.patch, 
> YARN-4006-branch2.6.0.patch, sample-ats-alt-auth.patch
>
>
> When attempting to use the Hadoop Alternate Authentication classes, they do 
> not exactly work with what was built in YARN-1935.
> I went ahead and made the following changes to support using a custom 
> AltKerberos DelegationToken class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
> String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: " + authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>       PseudoDelegationTokenAuthenticationHandler.class.getName());
>   LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE)
>     || (UserGroupInformation.isSecurityEnabled()
>         && conf.get("hadoop.security.authentication")
>             .equals(KerberosAuthenticationHandler.TYPE))) {
>   if (!authType.equals(KerberosAuthenticationHandler.TYPE)) {
>     // A custom (e.g. AltKerberos) handler class is configured: keep it.
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE, authType);
>     LOG.info("AuthType: " + authType);
>   } else {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>         KerberosDelegationTokenAuthenticationHandler.class.getName());
>     LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   }
>   // Resolve _HOST into the bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>       filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
>     try {
>       principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
>     } catch (IOException ex) {
>       throw new RuntimeException(
>           "Could not resolve Kerberos principal name: " + ex.toString(), ex);
>     }
>     filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL, principal);
>   }
> }
> {code}
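> For readers unfamiliar with AltKerberos: a custom handler is a subclass of 
> Hadoop's {{AltKerberosAuthenticationHandler}}, along the lines of the minimal 
> sketch below (the class name, header, and fallback details are illustrative 
> assumptions, not part of the attached patches):
> {code}
> import java.io.IOException;
> import javax.servlet.http.HttpServletRequest;
> import javax.servlet.http.HttpServletResponse;
> import org.apache.hadoop.security.authentication.client.AuthenticationException;
> import org.apache.hadoop.security.authentication.server.AltKerberosAuthenticationHandler;
> import org.apache.hadoop.security.authentication.server.AuthenticationToken;
>
> public class SampleAltAuthHandler extends AltKerberosAuthenticationHandler {
>   @Override
>   public AuthenticationToken alternateAuthenticate(HttpServletRequest request,
>       HttpServletResponse response) throws IOException, AuthenticationException {
>     // Invoked for browser clients; non-browser clients still fall back to
>     // the Kerberos/SPNEGO logic in the base class.
>     String user = request.getHeader("X-Authenticated-User"); // hypothetical SSO header
>     if (user == null) {
>       response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
>       return null;
>     }
>     return new AuthenticationToken(user, user, getType());
>   }
> }
> {code}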



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-857) Localization failures should be available in container diagnostics

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294743#comment-15294743
 ] 

Hadoop QA commented on YARN-857:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 0 new + 83 unchanged - 15 fixed = 83 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 29s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805381/YARN-857-20160520.txt 
|
| JIRA Issue | YARN-857 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dbc8990196af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 500e946 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11605/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11605/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Localization failures should be available in container diagnostics

[jira] [Created] (YARN-5122) "yarn logs" for running containers should print an explicit footer saying that the log may be incomplete

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created YARN-5122:
-

 Summary: "yarn logs" for running containers should print an 
explicit footer saying that the log may be incomplete
 Key: YARN-5122
 URL: https://issues.apache.org/jira/browse/YARN-5122
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli


We can have a footer of the sort {quote}[This log file belongs to a running 
container and so may not be complete.]{quote}
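
A minimal sketch of where such a footer could be emitted (names are 
illustrative; this is not the actual implementation):
{code}
// Hypothetical sketch for the "yarn logs" CLI: after dumping the log of a
// container that is still RUNNING, print an explicit footer so users know
// the output may be truncated.
if (containerReport.getContainerState() == ContainerState.RUNNING) {
  out.println("[This log file belongs to a running container "
      + "and so may not be complete.]");
}
{code}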



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5115) Security risk by using CONTENT-DISPOSITION header in the container-logs web-service

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5115:
--
Issue Type: Sub-task  (was: Bug)
Parent: YARN-4904

> Security risk by using CONTENT-DISPOSITION header in the container-logs 
> web-service
> ---
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5115.1.patch
>
>
> In NMWebService/AHSWebservice, we have used the CONTENT-DISPOSITION header to 
> show/download container logs. It looks like this has security risks, and 
> people have devised content-disposition hacks.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}
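> A common mitigation (a sketch of the general technique, not necessarily what 
> the attached patch does; names like containerId are placeholders) is to avoid 
> reflecting untrusted input into the header and to sanitize the suggested 
> filename:
> {code}
> // Illustrative sketch: build Content-Disposition from a sanitized,
> // server-controlled name instead of echoing user-supplied input.
> String safeName = containerId.toString().replaceAll("[^A-Za-z0-9._-]", "_");
> response.setContentType("text/plain; charset=utf-8");
> response.setHeader("Content-Disposition",
>     "attachment; filename=\"" + safeName + ".log\"");
> {code}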



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5115) Security risk by using CONTENT-DISPOSITION header in the container-logs web-service

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5115:
--
Summary: Security risk by using CONTENT-DISPOSITION header in the 
container-logs web-service  (was: Security risk by using CONTENT-DISPOSITION 
header)

> Security risk by using CONTENT-DISPOSITION header in the container-logs 
> web-service
> ---
>
> Key: YARN-5115
> URL: https://issues.apache.org/jira/browse/YARN-5115
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5115.1.patch
>
>
> In NMWebService/AHSWebservice, we have used the CONTENT-DISPOSITION header to 
> show/download container logs. It looks like this has security risks, and 
> people have devised content-disposition hacks.
> The HTTP 1.1 Standard (RFC 2616) also mentions the possible security side 
> effects of content disposition:
> {code}
> 15.5 Content-Disposition Issues
>RFC 1806 [35], from which the often implemented Content-Disposition
>(see section 19.5.1) header in HTTP is derived, has a number of very
>serious security considerations. Content-Disposition is not part of
>the HTTP standard, but since it is widely implemented, we are
>documenting its use and risks for implementors. See RFC 2183 [49]
>(which updates RFC 1806) for details.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-5088:
--
Summary: Improve "yarn log" command-line to read the last K bytes for the 
log files  (was: Improve Yarn log Command line to read the last K bytes for the 
log files)

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-857) Localization failures should be available in container diagnostics

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-857:
-
Attachment: YARN-857-20160520.txt

Updated patch fixing the checkstyle issues.

[~vvasudev], can you please look at this? Tx.

> Localization failures should be available in container diagnostics
> --
>
> Key: YARN-857
> URL: https://issues.apache.org/jira/browse/YARN-857
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Shah
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-857-20160404.txt, YARN-857-20160405.txt, 
> YARN-857-20160515.txt, YARN-857-20160520.txt, YARN-857.1.patch, 
> YARN-857.2.patch
>
>
> at 
> org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:978)
> Traced this down to DefaultExecutor which does not look at the exit code for 
> the localizer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4837:
--
Attachment: YARN-4837-20160520.txt

Here's an updated patch with fixes and a new test.

[~rohithsharma]
bq. One suggestion is can default value for threshold reduce to less than 50%?
bq. 1) Should we disable am-blacklisting by default?
I would like to tackle both of these as part of YARN-4685 so that others can 
also see.

bq. 2. I am little bit confused with naming convention for blacklist with 
placesBlacklist. And is there any plan to support blacklist racks in the future?
Yes, that's the idea. In other parts of the code, this list gets passed along 
to filter both nodes and racks.

[~leftnoteasy]
Regarding renames, I've included the ones you pointed out. There are a lot 
more to be done, but I deliberately avoided them given the current size of the 
patch.

Addressed other comments.

bq. 6) ResourceBlacklistRequest -> (Resource)Place(ment)BlacklistRequest?
This is public API, cannot rename it now.

Created a new TestNodeBlacklistingOnAMFailures and moved the existing tests 
from TestAMRestart into this new class. 
testAMBlacklistPreventsRestartOnSameNodeForMinicluster() was a bogus test, so 
I removed it.

[~sunilg]
bq. 1. yarn.resourcemanager.am-scheduling.node-blacklisting-enabled and 
yarn.resourcemanager.am-scheduling.node-blacklisting-disable-threshold to be 
added in yarn-default.xml.
Again, I deliberately deleted them for now. I'd like to discuss their 
re-addition as part of the outcome for YARN-4685.

> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.txt
>
>
> Was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting feature', I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294698#comment-15294698
 ] 

Allen Wittenauer edited comment on YARN-4887 at 5/21/16 4:43 AM:
-

That's not a Yetus thing.  That's either a build system or just how Javadoc 
works thing.  (Pretty sure it is the latter: building javadoc requires classes 
built for method resolution.)

Also:

http://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/yarn/api/records/ResourceRequest.html

(among other links)

So yes, you need to fix your javadoc failures.


was (Author: aw):
That's not a Yetus thing.  That's either a build system or just how Javadoc 
works thing.  (Pretty sure it is the latter: building javadoc requires classes 
built for method resolution.)

Also:

http://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/yarn/api/records/ResourceRequest.html

(among other links)

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of an ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.
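> As a rough illustration of the proposed shape (the method and field names 
> below are assumptions, not the final API), the AM would tag each request with 
> an id that the RM echoes back on the allocated container:
> {code}
> // Hypothetical sketch of the delta-allocate idea: the AM assigns an id to
> // each ResourceRequest, and the RM echoes it back on the Container so the
> // AM can match allocations to the requests that produced them.
> ResourceRequest req = ResourceRequest.newInstance(
>     Priority.newInstance(1), ResourceRequest.ANY,
>     Resource.newInstance(1024, 1), 1);
> req.setAllocationRequestId(42L); // proposed id field; name is an assumption
>
> for (Container c : allocateResponse.getAllocatedContainers()) {
>   if (c.getAllocationRequestId() == 42L) { // id echoed back by the RM
>     // This container satisfies the request tagged above.
>   }
> }
> {code}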



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294698#comment-15294698
 ] 

Allen Wittenauer edited comment on YARN-4887 at 5/21/16 4:42 AM:
-

That's not a Yetus thing.  That's either a build system or just how Javadoc 
works thing.  (Pretty sure it is the latter: building javadoc requires classes 
built for method resolution.)

Also:

http://hadoop.apache.org/docs/r2.7.2/api/org/apache/hadoop/yarn/api/records/ResourceRequest.html

(among other links)


was (Author: aw):
That's not a Yetus thing.  That's either a build system or just how Javadoc 
works thing.  (Pretty sure it is the latter: building javadoc requires classes 
built for method resolution.)

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of an ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294698#comment-15294698
 ] 

Allen Wittenauer commented on YARN-4887:


That's not a Yetus thing.  That's either a build system or just how Javadoc 
works thing.  (Pretty sure it is the latter: building javadoc requires classes 
built for method resolution.)

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of an ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5020) Fix Documentation for Yarn Capacity Scheduler on Resource Calculator

2016-05-20 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294677#comment-15294677
 ] 

Takashi Ohnishi commented on YARN-5020:
---

Thanks [~jianhe] for committing! :)

> Fix Documentation for Yarn Capacity Scheduler on Resource Calculator
> 
>
> Key: YARN-5020
> URL: https://issues.apache.org/jira/browse/YARN-5020
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-5020.1.patch
>
>
> Documentation refers to 'DefaultResourseCalculator' - which is spelled 
> incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4006) YARN ATS Alternate Kerberos HTTP Authentication Changes

2016-05-20 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294670#comment-15294670
 ] 

Greg Senia commented on YARN-4006:
--

[~lmccay] or [~aw], let me know what I need to do to get this fix committed. 
My previous employer ran into this issue back when we migrated to HDP 2.2 and 
enabled ATS. I attempted to get HWX to take the fix; I'm not sure what the 
issue is or why folks are against having an AltAuth option with ATS. 
Fortunately, my new employer uses Kerberos throughout their entire 
environment. My past employer took this fix into HDP 2.3 and compiled it in, 
since it has not been committed into the mainline.



> YARN ATS Alternate Kerberos HTTP Authentication Changes
> ---
>
> Key: YARN-4006
> URL: https://issues.apache.org/jira/browse/YARN-4006
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: security, timelineserver
>Affects Versions: 2.5.0, 2.6.0, 2.7.0, 2.5.1, 2.6.1, 2.8.0, 2.7.1, 2.7.2
>Reporter: Greg Senia
>Assignee: Greg Senia
>Priority: Blocker
> Attachments: YARN-4006-branch-trunk.patch, 
> YARN-4006-branch2.6.0.patch, sample-ats-alt-auth.patch
>
>
> When attempting to use the Hadoop Alternate Authentication classes, they do 
> not exactly work with what was built in YARN-1935.
> I went ahead and made the following changes to support using a custom 
> AltKerberos DelegationToken class.
> Changes to: TimelineAuthenticationFilterInitializer.class
> {code}
> String authType = filterConfig.get(AuthenticationFilter.AUTH_TYPE);
> LOG.info("AuthType Configured: " + authType);
> if (authType.equals(PseudoAuthenticationHandler.TYPE)) {
>   filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>       PseudoDelegationTokenAuthenticationHandler.class.getName());
>   LOG.info("AuthType: PseudoDelegationTokenAuthenticationHandler");
> } else if (authType.equals(KerberosAuthenticationHandler.TYPE)
>     || (UserGroupInformation.isSecurityEnabled()
>         && conf.get("hadoop.security.authentication")
>             .equals(KerberosAuthenticationHandler.TYPE))) {
>   if (!authType.equals(KerberosAuthenticationHandler.TYPE)) {
>     // A custom (e.g. AltKerberos) handler class is configured: keep it.
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE, authType);
>     LOG.info("AuthType: " + authType);
>   } else {
>     filterConfig.put(AuthenticationFilter.AUTH_TYPE,
>         KerberosDelegationTokenAuthenticationHandler.class.getName());
>     LOG.info("AuthType: KerberosDelegationTokenAuthenticationHandler");
>   }
>   // Resolve _HOST into the bind address
>   String bindAddress = conf.get(HttpServer2.BIND_ADDRESS);
>   String principal =
>       filterConfig.get(KerberosAuthenticationHandler.PRINCIPAL);
>   if (principal != null) {
>     try {
>       principal = SecurityUtil.getServerPrincipal(principal, bindAddress);
>     } catch (IOException ex) {
>       throw new RuntimeException(
>           "Could not resolve Kerberos principal name: " + ex.toString(), ex);
>     }
>     filterConfig.put(KerberosAuthenticationHandler.PRINCIPAL, principal);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294663#comment-15294663
 ] 

Arun Suresh commented on YARN-5113:
---

[~kkaranasos], Looks like {{TestYarnConfigurationFields}} is a legitimate error.

Can you please run the following testcases manually and ensure they pass as 
well (some of them should have been triggered already, but just to be sure):
* TestQueuingContainerManager
* TestDistributedScheduling
* TestLocalScheduler (By the way, I guess this should be refactored to 
TestDistributedScheduler)
* TestNodeQueueLoadMonitor

Some of the javadoc errors are legit; the rest mostly relate to generated 
sources.
I think all the checkstyle issues are fixable.

+1, pending the above

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294660#comment-15294660
 ] 

Hadoop QA commented on YARN-5117:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 49s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 6s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805364/YARN-5117.001.patch |
| JIRA Issue | YARN-5117 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0b2d8215d979 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 500e946 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11603/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11603/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11603/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11603/console |

[jira] [Commented] (YARN-5116) Failed to execute "yarn application"

2016-05-20 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294658#comment-15294658
 ] 

Jun Gong commented on YARN-5116:


Thanks [~sunilg] for the information. Yes, the subcommands ({{application}}, 
{{applicationattempt}}, and {{container}}) have similar problems.

> Failed to execute "yarn application"
> 
>
> Key: YARN-5116
> URL: https://issues.apache.org/jira/browse/YARN-5116
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5116.01.patch
>
>
> Use the trunk code.
> {code}
> $ bin/yarn application -list
> 16/05/20 11:35:45 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Exception in thread "main" 
> org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: -list
>   at org.apache.commons.cli.Parser.processOption(Parser.java:363)
>   at org.apache.commons.cli.Parser.parse(Parser.java:199)
>   at org.apache.commons.cli.Parser.parse(Parser.java:85)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:172)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:90)
> {code}
> It is caused by the subcommand 'application' being deleted from the command 
> args. The following command works:
> {code}
> $ bin/yarn application application -list
> 16/05/20 11:39:35 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Total number of applications (application-types: [] and states: [SUBMITTED, 
> ACCEPTED, RUNNING]):0
> Application-Id  Application-Name  Application-Type  User  Queue  State  
> Final-State  Progress  Tracking-URL
> {code}
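> One possible shape of a fix (illustrative only; the actual patch may differ) 
> is to restore the subcommand if the launcher consumed it, before the options 
> parser runs:
> {code}
> // Hypothetical sketch: args arrives as ["-list"] instead of
> // ["application", "-list"], so re-insert the subcommand before parsing.
> static String[] preProcessArgs(String cmd, String[] args) {
>   if (args.length == 0 || !args[0].equals(cmd)) {
>     String[] newArgs = new String[args.length + 1];
>     newArgs[0] = cmd; // e.g. "application"
>     System.arraycopy(args, 0, newArgs, 1, args.length);
>     return newArgs;
>   }
>   return args;
> }
> {code}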



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-20 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: YARN-5117.001.patch

Attaching a patch. I changed the way we calculate the allocated CPU usage of a 
given ProcessTreeInfo. The returned value is still normalized between 0.0 and 
1.0.
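
Roughly, the intent is the following (a sketch with illustrative names; the 
real change is in the attached patch):
{code}
// Hypothetical sketch: a container's allocated CPU expressed as a fraction
// of the node's total vcores, clamped to [0.0, 1.0].
static float allocatedCpuFraction(int containerVCores, int nodeVCores) {
  float ratio = (float) containerVCores / nodeVCores;
  return Math.min(1.0f, Math.max(0.0f, ratio));
}
{code}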

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294630#comment-15294630
 ] 

Hadoop QA commented on YARN-4887:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 1 new + 
42 unchanged - 0 fixed = 43 total (was 42) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 20s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
10 new + 5406 unchanged - 0 fixed = 5416 total (was 5406) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 6s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805355/YARN-4887-v2.patch |
| JIRA Issue | YARN-4887 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux ac1184d7731d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 500e946 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294617#comment-15294617
 ] 

Hadoop QA commented on YARN-5113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 19 new + 
403 unchanged - 44 fixed = 422 total (was 447) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 54s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 104 new + 1169 unchanged - 104 fixed = 1273 total (was 1273) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 54s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 7 new + 585 unchanged - 7 fixed = 592 total (was 592) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 37s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 4s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed 

[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294612#comment-15294612
 ] 

Subru Krishnan commented on YARN-4887:
--

[~cnauroth]/[~aw], any idea why the Yetus Javadoc check is picking up 
generated test-classes from the target dir?

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of an ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4887:
-
Attachment: YARN-4887-v2.patch

Good catch, [~leftnoteasy]. PFA an updated patch (v2) that addresses your 
concern.

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch, YARN-4887-v2.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of an ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294610#comment-15294610
 ] 

Joep Rottinghuis commented on YARN-5109:


Seems sensible. Looking forward to seeing it in context in the patch.



> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
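> A sketch of the kind of encoding that avoids this (illustrative; not 
> necessarily the committed fix) is to serialize the long timestamp to its 8 
> bytes and escape any byte that collides with a separator before joining it 
> into the column name:
> {code}
> // Hypothetical sketch: escape separator bytes ('!' and '=') in the 8-byte
> // timestamp so the stored qualifier can later be split on separators safely.
> byte[] raw = Bytes.toBytes(timestamp); // org.apache.hadoop.hbase.util.Bytes
> ByteArrayOutputStream out = new ByteArrayOutputStream();
> for (byte b : raw) {
>   if (b == (byte) '!' || b == (byte) '=' || b == (byte) '%') {
>     out.write('%'); // escape marker; the exact scheme is an assumption
>   }
>   out.write(b);
> }
> byte[] encoded = out.toByteArray(); // decoding reverses the escaping
> {code}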



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294601#comment-15294601
 ] 

Varun Saxena edited comment on YARN-5109 at 5/21/16 12:56 AM:
--

[~sjlee0],
Yes, type safety will have to be ensured within this encode via multiple 
instanceof checks, which I agree is not the best solution.
The specific function I am talking about is 
{{TimelineFilterUtils#createFiltersFromColumnQualifiers}}. This is used for 
events and relations. Event filters and relation filters cannot be applied 
using the HBase SingleColumnValueFilter, so we fetch all the columns specified 
in the event filters and relation filters (i.e. events in event filters or 
entity types in relation filters). So for event filters we will have to create 
a QualifierFilter with the prefix {{e!eventId=}}.

I mentioned {{Object... params}} as that is what came to my mind just before 
signing off for the day.

But on second thought, I think we can have a switch case based on the column 
prefix and construct EventColumnName from there. We will have only 2 cases 
here other than the default (i.e. ApplicationColumnPrefix.EVENT and 
EntityColumnPrefix.EVENT). The number of cases in this switch should not 
become humongous even from a long-term perspective, and if it does, we can 
revisit the solution then.
I will go with this approach now.
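(As a sketch of that dispatch, with the class and enum names below being 
assumptions:)
{code}
// Hypothetical sketch: build the qualifier prefix differently when the
// column prefix in use is an event column.
byte[] qualifierPrefix;
if (colPrefix == ApplicationColumnPrefix.EVENT
    || colPrefix == EntityColumnPrefix.EVENT) {
  // Events: a prefix of the form "e!<eventId>=" built via EventColumnName.
  qualifierPrefix = new EventColumnName(eventId, null, null).getColumnQualifier();
} else {
  qualifierPrefix = Bytes.toBytes(filterValue);
}
{code}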


was (Author: varun_saxena):
[~sjlee0],
Yes, type safety will have to be ensured within this encode.
The specific function I am talking about is 
{{TimelineFilterUtils#createFiltersFromColumnQualifiers}}. This is used for 
events and relations. Event filters and relation filters cannot be applied 
using HBase SingleColumnValueFilter so we fetch all the columns specified which 
are there in event filters and relation filters(i.e. events in event filters or 
entity types in relation filters). So for event filters we will have to create 
a QualifierFilter with prefix {{e!eventId=}}

I mentioned about {{Object... params}} as that is what came to my mind just 
before signing off for the day.

But on second thoughts, I think we can have a switch case based on column 
prefix and construct EventColumnName from there. We will have only 2 switch 
cases here other than default(i.e. ApplicationColumnPrefix.EVENT and 
EntityColumnPrefix.EVENT). The number of cases should not become humongous in 
this switch case even from a long term perspective. And if it does, we can 
revisit on a solution then.
I will go with this approach now.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294603#comment-15294603
 ] 

Varun Saxena commented on YARN-5109:


[~sjlee0], yeah, this doesn't break anything. I was just curious to know why
the approaches differed.
The above makes sense.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294601#comment-15294601
 ] 

Varun Saxena edited comment on YARN-5109 at 5/21/16 12:52 AM:
--

[~sjlee0],
Yes, type safety will have to be ensured within this encode.
The specific function I am talking about is
{{TimelineFilterUtils#createFiltersFromColumnQualifiers}}. It is used for
events and relations. Event filters and relation filters cannot be applied
using HBase SingleColumnValueFilter, so we fetch all the columns specified in
the event and relation filters (i.e. events in event filters or entity types
in relation filters). So for event filters we will have to create a
QualifierFilter with the prefix {{e!eventId=}}.
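For reference, a minimal sketch of such a prefix-based qualifier filter using
the stock HBase filter API (the literal prefix is just the example above):

{code}
import org.apache.hadoop.hbase.filter.BinaryPrefixComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Match every column whose qualifier starts with "e!eventId=", i.e. all
// (timestamp)=(event info key) columns stored for that event id.
byte[] prefix = Bytes.toBytes("e!eventId=");
QualifierFilter eventFilter = new QualifierFilter(
    CompareFilter.CompareOp.EQUAL, new BinaryPrefixComparator(prefix));
{code}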

I mentioned {{Object... params}} as that is what came to my mind just before
signing off for the day.

But on second thought, I think we can have a switch case based on the column
prefix and construct EventColumnName from there. We will have only 2 cases here
other than the default (i.e. ApplicationColumnPrefix.EVENT and
EntityColumnPrefix.EVENT). The number of cases should not become humongous even
from a long-term perspective, and if it does, we can revisit the solution then.
I will go with this approach for now.


was (Author: varun_saxena):
[~sjlee0],
Yes, type safety will have to be ensured within this encode.
The specific function I am talking about is
{{TimelineFilterUtils#createFiltersFromColumnQualifiers}}. It is used for
events and relations. Event filters and relation filters cannot be applied
using HBase SingleColumnValueFilter, so we fetch all the columns specified in
the event and relation filters (i.e. events in event filters or entity types
in relation filters).

I mentioned {{Object... params}} as that is what came to my mind just before
signing off for the day.

But on second thought, I think we can have a switch case based on the column
prefix and construct EventColumnName from there. We will have only 2 cases here
other than the default (i.e. ApplicationColumnPrefix.EVENT and
EntityColumnPrefix.EVENT). The number of cases should not become humongous even
from a long-term perspective, and if it does, we can revisit the solution then.
I will go with this approach for now.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294601#comment-15294601
 ] 

Varun Saxena commented on YARN-5109:


[~sjlee0],
Yes, type safety will have to be ensured within this encode.
The specific function I am talking about is
{{TimelineFilterUtils#createFiltersFromColumnQualifiers}}. It is used for
events and relations. Event filters and relation filters cannot be applied
using HBase SingleColumnValueFilter, so we fetch all the columns specified in
the event and relation filters (i.e. events in event filters or entity types
in relation filters).

I mentioned {{Object... params}} as that is what came to my mind just before
signing off for the day.

But on second thought, I think we can have a switch case based on the column
prefix and construct EventColumnName from there. We will have only 2 cases here
other than the default (i.e. ApplicationColumnPrefix.EVENT and
EntityColumnPrefix.EVENT). The number of cases should not become humongous even
from a long-term perspective, and if it does, we can revisit the solution then.
I will go with this approach for now.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294600#comment-15294600
 ] 

Joep Rottinghuis commented on YARN-5109:


Agreed with [~sjlee0] that we'd like to avoid non-type-safe conversions. Would
indeed like to see where exactly the challenge lies. Perhaps the filters can be
parameterized; hard to say without understanding the exact use case.
I'm sure this ends up as a non-trivial refactor considering the various cases:
prefix or not, compound column keys or plain strings, row keys, and on top of
that filters...

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294595#comment-15294595
 ] 

Joep Rottinghuis commented on YARN-5109:


bq. The main motivation for reversing the user and the cluster in the entity 
table is to accommodate the fact that the table can get real large and we 
wanted to provide good partitioning by using the user dimension rather than the 
cluster dimension. We preserved the original order (cluster and then user) for 
the application table.

Indeed. Aside from sheer size, which would be roughly equal in both cases, we
especially want to avoid hotspotting during writes with a very high update
volume. Even if we can keep the total size under control with an expiration
policy, the fact remains that if there is an especially large cluster (and/or
just one cluster), the cluster prefix doesn't really help spread the load. If
the user is first, we at least "salt" the key with the user, so the load gets
spread across the various users.
Of course, the same issue could happen the other way around: if somebody runs
many clusters and all of them run jobs emitting entities (with metric time
series data) as one single user (let's say "hadoop"), we'd still hotspot.

The volume for the application table _should_ be smaller. In a multi-cluster 
setup, the load would also spread there. Range scans per cluster would be more 
efficient over the application table, at the cost of reduced parallelism during 
writes.
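As a rough illustration of the two key orders being discussed (separator
encoding and the remaining row key parts are elided; this is not the exact
layout):

{code}
// Entity table: user first, so a heavy write load spreads across users.
byte[] entityRowKeyPrefix(String user, String cluster, String flow) {
  return Bytes.toBytes(user + "!" + cluster + "!" + flow);
}

// Application table: original cluster-first order, friendlier to
// per-cluster range scans.
byte[] appRowKeyPrefix(String user, String cluster, String flow) {
  return Bytes.toBytes(cluster + "!" + user + "!" + flow);
}
{code}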

bq. The bottom line is that since this was the intended design and nothing is 
broken, we should not revisit it as part of this JIRA. Let me know if that is 
OK with you guys.
+1 agreed.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5095) flow activities and flow runs are populated with wrong timestamp when RM restarts w/ recovery enabled

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294593#comment-15294593
 ] 

Varun Saxena commented on YARN-5095:


Ok then, let's go with my first option. I will take the start time from
appState (populated from the state store) and set it before the call that
starts the timeline collector. I will explicitly call the setStartTime method
during recovery, without touching the constructor.
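A minimal sketch of that recovery path (the helper names below are assumed for
illustration, not the final patch):

{code}
// On recovery only: restore the original start time from the state store
// *before* the timeline collector is started, so that postPut publishes the
// recovered start time as the flow run ID instead of the current time.
ApplicationStateData appState =
    rmState.getApplicationState().get(applicationId);
rmApp.setStartTime(appState.getStartTime()); // assumed setter
startTimelineCollector(rmApp);               // assumed helper; triggers postPut
{code}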


> flow activities and flow runs are populated with wrong timestamp when RM 
> restarts w/ recovery enabled
> -
>
> Key: YARN-5095
> URL: https://issues.apache.org/jira/browse/YARN-5095
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> I have the RM recovery enabled. I see that upon restart the RM populates 
> records into flow activity and flow runs but with *wrong* timestamps. What I 
> mean by the timestamp is the part of the row key:
> - flow activity: row created with the day of the RM restart
> - flow run: row created with the RM start time as the "run id"
> The following illustrates an example flow run:
> {noformat}
> metrics: [ ],
> events: [ ],
> id: "sjlee@Sleep job/1463433569917",
> type: "YARN_FLOW_RUN",
> createdtime: 1463422860987,
> info: {
> UID: "yarn_cluster!sjlee!Sleep job!1463433569917",
> SYSTEM_INFO_FLOW_RUN_ID: 1463433569917,
> SYSTEM_INFO_FLOW_NAME: "Sleep job",
> SYSTEM_INFO_FLOW_RUN_END_TIME: 1463422865033,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> {noformat}
> The created time and the end time are correct (i.e. original time), whereas 
> the timestamp in the row key (= run id: 1463433569917) is actually later than 
> the end time and coincides with the RM restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5119) Timeout HA issue: Enable IPC ping for all calls by default

2016-05-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5119:
-
Component/s: nodemanager
 client

> Timeout HA issue: Enable IPC ping for all calls by default
> --
>
> Key: YARN-5119
> URL: https://issues.apache.org/jira/browse/YARN-5119
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, nodemanager
>Reporter: Giovanni Matteo Fumarola
>
> We have an RM work-preserving HA setup with 2 RM instances, RM1 and RM2. RM1 
> is initially active, so clients connect successfully to RM1. RM1 subsequently 
> hangs (for any reason) and RM2 takes over as active, but clients wait 
> indefinitely on RM1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5119) Timeout HA issue: Enable IPC ping for all calls by default

2016-05-20 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5119:
-
Issue Type: Bug  (was: Improvement)

> Timeout HA issue: Enable IPC ping for all calls by default
> --
>
> Key: YARN-5119
> URL: https://issues.apache.org/jira/browse/YARN-5119
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Giovanni Matteo Fumarola
>
> We have an RM work-preserving HA setup with 2 RM instances, RM1 and RM2. RM1 
> is initially active, so clients connect successfully to RM1. RM1 subsequently 
> hangs (for any reason) and RM2 takes over as active, but clients wait 
> indefinitely on RM1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5120) Metric for RM async dispatcher queue size

2016-05-20 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-5120:
--

 Summary: Metric for RM async dispatcher queue size
 Key: YARN-5120
 URL: https://issues.apache.org/jira/browse/YARN-5120
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Giovanni Matteo Fumarola
Priority: Minor


It is difficult to identify the health of the RM AsyncDispatcher. 
Solution: Add a metric for the AsyncDispatcher queue size. 
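One possible shape for such a metric, using the Hadoop metrics2 annotations
(the class and its wiring below are assumptions for illustration, not a
committed design):

{code}
import java.util.concurrent.BlockingQueue;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.yarn.event.Event;

@Metrics(context = "yarn")
public class AsyncDispatcherMetrics {
  private final BlockingQueue<Event> eventQueue;

  public AsyncDispatcherMetrics(BlockingQueue<Event> eventQueue) {
    this.eventQueue = eventQueue;
  }

  // Sampled on every metrics snapshot; reflects the dispatcher backlog.
  @Metric("Current size of the RM AsyncDispatcher event queue")
  public int getEventQueueSize() {
    return eventQueue.size();
  }
}
{code}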



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5119) Timeout HA issue: Enable IPC ping for all calls by default

2016-05-20 Thread Giovanni Matteo Fumarola (JIRA)
Giovanni Matteo Fumarola created YARN-5119:
--

 Summary: Timeout HA issue: Enable IPC ping for all calls by default
 Key: YARN-5119
 URL: https://issues.apache.org/jira/browse/YARN-5119
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Giovanni Matteo Fumarola


We have an RM work-preserving HA setup with 2 RM instances, RM1 and RM2. RM1 is
initially active, so clients connect successfully to RM1. RM1 subsequently
hangs (for any reason) and RM2 takes over as active, but clients wait
indefinitely on RM1.
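The client-side knob the title presumably refers to is the existing
{{ipc.client.ping}} setting; a minimal illustration of setting it
programmatically (my reading of the proposal, not code from the JIRA):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Illustration only: with pings enabled, a client talking to a hung RM can
// detect the dead connection and fail over instead of blocking forever.
Configuration conf = new YarnConfiguration();
conf.setBoolean(CommonConfigurationKeys.IPC_CLIENT_PING_KEY, true); // "ipc.client.ping"
{code}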



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5095) flow activities and flow runs are populated with wrong timestamp when RM restarts w/ recovery enabled

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294577#comment-15294577
 ] 

Sangjin Lee commented on YARN-5095:
---

The main reason I went with the start time as opposed to the submit time in
{{RMTimelineCollectorManager}} as the default flow run id is that the submit
time is not available in {{ApplicationReport}}, which we rely on in some of our
unit tests such as {{TestMRTimelineEventHandling}}.

> flow activities and flow runs are populated with wrong timestamp when RM 
> restarts w/ recovery enabled
> -
>
> Key: YARN-5095
> URL: https://issues.apache.org/jira/browse/YARN-5095
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> I have the RM recovery enabled. I see that upon restart the RM populates 
> records into flow activity and flow runs but with *wrong* timestamps. What I 
> mean by the timestamp is the part of the row key:
> - flow activity: row created with the day of the RM restart
> - flow run: row created with the RM start time as the "run id"
> The following illustrates an example flow run:
> {noformat}
> metrics: [ ],
> events: [ ],
> id: "sjlee@Sleep job/1463433569917",
> type: "YARN_FLOW_RUN",
> createdtime: 1463422860987,
> info: {
> UID: "yarn_cluster!sjlee!Sleep job!1463433569917",
> SYSTEM_INFO_FLOW_RUN_ID: 1463433569917,
> SYSTEM_INFO_FLOW_NAME: "Sleep job",
> SYSTEM_INFO_FLOW_RUN_END_TIME: 1463422865033,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> {noformat}
> The created time and the end time are correct (i.e. original time), whereas 
> the timestamp in the row key (= run id: 1463433569917) is actually later than 
> the end time and coincides with the RM restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-05-20 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Attachment: YARN-5113.003.patch

Attaching new patch (had forgotten to include some files in the last version).

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294559#comment-15294559
 ] 

Sangjin Lee commented on YARN-5109:
---

Hmm, could you point to the specific method where the proposed {{byte[]
encode(T key)}} would not work and we would need one that takes an {{Object}}
array? That's a bit worrisome, as {{Object... params}} is not really type-safe
and can be error-prone.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294555#comment-15294555
 ] 

Sangjin Lee commented on YARN-5109:
---

[~jrottinghuis], [~vrushalic], and I dug a little bit, and it appears to be 
intentional. See YARN-3906 and YARN-3815 (see [the attached 
doc|https://issues.apache.org/jira/secure/attachment/12743391/hbase-schema-proposal-for-aggregation.pdf]).
 The main motivation for reversing the user and the cluster in the entity table 
is to accommodate the fact that the table can get real large and we wanted to 
provide good partitioning by using the user dimension rather than the cluster 
dimension. We preserved the original order (cluster and then user) for the 
application table.

The bottom line is that since this was the intended design and nothing is 
broken, we should not revisit it as part of this JIRA. Let me know if that is 
OK with you guys.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5095) flow activities and flow runs are populated with wrong timestamp when RM restarts w/ recovery enabled

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294513#comment-15294513
 ] 

Varun Saxena edited comment on YARN-5095 at 5/20/16 11:20 PM:
--

Had a look at the code for this.
We start the timeline collector right after creating the RMAppImpl object in
RMAppManager#createAndPopulateNewRMApp. During the start of the timeline
collector, after the collector has been added to RMTimelineCollectorManager, we
call postPut. This is where the flow run ID is set to the application start
time.
Now, while recovering an application, the RMAppImpl constructor initializes the
start time with the current time. But the start time from the state store is
only applied when the RECOVER event is handled, and that happens after the
timeline collector has been started and postPut has been called. That is why
the current system time ends up in the flow run ID.

We hence have 2 options to fix this: take the start time from the state store,
pass it into the RMAppImpl constructor as well, and set it there; or set the
flow run ID equal to the app submit time, which is already set in the RMAppImpl
constructor.
I think we can go with the latter.

Thoughts?
cc [~sjlee0]


was (Author: varun_saxena):
Had a look at the code for this.
We start the timeline collector right after creating the RMAppImpl object in
RMAppManager#createAndPopulateNewRMApp. During the start of the timeline
collector, after the collector has been added to RMTimelineCollectorManager, we
call postPut. This is where the flow run ID is set to the application start
time.
Now, while recovering an application, the RMAppImpl constructor initializes the
start time with the current time. But the start time from the state store is
only applied when the RECOVER event is handled, and that happens after the
timeline collector has been started and postPut has been called. That is why
the current system time ends up in the flow run ID.

We hence have 2 options to fix this: take the start time from the state store,
pass it into the RMAppImpl constructor as well, and set it there; or set the
flow run ID equal to the app submit time, which is already set in the RMAppImpl
constructor.
I think we can go with the latter.

Thoughts?
cc [~sjlee0]

> flow activities and flow runs are populated with wrong timestamp when RM 
> restarts w/ recovery enabled
> -
>
> Key: YARN-5095
> URL: https://issues.apache.org/jira/browse/YARN-5095
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> I have the RM recovery enabled. I see that upon restart the RM populates 
> records into flow activity and flow runs but with *wrong* timestamps. What I 
> mean by the timestamp is the part of the row key:
> - flow activity: row created with the day of the RM restart
> - flow run: row created with the RM start time as the "run id"
> The following illustrates an example flow run:
> {noformat}
> metrics: [ ],
> events: [ ],
> id: "sjlee@Sleep job/1463433569917",
> type: "YARN_FLOW_RUN",
> createdtime: 1463422860987,
> info: {
> UID: "yarn_cluster!sjlee!Sleep job!1463433569917",
> SYSTEM_INFO_FLOW_RUN_ID: 1463433569917,
> SYSTEM_INFO_FLOW_NAME: "Sleep job",
> SYSTEM_INFO_FLOW_RUN_END_TIME: 1463422865033,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> {noformat}
> The created time and the end time are correct (i.e. original time), whereas 
> the timestamp in the row key (= run id: 1463433569917) is actually later than 
> the end time and coincides with the RM restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5095) flow activities and flow runs are populated with wrong timestamp when RM restarts w/ recovery enabled

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294513#comment-15294513
 ] 

Varun Saxena commented on YARN-5095:


Had a look at the code for this.
We start the timeline collector right after creating the RMAppImpl object in
RMAppManager#createAndPopulateNewRMApp. During the start of the timeline
collector, after the collector has been added to RMTimelineCollectorManager, we
call postPut. This is where the flow run ID is set to the application start
time.
Now, while recovering an application, the RMAppImpl constructor initializes the
start time with the current time. But the start time from the state store is
only applied when the RECOVER event is handled, and that happens after the
timeline collector has been started and postPut has been called. That is why
the current system time ends up in the flow run ID.

We hence have 2 options to fix this: take the start time from the state store,
pass it into the RMAppImpl constructor as well, and set it there; or set the
flow run ID equal to the app submit time, which is already set in the RMAppImpl
constructor.
I think we can go with the latter.

Thoughts?
cc [~sjlee0]

> flow activities and flow runs are populated with wrong timestamp when RM 
> restarts w/ recovery enabled
> -
>
> Key: YARN-5095
> URL: https://issues.apache.org/jira/browse/YARN-5095
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> I have the RM recovery enabled. I see that upon restart the RM populates 
> records into flow activity and flow runs but with *wrong* timestamps. What I 
> mean by the timestamp is the part of the row key:
> - flow activity: row created with the day of the RM restart
> - flow run: row created with the RM start time as the "run id"
> The following illustrates an example flow run:
> {noformat}
> metrics: [ ],
> events: [ ],
> id: "sjlee@Sleep job/1463433569917",
> type: "YARN_FLOW_RUN",
> createdtime: 1463422860987,
> info: {
> UID: "yarn_cluster!sjlee!Sleep job!1463433569917",
> SYSTEM_INFO_FLOW_RUN_ID: 1463433569917,
> SYSTEM_INFO_FLOW_NAME: "Sleep job",
> SYSTEM_INFO_FLOW_RUN_END_TIME: 1463422865033,
> SYSTEM_INFO_USER: "sjlee"
> },
> isrelatedto: { },
> relatesto: { }
> {noformat}
> The created time and the end time are correct (i.e. original time), whereas 
> the timestamp in the row key (= run id: 1463433569917) is actually later than 
> the end time and coincides with the RM restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294508#comment-15294508
 ] 

Hadoop QA commented on YARN-1942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 34 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 50s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 59s 
{color} | {color:red} root: patch generated 110 new + 2956 unchanged - 33 fixed 
= 3066 total (was 2989) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
7s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 18s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 7s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 9s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 14s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 18s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 0s {color} | 
{color:red} 

[jira] [Reopened] (YARN-4919) Yarn logs should support a option to output logs as compressed archive

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reopened YARN-4919:
---

My mistake. This isn't a dup of YARN-1134.

That said, I'm not sure why we need this after YARN-4913. A user can simply use
the "-out" option and then zip the entire directory. I will close this instead
as Won't Fix.

> Yarn logs should support a option to output logs as compressed archive
> --
>
> Key: YARN-4919
> URL: https://issues.apache.org/jira/browse/YARN-4919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-4919) Yarn logs should support a option to output logs as compressed archive

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved YARN-4919.
---
Resolution: Won't Fix

> Yarn logs should support a option to output logs as compressed archive
> --
>
> Key: YARN-4919
> URL: https://issues.apache.org/jira/browse/YARN-4919
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294493#comment-15294493
 ] 

Hadoop QA commented on YARN-5018:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805333/YARN-5018-YARN-2928.004.patch
 |
| JIRA Issue | YARN-5018 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 486bcd8b31f0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1 in REST output

2016-05-20 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294486#comment-15294486
 ] 

Li Lu commented on YARN-5094:
-

OK, seems like we missed something here: we use the incoming container event's
timestamp as our event timestamp. However, {{ContainerEvent}} does not contain
a valid event timestamp, since it's created via AbstractEvent(type), where the
timestamp is always set to -1:
{code}
  // use this if you DON'T care about the timestamp
  public AbstractEvent(TYPE type) {
this.type = type;
// We're not generating a real timestamp here.  It's too expensive.
timestamp = -1L;
  }
{code}

However, we do have the timestamp in the RM's SystemMetricsPublisher. This
caused the difference.

I'm working on a fix now... Probably we need to add timestamp support to
{{ContainerEvent}}s so that we can use this information?
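A hedged sketch of that direction, relying on {{AbstractEvent}}'s existing
two-argument constructor (the extra {{ContainerEvent}} constructor and the
field name below are hypothetical):

{code}
// Hypothetical additional constructor: forward a real timestamp to
// AbstractEvent(TYPE, long) instead of AbstractEvent(TYPE), which
// hard-codes -1.
public ContainerEvent(ContainerId containerID, ContainerEventType eventType,
    long timestamp) {
  super(eventType, timestamp);
  this.containerID = containerID;
}
{code}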

> some YARN container events have timestamp of -1 in REST output
> --
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
>
> Some events in the YARN container entities have timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> In the YARN container page,
> {noformat}
> {
> id: "YARN_CONTAINER_CREATED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_CONTAINER_FINISHED",
> timestamp: -1,
> info: {
> YARN_CONTAINER_EXIT_STATUS: 0,
> YARN_CONTAINER_STATE: "RUNNING",
> YARN_CONTAINER_DIAGNOSTICS_INFO: ""
> }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
> timestamp: -1,
> info: { }
> },
> {
> id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
> timestamp: -1,
> info: { }
> }
> {noformat}
> I think the data itself is OK, but the values are not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1134) Add support for zipping/unzipping logs while in transit for the NM logs web-service

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294490#comment-15294490
 ] 

Vinod Kumar Vavilapalli commented on YARN-1134:
---

bq. It looks like you've implemented a feature to create a gzip'd tarball on 
the client machine. The request is to compress the log files that are served by 
the NM logs web service.
Agreed. It was my mistake to close YARN-4919 as a duplicate of this; it isn't.

> Add support for zipping/unzipping logs while in transit for the NM logs 
> web-service
> ---
>
> Key: YARN-1134
> URL: https://issues.apache.org/jira/browse/YARN-1134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Xuan Gong
> Attachments: YARN-1134.1.patch
>
>
> As [~zjshen] pointed out at 
> [YARN-649|https://issues.apache.org/jira/browse/YARN-649?focusedCommentId=13698415=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13698415],
> {quote}
> For the long running applications, they may have a big log file, such that it 
> will take a long time to download the log file via the RESTful API. 
> Consequently, HTTP connection may timeout before downloading before 
> downloading a complete log file. Maybe it is good to zip the log file before 
> sending it, and unzip it after receiving it.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3409) Add constraint node labels

2016-05-20 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294457#comment-15294457
 ] 

Naganarasimha G R commented on YARN-3409:
-

Hi [~vinodkv] & [~wangda], 
I would like to start working on this jira. If that's OK, I can come up with a 
document for it and we can discuss it further. Is that fine?

> Add constraint node labels
> --
>
> Key: YARN-3409
> URL: https://issues.apache.org/jira/browse/YARN-3409
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, capacityscheduler, client
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Specifying only one label for each node (in other words, partitioning a 
> cluster) is a way to determine how the resources of a specific set of nodes 
> can be shared by a group of entities (like teams, departments, etc.). 
> Partitions of a cluster have the following characteristics:
> - The cluster is divided into several disjoint sub-clusters.
> - ACLs/priority can apply to a partition (e.g. only the market team has 
> priority to use the partition).
> - Capacity percentages can apply to a partition (the market team has a 40% 
> minimum capacity and the dev team a 60% minimum capacity of the partition).
> Constraints are orthogonal to partitions; they describe attributes of a 
> node's hardware/software, just for affinity. Some examples of constraints:
> - glibc version
> - JDK version
> - Type of CPU (x86_64/i686)
> - Type of OS (windows, linux, etc.)
> With this, an application can ask for resources that have (glibc.version >= 
> 2.20 && JDK.version >= 8u20 && x86_64).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5018:

Attachment: YARN-5018-YARN-2928.004.patch

Fix the findbugs warnings. Sorry that was a real bug...

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch, 
> YARN-5018-YARN-2928.002.patch, YARN-5018-YARN-2928.003.patch, 
> YARN-5018-YARN-2928.004.patch
>
>
> In the app-level collector, we launch the aggregation logic immediately after 
> the collector is started. However, at this time, important context data has 
> yet to be published to the collector. Also, if the aggregation result is 
> empty, we do not need to publish it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4957) Add getNewReservation in ApplicationClientProtocol

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294423#comment-15294423
 ] 

Hadoop QA commented on YARN-4957:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} root: patch generated 0 new + 469 unchanged - 7 
fixed = 469 total (was 476) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 35s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 17s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 42s 
{color} | {color:red} hadoop-mapreduce-client-jobclient in 

[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294396#comment-15294396
 ] 

Hadoop QA commented on YARN-5018:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
51s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 54s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 52s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
|  |  Null passed for non-null parameter of 
TimelineCollector.aggregateWithoutGroupId(Map, String, String) in 

[jira] [Commented] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294395#comment-15294395
 ] 

Wangda Tan commented on YARN-4887:
--

[~subru], thanks for the patch.

1) Could we update:
{code}
201   @Public
202   @Stable
{code} 

To Public and Unstable? That way we have a chance to iterate on it in future releases.

2) 
{code}
@Public @Stable public ...
{code}
These should be on separate lines.
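
For reference, a minimal sketch of the suggested form (the method shown is 
hypothetical, for illustration only, not from the patch):
{code}
@Public
@Unstable
public abstract long getAllocationRequestId();
{code}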

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4888:
--
Summary: Changes in RM AppSchedulingInfo for identifying resource-requests 
explicitly  (was: Changes in RM AppSchedulingInfo for supporting delta protocol)

> Changes in RM AppSchedulingInfo for identifying resource-requests explicitly
> 
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in RM app scheduling data structures to 
> accomplish it. The detailed proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4889) Changes in AMRMClient for identifying resource-requests explicitly

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4889:
--
Summary: Changes in AMRMClient for identifying resource-requests explicitly 
 (was: Changes in AMRMClient for supporting delta protocol)

> Changes in AMRMClient for identifying resource-requests explicitly
> --
>
> Key: YARN-4889
> URL: https://issues.apache.org/jira/browse/YARN-4889
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AMRMClient to accomplish it. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4887) AM-RM protocol changes for identifying resource-requests explicitly

2016-05-20 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4887:
--
Summary: AM-RM protocol changes for identifying resource-requests 
explicitly  (was: AM-RM protocol changes for supporting delta protocol)

> AM-RM protocol changes for identifying resource-requests explicitly
> ---
>
> Key: YARN-4887
> URL: https://issues.apache.org/jira/browse/YARN-4887
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4887-v1.patch
>
>
> YARN-4879 proposes the addition of a simple delta allocate protocol. This 
> JIRA is to track the changes in AM-RM protocol to accomplish it. The crux is 
> the addition of ID field in ResourceRequest and Container. The detailed 
> proposal is in the parent JIRA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294374#comment-15294374
 ] 

Hadoop QA commented on YARN-5055:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 196 unchanged - 1 fixed = 196 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 56s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805310/YARN-5055.005.patch |
| JIRA Issue | YARN-5055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 64672f65420b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 500e946 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11598/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11598/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11598/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-4751) In 2.7, Labeled queue usage not shown properly in capacity scheduler UI

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294336#comment-15294336
 ] 

Hadoop QA commented on YARN-4751:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 5s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 17 new + 1298 unchanged - 4 fixed = 1315 total (was 1302) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 2933 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 6s 
{color} | {color:red} The patch has 295 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 21s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 46s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 

[jira] [Updated] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5018:

Attachment: YARN-5018-YARN-2928.003.patch

New patch to add checks in AppLevelTimelineCollector#AppLevelAggregator.run. To 
avoid data races, I also made a change to TimelineCollector so that it can 
publish the postPut event to collectors and indicate that all context info has 
been set up.
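
For illustration, a minimal sketch of this gating, assuming a postPut-style 
callback (class and method names are hypothetical, not the actual patch):
{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the actual YARN-5018 patch: gate the periodic
// aggregation task on a flag that is flipped once the postPut event has
// published the app context.
public class AppLevelAggregatorSketch implements Runnable {

  // volatile so the scheduled aggregation thread sees the flag set by the
  // thread that publishes the first entity (the postPut event)
  private volatile boolean contextReady = false;

  /** Invoked from the collector once all context info has been set up. */
  public void onContextReady() {
    contextReady = true;
  }

  @Override
  public void run() {
    if (!contextReady) {
      return; // context not yet published; skip this aggregation round
    }
    Map<String, Object> aggregated = aggregate();
    if (aggregated.isEmpty()) {
      return; // nothing to publish for an empty aggregation result
    }
    publish(aggregated);
  }

  private Map<String, Object> aggregate() {
    return new HashMap<>(); // placeholder for the real aggregation logic
  }

  private void publish(Map<String, Object> aggregated) {
    // placeholder for the write to timeline storage
  }
}
{code}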

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch, 
> YARN-5018-YARN-2928.002.patch, YARN-5018-YARN-2928.003.patch
>
>
> In the app-level collector, we launch the aggregation logic immediately after 
> the collector is started. However, at this time, important context data has 
> yet to be published to the container. Also, if the aggregation result is 
> empty, we do not need to publish it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294277#comment-15294277
 ] 

Varun Saxena commented on YARN-5109:


In fact, even getColumnPrefixBytes would not work for the code in 
TimelineFilterUtils, as we would not know which class the converter will take 
(say, EventColumnName or String). So we would probably need another method in 
the converter which takes multiple Objects as parameters, interprets them in 
sequence, and either encodes them or returns a key. Something like below. Then 
we can either pass an encoded byte array, or the Object which needs to be 
encoded, to ColumnPrefix to attach a prefix in front of the qualifier.
{code}
public interface KeyConverter<T> {
  // Option 1: encode the parts in one shot
  byte[] encode(Object... params);
  // Option 2 (alternative): build a typed key first, then encode/decode it
  T createKey(Object... params);
  byte[] encode(T key);
  T decode(byte[] bytes);
}
{code}
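
To make the collision concrete, here is a minimal sketch of the encoding half 
(hypothetical names, not the actual patch; note the real converter must also 
preserve HBase byte sort order for row keys, which plain escaping does not):
{code}
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch, not the actual YARN-5109 patch: escape the separator
// byte inside each encoded part so a raw timestamp byte that happens to be
// '=' can no longer be mistaken for a field boundary when parsing.
public final class SeparatorEscapeSketch {
  private static final byte SEP = (byte) '=';
  private static final byte ESC = (byte) '%';

  /** Replace every SEP/ESC byte with a two-byte escape sequence. */
  static byte[] escape(byte[] raw) {
    ByteArrayOutputStream out = new ByteArrayOutputStream(raw.length + 4);
    for (byte b : raw) {
      if (b == SEP || b == ESC) {
        out.write(ESC);       // escape marker
        out.write(b ^ 0x40);  // flipped byte; never equals SEP or ESC
      } else {
        out.write(b);
      }
    }
    return out.toByteArray();
  }

  /** A timestamp becomes 8 escaped bytes instead of the raw long bytes. */
  static byte[] encodeTimestamp(long ts) {
    return escape(ByteBuffer.allocate(Long.BYTES).putLong(ts).array());
  }
}
{code}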

Will look at it tomorrow. 

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5055:
--
Attachment: YARN-5055.005.patch

Fixing the indentation pushed the line over 80 characters. Now fixing that checkstyle issue...

> max per user can be larger than max per queue
> -
>
> Key: YARN-5055
> URL: https://issues.apache.org/jira/browse/YARN-5055
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Eric Badger
>Priority: Minor
> Attachments: YARN-5055.001.patch, YARN-5055.002.patch, 
> YARN-5055.003.patch, YARN-5055.004.patch, YARN-5055.005.patch
>
>
> If user limit and/or user limit factor are >100% then the calculated maximum 
> values per user can exceed the maximum for the queue.  For example, maximum 
> AM resource per user could exceed maximum AM resource for the entire queue, 
> or max applications per user could be larger than max applications for the 
> queue.  The per-user values should be capped by the per queue values.
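
For illustration, a minimal sketch of the capping rule described above (names 
are illustrative, not the actual CapacityScheduler code):
{code}
// Hypothetical sketch: whatever the user-limit computation yields, clamp it
// to the queue-wide maximum so per-user values never exceed per-queue ones.
final class UserLimitCapSketch {
  static int capPerUser(int computedPerUserMax, int queueMax) {
    return Math.min(computedPerUserMax, queueMax);
  }

  public static void main(String[] args) {
    int queueMaxApps = 100;
    // a user limit factor of 2.0 would naively allow 200 apps per user
    int naivePerUser = (int) (queueMaxApps * 2.0);
    System.out.println(capPerUser(naivePerUser, queueMaxApps)); // prints 100
  }
}
{code}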



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294224#comment-15294224
 ] 

Hadoop QA commented on YARN-5055:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 196 unchanged - 1 fixed = 197 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 0s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805293/YARN-5055.004.patch |
| JIRA Issue | YARN-5055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77c284e7e6ea 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 500e946 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11597/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11597/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11597/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11597/testReport/ |
| modules | C: 

[jira] [Comment Edited] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294198#comment-15294198
 ] 

Varun Saxena edited comment on YARN-5109 at 5/20/16 9:10 PM:
-

Just to update, the patch is almost complete.
Will have it up by tomorrow. The UT failures reported above have been fixed.

The pending part of the patch concerns removing the getCompoundColQualBytes 
method and how to handle its uses while creating filters. As of now, the 
getColumnPrefix method should work well. We will then probably have to store 
references to the converters in the ColumnPrefix class implementations.
Also, some javadocs need to be added.


was (Author: varun_saxena):
Just to update, the patch is almost complete.
Will have it up by

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294198#comment-15294198
 ] 

Varun Saxena commented on YARN-5109:


Just to update, the patch is almost complete.
Will have it up by

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-20 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294173#comment-15294173
 ] 

Li Lu commented on YARN-5097:
-

Seems like we're not generating findbugs files for the newly added module? 

> NPE in Separator.joinEncoded()
> --
>
> Key: YARN-5097
> URL: https://issues.apache.org/jira/browse/YARN-5097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5097-YARN-2928.01.patch, 
> YARN-5097-YARN-2928.02.patch
>
>
> Both in the RM log and the NM log, I see the following exception thrown. 
> First for RM,
> {noformat}
> 2016-05-16 14:19:29,930 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> In the NM log, I see a similar exception:
> {noformat}
> 2016-05-16 14:54:23,116 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5097) NPE in Separator.joinEncoded()

2016-05-20 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294149#comment-15294149
 ] 

Vrushali C commented on YARN-5097:
--

Not completely sure what that findbugs error means:
{code}
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/target/findbugsXml.xml)
{code}

Also, that particular package is for the unit tests of the hbase writer.

> NPE in Separator.joinEncoded()
> --
>
> Key: YARN-5097
> URL: https://issues.apache.org/jira/browse/YARN-5097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5097-YARN-2928.01.patch, 
> YARN-5097-YARN-2928.02.patch
>
>
> Both in the RM log and the NM log, I see the following exception thrown. 
> First for RM,
> {noformat}
> 2016-05-16 14:19:29,930 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
> In the NM log, I see a similar exception:
> {noformat}
> 2016-05-16 14:54:23,116 ERROR 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollector: 
> Error aggregating timeline metrics
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator.joinEncoded(Separator.java:249)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey.getRowKey(ApplicationRowKey.java:110)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl.write(HBaseTimelineWriterImpl.java:131)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.AppLevelTimelineCollector$AppLevelAggregator.run(AppLevelTimelineCollector.java:136)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5047) Refactor nodeUpdate across schedulers

2016-05-20 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294147#comment-15294147
 ] 

Wangda Tan commented on YARN-5047:
--

Thanks [~rchiang],

I have a comment similar to the one [~kasha] mentioned:
Now we have nodeUpdate and nodeUpdateInternal, which is a little confusing to 
me. Basically we have two choices (sketched below):
1) Keep only nodeUpdate, and let subclasses override it.
2) Keep both nodeUpdate and nodeUpdateInternal, and make nodeUpdate final so 
it cannot be overridden.

Personally I would prefer #1; I'm not sure we really need to call 
nodeUpdateInternal *inside nodeUpdate*.

Beyond [~kasha]'s comments, the rest of the patch looks good to me.
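
A rough sketch of the two options (names are illustrative, not the actual 
YARN-5047 patch; the Object parameter stands in for the scheduler's node 
argument):
{code}
// Option 1: a single overridable method; subclasses call super for the
// common part and add their own logic.
abstract class SchedulerOptionOne {
  protected void nodeUpdate(Object node) {
    // common bookkeeping shared by all schedulers
  }
}

// Option 2: a final template method delegating to an internal hook, so the
// common flow cannot be overridden.
abstract class SchedulerOptionTwo {
  protected final void nodeUpdate(Object node) {
    // common bookkeeping shared by all schedulers
    nodeUpdateInternal(node);
  }

  /** Scheduler-specific part, supplied by each concrete scheduler. */
  protected abstract void nodeUpdateInternal(Object node);
}
{code}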

> Refactor nodeUpdate across schedulers
> -
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch, YARN-5047.002.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> commonality in their code.  See about refactoring the common parts into 
> AbstractYARNScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5103) With NM recovery enabled, restarting NM multiple times results in AM restart

2016-05-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294140#comment-15294140
 ] 

Jason Lowe commented on YARN-5103:
--

Thanks for the patch!  I'm OK skipping the unit test for this case.

Rather than catching IOException and explicitly checking the instance, we 
should let the normal catch processing do it for us, e.g.:
{code}
} catch (InterruptedException | InterruptedIOException e) {
  LOG.warn("Interrupted while waiting for exit code from " + containerId);
  notInterrupted = false;
} catch (IOException e) {
  LOG.error("Unable to recover container " + containerIdStr, e);
}
{code}
Since InterruptedIOException extends IOException, listing it in the first 
multi-catch keeps interrupt handling from falling through to the generic 
IOException branch.

I noticed this is targeted to 2.9, but I would think this should go into at 
least 2.8 as well?

> With NM recovery enabled, restarting NM multiple times results in AM restart
> 
>
> Key: YARN-5103
> URL: https://issues.apache.org/jira/browse/YARN-5103
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5103-demo.patch, YARN-5103.patch
>
>
> AM is restarted when NM is restarted multiple times even though NM recovery 
> is enabled.
> {Code:title=NM log on which AM attempt 1 was running }
>  ERROR launcher.RecoveredContainerLaunch 
> (RecoveredContainerLaunch.java:call(88)) - Unable to recover container 
> container_e12_1463043063682_0002_01_01
> java.io.IOException: java.lang.InterruptedException
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
>   at org.apache.hadoop.util.Shell.run(Shell.java:487)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.signalContainer(LinuxContainerExecutor.java:478)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.isContainerProcessAlive(LinuxContainerExecutor.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:185)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:445)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:83)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:46)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {Code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5055:
--
Attachment: YARN-5055.004.patch

Fixing checkstyle.

> max per user can be larger than max per queue
> -
>
> Key: YARN-5055
> URL: https://issues.apache.org/jira/browse/YARN-5055
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Eric Badger
>Priority: Minor
> Attachments: YARN-5055.001.patch, YARN-5055.002.patch, 
> YARN-5055.003.patch, YARN-5055.004.patch
>
>
> If user limit and/or user limit factor are >100% then the calculated maximum 
> values per user can exceed the maximum for the queue.  For example, maximum 
> AM resource per user could exceed maximum AM resource for the entire queue, 
> or max applications per user could be larger than max applications for the 
> queue.  The per-user values should be capped by the per queue values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294096#comment-15294096
 ] 

Hadoop QA commented on YARN-5055:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 196 unchanged - 1 fixed = 197 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 9s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805283/YARN-5055.003.patch |
| JIRA Issue | YARN-5055 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux afd9fc94bb6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d364cea |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11595/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11595/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11595/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11595/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5085) Add support for change of container ExecutionType

2016-05-20 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15294033#comment-15294033
 ] 

Konstantinos Karanasos commented on YARN-5085:
--

[~kasha], some more thoughts about the important points you brought up...

bq. Shouldn't the latter be solely at the discretion of RM/NM ensemble
I think the AM should always be able to request a change in the ExecutionType 
of its containers. This should be the case for containers at any stage of their 
execution, as [~asuresh] mentions.
Later, we might want to explore the "automatic" ExecutionType change by the 
NM/RM, as you mention, but that should not block us from explicitly asking for 
it through the AM.

bq. I see the need for a promotion, why would we want to demote?
I think it is better to keep the API more general in case we introduce more 
container types, especially if those don't have a strict hierarchy. As an 
example, we could have preemptable and non-preemptable OPPORTUNISTIC 
containers. In this case, we might want to change across container types, but 
that would not be a clear promotion.
However, I agree that we should initially disallow changing container type from 
GUARANTEED to OPPORTUNISTIC, as it is not clear what the behavior of such a 
change would be.

Makes sense?
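
For illustration, such a general request could look roughly like the sketch 
below; every name in it is hypothetical, not the actual YARN API.

{code}
// Hypothetical sketch only: a general "change ExecutionType" request that is
// not tied to a strict promote/demote hierarchy.
final class ChangeExecutionTypeRequest {
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  final String containerId;
  final ExecutionType targetType;

  ChangeExecutionTypeRequest(String containerId, ExecutionType targetType) {
    this.containerId = containerId;
    this.targetType = targetType;
  }

  // The RM could initially reject the one transition whose semantics are
  // unclear, while keeping the request type itself general.
  static void validate(ExecutionType current, ExecutionType target) {
    if (current == ExecutionType.GUARANTEED
        && target == ExecutionType.OPPORTUNISTIC) {
      throw new IllegalArgumentException(
          "GUARANTEED -> OPPORTUNISTIC is not supported yet");
    }
  }
}
{code}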

> Add support for change of container ExecutionType
> -
>
> Key: YARN-5085
> URL: https://issues.apache.org/jira/browse/YARN-5085
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> YARN-2882 introduced the concept of {{ExecutionType}} for containers and it 
> also introduced the concept of OPPORTUNISTIC ExecutionType.
> YARN-4335 introduced changes to the ResourceRequest so that AMs may request 
> that the Container allocated against the ResourceRequest be of a particular 
> {{ExecutionType}}.
> This JIRA proposes to provide support for the AM to change the ExecutionType 
> of a previously requested Container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5055) max per user can be larger than max per queue

2016-05-20 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5055:
--
Attachment: YARN-5055.003.patch

Attaching a new patch that fixes the test failure. 

> max per user can be larger than max per queue
> -
>
> Key: YARN-5055
> URL: https://issues.apache.org/jira/browse/YARN-5055
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler, resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Eric Badger
>Priority: Minor
> Attachments: YARN-5055.001.patch, YARN-5055.002.patch, 
> YARN-5055.003.patch
>
>
> If user limit and/or user limit factor are >100% then the calculated maximum 
> values per user can exceed the maximum for the queue.  For example, maximum 
> AM resource per user could exceed maximum AM resource for the entire queue, 
> or max applications per user could be larger than max applications for the 
> queue.  The per-user values should be capped by the per-queue values.
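
For illustration, the capping the last sentence asks for amounts to something 
like the following (variable and class names are made up, not the 
CapacityScheduler's actual fields):

{code}
// Illustrative sketch: per-user limits clamped by the per-queue limits, so a
// user limit factor > 1 can no longer push them past the queue's maximums.
final class UserLimitCap {
  static int capMaxAppsPerUser(int maxAppsPerUser, int maxAppsPerQueue) {
    return Math.min(maxAppsPerUser, maxAppsPerQueue);
  }

  static long capUserAMResourceMB(long userAMLimitMB, long queueAMLimitMB) {
    return Math.min(userAMLimitMB, queueAMLimitMB);
  }
}
{code}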



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5020) Fix Documentation for Yarn Capacity Scheduler on Resource Calculator

2016-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293991#comment-15293991
 ] 

Hudson commented on YARN-5020:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9834 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9834/])
YARN-5020. Fix Documentation for Yarn Capacity Scheduler on Resource (jianhe: 
rev d364ceac85622e99133b3eb3becef0c8188e6f89)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md


> Fix Documentation for Yarn Capacity Scheduler on Resource Calculator
> 
>
> Key: YARN-5020
> URL: https://issues.apache.org/jira/browse/YARN-5020
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Takashi Ohnishi
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-5020.1.patch
>
>
> Documentation refers to 'DefaultResourseCalculator' - which is spelled 
> incorrectly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293975#comment-15293975
 ] 

Hadoop QA commented on YARN-5109:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
21s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s 
{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 19 new + 2 unchanged - 0 fixed = 21 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 13s 
{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/target/findbugsXml.xml)
 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 38s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.8.0_91
 with JDK v1.8.0_91 generated 10 new + 0 unchanged - 0 fixed = 10 total (was 0) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 28s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_101
 with JDK v1.7.0_101 generated 2 new + 0 unchanged - 0 fixed = 2 total 

[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293932#comment-15293932
 ] 

Hadoop QA commented on YARN-4308:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 1 new + 
82 unchanged - 1 fixed = 83 total (was 83) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 25s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805256/0004-YARN-4308.patch |
| JIRA Issue | YARN-4308 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2bb8ec14a74 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 757050f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11592/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11592/testReport/ |
| modules | C:  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
  U: hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293907#comment-15293907
 ] 

Hadoop QA commented on YARN-4464:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: patch generated 0 
new + 209 unchanged - 1 fixed = 209 total (was 210) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 28s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805251/YARN-4464.004.patch |
| JIRA Issue | YARN-4464 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux f9f0284580c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 757050f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11590/testReport/ |
| modules | C:  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common  U: 

[jira] [Commented] (YARN-5116) Failed to execute "yarn application"

2016-05-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293901#comment-15293901
 ] 

Sunil G commented on YARN-5116:
---

[~hex108]
HADOOP-12932 added new sub-command support, so a few changes have happened 
along with that.

If I run the {{./yarn application}} command, I get an exception instead of the 
help page. [~aw], could you please help confirm whether this is expected?
{noformat}
root@sunil-Inspiron-3543:/opt/hadoop/trunk/hadoop-3.0.0-alpha1-SNAPSHOT/bin# 
./yarn application
16/05/20 23:38:01 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/05/20 23:38:04 INFO impl.TimelineClientImpl: Timeline service address: 
http://0.0.0.0:8188/ws/v1/timeline/
16/05/20 23:38:04 INFO client.RMProxy: Connecting to ResourceManager at 
/127.0.0.1:25001
Invalid Command Usage : 
Exception in thread "main" java.lang.IllegalArgumentException: cmdLineSyntax 
not provided

{noformat}

> Failed to execute "yarn application"
> 
>
> Key: YARN-5116
> URL: https://issues.apache.org/jira/browse/YARN-5116
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5116.01.patch
>
>
> Use the trunk code.
> {code}
> $ bin/yarn application -list
> 16/05/20 11:35:45 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Exception in thread "main" 
> org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: -list
>   at org.apache.commons.cli.Parser.processOption(Parser.java:363)
>   at org.apache.commons.cli.Parser.parse(Parser.java:199)
>   at org.apache.commons.cli.Parser.parse(Parser.java:85)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:172)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:90)
> {code}
> It is caused by the subcommand 'application' being deleted from the command 
> args. The following command is OK.
> {code}
> $ bin/yarn application application -list
> 16/05/20 11:39:35 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Total number of applications (application-types: [] and states: [SUBMITTED, 
> ACCEPTED, RUNNING]):0
> Application-IdApplication-Name
> Application-Type  User   Queue   State
>  Final-State ProgressTracking-URL
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-20 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293893#comment-15293893
 ] 

Jian He commented on YARN-4464:
---

OK, maybe set it to 1000, or to a proportion of the number of completed apps 
kept in memory. 
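
For illustration, the proportional variant could look roughly like this (the 
names and the ratio are assumptions, not anything from a patch):

{code}
// Rough sketch only: derive the state-store cap from the number of completed
// apps the RM keeps in memory, instead of an independent, much larger default.
final class StateStoreCap {
  static int maxCompletedAppsInStateStore(int maxCompletedAppsInMemory) {
    final double proportion = 0.5;  // assumed ratio, purely illustrative
    return (int) (maxCompletedAppsInMemory * proportion);
  }
}
{code}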

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, 
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is far too large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5098) Yarn Application log Aggreagation fails due to NM can not get correct HDFS delegation token

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293889#comment-15293889
 ] 

Hadoop QA commented on YARN-5098:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 7 new + 89 unchanged - 0 fixed = 96 total (was 89) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805102/YARN-5098.1.patch |
| JIRA Issue | YARN-5098 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 30da2c14c7bd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 757050f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11589/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11589/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  

[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293864#comment-15293864
 ] 

Li Lu commented on YARN-5018:
-

Yes, I was thinking about the same thing. Let me add some checks (in a 
synchronous fashion, so that there's no race). 

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch, 
> YARN-5018-YARN-2928.002.patch
>
>
> In the app-level collector, we launch the aggregation logic immediately after 
> the collector starts. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4957) Add getNewReservation in ApplicationClientProtocol

2016-05-20 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-4957:
--
Attachment: YARN-4957.v8.patch

Fixed the remaining checkstyle issues in this patch.

> Add getNewReservation in ApplicationClientProtocol
> --
>
> Key: YARN-4957
> URL: https://issues.apache.org/jira/browse/YARN-4957
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, client, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Subru Krishnan
>Assignee: Sean Po
>  Labels: api-breaking
> Attachments: YARN-4957.v0.patch, YARN-4957.v1.patch, 
> YARN-4957.v2.patch, YARN-4957.v3.patch, YARN-4957.v4.patch, 
> YARN-4957.v5.1.patch, YARN-4957.v5.2.patch, YARN-4957.v5.patch, 
> YARN-4957.v7.patch, YARN-4957.v8.patch
>
>
> Currently submitReservation returns a ReservationId if successful. This JIRA 
> proposes adding a getNewReservation in ApplicationClientProtocol for the 
> following reasons:
>   * Prevent zombie reservations in the face of client and/or network failures 
> post submitReservation
>   * Align reservation submission with application submission
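
For illustration, the proposed flow would look roughly like this from a 
client's perspective (the client interface here is hypothetical, mirroring the 
getNewApplication/submitApplication pattern):

{code}
import org.apache.hadoop.yarn.api.records.ReservationDefinition;
import org.apache.hadoop.yarn.api.records.ReservationId;

// Hypothetical sketch of the two-step reservation flow.
interface ReservationClient {
  ReservationId getNewReservation();
  void submitReservation(ReservationId id, ReservationDefinition def);

  // Obtaining the id up front means a retry after a client or network failure
  // reuses the same id instead of creating a zombie reservation.
  static ReservationId reserve(ReservationClient client,
      ReservationDefinition def) {
    ReservationId id = client.getNewReservation();
    client.submitReservation(id, def);
    return id;
  }
}
{code}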



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2016-05-20 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4308:
--
Attachment: 0004-YARN-4308.patch

Thanks [~Naganarasimha Garla] and [~templedf].

Uploading a new patch addressing the comments. It now uses javadoc comments in 
{{ResourceCalculatorProcessTree}} and its child classes (not adding comments 
in the test classes). Please help check it, and kindly let me know if there 
are any issues.

> ContainersAggregated CPU resource utilization reports negative usage in first 
> few heartbeats
> 
>
> Key: YARN-4308
> URL: https://issues.apache.org/jira/browse/YARN-4308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4308.patch, 0002-YARN-4308.patch, 
> 0003-YARN-4308.patch, 0004-YARN-4308.patch
>
>
> NodeManager reports ContainerAggregated CPU resource utilization as a 
> negative value in the first few heartbeat cycles. I added a new debug print 
> and received the values below from heartbeats.
> {noformat}
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource
>  Utilization :  CpuTrackerUsagePercent : 198.94598
> {noformat}
> It's better to send 0 as the CPU usage rather than sending negative values in 
> heartbeats, even though this happens only in the first few heartbeats.
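
For illustration, the clamping the last sentence suggests amounts to something 
like this (a sketch only, not the actual patch):

{code}
// Sketch only: report 0 instead of a negative CPU usage while the tracker
// has not yet collected enough samples to compute a real value.
final class CpuUsage {
  static float sanitizeCpuUsagePercent(float rawCpuUsagePercent) {
    return rawCpuUsagePercent < 0 ? 0f : rawCpuUsagePercent;
  }
}
{code}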



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-05-20 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1942:
-
Attachment: YARN-1942.9.patch

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942.1.patch, YARN-1942.2.patch, YARN-1942.3.patch, 
> YARN-1942.4.patch, YARN-1942.5.patch, YARN-1942.6.patch, YARN-1942.8.patch, 
> YARN-1942.9.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of its 
> utilities public, or we should provide other external APIs for application 
> masters to use. Note that distributedshell and MR are both using these 
> interfaces.
> For instance, the main use case I see right now is getting the application 
> attempt id within the AM:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this 
> information. If there is, please let me know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-05-20 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4464:
---
Attachment: YARN-4464.004.patch

*face palm*  Thanks for catching that.  Let's try that one more time.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, 
> so that property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is far too large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293713#comment-15293713
 ] 

Sangjin Lee commented on YARN-5109:
---

The {{ApplicationTable}} and {{EntityTable}} javadoc also reflect it, so it 
appears to be intentional, but I may be forgetting something.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
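
For illustration, one way to make such keys safe is to escape the separator 
bytes when writing the timestamp; the sketch below shows the general idea only, 
and the actual patch may well encode differently.

{code}
// Sketch only: escape separator bytes in the timestamp's 8-byte big-endian
// form so the stored key remains parseable on read.
final class TimestampEncoder {
  static byte[] encode(long ts) {
    byte[] raw = java.nio.ByteBuffer.allocate(8).putLong(ts).array();
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    for (byte b : raw) {
      if (b == '!' || b == '=' || b == '\\') {
        out.write('\\');  // escape byte; the escape byte itself is escaped too
      }
      out.write(b);
    }
    return out.toByteArray();
  }
}
{code}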



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293705#comment-15293705
 ] 

Sangjin Lee commented on YARN-5109:
---

Hmm, I don't remember there being a reason the application row key had to be 
different from the entity row key. Maybe I'm forgetting something. 
[~vrushalic], [~jrottinghuis], did we intentionally make the application row 
key structure different from the entity row key structure, or is this my error?

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4951) large IP ranges require a different naming strategy

2016-05-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293675#comment-15293675
 ] 

Steve Loughran commented on YARN-4951:
--

This patch doesn't apply, BTW; we'll need the full diff between branch2...HEAD 
or trunk...HEAD.

> large IP ranges require a different naming strategy
> ---
>
> Key: YARN-4951
> URL: https://issues.apache.org/jira/browse/YARN-4951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-simplified-reverse-lookup-zone-approach-fo.patch
>
>
> Large subnet definitions (e.g. specifying a mask value of 255.255.224.0) 
> yield a large number of potential network addresses.  Therefore, the standard 
> naming convention of xx.xx.xx.in-addr.arpa needs to be modified to be more 
> general:  xx.xx.in-addr.arpa.
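
For illustration, the generalization could be driven by the prefix length; the 
sketch below assumes whole-octet zone boundaries, and all names in it are made 
up.

{code}
// Illustrative sketch: derive the reverse zone from the number of whole
// octets covered by the network prefix. For 255.255.224.0 (a /19) only two
// octets are fixed, so the zone becomes "xx.xx.in-addr.arpa" rather than
// "xx.xx.xx.in-addr.arpa".
final class ReverseZone {
  static String nameFor(String ipv4, int prefixLength) {
    String[] octets = ipv4.split("\\.");
    int wholeOctets = prefixLength / 8;            // /19 -> 2 whole octets
    StringBuilder zone = new StringBuilder();
    for (int i = wholeOctets - 1; i >= 0; i--) {   // reversed octet order
      zone.append(octets[i]).append('.');
    }
    return zone.append("in-addr.arpa").toString();
  }
}
{code}

For example, {{ReverseZone.nameFor("10.20.33.44", 19)}} yields 
"20.10.in-addr.arpa".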



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4951) large IP ranges require a different naming strategy

2016-05-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293670#comment-15293670
 ] 

Steve Loughran commented on YARN-4951:
--

John, can you rebase this to trunk/branch2 and submit it with a name like 
YARN-4757-002.patch? Thanks.

https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch

> large IP ranges require a different naming strategy
> ---
>
> Key: YARN-4951
> URL: https://issues.apache.org/jira/browse/YARN-4951
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-simplified-reverse-lookup-zone-approach-fo.patch
>
>
> Large subnet definitions (e.g. specifying a mask value of 255.255.224.0) 
> yield a large number of potential network addresses.  Therefore, the standard 
> naming convention of xx.xx.xx.in-addr.arpa needs to be modified to be more 
> general:  xx.xx.in-addr.arpa.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293666#comment-15293666
 ] 

Varun Saxena commented on YARN-5109:


[~sjlee0], is there any reason we have cluster id followed by user id in the 
application row key and the other way round for the entity row key? I just 
noticed this while coding.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-05-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293665#comment-15293665
 ] 

Steve Loughran commented on YARN-4757:
--

I am supportive of this; we always had a goal of supporting DNS (somehow); our 
naming policy was designed for that.


A trouble spot is going to be that user names, even/especially Kerberos names, 
may not be valid for DNS. AD accounts often have spaces in them; i18n names 
exist, and then there is punctuation. In YARN-913 I did try using punycode, 
but that only converts high-ASCII and high-Unicode characters; it does nothing 
for spaces in a name.

We can control the other bits, but not that.

Related to that, the parse guidelines on p8 shouldn't have a mixed-case example 
"aUser", as that may be misconstrued as case being relevant.

p12: the architecture is good; just bear in mind that the DNS service will 
need to handle ZK node failure/failover, so it must be able to switch to a new 
ZK node and (presumably) rebuild its state from that one. Similarly, the 
implementation will need to handle the startup state 'zk server not yet live'.

I think you may need to add a sequence diagram for that; essentially, 
re-enumerate the tree with records updated to match the new state as 
appropriate.

h3. implementation (p22). 

* I think you may want to use one of the Guava caches for doing some caching 
here; it could make a big difference to ZK load in some scenarios. Example: 
bootstrapping an app across the cluster where everything looks up the HBase 
record (see the sketch after this list).

* I just had a look at how Antonio Lain did the DNS binding in Smartfrog, where 
the Anubis HA T-Space was used as a P2P equivalent to ZK: 
https://sourceforge.net/p/smartfrog/svn/HEAD/tree/trunk/core/components/dns/src/org/smartfrog/services/dns/
 . He used dnsjava, though also added the option to actually bring up BIND: 
https://sourceforge.net/p/smartfrog/svn/HEAD/tree/trunk/core/components/dns/src/org/smartfrog/services/dns/DNSBindNamedImpl.java

I'd go with dnsjava.
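
A rough sketch of the caching idea from the first bullet above; the cache 
shape is illustrative, and {{readRecordFromZk}} is a hypothetical stand-in for 
the real registry read.

{code}
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Illustrative only: a short-TTL Guava cache in front of ZK-backed registry
// reads, so a cluster-wide bootstrap resolves a hot record from memory
// instead of hammering ZK.
final class CachingRegistryReader {
  private final LoadingCache<String, String> records =
      CacheBuilder.newBuilder()
          .maximumSize(10000)
          .expireAfterWrite(30, TimeUnit.SECONDS)
          .build(new CacheLoader<String, String>() {
            @Override
            public String load(String path) throws Exception {
              return readRecordFromZk(path);
            }
          });

  String lookup(String path) throws Exception {
    return records.get(path);  // loads from ZK only on a miss or expiry
  }

  private String readRecordFromZk(String path) throws Exception {
    throw new UnsupportedOperationException("stand-in for the real ZK read");
  }
}
{code}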

Note I also wrote a prototype REST server for the registry; that's hidden in 
slider but it could be copied over to {{yarn-registry}}; it helped find some 
JSON/jackson marshalling problems with field naming already. The Java registry 
API was all built on the option of going RESTy later, and it would be nice for 
going through Knox for remote access.

Allan: Consul gets a mention in the docs. Essentially, the YARN service 
registry works as the repository; all that's needed is to serve this up as a 
different protocol through one or more nodes. Because of the ZK option, code 
which works directly with ZK can work with the data without worrying about 
DNS, DNS caching, and server failures.


> [Umbrella] Simplified discovery of services via DNS mechanisms
> --
>
> Key: YARN-4757
> URL: https://issues.apache.org/jira/browse/YARN-4757
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Jonathan Maron
> Attachments: 
> 0001-YARN-4757-Initial-code-submission-for-DNS-Service.patch, YARN-4757- 
> Simplified discovery of services via DNS mechanisms.pdf
>
>
> [See overview doc at YARN-4692, copying the sub-section (3.2.10.2) to track 
> all related efforts.]
> In addition to completing the present story of service­-registry (YARN-913), 
> we also need to simplify the access to the registry entries. The existing 
> read mechanisms of the YARN Service Registry are currently limited to a 
> registry specific (java) API and a REST interface. In practice, this makes it 
> very difficult for wiring up existing clients and services. For e.g, dynamic 
> configuration of dependent end­points of a service is not easy to implement 
> using the present registry­-read mechanisms, *without* code-changes to 
> existing services.
> A good solution to this is to expose the registry information through a more 
> generic and widely used discovery mechanism: DNS. Service Discovery via DNS 
> uses the well-­known DNS interfaces to browse the network for services. 
> YARN-913 in fact talked about such a DNS based mechanism but left it as a 
> future task. (Task) Having the registry information exposed via DNS 
> simplifies the life of services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5018) Online aggregation logic should not run immediately after collectors got started

2016-05-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293651#comment-15293651
 ] 

Sangjin Lee commented on YARN-5018:
---

[~gtCarrera9], how about adding a check in {{AppLevelAggregator.run()}} so that 
it skips the run if the context is not completely set (e.g. checking for the 
flow name) *or* no entities have been aggregated? That would be an additional 
fail-safe mechanism.
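
A minimal sketch of that check; the surrounding names ({{context}}, 
{{entities}}, {{publish}}) are simplified stand-ins, not the real collector 
code.

{code}
// Sketch only: skip the aggregation round instead of publishing
// incomplete or empty results.
public void run() {
  if (context.getFlowName() == null) {
    return;  // context not completely set yet; skip this round
  }
  if (entities.isEmpty()) {
    return;  // nothing aggregated; don't publish an empty result
  }
  publish(entities);
}
{code}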

> Online aggregation logic should not run immediately after collectors got 
> started
> 
>
> Key: YARN-5018
> URL: https://issues.apache.org/jira/browse/YARN-5018
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5018-YARN-2928.001.patch, 
> YARN-5018-YARN-2928.002.patch
>
>
> In the app-level collector, we launch the aggregation logic immediately after 
> the collector starts. However, at this time, important context data has yet 
> to be published to the container. Also, if the aggregation result is empty, 
> we do not need to publish it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5103) With NM recovery enabled, restarting NM multiple times results in AM restart

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293621#comment-15293621
 ] 

Hadoop QA commented on YARN-5103:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 19s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805233/YARN-5103.patch |
| JIRA Issue | YARN-5103 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c6172e5809a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 757050f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11588/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11588/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> With NM recovery enabled, restarting NM multiple times results in AM restart
> 
>
> Key: YARN-5103
> URL: https://issues.apache.org/jira/browse/YARN-5103
> Project: 

[jira] [Updated] (YARN-5103) With NM recovery enabled, restarting NM multiple times results in AM restart

2016-05-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5103:
-
Attachment: YARN-5103.patch

It sounds a bit difficult to add a unit test covering the case here - many 
objects would need to be mocked, and RecoveredContainerLaunch's internal logic 
needs to check the pid path, which is not easy to mock (we could change the 
logic there, but that would make the code look very tricky). 
I updated the patch a bit, given that the InterruptedException gets wrapped up 
as an InterruptedIOException after HADOOP-12074.
[~jlowe], would you help to review it? Thanks!
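For readers unfamiliar with HADOOP-12074: an interrupted shell command now 
surfaces as java.io.InterruptedIOException, so recovery code has to treat that 
exception as a shutdown signal rather than a failed container. Below is a 
minimal sketch of the idea; the class and helper names are illustrative, and 
this is not the attached patch.

{code}
import java.io.IOException;
import java.io.InterruptedIOException;

public class RecoverySketch {
  boolean reacquireContainer() throws IOException {
    try {
      return isContainerProcessAlive(); // stand-in for the pid-based check
    } catch (InterruptedIOException e) {
      // The NM is stopping: restore the interrupt status and back off
      // quietly instead of marking the container as failed to recover.
      Thread.currentThread().interrupt();
      return false;
    }
  }

  private boolean isContainerProcessAlive() throws IOException {
    return true; // placeholder for the real liveness check
  }
}
{code}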

> With NM recovery enabled, restarting NM multiple times results in AM restart
> 
>
> Key: YARN-5103
> URL: https://issues.apache.org/jira/browse/YARN-5103
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5103-demo.patch, YARN-5103.patch
>
>
> AM is restarted when NM is restarted multiple times even though NM recovery 
> is enabled.
> {code:title=NM log on which AM attempt 1 was running}
>  ERROR launcher.RecoveredContainerLaunch 
> (RecoveredContainerLaunch.java:call(88)) - Unable to recover container 
> container_e12_1463043063682_0002_01_01
> java.io.IOException: java.lang.InterruptedException
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:579)
>   at org.apache.hadoop.util.Shell.run(Shell.java:487)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.signalContainer(LinuxContainerExecutor.java:478)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.isContainerProcessAlive(LinuxContainerExecutor.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:185)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:445)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:83)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:46)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293464#comment-15293464
 ] 

Varun Saxena commented on YARN-5109:


Yes, I have taken that patch and am working on top of it.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
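The failure mode is easy to reproduce: the timestamp is written as a raw 
8-byte big-endian long, and nothing stops one of those bytes from being 0x3D 
("="). A small self-contained demonstration using the very value from the 
description above:

{code}
import java.nio.ByteBuffer;

public class SeparatorCollision {
  public static void main(String[] args) {
    // The timestamp from the broken column name above:
    // bytes 7F FF FE AB 44 59 3D 99 ('D' = 0x44, 'Y' = 0x59, '=' = 0x3D)
    long ts = 0x7FFFFEAB44593D99L;
    byte[] raw = ByteBuffer.allocate(Long.BYTES).putLong(ts).array();
    for (byte b : raw) {
      if (b == (byte) '=') {
        System.out.println("timestamp contains the '=' separator byte; a"
            + " naive split on '=' will misparse the column name");
      }
    }
  }
}
{code}

Any durable fix therefore has to encode the timestamp bytes so that separator 
values can never appear unescaped.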



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293458#comment-15293458
 ] 

Joep Rottinghuis commented on YARN-5109:


The code in the patch I attached compiles and passes the unit tests, so by 
itself it should be good to go, modulo the to-do items listed above.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5112) Excessive log warnings on NM recovery

2016-05-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293255#comment-15293255
 ] 

Junping Du commented on YARN-5112:
--

Thanks [~jianhe] for updating the patch. The two remaining checkstyle issues 
are not fixable, so let's leave them as they are.
+1 on the latest patch. Will commit it shortly if there are no further 
comments from others.

> Excessive log warnings on NM recovery
> -
>
> Key: YARN-5112
> URL: https://issues.apache.org/jira/browse/YARN-5112
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5112.2.patch, YARN-5112.3.patch, YARN-5112.4.patch, 
> YARN-5112.5.patch, YARN-5112.6.patch, YARN-5112.patch
>
>
> When there are a lot of apps to recover in NM store, NM prints these two 
> lines for each app, which gets annoying.
> {code}
> 2015-10-13 01:58:40,277 WARN  logaggregation.LogAggregationService 
> (LogAggregationService.java:verifyAndCreateRemoteLogDir(195)) - Remote Root 
> Log Dir [/app-logs] already exist, but with incorrect permissions. Expected: 
> [rwxrwxrwt], Found: [rwxrwxrwx]. The cluster may have problems with multiple 
> users.
> 336 2015-10-13 01:58:40,277 WARN  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:(182)) - rollingMonitorInterval is set as 
> -1. The log rolling mornitoring interval is disabled. The logs will be 
> aggregated after this application is finished.
> {code}
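One common remedy - a sketch of the general technique, not necessarily what 
the attached patches do - is to put per-app recovery messages behind a 
warn-once guard:

{code}
import java.util.concurrent.atomic.AtomicBoolean;

public class WarnOnce {
  private static final AtomicBoolean PERMISSION_WARNED = new AtomicBoolean();

  static void warnPermissionsOnce(String msg) {
    // compareAndSet lets only the first recovered app emit the warning
    if (PERMISSION_WARNED.compareAndSet(false, true)) {
      System.err.println("WARN " + msg);
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 1000; i++) {
      warnPermissionsOnce("Remote Root Log Dir [/app-logs] permissions"
          + " differ from expected");
    }
    // Only one WARN line is printed, however many apps are recovered
  }
}
{code}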



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5016) Add support for a minimum retry interval for container retries

2016-05-20 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293245#comment-15293245
 ] 

Jun Gong commented on YARN-5016:


Thanks [~vvasudev] for the review and commit!

> Add support for a minimum retry interval for container retries
> --
>
> Key: YARN-5016
> URL: https://issues.apache.org/jira/browse/YARN-5016
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Vasudev
>Assignee: Jun Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5016.01.patch, YARN-5016.02.patch, 
> YARN-5016.03.patch
>
>
> The NM container re-launch feature should support specifying a minimum 
> restart interval, so that admins can control the minimum time between 
> restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5118) Tests fail with localizer port bind exception.

2016-05-20 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293045#comment-15293045
 ] 

Brahma Reddy Battula commented on YARN-5118:


Will add {{yarn.nodemanager.localizer.address}} as below and upload a patch.
{code}
conf.set("yarn.nodemanager.localizer.address",
    "0.0.0.0:" + ServerSocketUtil.getPort(8040, 10));
{code}
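Spelled out as a self-contained test helper, the snippet above might look as 
follows; the wrapper class is illustrative, while {{ServerSocketUtil}} is the 
existing Hadoop test utility from org.apache.hadoop.net.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.ServerSocketUtil;

public class LocalizerPortHelper {
  public static Configuration withFreeLocalizerPort() throws IOException {
    Configuration conf = new Configuration();
    // Prefer port 8040 but fall back to a free port (up to 10 retries),
    // so concurrent test runs stop colliding on the default localizer port.
    conf.set("yarn.nodemanager.localizer.address",
        "0.0.0.0:" + ServerSocketUtil.getPort(8040, 10));
    return conf;
  }
}
{code}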
 *One of the test case traces:* 
{noformat}
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: 
Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:530)
at org.apache.hadoop.ipc.Server$Listener.(Server.java:793)
at org.apache.hadoop.ipc.Server.(Server.java:2592)
at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:559)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:534)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
at 
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.createServer(RpcServerFactoryPBImpl.java:173)
at 
org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:132)
at 
org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.createServer(ResourceLocalizationService.java:380)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceStart(ResourceLocalizationService.java:356)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceStart(ContainerManagerImpl.java:511)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.nodemanager.TestEventFlow.testSuccessfulContainerLaunch(TestEventFlow.java:136)
{noformat}

> Tests fail with localizer port bind exception.
> ---
>
> Key: YARN-5118
> URL: https://issues.apache.org/jira/browse/YARN-5118
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> The following tests fail with a localizer port bind exception.
> {noformat}
> TestQueuingContainerManager
> TestEventFlow
> TestNodeStatusUpdaterForLabels
> TestLogAggregationService
> {noformat}
> See following for more details:
> https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1473/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5118) Tests fail with localizer port bind exception.

2016-05-20 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created YARN-5118:
--

 Summary: Tests fail with localizer port bind exception.
 Key: YARN-5118
 URL: https://issues.apache.org/jira/browse/YARN-5118
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


The following tests fail with a localizer port bind exception.

{noformat}
TestQueuingContainerManager
TestEventFlow
TestNodeStatusUpdaterForLabels
TestLogAggregationService
{noformat}

See following for more details:
https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/1473/testReport/




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293028#comment-15293028
 ] 

Varun Saxena commented on YARN-5109:


Thanks Sangjin and Joep for the pseudocode and prototype.
Now I clearly see what both of you were alluding to in the meeting. On the 
face of it, this should work in all cases.

Will check this in detail and hopefully have a concrete patch soon.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5109-YARN-2928.01.patch, 
> YARN-5109-YARN-2928.02.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15293027#comment-15293027
 ] 

Arun Suresh commented on YARN-5117:
---

The {{ContainersMonitorImpl::hasResourcesAvailable()}} function performs the 
following check:
{code}
if (this.containersAllocation.getCPU()
    + allocatedCpuUsage(pti) > 1.0f) {
  return false;
}
{code}

It looks like this condition is always true. I guess that, instead of 1.0f, 
it should be the total number of available cores?
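In other words - a sketch with made-up names, only to show the suggested 
direction - the cumulative CPU check should compare against the node's vcore 
count rather than the constant 1.0f:

{code}
public class CpuHeadroomSketch {
  // Suggested check: cumulative vcore usage vs. total vcores, not 1.0f
  static boolean hasCpuAvailable(float usedVcores, float requestedVcores,
      int totalVcores) {
    return usedVcores + requestedVcores <= totalVcores;
  }

  public static void main(String[] args) {
    // On an 8-vcore NM with 2.5 vcores in use, a 1-vcore GUARANTEED
    // container fits...
    System.out.println(hasCpuAvailable(2.5f, 1.0f, 8));  // true
    // ...whereas a comparison against 1.0f always rejects it
    System.out.println(2.5f + 1.0f <= 1.0f);             // false
  }
}
{code}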

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-20 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5117:
-

 Summary: QueuingContainerManager does not start GUARANTEED 
Container even if Resources are available
 Key: YARN-5117
 URL: https://issues.apache.org/jira/browse/YARN-5117
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Konstantinos Karanasos


When NM Queuing is turned on, it looks like GUARANTEED containers do not start 
even when there are no containers running on the NM. The following is seen in 
the logs:

{noformat}
.
2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
queuing.QueuingContainerManagerImpl 
(QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
There are no sufficient resources to start guaranteed 
container_1463711648301_0001_01_01 even after attempting to kill any 
running opportunistic containers.
.
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5116) Failed to execute "yarn application"

2016-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292952#comment-15292952
 ] 

Hadoop QA commented on YARN-5116:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 47s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805163/YARN-5116.01.patch |
| JIRA Issue | YARN-5116 |
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux d64d6c8f9f74 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0287c49 |
| shellcheck | v0.4.4 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11586/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Failed to execute "yarn application"
> 
>
> Key: YARN-5116
> URL: https://issues.apache.org/jira/browse/YARN-5116
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5116.01.patch
>
>
> Use the trunk code.
> {code}
> $ bin/yarn application -list
> 16/05/20 11:35:45 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Exception in thread "main" 
> org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: -list
>   at org.apache.commons.cli.Parser.processOption(Parser.java:363)
>   at org.apache.commons.cli.Parser.parse(Parser.java:199)
>   at org.apache.commons.cli.Parser.parse(Parser.java:85)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:172)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:90)
> {code}
> It is caused by the subcommand 'application' being removed from the command 
> args before parsing. The following command works:
> {code}
> $ bin/yarn application application -list
> 16/05/20 11:39:35 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Total number of applications (application-types: [] and states: [SUBMITTED, 
> ACCEPTED, RUNNING]):0
> Application-Id  Application-Name  Application-Type  User  Queue  State  
> Final-State  Progress  Tracking-URL
> {code}
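For anyone unfamiliar with the exception: commons-cli raises 
UnrecognizedOptionException for any option token that was never registered, 
which is exactly what happens once the parser is handed arguments meant for a 
missing subcommand. A standalone illustration, unrelated to the actual 
ApplicationCLI code:

{code}
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class UnrecognizedOptionDemo {
  public static void main(String[] args) {
    // An Options set that knows about "-status" but not "-list"
    Options opts = new Options();
    opts.addOption("status", true, "Prints application status");
    try {
      new GnuParser().parse(opts, new String[] {"-list"});
    } catch (ParseException e) {
      // Prints: org.apache.commons.cli.UnrecognizedOptionException:
      //         Unrecognized option: -list
      System.out.println(e);
    }
  }
}
{code}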



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


