[jira] [Created] (YARN-5854) Throw more accurate exceptions from LinuxContainerExecutor#init

2016-11-07 Thread Zhe Zhang (JIRA)
Zhe Zhang created YARN-5854:
---

 Summary: Throw more accurate exceptions from 
LinuxContainerExecutor#init
 Key: YARN-5854
 URL: https://issues.apache.org/jira/browse/YARN-5854
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, yarn
Reporter: Zhe Zhang
Assignee: Jonathan Hung
Priority: Minor


YARN-5822 logs {{ContainerExecutionException}} but doesn't include the 
exception {{e}} in the IOException it throws.

Another improvement is to reduce the duplicated exception messages:
# "Failed to bootstrap configured resource subsystems!"
# "Failed to initialize linux container runtime(s)!"
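
A minimal sketch of the first fix, passing the caught exception as the cause so 
the stack trace survives (class and method names here are illustrative, not the 
actual LinuxContainerExecutor code):

```java
import java.io.IOException;

public class InitSketch {
    static class ContainerExecutionException extends Exception {
        ContainerExecutionException(String msg) { super(msg); }
    }

    static void init() throws IOException {
        try {
            throw new ContainerExecutionException("runtime init failed");
        } catch (ContainerExecutionException e) {
            // Passing 'e' as the cause preserves the full stack trace,
            // instead of logging it and throwing a bare IOException.
            throw new IOException(
                "Failed to initialize linux container runtime(s)!", e);
        }
    }

    // Demonstration helper: surfaces the preserved cause message.
    static String causeMessage() {
        try {
            init();
            return null;
        } catch (IOException e) {
            return e.getCause().getMessage();
        }
    }
}
```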



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5853) TestDelegationTokenRenewer#testRMRestartWithExpiredToken fails intermittently on Power

2016-11-07 Thread Yussuf Shaikh (JIRA)
Yussuf Shaikh created YARN-5853:
---

 Summary: TestDelegationTokenRenewer#testRMRestartWithExpiredToken 
fails intermittently on Power
 Key: YARN-5853
 URL: https://issues.apache.org/jira/browse/YARN-5853
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
 Environment: # uname -a
Linux pts00452-vm10 3.10.0-327.el7.ppc64le #1 SMP Thu Oct 29 17:31:13 EDT 
2015 ppc64le ppc64le ppc64le GNU/Linux
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
Reporter: Yussuf Shaikh


The test testRMRestartWithExpiredToken fails intermittently with the following 
error:

Stacktrace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:621)
at org.junit.Assert.assertNotNull(Assert.java:631)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken(TestDelegationTokenRenewer.java:1060)






[jira] [Created] (YARN-5852) Consolidate CSAssignment, ContainerAllocation, ContainerAllocationContext class in CapacityScheduler

2016-11-07 Thread Jian He (JIRA)
Jian He created YARN-5852:
-

 Summary: Consolidate CSAssignment, ContainerAllocation, 
ContainerAllocationContext class in CapacityScheduler
 Key: YARN-5852
 URL: https://issues.apache.org/jira/browse/YARN-5852
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


There are quite a few data structures that wrap container-related info under 
similar names: CSAssignment, ContainerAllocation, ContainerAllocationContext, 
plus a bunch of code to convert one to another. We should consolidate them into 
a single class.






[jira] [Created] (YARN-5851) TestContainerManagerSecurity testContainerManager[1] failed

2016-11-07 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-5851:


 Summary: TestContainerManagerSecurity testContainerManager[1] 
failed 
 Key: YARN-5851
 URL: https://issues.apache.org/jira/browse/YARN-5851
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha1
Reporter: Haibo Chen


---
Test set: org.apache.hadoop.yarn.server.TestContainerManagerSecurity
---
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.727 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
  Time elapsed: 0.005 sec  <<< ERROR!
java.lang.IllegalArgumentException: Can't get Kerberos realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:88)
at 
org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:291)
at 
org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:337)
at 
org.apache.hadoop.yarn.server.TestContainerManagerSecurity.(TestContainerManagerSecurity.java:151)






[jira] [Created] (YARN-5850) Document fair scheduler properties waitTimeBeforeKill and preemptionInterval

2016-11-07 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-5850:


 Summary: Document fair scheduler properties waitTimeBeforeKill and 
preemptionInterval
 Key: YARN-5850
 URL: https://issues.apache.org/jira/browse/YARN-5850
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation
Affects Versions: 2.6.0
Reporter: Grant Sohn
Priority: Minor


In FairSchedulerConfiguration.java there are 2 parameters which are not 
described in hadoop-yarn/hadoop-yarn-site/FairScheduler.html

{noformat}
  protected static final String PREEMPTION_INTERVAL = CONF_PREFIX + 
"preemptionInterval";
  protected static final int DEFAULT_PREEMPTION_INTERVAL = 5000;
  protected static final String WAIT_TIME_BEFORE_KILL = CONF_PREFIX + 
"waitTimeBeforeKill";
  protected static final int DEFAULT_WAIT_TIME_BEFORE_KILL = 15000;
{noformat}
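
For the documentation, a sketch of the corresponding yarn-site.xml entries with 
their default values (in milliseconds), assuming CONF_PREFIX resolves to 
yarn.scheduler.fair. as in FairSchedulerConfiguration:

```xml
<!-- Interval between preemption checks; default 5000 ms -->
<property>
  <name>yarn.scheduler.fair.preemptionInterval</name>
  <value>5000</value>
</property>
<!-- Grace period between asking a container to preempt itself
     and killing it; default 15000 ms -->
<property>
  <name>yarn.scheduler.fair.waitTimeBeforeKill</name>
  <value>15000</value>
</property>
```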







[jira] [Created] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-07 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-5849:


 Summary: Automatically create YARN control group for pre-mounted 
cgroups
 Key: YARN-5849
 URL: https://issues.apache.org/jira/browse/YARN-5849
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha1, 2.7.3, 3.0.0-alpha2
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi
Priority: Minor


YARN can be launched with linux-container-executor.cgroups.mount set to false. 
It will then search for the cgroup mount paths set up by the administrator by 
parsing the /etc/mtab file. You can also specify 
resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
containers.
linux-container-executor.cgroups.hierarchy is the root of the settings of all 
YARN containers. If it is specified but the directory has not been created, 
YARN will fail at startup:
Caused by: java.io.FileNotFoundException: 
/cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)

This JIRA is about automatically creating the YARN control group in the case 
above, which reduces the cost of administration.
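
A minimal sketch of the proposed behavior, creating the hierarchy under the 
pre-mounted controller root instead of failing at startup (paths and method 
names are illustrative, not the actual CgroupsLCEResourcesHandler code):

```java
import java.io.File;
import java.io.IOException;

public class CgroupInit {
    // Create the YARN hierarchy under a pre-mounted controller root,
    // rather than failing later with FileNotFoundException.
    static File ensureHierarchy(String controllerMount, String hierarchy)
            throws IOException {
        File dir = new File(controllerMount, hierarchy);
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("Cannot create cgroup hierarchy: " + dir);
        }
        if (!dir.canWrite()) {
            throw new IOException("Cgroup hierarchy not writable: " + dir);
        }
        return dir;
    }
}
```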







[jira] [Created] (YARN-5848) public/crossdomain.xml is problematic

2016-11-07 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created YARN-5848:
--

 Summary: public/crossdomain.xml is problematic
 Key: YARN-5848
 URL: https://issues.apache.org/jira/browse/YARN-5848
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.0.0-alpha2
Reporter: Allen Wittenauer


crossdomain.xml should really have an ASF header in it and live somewhere in 
the src directory.  There's zero reason for it to have a RAT exception, given 
that comments are possible in XML files.  It's also not in a standard Maven 
location, which should really be fixed.






[jira] [Resolved] (YARN-5842) spark job getting failed with memory not avail

2016-11-07 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved YARN-5842.

Resolution: Invalid

Please email u...@hadoop.apache.org for questions. JIRA is for reporting 
confirmed bugs.

> spark job getting failed with memory not avail
> --
>
> Key: YARN-5842
> URL: https://issues.apache.org/jira/browse/YARN-5842
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: applications
> Environment: spark running in emr 4.3 with hadoop 2.7 and spark 1.6.0
>Reporter: Mohamed Kajamoideen
>
> > config <- spark_config()
> > config$`sparklyr.shell.driver-memory` <- "4G"
> > config$`sparklyr.shell.executor-memory` <- "4G"
> > sc <- spark_connect(master = "yarn-client", config = config) 
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 27.0 failed 4 times, most recent failure: Lost task 0.3 in 
> stage 27.0 (TID 1941, ip- .ec2.internal): org.apache.spark.SparkException: 
> Values to assemble cannot be null.
>   at 
> org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:154)
>   at 
> org.apache.spark.ml.feature.VectorAssembler$$anonfun$assemble$1.apply(VectorAssembler.scala:137)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>   at 
> org.apache.spark.ml.feature.VectorAssembler$.assemble(VectorAssembler.scala:137)
>   at 
> org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:95)
>   at 
> org.apache.spark.ml.feature.VectorAssembler$$anonfun$3.apply(VectorAssembler.scala:94)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown
>  Sou






[jira] [Created] (YARN-5847) Revert health check exit code check

2016-11-07 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created YARN-5847:
--

 Summary: Revert health check exit code check
 Key: YARN-5847
 URL: https://issues.apache.org/jira/browse/YARN-5847
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha1
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0-alpha2


The earlier fix for YARN-5567 is reverted because it's not ideal to take the 
whole cluster down because of a bad script.






[jira] [Resolved] (YARN-5847) Revert health check exit code check

2016-11-07 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved YARN-5847.

Resolution: Fixed

> Revert health check exit code check
> ---
>
> Key: YARN-5847
> URL: https://issues.apache.org/jira/browse/YARN-5847
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0-alpha2
>
>
> The earlier fix for YARN-5567 is reverted because it's not ideal to take the 
> whole cluster down because of a bad script.






Re: Updated 2.8.0-SNAPSHOT artifact

2016-11-07 Thread Sangjin Lee
+1. Resetting the 2.8 effort and the branch at this point may be
counter-productive. IMO we should focus on resolving the remaining blockers
and getting it out the door. I also think that we should seriously consider
2.9 as well, as a fairly large number of changes have accumulated in
branch-2 (over branch-2.8).


Sangjin

On Fri, Nov 4, 2016 at 3:38 PM, Jason Lowe 
wrote:

> At this point my preference would be to do the most expeditious thing to
> release 2.8, whether that's sticking with the branch-2.8 we have today or
> re-cutting it on branch-2.  Doing a quick JIRA query, there's been almost
> 2,400 JIRAs resolved in 2.8.0 (1).  For many of them, it's well-past time
> they saw a release vehicle.  If re-cutting the branch means we have to wrap
> up a few extra things that are still in-progress on branch-2 or add a few
> more blockers to the list before we release then I'd rather stay where
> we're at and ship it ASAP.
>
> Jason
> (1) https://issues.apache.org/jira/issues/?jql=project%20in%
> 20%28hadoop%2C%20yarn%2C%20mapreduce%2C%20hdfs%29%
> 20and%20resolution%20%3D%20Fixed%20and%20fixVersion%20%3D%202.8.0
>
>
>
>
>
> On Tuesday, October 25, 2016 5:31 PM, Karthik Kambatla <
> ka...@cloudera.com> wrote:
>
>
>  Is there value in releasing current branch-2.8? Aren't we better off
> re-cutting the branch off of branch-2?
>
> On Tue, Oct 25, 2016 at 12:20 AM, Akira Ajisaka <
> ajisa...@oss.nttdata.co.jp>
> wrote:
>
> > It's almost a year since branch-2.8 has cut.
> > I'm thinking we need to release 2.8.0 ASAP.
> >
> > According to the following list, there are 5 blocker and 6 critical
> issues.
> > https://issues.apache.org/jira/issues/?filter=12334985
> >
> > Regards,
> > Akira
> >
> >
> > On 10/18/16 10:47, Brahma Reddy Battula wrote:
> >
> >> Hi Vinod,
> >>
> >> Any plan on first RC for branch-2.8 ? I think, it has been long time.
> >>
> >>
> >>
> >>
> >> --Brahma Reddy Battula
> >>
> >> -Original Message-
> >> From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org]
> >> Sent: 20 August 2016 00:56
> >> To: Jonathan Eagles
> >> Cc: common-...@hadoop.apache.org
> >> Subject: Re: Updated 2.8.0-SNAPSHOT artifact
> >>
> >> Jon,
> >>
> >> That is around the time when I branched 2.8, so I guess you were getting
> >> SNAPSHOT artifacts till then from the branch-2 nightly builds.
> >>
> >> If you need it, we can set up SNAPSHOT builds. Or just wait for the
> first
> >> RC, which is around the corner.
> >>
> >> +Vinod
> >>
> >> On Jul 28, 2016, at 4:27 PM, Jonathan Eagles  wrote:
> >>>
> >>> Latest snapshot is uploaded in Nov 2015, but checkins are still coming
> >>> in quite frequently.
> >>> https://repository.apache.org/content/repositories/snapshots/org/apach
> >>> e/hadoop/hadoop-yarn-api/
> >>>
> >>> Are there any plans to start producing updated SNAPSHOT artifacts for
> >>> current hadoop development lines?
> >>>
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> >> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >>
> >>
> >
> > -
> > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
> >
> >
>
>
>
>


Fwd: [VOTE] Merge YARN-3368 (new web UI) to trunk

2016-11-07 Thread Rohith Sharma K S
I just noticed that my vote mail was sent only to Wangda, and I forgot to
keep yarn-dev in cc. My bad :-(  I am forwarding my vote mail to yarn-dev.

Thanks & Regards
Rohith Sharma K S


-- Forwarded message --
From: Rohith Sharma K S 
Date: 3 November 2016 at 12:06
Subject: Re: [VOTE] Merge YARN-3368 (new web UI) to trunk
To: Wangda Tan 


+1

Built from the YARN-3368 branch and hosted it in a cluster. The new UI gives a
pretty good user experience.
I hosted the new web UI on the same port as the existing UI, and was able to
try out the Queue, Application and Nodes pages.


Thanks & Regards
Rohith Sharma K S

On 1 November 2016 at 04:23, Wangda Tan  wrote:

> YARN Devs,
>
> We propose to merge the YARN-3368 (YARN next generation web UI) development
> branch into trunk for better development, and would like to hear your
> thoughts before sending out the vote mail.
>
> The new UI will co-exist with the old YARN UI, by default it is disabled.
> Please refer to User documentation of the new YARN UI
>  -project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnUI2.md>
> for
> more details.
>
> In addition, there are two work-in-progress features that need the new UI
> to be merged to trunk for further development.
>
>   1) UI of YARN Timeline Server v2 (YARN-2928)
>   2) UI of YARN ResourceManager Federation (YARN-2915).
>
> *Status of YARN next generation web UI*
>
> Completed features
>
>- Cluster Overview Page
>- Scheduler page
>- Applications / Application / Application-attempts pages
>- Nodes / Node page
>
> Integration to YARN
>
>- Hosts new web UI in RM
>- Integrates to maven build / package
>
> Misc:
>
>- Added dependencies to LICENSE.txt/NOTICE.txt
>- Documented how to use it. (In hadoop-yarn-project/hadoop-yarn/hadoop-
>yarn-site/src/site/markdown/YarnUI2.md)
>
> Major items will finish on trunk:
>
>- Security support
>
> We have run the new UI in our internal cluster for more than 3 months; lots
> of people have tried the new UI, gave lots of valuable feedback, and
> reported suggestions / issues to us. We fixed many of them, so we now
> believe it is ready for wider folks to try.
>
> Merge JIRA for Jenkins is: https://issues.apache.org/jira/browse/YARN-4734
> .
> The latest Jenkins run
>  entId=15620808&page=com.atlassian.jira.plugin.system.
> issuetabpanels:comment-tabpanel#comment-15620808>
> gave
> +1.
>
> The vote will run for 7 days, ending Sun, 11/06/2016. Please feel free to
> comment if you have any questions/doubts. I'll start with my +1 (binding).
>
> Please share your thoughts about this.
>
> Thanks,
> Wangda
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-11-07 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/218/

[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4733. [YARN-3368] Initial commit of new 
YARN web UI. (wangda)
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4517. Add nodes page and fix bunch of 
license issues. (Varun Saxena
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. [YARN-3368] cleanup code base, 
integrate web UI related build
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4514. [YARN-3368] Cleanup hardcoded 
configurations, such as RM/ATS
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5019. [YARN-3368] Change urls in new 
YARN ui from camel casing to
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4515. [YARN-3368] Support hosting web UI 
framework inside YARN RM.
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5183. [YARN-3368] Support for responsive 
navbar when window is
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5161. [YARN-3368] Add Apache Hadoop logo 
in YarnUI home page. (Kai
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5321. [YARN-3368] Add resource usage for 
application by node
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-3334. [YARN-3368] Introduce REFRESH 
button in various UI pages
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5509. Build error due to preparing 
3.0.0-alpha2 deployment. (Kai
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5488. [YARN-3368] Applications table 
overflows beyond the page
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. Addendum patch to fix document. 
(Wangda Tan via Sunil G)
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. Addendum patch to fix license. 
(Wangda Tan via Sunil G)
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5504. [YARN-3368] Fix YARN UI build 
pom.xml (Sreenath Somarajapuram
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5583. [YARN-3368] Fix wrong paths in 
.gitignore (Sreenath
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5503. [YARN-3368] Add missing hidden 
files in webapp folder for
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. Addendum patch to fix ASF 
warnings. (Wangda Tan via Sunil G)
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5598. [YARN-3368] Fix create-release to 
be able to generate bits
[Nov 6, 2016 9:13:31 PM] (wangda)  YARN-4849. Addendum patch to fix javadocs. 
(Sunil G via wangda)
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5682. [YARN-3368] Fix maven build to 
keep all generated or
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5698. [YARN-3368] Launch new YARN UI 
under hadoop web app port.
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. Addendum patch to remove unwanted 
files from rat exclusions.
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5145. [YARN-3368] Move new YARN UI 
configuration to
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5741. [YARN-3368] Update UI2 
documentation for new UI2 path (Kai
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5779. [YARN-3368] Document limits/notes 
of the new YARN UI (Wangda
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5772. [YARN-3368] Replace old Hadoop 
logo with new one (Akhil P B
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5500. [YARN-3368]  ‘Master node' link 
under application tab is
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5497. [YARN-3368] Use different color 
for Undefined and Succeeded
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5490. [YARN-3368] Fix various alignment 
issues and broken
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5779. [YARN-3368] Addendum patch to 
document limits/notes of the
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5785. [YARN-3368] Accessing applications 
and containers list from
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-5804. New UI2 is not able to launch with 
jetty 9 upgrade post
[Nov 6, 2016 9:13:31 PM] (wangda) YARN-4849. Addendum patch to improve pom for 
yarn-ui. (Wangda Tan via
[Nov 7, 2016 2:16:31 AM] (aajisaka) HDFS-10970. Update jackson from 1.9.13 to 
2.x in hadoop-hdfs.
[Nov 7, 2016 2:19:21 AM] (aajisaka) MAPREDUCE-6790. Update jackson from 1.9.13 
to 2.x in hadoop-mapreduce.




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestEncryptionZones 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 

Timed out junit tests :

   
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation 
   org.apache.hadoop.tools.TestHadoopArchives 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/218/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/218/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/218/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-tru

[jira] [Created] (YARN-5846) Improve the fairscheduler attemptScheduler

2016-11-07 Thread zhengchenyu (JIRA)
zhengchenyu created YARN-5846:
-

 Summary: Improve the fairscheduler attemptScheduler 
 Key: YARN-5846
 URL: https://issues.apache.org/jira/browse/YARN-5846
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: fairscheduler
Affects Versions: 2.7.1
 Environment: CentOS-7.1
Reporter: zhengchenyu
Priority: Minor
 Fix For: 2.7.1


When we assign a container, we must consider two factors:
(1) sort the queues and applications, and select the proper request;
(2) then ensure this request's host is exactly this node (data locality),
or skip this loop.
This algorithm treats the sorting of queues and applications as the primary 
factor. When YARN considers data locality, for example with 
yarn.scheduler.fair.locality.threshold.node=1 and 
yarn.scheduler.fair.locality.threshold.rack=1 (or when 
yarn.scheduler.fair.locality-delay-rack-ms and 
yarn.scheduler.fair.locality-delay-node-ms are very large) and lots of 
applications are running, the process of assigning containers becomes very slow.
I think data locality is more important than the sequence of the queues and 
applications.
I want a new algorithm like this:
(1) when the resourcemanager accepts a new request, notify the RMNodeImpl 
and record the association between the RMNode and the request;
(2) when assigning containers for a node, assign containers directly from 
the RMNodeImpl's association between the RMNode and the requests;
(3) then consider the priority of queues and applications: within one 
RMNodeImpl object, sort the associated requests;
(4) the sorting in the current algorithm is also expensive; in particular, 
when lots of applications are running, lots of sorts are performed. So I think 
we should sort the queues and applications in a daemon thread, because a small 
amount of error in the queues' ordering is tolerable.
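
Steps (1) and (2) above could be sketched as a per-node index of outstanding 
requests (all class and method names here are hypothetical, not existing YARN 
APIs):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class NodeRequestIndex {
    // Hypothetical sketch: map each node to the resource requests that
    // name it, so per-node assignment can skip the global queue sort.
    private final Map<String, Deque<String>> byNode = new HashMap<>();

    // Step (1): on accepting a request, record the node -> request link.
    void recordRequest(String nodeId, String requestId) {
        byNode.computeIfAbsent(nodeId, k -> new ArrayDeque<>()).add(requestId);
    }

    // Step (2): on a node heartbeat, take the next request that asked for
    // this node directly, or null if none are waiting.
    String nextRequestFor(String nodeId) {
        Deque<String> q = byNode.get(nodeId);
        return (q == null || q.isEmpty()) ? null : q.poll();
    }
}
```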










[jira] [Created] (YARN-5845) Skip aclUpdated event publish to timelineserver or recovery

2016-11-07 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5845:
--

 Summary: Skip aclUpdated event publish to timelineserver or 
recovery
 Key: YARN-5845
 URL: https://issues.apache.org/jira/browse/YARN-5845
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt
Priority: Critical


Currently the ACL update event is sent to the timeline server even on recovery 
({{RMAppManager#createAndPopulateNewRMApp}}).
With 10K completed applications, when the RM is restarted, 10K ACL-updated 
events are added to the timeline server, causing unnecessary overloading of the 
system.

{code}
String appViewACLs = submissionContext.getAMContainerSpec()
.getApplicationACLs().get(ApplicationAccessType.VIEW_APP);
rmContext.getSystemMetricsPublisher().appACLsUpdated(
application, appViewACLs, System.currentTimeMillis());
{code}
*Events on each RM restart*
{noformat}
"events": [{
"timestamp": 1478520292543,
"eventtype": "YARN_APPLICATION_ACLS_UPDATED",
"eventinfo": {}
}, {
"timestamp": 1478519600537,
"eventtype": "YARN_APPLICATION_ACLS_UPDATED",
"eventinfo": {}
}, {
"timestamp": 1478519557101,
"eventtype": "YARN_APPLICATION_ACLS_UPDATED",
"eventinfo": {}
}, 
{noformat}
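
A minimal sketch of the proposed guard, assuming a recovery flag is available 
at the point where the app is created (the isRecovered parameter and interface 
here are illustrative, not the actual RMAppManager code):

```java
public class AclPublishSketch {
    interface MetricsPublisher {
        void appACLsUpdated(String app, String acls, long ts);
    }

    // Only publish the ACL-updated event for genuinely new applications;
    // on recovery the event was already published in a previous RM life.
    static boolean publishAclsIfNew(boolean isRecovered, MetricsPublisher pub,
                                    String app, String acls) {
        if (isRecovered) {
            return false;  // skip: avoids 10K duplicate events on restart
        }
        pub.appACLsUpdated(app, acls, System.currentTimeMillis());
        return true;
    }
}
```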






Re: Updated 2.8.0-SNAPSHOT artifact

2016-11-07 Thread Steve Loughran

> On 4 Nov 2016, at 22:38, Jason Lowe  wrote:
> 
> At this point my preference would be to do the most expeditious thing to 
> release 2.8, whether that's sticking with the branch-2.8 we have today or 
> re-cutting it on branch-2.  Doing a quick JIRA query, there's been almost 
> 2,400 JIRAs resolved in 2.8.0 (1).  For many of them, it's well-past time 
> they saw a release vehicle.  If re-cutting the branch means we have to wrap 
> up a few extra things that are still in-progress on branch-2 or add a few 
> more blockers to the list before we release then I'd rather stay where we're 
> at and ship it ASAP.


We have to deal with an upgrade to the AWS SDK that removes the org.json code; 
ASF licensing now prevents us from shipping that.
> 
> Jason
> (1) 
> https://issues.apache.org/jira/issues/?jql=project%20in%20%28hadoop%2C%20yarn%2C%20mapreduce%2C%20hdfs%29%20and%20resolution%20%3D%20Fixed%20and%20fixVersion%20%3D%202.8.0
> 
> 
> 
> 
> 
>On Tuesday, October 25, 2016 5:31 PM, Karthik Kambatla 
>  wrote:
> 
> 
> Is there value in releasing current branch-2.8? Aren't we better off
> re-cutting the branch off of branch-2?
> 
> On Tue, Oct 25, 2016 at 12:20 AM, Akira Ajisaka 
> wrote:
> 
>> It's almost a year since branch-2.8 has cut.
>> I'm thinking we need to release 2.8.0 ASAP.
>> 
>> According to the following list, there are 5 blocker and 6 critical issues.
>> https://issues.apache.org/jira/issues/?filter=12334985
>> 
>> Regards,
>> Akira
>> 
>> 
>> On 10/18/16 10:47, Brahma Reddy Battula wrote:
>> 
>>> Hi Vinod,
>>> 
>>> Any plan on first RC for branch-2.8 ? I think, it has been long time.
>>> 
>>> 
>>> 
>>> 
>>> --Brahma Reddy Battula
>>> 
>>> -Original Message-
>>> From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org]
>>> Sent: 20 August 2016 00:56
>>> To: Jonathan Eagles
>>> Cc: common-...@hadoop.apache.org
>>> Subject: Re: Updated 2.8.0-SNAPSHOT artifact
>>> 
>>> Jon,
>>> 
>>> That is around the time when I branched 2.8, so I guess you were getting
>>> SNAPSHOT artifacts till then from the branch-2 nightly builds.
>>> 
>>> If you need it, we can set up SNAPSHOT builds. Or just wait for the first
>>> RC, which is around the corner.
>>> 
>>> +Vinod
>>> 
>>> On Jul 28, 2016, at 4:27 PM, Jonathan Eagles  wrote:
 
 Latest snapshot is uploaded in Nov 2015, but checkins are still coming
 in quite frequently.
 https://repository.apache.org/content/repositories/snapshots/org/apach
 e/hadoop/hadoop-yarn-api/
 
 Are there any plans to start producing updated SNAPSHOT artifacts for
 current hadoop development lines?
 
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>> 
>>> 
>> 
>> -
>> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>> 
>> 
> 
> 





[jira] [Created] (YARN-5844) fair ordering policy with DRF

2016-11-07 Thread kyungwan nam (JIRA)
kyungwan nam created YARN-5844:
--

 Summary: fair ordering policy with DRF
 Key: YARN-5844
 URL: https://issues.apache.org/jira/browse/YARN-5844
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: kyungwan nam


FairOrderingPolicy, added in YARN-3319, implements memory-based fair sharing 
and therefore does not respect vcores demand.
Multi-resource fair sharing with Dominant Resource Fairness should be added.
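
For context, DRF orders schedulable entities by their dominant share: the 
maximum, over all resource types, of usage divided by cluster capacity. A 
self-contained sketch of the computation (not the actual FairOrderingPolicy or 
Resource API):

```java
public class DominantShare {
    // Dominant share = max over resource types of usage[i] / capacity[i].
    // An app using <4096 MB, 8 vcores> on a <10240 MB, 16 vcores> cluster
    // has dominant share 0.5 (vcores), even though its memory share is 0.4.
    static double dominantShare(double[] usage, double[] capacity) {
        double max = 0.0;
        for (int i = 0; i < usage.length; i++) {
            max = Math.max(max, usage[i] / capacity[i]);
        }
        return max;
    }
}
```

A DRF-aware comparator would rank apps by this value instead of by memory 
share alone.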


