[jira] [Commented] (YARN-2437) start-yarn.sh/stop-yarn should give info

2014-12-06 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236720#comment-14236720
 ] 

Tsuyoshi OZAWA commented on YARN-2437:
--

Looks good to me.

 start-yarn.sh/stop-yarn should give info
 

 Key: YARN-2437
 URL: https://issues.apache.org/jira/browse/YARN-2437
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scripts
Reporter: Allen Wittenauer
Assignee: Varun Saxena
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2437.001.patch, YARN-2437.002.patch, YARN-2437.patch


 With the merger and cleanup of the daemon launch code, yarn-daemons.sh no 
 longer prints "Starting" information. This should be made more of an analog 
 of start-dfs.sh/stop-dfs.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2914) Potential race condition in SharedCacheUploaderMetrics/CleanerMetrics/ClientSCMMetrics#getInstance()

2014-12-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236724#comment-14236724
 ] 

Varun Saxena commented on YARN-2914:


Thanks for reviewing the latest patch [~ozawa], [~tedyu], [~ctrezzo] and 
[~sjlee0] .

 Potential race condition in 
 SharedCacheUploaderMetrics/CleanerMetrics/ClientSCMMetrics#getInstance()
 

 Key: YARN-2914
 URL: https://issues.apache.org/jira/browse/YARN-2914
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ted Yu
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: YARN-2914.002.patch, YARN-2914.patch


 {code}
   public static ClientSCMMetrics getInstance() {
 ClientSCMMetrics topMetrics = Singleton.INSTANCE.impl;
 if (topMetrics == null) {
   throw new IllegalStateException(
 {code}
 getInstance() doesn't hold a lock on Singleton.this, so it may throw 
 IllegalStateException prematurely.
 [~ctrezzo] reported that SharedCacheUploaderMetrics has the same kind of 
 race condition.
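A minimal sketch of the usual fix for this kind of race, assuming nothing about the actual patch: have getInstance() synchronize on the same monitor that guards initialization of {{impl}}. The class and method names below are illustrative, not the real SCM metrics code.

```java
public class ClientSCMMetricsSketch {

    private enum Singleton {
        INSTANCE;

        private ClientSCMMetricsSketch impl;  // guarded by the INSTANCE monitor

        synchronized ClientSCMMetricsSketch init() {
            if (impl == null) {
                impl = new ClientSCMMetricsSketch();
            }
            return impl;
        }

        synchronized ClientSCMMetricsSketch get() {
            if (impl == null) {
                // only thrown when init() really has not run yet
                throw new IllegalStateException("metrics not initialized");
            }
            return impl;
        }
    }

    public static ClientSCMMetricsSketch initSingleton() {
        return Singleton.INSTANCE.init();
    }

    public static ClientSCMMetricsSketch getInstance() {
        // taking the same lock as init() closes the race on impl
        return Singleton.INSTANCE.get();
    }

    public static void main(String[] args) {
        initSingleton();
        System.out.println(getInstance() == initSingleton());  // prints true
    }
}
```

Without the shared lock, a reader can observe a stale null for impl even after another thread has initialized it; with it, get() either sees the initialized instance or correctly reports that init() has not run.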





[jira] [Updated] (YARN-2921) MockRM#waitForState methods can be too slow and flaky

2014-12-06 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2921:
-
Attachment: YARN-2921.002.patch

Updating a patch based on a discussion with Karthik.

 MockRM#waitForState methods can be too slow and flaky
 -

 Key: YARN-2921
 URL: https://issues.apache.org/jira/browse/YARN-2921
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2921.001.patch, YARN-2921.002.patch


 MockRM#waitForState methods currently sleep for too long (2 seconds and 1 
 second). This leads to slow tests and sometimes failures if the 
 App/AppAttempt moves to another state. 
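A hedged sketch of the general direction such a fix takes: poll the state with a short interval and return as soon as it is reached, instead of sleeping for a fixed 1-2 seconds per check. The names below are illustrative, not MockRM's actual API.

```java
import java.util.function.Supplier;

public class WaitForState {

    /** Poll the condition every 50 ms until it holds or timeoutMs elapses. */
    static boolean waitForCondition(Supplier<Boolean> condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;          // return as soon as the state is reached
            }
            Thread.sleep(50);         // short poll instead of a fixed 1-2 s sleep
        }
        return condition.get();       // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean reached = waitForCondition(() -> true, 5000);
        // the wait returns almost immediately rather than after a fixed sleep
        System.out.println(reached && System.currentTimeMillis() - start < 1000);
    }
}
```

This also reduces flakiness: a transient state is far less likely to be missed with a 50 ms poll than with a 1-2 s one.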





[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236734#comment-14236734
 ] 

Hudson commented on YARN-2461:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/27/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.
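The practical effect of such a typo can be sketched with plain string keys. Both key values below are made up for illustration; the real constant lives in YarnConfiguration.

```java
import java.util.HashMap;
import java.util.Map;

public class PropertyTypo {
    // Illustrative keys only: the constant with the doubled period
    // never matches the documented key that users actually set.
    static final String TYPO_KEY = "yarn.nodemanager..smaps-based-rss.enabled";
    static final String GOOD_KEY = "yarn.nodemanager.smaps-based-rss.enabled";

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(GOOD_KEY, "true");             // user sets the documented key
        System.out.println(conf.get(TYPO_KEY)); // prints null: lookup misses
    }
}
```

The setting silently falls back to its default, which is why a one-character fix warrants its own JIRA.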





[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236737#comment-14236737
 ] 

Hudson commented on YARN-2056:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/27/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level
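As a sketch of how the committed feature is used: the patch adds a per-queue switch in capacity-scheduler.xml. The property name below follows the patch's disable_preemption suffix as best I can tell, and the queue path root.myqueue is illustrative.

```xml
<!-- capacity-scheduler.xml: exempt one queue from preemption -->
<property>
  <name>yarn.scheduler.capacity.root.myqueue.disable_preemption</name>
  <value>true</value>
</property>
```

Containers in root.myqueue are then skipped by the ProportionalCapacityPreemptionPolicy, while other queues remain preemptable.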





[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236735#comment-14236735
 ] 

Hudson commented on YARN-2869:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/27/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, the capacity scheduler doesn't trim sub-queue names when parsing 
 queue names. For example, the configuration
 {code}
 <configuration>
   <property>
     <name>...root.queues</name>
     <value> a, b  , c</value>
   </property>
   <property>
     <name>...root.b.capacity</name>
     <value>100</value>
   </property>
   ...
 </configuration>
 {code}
 will fail with the error:
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 {code}
 It will try to find queues named " a", " b  ", and " c", which is 
 apparently wrong; we should trim these sub-queue names.
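The fix's idea can be sketched in a few lines: trim each sub-queue name when splitting the comma-separated list. The class and method names below are illustrative, not the actual CapacitySchedulerConfiguration code.

```java
import java.util.Arrays;

public class TrimQueueNames {

    /** Split a comma-separated queue list and trim each sub-queue name. */
    static String[] parseQueues(String value) {
        return Arrays.stream(value.split(","))
                     .map(String::trim)        // " b  " -> "b"
                     .toArray(String[]::new);
    }

    public static void main(String[] args) {
        // the untrimmed list from the issue's example configuration
        System.out.println(Arrays.toString(parseQueues(" a, b  , c")));  // [a, b, c]
    }
}
```

With trimming, " a" and "a" resolve to the same queue, so per-queue settings like root.b.capacity line up with the names in root.queues.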





[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236745#comment-14236745
 ] 

Hudson commented on YARN-2461:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #766 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/766/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.





[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236748#comment-14236748
 ] 

Hudson commented on YARN-2056:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #766 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/766/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level





[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236746#comment-14236746
 ] 

Hudson commented on YARN-2869:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #766 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/766/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, the capacity scheduler doesn't trim sub-queue names when parsing 
 queue names. For example, the configuration
 {code}
 <configuration>
   <property>
     <name>...root.queues</name>
     <value> a, b  , c</value>
   </property>
   <property>
     <name>...root.b.capacity</name>
     <value>100</value>
   </property>
   ...
 </configuration>
 {code}
 will fail with the error:
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 {code}
 It will try to find queues named " a", " b  ", and " c", which is 
 apparently wrong; we should trim these sub-queue names.





[jira] [Commented] (YARN-2517) Implement TimelineClientAsync

2014-12-06 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236763#comment-14236763
 ] 

Tsuyoshi OZAWA commented on YARN-2517:
--

Thanks for your comments, [~zjshen], [~sjlee0], [~mitdesai], [~hitesh]. 

If we don't care about communication failures, the fire-and-forget approach in 
putEntitiesAsync sounds reasonable to me. In that case, it's also reasonable to 
use Future without callbacks. The v2 design is based on Future, so I'd 
appreciate any feedback on it. One possible API to add is flush(), to ensure 
all pending requests are sent. It can be added in a separate JIRA.
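The Future-plus-flush() shape being discussed can be sketched as below. This is a hedged illustration of the design idea, not the actual TimelineClient API; the class and method names are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TimelineClientAsyncSketch {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final List<Future<Void>> pending = new ArrayList<>();

    /** Fire-and-forget put; the returned Future reports eventual completion. */
    public synchronized Future<Void> putEntityAsync(String entity) {
        Future<Void> f = executor.submit(() -> {
            // a real client would POST the entity to the timeline server here
            return (Void) null;
        });
        pending.add(f);
        return f;
    }

    /** Block until every pending request has completed. */
    public synchronized void flush() throws Exception {
        for (Future<Void> f : pending) {
            f.get();
        }
        pending.clear();
    }

    public void stop() {
        executor.shutdown();
    }

    public static void main(String[] args) throws Exception {
        TimelineClientAsyncSketch client = new TimelineClientAsyncSketch();
        Future<Void> f = client.putEntityAsync("entity-1");
        client.flush();                    // all pending requests are done now
        System.out.println(f.isDone());    // prints true
        client.stop();
    }
}
```

Callers that don't care about failures can simply ignore the returned Future; callers that do can call flush() (or Future.get()) at a convenient barrier point.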

 Implement TimelineClientAsync
 -

 Key: YARN-2517
 URL: https://issues.apache.org/jira/browse/YARN-2517
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Zhijie Shen
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2517.1.patch, YARN-2517.2.patch


 In some scenarios, we'd like to put timeline entities in another thread so as 
 not to block the current one.
 It would be good to have a TimelineClientAsync like AMRMClientAsync and 
 NMClientAsync. It can buffer entities, put them in a separate thread, and 
 have callbacks to handle the responses.





[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236796#comment-14236796
 ] 

Hudson commented on YARN-2056:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/27/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level





[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236793#comment-14236793
 ] 

Hudson commented on YARN-2461:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/27/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.





[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236794#comment-14236794
 ] 

Hudson commented on YARN-2869:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/27/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, the capacity scheduler doesn't trim sub-queue names when parsing 
 queue names. For example, the configuration
 {code}
 <configuration>
   <property>
     <name>...root.queues</name>
     <value> a, b  , c</value>
   </property>
   <property>
     <name>...root.b.capacity</name>
     <value>100</value>
   </property>
   ...
 </configuration>
 {code}
 will fail with the error:
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 {code}
 It will try to find queues named " a", " b  ", and " c", which is 
 apparently wrong; we should trim these sub-queue names.





[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236804#comment-14236804
 ] 

Hudson commented on YARN-2461:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1959 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1959/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.





[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236807#comment-14236807
 ] 

Hudson commented on YARN-2056:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1959 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1959/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level





[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236805#comment-14236805
 ] 

Hudson commented on YARN-2869:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1959 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1959/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* hadoop-yarn-project/CHANGES.txt


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, the capacity scheduler doesn't trim sub-queue names when parsing 
 queue names. For example, the configuration
 {code}
 <configuration>
   <property>
     <name>...root.queues</name>
     <value> a, b  , c</value>
   </property>
   <property>
     <name>...root.b.capacity</name>
     <value>100</value>
   </property>
   ...
 </configuration>
 {code}
 will fail with the error:
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 {code}
 It will try to find queues named " a", " b  ", and " c", which is 
 apparently wrong; we should trim these sub-queue names.





[jira] [Updated] (YARN-1723) AMRMClientAsync missing blacklist addition and removal functionality

2014-12-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bartosz Ɓugowski updated YARN-1723:
---
Attachment: YARN-1723.1.patch

Tests already exist for AMRMClient.

 AMRMClientAsync missing blacklist addition and removal functionality
 

 Key: YARN-1723
 URL: https://issues.apache.org/jira/browse/YARN-1723
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Bikas Saha
 Fix For: 2.7.0

 Attachments: YARN-1723.1.patch








[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236827#comment-14236827
 ] 

Hudson commented on YARN-2461:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1981 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1981/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.





[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236828#comment-14236828
 ] 

Hudson commented on YARN-2869:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1981 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1981/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, the capacity scheduler doesn't trim sub-queue names when parsing 
 queue names. For example, the configuration
 {code}
 <configuration>
   <property>
     <name>...root.queues</name>
     <value> a, b  , c</value>
   </property>
   <property>
     <name>...root.b.capacity</name>
     <value>100</value>
   </property>
   ...
 </configuration>
 {code}
 will fail with the error:
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 {code}
 It will try to find queues named " a", " b  ", and " c", which is 
 apparently wrong; we should trim these sub-queue names.





[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14236830#comment-14236830
 ] 

Hudson commented on YARN-2056:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1981 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1981/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2461) Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236839#comment-14236839
 ] 

Hudson commented on YARN-2461:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/27/])
YARN-2461. Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in 
YarnConfiguration. (rchiang via rkanter) (rkanter: rev 
3c72f54ef581b4f3e2eb84e1e24e459c38d3f769)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


 Fix PROCFS_USE_SMAPS_BASED_RSS_ENABLED property in YarnConfiguration
 

 Key: YARN-2461
 URL: https://issues.apache.org/jira/browse/YARN-2461
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: newbie
 Fix For: 2.7.0

 Attachments: YARN-2461-01.patch


 The property PROCFS_USE_SMAPS_BASED_RSS_ENABLED has an extra period.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2869) CapacityScheduler should trim sub queue names when parse configuration

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236840#comment-14236840
 ] 

Hudson commented on YARN-2869:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/27/])
YARN-2869. CapacityScheduler should trim sub queue names when parse 
configuration. Contributed by Wangda Tan (jianhe: rev 
e69af836f34f16fba565ab112c9bf0d367675b16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueParsing.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java


 CapacityScheduler should trim sub queue names when parse configuration
 --

 Key: YARN-2869
 URL: https://issues.apache.org/jira/browse/YARN-2869
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler, resourcemanager
Reporter: Wangda Tan
Assignee: Wangda Tan
 Fix For: 2.7.0

 Attachments: YARN-2869-1.patch, YARN-2869-2.patch, YARN-2869-3.patch


 Currently, capacity scheduler doesn't trim sub queue name when parsing queue 
 names, for example, the configuration
 {code}
 <configuration>
  <property>
   <name>...root.queues</name>
   <value> a, b  , c</value>
  </property>
  <property>
   <name>...root.b.capacity</name>
   <value>100</value>
  </property>
  ...
 </configuration>
 {code}
 Will fail with error: 
 {code}
 java.lang.IllegalArgumentException: Illegal capacity of -1.0 for queue root. a
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:332)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.getCapacityFromConf(LeafQueue.java:196)
 
 {code}
 It will try to find queues with names  a,  b  , and  c, which is 
 apparently wrong; we should trim these sub-queue names.
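A minimal sketch of the trimming argued for above, assuming the queue list is a plain comma-separated string; the class and helper names are hypothetical, not the actual patch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper illustrating the fix YARN-2869 asks for: split the
// configured queue list on commas and trim each sub-queue name, so
// " a, b  , c" yields [a, b, c] instead of [ a,  b  ,  c].
public class QueueNameTrim {
    public static List<String> parseQueues(String configured) {
        List<String> queues = new ArrayList<>();
        for (String name : configured.split(",")) {
            String trimmed = name.trim();
            if (!trimmed.isEmpty()) {   // skip empty tokens from stray commas
                queues.add(trimmed);
            }
        }
        return queues;
    }

    public static void main(String[] args) {
        System.out.println(parseQueues(" a, b  , c")); // prints [a, b, c]
    }
}
```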



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2056) Disable preemption at Queue level

2014-12-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236842#comment-14236842
 ] 

Hudson commented on YARN-2056:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #27 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/27/])
YARN-2056. Disable preemption at Queue level. Contributed by Eric Payne (jlowe: 
rev 4b130821995a3cfe20c71e38e0f63294085c0491)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java


 Disable preemption at Queue level
 -

 Key: YARN-2056
 URL: https://issues.apache.org/jira/browse/YARN-2056
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Affects Versions: 2.4.0
Reporter: Mayank Bansal
Assignee: Eric Payne
 Fix For: 2.7.0

 Attachments: YARN-2056.201408202039.txt, YARN-2056.201408260128.txt, 
 YARN-2056.201408310117.txt, YARN-2056.201409022208.txt, 
 YARN-2056.201409181916.txt, YARN-2056.201409210049.txt, 
 YARN-2056.201409232329.txt, YARN-2056.201409242210.txt, 
 YARN-2056.201410132225.txt, YARN-2056.201410141330.txt, 
 YARN-2056.201410232244.txt, YARN-2056.201410311746.txt, 
 YARN-2056.201411041635.txt, YARN-2056.201411072153.txt, 
 YARN-2056.201411122305.txt, YARN-2056.201411132215.txt, 
 YARN-2056.201411142002.txt


 We need to be able to disable preemption at individual queue level



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1723) AMRMClientAsync missing blacklist addition and removal functionality

2014-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236854#comment-14236854
 ] 

Hadoop QA commented on YARN-1723:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685545/YARN-1723.1.patch
  against trunk revision e227fb8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client:

  
org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6022//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6022//console

This message is automatically generated.

 AMRMClientAsync missing blacklist addition and removal functionality
 

 Key: YARN-1723
 URL: https://issues.apache.org/jira/browse/YARN-1723
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.2.0
Reporter: Bikas Saha
 Fix For: 2.7.0

 Attachments: YARN-1723.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2921) MockRM#waitForState methods can be too slow and flaky

2014-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236872#comment-14236872
 ] 

Hadoop QA commented on YARN-2921:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685534/YARN-2921.002.patch
  against trunk revision e227fb8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels
  
org.apache.hadoop.yarn.server.resourcemanager.security.TestAMRMTokens
  
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationCleanup
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueMappings
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
  org.apache.hadoop.yarn.server.resourcemanager.TestRMHA
  
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokens
  
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
  
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage
  
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
  
org.apache.hadoop.yarn.server.resourcemanager.TestMoveApplication
  
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes

  The following test timeouts occurred in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart
org.apache.hadoop.yarn.server.resourcemanager.TestRM
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerQueueACLs
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerQueueACLs
org.apache.hadoop.yarn.server.resourcemanager.TestApplicationMasterService
org.apache.hadoop.yarn.server.resourcemanager.TestFifoScheduler

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6021//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6021//console

This message is automatically generated.

 MockRM#waitForState methods can be too slow and flaky
 -

 Key: YARN-2921
 URL: https://issues.apache.org/jira/browse/YARN-2921
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2921.001.patch, YARN-2921.002.patch


 MockRM#waitForState methods currently sleep for too long (2 seconds and 1 
 second). This leads to slow tests and sometimes failures if the 
 App/AppAttempt moves to another state. 
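The fix direction described above can be sketched as a poll-with-deadline loop: a short poll interval bounded by an overall timeout returns as soon as the expected state is reached, instead of sleeping 1-2 seconds per check. Names and intervals here are illustrative assumptions, not the actual MockRM patch:

```java
import java.util.function.Supplier;

// Hypothetical wait helper: polls a state check frequently and gives up
// after an overall deadline, so tests neither oversleep nor miss a state
// the App/AppAttempt passes through quickly.
public class WaitForState {
    public static boolean waitFor(Supplier<Boolean> reachedState,
                                  long pollMillis, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (reachedState.get()) {
                return true;              // state reached: return immediately
            }
            Thread.sleep(pollMillis);     // short sleep, e.g. 100 ms
        }
        return reachedState.get();        // one final check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        boolean ok = waitFor(() -> true, 100, 5000);
        System.out.println(ok + " after " + (System.currentTimeMillis() - start) + " ms");
    }
}
```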



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2921) MockRM#waitForState methods can be too slow and flaky

2014-12-06 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated YARN-2921:
-
Attachment: YARN-2921.003.patch

Oh, I found that the sleep duration is wrong. Fixing the sleep time.

 MockRM#waitForState methods can be too slow and flaky
 -

 Key: YARN-2921
 URL: https://issues.apache.org/jira/browse/YARN-2921
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2921.001.patch, YARN-2921.002.patch, 
 YARN-2921.003.patch


 MockRM#waitForState methods currently sleep for too long (2 seconds and 1 
 second). This leads to slow tests and sometimes failures if the 
 App/AppAttempt moves to another state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2921) MockRM#waitForState methods can be too slow and flaky

2014-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236913#comment-14236913
 ] 

Hadoop QA commented on YARN-2921:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685553/YARN-2921.003.patch
  against trunk revision e227fb8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage
  
org.apache.hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6023//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6023//console

This message is automatically generated.

 MockRM#waitForState methods can be too slow and flaky
 -

 Key: YARN-2921
 URL: https://issues.apache.org/jira/browse/YARN-2921
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Karthik Kambatla
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2921.001.patch, YARN-2921.002.patch, 
 YARN-2921.003.patch


 MockRM#waitForState methods currently sleep for too long (2 seconds and 1 
 second). This leads to slow tests and sometimes failures if the 
 App/AppAttempt moves to another state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2902) Killing a container that is localizing can orphan resources in the DOWNLOADING state

2014-12-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2902:
---
Attachment: YARN-2902.patch

 Killing a container that is localizing can orphan resources in the 
 DOWNLOADING state
 

 Key: YARN-2902
 URL: https://issues.apache.org/jira/browse/YARN-2902
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2902.patch


 If a container is in the process of localizing when it is stopped/killed then 
 resources are left in the DOWNLOADING state.  If no other container comes 
 along and requests these resources they linger around with no reference 
 counts but aren't cleaned up during normal cache cleanup scans since it will 
 never delete resources in the DOWNLOADING state even if their reference count 
 is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2902) Killing a container that is localizing can orphan resources in the DOWNLOADING state

2014-12-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236917#comment-14236917
 ] 

Varun Saxena commented on YARN-2902:


[~jlowe], kindly review.

 Killing a container that is localizing can orphan resources in the 
 DOWNLOADING state
 

 Key: YARN-2902
 URL: https://issues.apache.org/jira/browse/YARN-2902
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2902.patch


 If a container is in the process of localizing when it is stopped/killed then 
 resources are left in the DOWNLOADING state.  If no other container comes 
 along and requests these resources they linger around with no reference 
 counts but aren't cleaned up during normal cache cleanup scans since it will 
 never delete resources in the DOWNLOADING state even if their reference count 
 is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2902) Killing a container that is localizing can orphan resources in the DOWNLOADING state

2014-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236934#comment-14236934
 ] 

Hadoop QA commented on YARN-2902:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685562/YARN-2902.patch
  against trunk revision e227fb8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6024//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6024//console

This message is automatically generated.

 Killing a container that is localizing can orphan resources in the 
 DOWNLOADING state
 

 Key: YARN-2902
 URL: https://issues.apache.org/jira/browse/YARN-2902
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2902.patch


 If a container is in the process of localizing when it is stopped/killed then 
 resources are left in the DOWNLOADING state.  If no other container comes 
 along and requests these resources they linger around with no reference 
 counts but aren't cleaned up during normal cache cleanup scans since it will 
 never delete resources in the DOWNLOADING state even if their reference count 
 is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2902) Killing a container that is localizing can orphan resources in the DOWNLOADING state

2014-12-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236943#comment-14236943
 ] 

Varun Saxena commented on YARN-2902:


Looking further at the code, I think we can call 
*LocalResourcesTrackerImpl#remove* irrespective of whether the cache target 
size set by the *yarn.nodemanager.localizer.cache.target-size-mb* config has 
been reached or not.
This is because in the piece of code below, which is called from 
ResourceLocalizationService#handleCacheCleanup, we check the cumulative size of 
all the resources (which have a reference count of 0) against the cache target 
size. And for a resource whose state is *DOWNLOADING*, a call to 
LocalizedResource#getSize will always return -1, because the size is only 
updated once the state changes to *LOCALIZED*.

{code:title=ResourceRetentionSet.java|borderStyle=solid}
public void addResources(LocalResourcesTracker newTracker) {
    for (LocalizedResource resource : newTracker) {
      currentSize += resource.getSize();
      if (resource.getRefCount() > 0) {
        // always retain resources in use
        continue;
      }
      retain.put(resource, newTracker);
    }
    for (Iterator<Map.Entry<LocalizedResource,LocalResourcesTracker>> i =
           retain.entrySet().iterator();
         currentSize - delSize > targetSize && i.hasNext();) {
      Map.Entry<LocalizedResource,LocalResourcesTracker> rsrc = i.next();
      LocalizedResource resource = rsrc.getKey();
      LocalResourcesTracker tracker = rsrc.getValue();
      if (tracker.remove(resource, delService)) {
        delSize += resource.getSize();
        i.remove();
      }
    }
  }
{code}

 Killing a container that is localizing can orphan resources in the 
 DOWNLOADING state
 

 Key: YARN-2902
 URL: https://issues.apache.org/jira/browse/YARN-2902
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: nodemanager
Affects Versions: 2.5.0
Reporter: Jason Lowe
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2902.patch


 If a container is in the process of localizing when it is stopped/killed then 
 resources are left in the DOWNLOADING state.  If no other container comes 
 along and requests these resources they linger around with no reference 
 counts but aren't cleaned up during normal cache cleanup scans since it will 
 never delete resources in the DOWNLOADING state even if their reference count 
 is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2928) Application Timeline Server (ATS) next gen: phase 1

2014-12-06 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236959#comment-14236959
 ] 

Vinod Kumar Vavilapalli commented on YARN-2928:
---

Thanks a bunch for filing this [~sjlee0]! I see a design doc is in order. I'll 
get the folks who have been working on YARN-1530 to help this transition with as 
much code and API reuse as possible, and with the flexibility to go beyond that 
in addressing things like scalability and newer requirements.

 Application Timeline Server (ATS) next gen: phase 1
 ---

 Key: YARN-2928
 URL: https://issues.apache.org/jira/browse/YARN-2928
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: timelineserver
Reporter: Sangjin Lee
Assignee: Sangjin Lee

 We have the application timeline server implemented in YARN per YARN-1530 and 
 YARN-321. Although it is a great feature, we have recognized several critical 
 issues and features that need to be addressed.
 This JIRA proposes the design and implementation changes to address those. 
 This is phase 1 of this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2927) YARN InMemorySCMStore properties need fixing

2014-12-06 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-2927:
---
Issue Type: Sub-task  (was: Bug)
Parent: YARN-1492

 YARN InMemorySCMStore properties need fixing
 

 Key: YARN-2927
 URL: https://issues.apache.org/jira/browse/YARN-2927
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: 2.6.0
Reporter: Ray Chiang
Assignee: Ray Chiang
  Labels: newbie, supportability
 Attachments: YARN-2927.001.patch, YARN-2927.002.patch


 I see these properties in the yarn-default.xml file:
   yarn.sharedcache.store.in-memory.check-period-mins
   yarn.sharedcache.store.in-memory.initial-delay-mins
   yarn.sharedcache.store.in-memory.staleness-period-mins
 YarnConfiguration looks like it's missing some properties:
   public static final String SHARED_CACHE_PREFIX = "yarn.sharedcache.";
   public static final String SCM_STORE_PREFIX = SHARED_CACHE_PREFIX + 
 "store.";
   public static final String IN_MEMORY_STORE_PREFIX = SHARED_CACHE_PREFIX + 
 "in-memory.";
   public static final String IN_MEMORY_STALENESS_PERIOD_MINS = 
 IN_MEMORY_STORE_PREFIX + "staleness-period-mins";
 It looks like the definition for IN_MEMORY_STORE_PREFIX should be:
   public static final String IN_MEMORY_STORE_PREFIX = SCM_STORE_PREFIX + 
 "in-memory.";
 Just to be clear, there are properties that exist in yarn-default.xml that 
 are effectively misspelled in the *Java* file, not the .xml file.  This is 
 similar to YARN-2461 and MAPREDUCE-6087.
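A small sketch of the composition before and after the suggested correction, showing the full property key each prefix produces; the class and the `_BUGGY`/`_FIXED` constant names are illustrative, not from the actual source:

```java
// Illustrates the effect of the YARN-2927 fix: the buggy prefix skips the
// "store." segment, so the resulting key never matches the one shipped in
// yarn-default.xml.
public class ScmPrefixes {
    public static final String SHARED_CACHE_PREFIX = "yarn.sharedcache.";
    public static final String SCM_STORE_PREFIX = SHARED_CACHE_PREFIX + "store.";

    // Buggy composition: misses the "store." segment.
    public static final String IN_MEMORY_STORE_PREFIX_BUGGY =
        SHARED_CACHE_PREFIX + "in-memory.";
    // Fixed composition: matches the keys in yarn-default.xml.
    public static final String IN_MEMORY_STORE_PREFIX_FIXED =
        SCM_STORE_PREFIX + "in-memory.";

    public static void main(String[] args) {
        // yarn.sharedcache.in-memory.staleness-period-mins (no match)
        System.out.println(IN_MEMORY_STORE_PREFIX_BUGGY + "staleness-period-mins");
        // yarn.sharedcache.store.in-memory.staleness-period-mins (matches)
        System.out.println(IN_MEMORY_STORE_PREFIX_FIXED + "staleness-period-mins");
    }
}
```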



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2912) Jersey Tests failing with port in use

2014-12-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2912:
---
Attachment: YARN-2912.patch

 Jersey Tests failing with port in use
 -

 Key: YARN-2912
 URL: https://issues.apache.org/jira/browse/YARN-2912
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins on java 8
Reporter: Steve Loughran
Assignee: Varun Saxena
 Attachments: YARN-2912.patch


 Jersey tests like TestNMWebServices apps are failing with port in use.
 The jersey test runner appears to always use the same port unless a system 
 property is set to point to a different one. Every test should really be 
 changing that sysprop in a @Before method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2912) Jersey Tests failing with port in use

2014-12-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14236998#comment-14236998
 ] 

Varun Saxena commented on YARN-2912:


[~ste...@apache.org], well, opening a socket on the port is done by Jersey. The 
port being occupied by another process running on the same machine is also a 
possibility.
So, I have simply added code for a rolling increment of ports across the Jersey 
tests. The increment is 10 so that if a port is occupied because another 
application is using it, the risk of the next port also being occupied is 
reduced even if that application uses multiple ports.
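A sketch of the rolling-increment idea, assuming each test class reserves its port in a @Before method; the system property name "jersey.test.port" is an assumption to be checked against the Jersey test framework version in use:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical port reservation for Jersey tests: each reservation bumps a
// shared counter by 10 and publishes the port via a system property before
// the test framework binds a socket. The base port and property name are
// illustrative assumptions.
public class JerseyPortRolling {
    private static final AtomicInteger NEXT_PORT = new AtomicInteger(9090);

    // Would be called from an @Before method in each Jersey test class.
    public static int reserveNextPort() {
        // Step by 10 so an application squatting on one port (and a few of
        // its neighbours) is less likely to collide with the next reservation.
        int port = NEXT_PORT.getAndAdd(10);
        System.setProperty("jersey.test.port", String.valueOf(port));
        return port;
    }

    public static void main(String[] args) {
        System.out.println(reserveNextPort());
        System.out.println(reserveNextPort());
    }
}
```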

 Jersey Tests failing with port in use
 -

 Key: YARN-2912
 URL: https://issues.apache.org/jira/browse/YARN-2912
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins on java 8
Reporter: Steve Loughran
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2912.patch


 Jersey tests like TestNMWebServices apps are failing with port in use.
 The jersey test runner appears to always use the same port unless a system 
 property is set to point to a different one. Every test should really be 
 changing that sysprop in a @Before method



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-2287) Add audit log levels for NM and RM

2014-12-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-2287:
--

Assignee: Varun Saxena

 Add audit log levels for NM and RM
 --

 Key: YARN-2287
 URL: https://issues.apache.org/jira/browse/YARN-2287
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, resourcemanager
Affects Versions: 2.4.1
Reporter: Varun Saxena
Assignee: Varun Saxena
 Attachments: YARN-2287-patch-1.patch, YARN-2287.patch


 NM and RM audit logging can be done based on log level, as some of the audit 
 logs, especially the container audit logs, appear too many times. By 
 introducing log levels, certain audit logs can be suppressed if not required 
 in a deployment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-2255) YARN Audit logging not added to log4j.properties

2014-12-06 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-2255:
--

Assignee: Varun Saxena

 YARN Audit logging not added to log4j.properties
 

 Key: YARN-2255
 URL: https://issues.apache.org/jira/browse/YARN-2255
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Varun Saxena
Assignee: Varun Saxena

 The log4j.properties file that is part of the Hadoop package doesn't have YARN 
 audit logging tied to it. This leads to audit logs being generated in the 
 normal log files. Audit logs should be generated in a separate log file.
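A hedged sketch of the kind of log4j.properties addition the issue calls for, routing the RM audit logger to its own file; the appender name, file path, and pattern are illustrative, not the eventual patch:

```properties
# Route the RM audit logger to a dedicated appender so audit entries stop
# landing in the normal daemon log (additivity=false prevents duplication).
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=INFO,RMAUDIT
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false
log4j.appender.RMAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RMAUDIT.File=${hadoop.log.dir}/rm-audit.log
log4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

An analogous block for org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger would cover the NM side.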



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2912) Jersey Tests failing with port in use

2014-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14237008#comment-14237008
 ] 

Hadoop QA commented on YARN-2912:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685569/YARN-2912.patch
  against trunk revision 9297f98.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 14 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  
org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/6025//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/6025//console

This message is automatically generated.

 Jersey Tests failing with port in use
 -

 Key: YARN-2912
 URL: https://issues.apache.org/jira/browse/YARN-2912
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: jenkins on java 8
Reporter: Steve Loughran
Assignee: Varun Saxena
 Fix For: 2.7.0

 Attachments: YARN-2912.patch


 Jersey tests like TestNMWebServicesApps are failing with port in use.
 The Jersey test runner appears to always use the same port unless a system 
 property is set to point to a different one. Every test should really be 
 setting that sysprop in a @Before method.
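A minimal sketch of the fix the description suggests: pick a free ephemeral port before each test and point the Jersey test runner at it. This assumes Jersey 1.x, where JerseyTest consults the "jersey.test.port" system property; the helper class and method names below are illustrative, not the committed patch:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch only: pick a free port per test run instead of Jersey's fixed
// default, so concurrent test JVMs don't collide on the same port.
public class JerseyPortHelper {
  // Property name assumed from Jersey 1.x's test framework.
  static final String JERSEY_PORT_PROPERTY = "jersey.test.port";

  // Bind to port 0 to let the OS hand out an unused ephemeral port.
  static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }

  // Intended to be called from a JUnit @Before method in each test class.
  static int useFreeJerseyPort() throws IOException {
    int port = findFreePort();
    System.setProperty(JERSEY_PORT_PROPERTY, String.valueOf(port));
    return port;
  }

  public static void main(String[] args) throws IOException {
    int port = useFreeJerseyPort();
    System.out.println(port > 0
        && System.getProperty(JERSEY_PORT_PROPERTY).equals(String.valueOf(port)));
  }
}
```

There is a small inherent race (the port is released before Jersey rebinds it), but for test isolation that window is usually acceptable.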





[jira] [Commented] (YARN-2929) Adding separator ApplicationConstants.FILE_PATH_SEPARATOR for better Windows support

2014-12-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14237043#comment-14237043
 ] 

Chris Nauroth commented on YARN-2929:
-

Hi, [~ozawa].

I would not expect this change to be necessary.  To achieve cross-platform 
application submissions, I believe you would just need to pass a given file 
system path through the {{org.apache.hadoop.fs.Path}} class.  In that class, we 
have implemented logic for handling Windows paths.  Part of that logic replaces 
all back slashes with forward slashes when running on Windows.  For example, 
{{new Path("C:\\foo\\bar").toString()}} yields the string {{C:/foo/bar}} when 
running on Windows.  The forward slash format works fine on both Linux and 
Windows.
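A simplified illustration of the normalization described above. This is not Hadoop's actual {{org.apache.hadoop.fs.Path}} implementation (which also handles drive specs and URI parsing); it only shows the backslash-to-forward-slash replacement that makes the result usable on both platforms:

```java
// Simplified sketch (not Hadoop's real Path class): the key Windows
// normalization is replacing every back slash with a forward slash.
public class PathNormalizeSketch {
  static String normalizeWindowsPath(String path) {
    return path.replace('\\', '/');
  }

  public static void main(String[] args) {
    // Java source escaping: "C:\\foo\\bar" is the path C:\foo\bar.
    System.out.println(normalizeWindowsPath("C:\\foo\\bar")); // prints C:/foo/bar
  }
}
```

Because forward slashes are accepted by Windows APIs as well as by Linux, the normalized form is safe to ship in a cross-platform application submission.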

This is different from the classpath separator.  We didn't have any similar 
special handling for that, which is why we needed to implement YARN-1824.

I looked at SPARK-1825, and it seems there is a question about handling of 
SPARK_HOME.  I'd expect this path could differ between client and server.  
Regardless of the issue of path separator, the actual path is likely to be 
different, so it seems like the server side really needs to be responsible for 
injecting this.

I don't have any experience with Spark though, so please let me know if I'm 
missing something.  Thanks!

 Adding separator ApplicationConstants.FILE_PATH_SEPARATOR for better Windows 
 support
 

 Key: YARN-2929
 URL: https://issues.apache.org/jira/browse/YARN-2929
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: YARN-2929.001.patch


 Some frameworks, like Spark, are working to run jobs on Windows (SPARK-1825). 
 For better multi-platform support, we should introduce 
 ApplicationConstants.FILE_PATH_SEPARATOR to make file paths 
 platform-independent.


