[jira] [Assigned] (MAPREDUCE-6534) Sorting based on attempt not working in JHS

2015-11-03 Thread Mohammad Shahid Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Shahid Khan reassigned MAPREDUCE-6534:
---

Assignee: Mohammad Shahid Khan

> Sorting based on attempt not working in JHS
> ---
>
> Key: MAPREDUCE-6534
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6534
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Mohammad Shahid Khan
>
> Steps to reproduce
> ===
> 1. Submit an application with 1100 maps
> 2. Check the successful maps in JHS
> 3. Try sorting based on attempt:
> /jobhistory/attempts/job_1446629175560_0005/m/SUCCESSFUL





[jira] [Updated] (MAPREDUCE-6534) Sorting based on attempt not working in JHS

2015-11-03 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated MAPREDUCE-6534:

Summary: Sorting based on attempt not working in JHS  (was: Sorting based 
on applicaiton attempt not working in JHS)

> Sorting based on attempt not working in JHS
> ---
>
> Key: MAPREDUCE-6534
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6534
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>
> Steps to reproduce
> ===
> 1. Submit an application with 1100 maps
> 2. Check the successful maps in JHS
> 3. Try sorting based on attempt:
> /jobhistory/attempts/job_1446629175560_0005/m/SUCCESSFUL





[jira] [Created] (MAPREDUCE-6534) Sorting based on applicaiton attempt not working in JHS

2015-11-03 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created MAPREDUCE-6534:
---

 Summary: Sorting based on applicaiton attempt not working in JHS
 Key: MAPREDUCE-6534
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6534
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Bibin A Chundatt


Steps to reproduce
===

1. Submit an application with 1100 maps
2. Check the successful maps in JHS
3. Try sorting based on attempt:

/jobhistory/attempts/job_1446629175560_0005/m/SUCCESSFUL
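
The attempts table renders attempt IDs as strings (wrapped in HTML links), so a 
plain lexicographic column sort does not order them by task and attempt number. 
Below is a minimal sketch of the kind of comparison the column needs, using the 
public TaskAttemptID parser; it is an illustration only, not the actual patch:

{code:java}
import java.util.Comparator;
import org.apache.hadoop.mapreduce.TaskAttemptID;

// Illustration only: order attempt-ID strings by their numeric
// task and attempt components instead of lexicographically.
public class AttemptIdComparator implements Comparator<String> {
  @Override
  public int compare(String a, String b) {
    // forName parses strings such as "attempt_1446629175560_0005_m_000042_0".
    TaskAttemptID left = TaskAttemptID.forName(a);
    TaskAttemptID right = TaskAttemptID.forName(b);
    // Compare by task number first, then by attempt number.
    int byTask = Integer.compare(left.getTaskID().getId(),
                                 right.getTaskID().getId());
    return byTask != 0 ? byTask : Integer.compare(left.getId(), right.getId());
  }
}
{code}

TaskAttemptID is itself Comparable, so the same ordering could also be obtained 
by parsing the IDs and sorting them directly.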





[jira] [Commented] (MAPREDUCE-6362) History Plugin should be updated

2015-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988795#comment-14988795
 ] 

Hadoop QA commented on MAPREDUCE-6362:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s {color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 15s {color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk cannot run convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s {color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 4s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 4s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 54s {color} | {color:red} Patch generated 1 new checkstyle issue in root (total was 19, now 20). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 39s {color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy-jdk1.8.0_60 with JDK v1.8.0_60 generated 1 new issue (was 25, now 26). {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 39s {color} | {color:red} hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins-jdk1.8.0_60 with JDK v1.8.0_60 generated 1 new issue (was 1, now 2). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s {color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s {color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s {color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s {color} | {color:green} hadoop-mapreduce-client-hs-plugins in the patch passed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 1s {color} | {color:green} hadoop-yarn-common in the patch passed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s {color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed with JDK v1.7.0_79. {color} |

[jira] [Updated] (MAPREDUCE-6362) History Plugin should be updated

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-6362:
---
Target Version/s: 2.7.3  (was: 2.7.2)

Moving out to 2.7.3 all non-critical / non-blocker issues that did not make it 
into 2.7.2.

> History Plugin should be updated
> 
>
> Key: MAPREDUCE-6362
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6362
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: MAPREDUCE-6362.patch
>
>
> As applications complete, the RM tracks their IDs in a completed list. This 
> list is routinely truncated to limit the total number of applications 
> remembered by the RM.
> When a user clicks the History link for a job, the browser is redirected to 
> the application's tracking link obtained from the stored application 
> instance. But when the application has been purged from the RM, an error is 
> displayed instead.
> In very busy clusters the rate at which applications complete can cause 
> applications to be purged from the RM's internal list within hours, which 
> breaks the proxy URLs users have saved for their jobs.
> We would like the RM to provide valid tracking links that persist, so that 
> users are not frustrated by broken links.
> With the current plugin in place, redirection works for MapReduce jobs, but 
> we need to add the same functionality for Tez jobs.
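
The extension point involved is the web proxy's TrackingUriPlugin: when the RM 
no longer remembers an application, the proxy can ask registered plugins for a 
durable history URL. Below is a minimal sketch of what a Tez-side plugin could 
look like; the class name and base URL are hypothetical, and a real plugin 
would read its endpoint from configuration (the existing MapReduce history 
plugin follows this shape):

{code:java}
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.util.TrackingUriPlugin;

// Hypothetical sketch: map a purged application's ID to a permanent
// history page instead of returning a dead RM tracking URL.
public class TezHistoryUriPlugin extends TrackingUriPlugin {
  // Hypothetical endpoint; a real plugin would read this from Configuration.
  private static final String TEZ_UI = "http://tez-ui.example.com:8080/#/app/";

  @Override
  public URI getTrackingUri(ApplicationId id) throws URISyntaxException {
    return new URI(TEZ_UI + id.toString());
  }
}
{code}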





[jira] [Resolved] (MAPREDUCE-6355) 2.5 client cannot communicate with 2.5 job on 2.6 cluster

2015-11-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved MAPREDUCE-6355.

Resolution: Won't Fix

Similar to YARN-3575, this token format change (YARN-668) was a _"necessary 
evil"_ to support rolling upgrades starting with Hadoop 2.6. I requested 
offline that this be filed largely for documentation purposes.

There is only one way sites can avoid this incompatibility: upgrade all apps 
to 2.6+ once the cluster migrates to 2.6+ from < 2.6.

We *can* build an elaborate fix where a 2.5 client tells YARN its version so 
that the RM can generate and propagate the right token format, but at this 
stage I am not sure of its value. Closing this for now as won't fix. Please 
reopen if you disagree. Thanks.


> 2.5 client cannot communicate with 2.5 job on 2.6 cluster
> -
>
> Key: MAPREDUCE-6355
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6355
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>
> Submitting a job that uses Hadoop 2.5 jars from a Hadoop 2.5 client to a 
> Hadoop 2.6 cluster results in a job that succeeds, but the client cannot 
> communicate with the AM while the job is running.





[jira] [Commented] (MAPREDUCE-6512) FileOutputCommitter tasks unconditionally create parent directories

2015-11-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14988166#comment-14988166
 ] 

Jason Lowe commented on MAPREDUCE-6512:
---

I would love to fix this, since it would help jobs fail when they collide in 
the output directory, but I'm not sure we can.  I fear there may be custom 
output formats or other setups that implicitly (or explicitly) rely on the 
automatic parent directory creation to create the output directory itself.  If 
the output directory was being created by this behavior and we stop doing it, 
we'll start failing jobs that used to work.
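
For concreteness, here is a minimal sketch of the fail-fast behavior the 
description asks for, written against the generic FileSystem API; the class, 
method, and messages are illustrative, not the attached patch:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative sketch of the proposed check, not the attached patch.
public class FailFastTaskDirs {
  static void createTaskAttemptDir(Configuration conf, Path attemptDir)
      throws IOException {
    FileSystem fs = attemptDir.getFileSystem(conf);
    Path parent = attemptDir.getParent();
    // Proposed behavior: if the parent (app attempt) directory has vanished,
    // fail the task instead of silently recreating the whole tree and
    // letting the job "succeed" with missing output.
    if (!fs.exists(parent)) {
      throw new IOException("Parent directory " + parent
          + " is missing; failing task attempt instead of recreating it");
    }
    fs.mkdirs(attemptDir);
  }
}
{code}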

> FileOutputCommitter tasks unconditionally create parent directories
> ---
>
> Key: MAPREDUCE-6512
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6512
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: MAPREDUCE-6512.2.patch, MAPREDUCE-6512.2.patch, 
> MAPREDUCE-6512.patch, MAPREDUCE-6512.patch
>
>
> If the output directory is deleted then subsequent tasks should fail. Instead 
> they blindly create the missing parent directories, leading the job to be 
> "successful" despite potentially missing almost all of its output. Task 
> attempts should fail if the parent app attempt directory is missing when they 
> go to create their task attempt directory.





[jira] [Commented] (MAPREDUCE-6529) AppMaster will not retry to request resource if AppMaster happens to decide to not use the resource

2015-11-03 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987366#comment-14987366
 ] 

Steve Loughran commented on MAPREDUCE-6529:
---

It's really antisocial to discard unwanted containers, because if your app runs 
in a queue with pre-emption allowed, other people's work gets killed so that 
you can get those containers, which you then just discard.

Labelling is how different machine capabilities should be marked up (e.g. GPU, 
faster CPU); this is how we do it in other YARN apps. What's needed here is the 
ability to include (potentially different) labels in the requests for different 
phases of the work (e.g. mappers anywhere, but reducers only on 
"production-reducer" nodes).


> AppMaster will not retry to request resource if AppMaster happens to decide 
> to not use the resource
> ---
>
> Key: MAPREDUCE-6529
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6529
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am
>Affects Versions: 2.6.0
>Reporter: Wei Chen
>
> I am reading the code in RMContainerAllocator.java.   I want to make an 
> improvement so that the AppMaster can give up containers that may not be 
> optimal when it receives newly assigned containers.  But I found that if the 
> AppMaster gives up such containers, it will not retry the resource request.
> In RMContainerRequestor.java, the Set<ResourceRequest> ask is used to request 
> resources from the ResourceManager. I found that each container can only be 
> requested once: ask is filled by addResourceRequestToAsk(ResourceRequest 
> remoteRequest), but an entry is only added once per container. If we give up 
> an assigned container, it will never be requested again.
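
Stated with the public AMRMClient API rather than the internal 
RMContainerRequestor, the missing step is roughly the following; the class name 
and priority handling are illustrative assumptions:

{code:java}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

// Sketch: if the AM decides not to use an assigned container, it must both
// release it and re-add an equivalent request; otherwise the ask set is
// consumed once and the resource is never requested again.
public class GiveUpAndReask {
  public static void giveUp(AMRMClient<ContainerRequest> amClient,
      Container unwanted, Priority priority) {
    amClient.releaseAssignedContainer(unwanted.getId());
    // Re-add an equivalent request so the RM allocates a replacement.
    amClient.addContainerRequest(
        new ContainerRequest(unwanted.getResource(), null, null, priority));
  }
}
{code}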





[jira] [Updated] (MAPREDUCE-5785) Derive heap size or mapreduce.*.memory.mb automatically

2015-11-03 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated MAPREDUCE-5785:
---
Release Note: 
The memory values for the mapreduce.map/reduce.memory.mb keys, if left at 
their default value of -1, will now be automatically inferred from the heap 
size (-Xmx) specified for the mapreduce.map/reduce.java.opts keys.

The converse is also done, i.e. if mapreduce.map/reduce.memory.mb values are 
specified but no -Xmx is supplied for the mapreduce.map/reduce.java.opts keys, 
then the -Xmx value will be derived from the former's value.

If neither is specified, a default value of 1024 MB is used.

For both of these conversions, a scaling factor specified by the property 
mapreduce.job.heap.memory-mb.ratio is used, to account for the overhead 
between heap usage and actual physical memory usage.

Existing configs or job code that already specify both sets of properties 
explicitly are not affected by this inference change.
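
Numerically, the rule works out as in the sketch below; the 0.8 default ratio 
is an assumption here (check mapreduce.job.heap.memory-mb.ratio in your 
release), while the 1024 MB fallback comes from the note above:

{code:java}
// Minimal sketch of the inference rule described above, not the JobConf code.
public class HeapSizeInference {
  static final float ASSUMED_RATIO = 0.8f; // mapreduce.job.heap.memory-mb.ratio
  static final int DEFAULT_MEMORY_MB = 1024;

  // memory.mb left at -1 but -Xmx given: derive the container size.
  static int containerMbFromXmx(int xmxMb) {
    return Math.round(xmxMb / ASSUMED_RATIO); // e.g. -Xmx1600m -> 2000 MB
  }

  // memory.mb given but no -Xmx: derive the heap size.
  static int xmxMbFromContainer(int memoryMb) {
    if (memoryMb <= 0) {
      memoryMb = DEFAULT_MEMORY_MB; // neither specified: 1024 MB container
    }
    return Math.round(memoryMb * ASSUMED_RATIO); // e.g. 2048 MB -> -Xmx1638m
  }
}
{code}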

> Derive heap size or mapreduce.*.memory.mb automatically
> ---
>
> Key: MAPREDUCE-5785
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5785
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: mr-am, task
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Fix For: 3.0.0
>
> Attachments: MAPREDUCE-5785.v01.patch, MAPREDUCE-5785.v02.patch, 
> MAPREDUCE-5785.v03.patch, mr-5785-4.patch, mr-5785-5.patch, mr-5785-6.patch, 
> mr-5785-7.patch, mr-5785-8.patch, mr-5785-9.patch
>
>
> Currently users have to set two memory-related configs per job / per task 
> type.  One first chooses a container size mapreduce.\*.memory.mb and then a 
> corresponding maximum Java heap size -Xmx < mapreduce.\*.memory.mb. This 
> makes sure that the JVM's total footprint (native memory + Java heap) does 
> not exceed mapreduce.\*.memory.mb. If one forgets to tune -Xmx, the MR-AM 
> might be 
> - allocating big containers while the JVM only uses the default -Xmx200m.
> - allocating small containers that will OOM because -Xmx is too high.
> With this JIRA, we propose to set -Xmx automatically based on an empirical 
> ratio that can be adjusted. -Xmx is not changed automatically if provided by 
> the user.


