[jira] [Commented] (SPARK-10781) Allow certain number of failed tasks and allow job to succeed

2019-01-09 Thread nxet (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738932#comment-16738932 ]

nxet commented on SPARK-10781:
------------------------------

I hit the same problem: a few empty sequence files cause the whole job to 
fail, whereas the equivalent MapReduce job runs normally thanks to 
mapreduce.map.failures.maxpercent and mapreduce.reduce.failures.maxpercent. 
The following are my source files:

116.1 M  348.3 M  /20181226/1545753600402.lzo_deflate
97.0 M  290.9 M  /20181226/1545754236750.lzo_deflate
113.3 M  339.8 M  /20181226/1545754856515.lzo_deflate
126.5 M  379.5 M  /20181226/1545753600402.lzo_deflate
92.9 M  278.6 M  /20181226/1545754233009.lzo_deflate
117.7 M  353.2 M  /20181226/1545754850857.lzo_deflate
0 M  0 M  /20181226/1545755455381.lzo_deflate
0 M  0 M  /20181226/1545756056457.lzo_deflate
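
One application-level workaround for the empty-file case, until Spark itself 
tolerates failed tasks, is to list the input directory up front and skip 
zero-length files before building the RDD. A minimal sketch follows; the 
directory /20181226 comes from the listing above, while the job wiring and 
the assumption that the LZO codec is configured for textFile are illustrative:

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.{SparkConf, SparkContext}

    object SkipEmptyInputs {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("skip-empty-inputs"))

        // List the input directory and keep only non-empty files, so the
        // empty .lzo_deflate files never reach the input format at all.
        val fs = FileSystem.get(sc.hadoopConfiguration)
        val nonEmpty = fs.listStatus(new Path("/20181226"))
          .filter(s => s.isFile && s.getLen > 0)
          .map(_.getPath.toString)

        // textFile accepts a comma-separated list of paths.
        val lines = sc.textFile(nonEmpty.mkString(","))
        println(s"records: ${lines.count()}")

        sc.stop()
      }
    }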

> Allow certain number of failed tasks and allow job to succeed
> --------------------------------------------------------------
>
> Key: SPARK-10781
> URL: https://issues.apache.org/jira/browse/SPARK-10781
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 1.5.0
>Reporter: Thomas Graves
>Priority: Major
> Attachments: SPARK_10781_Proposed_Solution.pdf
>
>
> MapReduce has the configs mapreduce.map.failures.maxpercent and 
> mapreduce.reduce.failures.maxpercent, which allow a certain percentage of 
> tasks to fail while the job still succeeds.
> This could be a useful feature in Spark as well, for jobs that don't need 
> every task to be successful.
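
For reference, the MapReduce side looks roughly like this; the 5% thresholds 
and the job name are illustrative values, not something suggested in the issue:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.mapreduce.Job

    // Let up to 5% of map tasks and 5% of reduce tasks fail without
    // failing the whole job (illustrative thresholds).
    val conf = new Configuration()
    conf.setInt("mapreduce.map.failures.maxpercent", 5)
    conf.setInt("mapreduce.reduce.failures.maxpercent", 5)
    val job = Job.getInstance(conf, "failure-tolerant-job")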





[jira] [Commented] (SPARK-10781) Allow certain number of failed tasks and allow job to succeed

2018-06-19 Thread Hieu Tri Huynh (JIRA)


[ https://issues.apache.org/jira/browse/SPARK-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16517716#comment-16517716 ]

Hieu Tri Huynh commented on SPARK-10781:


I have attached a proposed solution for this Jira. I hope to receive feedback 
from all of you. Thank you.

[^SPARK_10781_Proposed_Solution.pdf]






[jira] [Commented] (SPARK-10781) Allow certain number of failed tasks and allow job to succeed

2018-03-31 Thread Fei Niu (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16421558#comment-16421558 ]

Fei Niu commented on SPARK-10781:
---------------------------------

This would be a very useful feature. For example, if a sequence file itself is 
malformed, there is currently no way to catch the exception and move on, which 
makes some datasets impossible to process.
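
When the failure happens in user code rather than inside the input format, a 
per-record workaround is already possible: wrap the decoding in Try and drop 
the failures. A sketch, with a hypothetical Record type and tab-separated 
layout standing in for the real parsing:

    import scala.util.Try
    import org.apache.spark.{SparkConf, SparkContext}

    object SkipBadRecords {
      // Hypothetical record type and parser; they stand in for whatever
      // decoding step throws on malformed input.
      final case class Record(ts: Long, payload: String)
      def parse(line: String): Record = {
        val Array(ts, payload) = line.split("\t", 2)
        Record(ts.toLong, payload)
      }

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("skip-bad-records"))

        val parsed = sc.textFile("/20181226")
          // Try turns a parse failure into None, which flatMap drops,
          // instead of failing the task.
          .flatMap(line => Try(parse(line)).toOption)

        println(s"good records: ${parsed.count()}")
        sc.stop()
      }
    }

Note that this cannot help when the container file itself is corrupt and the 
exception is thrown inside the Hadoop input format before user code runs; that 
case is exactly what the proposed task-failure tolerance would cover.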



