[ https://issues.apache.org/jira/browse/SPARK-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738932#comment-16738932 ]
nxet edited comment on SPARK-10781 at 1/10/19 2:52 AM:
-------------------------------------------------------

I hit the same problem: a couple of empty sequence files cause the whole job to fail, while MapReduce can process the same input normally (via mapreduce.map.failures.maxpercent and mapreduce.reduce.failures.maxpercent). The following are my source files:

  116.1 M  348.3 M  /20181226/1545753600402.lzo_deflate
   97.0 M  290.9 M  /20181226/1545754236750.lzo_deflate
  113.3 M  339.8 M  /20181226/1545754856515.lzo_deflate
  126.5 M  379.5 M  /20181226/1545753600402.lzo_deflate
   92.9 M  278.6 M  /20181226/1545754233009.lzo_deflate
  117.7 M  353.2 M  /20181226/1545754850857.lzo_deflate
      0 M      0 M  /20181226/1545755455381.lzo_deflate
      0 M      0 M  /20181226/1545756056457.lzo_deflate

> Allow certain number of failed tasks and allow job to succeed
> -------------------------------------------------------------
>
>                 Key: SPARK-10781
>                 URL: https://issues.apache.org/jira/browse/SPARK-10781
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Thomas Graves
>            Priority: Major
>         Attachments: SPARK_10781_Proposed_Solution.pdf
>
> MapReduce has the configs mapreduce.map.failures.maxpercent and
> mapreduce.reduce.failures.maxpercent, which allow a certain percentage of
> tasks to fail while the job still succeeds.
> This could be a useful feature in Spark as well, for jobs that do not need
> every task to be successful.
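
For reference, this is how the MapReduce knobs named in the issue are used. A minimal Scala sketch against the classic mapred API; the 10% threshold is an illustrative value, not something taken from the issue:

    import org.apache.hadoop.mapred.JobConf

    val conf = new JobConf()
    // Equivalent to setting mapreduce.map.failures.maxpercent and
    // mapreduce.reduce.failures.maxpercent directly on the Configuration.
    conf.setMaxMapTaskFailuresPercent(10)     // job still succeeds if <= 10% of map tasks fail
    conf.setMaxReduceTaskFailuresPercent(10)  // likewise for reduce tasks

With these set, task failures below the threshold no longer abort the job, which is the behavior the issue asks Spark to offer.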
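
Until Spark has an equivalent setting, one workaround for the empty-file failure described in the comment above is to filter zero-length files out of the input before reading. A minimal sketch, assuming the inputs sit in a single HDFS directory (the path, app name, and use of textFile are illustrative; the same filtered path list can be passed to sequenceFile):

    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("skip-empty-inputs"))
    val fs = FileSystem.get(sc.hadoopConfiguration)

    // Keep only files with a non-zero length, dropping the "0 M" inputs
    // that would otherwise fail the whole job.
    val nonEmpty = fs.listStatus(new Path("/20181226"))
      .filter(s => s.isFile && s.getLen > 0)
      .map(_.getPath.toString)

    // Spark accepts a comma-separated list of input paths.
    val rdd = sc.textFile(nonEmpty.mkString(","))

This skips the bad inputs up front rather than tolerating their failures, so it is a stopgap, not the percentage-based policy the issue proposes.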