[ https://issues.apache.org/jira/browse/SPARK-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207997#comment-14207997 ]
Romi Kuntsman commented on SPARK-2867:
--------------------------------------

In the latest code, this appears to be resolved:

    // Use configured output committer if already set
    if (conf.getOutputCommitter == null) {
      hadoopConf.setOutputCommitter(classOf[FileOutputCommitter])
    }

https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala#L934

> saveAsHadoopFile() in PairRDDFunctions.scala should allow use of another
> OutputCommitter class
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-2867
>                 URL: https://issues.apache.org/jira/browse/SPARK-2867
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0, 1.1.0
>            Reporter: Joseph Su
>            Priority: Minor
>
> saveAsHadoopFile() in PairRDDFunctions.scala hard-codes the OutputCommitter
> class as FileOutputCommitter because of the following line in the source:
>
>     hadoopConf.setOutputCommitter(classOf[FileOutputCommitter])
>
> However, the OutputCommitter is a configurable option in a regular Hadoop
> MapReduce program: users can set "mapred.output.committer.class" to change
> the committer class used by other Hadoop programs. saveAsHadoopFile() should
> remove this hard-coded assignment and provide a way to specify the
> OutputCommitter used here.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
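The guard quoted in the comment above (only install FileOutputCommitter when no committer has been configured) can be sketched in isolation. This is a minimal, self-contained sketch: `FakeConf` and `resolveCommitter` are hypothetical stand-ins for Hadoop's `JobConf` and for the logic in PairRDDFunctions, not the actual Spark or Hadoop API.

```scala
// Hypothetical stand-in for Hadoop's JobConf, tracking only the
// output-committer setting relevant to SPARK-2867.
class FakeConf {
  private var committer: Option[String] = None
  def getOutputCommitter: String = committer.orNull
  def setOutputCommitter(name: String): Unit = { committer = Some(name) }
}

// Mirrors the fixed behavior: keep a user-configured committer if one is
// already set; otherwise fall back to the FileOutputCommitter default.
def resolveCommitter(conf: FakeConf): String = {
  if (conf.getOutputCommitter == null) {
    conf.setOutputCommitter("FileOutputCommitter") // default, as before the fix
  }
  conf.getOutputCommitter
}
```

With the fixed code in Spark, a user would presumably configure a custom committer on the JobConf (for example via JobConf.setOutputCommitter or the "mapred.output.committer.class" property) before calling saveAsHadoopFile, and Spark would no longer overwrite it.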