[ https://issues.apache.org/jira/browse/SPARK-16610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15405071#comment-15405071 ]
Hyukjin Kwon commented on SPARK-16610:
--------------------------------------

One thought: if that is the case, we may have to officially document that we no longer respect the Hadoop configuration.

> When writing ORC files, orc.compress should not be overridden if users do not
> set "compression" in the options
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16610
>                 URL: https://issues.apache.org/jira/browse/SPARK-16610
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Yin Huai
>
> For the ORC source, Spark SQL has a writer option {{compression}}, which is used
> to set the codec; its value is also propagated to orc.compress (the ORC conf
> used for the codec). However, if a user only sets {{orc.compress}} in the writer
> options, we should not use the default value of "compression" (snappy) as the
> codec. Instead, we should respect the value of {{orc.compress}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
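The precedence the issue asks for can be sketched as follows. This is a minimal, illustrative model of the desired codec resolution, not Spark's actual implementation; the helper name `resolve_orc_codec` is hypothetical.

```python
def resolve_orc_codec(options):
    """Return the compression codec an ORC writer should use.

    Desired precedence per SPARK-16610 (illustrative, not Spark internals):
      1. the explicit "compression" writer option, if set;
      2. otherwise the "orc.compress" ORC/Hadoop setting, if set;
      3. otherwise Spark's default, "snappy".
    """
    if "compression" in options:      # explicit writer option wins
        return options["compression"]
    if "orc.compress" in options:     # respect the ORC conf instead of
        return options["orc.compress"]  # silently falling back to the default
    return "snappy"                   # default only when neither is set


# The reported bug: with only orc.compress set, Spark 2.0.0 still used the
# default "snappy"; the desired behaviour returns "ZLIB" here.
print(resolve_orc_codec({"orc.compress": "ZLIB"}))   # ZLIB
print(resolve_orc_codec({}))                         # snappy
```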