[ https://issues.apache.org/jira/browse/SPARK-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706205#comment-14706205 ]
Yin Huai commented on SPARK-10146:
----------------------------------

One possible way to do it is that every data source defines a list of confs that can be applied to its reader/writer, and we let users set those confs in SQLConf or through data source options. Then we propagate those confs to the reader/writer.

> Have an easy way to set data source reader/writer-specific confs
> ----------------------------------------------------------------
>
>                 Key: SPARK-10146
>                 URL: https://issues.apache.org/jira/browse/SPARK-10146
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Yin Huai
>            Priority: Critical
>
> Right now, it is hard to set data source reader/writer-specific confs
> correctly (e.g. Parquet's row group size). Users need to set those confs in
> the Hadoop conf before starting the application, or through
> {{org.apache.spark.deploy.SparkHadoopUtil.get.conf}} at runtime. It would be
> great if we had an easy way to set those confs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
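To make the contrast concrete, here is a minimal sketch: the first part is the runtime workaround the issue describes (mutating the shared Hadoop conf via {{SparkHadoopUtil}}), and the second part illustrates the direction proposed in the comment, where a conf is passed as a data source option and propagated to the writer/reader. The option-based propagation is hypothetical at this point, not existing behavior; {{parquet.block.size}} is Parquet's row-group size key, and the path is a placeholder.

```scala
import org.apache.spark.deploy.SparkHadoopUtil

// Today's workaround: set the conf on the shared Hadoop Configuration at
// runtime, before any Parquet writes happen. This is global and affects
// every subsequent reader/writer in the application.
SparkHadoopUtil.get.conf.set("parquet.block.size", (128 * 1024 * 1024).toString)

// Proposed direction (hypothetical, per the comment): each data source
// declares which confs its reader/writer accepts, users pass them as data
// source options, and Spark propagates them to that reader/writer only.
sqlContext.read
  .format("parquet")
  .option("parquet.block.size", (128 * 1024 * 1024).toString) // propagated, not global
  .load("/path/to/data")
```

The advantage of the option-based approach is scoping: the conf applies to one read/write rather than leaking into the application-wide Hadoop configuration.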