Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21837#discussion_r204608411

    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -1430,6 +1431,18 @@ object SQLConf {
             "This only takes effect when spark.sql.repl.eagerEval.enabled is set to true.")
           .intConf
           .createWithDefault(20)
    +
    +  val AVRO_COMPRESSION_CODEC = buildConf("spark.sql.avro.compression.codec")
    +    .doc("Compression codec used in writing of AVRO files.")
    +    .stringConf
    +    .createWithDefault("snappy")
    +
    +  val AVRO_DEFLATE_LEVEL = buildConf("spark.sql.avro.deflate.level")
    +    .doc("Compression level for the deflate codec used in writing of AVRO files. " +
    +      "Valid value must be in the range of from 1 to 9 inclusive. " +
    +      "The default value is -1 which corresponds to 6 level in the current implementation.")
    --- End diff --

    Per https://github.com/apache/spark/pull/21837/files/f8b580ba33736a19fb14a6d7fa9fc929b4cf20ba#r204300978, I guess the default compression level is not 6? I think we had better find out what -1 means and describe it here. Also, can we do the check like `checkValue(_ => -1, ...)` here?
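On the reviewer's question of what `-1` means: in `java.util.zip.Deflater` (which Avro's deflate codec builds on), `-1` is `Deflater.DEFAULT_COMPRESSION`, a sentinel that defers to zlib's own default level rather than naming a concrete one. One way the suggested validation could look is sketched below; this is a hypothetical rewrite of the diffed config, not the PR's actual code, and the error message wording and the decision to allow `-1` alongside 1–9 are assumptions. `checkValue(validator, errorMsg)` is the existing validator hook on Spark's `TypedConfigBuilder`.

```scala
import java.util.zip.Deflater

// Sketch only: -1 is Deflater.DEFAULT_COMPRESSION, i.e. "let zlib choose",
// so the doc string should describe it that way instead of hard-coding 6.
val AVRO_DEFLATE_LEVEL = buildConf("spark.sql.avro.deflate.level")
  .doc("Compression level for the deflate codec used in writing of AVRO files. " +
    "Valid value must be in the range of from 1 to 9 inclusive or -1. " +
    "The default value is -1 which corresponds to the deflate library's default level.")
  .intConf
  // Reject anything that is neither the -1 sentinel nor an explicit level 1..9.
  .checkValue(v => v == Deflater.DEFAULT_COMPRESSION || (v >= 1 && v <= 9),
    "The value must be in the range of from 1 to 9 inclusive or -1.")
  .createWithDefault(Deflater.DEFAULT_COMPRESSION)
```

This fragment assumes the surrounding `object SQLConf` scope from the diff (for `buildConf`), so it is not standalone outside Spark's codebase.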