Github user zzl1787 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19129#discussion_r153682329

    --- Diff: docs/sql-programming-guide.md ---
    @@ -1587,6 +1580,10 @@ options.

    Note that this is different from the Hive behavior.
    - As a result, `DROP TABLE` statements on those tables will not remove the data.
    + - From Spark 2.0.1, `spark.sql.parquet.cacheMetadata` is no longer used. See
    +   [SPARK-16321](https://issues.apache.org/jira/browse/SPARK-16321) and
    +   [SPARK-15639](https://issues.apache.org/jira/browse/SPARK-15639) for details.

    --- End diff --

@dongjoon-hyun OK, got it, and thank you. I finally found the parameter that controls this:

`spark.sql.filesourceTableRelationCacheSize = 0`

This disables the metadata cache.
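As a minimal sketch of how the property mentioned in the comment could be applied (the property name is taken from the comment above; the file placement and value are an assumed example, not something verified against this PR):

```
# spark-defaults.conf (sketch): set the file-source table relation cache size to 0,
# which, per the comment above, disables the metadata cache
spark.sql.filesourceTableRelationCacheSize  0
```

The same property could equivalently be passed at submit time with `--conf spark.sql.filesourceTableRelationCacheSize=0`, or set programmatically via `SparkSession.builder().config(...)` before the session is created.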