[GitHub] spark pull request #19129: [SPARK-13656][SQL] Delete spark.sql.parquet.cache...
Github user zzl1787 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r153682329

--- Diff: docs/sql-programming-guide.md ---
@@ -1587,6 +1580,10 @@ options. Note that this is different from the Hive behavior.
   - As a result, `DROP TABLE` statements on those tables will not remove the data.
+ - From Spark 2.0.1, `spark.sql.parquet.cacheMetadata` is no longer used. See
+   [SPARK-16321](https://issues.apache.org/jira/browse/SPARK-16321) and
+   [SPARK-15639](https://issues.apache.org/jira/browse/SPARK-15639) for details.
--- End diff --

@dongjoon-hyun OK, got it, thank you. I finally found the parameter that controls this: `spark.sql.filesourceTableRelationCacheSize = 0`. This disables the metadata cache.

---

- To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
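The setting the commenter found can be sketched as a startup configuration. This is a sketch based on the comment above, not on official docs: `spark.sql.filesourceTableRelationCacheSize` is a static configuration in recent Spark versions, so it has to be supplied when the application starts rather than changed with `SET` mid-session, and `my_app.py` is a placeholder application name.

```shell
# Sketch (assumption: the property is static and must be set at startup;
# a value of 0 disables the file-source table relation cache, per the
# comment above). my_app.py is a placeholder.
spark-submit --conf spark.sql.filesourceTableRelationCacheSize=0 my_app.py
```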
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r153565297

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Hi, @zzl1787. This PR is for Apache Spark 2.3, and in Apache Spark 2.3 the metadata cache is not controlled by this parameter.
Github user zzl1787 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r153467044

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Hi, I'm new to Spark. I wonder how to disable metadata caching now that this conf has been deleted. I created an external table, and the Parquet files in its specified location are updated daily, so I want to disable metadata caching rather than executing `REFRESH TABLE xxx` each time.
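The manual workaround the commenter wants to avoid can be sketched with the `spark-sql` CLI; `xxx` is the commenter's own placeholder table name, kept as-is.

```shell
# Manually refresh the cached metadata after new Parquet files land
# ("xxx" is a placeholder table name, as in the comment above).
spark-sql -e "REFRESH TABLE xxx"
```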
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19129
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137622298

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Thank you!
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137621847

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

I will update it like this:

```
  - From Spark 2.0.0, `spark.sql.parquet.cacheMetadata` is no longer used. See
    [SPARK-13664](https://issues.apache.org/jira/browse/SPARK-13664) for details.
```
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137621009

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Has `buildInternalScan` been dead code since SPARK-13664?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137620763

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Maybe here:
- https://github.com/apache/spark/commit/678b96e77bf77a64b8df14b19db5a3bb18febfe3#diff-51313f5011e9f5a41af166d766950ba2L400
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137620489

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

For `initializeLocalJobFunc`, I think @HyukjinKwon's comment is more accurate: 678b96e77b, [SPARK-14535][SQL] Remove buildInternalScan from FileFormat.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137619510

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

It sounds like https://issues.apache.org/jira/browse/SPARK-13664 is the one that removed the usage of this conf.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137619452

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

Oh, then it's another transitive search.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137619209

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

There is no caller of `initializeLocalJobFunc`. Thus, `initializeLocalJobFunc` is dead code.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137617487

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

https://github.com/apache/spark/pull/13701 is `[SPARK-15639][SPARK-16321][SQL] Push down filter at RowGroups level for parquet reader`. It's removed here:
- https://github.com/apache/spark/pull/13701/files#diff-ee26d4c4be21e92e92a02e9f16dbc285L625
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19129#discussion_r137616632

--- Diff: docs/sql-programming-guide.md --- (same hunk as quoted above) --- End diff --

These two JIRAs are wrong.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/19129

[SPARK-13656][SQL] Delete spark.sql.parquet.cacheMetadata from SQLConf and docs

## What changes were proposed in this pull request?

`spark.sql.parquet.cacheMetadata` is no longer used. This PR removes it from `SQLConf` and the docs.

## How was this patch tested?

Pass the existing Jenkins tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongjoon-hyun/spark SPARK-13656

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19129.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #19129

commit 3b305d067424a83026ad388fb7d50018099070e7
Author: Dongjoon Hyun
Date: 2017-09-05T07:37:46Z

    [SPARK-13656][SQL] Delete spark.sql.parquet.cacheMetadata from SQLConf and docs