[GitHub] [hudi] parisni commented on a diff in pull request #9056: [HUDI-6456] [DOC] Add parquet blooms documentation

2023-07-10 Thread via GitHub


parisni commented on code in PR #9056:
URL: https://github.com/apache/hudi/pull/9056#discussion_r1258233512


##
website/docs/configurations.md:
##
@@ -20,6 +20,7 @@ hoodie.datasource.hive_sync.support_timestamp  false
 It helps to have a central configuration file for your common cross job configurations/tunings, so all the jobs on your cluster can utilize it. It also works with Spark SQL DML/DDL, and helps avoid having to pass configs inside the SQL statements.
 
 By default, Hudi would load the configuration file under `/etc/hudi/conf` directory. You can specify a different configuration directory location by setting the `HUDI_CONF_DIR` environment variable.
+- [**Parquet Configs**](#PARQUET_CONFIG): These configs make it possible to bring native parquet features
 - [**Spark Datasource Configs**](#SPARK_DATASOURCE): These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge records or choosing query type to read.

Review Comment:
   makes sense. fixing this.
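   For context on the Spark Datasource write path mentioned in the quoted bullet, here is a minimal Scala sketch of a Hudi write that sets keys, partitioning, the merge field, and the write operation. The DataFrame `df`, table name, path, and field names (`uuid`, `region`, `ts`) are illustrative assumptions; the `hoodie.datasource.write.*` keys are the commonly documented write options.

   ```scala
   import org.apache.spark.sql.SaveMode

   // Minimal sketch of a Hudi Spark Datasource write. Assumes an existing DataFrame `df`;
   // the table name, field names, and output path below are placeholders.
   df.write
     .format("hudi")
     .option("hoodie.table.name", "hudi_trips")                        // target table name
     .option("hoodie.datasource.write.recordkey.field", "uuid")        // record key
     .option("hoodie.datasource.write.partitionpath.field", "region")  // partitioning
     .option("hoodie.datasource.write.precombine.field", "ts")         // field used to merge records
     .option("hoodie.datasource.write.operation", "upsert")            // write operation
     .mode(SaveMode.Append)
     .save("/tmp/hudi_trips")
   ```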






[GitHub] [hudi] parisni commented on a diff in pull request #9056: [HUDI-6456] [DOC] Add parquet blooms documentation

2023-06-30 Thread via GitHub


parisni commented on code in PR #9056:
URL: https://github.com/apache/hudi/pull/9056#discussion_r1247776991


##
website/docs/configurations.md:
##
@@ -197,7 +197,10 @@ Options useful for reading tables via `read.format.option(...)`
 
 ### Write Options {#Write-Options}
 
-You can pass down any of the WriteClient level configs directly using `options()` or `option(k,v)` methods.
+Hudi supports [parquet modular encryption](/docs/encryption) and [parquet bloom filters](/docs/parquet_bloom) through hadoop configurations.
+

Review Comment:
   added parquet_config heading
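   To illustrate the hadoop-configuration route described in the added line, here is a minimal Scala sketch that enables a parquet bloom filter on one column before a Hudi write. It assumes parquet-mr's column-level properties (`parquet.bloom.filter.enabled#<column>` and `parquet.bloom.filter.expected.ndv#<column>`); the column `device_uuid`, table name, and path are illustrative assumptions.

   ```scala
   // Minimal sketch: enable a parquet bloom filter on one column for a Hudi write.
   // Assumes a live SparkSession `spark` and a DataFrame `df`; names/paths are placeholders.
   spark.sparkContext.hadoopConfiguration.set("parquet.bloom.filter.enabled#device_uuid", "true")       // enable bloom filter for this column
   spark.sparkContext.hadoopConfiguration.set("parquet.bloom.filter.expected.ndv#device_uuid", "100000") // expected number of distinct values

   df.write
     .format("hudi")
     .option("hoodie.table.name", "device_events")
     .option("hoodie.datasource.write.recordkey.field", "device_uuid")
     .option("hoodie.datasource.write.partitionpath.field", "date")
     .mode("append")
     .save("/tmp/device_events")
   ```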


