vinothchandar commented on a change in pull request #1390: [HUDI-634] Write release blog and document breaking changes for 0.5.2 release
URL: https://github.com/apache/incubator-hudi/pull/1390#discussion_r390529875
 
 

 ##########
 File path: docs/_pages/releases.cn.md
 ##########
 @@ -7,6 +7,31 @@ last_modified_at: 2019-12-30T15:59:57-04:00
 language: cn
 ---
 
+## [Release 0.5.2-incubating](https://github.com/apache/incubator-hudi/releases/tag/release-0.5.2-incubating) ([docs](/docs/0.5.2-quick-start-guide.html))
+
+### Download Information
+ * Source Release : [Apache Hudi (incubating) 0.5.2-incubating Source Release](https://www.apache.org/dist/incubator/hudi/0.5.2-incubating/hudi-0.5.2-incubating.src.tgz) ([asc](https://www.apache.org/dist/incubator/hudi/0.5.2-incubating/hudi-0.5.2-incubating.src.tgz.asc), [sha512](https://www.apache.org/dist/incubator/hudi/0.5.2-incubating/hudi-0.5.2-incubating.src.tgz.sha512))
+ * Apache Hudi (incubating) jars corresponding to this release are available [here](https://repository.apache.org/#nexus-search;quick~hudi)
+
+### Release Highlights
+ * CLI now supports `temp_query` and `temp_delete` to query and delete temp views. These commands create a temp table against which users can run HiveQL queries to filter for the desired rows.
+ * `TimestampBasedKeyGenerator` now supports partition-key data types that are convertible to String. Previously it supported only four types (`Double`, `Long`, `Float` and `String`); now date types can also be converted to strings.
+ * Hudi now supports incremental pulls from specified partitions. For use cases that only need the incremental part of certain partitions, pulls can run faster by loading only the relevant parquet files.
+ * CLI now offers an option to print additional commit metadata, e.g. *Total Log Blocks*, *Total Rollback Blocks*, *Total Updated Records Compacted* and so on.
+ * With 0.5.2, Hudi allows the partition path to be updated with the `GLOBAL_BLOOM` index.
+ * The client now allows overwriting the payload implementation in `hoodie.properties`. Previously, once the payload class was set in `hoodie.properties`, it could not be changed. In some cases, e.g. after a code refactor and an updated jar, one may need to pass in a new payload class name.
+ * With 0.5.2, the community now publishes test coverage to codecov.io on every build, making changes in test coverage easier to track.
+ * A `JdbcbasedSchemaProvider` has been added to fetch schemas through JDBC. This is helpful for use cases that synchronize data from a database such as MySQL while also reading the schema from that same database.
+ * Simplified `HoodieBloomIndex` by removing the 2GB-limit handling. Prior to Spark 2.4.0, each Spark partition had a 2GB limit; since Hudi 0.5.1 upgraded to Spark 2.4.4, the limitation no longer applies, so the safe-parallelism constraint in `HoodieBloomIndex` has been removed.
+ * Write client restructuring has moved classes around ([HUDI-554](https://issues.apache.org/jira/browse/HUDI-554))
+   - `client` now contains all the various client classes that do the transaction management
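
As an aside, the partition-scoped incremental pull highlighted above would be driven by Spark datasource read options. The fragment below is only an illustrative sketch; the option keys (`hoodie.datasource.view.type`, `hoodie.datasource.read.begin.instanttime`, `hoodie.datasource.read.incr.path.glob`) and example values are assumptions that should be verified against the 0.5.2 configuration reference:

```properties
# Hypothetical sketch of an incremental pull restricted to certain partitions.
# All keys/values are assumptions; check the Hudi 0.5.2 configuration docs.
hoodie.datasource.view.type=incremental
# only pull commits after this instant time
hoodie.datasource.read.begin.instanttime=20200301000000
# glob limiting the pull to selected partition paths (the new capability)
hoodie.datasource.read.incr.path.glob=/year=2020/month=03/*
```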
 
 Review comment:
   can we remove the bullets and summarize it further?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
