The Apache Hudi community is pleased to announce the release of Apache Hudi 0.5.3.
Apache Hudi (pronounced "Hoodie") stands for Hadoop Upserts Deletes and Incrementals. Apache Hudi manages storage of large analytical datasets on DFS (cloud stores, HDFS, or any Hadoop FileSystem compatible storage) and provides the ability to update/delete records as well as capture changes.

0.5.3 is a bug-fix release and the first release after Hudi graduated to a Top-Level Project (TLP). It includes more than 35 resolved issues, comprising general improvements and bug fixes. Hudi 0.5.3 enables the Embedded Timeline Server and Incremental Cleaning by default for both delta-streamer and Spark datasource writes (a short configuration sketch is included at the end of this announcement). Apart from multiple bug fixes, this release also improves write performance, for example by avoiding unnecessary loading of data after writes and by improving the parallelism of searching for existing files when writing new records.

For details on how to use Hudi, please look at the quick start page located at https://hudi.apache.org/docs/quick-start-guide.html

If you'd like to download the source release, you can find it here: https://github.com/apache/hudi/releases/tag/release-0.5.3

You can read more about the release (including release notes) here: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&version=12348256

We would like to thank all contributors, the community, and the Apache Software Foundation for enabling this release, and we look forward to continued collaboration.

We welcome your help and feedback. For more information on how to report problems, and to get involved, visit the project website at http://hudi.apache.org/
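To make the newly flipped defaults concrete, below is a minimal Spark/Scala sketch of a datasource upsert that pins the two configuration keys explicitly. It is illustrative only: the table name, field names, and base path are hypothetical, and the two highlighted options merely restate the values that 0.5.3 now applies by default (setting them to "false" would revert to the previous behaviour).

    // Minimal sketch, assuming the hudi-spark-bundle is on the classpath.
    // Table name, field names, and paths below are hypothetical.
    import org.apache.spark.sql.{SaveMode, SparkSession}

    object Hudi053DefaultsSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("hudi-0.5.3-defaults")
          .master("local[2]")
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical sample records: (record key, partition path, ordering field, payload)
        val df = Seq(
          ("id-1", "2020/06/18", 1L, "first"),
          ("id-2", "2020/06/18", 1L, "second")
        ).toDF("uuid", "partitionpath", "ts", "payload")

        df.write
          .format("org.apache.hudi")
          .option("hoodie.table.name", "hudi_announce_demo")            // hypothetical table name
          .option("hoodie.datasource.write.operation", "upsert")
          .option("hoodie.datasource.write.recordkey.field", "uuid")
          .option("hoodie.datasource.write.partitionpath.field", "partitionpath")
          .option("hoodie.datasource.write.precombine.field", "ts")
          // Both keys below default to true as of 0.5.3; they are set here only
          // to make the new defaults visible in one place.
          .option("hoodie.embed.timeline.server", "true")
          .option("hoodie.cleaner.incremental.mode", "true")
          .mode(SaveMode.Append)
          .save("/tmp/hudi_announce_demo")                              // hypothetical base path

        spark.stop()
      }
    }

The same two keys apply to delta-streamer runs as well, supplied through its externalized configuration properties.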
Kind regards,
Sivabalan Narayanan (Hudi 0.5.3 Release Manager)
On behalf of the Apache Hudi community