Thanks Vinoth. These are really exciting items, and hats off to you and the team
for pushing releases swiftly and improving the framework all the time. I
hope to start contributing someday, once I get free from my major
deliverables and have a better understanding of the nitty-gritty details of Hudi.

You mentioned Spark 3.0 support in the next release. We were actually
thinking of moving to Spark 3.0, but thought it was too early with the 0.6
release. Is 0.6 not fully tested with Spark 3.0?


On Wed, 23 Sep 2020 at 8:25 AM, Vinoth Chandar <vin...@apache.org> wrote:

> Hello all,
>
> Pursuant to our conversation around release planning, I am happy to share
> the initial set of proposals for the next minor/major releases (a minor
> release, of course, can go out based on time).
>
> *Next minor version 0.6.1 (with items that did not make it into 0.6.0):*
> Flink/writer-common refactoring for Flink
> Small file handling support w/o caching
> Spark 3 support
> Remaining bootstrap items
> Completing bulk_insertV2 (sort mode, de-dup, etc.)
>
> Full list here:
> https://issues.apache.org/jira/projects/HUDI/versions/12348168
>
> *0.7.0 with major new features:*
> RFC-15: metadata, range index (w/ Spark support), bloom index (eliminate
> file listing, query pruning, improve bloom index perf)
> RFC-08: Record Index (to solve global index scalability/perf)
> RFC-18/19: Clustering/Insert overwrite
> Spark 3 based datasource rewrite (structured streaming sink/source,
> DELETE/MERGE)
> Incremental query on logs (Hive, Spark)
> Parallel writing support
> Redesign of marker files for S3
> Stretch: ORC, PrestoSQL support
>
> Full list here:
> https://issues.apache.org/jira/projects/HUDI/versions/12348721
>
> Please chime in with your thoughts. If you would like to commit to
> contributing a feature towards a release, please do so by marking the *`Fix
> Version/s`* field with that release number.
>
> Thanks
> Vinoth