+1
On 2021/07/02 03:40:51, Vinoth Chandar wrote:
> Hi all,
>
> When we incubated Hudi, we made some initial choices around collaboration
> tools of choice. I am wondering if there are still optimal, given the scale
> of the community at this point.
>
> Specifically, two points.
>
> A) Our iss
Congratulations @Gary Li and @Wenning Ding
On 2021/05/11 19:42:43, Vinoth Chandar wrote:
> Hello all,
>
> Please join me in congratulating our newest set of committers and PMCs.
>
> *Wenning Ding (Committer) *
> Wenning has been a consistent contributor to Hudi, over the past year or
> so. He
+1, cannot agree more.
The *aux metadata* and metadata table can bring large performance
optimizations on the query side, and can be developed continuously.
A cache service may be a necessary component in a cloud-native environment.
On 2021/04/13 05:29:55, Vinoth Chandar wrote:
> Hello all,
>
> Reading one more
+1 for releasing more frequently
On 2021/03/01 04:56:15, Gary Li wrote:
> Hi All,
>
> I’d like to start a discussion about the 0.8.0 release planning. Recently, we
> made a great progress on the Flink writer(thank you Danny) and landed many
> bugfix/perf improvement commits. I think it’s a goo
Thank you all. Hudi is a very interesting project, and the community will
keep developing better and better.
Li Wei,
Thanks
First, I think it is necessary to improve Spark SQL support, because the main
scenario of Hudi is the data lake or warehouse, and Spark has strong ecosystem
capabilities in this field.
Second, in the long run, Hudi needs a more general SQL layer, and it is
very necessary to embrace Calcite. Then based
+1
On 2020/07/06 03:30:43, Vinoth Chandar wrote:
> Hi all,
>
> As we scale the community, its important that more of us are able to help
> users, users becoming contributors.
>
> In the past, we have drafted faqs, trouble shooting guides. But I feel
> sometimes, more hands on walk through sess
m to check the conflict in
AbstractHoodieWriteClient.commit().
I created an issue: https://issues.apache.org/jira/browse/HUDI-944
Best Regards,
Wei Li.
read
can be fast; users can also use asynchronous compaction to collapse older,
smaller parquet files into larger parquet files.
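To make the idea concrete, here is a toy sketch of the file-grouping step behind such compaction: pick files below a small-file limit and greedily bin them so each merged output approaches a target size. All names, thresholds, and the greedy strategy here are illustrative assumptions for discussion, not Hudi's actual compaction API or planner.

```python
# Toy sketch of "collapse small files into larger ones" planning.
# Thresholds and function names are illustrative, not Hudi's real API.

TARGET_FILE_SIZE = 128 * 1024 * 1024  # assumed target merged-file size (128 MB)
SMALL_FILE_LIMIT = 32 * 1024 * 1024   # assumed cutoff: smaller files are candidates

def plan_compaction(file_sizes):
    """Greedily group small files so each merged group nears the target size.

    file_sizes: dict mapping file name -> size in bytes.
    Returns a list of groups (lists of file names) to merge; files at or
    above SMALL_FILE_LIMIT are left alone.
    """
    # Largest candidates first, so big small-files seed the bins.
    candidates = sorted(
        (name for name, size in file_sizes.items() if size < SMALL_FILE_LIMIT),
        key=lambda n: file_sizes[n],
        reverse=True,
    )
    groups, current, current_size = [], [], 0
    for name in candidates:
        # Start a new group when adding this file would overshoot the target.
        if current and current_size + file_sizes[name] > TARGET_FILE_SIZE:
            groups.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += file_sizes[name]
    if current:
        groups.append(current)
    return groups
```

Running such a plan asynchronously, after a fast append-style write, is what lets the write path stay fast while the read path still ends up scanning fewer, larger parquet files.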
Best Regards,
Wei Li.
On 2020/05/14 16:54:24, Vinoth Chandar wrote:
> Hi Wei,
>
> Thanks for starting this thread. I am trying to understand your concern
created an RFC with more details
https://cwiki.apache.org/confluence/display/HUDI/RFC+-+19+hudi+support+log+append+scenario+with+better+write+and+asynchronous+compaction
Best Regards,
Wei Li.
://cwiki.apache.org/confluence/display/HUDI/RFC+-+17+Abstract+common+meta+sync+module+support+multiple+meta+service
. Any feedback is appreciated.
Best Regards,
Wei Li.