I took a quick look at the PR and it looks like a great feature to have. It
provides unified APIs for data sources to perform these commonly used
operations easily and efficiently, so users don't have to implement
custom extensions on their own. Thanks Anton for the work!
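For anyone skimming the thread, these are the kinds of row-level statements the SPIP covers. A quick sketch (the table and column names below are made up for illustration; the syntax follows what Spark already parses for v2 tables):

```sql
-- Delete rows matching a predicate
DELETE FROM catalog.db.events WHERE event_date < '2020-01-01';

-- Update a subset of rows in place
UPDATE catalog.db.events SET status = 'archived' WHERE event_date < '2021-01-01';

-- Merge a source of changes into the target table
MERGE INTO catalog.db.events t
USING updates s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```

Today each data source has to supply its own physical execution for these; the proposal moves that logic into Spark behind a common API.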

On Thu, Jun 24, 2021 at 9:42 PM L. C. Hsieh <vii...@apache.org> wrote:

> Thanks Anton. I volunteer to be the shepherd of this SPIP. This is also
> my first time shepherding a SPIP, so please let me know if there is
> anything I can improve.
>
> These look like great features, and the rationale in the proposal makes
> sense. These operations are becoming more common and more important in big
> data workloads. Rather than having individual data sources build custom
> extensions, it makes more sense for Spark to provide the API.
>
> Please provide your thoughts about the proposal and the design. Appreciate
> your feedback. Thank you!
>
> On 2021/06/24 23:53:32, Anton Okolnychyi <aokolnyc...@gmail.com> wrote:
> > Hey everyone,
> >
> > I'd like to start a discussion on adding support for executing row-level
> > operations such as DELETE, UPDATE, MERGE for v2 tables (SPARK-35801). The
> > execution should be the same across data sources and the best way to do
> > that is to implement it in Spark.
> >
> > Right now, Spark can only parse and, to some extent, analyze DELETE,
> > UPDATE, and MERGE commands. Data sources that support row-level changes
> > have to build custom Spark extensions to execute such statements. The
> > goal of this effort is to come up with a flexible and easy-to-use API
> > that will work across data sources.
> >
> > Design doc:
> > https://docs.google.com/document/d/12Ywmc47j3l2WF4anG5vL4qlrhT2OKigb7_EbIKhxg60/
> >
> > PR for handling DELETE statements:
> > https://github.com/apache/spark/pull/33008
> >
> > Any feedback is more than welcome.
> >
> > Liang-Chi was kind enough to shepherd this effort. Thanks!
> >
> > - Anton
> >
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>