Yeah, we have discussed the incremental readers several times; here is the
conclusion [1].   I also wrote a document describing the thinking behind the
discussion [2], which you might find interesting.

In my opinion, the next release, 0.10.0, will include the basic Flink sink
connector and batch reader.  I expect the following 0.11.0 release to
include the CDC ingestion and the streaming (append-only log, not CDC)
reader for Flink.  The CDC streaming reader may land in 0.12.0.

1. https://github.com/apache/iceberg/issues/360#issuecomment-653532308
2. https://docs.google.com/document/d/1bBKDD4l-pQFXaMb4nOyVK-Sl3N2NTTG37uOCQx8rKVc/edit#heading=h.ljqc7bxmc6ej
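For context, here is a minimal sketch of what the incremental read path that
exists today looks like via the core Java API (the table location and snapshot
ids below are made up for illustration). Note that TableScan#appendsBetween
only surfaces appended data files, so deletes are not exposed this way:

import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.Table;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.io.CloseableIterable;

public class IncrementalAppendScanExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical table location and snapshot ids, for illustration only.
    Table table = new HadoopTables().load("hdfs://namenode/warehouse/db/events");
    long fromSnapshotId = 100L;
    long toSnapshotId = 200L;

    // Incremental scan over the data files appended between the two snapshots.
    // Only appends are surfaced; deletes and overwrites are not exposed here.
    TableScan scan = table.newScan().appendsBetween(fromSnapshotId, toSnapshotId);

    try (CloseableIterable<FileScanTask> tasks = scan.planFiles()) {
      for (FileScanTask task : tasks) {
        System.out.println(task.file().path());
      }
    }
  }
}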


On Tue, Oct 20, 2020 at 4:43 AM Ryan Blue <rb...@netflix.com> wrote:

> Hi Ashish,
>
> We've discussed this use case quite a bit, but I don't think that there
> are currently any readers that expose the deletes as a stream. Right now,
> all of the readers produce records from the current table state. I think
> @OpenInx <open...@gmail.com> and @Jingsong Li <jingsongl...@gmail.com> have
> some plans to expose such a reader for Flink, though. Maybe they can work
> with you on some milestones and a roadmap.
>
> rb
>
> On Fri, Oct 16, 2020 at 11:28 AM Ashish Mehta <mehta.ashis...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Is there a spec/proposal/milestone issue that talks about incremental
>> reads for UPSERTs, i.e. allowing clients to read a dataset's APPENDs/DELETEs,
>> with an option to expose the actually deleted rows?
>>
>> I am of the view that exposing deleted rows might be trivial with positional
>> deletes, so such an option could be genuinely helpful when the client is
>> creating positional deletes in their use case.
>>
>> Thanks,
>> Ashish
>>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>
