Thanks Abid,

Count me in, and drop a note if I can help in any way.

Thanks,
Peter

On Tue, Oct 11, 2022, 20:13 <abmo.w...@icloud.com.invalid> wrote:

> Hi Martijn,
>
> Yes, catalog integration exists and catalogs can be created using Flink
> SQL.
>
> https://iceberg.apache.org/docs/latest/flink/#creating-catalogs-and-using-catalogs
> has more details.
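> For reference, here is a minimal sketch of creating a Hive-backed Iceberg
> catalog via Flink SQL, following the linked page; the catalog name, metastore
> URI and warehouse path are placeholders:
>
>   CREATE CATALOG iceberg_catalog WITH (
>     'type' = 'iceberg',
>     'catalog-type' = 'hive',
>     'uri' = 'thrift://localhost:9083',
>     'warehouse' = 'hdfs://nn:8020/warehouse/path'
>   );
>
>   -- subsequent DDL/DML then runs against the Iceberg catalog
>   USE CATALOG iceberg_catalog;
>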
> We may need some discussion within the Iceberg community, but based on the
> current iceberg-flink code structure we are looking to externalize this as
> well.
>
> Thanks
> Abid
>
>
> On 2022/10/11 08:24:44 Martijn Visser wrote:
> > Hi Abid,
> >
> > Thanks for the FLIP. I have a question about Iceberg's Catalog: has that
> > integration between Flink and Iceberg been created already and are you
> > looking to externalize that as well?
> >
> > Thanks,
> >
> > Martijn
> >
> > On Tue, Oct 11, 2022 at 12:14 AM <ab...@icloud.com.invalid> wrote:
> >
> > > Hi Marton,
> > >
> > > Yes, we are initiating this as part of the Externalize Flink Connectors
> > > effort. The plan is to externalize the existing Flink connector from the
> > > Iceberg repo into a separate repo under the Flink umbrella.
> > >
> > > Sorry about the doc permissions! I was able to create FLIP-267:
> > > https://cwiki.apache.org/confluence/display/FLINK/FLIP+267%3A+Iceberg+Connector
> > > Let’s use that to discuss.
> > >
> > > Thanks
> > > Abid
> > >
> > > On 2022/10/10 12:57:32 Márton Balassi wrote:
> > > > Hi Abid,
> > > >
> > > > Just to clarify: does your suggestion mean that the Iceberg community
> > > > would like to remove the iceberg-flink connector from the Iceberg
> > > > codebase and maintain it under Flink instead? A new separate repo under
> > > > the Flink project umbrella makes sense to me, given the current effort
> > > > to extract connectors to their individual repos (externalize) [1].
> > > >
> > > > [1] https://lists.apache.org/thread/mpzzlpob9ymkjfybm96vz2y2m5fjyvfo
> > > >
> > > > Best,
> > > > Marton
> > > >
> > > >
> > > > On Mon, Oct 10, 2022 at 5:31 AM Jingsong Li <ji...@gmail.com> wrote:
> > > >
> > > > > Thanks Abid for driving.
> > > > >
> > > > > +1 for this.
> > > > >
> > > > > Can you open the permissions for
> > > > > https://docs.google.com/document/d/1WC8xkPiVdwtsKL2VSPAUgzm9EjrPs8ZRjEtcwv93ISI/edit?usp=sharing
> > > > > ?
> > > > >
> > > > > Best,
> > > > > Jingsong
> > > > >
> > > > > On Mon, Oct 10, 2022 at 9:22 AM Abid Mohammed
> > > > > <ab...@icloud.com.invalid> wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I would like to start a discussion about contributing the Iceberg
> > > > > > Flink Connector to Flink.
> > > > > >
> > > > > > I created a doc
> > > > > > (https://docs.google.com/document/d/1WC8xkPiVdwtsKL2VSPAUgzm9EjrPs8ZRjEtcwv93ISI/edit?usp=sharing)
> > > > > > with all the details, following the Flink Connector template, as I
> > > > > > don’t have permissions to create a FLIP yet.
> > > > > > High-level details are captured below:
> > > > > >
> > > > > > Motivation:
> > > > > >
> > > > > > This FLIP aims to contribute the existing Apache Iceberg Flink
> > > > > > Connector to Flink.
> > > > > >
> > > > > > Apache Iceberg is an open table format for huge analytic datasets.
> > > > > > Iceberg adds tables to compute engines including Spark, Trino,
> > > > > > PrestoDB, Flink, Hive and Impala using a high-performance table
> > > > > > format that works just like a SQL table.
> > > > > > Iceberg avoids unpleasant surprises. Schema evolution works and
> > > > > > won’t inadvertently un-delete data. Users don’t need to know about
> > > > > > partitioning to get fast queries. Iceberg was designed to solve
> > > > > > correctness problems in eventually-consistent cloud object stores.
> > > > > >
> > > > > > Iceberg supports both Flink’s DataStream API and Table API. Based
> > > > > > on the Flink community’s guideline, only the latest two minor Flink
> > > > > > versions are actively maintained. See Multi-Engine Support
> > > > > > (#apache-flink) for further details.
> > > > > >
> > > > > >
> > > > > > The Iceberg connector supports:
> > > > > >
> > > > > >         • Source: detailed Source design, based on FLIP-27:
> > > > > >           https://docs.google.com/document/d/1q6xaBxUPFwYsW9aXWxYUh7die6O7rDeAPFQcTAMQ0GM/edit#
> > > > > >         • Sink: detailed Sink design and interfaces used:
> > > > > >           https://docs.google.com/document/d/1O-dPaFct59wUWQECXEEYIkl9_MOoG3zTbC2V-fZRwrg/edit#
> > > > > >         • Usable in both DataStream and Table API/SQL
> > > > > >         • DataStream read/append/overwrite
> > > > > >         • SQL create/alter/drop table, select, insert into, insert
> > > > > >           overwrite
> > > > > >         • Streaming or batch read in Java API
> > > > > >         • Support for Flink’s Python API
> > > > > >
> > > > > > See the Iceberg Flink docs
> > > > > > (https://iceberg.apache.org/docs/latest/flink/#flink) for detailed
> > > > > > usage instructions; a short Flink SQL sketch of the read/write path
> > > > > > follows below.
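> > > > > >
> > > > > > To make the SQL surface listed above concrete, here is a minimal
> > > > > > sketch against a pre-configured Iceberg catalog; the catalog,
> > > > > > database and table names are placeholders:
> > > > > >
> > > > > >   CREATE TABLE iceberg_catalog.db.sample (
> > > > > >     id   BIGINT,
> > > > > >     data STRING
> > > > > >   );
> > > > > >
> > > > > >   -- append and read via Flink SQL
> > > > > >   INSERT INTO iceberg_catalog.db.sample VALUES (1, 'a'), (2, 'b');
> > > > > >   SELECT * FROM iceberg_catalog.db.sample;
> > > > > >
> > > > > >   -- replace existing data instead of appending (batch mode)
> > > > > >   INSERT OVERWRITE iceberg_catalog.db.sample VALUES (3, 'c');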
> > > > > >
> > > > > > Looking forward to the discussion!
> > > > > >
> > > > > > Thanks
> > > > > > Abid
> > > > >
> > > >
> >
