Hi,

It is a good question how to avoid accidentally writing to a table.
I think there are other ways to solve the problem, for example we can
provide a view instead of a table to the users, or add a table constraint.
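
For illustration, a minimal sketch of the view approach (the table and
column names are made up, and the exact DDL support depends on the Flink
version):

  -- Expose a read-only view to users instead of the underlying
  -- Kafka-backed table; only the owning pipeline writes to kafka_orders.
  CREATE VIEW orders_readonly AS
  SELECT order_id, price, order_time
  FROM kafka_orders;

  -- Users query the view:
  SELECT order_id, price FROM orders_readonly WHERE price > 100;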

Best,
Hequn

On Fri, Oct 5, 2018 at 1:30 PM Shuyi Chen <suez1...@gmail.com> wrote:

> In the case of a normal Flink job, I agree we can infer the table type
> from the queries. However, for the SQL Client, the queries are ad hoc and
> not known beforehand. In such a case, we might want to enforce the table
> open mode at startup time, so users won't accidentally write to a Kafka
> topic that is supposed to be written only by a specific producer. What do
> you guys think?
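
For illustration, the kind of ad-hoc statement such a startup-time
restriction would guard against (the table names here are hypothetical):

  -- Without a "source-only" restriction on kafka_orders, nothing stops an
  -- ad-hoc SQL Client session from writing into a topic that is meant to
  -- be read-only:
  INSERT INTO kafka_orders
  SELECT order_id, price, order_time FROM staging_orders;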
>
> Shuyi
>
> On Thu, Oct 4, 2018 at 7:31 AM Hequn Cheng <chenghe...@gmail.com> wrote:
>
> > Hi,
> >
> > Thanks a lot for the proposal. I like the idea of unifying table
> > definitions. I think we can drop the table type, since the type can be
> > derived from the SQL, i.e., a table that is inserted into can only be a
> > sink table.
> >
> > I left some minor suggestions in the document, mainly the following:
> > - Maybe we also need to allow defining properties for tables.
> > - Support specifying computed columns in a table.
> > - Support defining keys for sources (a sketch of these three points
> >   follows below).
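
As a rough sketch of what such a definition could look like in DDL form
(all table/column names and connector options below are illustrative
assumptions, not a settled proposal):

  -- Per-table properties in the WITH clause, a computed column, and a
  -- declared key on the source (illustrative only):
  CREATE TABLE Orders (
    order_id       BIGINT,
    price          DECIMAL(10, 2),
    order_time     TIMESTAMP(3),
    price_with_tax AS price * 1.1,        -- computed column
    PRIMARY KEY (order_id) NOT ENFORCED   -- declared key
  ) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'localhost:9092',
    'key.format' = 'json',
    'value.format' = 'json'
  );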
> >
> > Best, Hequn
> >
> >
> > On Thu, Oct 4, 2018 at 4:09 PM Shuyi Chen <suez1...@gmail.com> wrote:
> >
> > > Thanks a lot for the proposal, Timo. I left a few comments. Also, it
> > > seems the example in the doc no longer has the table type (source,
> > > sink, or both) property. Are you suggesting dropping it? I think the
> > > table type property is still useful, as it can restrict a certain
> > > connector to be only a source or a sink; for example, we usually want
> > > a Kafka topic to be either read-only or write-only, but not both.
> > >
> > > Shuyi
> > >
> > > On Mon, Oct 1, 2018 at 1:53 AM Timo Walther <twal...@apache.org> wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > as some of you might have noticed, in the last two releases we aimed
> > > > to unify SQL connectors and make them more modular. The first
> > > > connectors and formats have been implemented and are usable via the
> > > > SQL Client and Java/Scala/SQL APIs.
> > > >
> > > > However, after writing more connectors/example programs and talking
> > > > to users, there are still a couple of improvements that should be
> > > > applied to the unified SQL connector API.
> > > >
> > > > I wrote a design document [1] that discusses limitations that I have
> > > > observed and considers feedback that I have collected over the last
> > > > months. I don't know whether we will implement all of these
> > > > improvements, but it would be great to get feedback for a
> > > > satisfactory API and for future prioritization.
> > > >
> > > > The general goal should be to make connecting to external systems as
> > > > convenient and type-safe as possible. Any feedback is highly
> > > > appreciated.
> > > >
> > > > Thanks,
> > > >
> > > > Timo
> > > >
> > > > [1]
> > > > https://docs.google.com/document/d/1Yaxp1UJUFW-peGLt8EIidwKIZEWrrA-pznWLuvaH39Y/edit?usp=sharing
> > > >
> > > >
> > >
> > >
> >
>
>
>
