Hi Dylan,

Thanks for reaching out to the Flink community, and apologies for our late
response. I am not an expert on the Table API and its JDBC connector, but
what you describe sounds like a missing feature. Given that FLINK-12198 [1]
already enabled this behavior for the JDBCInputFormat, we might simply need
to make it configurable from the JdbcTableSource. I am pulling in Jark and
Leonard, who worked on the JdbcTableSource and might help you get this
feature into Flink. Their response could take a week because, if I am not
mistaken, they are currently on vacation.

In the meantime, you could already open a JIRA issue that links to
FLINK-12198 [1], describes the problem, and outlines your proposed solution.
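
For background, the Postgres JDBC driver only streams results with a cursor
when auto-commit is disabled, the ResultSet is forward-only, and a fetch
size is set; otherwise it materializes the whole result in memory. Here is
a minimal plain-JDBC sketch of the pattern the connector would presumably
have to apply internally (the URL, credentials, and table name are just
placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PgCursorRead {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/db", "user", "secret")) {
                // Both settings are required for cursor-based fetching in
                // the Postgres driver; with auto-commit on, the fetch size
                // is ignored and the whole result set is buffered in memory.
                conn.setAutoCommit(false);
                try (Statement stmt = conn.createStatement()) {
                    stmt.setFetchSize(1000); // rows per round trip
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT * FROM big_table")) {
                        while (rs.next()) {
                            // process rows incrementally
                        }
                    }
                }
            }
        }
    }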

[1] https://issues.apache.org/jira/browse/FLINK-12198

Cheers,
Till

On Wed, Oct 7, 2020 at 5:00 PM Dylan Forciea <dy...@oseberg.io> wrote:

> I appreciate it! Let me know if you want me to submit a PR against the
> issue after it is created. It wasn’t a huge amount of code, so it’s
> probably not a big deal if you wanted to redo it.
>
> Thanks,
> Dylan
>
> From: Shengkai Fang <fskm...@gmail.com>
> Date: Wednesday, October 7, 2020 at 9:06 AM
> To: Dylan Forciea <dy...@oseberg.io>
> Subject: Re: autoCommit for postgres jdbc streaming in Table/SQL API
>
> Sorry for the late response. +1 to support it. I will open a JIRA about
> it later.
>
> Dylan Forciea <dy...@oseberg.io> wrote on Wednesday, October 7, 2020 at
> 9:53 PM:
>
> I hadn’t heard a response on this, so I’m going to expand this to the dev
> email list.
>
>
> If this is indeed an issue and not my misunderstanding, I have most of a
> patch already coded up. Please let me know, and I can create a JIRA issue
> and send out a PR.
>
> Regards,
>
> Dylan Forciea
>
> Oseberg
>
>
> From: Dylan Forciea <dy...@oseberg.io>
> Date: Thursday, October 1, 2020 at 5:14 PM
> To: "user@flink.apache.org" <user@flink.apache.org>
> Subject: autoCommit for postgres jdbc streaming in Table/SQL API
>
> Hi! I’ve just recently started evaluating Flink for our ETL needs, and I
> ran across an issue with streaming postgres data via the Table/SQL API.
>
> I see that the API has the scan.fetch-size option, but not
> scan.auto-commit, per
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/jdbc.html.
> I had attempted to load in a large table, but the connector slurped it
> entirely into memory before the streaming started. I modified the Flink
> source code to add a scan.auto-commit option, and I was then able to
> start streaming immediately and cut my memory usage way down.
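>
> To illustrate, with my patch the new option is set like any other
> connector option in the DDL (scan.auto-commit is the name I chose; it
> does not exist upstream yet, and the table and URL below are just
> placeholders):
>
>     import org.apache.flink.table.api.EnvironmentSettings;
>     import org.apache.flink.table.api.TableEnvironment;
>
>     public class AutoCommitOptionExample {
>         public static void main(String[] args) {
>             TableEnvironment tEnv = TableEnvironment.create(
>                 EnvironmentSettings.newInstance().inStreamingMode().build());
>             tEnv.executeSql(
>                 "CREATE TABLE big_table (id BIGINT, payload STRING) WITH (" +
>                 " 'connector' = 'jdbc'," +
>                 " 'url' = 'jdbc:postgresql://localhost:5432/db'," +
>                 " 'table-name' = 'big_table'," +
>                 " 'scan.fetch-size' = '1000'," +
>                 // the new option from my patch; disabling auto-commit lets
>                 // the Postgres driver stream with a cursor
>                 " 'scan.auto-commit' = 'false'" +
>                 ")");
>         }
>     }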
>
> I see that a similar issue was resolved for the JDBCInputFormat in this
> thread:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-JDBC-Disable-auto-commit-mode-td27256.html,
> but I don’t see a way to utilize that fix from the Table/SQL API.
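>
> For reference, my understanding is that on the DataSet side the fix from
> that thread comes down to a builder flag, roughly like this (assuming the
> setAutoCommit method that FLINK-12198 added; connection details and the
> query are placeholders):
>
>     import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
>     import org.apache.flink.api.java.ExecutionEnvironment;
>     import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
>     import org.apache.flink.api.java.typeutils.RowTypeInfo;
>
>     public class JdbcInputFormatExample {
>         public static void main(String[] args) throws Exception {
>             ExecutionEnvironment env =
>                 ExecutionEnvironment.getExecutionEnvironment();
>             env.createInput(JDBCInputFormat.buildJDBCInputFormat()
>                     .setDrivername("org.postgresql.Driver")
>                     .setDBUrl("jdbc:postgresql://localhost:5432/db")
>                     .setQuery("SELECT id, payload FROM big_table")
>                     .setFetchSize(1000)
>                     // added by FLINK-12198; turns off auto-commit so the
>                     // Postgres driver streams instead of buffering
>                     .setAutoCommit(false)
>                     .setRowTypeInfo(new RowTypeInfo(
>                         BasicTypeInfo.LONG_TYPE_INFO,
>                         BasicTypeInfo.STRING_TYPE_INFO))
>                     .finish())
>                 .print();
>         }
>     }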
>
> Am I missing something on how to pull this off?
>
> Regards,
>
> Dylan Forciea
>
> Oseberg
