Hi All!
Is it possible to insert into a table without specifying all columns of the
target table?
In other words, can the columns that are not specified somehow fall back to
the table's default / NULL values?
For example:
Query schema: [a: STRING]
Sink schema: [a: STRING, b: STRING]
I would like to be able to si…
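A minimal sketch of what this could look like: newer Flink SQL versions accept
an explicit column list on INSERT and fill the omitted nullable columns with
NULL (the table names and connectors below are only placeholders):

CREATE TABLE some_source (a STRING) WITH ('connector' = 'datagen');
CREATE TABLE sink (a STRING, b STRING) WITH ('connector' = 'blackhole');
-- only column a is listed, so b is written as NULL for every row
INSERT INTO sink (a) SELECT a FROM some_source;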
Hi haifang,
1. Possibly the filters are not being pushed down correctly, or you are seeing
the performance impact of single-concurrency writes to Iceberg.
Can you please check the actual number of records written to Iceberg?
Additionally, could you provide the version of the Iceberg connector and the
SQL statement used?
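One way to check the written count is a plain batch query from the SQL client;
a minimal sketch, where my_iceberg_table stands in for the actual table name:

SET 'execution.runtime-mode' = 'batch';
-- compare this count against the number of records the job claims to have written
SELECT COUNT(*) FROM my_iceberg_table;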
Please send an email to user-unsubscr...@flink.apache.org if you want to
unsubscribe from user@flink.apache.org; you can refer to [1][2] for more
details.
Best,
Jiabao
[1] https://fl
Just to give more context: my setup uses Apache Flink 1.18 with the adaptive
scheduler enabled, and the issues happen randomly, particularly as
post-restart behavior.
After each restart, the system logs indicate "Adding split(s) to reader:",
signifying the reassignment of partitions across different TaskManagers.
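For reference, the adaptive scheduler here is the one switched on via
flink-conf.yaml; a minimal sketch of the relevant entry (option name as of
Flink 1.18):

# flink-conf.yaml: replace the default scheduler with the adaptive one
jobmanager.scheduler: adaptive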
Hi Patrick,
It would be great if you could provide a small reproducer. It could be that
the internally used version of Scala isn't being hidden from users properly,
or it could be something else. Without a reproducer, it's quite hard to debug.
Best regards,
Martijn
On Wed, Jan 10, 2024
Hi Martijn,
Many thanks for your reply. Yes, I have seen the examples. I removed the
flink-scala dependency and use only the Java libraries for everything, so
there should be no flink-scala API references in the stack.
These are the flink dependencies we are using:
"org.apache.flink" % "flink-core" % f
Hi Praveen,
There have been discussions around an LTS version [1] but no consensus
has yet been reached on that topic.
Best regards,
Martijn
[1] https://lists.apache.org/thread/qvw66of180t3425pnqf2mlx042zhlgnn
On Wed, Jan 10, 2024 at 12:08 PM Praveen Chandna via user
wrote:
> Hello
Hello
Once Flink 2.0 is released in Dec 2024, what will the release plan for Flink
1.x be?
For those who are using Flink 1.20 or earlier releases, will they need to
migrate to Flink 2.0 for bug fixes, or will the Flink 1.x release track remain
active for bug fixes?
Thanks !!
// Regards
Praveen Ch
Hi haifang,
'scan.partition.lower-bound' and 'scan.partition.upper-bound' are defined as
long types, so it is difficult to fill in a timestamp value there.
However, you can use WHERE t > TIMESTAMP '2022-01-01 07:00:01.333' instead, as
the JDBC connector supports filter pushdown.
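For illustration, the suggested predicate in a complete statement;
my_jdbc_table and the column t are placeholders:

-- relies on the JDBC connector pushing the filter down to the database
-- instead of configuring the partitioned-scan bounds
SELECT * FROM my_jdbc_table WHERE t > TIMESTAMP '2022-01-01 07:00:01.333';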
Best,
Jiabao
On 2024/01/10 08:31:23 haifang luo wrote:
Hello~~
My Flink version: 1.15.4
The type of 'scan.partition.column' is timestamp; how should I fill in
'scan.partition.lower-bound' and 'scan.partition.upper-bound'?
Thank you for your reply~~