Hi Fabian,
Thanks for the reply. I also created a JIRA yesterday:
https://issues.apache.org/jira/browse/FLINK-18641. I think we can
continue the discussion there.
Best Regards,
Brian
From: Fabian Hueske
Sent: Tuesday, July 21, 2020 17:35
To: Zhou, Brian
Cc: user; Arvid Heise; Piotr Nowojski
Subject: Re: Pravega connector cannot recover from the checkpoint due to
"Failure to finalize checkpoint"
Can anyone help us with this issue?
Best Regards,
Brian
From: Zhou, Brian
Sent: Wednesday, July 15, 2020 18:26
To: 'user@flink.apache.org'
Subject: Pravega connector cannot recover from the checkpoint due to "Failure
to finalize checkpoint"
Hi community,
To give some background, https://github.com/pravega/flink-connectors is a
Pravega connector for Flink. The ReaderCheckpointHook[1] class uses Flink's
`MasterTriggerRestoreHook` interface to trigger a Pravega checkpoint during
Flink checkpoints, to ensure data can be recovered. We
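For readers unfamiliar with the hook mechanism, here is a minimal,
self-contained sketch of the pattern (all class and method names below are
illustrative stand-ins, not the actual Pravega connector or Flink API): on
each Flink checkpoint the hook triggers a checkpoint in the external system
and records its handle; on recovery, the handle is used to reset the
external system to that checkpoint.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of the master-hook pattern described above. Names are illustrative.
public class CheckpointHookSketch {

    // Stand-in for the external system; a real hook would talk to Pravega.
    static class FakeReaderGroup {
        private String lastCheckpoint;

        CompletableFuture<String> initiateCheckpoint(String name) {
            lastCheckpoint = name;
            return CompletableFuture.completedFuture(name);
        }

        void resetTo(String checkpointName) {
            lastCheckpoint = checkpointName;
        }

        String current() {
            return lastCheckpoint;
        }
    }

    // Shape of the hook: trigger on checkpoint, restore on recovery.
    static class ReaderCheckpointHookSketch {
        private final FakeReaderGroup readerGroup;

        ReaderCheckpointHookSketch(FakeReaderGroup rg) {
            this.readerGroup = rg;
        }

        // Called when Flink takes a checkpoint: start an external checkpoint
        // and return a future that completes with its handle.
        CompletableFuture<String> triggerCheckpoint(long checkpointId) {
            return readerGroup.initiateCheckpoint("ckpt-" + checkpointId);
        }

        // Called on job recovery: reset the external system to the handle
        // stored in the Flink checkpoint.
        void restoreCheckpoint(String handle) {
            readerGroup.resetTo(handle);
        }
    }

    public static void main(String[] args) throws Exception {
        FakeReaderGroup rg = new FakeReaderGroup();
        ReaderCheckpointHookSketch hook = new ReaderCheckpointHookSketch(rg);

        String handle = hook.triggerCheckpoint(42L).get();
        System.out.println(handle);       // ckpt-42

        hook.restoreCheckpoint(handle);
        System.out.println(rg.current()); // ckpt-42
    }
}
```

The failure discussed in this thread happens when the trigger side completes
but the Flink checkpoint as a whole cannot be finalized.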
Hi,
In this document
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/s3.html#hadooppresto-s3-file-systems-plugins,
it is mentioned that:
* Presto is the recommended file system for checkpointing to S3.
Is there a reason for that? Is there some bottleneck in the s3 hadoop plugin?
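For context on how the two plugins are selected: when both S3 filesystems are
installed as plugins, Flink registers distinct URI schemes, `s3p://` for
flink-s3-fs-presto and `s3a://` for flink-s3-fs-hadoop (plain `s3://` is
served by either). A common setup, sketched below with a placeholder bucket
name, pins checkpointing to the Presto plugin explicitly:

```yaml
# flink-conf.yaml (sketch; bucket and paths are placeholders)
# s3p:// routes checkpoint/savepoint I/O to flink-s3-fs-presto;
# s3a:// would select flink-s3-fs-hadoop instead (which the
# StreamingFileSink requires).
state.checkpoints.dir: s3p://my-bucket/flink/checkpoints
state.savepoints.dir: s3p://my-bucket/flink/savepoints
```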
Hi,
Thanks for the information. I replied in the comment of this issue:
https://issues.apache.org/jira/browse/FLINK-16693?focusedCommentId=17065486&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17065486
Best Regards,
Brian
-----Original Message-----
From: Timo
Hi,
Thanks for the reference, Jark. In the Pravega connector, users define a
Schema first and then create the table with a descriptor that uses the schema;
see [1]. The error also comes from that test case. We also tried the
recommended `bridgedTo(Timestamp.class)` method in the schema construction,
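For context, the schema construction mentioned above looks roughly like the
following in the Flink 1.10 Table API (field names are illustrative, and the
fragment needs the flink-table dependencies on the classpath, so it is a
sketch rather than a runnable program):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

// Illustrative schema: bridgedTo(...) asks the planner to use the legacy
// java.sql.Timestamp conversion class for the TIMESTAMP(3) logical type.
TableSchema schema = TableSchema.builder()
        .field("name", DataTypes.STRING())
        .field("event_time",
               DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class))
        .build();
```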
Hi Jark,
I saw this mail and noticed it looks like a similar issue I raised to the
community several days ago. [1] Could you have a look to see whether it is the
same issue as this one? If yes, there is a further question. From the Pravega
connector side, the issue is raised in our Batch Table API, which means users
Hi community,
The Pravega connector provides both Batch and Streaming Table
API implementations. We use the descriptor API to build the Table source. When
planning to upgrade to Flink 1.10, we found the unit tests do not pass with
our existing Batch Table API. There is a type convers