Hi Hunk,
there is documentation about watermarking in Flink SQL [1]. There is also a
Flink SQL cookbook entry about watermarking [2]. Essentially, you define the
watermark strategy in your CREATE TABLE statement and specify the allowed
lateness for a given event (not the period in which watermarks are
automatically emitted).
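To illustrate the point above, a minimal sketch of a watermark declared in the DDL (table, column, and connector names are hypothetical):

```sql
CREATE TABLE orders (
    order_id STRING,
    order_time TIMESTAMP(3),
    -- watermark trails the max observed event time by 5 seconds,
    -- i.e. events up to 5 seconds late are still considered on time
    WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'datagen'
);
```

Note that the INTERVAL here is the bounded out-of-orderness, not how often watermarks are emitted.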
Hi dear engineers,
I have a question about the watermark generation mechanism in Flink SQL.
There are two mechanisms, Periodic Watermarks and Punctuated Watermarks. I
want to use Periodic Watermarks with an interval of 5 seconds (meaning
watermarks will be generated every 5 seconds); how should
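For what it's worth, the emission period (as opposed to the bounded lateness in the WATERMARK clause) is controlled by the `pipeline.auto-watermark-interval` option, which Flink SQL inherits from the DataStream runtime; a sketch, assuming the SQL client:

```sql
-- watermarks are emitted periodically; this controls how often
SET 'pipeline.auto-watermark-interval' = '5s';
```

This is an assumption about what the question is after; punctuated watermarks are not configurable this way in SQL.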
Hi Vignesh,
The 403 status code makes this look like an authorization issue.
>
> Some digging into the presto configs showed I had this one turned off:
> presto.s3.use-instance-credentials: "false". (Is this right?)
From the document [1], it is recommended to set
hive.s3.use-instance-credentials to
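For reference, a sketch of what the relevant flink-conf.yaml entry might look like; this assumes credentials come from the EKS instance/pod role, and uses the key name as it appears in the thread:

```yaml
# flink-conf.yaml (sketch): let the Presto S3 filesystem obtain
# credentials from the instance role instead of static keys
presto.s3.use-instance-credentials: "true"
# alternatively, supply static credentials explicitly:
# presto.s3.access-key: <access-key>
# presto.s3.secret-key: <secret-key>
```

The right choice depends on how credentials are provisioned in the cluster, so treat this as a starting point rather than the recommended configuration.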
Apologies for the calculation mistake:
120 * 6 * 2KB = 1440KB ≈ 1.4MB
> On 18-Oct-2022, at 1:35 AM, Puneet Duggal wrote:
>
> Hi,
>
> I am working on a use case which uses Flink CEP for pattern detection.
>
> Flink Version - 1.12.1
> Deployment Mode - Session Mode (Highly Available)
> State Backend
Hello all,
I am trying to set up Flink application checkpointing to S3 using the
recommended Presto S3 filesystem plugin.
My application is deployed in a Kubernetes cluster (EKS) in Flink
application mode.
When I start the application I get a 403 Forbidden response:
```
Caused by:
com.face
```
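For context, a typical configuration for Presto-based S3 checkpointing looks roughly like the sketch below (bucket name and path are placeholders):

```yaml
# flink-conf.yaml (sketch): checkpoint to S3 via the Presto plugin,
# which is selected by the s3p:// scheme
state.checkpoints.dir: s3p://my-bucket/checkpoints
```

With the official Docker images, the plugin itself is usually enabled by placing flink-s3-fs-presto in the plugins directory (e.g. via the ENABLE_BUILT_IN_PLUGINS environment variable); the 403 then points at the credentials the plugin resolves, not at this part of the setup.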
Hi David,
Many thanks for your reply. I understand then that there is no easy way to
do a simple processing-time join (purely based on SQL, without using the
Table API) where you:
- save elements seen on the right in the current state (in general this
state can be regarded as a materialised view,
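For comparison, the closest built-in construct in Flink SQL is a processing-time temporal join; a sketch, with hypothetical table and column names:

```sql
-- each order is joined against the version of the rates table
-- that is current at the order's processing time
SELECT o.order_id, o.amount * r.rate AS converted
FROM orders AS o
JOIN rates FOR SYSTEM_TIME AS OF o.proc_time AS r
ON o.currency = r.currency;
```

This assumes `rates` is a table that supports temporal lookups, which may not match the materialised-view semantics discussed above.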
Thanks for sharing the stack trace. That specific error shouldn't cause the
session cluster to shut down. It is handled in JobMaster#onStart [1], where
handleJobMasterError is called, which triggers the fatal error handler
only for fatal errors. Could you share the entire logs of this run? That
would help