Hey Jacqlyn,
According to the stack trace, it seems that there is a problem when
the checkpoint is triggered. Is this problem happening after the restore?
Would you like to share some logs related to restoring?
Best,
Yanfei
Jacqlyn Bender via user wrote on Fri, Sep 8, 2023 at 05:11:
>
> Hey folks,
>
>
> We
Hi Xianxun,
Do you mean that unix_timestamp() is evaluated at the time the query is
compiled in streaming mode?
Best,
Ron
Xianxun Ye wrote on Thu, Sep 7, 2023 at 18:19:
> Hi Team,
>
> I want to Serialize the ResolvedExpression to String or byte[] and transmit
> it into LookupFunction, and parse it back to
Hey folks,
We experienced a pipeline failure where our job manager restarted, and for
some reason we were unable to restore from our last successful checkpoint.
We had regularly completed checkpoints every 10 minutes up to this failure
and 0 failed checkpoints logged. We are using Flink version 1.17.1.
Thanks to Jane for following up on this issue! +1 for adding it back first.
As for the deprecation, considering that users aren't usually motivated to
upgrade to a major version (1.14, from two years ago, isn't that old,
which may be part of the reason for not receiving more feedback), I'd
Apologies.
The question is: from our understanding, we do not need to implement a
counter for the numRecordsOutPerSecond metric explicitly in our code? Is this
metric automatically exposed to Prometheus for every transaction that goes
into asyncInvoke() of our RichAsyncFunction every single second?
I
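For what it's worth, a minimal sketch of how a custom counter would be registered in a RichAsyncFunction if one were needed (class and metric names are illustrative, not from the thread); the built-in operator I/O metrics such as numRecordsOut/numRecordsOutPerSecond are reported by the runtime without any user code:

```java
import java.util.Collections;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Illustrative async function with an additional application-specific counter.
public class MyAsyncFunction extends RichAsyncFunction<String, String> {

    private transient Counter txnCounter;

    @Override
    public void open(Configuration parameters) {
        // Only custom metrics need explicit registration; the standard
        // operator metrics (numRecordsOut etc.) are exposed automatically.
        txnCounter = getRuntimeContext().getMetricGroup().counter("txnsSent");
    }

    @Override
    public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
        txnCounter.inc();
        resultFuture.complete(Collections.singleton(input));
    }
}
```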
+1 to fix it first.
I also agree to deprecate it if there are few people using it,
but that should be another discussion thread on the dev+user MLs.
In the future, we are planning to introduce a user-defined operator
based on the TVF functionality, which I think can fully subsume
the UDTAG; cc @Timo
Hi,
The Flink docs mention flink-s3-fs-presto as the recommended filesystem for
checkpointing. Do we have any more details on how this was concluded? I am
part of a platform team where we are deciding whether to support both
filesystems or not. This will be a very useful input for us.
Thanks in
Hi,
I am new to Flink. I am trying to write a job that updates the keyed state
when a broadcast message is received in a KeyedBroadcastProcessFunction.
I was wondering whether *ctx.applyToKeyedState* in
processBroadcastElement will complete before further messages are
processed in the
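A minimal sketch of the pattern being asked about, assuming String keys and values (all names here are illustrative). Since the operator processes elements single-threaded per subtask, ctx.applyToKeyedState iterates all registered keys and returns before any further element is processed:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;

// Illustrative: update a per-key ValueState whenever a broadcast message arrives.
public class BroadcastStateUpdater
        extends KeyedBroadcastProcessFunction<String, String, String, String> {

    private final ValueStateDescriptor<String> stateDesc =
            new ValueStateDescriptor<>("perKeyValue", String.class);

    @Override
    public void processElement(String value, ReadOnlyContext ctx, Collector<String> out)
            throws Exception {
        // Regular keyed path; sees the state as updated by earlier broadcasts.
        out.collect(value);
    }

    @Override
    public void processBroadcastElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Runs synchronously over every key seen so far, then returns.
        ctx.applyToKeyedState(stateDesc, (key, state) -> state.update(value));
    }
}
```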
Hi Nick,
Short (and somewhat superficial answer):
* (assuming your producer supports exactly once mode (e.g. Kafka))
* Duplicates should only ever appear when your job restarts after a hiccup
* However, if your job is properly configured (checkpointing/Kafka
transactions), everything
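To make the "properly configured" part concrete, a hedged sketch of the usual exactly-once Kafka sink setup (broker address, topic, and id prefix are placeholders). Note also that downstream consumers must read with isolation.level=read_committed, otherwise they will see uncommitted duplicates:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExactlyOnceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka transactions are committed on checkpoint completion.
        env.enableCheckpointing(60_000, CheckpointingMode.EXACTLY_ONCE);

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092")              // placeholder
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")                // placeholder
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                // Required for EXACTLY_ONCE; must be unique per application.
                .setTransactionalIdPrefix("my-app")
                .build();

        env.fromElements("a", "b").sinkTo(sink);
        env.execute();
    }
}
```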
Hi,
I have configured exactly-once.
I see that the Flink producer sends duplicate messages (sometimes a few
copies) that are later consumed only once by another application.
How can I avoid duplicates?
Regards,
nick
Jung,
I don't want to sound unhelpful, but I think the best thing for you to do
is simply to try these different models in your local env.
It should be very easy to get started with the Kubernetes Operator on
Kind/Minikube (
I don't have first-hand knowledge of the Windows compatibility, but I know
people who use it, and I would do something like:
* Standalone AD server
* Create a keytab for each user
* Mount it
* Start the workload with "security.kerberos.login.keytab"
AFAIK there are similar tools on Windows like the ones MIT Kerberos has; if kinit is
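The "security.kerberos.login.keytab" step above would look something like this in flink-conf.yaml (the keytab path and principal are placeholders):

```yaml
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/user.keytab
security.kerberos.login.principal: user@EXAMPLE.COM
```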
Hi,
You cannot access keyed state within #open(). It can only be
accessed under a keyed context (a key is selected while processing an
element, e.g. in #processElement).
Best,
Zakelly
On Thu, Sep 7, 2023 at 4:55 PM Krzysztof Chmielewski
wrote:
>
> Hi,
> I'm having a problem with my toy flink
Hi Krzysztof again,
Just for clarity: your sample code [1] tries to count the number of events per
key. Is this your intention?
Anyway, your previous implementation initialized the keyed state keyCounterState
in the open function; that is the right place to do this,
you just wouldn’t
Hi,
What's your question?
Best
Ron
patricia lee wrote on Thu, Sep 7, 2023 at 14:29:
> Hi flink users,
>
> I used Async IO (RichAsyncFunction) for sending 100 txns to a 3rd party.
>
> I checked the runtimeContext and it has the metric numRecordsSent; we wanted
> to expose this metric to our Prometheus server
Hello Chen,
Thanks for your reply! I have further questions as follows...
1. In the case of non-reactive mode in Flink 1.18, if the autoscaler adjusts
parallelism, what is the difference from using 'reactive' mode?
2. In case I use Flink 1.15~1.17 without the autoscaler, is the difference
of using
Thanks,
that helped.
Regards,
Krzysztof Chmielewski
On Thu, Sep 7, 2023 at 09:52 Schwalbe Matthias wrote:
> Hi Krzysztof,
>
>
>
> You cannot access keyed state in open().
>
> Keyed state has a value per key.
>
> In theory you would have to initialize per possible key, which is quite
>
Hi,
I have a toy Flink job [1] where I have a KeyedProcessFunction
implementation [2] that also implements the CheckpointedFunction. My stream
definition has .keyBy(...) call as you can see in [1].
However, when I try to run this toy job, I get an exception
from
Hi Krzysztof,
You cannot access keyed state in open().
Keyed state has a value per key.
In theory you would have to initialize per possible key, which is quite
impractical.
However, you don’t need to initialize state; the initial state per key defaults
to the default value of the type (null for
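A minimal sketch of the pattern described above: create the state handle in open(), but only read and write it inside processElement(), treating null as "no value yet" (class and state names are illustrative):

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Illustrative per-key event counter.
public class KeyCounter extends KeyedProcessFunction<String, String, Long> {

    private transient ValueState<Long> keyCounterState;

    @Override
    public void open(Configuration parameters) {
        // Creating the descriptor/handle here is fine; reading or writing is not.
        keyCounterState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("keyCounter", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<Long> out)
            throws Exception {
        // Keyed context: safe to access state. null means this key is new.
        Long current = keyCounterState.value();
        long next = (current == null ? 0L : current) + 1;
        keyCounterState.update(next);
        out.collect(next);
    }
}
```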
Hi,
I'm having a problem with my toy flink job where I would like to access a
ValueState of a keyed stream. The Job setup can be found here [1], it is
fairly simple
env
    .addSource(new CheckpointCountingSource(100, 60))
    .keyBy(value -> value)
    .process(new KeyCounter())
    .addSink(new ConsoleSink());
Hi Team,
I want to Serialize the ResolvedExpression to String or byte[] and transmit it
into LookupFunction, and parse it back to ResolvedExpression in the
LookupFunction.
For my case:
Select * from left_stream as s join dim_table for system_time as of s.proc_time as d on
Hi Hangxiang,
We are using flink 1.14, the state backend is EmbeddedRocksDBStateBackend ,
and the Checkpoint Storage is filesystem.
This is the checkpoint configuration from our running jobs
Checkpointing Mode: Exactly Once
Checkpoint Storage: FileSystemCheckpointStorage
State Backend:
Hi all!
We updated the operator to version 1.6.0 and the problem remains.
According to the logs of these errors, there are even more errors than there
were in version 1.4.0.
How can this be fixed?
2023-09-07 06:11:37,742 o.a.f.k.o.FlinkOperator[INFO ] Starting Flink
Kubernetes Operator
Hi, Yifan.
Which Flink version are you using?
You are using the filesystem instead of RocksDB, so your checkpoint size
may not be incremental, IIUC.
On Thu, Sep 7, 2023 at 10:52 AM Yifan He via user
wrote:
> Hi Shammon,
>
> We are using RocksDB, and the configuration is below:
>
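For reference, incremental checkpoints with the RocksDB state backend are controlled by the following flink-conf.yaml keys (the checkpoint directory is a placeholder):

```yaml
state.backend: rocksdb
state.backend.incremental: true
state.checkpoints.dir: s3://my-bucket/checkpoints
```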
Hi, Yifan.
If you enable the debug level log, you could see the log like 'Generated
hash xxx for node xxx'. I haven't found other ways to find the operator id
of SQL jobs, maybe I missed something, or we should export this info more
directly.
Unfortunately, there is no default state name for an
Hi team,
I want to implement a Flink job to read Avro files from an S3 path and output to
a Kafka topic.
Currently, I am using AvroInputFormat like this:
AvroInputFormat<Session> avroInputFormat =
    new AvroInputFormat<>(new Path(S3PathString), Session.class);
TypeInformation<Session> typeInfo =
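For completeness, a hedged sketch of how such a job could be wired end to end, assuming "Session" is the reader's Avro-generated class and using placeholder broker, topic, and path names (one possible approach, not necessarily what the thread author intended):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.avro.AvroInputFormat;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AvroToKafkaSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read Avro records from S3; "Session" is the Avro-generated class.
        AvroInputFormat<Session> input =
                new AvroInputFormat<>(new Path("s3://my-bucket/avro/"), Session.class);

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092")              // placeholder
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")                // placeholder
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        env.createInput(input)
                .map(Session::toString)   // serialize as appropriate for the topic
                .sinkTo(sink);

        env.execute("avro-to-kafka");
    }
}
```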