What is the root cause of the back pressure?
The reason I ask is that we investigated this, added metrics to measure the
time to process each event, and found the bottleneck in frequent managed-state
updates. Our approach was to keep an in-memory cache and periodically flush it
to state before the checkpointing cycle kicks in.
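The pattern described above (buffer updates in memory, flush to managed state once per checkpoint cycle) can be sketched in plain Java without any Flink dependencies. This is only a sketch: `flushToState` semantics are stood in for by a plain map, and in a real job the flush would typically be called from `CheckpointedFunction.snapshotState()`.

```java
import java.util.HashMap;
import java.util.Map;

// Write-behind cache: per-event updates hit the in-memory map; the
// managed state (stood in for by a plain Map here) is only touched
// when flush() runs, e.g. once per checkpoint cycle.
class WriteBehindCache {
    private final Map<String, Long> cache = new HashMap<>();
    private final Map<String, Long> managedState; // stand-in for Flink managed state

    WriteBehindCache(Map<String, Long> managedState) {
        this.managedState = managedState;
    }

    // Cheap per-event update: no state-backend round trip.
    void increment(String key) {
        cache.merge(key, 1L, Long::sum);
    }

    // Flush once per checkpoint cycle instead of once per event.
    void flush() {
        for (Map.Entry<String, Long> e : cache.entrySet()) {
            managedState.merge(e.getKey(), e.getValue(), Long::sum);
        }
        cache.clear();
    }
}
```

The trade-off is that updates buffered since the last flush are lost on failure unless the flush is tied to the checkpoint, which is why hooking it into the snapshot callback matters.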
Hi rhashmi
We are also experiencing slow checkpoints when there is back pressure.
It seems there is no good way to handle back pressure right now.
We work around it by setting a larger checkpoint timeout. The default value
is 10 minutes, but checkpoints usually take more time than that to complete.
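For reference, the checkpoint timeout mentioned above is configurable per job; a hedged sketch (the 30-minute value is just an illustration, not a recommendation):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// The default checkpoint timeout is 10 minutes; raise it if checkpoints
// routinely take longer under back pressure.
env.getCheckpointConfig().setCheckpointTimeout(30 * 60 * 1000); // milliseconds
```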
So what is the resolution? Flink is consuming messages from Kafka. Flink went
down about a day ago, so now it has to process 24 hours' worth of events.
But I hit back pressure, and right now checkpoints are timing out. Is there
any recommendation for how to handle this situation?
Seems like trigger
Hi,
I am sorry, it worked with the BoundedOutOfOrdernessTimestampExtractor.
Somehow I had replayed my events from Kafka, and the older events were also on
the bus, so they didn't correlate with my new events.
Now I have cleaned up my code and restarted it from the beginning, and it works.
Thanks a lot for
Hi Kostas,
I am okay with processing time at the moment but as my events already have a
creation timestamp added to them and also to explore further the event time
aspect with FlinkCEP, I proceeded further with evaluating with event time.
For this I tried both
1. AscendingTimestampExtractor:
You could also remove the autoWatermarkInterval if you are satisfied with
processing time.
Keep in mind, though, that processing time assigns timestamps to elements
based on the order in which they arrive at the operator. This means that
replaying the same stream can give different results.
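For the event-time side, the two extractors discussed in this thread are wired in roughly like this. This is a sketch against the Flink 1.3-era API; `MyEvent` and its `getCreationTime()` accessor are hypothetical placeholders.

```java
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

// Option 1: timestamps are strictly ascending (e.g. per Kafka partition).
stream.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<MyEvent>() {
    @Override
    public long extractAscendingTimestamp(MyEvent event) {
        return event.getCreationTime();
    }
});

// Option 2: tolerate out-of-order events up to a fixed bound.
stream.assignTimestampsAndWatermarks(
    new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
        @Override
        public long extractTimestamp(MyEvent event) {
            return event.getCreationTime();
        }
    });
```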
If
Hi,
I tried to reproduce this, but an example streaming program ran successfully
on a YARN cluster with the -d flag.
What version of Flink are you using? Could you maybe post some code so that we
might try and understand the problem better?
Best,
Aljoscha
> On 5. May 2017, at 10:23,
Hi Kostas,
My application didn't have any timestamp extractor, nor did my events have any
timestamps. Still, I was using event time for processing, which is probably
why it was blocked.
Now I have removed the part where I set the time characteristic to event time,
and it works.
For example:
Previously:
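The original before/after code is cut off here, but based on the surrounding text, the setting being removed is presumably the event-time declaration (a guess, not the author's actual code):

```java
// Removing this line falls back to the default, processing time, so
// windows and patterns fire without needing watermarks:
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
```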
Hi Martin,
thanks for your answer.
For the vertex degree, I passed a map (vertex_id -> degree) to the
constructor.
Regards,
Ali
Zitat von Martin Junghanns :
Hi Ali :)
You could compute the degrees beforehand (e.g. using the
Graph.[in|out|get]Degrees() methods).
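Computing the degree map up front, as suggested, needs nothing Flink-specific; a minimal sketch over a plain edge list, producing the `vertex_id -> degree` map mentioned earlier in the thread (the edge representation as `long[]` pairs is illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DegreeMap {
    // Build vertex_id -> total degree from (source, target) edge pairs.
    static Map<Long, Long> degrees(List<long[]> edges) {
        Map<Long, Long> deg = new HashMap<>();
        for (long[] e : edges) {
            deg.merge(e[0], 1L, Long::sum); // out-degree contribution
            deg.merge(e[1], 1L, Long::sum); // in-degree contribution
        }
        return deg;
    }
}
```

In a Gelly job the same map would come from `getDegrees()` (or `inDegrees()`/`outDegrees()` for directed counts) instead of being computed by hand.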
Hi Biplob,
Great to hear that everything worked out and that you are no longer blocked!
For the timestamp-assigning issue, you mean that you specified no timestamp
extractor in your job and all your elements had a Long.MIN_VALUE timestamp, right?
Kostas
> On May 31, 2017, at 1:28 PM, Biplob Biswas
Hi Dawid,
Thanks for the response. Timeout patterns work like a charm; I saw them
previously but didn't understand what they do, thanks for explaining that.
Also, my problem with no alerts is solved now.
The problem was that I was using event time for processing, whereas my
events didn't have timestamps.
This is what you are looking for:
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/windows.html#incremental-window-aggregation-with-foldfunction
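The linked incremental aggregation boils down to folding each element into an accumulator as it arrives, instead of buffering the whole window; a sketch against the Flink 1.3 windowing API (the tuple type and window size are illustrative):

```java
stream
    .keyBy(0)
    .timeWindow(Time.minutes(1))
    // Fold incrementally: one accumulator per window, updated per element,
    // rather than storing every element until the window fires.
    .fold(0L, new FoldFunction<Tuple2<String, Long>, Long>() {
        @Override
        public Long fold(Long acc, Tuple2<String, Long> value) {
            return acc + value.f1;
        }
    });
```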
Cheers,
Gyula
William Saar wrote (on Wed, May 31, 2017, at 1:36):
> Nice! The solution is actually starting
Thanks Nico for a good gist on memory management in Flink.
If I have 64 GB in each task manager, should I be aggressive in setting
taskmanager.heap.mb to a very large value, let's say 56 GB? (563200 was a
mistake; that's absurdly high.)
I am actually trying to come up with an understanding of how to
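For context, the task manager heap is set in flink-conf.yaml; a hedged example that leaves headroom for the OS, network buffers, and off-heap allocations rather than claiming nearly the whole 64 GB (the exact split depends on the setup and is only a starting point, not a recommendation):

```yaml
# flink-conf.yaml: with 64 GB per machine, something in the 40-48 GB range
# is a more typical starting point than 56 GB; the remainder is left for
# the OS page cache, network buffers, and off-heap memory.
taskmanager.heap.mb: 40960
```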