@Aljoscha Thank you for the pointer to ProcessFunction. That looks like a
better approach with less code and other overhead.
After restoring, the job is both reading new elements and emitting some, but
nowhere near as many as expected. I’ll investigate further after switching to
ProcessFunction.
Hi Ufuk,
Realized I sent an incomplete mail. Continuing my previous reply here.
Thanks for hint on the region. The bucket is in eu-west-1 region.
The Flink configuration for checkpoints and savepoints is as below:
state.backend.fs.checkpointdir: s3://flink-bucket/flink-checkpoints
state.sav
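For readers hitting the same question: a complete S3 checkpoint/savepoint section of flink-conf.yaml typically looks like the sketch below. The bucket name and checkpoint path are the hypothetical ones from this mail; `state.savepoints.dir` is assumed to be the savepoint key (it is in Flink 1.2+), so verify against the docs for your version.

```yaml
# Hypothetical flink-conf.yaml fragment; bucket and paths are placeholders.
state.backend: filesystem
state.backend.fs.checkpointdir: s3://flink-bucket/flink-checkpoints
state.savepoints.dir: s3://flink-bucket/flink-savepoints
```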
Hi Ufuk,
Thanks for replying.
The savepoint path is correct and it exists.
FYI, I used the monitoring REST APIs to cancel the job with savepoint.
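For reference, cancel-with-savepoint in the Flink 1.2-era monitoring REST API is a plain GET. A sketch with a hypothetical job id, JobManager host, and target directory (all placeholders, substitute your own):

```shell
# Hypothetical values; substitute your own job id, host, and savepoint target.
JOB_ID="ca7d49756c1ee229f8f1d17e0f5bcb28"
TARGET_DIR="flink-savepoints"

# Trigger cancel-with-savepoint via the monitoring REST API (Flink 1.2):
URL="http://localhost:8081/jobs/${JOB_ID}/cancel-with-savepoint/target-directory/${TARGET_DIR}"
echo "$URL"
# curl "$URL"   # returns a request id that can be polled under
#               # .../cancel-with-savepoint/in-progress/<request-id>
```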
Abhinav Bajaj
Lead Engineer
HERE Predictive Analytics
Office: +12062092767
Mobile: +17083299516
HERE Seattle
Thanks for your response.
When using time windows, doesn't Flink know the load per window? I have
observed this behavior with windows as well.
Hi,
Before doing it myself I thought it would be better to ask.
We need to consume from Kafka 0.8 and produce to Kafka 0.10 in a Flink app.
I guess there will be class and package name conflicts for a lot of
dependencies of both connectors.
The obvious solution is to make a “shaded” version o
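For anyone attempting this: the usual shading approach is to build a small module that relocates one connector's Kafka dependency so both clients can live in one fat jar. A sketch of the relevant maven-shade-plugin configuration, with a hypothetical relocation prefix (`shaded08`) and the exact patterns left for you to verify against your dependency tree:

```xml
<!-- Hypothetical sketch: relocate the Kafka 0.8 client used by the 0.8
     connector so it cannot clash with the Kafka 0.10 client. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>kafka</pattern>
            <shadedPattern>shaded08.kafka</shadedPattern>
          </relocation>
          <relocation>
            <pattern>org.apache.kafka</pattern>
            <shadedPattern>shaded08.org.apache.kafka</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```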
Hi,
using keyBy, Flink ensures that every set of records with the same key is
sent to the same operator; otherwise it would not be possible to process
them as a whole. It depends on your use case whether it is also OK that
another operator processes parts of this set of records. You can
implement you o
What do you get from the sys out printing in CoFlatMapFunImpl? Could it be that
all the elements are being processed before the control input element arrives
and that they are therefore dropped?
> On 17 Mar 2017, at 09:14, Tarandeep Singh wrote:
>
> Hi Gordon,
>
> When I use getInput (input c
Hi,
Thanks! The proposal sounds very good to us too.
Bruno
On Sun, 19 Mar 2017 at 10:57 Florian König
wrote:
> Thanks Gordon for the detailed explanation! That makes sense and explains
> the expected behaviour.
>
> The JIRA for the new metric also sounds very good. Can’t wait to have this
> in
I am using a simple streaming job where I use keyBy on the stream to process
events per key. The keys may vary in number (few keys to thousands). I have
noticed a behavior of Flink and I need clarification on that. When we use
keyBy on the stream, Flink assigns keys to parallel operators so each
op
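The behavior asked about here comes from hash partitioning: keyBy routes each key to a subtask based on a hash of the key, independent of load. A dependency-free sketch (plain `hashCode` modulo parallelism, an approximation of Flink's real key-group scheme, with made-up key names) shows why a hot key can overload one subtask:

```java
import java.util.Map;
import java.util.TreeMap;

public class KeySkewDemo {
    // Approximation of hash partitioning: key -> operator (subtask) index.
    // Flink actually routes via key groups (murmur hash modulo maxParallelism),
    // but simple modulo illustrates the same effect.
    static int operatorFor(String key, int parallelism) {
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        // Hypothetical skewed stream: "user-1" dominates the input.
        String[] events = {"user-1", "user-1", "user-1", "user-2", "user-3", "user-1"};
        Map<Integer, Integer> load = new TreeMap<>();
        for (String k : events) {
            load.merge(operatorFor(k, parallelism), 1, Integer::sum);
        }
        // All "user-1" records land on the same subtask, regardless of how
        // many subtasks exist, so per-subtask load follows the key skew.
        System.out.println(load);
    }
}
```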
As a general remark, I think the ProcessFunction
(https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/process_function.html)
could be better suited for implementing such a use case.
I did run tests on Flink 1.2 and master with a simple processing-time windowing
job. After per
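The ProcessFunction mentioned above combines a per-element callback with timers. A dependency-free sketch of that pattern (class and method names here are hypothetical; Flink's real API is `org.apache.flink.streaming.api.functions.ProcessFunction` with `processElement` and `onTimer`, plus keyed state):

```java
import java.util.ArrayList;
import java.util.List;

// Dependency-free sketch of the ProcessFunction idea: count elements per key
// and emit a result once the key has been idle for 60 seconds. In real Flink,
// the count would live in keyed state and onTimer would be fired by a
// registered event-time or processing-time timer.
public class ProcessFunctionSketch {
    static class CountWithTimeout {
        long count;
        long lastSeen;
        List<String> out = new ArrayList<>();

        void processElement(String value, long timestamp) {
            count++;
            lastSeen = timestamp;
            // In Flink you would also register a timer for lastSeen + 60_000.
        }

        // Called when a previously registered timer fires.
        void onTimer(long timerTimestamp) {
            if (timerTimestamp >= lastSeen + 60_000) {
                out.add("saw " + count + " elements, idle since " + lastSeen);
            }
        }
    }

    public static void main(String[] args) {
        CountWithTimeout fn = new CountWithTimeout();
        fn.processElement("a", 1_000);
        fn.processElement("b", 2_000);
        fn.onTimer(62_000);  // fires 60s after the last element, so it emits
        System.out.println(fn.out);
    }
}
```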
Hello users,
Like Maciek, I'm currently trying to make Apache Zeppelin 0.7 work with
Flink. I have two versions of Flink available (1.1.2 and 1.2.0). Each one
is running in high-availability mode.
When running jobs from Zeppelin in Flink local mode, everything works fine.
But when trying to subm
Hi Stu,
thanks for reporting back.
I tried to reproduce the "method flatten() not found" error but did not
succeed.
It would be great if you could open a JIRA issue and describe how to
reproduce the problem.
Thank you,
Fabian
2017-03-17 16:42 GMT+01:00 Stu Smith :
> Thank you! Just in case som
Hey Abhinav,
the Exception is thrown if the S3 object does not exist.
Can you double check that it actually does exist (no typos, etc.)?
Could this be related to accessing a different region than expected?
– Ufuk
On Mon, Mar 20, 2017 at 9:38 AM, Timo Walther wrote:
> Hi Abhinav,
>
> can you
Here is the JIRA: https://issues.apache.org/jira/browse/FLINK-6125
On Mon, Mar 20, 2017 at 10:27 AM, Robert Metzger
wrote:
> Hi Craig,
>
> I was able to reproduce the issue with maven 3.3 in Flink 1.2. I'll look
> into it.
>
> On Fri, Mar 17, 2017 at 11:56 PM, Foster, Craig
> wrote:
>
>> Ping.
Hi Craig,
I was able to reproduce the issue with maven 3.3 in Flink 1.2. I'll look
into it.
On Fri, Mar 17, 2017 at 11:56 PM, Foster, Craig wrote:
> Ping. So I’ve built with 3.0.5 and it does give proper shading. So it does
> get me yet another workaround where my only recourse is to use a max
Hi Abhinav,
can you check if you have configured your AWS setup correctly? The S3
configuration might be missing.
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#missing-s3-filesystem-configuration
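The linked page amounts to giving Hadoop's S3 filesystem its credentials and, when the bucket is not in the default region, its endpoint. A hypothetical core-site.xml fragment for an eu-west-1 bucket (the property names are Hadoop s3a's; the values are placeholders to adapt):

```xml
<!-- Hypothetical core-site.xml fragment; adjust values to your setup. -->
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
  <property>
    <!-- Needed when the bucket is outside the default (us-east-1) region. -->
    <name>fs.s3a.endpoint</name>
    <value>s3.eu-west-1.amazonaws.com</value>
  </property>
</configuration>
```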
Regards,
Timo
Am 18/03/17 um 01:33 schrieb Bajaj, Abhinav:
Hi,