Hi folks,
I have a streaming job that consumes from a Kafka topic. The topic is
pretty active, so the local-mode single worker is obviously not able to keep
up with the fire-hose. I expect the job to skip records and continue on.
However, I'm getting an exception from the LegacyFetcher which kil
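For context, here is a minimal sketch of the kind of job being described. The connector class (FlinkKafkaConsumer082), topic name, and Kafka properties are assumptions for illustration, not details from the original report:

    import java.util.Properties;

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    public class KafkaJobSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder Kafka settings; adjust to the actual cluster.
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "localhost:9092");
            props.setProperty("zookeeper.connect", "localhost:2181");
            props.setProperty("group.id", "my-consumer-group");

            // Consume the topic and simply print the records.
            env.addSource(new FlinkKafkaConsumer082<>("my-topic", new SimpleStringSchema(), props))
               .print();

            env.execute("Kafka consumer sketch");
        }
    }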
Actually, the thing with the JSON plans is slightly different now:
There are two types of plans:
1) The plan that describes the user program originally. That is what you
get from env.getExecutionPlan().
In the Batch API, this contains the result of the optimizer; in the streaming
API, the stream graph.
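As a small illustration (the pipeline itself is just a placeholder), the plan from env.getExecutionPlan() can be printed like this:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ExecutionPlanSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder pipeline so that there is something to plan.
            env.fromElements(1, 2, 3).print();

            // JSON description of the user program; for the streaming API
            // this is the stream graph.
            System.out.println(env.getExecutionPlan());
        }
    }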
Hi,
don’t worry, it’s good to get questions about this stuff. :D
You are right: if Flink is not clever about the state, your JVMs can run out of
memory and blow up. We are currently working on several things that should make
this more robust:
1) Put Flink Windows on Flink’s partitioned state abst
This is now part of the master branch and should be part of the SNAPSHOT builds
soon. The HA docs have a short paragraph on how to configure it.
– Ufuk
> On 21 Dec 2015, at 12:10, Ufuk Celebi wrote:
>
>
>> On 17 Dec 2015, at 19:36, Cory Monty wrote:
>>
>> Hey Ufuk,
>>
>> We can try buildin
Hi Aljoscha,
Just thinking on the EventTimeTrigger example you provided, and I'm going
to apologise in advance for taking more of your time, but I'm thinking
that should I go down that route with any long allowedLateness, we'll run into
issues with memory use, unless Flink is smart enough, configurab
Hi Aljoscha,
Thanks for the info!
Andy
On Fri, 15 Jan 2016 at 10:12 Aljoscha Krettek wrote:
> Hi,
> I imagine you are talking about CountTrigger, DeltaTrigger, and
> Continuous*Trigger. For these we never purge. They are a leftover artifact
> from an earlier approach to implementing windowing s
Thanks Till! I'll keep an eye on the JIRA issue. Many thanks for the
prompt reply.
Cheers,
David
On Fri, Jan 15, 2016 at 4:16 AM, Till Rohrmann
wrote:
> Hi David,
>
> this is definitely an error on our side which might be caused by the
> latest changes to the project structure (removing fli
Hi,
I think the reason why you are seeing output across all parallel machines is
that the sink itself has parallelism=10 again. So even though there is only one
parallel instance of the All-Window operator, the results of this get shipped
(round-robin) to the 10 parallel instances of the file si
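A minimal sketch of one way to keep that output in a single place, assuming it is acceptable to pin the sink's parallelism to 1 (the sequence source, window size, and output path are placeholders):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class AllWindowSinkSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setParallelism(10);

            env.generateSequence(0, 1000)
                // The all-window operator itself always runs with parallelism 1.
                .timeWindowAll(Time.seconds(1))
                .reduce((a, b) -> a + b)
                // Without this, the sink runs with the job parallelism (10), so the
                // single window operator's output is shipped round-robin to 10 sink
                // instances; pinning it to 1 keeps everything in one place.
                .writeAsText("/tmp/all-window-output")
                .setParallelism(1);

            env.execute("All-window sink sketch");
        }
    }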
> On 14 Jan 2016, at 22:00, kovas boguta wrote:
>
> On Thu, Jan 14, 2016 at 5:52 AM, Ufuk Celebi wrote:
> Hey Kovas
>
> sorry for the long delay.
>
> It was worth the wait! Thanks for the detailed response.
>
> > Ideally, I could force certain ResultPartitions to only be manually
> > relea
Hi,
Thanks for the response.
1) regarding the JIRA issue related to the .global and .forward functions – I
believe it is a good idea to remove them, as they are confusing. Actually, they
are totally missing from the documentation webpage
https://ci.apache.org/projects/flink/flink-docs-master/apis
You can set Flink’s log level to DEBUG in the log4j.properties file.
Furthermore, you can activate logging of Akka’s life cycle events via
akka.log.lifecycle.events: true, which you specify in flink-conf.yaml.
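For reference, a sketch of those two settings (paths assume the standard conf/ directory of a Flink distribution):

    # conf/log4j.properties: raise the log level to DEBUG
    log4j.rootLogger=DEBUG, file

    # conf/flink-conf.yaml: log Akka's life cycle events
    akka.log.lifecycle.events: true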
Cheers,
Till
On Fri, Jan 15, 2016 at 12:41 PM, Frederick Ayala
wrote:
> Hi Stephan,
Hi Stephan,
Other jobs run fine, but this one is not working on the machine that I was
using previously (16GB RAM) [1]
Is there a way to debug the Akka messages to understand what's happening
between the JobManager and the Client? I can add logging and send it.
Thanks!
Fred
[1] The failure star
Hi!
Do you get this problem with other Jobs as well?
The logs suggest that the JobManager receives the job and starts tasks, but
the Client thinks it lost connection.
Greetings,
Stephan
On Fri, Jan 15, 2016 at 10:31 AM, Frederick Ayala
wrote:
> Hi Robert,
>
> Thanks for your reply.
>
> I set
Hi David,
this is definitely an error on our side which might be caused by the latest
changes to the project structure (removing flink-staging directory). I’ve
filed a JIRA issue https://issues.apache.org/jira/browse/FLINK-3241. It
should be fixed soon.
In the meantime it should work if you build
Hi,
I imagine you are talking about CountTrigger, DeltaTrigger, and
Continuous*Trigger. For these we never purge. They are a leftover artifact from
an earlier approach to implementing windowing strategies that was inspired by
IBM InfoSphere streams. Here, all triggers are essentially accumulating
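As a hedged aside (not something stated in this thread), one way to get purging behaviour with these accumulating triggers is to wrap them in PurgingTrigger; the keyed stream, window size, and count below are placeholders:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.time.Time;
    import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
    import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

    public class PurgingTriggerSketch {
        // Assumes a stream of (word, count) pairs; the input itself is a placeholder.
        public static DataStream<Tuple2<String, Integer>> wordCounts(DataStream<Tuple2<String, Integer>> input) {
            return input
                .keyBy(0)
                .timeWindow(Time.minutes(1))
                // CountTrigger alone fires but keeps the window contents around;
                // wrapping it in PurgingTrigger also purges the state on firing.
                .trigger(PurgingTrigger.of(CountTrigger.of(100)))
                .sum(1);
        }
    }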
Hi Robert,
Thanks for your reply.
I set the akka.ask.timeout to 10k seconds just to see what happened. I
tried different values but none did the trick.
My problem was solved by using a machine with more RAM. However, it was not
clear that the memory was the problem :)
Attached are the log and th
Hi Radu,
I'm sorry for the delayed response.
I'm not sure what the purpose of DataStream.global() actually is. I've
opened a JIRA to document or remove it:
https://issues.apache.org/jira/browse/FLINK-3240.
For getting the final metrics, you can just call ".timeWindowAll()",
without a ".global()"
Hi Frederick,
sorry for the delayed response.
I have no idea what the problem could be.
Has the exception been thrown from the env.execute() call?
Why did you set the akka.ask.timeout to 10k seconds?
On Wed, Jan 13, 2016 at 2:13 AM, Frederick Ayala
wrote:
> Hi,
>
> I am having an error whil
Thanks Aljoscha, that's very enlightening.
Can you please also explain what the default behaviour is? I.e. if I use
one of the accumulating inbuilt triggers, when does the state get purged?
(With your info I can now probably work things out, but you may give more
insight :)
Also, are there plans