What is the recommended practice for using a dedicated ExecutionContext
inside Flink code?
We are making some external network calls using Futures. Currently all of
them are running on the global execution context (import
scala.concurrent.ExecutionContext.Implicits.global).
Thanks
-Soumya
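A common pattern (an assumption on my part, not guidance given in this thread) is to move blocking network calls off the global ForkJoin pool onto a dedicated, bounded executor, so slow I/O cannot starve CPU-bound work. A minimal sketch; the pool size of 4 and the `fetchStatus` call are illustrative placeholders:

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

object DedicatedEcExample {
  // Dedicated, bounded pool for blocking network I/O, kept separate from
  // scala.concurrent.ExecutionContext.Implicits.global.
  // Pool size 4 is an arbitrary example value.
  val networkEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

  // Stand-in for an external network call (hypothetical).
  def fetchStatus(): Future[Int] = Future {
    // a real implementation would perform the blocking HTTP/RPC call here
    200
  }(networkEc)

  def main(args: Array[String]): Unit =
    println(Await.result(fetchStatus(), 5.seconds))
}
```

Passing the ExecutionContext explicitly (rather than importing it as an implicit) makes it obvious at each call site which pool a Future runs on.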
mblingEventTimeWindows.
>
> I also open a Jira issue for the bug so that we can keep track of it:
> https://issues.apache.org/jira/browse/FLINK-4028
>
> Cheers,
> Aljoscha
>
> On Tue, 7 Jun 2016 at 14:57 Soumya Simanta
> wrote:
>
>> The problem is why is the wind
> On Jun 7, 2016, at 3:47 PM, Chesnay Schepler wrote:
>
> could you state a specific problem?
>
>> On 07.06.2016 06:40, Soumya Simanta wrote:
>> I've a simple program which takes some inputs from a command line (Socket
>> stream) and then aggregates bas
I've a simple program which takes some inputs from a command line (Socket
stream) and then aggregates based on the key.
When running this program on my local machine I see some output that is
counter intuitive to my understanding of windows in Flink.
The start time of the Window is around the tim
Is there a standard Flink S3 source yet?
Thanks
-Soumya
We are about to deploy a Flink job on YARN in production. Given that it is
a long running process we want to have alerting and monitoring mechanisms
in place.
Any existing solutions or suggestions to implement a new one would be
appreciated.
Thanks!
bles-new-streaming-applications-part-1/
> [3] http://flink.apache.org/news/2015/12/04/Introducing-windows.html
>
> 2016-02-03 5:39 GMT+01:00 Soumya Simanta:
>
>> I'm getting started with Flink and had a very fundamental doubt.
>>
>> 1) Where does Flink capture/store
I'm getting started with Flink and had a very fundamental doubt.
1) Where does Flink capture/store intermediate state?
For example, two streams of data have a common key. The streams can lag in
time (seconds, hours, or even days). My understanding is that Flink somehow
needs to store the data from