ace isolation and security policies on
> these. How does this work if the Flink cluster is standalone on AWS?
>
>
> Best Regards
> CVP
>
> On Fri, Mar 24, 2017 at 8:49 AM, Philippe Caparroy
> <philippe.capar...@orange.fr> wrote:
> Hi,
>
> If I
Hi,
If I can give my 2 cents:
One simple solution to your problem is using Weave (https://www.weave.works/), a
Docker network plugin.
We've been working for more than a year with a dockerized
(Flink + ZooKeeper + YARN + Spark + Kafka + Hadoop + Elasticsearch) cluster using Weave.
Design your docker container
I think there is an error in the code snippet describing the ProcessFunction
timeout example:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html
@Override
public void onTimer(long timestamp, OnTimerContext ctx,
        Collector<Tuple2<String, Long>> out)
        throws Exception {
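For context, the pattern that documentation example implements — emit a key's state only when no newer element arrived during the timeout window — can be sketched without any Flink dependency. All names below are simplified stand-ins for illustration, not the actual Flink API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the ProcessFunction timeout pattern (illustration only, no Flink).
// A timer "fires" only if it is the latest one, i.e. the key saw no newer
// element after the timer was registered.
public class TimeoutSketch {
    static final long TIMEOUT = 60_000L;
    final Map<String, Long> lastModified = new HashMap<>();

    void onElement(String key, long timestamp) {
        lastModified.put(key, timestamp);
        // in Flink: ctx.timerService().registerEventTimeTimer(timestamp + TIMEOUT);
    }

    // plays the role of ProcessFunction.onTimer(...)
    boolean onTimer(String key, long timerTimestamp) {
        // fire only if this timer corresponds to the key's latest update
        return timerTimestamp == lastModified.get(key) + TIMEOUT;
    }

    public static void main(String[] args) {
        TimeoutSketch s = new TimeoutSketch();
        s.onElement("k", 1_000L);  // schedules a timer for 61_000
        s.onElement("k", 5_000L);  // newer element; latest timer is now 65_000
        System.out.println(s.onTimer("k", 61_000L)); // false: outdated timer
        System.out.println(s.onTimer("k", 65_000L)); // true: latest timer fires
    }
}
```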
ny case, we could file an issue
> and allow other partitioners for keyed streams.
>
> Best,
> Max
>
>
> On Thu, Aug 11, 2016 at 10:53 PM, Philippe Caparroy
> wrote:
>> Hi there,
>>
>> It seems not possible to use some custom partitioner in the contex
Hi there,
It seems it is not possible to use a custom partitioner in the context of a
KeyedStream without modifying the KeyedStream class.
protected DataStream<T> setConnectionType(StreamPartitioner<T> partitioner) {
    throw new UnsupportedOperationException("Cannot override partitioning for KeyedStream.");
}
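The underlying contract is small: a partitioner maps a key to a channel index. A minimal stdlib sketch of that contract (hypothetical names, not Flink's classes — in Flink this is what DataStream.partitionCustom applies before records are shipped) might look like:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustration of the custom-partitioner contract (not Flink's API):
// each record is routed to a channel chosen from its key.
public class CustomPartitionerSketch {

    // mirrors the shape of Flink's Partitioner<K> interface
    interface Partitioner<K> {
        int partition(K key, int numPartitions);
    }

    static <T, K> List<List<T>> partition(List<T> records,
                                          Function<T, K> keySelector,
                                          Partitioner<K> partitioner,
                                          int numPartitions) {
        List<List<T>> channels = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) channels.add(new ArrayList<>());
        for (T record : records) {
            int p = partitioner.partition(keySelector.apply(record), numPartitions);
            channels.get(p).add(record);
        }
        return channels;
    }

    public static void main(String[] args) {
        // route words by the hash of their first character
        Partitioner<String> byFirstChar = (key, n) -> Math.abs(key.charAt(0)) % n;
        List<List<String>> out = partition(
                List.of("apple", "avocado", "banana"), s -> s, byFirstChar, 2);
        System.out.println(out); // [[banana], [apple, avocado]]
    }
}
```

The point of the thread stands: on a KeyedStream the partitioning is fixed by the key hash, so a scheme like this can only be applied on the plain DataStream before keying.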
You should have a look at this project: https://github.com/addthis/stream-lib
You can use it within Flink, storing intermediate values in a local state.
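To give a flavor of what such a summary structure does — stream-lib ships production-grade versions (HyperLogLog, CountMinSketch, etc.) — here is a deliberately simplified linear-counting sketch in plain Java. This is an illustration of the idea only, not stream-lib's API; in Flink the sketch object would be the intermediate value held in local state:

```java
import java.util.BitSet;

// Simplified linear-counting sketch (illustration only, NOT stream-lib's API).
// Approximate distinct count in O(m) bits: hash each element into a bitmap,
// then estimate from the fraction of bits still unset.
public class LinearCounter {
    private final int m;        // bitmap size in bits
    private final BitSet bits;

    public LinearCounter(int m) {
        this.m = m;
        this.bits = new BitSet(m);
    }

    public void offer(String item) {
        int h = item.hashCode() * 0x9E3779B9; // cheap extra mixing of the hash
        bits.set(Math.floorMod(h, m));
    }

    public long cardinality() {
        int unset = m - bits.cardinality();
        if (unset == 0) return m; // bitmap saturated; estimate is capped
        // linear-counting estimator: n ~= -m * ln(V), V = fraction of zero bits
        return Math.round(-m * Math.log((double) unset / m));
    }

    public static void main(String[] args) {
        LinearCounter c = new LinearCounter(1 << 16);
        for (int i = 0; i < 1000; i++) {
            c.offer("user-" + i);
            c.offer("user-" + i); // duplicates must not inflate the estimate
        }
        System.out.println(c.cardinality()); // an estimate close to 1000
    }
}
```

Because the state is a fixed-size bitmap, it checkpoints cheaply and never grows with the number of distinct elements, which is the whole appeal of these structures inside a streaming operator.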
> On 9 Jun 2016, at 15:29, Yukun Guo wrote:
>
> Thank you very much for the detailed answer. Now I understand a DataStream
> can be re
Just transform the list into a DataStream; a DataStream can be finite.
One solution, in the context of a streaming environment, is to use Kafka or any
other distributed broker; Flink ships with a KafkaSource.
1)Create a Kafka Topic dedicated to your list of key/values. Inject your va