Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Yu Yang
Thank you Fabian! We will try the approach that you suggested. On Thu, Jun 6, 2019 at 1:03 AM Fabian Hueske wrote: > Hi Yu, > > When you register a DataStream as a Table, you can create a new attribute > that contains the event timestamp of the DataStream records. > For that, you would need to as…

NotSerializableException: DropwizardHistogramWrapper inside AggregateFunction

2019-06-06 Thread Vijay Balakrishnan
Hi, I have a class defined: public class MGroupingWindowAggregate implements AggregateFunction… { > private final Map keyHistMap = new TreeMap<>(); > } > In the constructor, I initialize it. > public MGroupingWindowAggregate() { > Histogram minHist = new Histogram(new > SlidingTimeWindowReservo…
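The usual fix for a `NotSerializableException` on a metrics field inside a user function is to mark the field `transient` and create it lazily (in Flink, typically in `open()` of a rich function). Below is a minimal plain-Java sketch of the pattern, with a stand-in `Histogram` class instead of the real Dropwizard/Flink types; all names beyond those quoted in the question are invented for illustration:

```java
import java.io.*;

// Stand-in for a non-serializable metrics object such as a Dropwizard histogram wrapper.
class Histogram {
    private long count = 0;
    void update(long v) { count++; }
    long getCount() { return count; }
}

// The function itself is serializable; the histogram is transient and rebuilt on demand.
class MGroupingWindowAggregate implements Serializable {
    private transient Histogram minHist;   // not shipped with the serialized function

    private Histogram hist() {
        if (minHist == null) {             // lazily re-created after deserialization
            minHist = new Histogram();
        }
        return minHist;
    }

    void add(long value) { hist().update(value); }
    long count() { return hist().getCount(); }
}

class TransientFieldDemo {
    public static void main(String[] args) throws Exception {
        MGroupingWindowAggregate agg = new MGroupingWindowAggregate();
        agg.add(42);

        // Round-trip through Java serialization, as Flink does when shipping the function.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(agg);
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        MGroupingWindowAggregate copy = (MGroupingWindowAggregate) in.readObject();

        copy.add(7);                       // histogram is rebuilt lazily on the new instance
        System.out.println(copy.count()); // prints 1: transient state was not serialized
    }
}
```

Note that the transient field's contents are lost on serialization, so this pattern suits metrics that may legitimately restart per instance; state that must survive failures belongs in Flink's managed state instead.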

Flink 1.7.1 flink-s3-fs-hadoop-1.7.1 doesn't delete older chk- directories

2019-06-06 Thread anaray
Hi, I am using 1.7.1, we store checkpoints in Ceph, and we use flink-s3-fs-hadoop-1.7.1 to connect to Ceph. I have only 1 checkpoint retained. The issue I see is that previous/old chk- directories are still around. I verified that those older ones don't contain any checkpoint data, but the directories…

IPv6 supported?

2019-06-06 Thread Siew Wai Yow
Hi guys, may I know whether Flink supports IPv6? Thanks, Yow

Re: Does the Flink Kafka connector have a max_pending_offsets concept?

2019-06-06 Thread xwang355
Thanks Fabian. This is really helpful. -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

RE: Change sink topology

2019-06-06 Thread Smirnov Sergey Vladimirovich
Great, thanks! From: Fabian Hueske [mailto:fhue...@gmail.com] Sent: Thursday, June 6, 2019 3:07 PM To: Smirnov Sergey Vladimirovich Cc: user@flink.apache.org Subject: Re: Change sink topology Hi Sergey, I would not consider this to be a topology change (the sink operator would still be a Kafka…

Re: Change sink topology

2019-06-06 Thread Fabian Hueske
Hi Sergey, I would not consider this to be a topology change (the sink operator would still be a Kafka producer). It seems that dynamic topic selection is possible with a KeyedSerializationSchema (see "Advanced Serialization Schema" [1]). Best, Fabian [1] https://ci.apache.org/projects/flink/f
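The routing idea behind per-record topic selection can be sketched without any Flink dependency: a pure function maps a message property to a target topic, with `null` meaning "use the producer's default topic". The `Message` type and topic names below are invented for illustration; in Flink this logic would live inside a serialization schema that the Kafka producer consults per record:

```java
// Hypothetical message type; only the property used for routing matters here.
class Message {
    final String property;
    final String payload;
    Message(String property, String payload) { this.property = property; this.payload = payload; }
}

class TopicRouter {
    // chosen_topic = function(message.property), as in the original question.
    static String targetTopic(Message m) {
        switch (m.property) {
            case "alert":  return "alerts-topic";
            case "metric": return "metrics-topic";
            default:       return null; // null = fall back to the producer's default topic
        }
    }

    public static void main(String[] args) {
        System.out.println(targetTopic(new Message("alert", "disk full"))); // prints alerts-topic
        System.out.println(targetTopic(new Message("other", "x")));         // prints null
    }
}
```

The key design point from the reply: because routing happens per record inside the same sink operator, no job restart or topology change is needed when a new topic is chosen.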

Change sink topology

2019-06-06 Thread Smirnov Sergey Vladimirovich
Hi Flink, I wonder: is it possible to dynamically (while the job is running) change the sink topology by adding a new sink on the fly? Say we have an input stream, and by analyzing a message property we decide to put the message into some Kafka topic, i.e. chosen_topic = function(message.property). Simpli…

Re: Does the Flink Kafka connector have a max_pending_offsets concept?

2019-06-06 Thread Fabian Hueske
Hi Ben, Flink correctly maintains the offsets of all partitions that are read by a Kafka consumer. A checkpoint is only complete when all functions successfully checkpoint their state. For a Kafka consumer, this state is the current reading offset. In case of a failure, the offsets and the state of a…
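The semantics described here can be illustrated without Flink or Kafka: the consumer's only state is its reading offset, a checkpoint snapshots that offset, and recovery resets the offset to the last completed checkpoint so subsequent records are replayed. A toy single-partition simulation, with all names invented for illustration:

```java
import java.util.*;

class CheckpointedConsumer {
    private final List<String> partition;   // stand-in for a Kafka partition log
    private int offset = 0;                 // the consumer's only state
    private int checkpointedOffset = 0;     // offset as of the last completed checkpoint

    CheckpointedConsumer(List<String> partition) { this.partition = partition; }

    List<String> poll(int n) {
        List<String> out = new ArrayList<>();
        while (out.size() < n && offset < partition.size()) {
            out.add(partition.get(offset++));
        }
        return out;
    }

    void checkpoint() { checkpointedOffset = offset; }  // snapshot the offset
    void recover()    { offset = checkpointedOffset; }  // reset and replay from there

    public static void main(String[] args) {
        CheckpointedConsumer c = new CheckpointedConsumer(List.of("a", "b", "c", "d"));
        c.poll(2);          // reads a, b
        c.checkpoint();     // offset 2 is now durable
        c.poll(1);          // reads c, but then a failure occurs...
        c.recover();        // ...and the offset rolls back to 2
        System.out.println(c.poll(2)); // prints [c, d]: c is replayed after recovery
    }
}
```

This is why there is no separate pending-offsets bound to configure: the checkpointed offset alone determines where reading resumes, and anything read after it is simply reprocessed.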

Re: Weird behavior with CoFlatMapFunction

2019-06-06 Thread Fabian Hueske
Hi, There are a few things to point out about your example: 1. The CoFlatMapFunction is probably executed in parallel. The configuration is only applied to one of the parallel function instances. You probably want to broadcast the configuration changes to all function instances. Have a look a…
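Point 1 can be shown with a toy model: if a configuration update is routed to only one of several parallel instances of a function, the others keep the old value, while broadcasting applies it to all of them. This is plain Java with invented names, not Flink API; in Flink, the broadcast stream pattern solves exactly this:

```java
import java.util.*;

// One parallel instance of a filtering function with a configurable threshold.
class ParallelFilter {
    int threshold;
    ParallelFilter(int t) { threshold = t; }
    boolean accept(int v) { return v >= threshold; }
}

class BroadcastDemo {
    public static void main(String[] args) {
        // Four parallel instances of the same function, all starting with threshold 10.
        List<ParallelFilter> instances = new ArrayList<>();
        for (int i = 0; i < 4; i++) instances.add(new ParallelFilter(10));

        // Non-broadcast update: the config record reaches only one instance.
        instances.get(0).threshold = 50;
        System.out.println(instances.get(3).accept(20)); // prints true: instance 3 still uses 10

        // Broadcast update: every instance receives the new configuration.
        for (ParallelFilter f : instances) f.threshold = 50;
        System.out.println(instances.get(3).accept(20)); // prints false: all instances use 50
    }
}
```

The inconsistency in the first print is the "weird behavior" symptom: records that happen to be processed by the updated instance obey the new config while all others do not.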

Weird behavior with CoFlatMapFunction

2019-06-06 Thread Andy Hoang
Hi guys, I want to merge 2 different streams, one a config stream and the other a JSON value stream, to check the values against that config. It seems like CoFlatMapFunction should be used. Here is my sample: val filterStream: ConnectedStreams[ControlEvent, JsValue] = (specificControlStream).connect(eventS…

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Fabian Hueske
Hi Yu, When you register a DataStream as a Table, you can create a new attribute that contains the event timestamp of the DataStream records. For that, you would need to assign timestamps and generate watermarks before registering the stream: FlinkKafkaConsumer kafkaConsumer = new FlinkKa…
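A core piece of "assign timestamps and generate watermarks" is the bounded out-of-orderness rule: the watermark trails the maximum event timestamp seen so far by a fixed allowed lateness, so late events within that bound do not move event time backwards. A minimal plain-Java sketch of that arithmetic (not the Flink API itself; class and method names are invented):

```java
class BoundedOutOfOrdernessWatermarks {
    private final long maxOutOfOrderness;         // allowed lateness in milliseconds
    private long maxTimestamp = Long.MIN_VALUE;   // highest event timestamp seen so far

    BoundedOutOfOrdernessWatermarks(long maxOutOfOrderness) {
        this.maxOutOfOrderness = maxOutOfOrderness;
    }

    void onEvent(long eventTimestamp) {
        maxTimestamp = Math.max(maxTimestamp, eventTimestamp);
    }

    // Watermark = max event time seen minus the allowed lateness.
    long currentWatermark() {
        return maxTimestamp - maxOutOfOrderness;
    }

    public static void main(String[] args) {
        BoundedOutOfOrdernessWatermarks wm = new BoundedOutOfOrdernessWatermarks(5_000);
        wm.onEvent(100_000);
        wm.onEvent(97_000);                        // late event does not move the watermark back
        System.out.println(wm.currentWatermark()); // prints 95000
    }
}
```

Once timestamps and watermarks are assigned this way on the DataStream, the rowtime attribute registered with the Table API tracks them, which is what lets SQL windows fire correctly over the UDF-visible timestamp field.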