Re: How to get Kafka record's timestamp using Flink's KeyedDeserializationSchema?

2018-11-16 Thread Flink Developer
Thanks. Is there an alternative way to obtain the Kafka record timestamps using the FlinkKafkaConsumer?

Re: Flink Serialization and case class fields limit

2018-11-16 Thread Fabian Hueske
Hi Andrea, I wrote the post 2.5 years ago. Sorry, I don't think that I kept the code somewhere, but the mechanics in Flink should still be the same. Best, Fabian

Re: Kinesis Shards and Parallelism

2018-11-16 Thread shkob1
Actually, I was looking at the task manager level. I did have more slots than shards, so it does make sense that I had an idle task manager while the other task managers split the shards between their slots. Thanks!

Re: Flink Serialization and case class fields limit

2018-11-16 Thread Andrea Sella
Hi Andrey, My bad, I forgot to say that I am using Scala 2.11 (that's why I asked about the limitation) and Flink 1.5.5. If I recall correctly, CaseClassSerializer and CaseClassTypeInfo don't rely on the unapply and tupled functions, so I'd say that Flink doesn't have this kind of limitation with

Re: BucketingSink vs StreamingFileSink

2018-11-16 Thread Andrey Zagrebin
Hi, StreamingFileSink is supposed to subsume BucketingSink, which will be deprecated. StreamingFileSink fixes some issues of BucketingSink, especially with AWS S3, and adds more flexibility with defining the rolling policy. StreamingFileSink does not support older Hadoop versions at the moment,
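For readers comparing the two sinks, a minimal row-format configuration sketch, based on the Flink 1.6 API; the output path and rolling thresholds below are placeholder values, not recommendations:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

// One JSON string per line, analogous to the BucketingSink setup in the thread.
StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(new Path("hdfs:///output"), new SimpleStringEncoder<String>("UTF-8"))
    .withRollingPolicy(
        DefaultRollingPolicy.create()
            .withRolloverInterval(15 * 60 * 1000L)   // roll a part file every 15 minutes
            .withInactivityInterval(5 * 60 * 1000L)  // or after 5 minutes without writes
            .withMaxPartSize(128 * 1024 * 1024L)     // or once it reaches 128 MB
            .build())
    .build();
// stream.addSink(sink);
```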

Re: Could not find previous entry with key.

2018-11-16 Thread Steve Bistline
I implemented hashCode() on both DEVICE_ID and MOTION_DIRECTION (the pattern is built around this one). It is still giving me the following error: java.lang.RuntimeException: Exception occurred while processing valve output watermark: at
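One thing worth checking: Flink's keyed state (which CEP uses) requires the key type to override equals() and hashCode() together and consistently; overriding hashCode() alone is not enough. A minimal sketch with a hypothetical key class (the field names below just mirror the thread's DEVICE_ID / MOTION_DIRECTION; they are not from the original code):

```java
import java.util.Objects;

// Hypothetical key class for illustration only.
public class EventKey {
    private final String deviceId;
    private final String motionDirection;

    public EventKey(String deviceId, String motionDirection) {
        this.deviceId = deviceId;
        this.motionDirection = motionDirection;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EventKey)) return false;
        EventKey other = (EventKey) o;
        return Objects.equals(deviceId, other.deviceId)
            && Objects.equals(motionDirection, other.motionDirection);
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals(): equal objects yield equal hash codes.
        return Objects.hash(deviceId, motionDirection);
    }
}
```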

Re: Flink Serialization and case class fields limit

2018-11-16 Thread Andrey Zagrebin
Hi Andrea, The 22-field limit comes from Scala [1], not Flink. I am not sure about any repo for the post, but I also cc'ed Fabian; maybe he will point to one if it exists. Best, Andrey [1] https://underscore.io/blog/posts/2016/10/11/twenty-two.html

Re: How to get Kafka record's timestamp using Flink's KeyedDeserializationSchema?

2018-11-16 Thread Andrey Zagrebin
Hi, I think this is still an on-going effort. You can monitor the corresponding issues [1] and [2]. Best, Andrey [1] https://issues.apache.org/jira/browse/FLINK-8500 [2] https://issues.apache.org/jira/browse/FLINK-8354
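As a possible workaround not mentioned in the thread: the Kafka 0.10+ consumer attaches each record's Kafka timestamp to the stream record itself, so when the job runs with event-time semantics a downstream ProcessFunction can read it from its context. A hedged sketch:

```java
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Sketch: FlinkKafkaConsumer010 (and later) assigns the Kafka record timestamp
// as the stream record's timestamp; read it via the ProcessFunction context.
public class AttachKafkaTimestamp extends ProcessFunction<String, String> {
    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        Long kafkaTimestamp = ctx.timestamp(); // may be null if no timestamp is set
        out.collect(kafkaTimestamp + "," + value);
    }
}
```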

Re: Standalone HA cluster: Fatal error occurred in the cluster entrypoint.

2018-11-16 Thread Olga Luganska
Hi Miki, Thank you for the reply! I have deleted the ZooKeeper data and was able to restart the cluster. Olga

Re: Null Pointer Exception

2018-11-16 Thread Ken Krugler
Hi Steve, I don’t have experience with the Flink CEP, but based on the previous stack trace you posted I’m guessing that for one of the records, value.getAudio_FFT1() returns null. — Ken

Flink Serialization and case class fields limit

2018-11-16 Thread Andrea Sella
Hey squirrels, I've started to study Flink serialization and its "type system" more in depth. I have a case class generated using scalapb that has more than 30 fields; I've seen that Flink still uses the CaseClassSerializer and the TypeInformation is CaseClassTypeInfo, even if in the docs [1] is

Null Pointer Exception

2018-11-16 Thread Steve Bistline
I have a fairly straightforward project that is generating a null pointer exception and a heap space error. Any thoughts on where to begin debugging this? I suspect it is in this part of the code somewhere: Pattern pattern = Pattern.begin("first event").subtype(IoTEvent.class)
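Following up on the null pointer: one common defensive step is to filter out records with null fields before they reach the CEP pattern. A self-contained sketch using a hypothetical event type (the IoTEvent and getAudio_FFT1() names just mirror the thread; a Flink DataStream.filter(...) takes a predicate of the same shape):

```java
import java.util.List;
import java.util.stream.Collectors;

public class NullGuard {
    // Hypothetical event type for illustration only.
    static class IoTEvent {
        private final float[] audioFFT1;
        IoTEvent(float[] audioFFT1) { this.audioFFT1 = audioFFT1; }
        float[] getAudio_FFT1() { return audioFFT1; }
    }

    // Drop events whose nullable field is null before pattern matching.
    static List<IoTEvent> dropNullFFT(List<IoTEvent> events) {
        return events.stream()
                     .filter(e -> e.getAudio_FFT1() != null)
                     .collect(Collectors.toList());
    }
}
```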

Re: Standalone HA cluster: Fatal error occurred in the cluster entrypoint.

2018-11-16 Thread miki haiat
I "solved" this issue by cleaning the ZooKeeper information and starting the cluster again. All the checkpoint and job graph data will be erased, and basically you will start a new cluster... It happened to me a lot on 1.5.x; on 1.6 things are running perfectly. I'm not sure why this error is

BucketingSink vs StreamingFileSink

2018-11-16 Thread Edward Rojas
Hello, We are currently using Flink 1.5 and we use the BucketingSink to save the result of job processing to HDFS. The data is in JSON format and we store one object per line in the resulting files. We are planning to upgrade to Flink 1.6 and we see that there is this new StreamingFileSink,

The flink checkpoint time interval is not normal

2018-11-16 Thread 远远
The Flink app is running on Flink 1.5.4 on YARN; the checkpoint should be triggered every 5 min... [image: image.png]
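For reference, the checkpoint interval and related pacing settings are configured on the execution environment; a configuration sketch (the values below are illustrative, not from the thread):

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Trigger a checkpoint every 5 minutes.
env.enableCheckpointing(5 * 60 * 1000L, CheckpointingMode.EXACTLY_ONCE);

// A minimum pause keeps slow checkpoints from stacking back-to-back,
// which can make observed intervals look irregular.
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(60 * 1000L);
env.getCheckpointConfig().setCheckpointTimeout(10 * 60 * 1000L);
```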

How to get Kafka record's timestamp using Flink's KeyedDeserializationSchema?

2018-11-16 Thread Flink Developer
Hi, I have a Flink app which uses the FlinkKafkaConsumer. I am interested in retrieving the Kafka timestamp for a given record/offset using the *KeyedDeserializationSchema*, which provides the topic, partition, offset and message. How can the timestamp be obtained through this interface? Thank you