Re: Metrics not reported to graphite

2016-09-01 Thread Jack Huang
Found the solution to the follow-up question: https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/config.html#metrics On Thu, Sep 1, 2016 at 3:46 PM, Jack Huang wrote: > Hi Greg, > > Following your hint, I found the solution here ( > https://issues.apache.org/jira/

Re: Metrics not reported to graphite

2016-09-01 Thread Jack Huang
their names starting with the host IP address? Thanks, Jack On Thu, Sep 1, 2016 at 3:04 PM, Greg Hogan wrote: > Have you copied the required jar files into your lib/ directory? Only JMX > support is provided in the distribution. > > On Thu, Sep 1, 2016 at 5:07 PM, Jack Huang wrote:

Metrics not reported to graphite

2016-09-01 Thread Jack Huang
Hi all, I followed the instruction for reporting metrics to a Graphite server on the official document ( https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html#metric-types ). Specifically, I have the following config/code in my project metrics.reporters: graphite metrics
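Piecing the thread together, the fix appears to have two parts: the Graphite reporter jar must be copied into lib/ (per Greg's reply, only JMX support ships in the distribution), and flink-conf.yaml needs the reporter keys from the linked config page. A sketch only, assuming Flink 1.1-style key names; the host and port values are placeholders:

```yaml
# flink-conf.yaml -- reporter name "graphite" matches the snippet above;
# host/port are placeholders. The flink-metrics-graphite jar must also be
# copied into lib/, since only the JMX reporter is bundled.
metrics.reporters: graphite
metrics.reporter.graphite.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.graphite.host: localhost
metrics.reporter.graphite.port: 2003
```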

Re: Handle deserialization error

2016-09-01 Thread Jack Huang
; and chain it with a flatMap operator where you can use your custom > deserializer and handle deserialization errors. > > Best, > Yassine > > On Aug 27, 2016 02:37, "Jack Huang" wrote: > >> Hi all, >> >> I have a custom deserializer which I pass to a
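The pattern Yassine suggests -- consume raw strings and do the parsing in a chained flatMap so malformed records can be skipped -- can be sketched in plain Scala. The comma-separated `parse` function and the `Event` shape are made up for illustration; in the real job the input would come from a `FlinkKafkaConsumer09[String]` and `rawStream.flatMap(parse(_))` would replace the collection call:

```scala
import scala.util.Try

case class Event(id: Int, data: String)

// Hypothetical parser: returns None for malformed input instead of throwing,
// so bad records are dropped rather than failing the job.
def parse(raw: String): Option[Event] = Try {
  val parts = raw.split(",")
  Event(parts(0).toInt, parts(1))
}.toOption

// Plain collections stand in for the DataStream here: flatMap over Option
// silently discards anything that failed to parse.
val raw    = List("1,login", "not-json", "2,click")
val events = raw.flatMap(parse)
println(events) // List(Event(1,login), Event(2,click))
```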

Re: Cannot pass objects with null-valued fields to the next operator

2016-09-01 Thread Jack Huang
or nullable fields? > > Stephan > > > On Mon, Aug 29, 2016 at 8:04 PM, Jack Huang wrote: > >> Hi all, >> >> It seems like Flink does not allow passing case class objects with >> null-valued fields to the next operators. I am getting the following error >>

Cannot pass objects with null-valued fields to the next operator

2016-08-29 Thread Jack Huang
Hi all, It seems like Flink does not allow passing case class objects with null-valued fields to the next operators. I am getting the following error message: *Caused by: java.lang.RuntimeException: Could not forward element to next operator* at org.apache.flink.streaming.runtime.task
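Stephan's follow-up about nullable fields points at the usual remedy: Flink's Scala case-class serializer does not handle null fields, so absent values are modeled as `Option` instead of null. A minimal sketch with made-up field names:

```scala
// Model optionally-absent values as Option rather than null, which the
// case-class serializer cannot forward. Field names are illustrative.
case class Session(id: Int, referrer: Option[String])

val bad   = Session(1, null)         // what the failing job effectively emits
val good  = Session(1, None)         // serializer-friendly: absence is explicit
val known = Session(2, Some("ads"))

// Downstream operators pattern-match or use getOrElse instead of null checks.
def describe(s: Session): String = s.referrer.getOrElse("unknown")
println(describe(good)) // unknown
```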

Handle deserialization error

2016-08-26 Thread Jack Huang
Hi all, I have a custom deserializer which I pass to a Kafka source to transform JSON strings to a Scala case class. val events = env.addSource(new FlinkKafkaConsumer09[Event]("events", new JsonSerde(classOf[Event], new Event), kafkaProp)) There are times when the JSON message is malformed, in wh

Re: Cannot use WindowedStream.fold with EventTimeSessionWindows

2016-08-17 Thread Jack Huang
window function. > > Cheers, > Till > > On Wed, Aug 17, 2016 at 3:21 AM, Jack Huang wrote: > >> Hi all, >> >> I want to window a series of events using SessionWindow and use fold >> function to incrementally aggregate the result. >> >>

Cannot use WindowedStream.fold with EventTimeSessionWindows

2016-08-16 Thread Jack Huang
Hi all, I want to window a series of events using SessionWindow and use the fold function to incrementally aggregate the result. events .keyBy(_.id) .window(EventTimeSessionWindows.withGap(Time.minutes(1))) .fold(new Session)(eventFolder) However I get java.lang.UnsupportedOperationEx
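Till's reply points to a window function as the workaround: fold() is rejected on merging (session) windows in this Flink version, but the same aggregation can run inside apply(). A sketch only, not runnable standalone; the key type `String`, `Event`, `Session`, and `eventFolder` are placeholders matching the thread's snippet:

```scala
// Sketch: replace the unsupported fold() on a session window with apply()
// and do the fold eagerly over the window's elements.
events
  .keyBy(_.id)
  .window(EventTimeSessionWindows.withGap(Time.minutes(1)))
  .apply { (key: String, window: TimeWindow, elems: Iterable[Event], out: Collector[Session]) =>
    // foldLeft plays the role the incremental fold would have played.
    out.collect(elems.foldLeft(new Session)(eventFolder))
  }
```

The trade-off versus an incremental fold is that all elements of the session are buffered until the window fires.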

Re: Parsing source JSON String as Scala Case Class

2016-08-05 Thread Jack Huang
tributing them). > > Making a Scala 'val' a 'lazy val' often does the trick (at minimal > performance cost). > > On Thu, Aug 4, 2016 at 3:56 AM, Jack Huang wrote: > >> Hi all, >> >> I want to read a source of JSON String as Scala Case Cla

Parsing source JSON String as Scala Case Class

2016-08-03 Thread Jack Huang
Hi all, I want to read a source of JSON String as Scala Case Class. I don't want to have to write a serde for every case class I have. The idea is: val events = env.addSource(new FlinkKafkaConsumer09[Event]("events", new JsonSerde(classOf[Event]), kafkaProp)) I was implementing my own JsonSer
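The "lazy val" tip in the reply addresses the usual failure mode here: the JSON parser object is not serializable, and a plain val gets captured when Flink ships the schema to the workers. A sketch under that assumption -- the `JsonSerde` name comes from the thread, while the choice of Gson and the exact Flink 1.x interface methods are my guesses:

```scala
import com.google.gson.Gson
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.util.serialization.DeserializationSchema

// One generic serde for every case class: hold the Class token, and build
// the non-serializable Gson parser lazily on the worker instead of
// shipping it with the closure.
class JsonSerde[T](clazz: Class[T]) extends DeserializationSchema[T] {
  @transient private lazy val gson = new Gson() // 'lazy' avoids serializing Gson

  override def deserialize(bytes: Array[Byte]): T =
    gson.fromJson(new String(bytes, "UTF-8"), clazz)

  override def isEndOfStream(next: T): Boolean = false

  override def getProducedType: TypeInformation[T] = TypeInformation.of(clazz)
}
```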

Re: Container running beyond physical memory limits when processing DataStream

2016-08-03 Thread Jack Huang
Hi Max, Changing yarn-heap-cutoff-ratio works seem to suffice for the time being. Thanks for your help. Regards, Jack On Tue, Aug 2, 2016 at 11:11 AM, Jack Huang wrote: > Hi Max, > > Is there a way to limit the JVM memory usage (something like the -Xmx > flag) for the task manage
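For reference, the setting the resolution refers to lives in flink-conf.yaml (the key is dotted, `yarn.heap-cutoff-ratio`); raising it reserves a larger share of the YARN container for non-heap memory so the JVM stays under the physical limit. The value below is illustrative, not from the thread:

```yaml
# flink-conf.yaml -- reserve 30% of the YARN container for off-heap/native
# memory so the JVM heap stays under YARN's physical memory limit.
# 0.3 is an illustrative value; tune it to the observed overhead.
yarn.heap-cutoff-ratio: 0.3
```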

Re: Container running beyond physical memory limits when processing DataStream

2016-08-02 Thread Jack Huang
> On Fri, Jul 29, 2016 at 11:19 AM, Jack Huang wrote: >> >> Hi Max, >> >> Each event is only a few hundred bytes. I am reading from a Kafka topic >> from some offset in the past, so th

Container running beyond physical memory limits when processing DataStream

2016-07-28 Thread Jack Huang
Hi all, I am running a test Flink streaming task under YARN. It reads messages from a Kafka topic and writes them to local file system. object PricerEvent { def main(args:Array[String]) { val kafkaProp = new Properties() kafkaProp.setProperty("bootstrap.servers", "localhost:66

Periodically evicting internal states when using mapWithState()

2016-06-06 Thread Jack Huang
Hi all, I have an incoming stream of event objects, each with its session ID. I am writing a task that aggregates the events by session. The general logic looks like: case class Event(sessionId:Int, data:String) case class Session(id:Int, var events:List[Event]) val events = ... //some source event
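One way to periodically evict per-key state with mapWithState is to return None as the new state once a session is considered finished, which clears the stored value for that key. A sketch only, not runnable standalone: `isSessionOver` is a hypothetical predicate, and `Session` is adapted to an immutable shape rather than the thread's `var events` field:

```scala
// Flink Scala API sketch: returning None as the second tuple element clears
// this key's state, evicting the finished session.
case class Event(sessionId: Int, data: String)
case class Session(id: Int, events: List[Event])

val sessions = events
  .keyBy(_.sessionId)
  .mapWithState[Session, Session] { (ev: Event, state: Option[Session]) =>
    val s       = state.getOrElse(Session(ev.sessionId, Nil))
    val updated = s.copy(events = ev :: s.events)
    if (isSessionOver(updated)) (updated, None)          // emit and drop state
    else                        (updated, Some(updated)) // keep accumulating
  }
```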

Replays message in Kafka topics with FlinkKafkaConsumer09

2016-04-21 Thread Jack Huang
"auto.offset.reset", "earliest") env.addSource(new FlinkKafkaConsumer09[String](input, new SimpleStringSchema, kafkaProp)) .print I thought *auto.offset.reset* is going to do the trick. What am I missing here? Thanks, Jack Huang
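One caveat with `auto.offset.reset` is that Kafka only applies it when the consumer group has no committed offsets, so a previously used `group.id` resumes from where it left off instead of replaying. A sketch of the properties involved; the broker address and group-name prefix are placeholders (the thread's own address is truncated):

```scala
import java.util.Properties

val kafkaProp = new Properties()
kafkaProp.setProperty("bootstrap.servers", "localhost:9092") // placeholder address
// A fresh, unused group.id has no committed offsets, so "earliest" takes effect
// and the topic is consumed from the beginning.
kafkaProp.setProperty("group.id", s"replay-${System.currentTimeMillis}")
kafkaProp.setProperty("auto.offset.reset", "earliest")
```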

Re: Checkpoint and restore states

2016-04-21 Thread Jack Huang
ob manager automatically restarts the job under the same job ID 7. Observe from the output that the states are restored Jack Jack Huang On Thu, Apr 21, 2016 at 1:40 AM, Aljoscha Krettek wrote: > Hi, > yes Stefano is spot on! The state is only restored if a job is restarted > because

Re: Checkpoint and restore states

2016-04-20 Thread Jack Huang
ount > } > def restoreState(state: Integer) { > count = state > } > } Thanks, Jack Huang On Wed, Apr 20, 2016 at 5:36 AM, Stefano Baghino < stefano.bagh...@radicalbit.io> wrote: > My bad, thanks for pointing that out. > > On Wed, Apr 20, 2016 at 1:49 PM, Alj
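The fragment above appears to come from the (pre-Flink-1.4) `Checkpointed` interface. A self-contained sketch of the same idea, a counting map whose count is snapshotted with each checkpoint and restored on restart; the class name and element types are illustrative:

```scala
import org.apache.flink.api.common.functions.MapFunction
import org.apache.flink.streaming.api.checkpoint.Checkpointed

// A counting map whose count survives job restarts via checkpoints.
class CountingMap extends MapFunction[String, (String, Int)]
    with Checkpointed[Integer] {

  private var count: Int = 0

  override def map(in: String): (String, Int) = {
    count += 1
    (in, count)
  }

  // Called on each checkpoint: return the state to persist.
  override def snapshotState(checkpointId: Long, timestamp: Long): Integer = count

  // Called when the job is restored, e.g. after the job manager restarts it.
  override def restoreState(state: Integer): Unit = { count = state }
}
```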

Checkpoint and restore states

2016-04-19 Thread Jack Huang
yBy({s => s}) > *.mapWithState((in:String, count:Option[Int]) => { val newCount = count.getOrElse(0) + 1; ((in, newCount), Some(newCount)) })* .print() Thanks, Jack Huang