Re: Simple stdout sink for testing Table API?

2018-06-23 Thread Hequn Cheng
Hi chrisr, It seems there is no "single line" way to solve your problem. To print results on screen, you can use the DataStream.print() / DataSet.print() method, and to limit the output you can add a FilterFunction. The code looks like: Table projection1 = customers >
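For reference, a minimal runnable sketch of this suggestion, assuming the Flink 1.5 Table API; the "customers" table, its columns, and the limit of 10 rows are placeholders, since the original code above is truncated. Parallelism is set to 1 so the per-subtask counter acts as a global limit.

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class PrintFirstRows {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1); // keep the row counter below effectively global
    StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

    // ... register the "customers" source here; omitted for brevity ...
    Table projection1 = tEnv.scan("customers").select("id, name");

    DataStream<Row> rows = tEnv.toAppendStream(projection1, Row.class);
    rows.filter(new FilterFunction<Row>() {
      private int count = 0; // per-subtask counter; global because parallelism is 1
      @Override
      public boolean filter(Row row) {
        return count++ < 10; // pass only the first 10 rows
      }
    }).print();

    env.execute("print-first-rows");
  }
}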

Re: [BucketingSink] incorrect indexing of part files, when part suffix is specified

2018-06-23 Thread zhangminglei
Hi Rinat, I tried the situation you described and it works fine for me. The partCounter increments as we would hope. When a new part file is created, I did not see any duplicate part index. Here is my code for that; you can take a look. In my case, the max index of a part file is part-0-683PartSuffix, other
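A minimal sketch reconstructing this test setup, assuming the Flink 1.5 BucketingSink and its setPartSuffix(...) option; the output path, batch size, and suffix value are illustrative only.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

public class PartSuffixExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    DataStream<String> stream = env.fromElements("a", "b", "c");

    BucketingSink<String> sink = new BucketingSink<>("/tmp/bucketing-out");
    sink.setBatchSize(1024L);         // small batch size to force frequent part rolls
    sink.setPartPrefix("part");       // the default prefix, shown explicitly
    sink.setPartSuffix("PartSuffix"); // produces names like part-0-683PartSuffix

    stream.addSink(sink);
    env.execute("part-suffix-example");
  }
}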

Simple stdout sink for testing Table API?

2018-06-23 Thread chrisr123
Is there a simple way to output the first few rows of a Flink table to stdout when developing an application? I just want to see the first 10-20 rows on screen during development to make sure my logic is correct. There doesn't seem to be something like print(10) in the API to see the first n

Custom Watermarks with Flink

2018-06-23 Thread Wyatt Frelot
I originally posted to Stack Overflow because I was trying to figure out how to do this in Flink. Wondering how to implement something of the sort: (1) *write up: * https://drive.google.com/file/d/0Bw69DO1tid2_SzVVendtUV9WMVdIUXptQ1hHSl9KNjAyMTBn/view?usp=drivesdk (2) *original post: *
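The linked write-up is not reproduced in the archive; as a generic starting point for custom watermarks, here is a minimal sketch of a periodic watermark assigner for Flink's DataStream API. The (key, eventTimeMillis) tuple event type and the 5-second out-of-orderness bound are illustrative assumptions.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class CustomWatermarkAssigner
    implements AssignerWithPeriodicWatermarks<Tuple2<String, Long>> {

  private static final long MAX_OUT_OF_ORDERNESS_MS = 5_000L; // tolerate 5s of lateness
  private long maxSeenTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS_MS;

  @Override
  public long extractTimestamp(Tuple2<String, Long> event, long previousElementTimestamp) {
    maxSeenTimestamp = Math.max(maxSeenTimestamp, event.f1);
    return event.f1;
  }

  @Override
  public Watermark getCurrentWatermark() {
    // the watermark trails the largest timestamp seen so far
    return new Watermark(maxSeenTimestamp - MAX_OUT_OF_ORDERNESS_MS);
  }
}

It would be hooked in with stream.assignTimestampsAndWatermarks(new CustomWatermarkAssigner()).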

Re: [BucketingSink] incorrect indexing of part files, when part suffix is specified

2018-06-23 Thread Rinat
Hi mates, could anyone please have a look at my PR, which fixes the issue of incorrect indexing in the BucketingSink component? Thx > On 18 Jun 2018, at 10:55, Rinat wrote: > > I’ve created a JIRA issue https://issues.apache.org/jira/browse/FLINK-9603 >

Re: Stream Join With Early firings

2018-06-23 Thread Johannes Schulte
Thanks Fabian! This seems to be the way to go On Tue, Jun 19, 2018 at 12:18 PM Fabian Hueske wrote: > Hi Johannes, > > You are right. You should approach the problem with the semantics that you > need before thinking about optimizations such as state size. > > The Table API / SQL offers (in
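Fabian's reply is cut off above; for context, a time-windowed join is the kind of Table API / SQL feature under discussion for bounding join state. A minimal sketch, assuming tables "Orders" and "Shipments" are already registered with event-time attributes o_rowtime and s_rowtime (all names here are placeholders):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class WindowedJoinSketch {
  public static void main(String[] args) {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

    // ... register "Orders" and "Shipments" with event-time attributes ...

    // join rows whose event times lie within one hour of each other, which
    // lets Flink bound the state it must keep for the join
    Table joined = tEnv.sqlQuery(
        "SELECT o.orderId, s.shipId " +
        "FROM Orders o, Shipments s " +
        "WHERE o.orderId = s.orderId " +
        "AND o.o_rowtime BETWEEN s.s_rowtime - INTERVAL '1' HOUR AND s.s_rowtime");
  }
}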

Re: A question about Kryo and Window State

2018-06-23 Thread Vishal Santoshi
Actually, yes. I have a job already running with "FieldSerializer" in production. Any insights will be appreciated. On Sat, Jun 23, 2018 at 7:39 AM, Vishal Santoshi wrote: > Thanks. > > On Thu, Jun 21, 2018 at 4:34 AM, Tzu-Li (Gordon) Tai > wrote: > >> Hi Vishal, >> >> Kryo has a serializer

Re: A question about Kryo and Window State

2018-06-23 Thread Vishal Santoshi
Thanks. On Thu, Jun 21, 2018 at 4:34 AM, Tzu-Li (Gordon) Tai wrote: > Hi Vishal, > > Kryo has a serializer called `CompatibleFieldSerializer` that allows for > simple backward compatibility changes, such as adding non-optional fields / > removing fields. > > If using the KryoSerializer is a
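A minimal sketch of the approach Gordon mentions: telling Flink to route a type through Kryo's CompatibleFieldSerializer instead of the default FieldSerializer, so simple field additions/removals remain readable. MyPojo is a placeholder for the actual state class.

import com.esotericsoftware.kryo.serializers.CompatibleFieldSerializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KryoCompatExample {
  // placeholder for the actual class kept in window state
  public static class MyPojo {
    public String name;
    public int count;
  }

  public static void main(String[] args) {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // have Kryo serialize MyPojo with CompatibleFieldSerializer, which writes
    // field names alongside values to tolerate schema changes
    env.getConfig().addDefaultKryoSerializer(MyPojo.class, CompatibleFieldSerializer.class);
  }
}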

Few question about upgrade from 1.4 to 1.5 flink ( some very basic )

2018-06-23 Thread Vishal Santoshi
1. Can anyone confirm, or has anyone done, a rolling upgrade from 1.4 to 1.5? I am not sure we can. It seems that the JM cannot recover jobs, failing with this exception: Caused by: java.io.InvalidClassException: org.apache.flink.runtime.jobgraph.tasks.CheckpointCoordinatorConfiguration; local class incompatible: stream

Re: [Flink-9407] Question about proposed ORC Sink !

2018-06-23 Thread zhangminglei
Yes, it should be exit. Thanks to Ted Yu, exactly right! Cheers Zhangminglei > On 23 Jun 2018, at 12:40 PM, Ted Yu wrote: > > For #1, the word exist should be exit, right ? > Thanks > > Original message > From: zhangminglei <18717838...@163.com> > Date: 6/23/18 10:12 AM (GMT+08:00) >