Re: Stream in loop and not getting to sink (Parquet writer )

2018-11-30 Thread Avi Levi
Thanks, looks good. Do you know a way to use ParquetWriter or ParquetAvroWriters with a BucketingSink file? Something like: val bucketingSink = new BucketingSink[String]("/base/path")

Re: Questions about UDTF in flink SQL

2018-11-30 Thread wangsan
Hi Rong, Yes, what Jark described is exactly what I need. Currently we have a workaround for this problem, by using a UDF whose result type is a Map. I will take a look at your proposals and PR. Thanks for your help and suggestions. Best, Wangsan > On Dec 1, 2018, at 7:30 AM, Rong Rong

Re: Questions about UDTF in flink SQL

2018-11-30 Thread Rong Rong
Hi Wangsan, If your requirement is essentially what Jark described, we already have a proposal following up [FLINK-9249] in its related/parent task: [FLINK-9484]. We are already implementing some of these internally and have one PR ready for review for FLINK-9294. Please kindly take a look and see if

Re: Using FlinkKinesisConsumer through a proxy

2018-11-30 Thread Vijay Balakrishnan
Hi Gordon, Finally figured out my issue. You do not need to add http:// in the proxyHost name. String proxyHost = "proxy-chaincom"; // not http://proxy-chain...com kinesisConsumerConfig.setProperty(AWSUtil.AWS_CLIENT_CONFIG_PREFIX + "proxyHost", proxyHost); // <== no http:// in proxyHost name TIA, Vijay
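Vijay's fix can be sketched in plain Scala without the Flink Kinesis connector on the classpath. The property-key prefix and the host name below are illustrative assumptions; in the real code the prefix comes from the connector's AWSUtil.AWS_CLIENT_CONFIG_PREFIX constant. The point being demonstrated is only that the proxy host must be a bare host name, without an "http://" scheme:

```scala
import java.util.Properties

object ProxyConfigDemo {
  // Assumption: illustrative prefix; the real constant is AWSUtil.AWS_CLIENT_CONFIG_PREFIX
  // from the Flink Kinesis connector.
  val AwsClientConfigPrefix = "aws.clientconfig."

  // Build a consumer config with proxy settings, as in the thread.
  def buildConfig(proxyHost: String, proxyPort: String): Properties = {
    val config = new Properties()
    config.setProperty(AwsClientConfigPrefix + "proxyHost", proxyHost)
    config.setProperty(AwsClientConfigPrefix + "proxyPort", proxyPort)
    config
  }

  def main(args: Array[String]): Unit = {
    // Key point from the thread: a bare host name, NOT "http://proxy-chain.example.com".
    // (The host name here is hypothetical.)
    val config = buildConfig("proxy-chain.example.com", "8080")
    println(config.getProperty(AwsClientConfigPrefix + "proxyHost"))
  }
}
```

The same Properties object would then be passed to the FlinkKinesisConsumer constructor as usual.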

RE: Apache Flink 1.7.0 jar complete ?

2018-11-30 Thread LINZ, Arnaud
Hi, I was probably too much of an early bird; removing the cache did solve that problem. However this test program still does not end (cf. https://issues.apache.org/jira/browse/FLINK-10832) so I still can’t use that version… I wonder why I’m the only one with this problem? I’ve tested it on

Re: How to apply watermark on datastream and then do join operation on it

2018-11-30 Thread Fabian Hueske
Hi, Welcome to the mailing list. What exactly is your problem? Do you receive an error message? Is the program not compiling? Do you receive no output? Regardless of that, I would recommend providing the timestamp extractors to the Kafka source functions. Also, I would have a close look at the

Re: Custom scheduler in Flink

2018-11-30 Thread Felipe Gutierrez
thanks. I saw the Google doc just now. I am going to print and study it at the weekend. -- Felipe Gutierrez -- skype: felipe.o.gutierrez -- https://felipeogutierrez.blogspot.com On Fri, Nov 30, 2018 at 4:42 PM Felipe Gutierrez <

Re: Custom scheduler in Flink

2018-11-30 Thread Felipe Gutierrez
thanks. I will check it out. -- Felipe Gutierrez -- skype: felipe.o.gutierrez -- https://felipeogutierrez.blogspot.com On Fri, Nov 30, 2018 at 3:51 PM Till Rohrmann wrote: > Hi Felipe, > > https://issues.apache.org/jira/browse/FLINK-10429

Re: Changes in Flink 1.6.2

2018-11-30 Thread Dominik Wosiński
Hey, @Dawid is right. It is a known issue in Scala. This is due to the functional nature of Scala and is explained on StackOverflow[1]. Best Regards, Dom. [1] https://stackoverflow.com/questions/7498677/why-is-this-reference-ambiguous Fri, 30 Nov 2018 at 15:56 Dawid Wysakowicz wrote: > Hi

Re: Changes in Flink 1.6.2

2018-11-30 Thread Boris Lublinsky
It does, weird. Boris Lublinsky FDP Architect boris.lublin...@lightbend.com https://www.lightbend.com/ > On Nov 30, 2018, at 8:56 AM, Dawid Wysakowicz wrote: > > Hi Boris, > > I am not a Scala expert, so I won't be able to explain the root cause > completely, but it is because you access

Re: Changes in Flink 1.6.2

2018-11-30 Thread Dawid Wysakowicz
Hi Boris, I am not a Scala expert, so I won't be able to explain the root cause completely, but it is because you access an empty-parameter Java method as a Scala parameterless one (I don't know why it doesn't work). If you change your code to: env.getStreamGraph.getJobGraph().getJobID it will work.
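The behaviour Dawid describes can be reproduced in a minimal, self-contained Scala sketch with no Flink dependency. The class and method names below are stand-ins for the real Flink API, and the job ID string is made up; the point is only that a method which has an argument list (even one whose arguments all have defaults) is not a parameterless method in Scala's sense, so dropping the parentheses does not compile:

```scala
class JobGraph {
  def getJobID: String = "jid-42" // hypothetical ID, truly parameterless method
}

class Environment {
  // Stand-in for the Java API method: it HAS an argument list (here with a
  // default value), so it is not parameterless from Scala's point of view.
  def getJobGraph(jobID: String = null): JobGraph = new JobGraph
}

object ParenDemo {
  def main(args: Array[String]): Unit = {
    val env = new Environment
    // env.getJobGraph.getJobID   // does not compile: missing argument list
    val id = env.getJobGraph().getJobID // explicit () supplies the argument list
    println(id)
  }
}
```

This mirrors the fix in the thread: writing env.getStreamGraph.getJobGraph().getJobID, with explicit parentheses on getJobGraph, makes the call unambiguous.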

Re: Custom scheduler in Flink

2018-11-30 Thread Till Rohrmann
Hi Felipe, https://issues.apache.org/jira/browse/FLINK-10429 might also be interesting. The community is currently working on making the Scheduler pluggable to make it easier to extend this component. Cheers, Till On Wed, Nov 28, 2018 at 2:56 PM Felipe Gutierrez < felipe.o.gutier...@gmail.com>

Re: Changes in Flink 1.6.2

2018-11-30 Thread Boris Lublinsky
Dominik, Any feedback on this? Boris Lublinsky FDP Architect boris.lublin...@lightbend.com https://www.lightbend.com/ > On Nov 28, 2018, at 2:56 PM, Boris Lublinsky > wrote: > > Here is the code > > def executeLocal() : Unit = { > val env =

Re: Apache Flink 1.7.0 jar complete ?

2018-11-30 Thread Till Rohrmann
Hi Arnaud, I tried to set up the same testing project as you've described and it worked for me. Could you maybe try to clear your Maven repository? Maybe not all dependencies had been properly mirrored to Maven Central. Cheers, Till On Fri, Nov 30, 2018 at 2:31 PM Till Rohrmann wrote: > Thanks

Re: Stream in loop and not getting to sink (Parquet writer )

2018-11-30 Thread Kostas Kloudas
And for a Java example which is actually similar to your pipeline, you can check the ParquetStreamingFileSinkITCase. On Fri, Nov 30, 2018 at 2:39 PM Kostas Kloudas wrote: > Hi Avi, > > At a first glance I am not seeing anything wrong with your code. > Did you verify that there are elements

Re: Stream in loop and not getting to sink (Parquet writer )

2018-11-30 Thread Kostas Kloudas
Hi Avi, At a first glance I am not seeing anything wrong with your code. Did you verify that there are elements flowing in your pipeline and that checkpoints are actually completed? And also can you check the logs at Job and Task Manager for anything suspicious? Unfortunately, we do not allow

Re: Check-pointing error

2018-11-30 Thread Felipe Quirce
Hi Chesnay, I tried with the version 1.7.0 and I had the same error. 2018-11-30 13:13:00,718 INFO org.apache.flink.runtime.taskmanager.Task - keyedstats-processor-165 -> map2alert-165 -> Process -> Sink: sink-level165 (1/4) (a972c963d4ee576a88c9116e946eec62) switched from

Re: Apache Flink 1.7.0 jar complete ?

2018-11-30 Thread Till Rohrmann
Thanks for reporting this problem Arnaud. I will investigate this problem. Cheers, Till On Fri, Nov 30, 2018 at 12:20 PM LINZ, Arnaud wrote: > Hi, > > > > When trying to update to 1.7.0, a simple local cluster test fails with : > > > > 12:03:55.182 [main] DEBUG

Re: Looking for relevant sources related to connecting Apache Flink and Edgent.

2018-11-30 Thread Felipe Gutierrez
Cool, thanks! I am able to verify the Execution Query Plan on this example: https://github.com/felipegutierrez/flink-first/blob/master/src/main/java/flink/example/streaming/SocketWindowWordCountFilterJava.java I am also going to build a little POC like you said. Thanks, Felipe -- Felipe

Apache Flink 1.7.0 jar complete ?

2018-11-30 Thread LINZ, Arnaud
Hi, When trying to update to 1.7.0, a simple local cluster test fails with : 12:03:55.182 [main] DEBUG o.a.f.s.a.graph.StreamGraphGenerator - Transforming SinkTransformation{id=2, name='Print to Std. Out', outputType=GenericType, parallelism=1} 12:03:55.182 [main] DEBUG

Re: [Table API example] Table program cannot be compiled. This is a bug. Please file an issue

2018-11-30 Thread Fabian Hueske
Hi Marvin, Can you post the query (+ schema of the tables) that leads to this exception? Thank you, Fabian On Fri, Nov 30, 2018 at 10:55 Marvin777 < xymaqingxiang...@gmail.com> wrote: > Hi all, > > I have a simple test for looking at Flink Table API and hit an exception > reported as a bug.

Re: Looking for relevant sources related to connecting Apache Flink and Edgent.

2018-11-30 Thread Fabian Hueske
Hi Felipe, You can define TableSources (for SQL, Table API) that support filter push-down. The optimizer will figure out this opportunity and hand filters to a custom TableSource. I should add that AFAIK this feature is not used very often (expect some rough edges) and that the API is likely to

Re: Flink SQL

2018-11-30 Thread Dominik Wosiński
Hey, Not exactly sure what you mean by "nothing", but generally the concept is: the data is fed to the dynamic table and the result of the query creates another dynamic table. So, if the resulting query returns an empty table, no data will indeed be written to S3. Not sure if this was what

[Table API example] Table program cannot be compiled. This is a bug. Please file an issue

2018-11-30 Thread Marvin777
Hi all, I have a simple test for looking at the Flink Table API and hit an exception reported as a bug. I wonder though if I am missing something. BTW, the example is flink-examples-table-with-dependencies.jar, and the version is 1.4.2. Thanks, Marvin. [image: image.png]

Re: Questions about UDTF in flink SQL

2018-11-30 Thread Jark Wu
Hi Wangsan, If I understand correctly, you want the return type of the UDTF to be determined by the actual arguments, not a fixed result type. For example: udtf("int, string, long", inputField) returns a composite type with [f0: INT, f1: VARCHAR, f2: BIGINT]; udtf("int", inputField) returns an
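The idea in Jark's reply — the first argument of the UDTF is a type string, and the composite result type is derived from it — can be sketched in plain Scala without Flink's TypeInformation classes. The mapping table and function names below are illustrative assumptions, not the actual Flink API:

```scala
object ResultTypeDemo {
  // Assumption: an illustrative mapping from SQL type names in the type string
  // to the type names used in the thread's example.
  val sqlToTypeName = Map("int" -> "INT", "string" -> "VARCHAR", "long" -> "BIGINT")

  // Derive composite field descriptions [f0: ..., f1: ...] from a type string
  // such as "int, string, long", as in Jark's example.
  def deriveResultType(typeString: String): Seq[String] =
    typeString.split(",").map(_.trim.toLowerCase).toSeq.zipWithIndex.map {
      case (t, i) => s"f$i: ${sqlToTypeName.getOrElse(t, t.toUpperCase)}"
    }

  def main(args: Array[String]): Unit = {
    println(deriveResultType("int, string, long").mkString(", "))
    // f0: INT, f1: VARCHAR, f2: BIGINT
  }
}
```

In a real implementation this derivation would live in the UDTF's result-type logic (the subject of FLINK-9249/FLINK-9484), rather than in a free-standing function like this.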

[ANNOUNCE] Apache Flink 1.7.0 released

2018-11-30 Thread Till Rohrmann
The Apache Flink community is very happy to announce the release of Apache Flink 1.7.0, which is the next major release. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. The release is

Re: Looking for relevant sources related to connecting Apache Flink and Edgent.

2018-11-30 Thread Felipe Gutierrez
I guess this message from 2016 is very much related to what I am looking for ( http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Execution-Plan-td4290.html). I am posting it here for future reference. I am going to implement a toy example to visualize this. Do you guys see this