[ANNOUNCE] Release 1.8.0, release candidate #5

2019-04-03 Thread Aljoscha Krettek
Hi All, Voting on RC 5 for Flink 1.8.0 has started: https://lists.apache.org/thread.html/36cd645bac32b4f73972f08118768b03121bb6254217202c11bc6fd5@%3Cdev.flink.apache.org%3E Please check this out if you want to verify your applications against this new Flink release. Best, Aljoscha

Re: End to End Performance Testing

2019-04-03 Thread Yu Li
A short answer is no, the test platform is not included in the blink branch. FWIW (to make it clear, I'm not the decision maker so just some of my own opinions), the test platform is for production usage and includes simulation of online jobs and probably some confidential stuff, so I don't think

Re: How is Collector out element processed?

2019-04-03 Thread Zili Chen
Collector should follow the "push" mechanism. Best, tison. Son Mai wrote on Thu, Apr 4, 2019 at 12:11 PM: > Hi Tison, > > so are you saying that the output will be iterated over by the next > operator that calls it? And that it is processed not by a push but by a pull > mechanism by the next operator, like a sink? >

Re: How is Collector out element processed?

2019-04-03 Thread Son Mai
Hi Tison, so are you saying that the output will be iterated over by the next operator that calls it? And that it is processed not by a push but by a pull mechanism by the next operator, like a sink? Thanks, On Thu, Apr 4, 2019 at 9:46 AM Zili Chen wrote: > Hi Son, > > As from Collector's document, it

Re: Install flink-1.7.2 on Azure with Hadoop 2.7 failed

2019-04-03 Thread Guowei Ma
The exception means that YARN expects the timestamp of slf4j-log4j12-1.7.15.jar to be "2019-02-11 22:32:52", but YARN finds that the slf4j-log4j12-1.7.15.jar

Re: How is Collector out element processed?

2019-04-03 Thread Zili Chen
Hi Son, As the Collector documentation puts it, it collects a record and forwards it. The Collector is the "push" counterpart of the Iterator, which "pulls" data in. Best, tison. Son Mai wrote on Thu, Apr 4, 2019 at 10:15 AM: > Hi all, > > when I create new classes extending ProcessFunction or implementing > WindowFun
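To illustrate the push/pull distinction Tison describes, here is a minimal, self-contained sketch. These are simplified stand-ins, not Flink's actual classes: the `Collector` interface, `pushSource`, and `pullSink` are invented for this example.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PushVsPull {
    // Simplified stand-in for Flink's Collector: the producer pushes
    // each record into the next stage as soon as it is emitted.
    interface Collector<T> {
        void collect(T record);
    }

    // Push style: the source drives the loop and hands records downstream.
    static void pushSource(List<Integer> input, Collector<Integer> downstream) {
        for (int x : input) {
            downstream.collect(x * 2); // downstream sees this record immediately
        }
    }

    // Pull style: the consumer drives the loop via an Iterator.
    static List<Integer> pullSink(Iterator<Integer> upstream) {
        List<Integer> out = new ArrayList<>();
        while (upstream.hasNext()) {
            out.add(upstream.next() * 2);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3);

        // Push: the collector receives records as the producer emits them.
        List<Integer> pushed = new ArrayList<>();
        pushSource(input, pushed::add);

        // Pull: the consumer asks for the next record when it is ready.
        List<Integer> pulled = pullSink(input.iterator());

        System.out.println(pushed); // [2, 4, 6]
        System.out.println(pulled); // [2, 4, 6]
    }
}
```

The result is the same either way; the difference is who drives the loop, which is the point of the thread: an operator's Collector output is handed downstream as it is emitted, not buffered for the next operator to iterate over later.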

How is Collector out element processed?

2019-04-03 Thread Son Mai
Hi all, when I create new classes extending ProcessFunction or implementing WindowFunction, there is a *Collector out* for output. How is this output processed in the next stage, for example a Sink or another WindowAssigner? Is it processed immediately by the next operator by push mechanism, or i

Re: TUMBLE function with MONTH/YEAR intervals produce milliseconds window unexpectedly

2019-04-03 Thread Vinod Mehra
Hi Dawid! I have filed a bug for a related issue to this thread: https://jira.apache.org/jira/browse/FLINK-12105 (TUMBLE INTERVAL value errors out for 100 or more value). How should we go about supporting MONTH and YEAR? If you have ideas please let me know, I will be happy to work with you to fi
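Part of what makes MONTH and YEAR intervals tricky is that, unlike HOUR or DAY, they do not correspond to a fixed number of milliseconds, so a month-sized tumbling window cannot be expressed as a constant-size window. A small stdlib sketch of that difference (plain java.time, not Flink code):

```java
import java.time.Duration;
import java.time.LocalDateTime;

public class MonthsAreNotFixed {
    public static void main(String[] args) {
        LocalDateTime jan1 = LocalDateTime.of(2019, 1, 1, 0, 0);
        LocalDateTime feb1 = jan1.plusMonths(1);
        LocalDateTime mar1 = feb1.plusMonths(1);

        // "One month" covers a different number of milliseconds each time,
        // so it cannot be modeled as a constant-size tumbling window.
        long janMillis = Duration.between(jan1, feb1).toMillis(); // 31 days
        long febMillis = Duration.between(feb1, mar1).toMillis(); // 28 days (2019)

        System.out.println(janMillis);
        System.out.println(febMillis);
        System.out.println(janMillis == febMillis); // false
    }
}
```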

Re: Source reinterpretAsKeyedStream

2019-04-03 Thread Konstantin Knauf
Hi Adrienne, you can only use DataStream#reinterpretAsKeyedStream on a stream, which has previously been keyed/partitioned by Flink with exactly the same KeySelector as given to reinterpretAsKeyedStream. It does not work with a key-partitioned stream, which has been partitioned by any other proces
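A toy illustration of why this caveat matters, in plain Java. This is not Flink's actual key-group hashing; both partitioners below are hypothetical. The point is that if an external system partitioned the data with one hash scheme while the framework assigns keys with a different one, the same key can land on different partitions, which is exactly the mismatch reinterpretAsKeyedStream cannot repair.

```java
public class PartitionerMismatch {
    // Hypothetical partitioner of some external system (e.g. a message queue).
    static int externalPartition(String key, int parallelism) {
        return Math.abs(key.hashCode() % parallelism);
    }

    // A different, equally hypothetical internal assignment, standing in for
    // the framework's own key-group logic. Any difference in scheme is enough.
    static int internalPartition(String key, int parallelism) {
        return Math.abs((key.hashCode() * 31 + 7) % parallelism);
    }

    public static void main(String[] args) {
        int parallelism = 4;
        for (String key : new String[] {"k1", "k2", "k3"}) {
            int ext = externalPartition(key, parallelism);
            int in = internalPartition(key, parallelism);
            // For these keys the two schemes disagree, so keyed state for a
            // key would be looked up on the wrong subtask.
            System.out.printf("%s: external=%d internal=%d match=%b%n",
                    key, ext, in, ext == in);
        }
    }
}
```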

Install flink-1.7.2 on Azure with Hadoop 2.7 failed

2019-04-03 Thread Reminia Scarlet
Hi all: I tried to install the Hadoop-free build of flink-1.7.2 on Azure with Hadoop 2.7. When I started to submit a Flink job to YARN, like this: bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar exceptions came out: org.apache.flink.client.deployment.ClusterDeploymentExceptio

Re: Flink - Type Erasure Exception trying to use Tuple6 instead of Tuple

2019-04-03 Thread Vijay Balakrishnan
Hi Tim, Thanks for your reply. I am not seeing an option to specify a .returns(new TypeHint>(){}) with KeyedStream ?? > monitoringTupleKeyedStream = kinesisStream.keyBy(new > KeySelector() { > public Tuple getKey(Monitoring mon) throws Exception {..return > new Tuple6<>(..}})
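The root cause being worked around here is Java type erasure: at runtime a generic container no longer carries its type parameters, which is why a framework needs an explicit hint such as returns(TypeHint) when it cannot infer them from the code. A stdlib illustration of the erasure itself (not Flink code):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // At runtime both lists have exactly the same class: the element
        // type parameter has been erased by the compiler.
        System.out.println(strings.getClass() == ints.getClass()); // true
        System.out.println(strings.getClass().getName()); // java.util.ArrayList

        // This is why a framework inspecting a runtime value typed only as
        // "Tuple" cannot recover the concrete field types on its own and
        // needs an explicit type hint from the user.
    }
}
```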

[DISCUSS] Drop Elasticsearch 1 connector

2019-04-03 Thread Chesnay Schepler
Hello everyone, I'm proposing to remove the connector for Elasticsearch 1. The connector is used significantly less than more recent versions (versions 2 & 5 are downloaded 4-5x more), and hasn't seen any development for over a year, yet still incurs maintenance overhead due to licensing and testing.

RE: InvalidProgramException when trying to sort a group within a dataset

2019-04-03 Thread Papadopoulos, Konstantinos
Hi Chesnay, Thanks for your support. The ThresholdAcvFact class is a simple POJO with the following definition:

public class ThresholdAcvFact {
    private Long timePeriodId;
    private Long geographyId;
    private Long productId;
    private Long customerId;
    private Double basePrice;
    pri

Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-03 Thread 徐涛
Hi Experts, There is a Flink application (version 1.7.2) written in Flink SQL, and the SQL in the application is quite long, consisting of about 10 tables and 1500 lines in total. When executing, I found it hangs in StreamTableEnvironment.sqlUpdate, keeps executing some code about

Re: BucketAssigner - Confusion

2019-04-03 Thread Chesnay Schepler
BucketID is a variable type, and conceptually you can use any type so long as you can provide a serializer for it (BucketAssigner#getSerializer). The documentation is wrong in this instance. The convenience Flink APIs (StreamingFileSink#forRowFormat/StreamingFileSink#forBulkFormat) default to
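To make the point concrete: the bucket identifier is generic, so any type works as long as you can also provide a serializer for it. A simplified, non-Flink model of the idea (the interface here only mimics the shape of Flink's BucketAssigner; this example buckets integers under an enum bucket id instead of the default String):

```java
public class BucketModel {
    enum SizeBucket { SMALL, LARGE }

    // Mimics the shape of BucketAssigner#getBucketId(element, context):
    // the BucketID type parameter is entirely up to the implementer.
    interface Assigner<In, BucketID> {
        BucketID getBucketId(In element);
    }

    public static void main(String[] args) {
        // Bucket integer "records" by value range into an enum-typed bucket.
        Assigner<Integer, SizeBucket> assigner =
                x -> x < 100 ? SizeBucket.SMALL : SizeBucket.LARGE;

        System.out.println(assigner.getBucketId(7));    // SMALL
        System.out.println(assigner.getBucketId(1000)); // LARGE
    }
}
```

In real Flink code the remaining obligation would be the matching serializer for the chosen bucket id type (BucketAssigner#getSerializer), which this sketch omits.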

Re: InvalidProgramException when trying to sort a group within a dataset

2019-04-03 Thread Chesnay Schepler
Your user-defined functions are referencing the class "com.iri.aa.etl.rgm.service.ThresholdAcvCalcServiceImpl" which isn't serializable. My guess is that "ThresholdAcvFact" is a non-static inner class, however I would need to see the entire class to give an accurate analysis. On 02/04/2019 1
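A self-contained demonstration of the suspected failure mode: a non-static inner class keeps a hidden reference to its enclosing instance, so serializing it drags the (possibly non-serializable) outer object along, while a static nested class has no such link. All names here are illustrative, not taken from the original code.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class InnerClassSerialization {
    // Outer "service" that is NOT serializable, like a typical service bean.
    static class Service {
        // Non-static inner class: carries a hidden reference to Service.
        class InnerFact implements Serializable {
            long id = 1L;
        }
        // Static nested class: no reference to the enclosing instance.
        static class StaticFact implements Serializable {
            long id = 1L;
        }
    }

    static boolean serializes(Object o) {
        try (ObjectOutputStream out =
                     new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (IOException e) {
            return false; // NotSerializableException is an IOException
        }
    }

    public static void main(String[] args) {
        Service service = new Service();
        // Fails: serializing InnerFact tries to serialize Service too.
        System.out.println(serializes(service.new InnerFact()));  // false
        // Succeeds: StaticFact stands alone.
        System.out.println(serializes(new Service.StaticFact())); // true
    }
}
```

Declaring the nested POJO static (or moving it to its own top-level class) is the usual fix for this kind of InvalidProgramException.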

Re: Reserving Kafka offset in Flink after modifying app

2019-04-03 Thread shengjk1
Maybe this page can help you: https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/state/savepoints.html Best, Shengjk1 On 03/26/2019 09:51, Son Mai wrote: Hi Konstantin, Thanks for the response. What still concerns me is: Am I able to recover from checkpoints even if I change m