Re: Hadoop_Compatibility

2020-08-09 Thread C DINESH
es are required.
> The Flink hadoop-compatibility dependency requires hadoop-common and
> hadoop-mapreduce-client-core, and whatever transitive dependencies these
> have.
>
> On 06/08/2020 13:08, C DINESH wrote:
> > Hi All,
> > From 1.9 version there is no flink-shaded-hado

Hadoop_Compatibility

2020-08-06 Thread C DINESH
Hi All, from version 1.9 there is no flink-shaded-hadoop2 dependency. To use Hadoop APIs like IntWritable and LongWritable, what are the dependencies we need to add? I tried searching on Google but was not able to understand the solution. Please guide me. Thanks in advance. Dinesh.
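
Per the reply above, the Writable classes come from hadoop-common, pulled in alongside flink-hadoop-compatibility and hadoop-mapreduce-client-core. A minimal sketch of using them once those artifacts are on the classpath (versions and the Scala suffix are assumptions that must match your Flink/Hadoop setup):

// Per the reply above, these artifacts are needed on the classpath:
//   org.apache.flink:flink-hadoop-compatibility_2.11  (Scala suffix assumed)
//   org.apache.hadoop:hadoop-common
//   org.apache.hadoop:hadoop-mapreduce-client-core
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;

public class WritableSketch {
    public static void main(String[] args) {
        // The Writable wrapper types themselves live in hadoop-common.
        IntWritable count = new IntWritable(42);
        LongWritable offset = new LongWritable(1_000L);
        System.out.println(count.get() + " / " + offset.get());
    }
}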

Dashboard Name

2020-07-19 Thread C DINESH
Hi All, in the Flink UI the name of the dashboard is "Apache Flink Dashboard". We have different environments. If I want to change the name of the dashboard, where do I need to change it? Thanks & Regards, Dinesh.

ElasticSearch_Sink

2020-07-15 Thread C DINESH
Hello All, can we implement the two-phase commit protocol for an Elasticsearch sink? Will there be any limitations? Thanks in advance. Warm regards, Dinesh.
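
Elasticsearch exposes no client-visible transactions, so a true pre-commit/commit/abort cycle has no server-side counterpart to bind to; the practical ceiling with the stock connector is at-least-once via flushing buffered requests on checkpoints. A minimal sketch of that setup, assuming the flink-connector-elasticsearch7 artifact, a local cluster, and a hypothetical index name:

import java.util.Collections;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class EsAtLeastOnceJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000);  // flush-on-checkpoint needs checkpointing enabled
        DataStream<String> stream = env.fromElements("a", "b", "c");  // stand-in source

        ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
                Collections.singletonList(new HttpHost("localhost", 9200, "http")),
                (ElasticsearchSinkFunction<String>) (element, ctx, indexer) ->
                        indexer.add(Requests.indexRequest()
                                .index("my-index")  // hypothetical index name
                                .source(Collections.singletonMap("data", element))));
        // At-least-once: pending requests are flushed on checkpoint, so a replay
        // after failure can index the same document twice.
        builder.setBulkFlushMaxActions(1);
        stream.addSink(builder.build());
        env.execute("es-at-least-once");
    }
}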

Mongodb_sink

2020-07-14 Thread C DINESH
Hello all, can we implement TwoPhaseCommitProtocol for MongoDB to get EXACTLY_ONCE semantics? Will there be any limitations? Thanks, Dinesh.
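
Since MongoDB 4.0 supports multi-document transactions, a sink built on Flink's TwoPhaseCommitSinkFunction is at least conceivable; one real limitation is that the transaction must stay open from preCommit until the checkpoint completes, so checkpoint duration is bounded by MongoDB's transaction lifetime (60 seconds by default). A rough skeleton, with the Mongo-specific handle left as a hypothetical stub:

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.base.VoidSerializer;
import org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer;
import org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction;

public class MongoTwoPhaseSink
        extends TwoPhaseCommitSinkFunction<String, MongoTwoPhaseSink.MongoTxn, Void> {

    public MongoTwoPhaseSink() {
        super(new KryoSerializer<>(MongoTxn.class, new ExecutionConfig()),
              VoidSerializer.INSTANCE);
    }

    @Override
    protected MongoTxn beginTransaction() {
        // Open a client session and start a multi-document transaction.
        return MongoTxn.start();
    }

    @Override
    protected void invoke(MongoTxn txn, String value, Context context) {
        // Write each record inside the open transaction.
        txn.insert(value);
    }

    @Override
    protected void preCommit(MongoTxn txn) {
        // Flush pending writes; the transaction stays open until commit(),
        // i.e. until the checkpoint that covers it completes.
    }

    @Override
    protected void commit(MongoTxn txn) {
        txn.commitTransaction();
    }

    @Override
    protected void abort(MongoTxn txn) {
        txn.abortTransaction();
    }

    // Hypothetical handle; a real one would wrap a MongoDB ClientSession.
    public static class MongoTxn implements java.io.Serializable {
        static MongoTxn start() { return new MongoTxn(); }
        void insert(String value) { /* stub */ }
        void commitTransaction() { /* stub */ }
        void abortTransaction() { /* stub */ }
    }
}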

Re: Dynamic source and sink.

2020-07-03 Thread C DINESH
Still restart the job, and optimize the downtime by using session mode.
>
> Best,
> Paul Lam
>
> On 2020-07-02 11:23, C DINESH wrote:
>
> Hi Danny,
>
> Thanks for the response.
>
> In short, without restarting we cannot add new sinks or sources.
>
> For better understa

Re: Dynamic source and sink.

2020-07-01 Thread C DINESH
> to run the appended operators you want,
> but you should keep the consistency semantics by yourself.
>
> Best,
> Danny Chan
> On 2020-06-28 3:30 PM +0800, C DINESH wrote:
>
> Hi All,
>
> In a flink job I have a pipeline. It is consuming data from one kafka
> top

Dynamic source and sink.

2020-06-28 Thread C DINESH
Hi All, in a Flink job I have a pipeline. It is consuming data from one Kafka topic and storing data to an Elasticsearch cluster. Without restarting the job, can we add another Kafka cluster and another Elasticsearch sink to the job? Which means I will supply the new Kafka cluster and elastic searc
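
As the replies above explain, the operator topology is fixed when the job graph is submitted, so adding a source or sink means rebuilding and resubmitting the job, optionally on a session cluster to reduce downtime. A sketch of a job that wires two Kafka sources up front, with broker addresses and topic names assumed and print() standing in for the Elasticsearch sinks:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class TwoSourceTwoSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties clusterA = new Properties();
        clusterA.setProperty("bootstrap.servers", "kafka-a:9092");  // assumed address
        clusterA.setProperty("group.id", "group-a");
        Properties clusterB = new Properties();
        clusterB.setProperty("bootstrap.servers", "kafka-b:9092");  // assumed address
        clusterB.setProperty("group.id", "group-b");

        // Every source/sink must be in the graph at submission time; adding a
        // third later means stop-with-savepoint, rebuild, and resubmit.
        DataStream<String> fromA = env.addSource(
                new FlinkKafkaConsumer<>("topic-a", new SimpleStringSchema(), clusterA));
        DataStream<String> fromB = env.addSource(
                new FlinkKafkaConsumer<>("topic-b", new SimpleStringSchema(), clusterB));

        fromA.print();  // stand-in for the first Elasticsearch sink
        fromB.print();  // stand-in for the second Elasticsearch sink
        env.execute("two-source-two-sink");
    }
}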

Re: Stateful-fun-Basic-Hello

2020-05-25 Thread C DINESH
Hi Team, I mean to say that now I understand, but on the documentation page flink-conf.yaml is not mentioned.
On Mon, May 25, 2020 at 7:18 PM C DINESH wrote:
> Thanks Gordon,
>
> I read the documentation several times. But I didn't understand at that
> time, flink-conf.

Re: How to control the balance by using FlinkKafkaConsumer.setStartFromTimestamp(timestamp)

2020-05-25 Thread C DINESH
understand the problem more clearly. Thanks, Dinesh.
On Mon, May 25, 2020 at 6:58 PM C DINESH wrote:
> Hi Jary,
>
> The easiest and simplest solution is: while creating each consumer you can pass
> a different config based on your requirements.
>
> Example:
>
> For creating consumer
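
The snippet is cut off right at the example, so here is a hedged sketch of what per-consumer config could look like, including the setStartFromTimestamp call from the thread subject (broker addresses, topics, group ids, and the timestamp are all assumptions):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class ConsumerConfigSketch {
    public static void main(String[] args) {
        Properties propsA = new Properties();
        propsA.setProperty("bootstrap.servers", "broker-a:9092");  // assumed
        propsA.setProperty("group.id", "group-a");

        FlinkKafkaConsumer<String> consumerA =
                new FlinkKafkaConsumer<>("topic-a", new SimpleStringSchema(), propsA);
        // Start reading from a given epoch-millis timestamp (the thread's subject).
        consumerA.setStartFromTimestamp(1590000000000L);

        Properties propsB = new Properties();
        propsB.setProperty("bootstrap.servers", "broker-b:9092");  // assumed
        propsB.setProperty("group.id", "group-b");

        FlinkKafkaConsumer<String> consumerB =
                new FlinkKafkaConsumer<>("topic-b", new SimpleStringSchema(), propsB);
        consumerB.setStartFromEarliest();  // a different startup mode per consumer
    }
}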

Re: Stateful-fun-Basic-Hello

2020-05-25 Thread C DINESH
Thanks Gordon, I read the documentation several times, but I didn't understand it at that time; flink-conf.yaml is not there. Can you please suggest 1. how to increase parallelism and 2. how to give checkpoints to the job? As far as I know there is no documentation regarding this. Or are these features

Re: How to control the balance by using FlinkKafkaConsumer.setStartFromTimestamp(timestamp)

2020-05-25 Thread C DINESH
d from topic A
> will always make the data from topic B ‘late data’ in the window operator. What I wanted
> is 1 record from A and 3600 records from B by using
> FlinkKafkaConsumer.setStartFromTimestamp(timestamp) so that I can
> simulate consuming data as in a real production envi
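
One standard way to keep a sparse topic from making the busy topic's records late is to assign timestamps and watermarks on each consumer, so watermarks are generated per Kafka partition inside the source. A sketch against the pre-1.11 assigner API, with the topic, bound, replay timestamp, and record format all assumed:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class PerSourceWatermarks {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");  // assumed
        props.setProperty("group.id", "replay");

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("topic-b", new SimpleStringSchema(), props);
        consumer.setStartFromTimestamp(1590000000000L);  // assumed replay point

        // Watermarks generated per Kafka partition inside the source, so each
        // topic advances event time at its own pace instead of instantly
        // turning the other topic's records into late data.
        consumer.assignTimestampsAndWatermarks(
                new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
                    @Override
                    public long extractTimestamp(String element) {
                        // Assumption: CSV records with an epoch-millis first field.
                        return Long.parseLong(element.split(",")[0]);
                    }
                });
    }
}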

Re: Query Rest API from IDE during runtime

2020-05-23 Thread C DINESH
Hi Annemarie, you need to use an HTTP client to connect to the JobManager.
//Creating an HttpClient object
CloseableHttpClient httpclient = HttpClients.createDefault();
//Creating an HttpGet object
HttpGet httpget = new HttpGet("https://${jobmanager:port}/jobs");
//Executing the GET request
CloseableHttpResponse response = httpclient.execute(httpget);
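
For completeness, a self-contained version of the snippet above, assuming Apache HttpClient 4.x on the classpath and a JobManager reachable at localhost:8081 (the ${jobmanager:port} placeholder stays abstract in the snippet itself):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class JobsQuery {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
            // GET /jobs returns the ids and statuses of all jobs on the cluster.
            HttpGet httpget = new HttpGet("http://localhost:8081/jobs");
            try (CloseableHttpResponse response = httpclient.execute(httpget)) {
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}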

stateful-fun2.0 checkpointing

2020-05-23 Thread C DINESH
Hi Team, 1. How can we enable checkpointing in stateful-fun 2.0? 2. How do we set parallelism? Thanks, Dinesh.
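
StateFun 2.0 runs on a Flink 1.10 runtime, so both knobs can be set in the flink-conf.yaml of the underlying Flink distribution; a minimal sketch, with all values placeholders rather than recommendations:

# flink-conf.yaml (assumed example values)
execution.checkpointing.interval: 5s
execution.checkpointing.mode: EXACTLY_ONCE
state.checkpoints.dir: file:///tmp/statefun-checkpoints
parallelism.default: 4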

Re: How to control the balance by using FlinkKafkaConsumer.setStartFromTimestamp(timestamp)

2020-05-23 Thread C DINESH
Hi Jary, what do you mean by "balance"? Could you please provide a concrete example?
On Fri, May 22, 2020 at 3:46 PM Jary Zhen wrote:
> Hello everyone,
>
> First, a brief pipeline introduction:
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
> consume multi kafka

Stateful-fun-Basic-Hello

2020-05-23 Thread C DINESH
Hi Team, I am writing my first Stateful Functions basic hello application. I am getting the following exception:
$ ./bin/flink run -c org.apache.flink.statefun.flink.core.StatefulFunctionsJob ./stateful-sun-hello-java-1.0-SNAPSHOT-jar-with-dependencies.jar
---

Statefulfun.io

2020-05-19 Thread C DINESH
Hi Team, has Streaming Ledger been replaced by statefulfun.io? Or am I missing something? Thanks and regards, Dinesh.