Re: Kafka connection issues

2020-09-23 Thread Ramya Ramamurthy
-6 at offset 834989830 to node be-kafka-dragonpit-broker-5:8017 (id: 5 rack: null) On Wed, Sep 23, 2020 at 3:08 PM Kostas Kloudas wrote: > Hi Ramya, > > Unfortunately I cannot see them. > > Kostas > > On Wed, Sep 23, 2020 at 10:27 AM Ramya Ramamurthy > wrote: > > …

Re: Kafka connection issues

2020-09-23 Thread Ramya Ramamurthy
load them somewhere and > post the links here? > Also I think that the TaskManager logs may be able to help a bit more. > Could you please provide them here? > > Cheers, > Kostas > > On Tue, Sep 22, 2020 at 8:58 AM Ramya Ramamurthy > wrote: > > > Hi, >

Kafka connection issues

2020-09-22 Thread Ramya Ramamurthy
Hi, We are seeing an issue with Flink on our production. We use version 1.7. We started seeing sudden lag on Kafka, and the consumers were no longer working/accepting messages. On enabling debug mode, the below errors were seen [image: image.jpeg] I am not sure why this

Re: Flink Redis connectivity

2020-07-23 Thread Ramya Ramamurthy
o: dev > Subject: Re: Flink Redis connectivity > > Hi, > > I think you could implement `RichMapFunction` and create `redisClient` > in the `open` method. > > Best, > Yangze Guo > > On Tue, Jul 21, 2020 at 6:43 PM Ramya Ramamurthy > wrote: > > > > Hi,
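The approach suggested in the reply above (implement `RichMapFunction` and create the `redisClient` in `open`) can be sketched roughly as follows. This is a hypothetical example, not code from the thread: `Jedis` is used only as an illustrative Redis client, and the class name, host, and port are placeholders.

```java
// Create the Redis client once per parallel task in open(), so the
// non-serializable client is never shipped with the job graph.
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import redis.clients.jedis.Jedis;

public class EnrichWithRedis extends RichMapFunction<String, String> {

    // transient: the client is created on the TaskManager, not serialized
    private transient Jedis redisClient;

    @Override
    public void open(Configuration parameters) {
        redisClient = new Jedis("redis-host", 6379); // placeholder host/port
    }

    @Override
    public String map(String key) {
        String value = redisClient.get(key);
        return value != null ? value : "unknown";
    }

    @Override
    public void close() {
        if (redisClient != null) {
            redisClient.close();
        }
    }
}
```

The same pattern (connect in `open()`, release in `close()`) applies to any DB client, including the MySQL configuration-lookup case discussed elsewhere in these threads.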

Flink Redis connectivity

2020-07-21 Thread Ramya Ramamurthy
Hi, As per the understanding we have from the documentation, I guess it's not possible to take the Redis connection within the DataStream. In that case, how should I proceed? How can I access a DB client object within the stream? I am using Flink 1.7. Any help here would be appreciated.

Flink memory consumption

2020-07-10 Thread Ramya Ramamurthy
Hi, I have Flink 1.7 running on our production. I can see that the memory used by the Job Managers is pretty high. Below is a snapshot of our pods running the JM. I had to commit 3GB memory, with a memory limit of 5GB. I have one cluster per job, to make it scalable based on our traffic rate and

Flink 1.10 with GCS for checkpoints

2020-06-15 Thread Ramya Ramamurthy
Hi, We are trying to upgrade our Flink from 1.7 to 1.10. We have our checkpoints on Google Cloud Storage today, but this is not working well with 1.10, and below is the error we get. Any help here would be appreciated. We followed the below blog for GCS-related configurations.

Re: Flink on Kubes -- issues

2020-06-12 Thread Ramya Ramamurthy
eally matters. The committed memory should > > increase automatically when it's needed. > > > > Thank you~ > > > > Xintong Song > > > > > > > > On Fri, Jun 12, 2020 at 2:24 PM Ramya Ramamurthy > > wrote: > > > >> Hi Xintong, >

Re: Flink on Kubes -- issues

2020-06-12 Thread Ramya Ramamurthy
s > configuring 'taskmanager.heap.size' to 4GB. If RocksDB is used in your > workload, you may need to further increase the off-heap memory size. > > Thank you~ > > Xintong Song > > > > On Fri, Jun 12, 2020 at 1:11 PM Ramya Ramamurthy > wrote: > > >

Re: Flink on Kubes -- issues

2020-06-11 Thread Ramya Ramamurthy
ion which comes with native Kubernetes support. > > Cheers, > Till > > On Tue, Jun 9, 2020 at 8:45 AM Ramya Ramamurthy wrote: > > > Hi, > > > > My flink jobs are constantly going down beyond an hour with the below > > exception. > >

Flink on Kubes -- issues

2020-06-09 Thread Ramya Ramamurthy
Hi, My Flink jobs are constantly going down beyond an hour with the below exception. This is Flink 1.7 on Kubernetes, with checkpoints to Google Storage. AsynchronousException{java.lang.Exception: Could not materialize checkpoint 21 for operator Source: Kafka011TableSource(sid, _zpsbd3, _zpsbd4,

Re: REST Monitoring Savepoint failed

2020-02-02 Thread Ramya Ramamurthy
lism > resuming from the taken savepoint. > > A side note, the rescaling feature has been removed in Flink >= 1.9.0 > because of some inherent limitations. > > [1] https://issues.apache.org/jira/browse/FLINK-10354 > > Cheers, > Till > > On Fri, Jan 31, 2020 at 11:49

Re: REST Monitoring Savepoint failed

2020-01-30 Thread Ramya Ramamurthy
us in progressing. Thanks, On Thu, Jan 30, 2020 at 8:45 PM Till Rohrmann wrote: > Hi Ramya, > > I think this message is better suited for the user ML list. Which version > of Flink are you using? Have you checked the Flink logs to see whether they > contain anything suspicio

REST Monitoring Savepoint failed

2020-01-30 Thread Ramya Ramamurthy
Hi, I am trying to dynamically increase the parallelism of the job. While trying to trigger the savepoint, I get the following error. Any help would be appreciated. The URL triggered is: http://stgflink.ssdev.in:8081/jobs/2865c1f40340bcb19a88a01d6ef8ff4f/savepoints/ {

Re: Flink Kafka Issues

2019-07-31 Thread Ramya Ramamurthy
mp? > > Unfortunately the image does not work on the Apache mailing list. Can you post > the image somewhere and send the link instead? > > Thanks, > > Jiangjie (Becket) Qin > > > > On Thu, Jul 18, 2019 at 9:36 AM Ramya Ramamurthy > wrote: > > > Hi, …

Flink "allow lateness" for tables

2019-07-17 Thread Ramya Ramamurthy
Hi, I would like to know if there is a configuration that enables allowed lateness on a table. The documentation mentions streams but not tables. If this is not present, is there a way to collect late records in a side output for tables? Today, we see some late packet drops in Flink,

Re: Flink Elasticsearch Sink Issue

2019-06-21 Thread Ramya Ramamurthy
packet/batch arrives. Thanks. On Fri, Jun 21, 2019 at 6:07 PM Ramya Ramamurthy wrote: > Yes, we do maintain checkpoints > env.enableCheckpointing(30); > > But we assumed it is for Kafka consumer offsets. Not sure how this is > useful in this case? Can you pls. elaborate on t
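A hedged aside on the snippet above, not something stated in the thread: `enableCheckpointing` takes its interval in milliseconds, so `env.enableCheckpointing(30)` requests a checkpoint every 30 ms. Checkpoints also matter here beyond Kafka offsets, because the Elasticsearch sink flushes its buffered bulk requests on checkpoint by default.

```java
// Fragment: enableCheckpointing(long) is in milliseconds.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(30_000L); // every 30 seconds, not every 30 ms
```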

Re: Flink Elasticsearch Sink Issue

2019-06-21 Thread Ramya Ramamurthy
ion? > > On Fri, Jun 21, 2019, 13:17 Ramya Ramamurthy wrote: > > > Hi, > > > > We use Kafka->Flink->Elasticsearch in our project. > > The data to the elasticsearch is not getting flushed, till the next batch > > arrives. > > E.g.: If t

Flink Elasticsearch Sink Issue

2019-06-21 Thread Ramya Ramamurthy
Hi, We use Kafka->Flink->Elasticsearch in our project. The data to Elasticsearch is not getting flushed until the next batch arrives. E.g.: if the first batch contains 1000 packets, it gets pushed to Elastic only after the next batch arrives [irrespective of reaching the batch time
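The buffering behaviour described in this thread usually comes down to the sink's bulk-flush thresholds. Below is a rough sketch against the elasticsearch6 connector of that era; the host, the lambda body, and the placeholder names are assumptions, and the builder settings should be checked against the connector version in use.

```java
// Fragment: force the sink to flush on a count and on a time interval,
// instead of waiting for the next batch of records to push data out.
import java.util.Collections;
import org.apache.http.HttpHost;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;

ElasticsearchSink.Builder<String> builder = new ElasticsearchSink.Builder<>(
    Collections.singletonList(new HttpHost("es-host", 9200, "http")),
    (element, ctx, indexer) -> {
        // build and add an IndexRequest for `element` here
    });

builder.setBulkFlushMaxActions(1000); // flush after 1000 buffered actions
builder.setBulkFlushInterval(5000L);  // ...or at latest every 5 seconds
```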

Application Related Configurations

2019-05-27 Thread Ramya Ramamurthy
Hi, I would like to know the best ways to read application configurations from a Flink job. Are MySQL connectors supported, so that some application-related configurations can be read from SQL? How can re-reading/re-loading of configurations be handled here? Can somebody help with some best

Re: Exceptions: org.apache.flink.table.factories.DeserializationSchemaFactory

2019-02-11 Thread Ramya Ramamurthy
INF/services/org.apache.flink.table.factories.TableFactory? Is the > JsonXXXFactory listed there? > > If not, maybe a Maven service file transformer is missing to collect all > table factories into the service file. > > Regards, > Timo > > > Am 06.02.19 um 13:37 schrieb Ramya Ramamurthy: > > Hi,

Flink SQL computing TOP count attributes

2019-02-11 Thread Ramya Ramamurthy
Hi, I am making use of Flink SQL for processing my data, and I would like to compute TOP counts for many of the parameters in a row. Is there a better way to do this, given that LIMIT is not supported in batch queries? Any help is appreciated. Thanks,

Exceptions: org.apache.flink.table.factories.DeserializationSchemaFactory

2019-02-06 Thread Ramya Ramamurthy
Hi, I am trying to read from Kafka and push to Elasticsearch. I use Flink SQL tables as well. I have added all the dependencies in my pom file, but I still get this error. If I add the sql-jar to my /lib folder, everything works fine. But I want to know what's being missed in creating a fat jar. I

Flink Table Case Statements appending spaces

2019-01-29 Thread Ramya Ramamurthy
Hi, I have encountered a weird issue. When constructing a Table Query with CASE Statements, like below: .append("CASE ") .append("WHEN aggrcat = '0' AND botcode='r4' THEN 'monitoring' ") .append("WHEN aggrcat = '1' AND botcode='r4' THEN 'aggregator' ") .append("WHEN aggrcat = '2' AND
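As a hedged guess at the trailing spaces (not confirmed in the thread): in older Flink SQL versions, the string literals in a CASE expression are typed as fixed-length CHAR of the longest branch, so shorter results like 'monitoring' get right-padded with spaces. Casting each branch explicitly is the usual workaround; verify against your version.

```java
// Fragment: the same query built with explicit casts to a
// variable-length type, so no branch is padded to a common CHAR length.
String caseExpr =
      "CASE "
    + "WHEN aggrcat = '0' AND botcode = 'r4' THEN CAST('monitoring' AS VARCHAR) "
    + "WHEN aggrcat = '1' AND botcode = 'r4' THEN CAST('aggregator' AS VARCHAR) "
    + "ELSE CAST('unknown' AS VARCHAR) END";
```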

Re: Side Outputs for late arriving records

2019-01-29 Thread Ramya Ramamurthy
nResult.getSideOutput(lateTag); > > Best, Fabian > > On Mon, Jan 28, 2019 at 11:09, Ramya Ramamurthy < > hair...@gmail.com> wrote: > > > Hi, > > > > We were trying to collect the sideOutput. > > But failed to understand as to how to convert this
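Putting the fragments of this thread together, a sketch of the late-data side output follows. `Event`, `Result`, `MyAggregate`, and the field names are placeholders, not code from the thread.

```java
// Fragment: route records arriving after the allowed lateness to a side
// output instead of silently dropping them.
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

// Anonymous subclass so the element type is captured at runtime.
OutputTag<Event> lateTag = new OutputTag<Event>("late-events") {};

SingleOutputStreamOperator<Result> mainResult = events
    .keyBy(e -> e.key)
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .allowedLateness(Time.minutes(5))
    .sideOutputLateData(lateTag)
    .aggregate(new MyAggregate());

DataStream<Event> lateStream = mainResult.getSideOutput(lateTag);
```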

numLateRecordsDropped

2019-01-29 Thread Ramya Ramamurthy
Hi, I am reading from Kafka and pushing to ES. We have basically two operators: one consumes from Kafka, and the other operates on the Flink table and pushes to ES. [image: image.png] I am able to see the Flink metric numLateRecordsDropped on my second

Re: Side Outputs for late arriving records

2019-01-28 Thread Ramya Ramamurthy
ss. > > Best, Fabian > > On Thu, Jan 24, 2019 at 06:42, Ramya Ramamurthy < > hair...@gmail.com> wrote: > > > Hi, > > > > I have a query with regard to late arriving records. > > We are using Flink 1.7 with Kafka consumers of version 2.11.0.11. …

Side Outputs for late arriving records

2019-01-23 Thread Ramya Ramamurthy
Hi, I have a query with regard to late arriving records. We are using Flink 1.7 with Kafka consumers of version 2.11.0.11. In my sink operator, which converts this table to a stream pushed to Elasticsearch, I am able to see the metric "*numLateRecordsDropped*". My Kafka

Data not getting passed between operators

2019-01-16 Thread Ramya Ramamurthy
Hi, I have a Flink 1.7 setup with Kafka 0.11 and ES 6.5. I can see the Flink Kafka consumer consuming messages, but these are not passed on to the next level, i.e. the Elasticsearch sink. I am unable to find any logs relevant to this. Logs about my Kafka consumers: 2019-01-16 17:28:05,860 DEBUG

Re: ElasticSearch Connector

2019-01-10 Thread Ramya Ramamurthy
way to implement it with kafka[2] > > > > > 1. > > https://ci.apache.org/projects/flink/flink-docs-stable/dev/event_timestamps_watermarks.html#assigning-timestamps > 2. > > https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/kafka.html#kafka-cons

Re: ElasticSearch Connector

2019-01-10 Thread Ramya Ramamurthy
? You can check the > ES connector for table API here: > > https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/connect.html#elasticsearch-connector > > Best, > > Dawid > > On 10/01/2019 09:21, Ramya Ramamurthy wrote: > > Hi, > > > > I am le

ElasticSearch Connector

2019-01-10 Thread Ramya Ramamurthy
Hi, I am learning Flink. With Flink 1.7.1, I am trying to read from Kafka and insert into Elasticsearch. I have a Kafka connector converting the data to a Flink table. In order to insert into Elasticsearch, I have converted this table to a DataStream, so as to be able to use the ElasticsearchSink.
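The conversion described above can be sketched as a fragment with placeholder names (`tableEnv`, `resultTable`, `esSink` are assumptions); `toAppendStream` is the Flink 1.7-era Table API.

```java
// Fragment: Table -> DataStream<Row>, then attach the Elasticsearch sink.
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

DataStream<Row> rows = tableEnv.toAppendStream(resultTable, Row.class);
rows.addSink(esSink); // esSink: a configured ElasticsearchSink<Row>
```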