-6 at offset 834989830 to node be-kafka-dragonpit-broker-5:8017 (id: 5 rack: null)
On Wed, Sep 23, 2020 at 3:08 PM Kostas Kloudas wrote:
> Hi Ramya,
>
> Unfortunately I cannot see them.
>
> Kostas
>
> On Wed, Sep 23, 2020 at 10:27 AM Ramya Ramamurthy
> wrote:
> >
> Could you upload them somewhere and
> post the links here?
> Also I think that the TaskManager logs may be able to help a bit more.
> Could you please provide them here?
>
> Cheers,
> Kostas
>
> On Tue, Sep 22, 2020 at 8:58 AM Ramya Ramamurthy
> wrote:
>
> > Hi,
>
Hi,
We are seeing an issue with Flink in our production. We are on version 1.7.
We started seeing sudden lag on Kafka, and the consumers were no longer
working/accepting messages. After enabling debug mode, the errors below
were seen:
[image: image.jpeg]
I am not sure why this
To: dev
> Subject: Re: Flink Redis connectivity
>
> Hi,
>
> I think you could implement `RichMapFunction` and create `redisClient`
> in the `open` method.
>
> Best,
> Yangze Guo
>
> On Tue, Jul 21, 2020 at 6:43 PM Ramya Ramamurthy
> wrote:
> >
> > Hi,
Hi,
From our understanding of the documentation, I guess it's not possible to
open the Redis connection within the DataStream directly. In that case,
how should I proceed? How can I access a DB client object within the
stream?
I am using Flink 1.7. Any help here would be appreciated.
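A minimal sketch of what Yangze suggests, assuming the Jedis client; the
class name, host, port, and lookup logic are placeholders:

  import org.apache.flink.api.common.functions.RichMapFunction;
  import org.apache.flink.configuration.Configuration;
  import redis.clients.jedis.Jedis;

  public class RedisLookupMap extends RichMapFunction<String, String> {

      // Jedis is not serializable, so keep it transient and create it in open().
      private transient Jedis jedis;

      @Override
      public void open(Configuration parameters) {
          // Runs once per parallel task instance, on the TaskManager.
          jedis = new Jedis("redis-host", 6379); // placeholder host/port
      }

      @Override
      public String map(String value) {
          String extra = jedis.get(value); // hypothetical lookup keyed by the event
          return extra == null ? value : value + "," + extra;
      }

      @Override
      public void close() {
          if (jedis != null) {
              jedis.close();
          }
      }
  }

The stream would then use it as stream.map(new RedisLookupMap()).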
Hi,
I have Flink 1.7 running in our production. I can see that the memory used
by the JobManagers is pretty high. Below is a snapshot of our pods
running the JM.
I had to commit 3GB of memory, with a memory limit of 5GB. I have one cluster
per job, to make it scalable based on our traffic rate and
Hi,
We are trying to upgrade Flink from 1.7 to 1.10. We have our
checkpoints on Google Cloud Storage today, but this is not working well
with 1.10. Below is the error we get; any help here would be appreciated.
We followed the blog below for the GCS-related configurations.
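For reference, a sketch of the kind of configuration such a setup usually
involves; the bucket name is a placeholder and the exact jars depend on the
Flink and gcs-connector versions, so treat this as an assumption rather than
the poster's actual setup:

  # flink-conf.yaml
  state.checkpoints.dir: gs://my-checkpoint-bucket/flink/checkpoints

  # core-site.xml on Flink's classpath wires in the GCS connector:
  #   fs.gs.impl = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
  #   fs.AbstractFileSystem.gs.impl = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS
  # with the (shaded) gcs-connector jar placed in Flink's lib/ directory.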
> > really matters. The committed memory should
> > increase automatically when it's needed.
> >
> > Thank you~
> >
> > Xintong Song
> >
> >
> >
> > On Fri, Jun 12, 2020 at 2:24 PM Ramya Ramamurthy
> > wrote:
> >
> >> Hi Xintong,
>
> configuring 'taskmanager.heap.size' to 4GB. If RocksDB is used in your
> workload, you may need to further increase the off-heap memory size.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Fri, Jun 12, 2020 at 1:11 PM Ramya Ramamurthy
> wrote:
>
> >
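For context, the option Xintong refers to is a flink-conf.yaml entry; a
sketch with an illustrative value:

  taskmanager.heap.size: 4096m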
> version which comes with native Kubernetes support.
>
> Cheers,
> Till
>
> On Tue, Jun 9, 2020 at 8:45 AM Ramya Ramamurthy wrote:
>
> > Hi,
> >
> > My Flink jobs are constantly going down after about an hour, with the
> > below exception.
> >
Hi,
My Flink jobs are constantly going down after about an hour, with the below
exception.
This is Flink 1.7 on Kubernetes, with checkpoints to Google Storage.
AsynchronousException{java.lang.Exception: Could not materialize
checkpoint 21 for operator Source: Kafka011TableSource(sid, _zpsbd3,
_zpsbd4,
> parallelism when
> resuming from the taken savepoint.
>
> A side note, the rescaling feature has been removed in Flink >= 1.9.0
> because of some inherent limitations.
>
> [1] https://issues.apache.org/jira/browse/FLINK-10354
>
> Cheers,
> Till
>
> On Fri, Jan 31, 2020 at 11:49
us in progressing.
Thanks,
On Thu, Jan 30, 2020 at 8:45 PM Till Rohrmann wrote:
> Hi Ramya,
>
> I think this message is better suited for the user ML list. Which version
> of Flink are you using? Have you checked the Flink logs to see whether they
> contain anything suspicious?
Hi,
I am trying to dynamically increase the parallelism of the job. While
trying to trigger a savepoint as part of this, I get the following error.
Any help would be appreciated.
The URL triggered is:
http://stgflink.ssdev.in:8081/jobs/2865c1f40340bcb19a88a01d6ef8ff4f/savepoints/
{
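For comparison, a sketch of how the savepoint trigger is usually issued
against the REST API in this Flink version; the target directory below is a
placeholder, not from the thread:

  POST http://stgflink.ssdev.in:8081/jobs/2865c1f40340bcb19a88a01d6ef8ff4f/savepoints
  Content-Type: application/json

  {"target-directory": "gs://my-bucket/savepoints", "cancel-job": false}

The trigger endpoint only accepts POST; the response contains a request id
that can then be polled under .../savepoints/<request-id>.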
mp?
>
> Unfortunately images do not work on the Apache mailing list. Can you post
> the image somewhere and send the link instead?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Thu, Jul 18, 2019 at 9:36 AM Ramya Ramamurthy
> wrote:
>
> > Hi,
>
Hi,
I would like to know if there is some configuration which enables allowed
lateness to be configured on a table. The documentation mentions streams
and not tables.
If this is not present, is there a way to collect late records on a side
output for tables?
Today, we see some late packet drops in Flink,
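For reference, on the DataStream side the lateness knobs look roughly like
this; window size, lateness, types, and the window function are illustrative
assumptions:

  import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
  import org.apache.flink.streaming.api.windowing.time.Time;
  import org.apache.flink.util.OutputTag;

  OutputTag<Row> lateTag = new OutputTag<Row>("late-data"){};

  SingleOutputStreamOperator<Row> result = stream
      .keyBy(0)
      .window(TumblingEventTimeWindows.of(Time.minutes(1)))
      .allowedLateness(Time.minutes(5))   // keep window state around for late records
      .sideOutputLateData(lateTag)        // route records later than that to a side output
      .apply(new MyWindowFunction());     // hypothetical window function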
packet/batch arrives.
Thanks.
On Fri, Jun 21, 2019 at 6:07 PM Ramya Ramamurthy wrote:
> Yes, we do maintain checkpoints
> env.enableCheckpointing(30);
>
> But we assumed it is for Kafka consumer offsets. Not sure how this is
> useful in this case. Can you please elaborate on t
ion?
>
> On Fri, Jun 21, 2019, 13:17 Ramya Ramamurthy wrote:
>
> > Hi,
> >
> > We use Kafka->Flink->Elasticsearch in our project.
> > The data is not getting flushed to Elasticsearch until the next batch
> > arrives.
> > E.g.: If t
Hi,
We use Kafka->Flink->Elasticsearch in our project.
The data is not getting flushed to Elasticsearch until the next batch
arrives.
E.g.: if the first batch contains 1000 packets, it gets pushed to
Elasticsearch only after the next batch arrives [irrespective of reaching
the batch time
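This flushing behaviour is usually governed by the sink's bulk settings; a
sketch of the relevant knobs, assuming the flink-connector-elasticsearch6
builder API (values are illustrative):

  ElasticsearchSink.Builder<Row> builder =
      new ElasticsearchSink.Builder<>(httpHosts, sinkFunction); // your hosts and sink function

  builder.setBulkFlushMaxActions(1000); // flush once 1000 actions are buffered
  builder.setBulkFlushInterval(5000);   // ...or at least every 5 seconds (ms)

With only a max-actions bound configured, a small batch can sit in the buffer
until later records arrive, which would match the symptom described above.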
Hi,
I would like to know the best way to read application configurations from
a Flink job. Are MySQL connectors supported, so that some application-
related configurations can be read from SQL?
How can re-reading/re-loading of configurations be handled here?
Can somebody help with some best
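A minimal sketch of one common approach: load the settings once per task via
plain JDBC in open() of a rich function; the table name, columns, and
connection details are hypothetical:

  import java.sql.*;
  import java.util.HashMap;
  import java.util.Map;
  import org.apache.flink.api.common.functions.RichMapFunction;
  import org.apache.flink.configuration.Configuration;

  public class ConfigAwareMap extends RichMapFunction<String, String> {

      private transient Map<String, String> appConfig;

      @Override
      public void open(Configuration parameters) throws Exception {
          appConfig = new HashMap<>();
          try (Connection conn = DriverManager.getConnection(
                   "jdbc:mysql://db-host:3306/app", "user", "secret"); // placeholders
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT k, v FROM app_config")) {
              while (rs.next()) {
                  appConfig.put(rs.getString("k"), rs.getString("v"));
              }
          }
      }

      @Override
      public String map(String value) {
          // use appConfig.get(...) to steer the processing; identity here
          return value;
      }
  }

Re-reading would need either a periodic refresh (e.g. from a timer thread) or
restarting the job; the sketch above only loads at start-up.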
META-INF/services/org.apache.flink.table.factories.TableFactory? Is the
> JsonXXXFactory listed there?
>
> If not, maybe a Maven service file transformer is missing to collect all
> table factories into the service file.
>
> Regards,
> Timo
>
>
> On 06.02.19 at 13:37, Ramya Ramamurthy wrote:
> > Hi,
Hi,
I am making use of Flink SQL for processing my data,
and I would like to compute top counts for many of the parameters in a row.
Is there any better way to do this, as I can see that LIMIT is not supported
in batch queries?
Any help is appreciated.
Thanks,
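For reference, newer Flink versions express top-N with ROW_NUMBER() instead
of LIMIT; a sketch with hypothetical table and column names (not necessarily
accepted by the older planner):

  SELECT category, cnt
  FROM (
    SELECT category, cnt,
           ROW_NUMBER() OVER (PARTITION BY category ORDER BY cnt DESC) AS rn
    FROM aggregated_counts
  )
  WHERE rn <= 10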
Hi,
I am trying to read from Kafka and push to Elasticsearch. I use Flink SQL
tables as well.
I have added all the dependencies in my pom file, but I still get this
error.
If I add the sql-jar to my /lib folder, everything works fine. But I want
to know what is being missed in creating a fat jar.
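What Timo describes usually comes down to the ServicesResourceTransformer in
the shade plugin, which merges the META-INF/services files from all the
table-connector jars into the fat jar; a sketch of the relevant pom.xml
fragment:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <executions>
      <execution>
        <phase>package</phase>
        <goals><goal>shade</goal></goals>
        <configuration>
          <transformers>
            <!-- merge all META-INF/services files instead of keeping only one -->
            <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
          </transformers>
        </configuration>
      </execution>
    </executions>
  </plugin>

Without this, the single TableFactory service file from one jar shadows the
others, which matches the fat-jar vs. /lib difference described above.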
Hi,
I have encountered a weird issue. When constructing a Table Query with CASE
Statements, like below:
.append("CASE ")
.append("WHEN aggrcat = '0' AND botcode='r4' THEN 'monitoring' ")
.append("WHEN aggrcat = '1' AND botcode='r4' THEN 'aggregator' ")
.append("WHEN aggrcat = '2' AND
nResult.getSideOutput(lateTag);
>
> Best, Fabian
>
> On Mon, Jan 28, 2019 at 11:09, Ramya Ramamurthy <
> hair...@gmail.com> wrote:
>
> > Hi,
> >
> > We were trying to collect the sideOutput,
> > but failed to understand how to convert this
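Filling in around Fabian's fragment above, the usual shape is roughly the
following; the variable names are assumptions, and the OutputTag must be the
same instance handed to sideOutputLateData on the window:

  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
  import org.apache.flink.util.OutputTag;

  OutputTag<Row> lateTag = new OutputTag<Row>("late-data"){};

  SingleOutputStreamOperator<Row> mainResult = ...; // the windowed aggregation
  DataStream<Row> lateStream = mainResult.getSideOutput(lateTag);
  lateStream.addSink(...); // e.g. a separate ES index or Kafka topic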
Hi,
I am reading from Kafka and pushing to ES. We have basically two operators:
one consumes from Kafka, and the other operates on the Flink table and
pushes to ES.
[image: image.png]
I am able to see the Flink metric numLateRecordsDropped on my second
ss.
>
> Best, Fabian
>
> On Thu, Jan 24, 2019 at 06:42, Ramya Ramamurthy <
> hair...@gmail.com> wrote:
>
> > Hi,
> >
> > I have a query with regard to Late arriving records.
> > We are using Flink 1.7 with Kafka Consumers of version 2.11.0.11.
>
Hi,
I have a query with regard to late-arriving records.
We are using Flink 1.7 with Kafka Consumers of version 2.11.0.11.
In my sink operator, which converts this table to a stream that is being
pushed to Elasticsearch, I am able to see the metric
"*numLateRecordsDropped*".
My Kafka
Hi,
I have a setup with Flink 1.7, Kafka 0.11 and ES 6.5.
I can see the Flink Kafka consumer consuming messages, but these are not
passed on to the next level, that is, the Elasticsearch sink. I am unable
to find any logs relevant to this.
Logs from my Kafka consumers:
2019-01-16 17:28:05,860 DEBUG
way to implement it with kafka[2]
>
>
>
>
> 1.
>
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/event_timestamps_watermarks.html#assigning-timestamps
> 2.
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/connectors/kafka.html#kafka-cons
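A rough sketch of what the two links describe, assigning timestamps and
watermarks directly on the Kafka source with the 1.7-era API; the topic,
properties, bound, and extraction logic are illustrative:

  import java.util.Properties;
  import org.apache.flink.api.common.serialization.SimpleStringSchema;
  import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
  import org.apache.flink.streaming.api.windowing.time.Time;
  import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

  Properties props = new Properties();
  props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder

  FlinkKafkaConsumer011<String> consumer =
      new FlinkKafkaConsumer011<>("events", new SimpleStringSchema(), props);

  consumer.assignTimestampsAndWatermarks(
      new BoundedOutOfOrdernessTimestampExtractor<String>(Time.seconds(10)) {
          @Override
          public long extractTimestamp(String element) {
              return parseEventTimeMillis(element); // hypothetical helper
          }
      });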
? You can check the
> ES connector for table API here:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/table/connect.html#elasticsearch-connector
>
> Best,
>
> Dawid
>
> On 10/01/2019 09:21, Ramya Ramamurthy wrote:
> > Hi,
> >
> > I am le
Hi,
I am learning Flink. With Flink 1.7.1, I am trying to read from Kafka and
insert into Elasticsearch. I have a Kafka connector converting the data to
a Flink table. In order to insert into Elasticsearch, I have converted this
table to a DataStream, so as to be able to use the ElasticsearchSink.
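A minimal sketch of that conversion; the environment, table, and sink
builder names are assumptions:

  import org.apache.flink.streaming.api.datastream.DataStream;
  import org.apache.flink.types.Row;

  // tableEnv is a StreamTableEnvironment, resultTable the table defined over Kafka
  DataStream<Row> rows = tableEnv.toAppendStream(resultTable, Row.class);
  rows.addSink(esSinkBuilder.build()); // ElasticsearchSink.Builder from the ES connector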