Thanks for the reply, Gyula and Max.
Prasanna
On Sat, 26 Nov 2022, 00:24 Maximilian Michels, wrote:
> Hi John, hi Prasanna, hi Rui,
>
> Gyula already gave great answers to your questions, just adding to it:
>
> >What’s the reason to add auto scaling to the Operator instead of to the
>
Hi Max,
This is a great initiative, and there is a good discussion going on.
We have set up our Flink cluster using Amazon ECS. So it would be good to
design it in such a way that we can deploy the autoscaler in a separate
Docker image which could observe the JM and jobs and emit outputs that can
be used to trigger the
Hi,
Team, we are writing our own Prometheus reporter to make sure that we are
capturing data in histograms rather than summaries.
We were able to do it successfully in version 1.12.7.
But while upgrading to version 1.14.3, we find
that MetricRegistryTestUtils is not available in the source code.
Never mind, I was able to find it in a different place:
https://github.com/apache/flink/blob/release-1.14.3-rc1/flink-runtime/src/test/java/org/apache/flink/runtime/metrics/MetricRegistryTestUtils.java
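For context on why histograms matter here: a Prometheus histogram records cumulative counts in fixed buckets, which can be aggregated across task managers, while a summary ships pre-computed quantiles that cannot be merged. A minimal stdlib-only sketch of the bucketing idea (bucket boundaries and class names are illustrative, not part of Flink or the Prometheus client):

```java
// Sketch of Prometheus-style cumulative histogram bucketing.
// Bucket boundaries are illustrative; a real reporter would make them configurable.
class BucketedHistogram {
    private final double[] upperBounds; // sorted ascending
    private final long[] counts;        // counts[i] = observations <= upperBounds[i]
    private long totalCount = 0;
    private double sum = 0.0;

    BucketedHistogram(double... upperBounds) {
        this.upperBounds = upperBounds;
        this.counts = new long[upperBounds.length];
    }

    void observe(double value) {
        totalCount++;
        sum += value;
        for (int i = 0; i < upperBounds.length; i++) {
            if (value <= upperBounds[i]) {
                counts[i]++; // cumulative: every bucket the value falls under
            }
        }
    }

    long bucketCount(double upperBound) {
        for (int i = 0; i < upperBounds.length; i++) {
            if (upperBounds[i] == upperBound) return counts[i];
        }
        return totalCount; // treat any unknown bound as the +Inf bucket
    }

    long count() { return totalCount; }
    double sum() { return sum; }
}
```

Because the per-bucket counts are plain counters, they sum correctly across parallel instances, which is the aggregation property summaries lack.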
On Tue, Feb 1, 2022 at 11:58 AM Prasanna kumar <
prasannakumarram...@gmail.com> wrote:
> Hi,
> > > > > @gmail.com> wrote:
> > > > > >
> > > > > > > +1 for fixing it in these versions and doing quick releases.
> > > > > > > Looks good to me.
> > >
+1 for making updates for 1.12.5.
We are looking for the fix in version 1.12.
Please notify us once the fix is done.
On Mon, Dec 13, 2021 at 9:45 AM Leonard Xu wrote:
> +1 for the quick release and the special 24h vote period.
>
> > On Dec 13, 2021, at 11:49 AM, Dian Fu wrote:
> >
> > +1 for the proposal and
Hi all,
We are using Flink for our eventing system. Overall we are very happy with
the tech, the documentation, and the community's support and quick replies
on the mailing list.
Here is my experience with versions over the last year:
we were working on 1.10 initially during our research phase, then we
stabilised on 1.11 as we
Hi Flinksters,
Our repo, a Maven-based Java (Flink) project, went through an SCA
scan using the WhiteSource tool, and the following are the HIGH severity
issues reported. The target vulnerable JAR is not found when we build the
dependency tree of the project.
Could anyone let us know if Flink uses
Deep,
1) Is it a CPU/memory/IO intensive job?
Based on that you could allocate resources.
From the question, if the CPU is not utilised, you could run multiple
containers on the same machine (TM).
The following may not be exactly your case, but it should give you an idea.
A few months back I have
Thanks for the reply, Yun.
I see that when I publish messages to SNS from the map operator, in case of
any errors the checkpointing mechanism takes care of "no data loss".
One scenario I could not replicate is the method from the SDK being unable
to send messages to SNS but remaining silent, not
Hi Team,
The following is the pipeline:
Kafka => Processing => SNS Topics.
Flink does not provide an SNS connector out of the box.
a) I implemented the above by using the AWS SDK and publishing the messages
in the Map operator itself.
The pipeline is working well; I see messages flowing to the SNS topics.
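A sketch of the pattern described in (a): publish inside the map step and rethrow on failure, so the error is not swallowed and Flink's restart-from-checkpoint machinery sees it. `SnsClient` here is a hypothetical stand-in for the real AWS SDK client; in an actual job this logic would live inside a `RichMapFunction`'s `map()` method.

```java
import java.util.function.Function;

// Sketch: publish-inside-map with failures surfaced as exceptions.
// SnsClient is a hypothetical stand-in for the AWS SDK client.
class PublishingMapper implements Function<String, String> {

    /** Hypothetical minimal publish interface. */
    interface SnsClient {
        void publish(String topicArn, String message) throws Exception;
    }

    private final SnsClient client;
    private final String topicArn;

    PublishingMapper(SnsClient client, String topicArn) {
        this.client = client;
        this.topicArn = topicArn;
    }

    @Override
    public String apply(String message) {
        try {
            client.publish(topicArn, message);
        } catch (Exception e) {
            // Rethrow so the failure is not swallowed: the job fails and
            // restarts from the last checkpoint instead of silently dropping data.
            throw new RuntimeException("Publish to " + topicArn + " failed", e);
        }
        return message; // pass the record downstream unchanged
    }
}
```

The key design point is the rethrow: a swallowed exception is exactly the "remains silent" failure mode discussed in the previous message.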
Hi Flink Dev Team,
Dynamic autoscaling based on the incoming data load would be a great
feature.
We should be able to have a rule, say: if the load increases by 20%,
extra resources should be added.
Or time-based: say, during peak hours the pipeline should scale
automatically by 50%.
This
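The kind of threshold rule described above can be sketched as a small pure function. All names and numbers here are illustrative assumptions, not any Flink API:

```java
// Sketch of a threshold-based scaling rule: scale up when the observed load
// exceeds current capacity by more than a configured fraction (e.g. 20%).
class ScalingRule {

    /**
     * @param recordsPerSec      observed incoming rate
     * @param capacityPerSlot    records/sec one parallel instance can handle
     * @param currentParallelism current number of parallel instances
     * @param scaleUpThreshold   e.g. 0.2 means "scale when load exceeds capacity by 20%"
     * @return suggested new parallelism (never below the current one here)
     */
    static int desiredParallelism(double recordsPerSec,
                                  double capacityPerSlot,
                                  int currentParallelism,
                                  double scaleUpThreshold) {
        double capacity = capacityPerSlot * currentParallelism;
        if (recordsPerSec <= capacity * (1.0 + scaleUpThreshold)) {
            return currentParallelism; // within tolerance: keep as is
        }
        // Size for the observed rate, rounding up.
        return (int) Math.ceil(recordsPerSec / capacityPerSlot);
    }
}
```

A real autoscaler would additionally smooth the rate over a window and apply a cooldown so one spike does not trigger repeated rescales.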
Hi,
I did not find an out-of-the-box Flink sink connector for HTTP or SQS.
Has anyone implemented one?
I wanted to know, if we write a custom sink function, whether it
would affect exactly-once semantic guarantees.
Thanks ,
Prasanna
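On the exactly-once question: a plain custom sink to HTTP or SQS typically gives at-least-once under checkpointing, because records between the last checkpoint and a failure are replayed. One common mitigation is making delivery idempotent, e.g. deduplicating on a message ID. A stdlib-only sketch of that idea (not a Flink `SinkFunction`; the dedup set would in practice live in checkpointed state or on the receiver side):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: idempotent delivery via message-ID dedup, turning at-least-once
// replays into effectively-once delivery.
class DedupSink {
    private final Set<String> seen = new HashSet<>();
    private final List<String> delivered = new ArrayList<>();

    /** Delivers the payload once per id; replayed ids are dropped. */
    boolean send(String id, String payload) {
        if (!seen.add(id)) {
            return false; // duplicate from a replay after recovery: skip
        }
        delivered.add(payload); // stand-in for the actual HTTP/SQS call
        return true;
    }

    List<String> delivered() { return delivered; }
}
```

This is effectively-once delivery rather than true exactly-once; the latter would need transactional support from the target system, which plain HTTP and SQS do not provide.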
you~
>
> Xintong Song
>
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/memory/mem_tuning.html#heap-state-backend
>
> On Thu, Jul 16, 2020 at 10:35 AM Prasanna kumar <
> prasannakumarram...@gmail.com> wrote:
>
>> Hi
>>
>>
Hi,
We are testing Flink and Storm for our streaming pipelines on various
features.
In terms of latency, I see that Flink comes up short against Storm even if
more CPU is given to it. I will explain in detail.
*Machine*: t2.large, 4 cores, 16 GB, used for the Flink task manager
and the Storm supervisor.
Hi,
I have a pipeline: Source -> Map (JSON transform) -> Sink.
Both source and sink are Kafka.
What is the best checkpointing mechanism?
Is setting incremental checkpoints a good option? What should I be
careful of?
I am running it on AWS EMR.
Will checkpointing slow things down?
Thanks,
Prasanna.
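For a Kafka-to-Kafka job like this, a common setup is periodic checkpointing with the RocksDB backend and incremental checkpoints enabled; incremental mode uploads only the SST files changed since the last checkpoint, which helps with large state at the cost of more complex restores. A sketch using the Flink 1.13+ API (the interval and the storage path are illustrative):

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Checkpoint every 60s; exactly-once mode is the default.
env.enableCheckpointing(60_000);

// RocksDB state backend with incremental checkpoints enabled.
env.setStateBackend(new EmbeddedRocksDBStateBackend(true));

// Illustrative checkpoint location; use your own durable storage path.
env.getCheckpointConfig().setCheckpointStorage("s3://my-bucket/checkpoints");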
Hi,
I used t2.medium machines for the task manager nodes. Each has 2 CPUs and
4 GB of memory.
But the task manager screen shows that there are 4 slots.
Generally we should match the number of slots to the number of cores.
Our pipeline is Source -> Simple Transform -> Sink.
What
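For reference, the number of slots per TaskManager is controlled by a single setting, and seeing 4 slots on a 2-vCPU machine suggests it was configured to 4 somewhere in the deployment. A flink-conf.yaml fragment matching slots to the t2.medium's 2 vCPUs (the value is a starting point, not a rule):

```
taskmanager.numberOfTaskSlots: 2
```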
Hi Community,
Could anyone let me know if Flink is used in the US healthcare tech space?
Thanks,
Prasanna.
Hi,
I have the following use case to implement in my organization.
Say there is a huge relational database (1,000 tables for each of our 30k
customers) in our monolith setup.
We want to reduce the load on the DB and prevent the applications from
hitting it for the latest events. So an extract is done
I tried to set up Flink locally as mentioned in the link
https://ci.apache.org/projects/flink/flink-docs-stable/dev/projectsetup/java_api_quickstart.html
and ended up getting the following error:
[INFO] Generating project in Interactive mode
[WARNING] No archetype found in remote catalog. Defaulting