Thank you Guowei. That was the trick!
By default, jobs in the Completed Jobs section expire and are removed after 1 hour. I
have increased jobstore.expiration-time, and now completed jobs are retained.
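For reference, a sketch of the change in flink-conf.yaml (the 24-hour value is just an example; the option takes seconds and defaults to 3600):

```yaml
# flink-conf.yaml
# Keep completed jobs in the web UI for 24 hours instead of the
# default 1 hour (jobstore.expiration-time is in seconds).
jobstore.expiration-time: 86400
```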
Thanks,
Jins
From: Guowei Ma
Date: Wednesday, April 10, 2019 at 3:29 AM
To: Jins George
Cc: Timothy
Any input on this UI behavior?
Thanks,
Jins
From: Timothy Victor
Date: Monday, April 8, 2019 at 10:47 AM
To: Jins George
Cc: user
Subject: Re: Flink 1.7.2 UI : Jobs removed from Completed Jobs section
I face the same issue in Flink 1.7.1.
Would be good to know a solution.
Tim
On Mon, Apr
- Stopping the JobMaster for job
dwellalert-ubuntu-0403174608-698009a0(b274377e6a223078d6f40b9c0620ee0d).
Restart Strategy Conf:
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 10
restart-strategy.fixed-delay.delay: 10 s
Thanks
Jins George
Thank you Gary. That was helpful.
Thanks,
Jins George
On 2/17/19 10:03 AM, Gary Yao wrote:
Hi Jins George,
Every TM brings additional overhead, e.g., more heartbeat messages. However, a
cluster with 28 TMs would not be considered big as there are users that are
running Flink applications on
Thanks Gary. Understood the behavior.
I am leaning towards running 7 TMs on each machine (8 cores). I have 4 nodes, so that
will end up with 28 taskmanagers and 1 jobmanager. I was wondering if this could
bring an additional burden on the jobmanager? Is it recommended?
Thanks,
Jins George
On 2/14/19 8:49 AM
tinue using the 'new' mode?
Thanks,
Jins George
8081 is the default port for a standalone cluster.
For a YARN Flink cluster, go to the Running applications list in the YARN UI;
you can get to the Flink UI by clicking the ApplicationMaster link for the YARN
session.
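If you prefer the command line over the ResourceManager UI, the tracking URL that YARN reports for the session application points at the same place (a sketch using the standard Hadoop CLI; which row is your Flink session depends on how you named it):

```shell
# List running YARN applications; the Tracking-URL column of the
# Flink session application is the address of the Flink web UI.
yarn application -list -appStates RUNNING
```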
Regards,
Jins
On Feb 1, 2018, at 8:06 AM, Raja.Aravapalli
Thank You Ufuk & Shannon. Since my Kafka consumer is an
UnboundedKafkaSource from Beam, I am not sure if the records-lag-max metric is
exposed. Let me research further.
Thanks,
Jins George
On 01/08/2018 10:11 AM, Shannon Carey wrote:
Right, backpressure only measures backpressure on the inside of
1.2 w/ Beam 2.0.
Thanks,
Jins George
Thanks Aljoscha. I have not tried with 1.3. I will try and check the
behavior.
Regarding setting UIDs on operators from Beam, do you know if that's
something planned for a near-future release?
Thanks,
Jins George
On 11/30/2017 01:48 AM, Aljoscha Krettek wrote:
Hi,
I think you might be
/savepoints.html
Thanks,
Jins George
Hi Aviad,
I had a similar situation, and my solution was to use the Flink
monitoring REST API (/jobs/{jobid}/checkpoints) to get the mapping
between job and checkpoint file.
Wrap this in a script and run it periodically (in my case, every 30 seconds).
You can also configure each job with an external
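A minimal sketch of such a polling script. It assumes the standard layout of the /jobs/{jobid}/checkpoints response (latest.completed.external_path); the helper names, host, and port are mine:

```python
import json
from urllib.request import urlopen

def latest_checkpoint_path(checkpoints_json):
    """Return the external path of the most recent completed checkpoint,
    or None if there is none. Expects the JSON body returned by
    the Flink REST endpoint /jobs/{jobid}/checkpoints."""
    latest = checkpoints_json.get("latest", {}).get("completed")
    if not latest:
        return None
    return latest.get("external_path")

def poll(base_url, job_id):
    # base_url, e.g. "http://jobmanager:8081", is an assumption;
    # run this from cron or a loop every 30 seconds.
    with urlopen(f"{base_url}/jobs/{job_id}/checkpoints") as resp:
        return latest_checkpoint_path(json.load(resp))

# Offline demo with a trimmed sample payload:
sample = {
    "latest": {
        "completed": {
            "id": 42,
            "external_path": "hdfs:///flink/checkpoints/ab12/chk-42",
        }
    }
}
print(latest_checkpoint_path(sample))  # → hdfs:///flink/checkpoints/ab12/chk-42
```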
changed since
the JVMs are already running.
If I may ask, what’s your use case for this? Are you still using Beam on Flink
or are you using vanilla Flink with this?
Best,
Aljoscha
On 11. Jul 2017, at 07:24, Jins George wrote:
Thanks Nico. I am able to pass arguments to the main program; that
works, but it's not exactly what I was looking for.
I guess to give all worker JVMs the same system property, I have to
set it at yarn-session creation time using -D (haven't tried it yet).
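For the record, the yarn-session variant would look something like this (untested on my side; env.java.opts is the Flink option for passing JVM arguments to the started processes, and the property name and container count are just examples):

```shell
# Start the YARN session with a dynamic Flink property so that every
# JobManager/TaskManager JVM is launched with the system property set:
./bin/yarn-session.sh -n 4 \
  -Denv.java.opts="-Dconfig.file=/path/to/app.properties"
```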
Thanks,
Jins George
On 07/10/20
Hello,
I want to set the path of a properties file as a system property in my
application (something like -Dkey=value).
Is there a way to set it while submitting a Flink job to a running YARN
session? I am using bin/flink run to submit the job to an already
running YARN session.
Thanks,
Jins
machine. I
have set YARN_CONF_DIR on the client machine and placed
yarn-site.xml, core-site.xml, etc. there. However, it does not seem to be
picking up these files.
Is this the right way to submit to a remote YARN cluster?
Thanks,
Jins George