Thank you, Gyula. We are working on validating whether setting a larger
taskmanager.memory.jvm-overhead.fraction eases this problem, and on the other
side we are trying to find a way in the deployment path to ease it.
I agree with your proposal; maybe I can find some time to make a PR for
FLINK-33548.
I understand your problem but I think you are trying to find a solution in
the wrong place.
Have you tried setting taskmanager.memory.jvm-overhead.fraction? That
would reserve more memory from the total process memory for non-JVM use.
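For reference, a minimal sketch of the relevant settings (values are illustrative, not recommendations):

```yaml
# flink-conf.yaml (sketch; values are illustrative)
# JVM overhead is the slice of total process memory reserved for non-JVM
# use (native memory, thread stacks, glibc arenas, etc.). Default: 0.1.
taskmanager.memory.process.size: 1728m
taskmanager.memory.jvm-overhead.fraction: 0.2
# the derived overhead is clamped to this range:
taskmanager.memory.jvm-overhead.min: 192m
taskmanager.memory.jvm-overhead.max: 1g
```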
Gyula
On Tue, Dec 5, 2023 at 1:50 PM richard.su wrote:
Hi Richard,
Shouldn't the solution then be solving the glibc problem?
Best regards,
Martijn
On Tue, Dec 5, 2023 at 1:49 PM richard.su wrote:
Sorry, "To be clear, we need a container with memory larger than the request,
and confirm this pod has Guaranteed QoS" needs to be "To be clear, we need a
container with memory larger than process.size, and confirm this pod has
Guaranteed QoS."
Thanks.
Richard Su
> On Dec 5, 2023, at 20:47, richard.su
Hi Gyula, yes, this is a special case in our scenario; sorry that it's hard to
understand. We want to reserve some memory beyond the JobManager's or
TaskManager's process size. To be clear, we need a container with memory larger
than the request, and confirm this pod has Guaranteed QoS.
Richard, I still don't understand why the current setup doesn't work for
you. According to
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/memory/mem_setup/
:
The process memory config (which is what we configure) translates directly
into the container request size.
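As an illustration of that mapping (values illustrative): in native Kubernetes mode the single process-size option is what ends up as the pod's memory request:

```yaml
# flink-conf.yaml (illustrative)
jobmanager.memory.process.size: 1600m   # becomes the JM pod memory request
taskmanager.memory.process.size: 1728m  # becomes the TM pod memory request
```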
I think the new configuration options could be
"kubernetes.taskmanager.memory.amount" and "kubernetes.jobmanager.memory.amount".
Once we can calculate the limit-factor from the difference between requests and
limits, then in native mode we would no longer check process.size as the default
memory, but use this
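A hypothetical sketch of how the proposed keys could relate to the pod resources (the key names come from this thread; the exact semantics are an assumption, and these are not existing Flink options):

```yaml
# Hypothetical, per the proposal in this thread -- not existing Flink options
kubernetes.taskmanager.memory.amount: 64Mi   # would become the pod request
# limit-factor derived from the limits/requests ratio:
#   128Mi / 64Mi = 2.0  ->  pod memory limit = 128Mi
```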
Hi Gyula, in my opinion this will still use the FlinkDeployment's resource
field to set jobmanager.memory.process.size, and I have mentioned an uncovered
case:
When a user wants to define a FlinkDeployment whose JobManager has 1G of memory
resources in the container field but configures
This is the proposal according to FLINK-33548:
spec:
  taskManager:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
I honestly think this is much more intuitive and easier than using the
podTemplate, which is
Hi Gyula,
FLINK-33548 proposes adding a new resource field to match the Kubernetes
pod resource configuration. Here's my suggestion: instead of adding a new
resource field, let's use a pod template for more advanced resource setups.
Adding a new resource field might confuse users. This change can
Sorry Gyula, let me explain more about point 2: if I avoid the override, I
will get a JobManager pod with resources still consistent with
“jobmanager.memory.process.size”, but a FlinkDeployment with resources larger
than that.
Thanks for your time.
Richard Su
> On Dec 5, 2023
Thank you for your time, Gyula. I have more questions about FLINK-33548; we can
discuss it further and make progress:
1. I agree with you about declaring resources in the FlinkDeployment resource
sections. But the Flink Operator will override “jobmanager.memory.process.size”
and
As you can see in the Jira ticket there hasn't been any progress; nobody has
started to work on this yet.
I personally don't think it's confusing to declare resources in the
FlinkDeployment resource sections. It's well documented and has worked very
well so far for most users.
This is pretty common
Hi Gyula, has there been any progress on FLINK-33548? I would like to join the
discussion, but I haven't seen any discussion at the URL.
I also create FlinkDeployments via the Flink Operator, which indeed overrides
the process size with TaskmanagerSpec.resources or JobmanagerSpec.resources, which
Hi!
Please see the discussion in
https://lists.apache.org/thread/6p5tk6obmk1qxf169so498z4vk8cg969
and the ticket: https://issues.apache.org/jira/browse/FLINK-33548
We should follow the approach outlined there. If you are interested you are
welcome to pick up the operator ticket.
Unfortunately
Hello everyone,
I've encountered an issue while using Flink Kubernetes native deployment.
Despite setting resource limits in the pod template, it appears that these
limits and requests are not considered during JobManager (JM) and TaskManager
(TM) pod deployment. I found that an issue had been opened in Jira