Strange to see that you are using Spark 3.1.2, which is EOL, while you are reading source files from 3.4.0-SNAPSHOT.
On Fri, 10 Mar 2023 at 19:01, Ismail Yenigul <ismailyeni...@gmail.com> wrote:

> And if you look at the code:
>
> https://github.com/apache/spark/blob/e64262f417bf381bdc664dfd1cbcfaa5aa7221fe/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala#L194
>
>     .editOrNewResources()
>       .addToRequests("memory", executorMemoryQuantity)
>       .addToLimits("memory", executorMemoryQuantity)
>       .addToRequests("cpu", executorCpuQuantity)
>       .addToLimits(executorResourceQuantities.asJava)
>       .endResources()
>
> addToRequests and addToLimits for memory get the same value.
> Maybe it is by design, but can I set custom values for them if I use a
> pod template?
>
> On Fri, 10 Mar 2023 at 20:52, Ismail Yenigul <ismailyeni...@gmail.com> wrote:
>
>> Hi,
>>
>> We are using Spark v3.1.2, and spark.executor.memory is set.
>> But the problem is not setting spark.executor.memory; the problem is
>> that whatever value I give spark.executor.memory, the executor pod
>> ends up with that same value for both resources.limits.memory and
>> resources.requests.memory. I want to be able to set different values
>> for them.
>>
>> On Fri, 10 Mar 2023 at 20:44, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
>>
>>> What are those currently set to in spark-submit, and which Spark
>>> version are you running on k8s?
>>>
>>> --conf spark.driver.memory=2000m \
>>> --conf spark.executor.memory=2000m \
>>>
>>> HTH
>>> On Fri, 10 Mar 2023 at 17:39, Ismail Yenigul <ismailyeni...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> There are parameters to set the Spark executor CPU on k8s,
>>>> spark.kubernetes.executor.limit.cores and
>>>> spark.kubernetes.executor.request.cores, but there is no parameter
>>>> to set a memory request different from the memory limit (such as a
>>>> spark.kubernetes.executor.request.memory). For that reason,
>>>> spark.executor.memory is assigned to both requests.memory and
>>>> limits.memory, like the following:
>>>>
>>>>     Limits:
>>>>       memory:  5734Mi
>>>>     Requests:
>>>>       cpu:     4
>>>>       memory:  5734Mi
>>>>
>>>> Is there any special reason not to have a
>>>> spark.kubernetes.executor.request.memory parameter? And can I use
>>>> the spark.kubernetes.executor.podTemplateFile parameter to set a
>>>> smaller memory request than the memory limit in the pod template
>>>> file, like this?
>>>>
>>>>     Limits:
>>>>       memory:  5734Mi
>>>>     Requests:
>>>>       cpu:     4
>>>>       memory:  1024Mi
>>>>
>>>> Thanks

-- 
Bjørn Jørgensen
Vestre Aspehaug 4, 6010 Ålesund, Norge
+47 480 94 297
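To illustrate the pod template question raised in this thread, a minimal executor template might look like the sketch below. Two assumptions to flag: `spark-kubernetes-executor` is taken to be Spark's default executor container name (otherwise `spark.kubernetes.executor.podTemplateContainerName` would need to point at the right container), and the Spark docs list container resources among the fields Spark itself manages, so given the BasicExecutorFeatureStep code quoted above, Spark 3.1.2 may simply overwrite these memory values when it builds the pod. This is an untested sketch, not a confirmed workaround.

```yaml
# executor-pod-template.yaml -- untested sketch, not a confirmed workaround.
# Caveat: BasicExecutorFeatureStep (quoted in this thread) sets both the
# memory request and the memory limit from spark.executor.memory, and
# Spark-managed fields can override template values, so these settings
# may not survive pod construction.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-kubernetes-executor   # assumed default executor container name
      resources:
        requests:
          memory: "1024Mi"              # smaller request than the limit
        limits:
          memory: "5734Mi"
```

The template would then be passed on spark-submit with `--conf spark.kubernetes.executor.podTemplateFile=executor-pod-template.yaml`.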