[ https://issues.apache.org/jira/browse/SPARK-32744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17970352#comment-17970352 ]
Laurenceau Julien commented on SPARK-32744:
-------------------------------------------

As it stands, Spark assumes that real CPU/memory usage is 100% efficient. However, most Spark jobs are not purely compute-bound; they also perform I/O, which is not CPU intensive. In practice, it is common to observe very low resource utilization on Kubernetes clusters used as Spark backends, as low as 30% real CPU in my experience. This waste could easily be addressed with this feature.

> request executor cores with decimal when spark on k8s
> -----------------------------------------------------
>
>                 Key: SPARK-32744
>                 URL: https://issues.apache.org/jira/browse/SPARK-32744
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Yu Wang
>            Priority: Minor
>         Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png
>
> The current Spark version does not support requesting executor cores with a decimal value when running Spark on Kubernetes, because `cores` is an Int in the CoarseGrainedExecutorBackend class.
>
> !screenshot-1.png!

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
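For context on the request side, Spark's Kubernetes backend already exposes `spark.kubernetes.executor.request.cores`, which lets the executor pod's CPU *request* be fractional even though `spark.executor.cores` (the task-slot count) remains an integer. A minimal sketch of how that can be used to address the low-utilization case described above; the cluster endpoint, instance counts, and values below are illustrative, not taken from the ticket:

```shell
# Sketch: run 4 executors, each advertising 1 task slot
# (spark.executor.cores must be an integer), but ask Kubernetes
# for only half a CPU per pod, with a hard limit of 1 CPU.
# Endpoint and values are illustrative assumptions.
spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode cluster \
  --conf spark.executor.instances=4 \
  --conf spark.executor.cores=1 \
  --conf spark.kubernetes.executor.request.cores=0.5 \
  --conf spark.kubernetes.executor.limit.cores=1 \
  --class org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/jars/spark-examples.jar
```

This decouples scheduling density on the Kubernetes side from Spark's integer core count, which is one way to push cluster utilization above the ~30% figure mentioned in the comment, at the cost of possible CPU throttling when the job does become compute-bound.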