zwangsheng commented on code in PR #6876:
URL: https://github.com/apache/kyuubi/pull/6876#discussion_r1899881820
##########
kyuubi-server/src/main/scala/org/apache/kyuubi/engine/spark/SparkProcessBuilder.scala:
##########
@@ -294,6 +336,14 @@ class SparkProcessBuilder(
}
}
+ def isK8sClusterMode: Boolean = {
Review Comment:
clusterManager().exists(cm => cm.toLowerCase(Locale.ROOT).startsWith("k8s")) &&
  deployMode().exists(_.toLowerCase(Locale.ROOT) == "cluster")
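
For context, a minimal sketch of how the suggested check could read in place, assuming (as the .exists calls imply) that clusterManager() and deployMode() both return Option[String]:

import java.util.Locale

def isK8sClusterMode: Boolean = {
  // Spark on Kubernetes is selected via a master URL of the form "k8s://...",
  // and cluster deploy mode via spark.submit.deployMode=cluster.
  clusterManager().exists(_.toLowerCase(Locale.ROOT).startsWith("k8s")) &&
    deployMode().exists(_.toLowerCase(Locale.ROOT) == "cluster")
}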
##########
docs/deployment/engine_on_kubernetes.md:
##########
@@ -48,6 +48,12 @@ The minimum required configurations are:
* spark.kubernetes.file.upload.path (path on S3 or HDFS)
* spark.kubernetes.authenticate.driver.serviceAccountName ([viz ServiceAccount](#serviceaccount))
+Vanilla Spark supports neither a rolling nor an expiration mechanism for `spark.kubernetes.file.upload.path`. If you use a
+file system that does not support TTL, e.g. HDFS, additional cleanup mechanisms are needed to prevent the files in this
+directory from growing indefinitely. Since Kyuubi v1.11.0, you can configure `spark.kubernetes.file.upload.path` with the
+placeholders `{{YEAR}}`, `{{MONTH}}` and `{{DAY}}`, and enable `kyuubi.kubernetes.spark.autoCreateFileUploadPath.enabled`
+to let the Kyuubi server create the directory with 777 permission automatically before submitting the Spark application.
+
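For illustration, the quoted docs describe a setup along the following lines; the HDFS URI and the nested date layout are assumptions for the example, not values taken from the PR:

spark.kubernetes.file.upload.path=hdfs://hadoop-cluster/spark-upload/{{YEAR}}/{{MONTH}}/{{DAY}}
kyuubi.kubernetes.spark.autoCreateFileUploadPath.enabled=true

With this, the Kyuubi server would resolve the placeholders at submission time and pre-create, e.g., /spark-upload/2025/01/01 with 777 permission before invoking spark-submit.
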
Review Comment:
It seems that the current implementation does not actually solve the file-growth problem: the dated
directories get created, but nothing ever deletes the old ones. Will this be addressed in subsequent PRs?
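
To make the concern concrete, an external cleanup job is one possible mechanism. This is a sketch only: the base path, the 7-day retention window, and the nested {{YEAR}}/{{MONTH}}/{{DAY}} layout are all assumptions, and none of it is part of this PR:

import java.time.LocalDate
import java.time.format.DateTimeFormatter
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object UploadPathCleaner {
  def main(args: Array[String]): Unit = {
    // Hypothetical base dir, matching spark.kubernetes.file.upload.path
    // configured as hdfs://hadoop-cluster/spark-upload/{{YEAR}}/{{MONTH}}/{{DAY}}.
    val base = new Path("hdfs://hadoop-cluster/spark-upload")
    val fs = FileSystem.get(base.toUri, new Configuration())
    val cutoff = LocalDate.now().minusDays(7) // assumed retention window
    val fmt = DateTimeFormatter.ofPattern("yyyy/MM/dd")
    for {
      year <- fs.listStatus(base)
      month <- fs.listStatus(year.getPath)
      day <- fs.listStatus(month.getPath)
    } {
      val rel = s"${year.getPath.getName}/${month.getPath.getName}/${day.getPath.getName}"
      // Recursively drop day-level directories older than the retention window.
      if (LocalDate.parse(rel, fmt).isBefore(cutoff)) fs.delete(day.getPath, true)
    }
  }
}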
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]