xieshuaihu closed pull request #46278: [SPARK-48040][CONNECT][WIP]Spark connect
supports scheduler pool
URL: https://github.com/apache/spark/pull/46278
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
xieshuaihu commented on PR #46278:
URL: https://github.com/apache/spark/pull/46278#issuecomment-2088566513
@hvanhovell @HyukjinKwon
There are two reasons to support setting the scheduler pool in Spark Connect.
1. Vanilla Spark supports the fair scheduler and pools; if the server runs in a
specifi
hvanhovell commented on PR #46278:
URL: https://github.com/apache/spark/pull/46278#issuecomment-2086174926
I am not 100% sure we should expose this as a client-side conf. A client
shouldn't have to set these things. Can't we just make the connect server use a
specific scheduler pool?
-
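For context on what "use a specific scheduler pool" looks like, in vanilla Spark the fair-scheduler mode and the allocation file must be configured before the context starts, while the pool a job lands in is chosen at runtime via a thread-local property. A minimal sketch (the pool name `connectPool` and the allocation-file path are illustrative, not from this PR):

```scala
// Sketch: selecting a fair-scheduler pool in vanilla Spark.
// "connectPool" and the file path below are hypothetical examples.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("scheduler-pool-demo")
  // Scheduler mode and pool definitions are fixed at context creation time.
  .config("spark.scheduler.mode", "FAIR")
  .config("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
  .getOrCreate()

// The pool itself is a per-thread, runtime choice: jobs submitted from this
// thread are assigned to "connectPool".
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "connectPool")
```

The server-side variant hvanhovell suggests would amount to the Connect server calling `setLocalProperty` itself on the threads that execute client requests, instead of exposing the pool name as a client conf.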
xieshuaihu commented on PR #46278:
URL: https://github.com/apache/spark/pull/46278#issuecomment-2084934296
@HyukjinKwon
I added a new RPC to make the `setSchedulerPool` API less confusing.
Please let me know whether this PR is headed in the right direction. If it is,
more unit tests will be added.
HyukjinKwon commented on PR #46278:
URL: https://github.com/apache/spark/pull/46278#issuecomment-2084282381
Oh, okay. I misread the PR. I thought you were making `spark.scheduler.mode` a
runtime conf. Now I see that it makes sense.
xieshuaihu commented on PR #46278:
URL: https://github.com/apache/spark/pull/46278#issuecomment-2084128139
Let me clarify this question.
In vanilla Spark, we can do this:
```scala
// create the context; this config can only be set once,
// or set in spark-submit: --conf spark.sc