Spark 3.5 has added a method `supportsReliableStorage` to
`ShuffleDriverComponents`, which indicates whether shuffle data is stored
reliably, for example by writing it to a distributed filesystem or persisting
it in a remote shuffle service.
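For illustration, here is a minimal sketch of how a shuffle plugin might advertise reliable storage through this API; the class name and the no-op bodies are hypothetical and are not Uniffle's actual implementation:

```scala
import java.util.Collections

import org.apache.spark.shuffle.api.ShuffleDriverComponents

// Hypothetical driver-side components for a remote shuffle service.
class RemoteShuffleDriverComponents extends ShuffleDriverComponents {

  // Driver-side setup; returns extra configs to send to executors (none here).
  override def initializeApplication(): java.util.Map[String, String] =
    Collections.emptyMap()

  // Driver-side teardown when the application ends.
  override def cleanupApplication(): Unit = {}

  // New in Spark 3.5: report that shuffle data survives executor loss, so
  // dynamic allocation can release executors without forcing recomputation.
  override def supportsReliableStorage(): Boolean = true
}
```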
Uniffle is a general-purpose remote shuffle service
(https://github.com/apache/incubator-uniffle). It can improve the experience
of running Spark on K8S. After Spark 3.5 is released, Uniffle will support the
`ShuffleDriverComponents` API; see [1].
If you are interested in more details about Uniffle, see [2].

[1] https://github.com/apache/incubator-uniffle/issues/802
[2] https://uniffle.apache.org/blog/2023/07/21/Uniffle%20-%20New%20chapter%20for%20the%20shuffle%20in%20the%20cloud%20native%20era

From: Mich Talebzadeh <mich.talebza...@gmail.com>
Date: Tuesday, August 8, 2023, 06:53
Cc: dev <dev@spark.apache.org>
Subject: [Internet]Re: Improving Dynamic Allocation Logic for Spark 4+

On the subject of dynamic allocation, is the following message a cause for 
concern when running Spark on k8s?

INFO ExecutorAllocationManager: Dynamic allocation is enabled without a shuffle 
service.
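(For context, a minimal sketch of one common setup under which this INFO line appears on K8s, i.e. dynamic allocation backed by Spark's built-in shuffle tracking rather than an external shuffle service; the config keys are standard Spark settings, the values are only illustrative:)

```scala
import org.apache.spark.SparkConf

// Dynamic allocation without an external shuffle service: Spark tracks which
// executors still hold shuffle data and only releases the ones that do not.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "10")
```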

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


 
view my LinkedIn profile: https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/

 https://en.everybodywiki.com/Mich_Talebzadeh



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.




On Mon, 7 Aug 2023 at 23:42, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:

Hi,

From what I have seen, Spark on a serverless cluster has a hard time getting
the driver going in a timely manner:

Annotations:
  autopilot.gke.io/resource-adjustment:
    {"input":{"containers":[{"limits":{"memory":"1433Mi"},"requests":{"cpu":"1","memory":"1433Mi"},"name":"spark-kubernetes-driver"}]},"output...
  autopilot.gke.io/warden-version: 2.7.41

This is on Spark 3.4.1 with Java 11, for both the host running spark-submit and
the Docker image itself.

I am not sure how relevant this is to this discussion, but it looks like a kind
of blocker for now. What config params can help here, and what can be done?

Thanks

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


 
view my LinkedIn profile: https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/

 https://en.everybodywiki.com/Mich_Talebzadeh



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.




On Mon, 7 Aug 2023 at 22:39, Holden Karau <hol...@pigscanfly.ca> wrote:
Oh great point

On Mon, Aug 7, 2023 at 2:23 PM bo yang <bobyan...@gmail.com> wrote:
Thanks Holden for bringing this up!

Maybe another thing to think about is how to make dynamic allocation more 
friendly with Kubernetes and disaggregated shuffle storage?



On Mon, Aug 7, 2023 at 1:27 PM Holden Karau <hol...@pigscanfly.ca> wrote:
So I am wondering if there is interest in revisiting some of how Spark is doing
its dynamic allocation for Spark 4+?

Some things that I've been thinking about:

- Advisory user input (e.g. a way to say after X is done I know I need Y where 
Y might be a bunch of GPU machines)
- Configurable tolerance (e.g. if we are at most Z% over the target, no-op)
- Past runs of the same job (e.g. stage X of job Y had a peak of K)
- Faster executor launches (I'm a little fuzzy on what we can do here, but one
area, for example, is that we set up and tear down an RPC connection to the
driver with a blocking call, which does seem to have some locking inside of the
driver at first glance)

Is this an area other folks are thinking about? Should I make an epic we can 
track ideas in? Or are folks generally happy with today's dynamic allocation 
(or just busy with other things)?

--
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
YouTube Live Streams: https://www.youtube.com/user/holdenkarau
--
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.): https://amzn.to/2MaRAG9
YouTube Live Streams: https://www.youtube.com/user/holdenkarau
