Hi Folks,
I have some questions about how the Spark scheduler works:
- How does Spark know how many resources a job might need?
- How does it fairly share resources between multiple jobs?
- Does it "know" about data and partition sizes, and use that
information for scheduling?
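
For context on the second question: within a single application, Spark
can share resources between concurrently submitted jobs when fair
scheduling is enabled via `spark.scheduler.mode=FAIR`. A minimal sketch
of an allocation file (the pool name and weights below are illustrative,
not from any particular deployment):

```xml
<!-- fairscheduler.xml: referenced via spark.scheduler.allocation.file.
     The "production" pool name, weight, and minShare are example values. -->
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>1</minShare>
  </pool>
</allocations>
```

Jobs are assigned to a pool by setting `spark.scheduler.pool` on the
SparkContext before submitting them; sharing across separate
applications is handled by the cluster manager instead.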

Mohit.
