Hi, what's the relationship between the worker and executor memory settings in Spark standalone mode? Do they work independently, or does the worker's memory limit cap the total memory of the executors it launches?
Also, is the number of executors that can run concurrently on a worker capped by the number of CPU cores configured for that worker?
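To make the question concrete, here's a sketch of the settings I'm asking about, with illustrative values (the file locations and option names are the standard Spark ones; the numbers are just examples):

```properties
# conf/spark-env.sh — per-worker resource limits in standalone mode
SPARK_WORKER_MEMORY=16g   # total memory this worker may give to executors
SPARK_WORKER_CORES=8      # total cores this worker may give to executors

# conf/spark-defaults.conf — per-executor resources requested by an application
spark.executor.memory  4g
spark.executor.cores   2
```

With values like these, would the worker schedule at most four executors because of memory (16g / 4g), at most four because of cores (8 / 2), or are those limits enforced independently?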