Hi,

We have a number of Spark Streaming/Kafka jobs that would benefit from an even
spread of consumers over physical hosts in order to maximize network usage.
As far as I can see, the Spark Mesos scheduler accepts resource offers
until the total requested memory + CPU allocation has been satisfied.

This basic allocation policy results in a few large executors concentrated
on a small number of nodes, and therefore many Kafka consumers on a single
node (e.g. out of 12 consumers, I've seen allocations of 7/3/2).

Is there a way to tune this behavior to achieve executor allocation on a
given number of hosts?
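To make the desired outcome concrete: with 12 consumers we would ideally get
something like 4 executors of 3 cores each on 4 different hosts. The only
knobs I'm aware of are the ones below (values illustrative, taken from a
spark-defaults.conf sketch), but none of them seem to let me constrain how
many hosts the cores are spread over:

```
# Illustrative settings only -- memory figure is hypothetical.
spark.mesos.coarse      true
spark.cores.max         12
spark.executor.memory   4g
```

There doesn't appear to be an equivalent of YARN's --num-executors to force
the allocation across a minimum number of nodes.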

-kr, Gerard.
