Hi,

I'm trying to get FAIR scheduling to work in a Spark Streaming app
(1.6.0).

I've found a previous mailing list thread where it is suggested to do:

dstream.foreachRDD { rdd =>
  rdd.sparkContext.setLocalProperty("spark.scheduler.pool", "pool1") // set the pool
  rdd.count() // or whatever job
}

This seems to work, in the sense that if I have 5 foreachRDD calls in my
code, each one is sent to a different pool, but the jobs still get executed
one after the other rather than at the same time.
Am I missing something?
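
For reference, here is a minimal sketch of what my code does (the streams,
pool names, and the count() actions are placeholders for my real logic):

// Each output operation tags its jobs with its own fair-scheduler pool.
dstreamA.foreachRDD { rdd =>
  rdd.sparkContext.setLocalProperty("spark.scheduler.pool", "A")
  rdd.count() // placeholder action
}

dstreamB.foreachRDD { rdd =>
  rdd.sparkContext.setLocalProperty("spark.scheduler.pool", "B")
  rdd.count() // placeholder action
}

// ...and similarly for pools C, D, and E.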

The scheduler config and scheduler mode are being picked up correctly, as I
can see them in the Spark UI.

//Context config

spark.scheduler.mode=FAIR
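
For completeness, this is roughly how I configure the context (the app
name, batch interval, and allocation file path are simplified here):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("fair-streaming-test") // placeholder name
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml") // real path differs

val ssc = new StreamingContext(conf, Seconds(10)) // placeholder batch interval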

Here is my scheduler config:


<?xml version="1.0"?>
<allocations>
  <pool name="A">
    <schedulingMode>FAIR</schedulingMode> <weight>2</weight> <minShare>1</minShare>
  </pool>
  <pool name="B">
    <schedulingMode>FAIR</schedulingMode> <weight>1</weight> <minShare>0</minShare>
  </pool>
  <pool name="C">
    <schedulingMode>FAIR</schedulingMode> <weight>1</weight> <minShare>0</minShare>
  </pool>
  <pool name="D">
    <schedulingMode>FAIR</schedulingMode> <weight>1</weight> <minShare>0</minShare>
  </pool>
  <pool name="E">
    <schedulingMode>FAIR</schedulingMode> <weight>2</weight> <minShare>1</minShare>
  </pool>
</allocations>


Any idea what could be wrong?
