I have Spark running in standalone mode with 4 executors, each with 5 cores
(spark.executor.cores=5). However, when I process an RDD with ~90,000
partitions, I only see 4 tasks running in parallel. Shouldn't I be getting
4 x 5 = 20 concurrent task executions?
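
For reference, here is a minimal sketch (Scala, spark-shell style) of the
setup as I understand it; the app name and the spark.cores.max value are
illustrative assumptions, not taken from my actual config:

  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("parallelism-check")    // illustrative name, not my real app
    .set("spark.executor.cores", "5")   // 5 cores per executor, as above
    .set("spark.cores.max", "20")       // standalone mode: cap on TOTAL cores
                                        // the app may claim; 4 x 5 = 20 here
                                        // (assumed setting, not from my config)
  val sc = new SparkContext(conf)

  // With 20 cores available, an RDD with ~90,000 partitions should still
  // only run 20 tasks at any one time; defaultParallelism reflects the
  // total core count the scheduler sees.
  println(s"defaultParallelism = ${sc.defaultParallelism}")

  val rdd = sc.parallelize(1 to 1000000, numSlices = 90000)
  println(s"partitions = ${rdd.getNumPartitions}")
  rdd.count()   // watch the running-task count in the web UI (port 4040)

If defaultParallelism reports something like 4 instead of 20, that would
suggest the scheduler is only seeing 4 cores in total rather than 4
executors with 5 cores each.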
