Hello,

We are running a Spark job and are getting the following error -

"LiveListenerBus: Dropping SparkListenerEvent because no remaining room in
event queue"

As per the recommendation in the Spark docs, I've increased the property
spark.scheduler.listenerbus.eventqueue.capacity to 90000 (from the default
of 10000) and also increased the driver memory.
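For context, this is roughly how the queue capacity is being set (a simplified
sketch - the application name is a placeholder, not our actual setting):

    import org.apache.spark.sql.SparkSession;

    // Illustrative only - app name is a placeholder.
    // (The driver memory itself is passed via spark-submit --driver-memory,
    // since it has to be set before the driver JVM starts.)
    SparkSession spark = SparkSession.builder()
        .appName("example-job")
        .config("spark.scheduler.listenerbus.eventqueue.capacity", "90000")
        .getOrCreate();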

That seems to have mitigated the issue.

The question is - is there any code optimization (or anything else) that can
be done to resolve this problem?
Please note - we are primarily using functions like reduce(),
collectAsList() and persist() as part of the job; a rough sketch of the
pattern is below.
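To give a sense of the access pattern (simplified sketch only - the input
path, column usage, and reduce logic are made up for illustration, not our
actual code):

    import java.util.List;
    import org.apache.spark.api.java.function.ReduceFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.storage.StorageLevel;

    // Hypothetical input path; the dataset is reused, hence persist().
    Dataset<Row> events = spark.read().parquet("/data/events");
    events.persist(StorageLevel.MEMORY_AND_DISK());

    // reduce() combines rows on the executors and returns a single Row
    // to the driver (stand-in merge logic here).
    Row merged = events.reduce((ReduceFunction<Row>) (left, right) -> left);

    // collectAsList() materialises the result set on the driver.
    List<Row> sample = events.limit(100).collectAsList();

    events.unpersist();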
