Hey,
I’m wondering if anyone has run into issues with Spark 1.5 throwing a
FileNotFoundException for shuffle.index files? It’s been cropping up with very
large joins and aggregations, and it causes all of our jobs to fail towards the
end. The memory limit for the executors (we’re running on Mesos)
> https://issues.apache.org/jira/browse/SPARK-11293
>
> Romi Kuntsman, Big Data Engineer
> http://www.totango.com
>
> On Wed, Nov 18, 2015 at 2:00 PM, Tom Arnfeld <t...@duedil.co