All the JARs and Java versions are consistent in my setup. In fact, I have
Spark sorting 1TB of data using the exact same setup, except with a
different file system as storage for the data nodes. Could it be that the
files written are actually corrupted?
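One way to test the corruption hypothesis would be to write the same output to both file systems and compare checksums of the partition files (Hadoop also ships TeraValidate for verifying TeraSort output). A minimal sketch of such a comparison — the file path here is hypothetical, not from the benchmark:

```scala
import java.security.MessageDigest
import java.nio.file.{Files, Paths}

object ChecksumCheck {
  // Compute the MD5 checksum of a file, streaming in 8 KiB chunks so
  // large partition files don't have to fit in memory. Running this on
  // the same partition stored on both file systems should yield
  // identical digests if no corruption occurred.
  def md5Of(path: String): String = {
    val md = MessageDigest.getInstance("MD5")
    val in = Files.newInputStream(Paths.get(path))
    try {
      val buf = new Array[Byte](8192)
      var n = in.read(buf)
      while (n != -1) {
        md.update(buf, 0, n)
        n = in.read(buf)
      }
    } finally in.close()
    md.digest().map("%02x".format(_)).mkString
  }
}
```

Mismatching digests for the same logical partition on the two backends would point at the storage layer rather than at Spark itself.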
On Tue, Mar 29, 2016 at 12:00 PM, Simon
2016-03-29 11:25 GMT+02:00 Robert Schmidtke:
> Is there a meaningful way for me to find out what exactly is going wrong
> here? Any help and hints are greatly appreciated!
Maybe a version mismatch between the JARs on the cluster?
Hi everyone,
I'm running the Intel HiBench TeraSort (1TB) Spark Scala benchmark on Spark
1.6.0. After some time, I'm seeing one task fail too many times, despite
being rescheduled on different nodes, with the following stacktrace:
16/03/27 22:25:04 WARN scheduler.TaskSetManager: Lost task 97.0 in