Hi guys,
I'm having a problem where respawning a failed executor during a job that
reads/writes parquet on S3 causes subsequent tasks to fail because of
missing AWS keys.
Setup:
I'm using Spark 1.5.2 with Hadoop 2.7 and running experiments on a simple
standalone cluster:
1 master
2 workers
My
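(Not part of the original message, for context on the setup described above: in Spark 1.5 with the Hadoop 2.7 connectors, S3 credentials are commonly supplied as Hadoop properties in `spark-defaults.conf`, which executors pick up at launch. A minimal sketch, assuming the s3n connector and with placeholder key values:)

```
# conf/spark-defaults.conf on the driver machine.
# spark.hadoop.* entries are copied into each executor's Hadoop
# configuration when it starts, rather than only living on the driver.
spark.hadoop.fs.s3n.awsAccessKeyId      PLACEHOLDER_ACCESS_KEY
spark.hadoop.fs.s3n.awsSecretAccessKey  PLACEHOLDER_SECRET_KEY
```

One relevant distinction here: keys set this way (or via `SparkConf` before the context is created) reach every executor, including respawned ones, whereas keys set by mutating the driver's `hadoopConfiguration` at runtime may not.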
On 17 Mar 2016, at 16:01, Allen George wrote: