Mind sharing the code? I think only shuffle failures lead to stage
failures and retries.
Jacek
On 19 Jun 2016 4:35 p.m., "Ted Yu" wrote:
You can utilize a counter in external storage (e.g. a NoSQL store).
When the counter reaches 2, stop throwing the exception so that the task passes.
FYI
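The counter trick above can be sketched without Spark at all; the snippet below simulates the per-task retry loop with a JVM-local AtomicInteger standing in for the external (NoSQL) counter. All names here are illustrative, not Spark API:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical stand-in for the external counter. In a real cluster the
// counter must live outside the executors (e.g. a NoSQL store), since
// task retries may run in different JVMs.
val attempts = new AtomicInteger(0)

// Task body following the suggestion: throw until the counter reaches 2,
// then let the task pass on the third attempt.
def flakyTask(): String = {
  val n = attempts.incrementAndGet()
  if (n <= 2) throw new RuntimeException(s"simulated failure #$n")
  "ok"
}

// Simulate Spark's per-task retry loop (spark.task.maxFailures, default 4).
def runWithRetries(maxFailures: Int): String = {
  var result: Option[String] = None
  var last: Throwable = null
  var i = 0
  while (result.isEmpty && i < maxFailures) {
    try { result = Some(flakyTask()) }
    catch { case e: RuntimeException => last = e }
    i += 1
  }
  result.getOrElse(throw last)
}
```

With the default of 4 attempts the task fails twice and succeeds on the third try, which is exactly the behavior the counter is meant to produce inside a real Spark task.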
On Sun, Jun 19, 2016 at 3:22 AM, Jacek Laskowski wrote:
Hi,
Thanks Burak for the idea, but it *only* fails the tasks, which
eventually fails the entire job; it doesn't fail a particular stage
(just once or twice) before the job completes. The idea is to see the
attempts in the web UI, as there's special handling for cases where a
stage failed once or twice before being retried.
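For the record, the one failure the DAGScheduler answers with a stage resubmission (rather than a plain task retry) is a fetch failure, so one way to get a retried stage from spark-shell is to throw FetchFailedException yourself. This is a sketch only: it assumes a local-mode shell (so the flag object is shared with the tasks), assumes this is the first shuffle in the session (hence shuffleId 0), and uses the Spark 1.6/2.x FetchFailedException constructor, which may differ in other versions:

```scala
import org.apache.spark.SparkEnv
import org.apache.spark.shuffle.FetchFailedException

// JVM-local flag; only valid in local mode, where tasks run in the driver JVM.
object FailOnce { @volatile var done = false }

// Any shuffle will do; groupByKey here gets shuffleId 0 if it is the
// first shuffle in this session (an assumption -- check the Stages tab).
val grouped = sc.parallelize(1 to 100, 4).map(i => (i % 10, i)).groupByKey()

grouped.mapPartitions { iter =>
  if (!FailOnce.done) {
    FailOnce.done = true
    // Constructor per Spark 1.6/2.x: (bmAddress, shuffleId, mapId, reduceId, message)
    throw new FetchFailedException(
      SparkEnv.get.blockManager.blockManagerId, 0, 0, 0, "simulated fetch failure")
  }
  iter
}.count()
```

The fetch failure makes the scheduler mark the map stage as failed and resubmit it, so the Stages tab shows a failed attempt while the job itself still completes.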
Hi Jacek,
Can't you simply have a mapPartitions task throw an exception or something?
Are you trying to do something more esoteric?
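For reference, the simplest form of that suggestion is a one-liner in spark-shell (a sketch; the RDD and the exception are arbitrary). With default settings each task is retried up to spark.task.maxFailures (4) times and the whole job is then aborted, which is the limitation Jacek points out above:

```scala
// Every task in this stage throws, so the tasks fail, are retried, and
// the job is eventually aborted -- this fails tasks and the job, not a
// single stage attempt.
sc.parallelize(1 to 100, 2)
  .mapPartitions { _ =>
    throw new RuntimeException("boom: simulated task failure")
  }
  .count()
```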
Best,
Burak
On Sat, Jun 18, 2016 at 5:35 AM, Jacek Laskowski wrote:
Hi,
Following up on this question: is a stage considered failed only when
there is a FetchFailed exception? Can I have a failed stage in a
single-stage job?
Appreciate any help on this...(as my family doesn't like me spending
the weekend with Spark :))
Pozdrawiam,
Jacek Laskowski
Hi,
I'm trying to see some stats about failing stages in the web UI and
want to "create" a few failed stages. Is this possible using
spark-shell at all? Which setup of Spark/spark-shell would allow for
such a scenario? I could write Scala code if that's the only way to
have failing stages.
Please guide.
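One setup detail that matters for such experiments (an aside based on Spark's documented master URLs, not on this thread): with a plain local[N] master, task failures are not retried at all, so a failing task kills the job on its first failure. The local[N,F] form allows up to F task failures, which makes retries visible in the web UI:

```shell
# Run spark-shell with 2 worker threads and up to 3 task failures
# (the local[N,F] master URL form), then watch http://localhost:4040/stages/
./bin/spark-shell --master "local[2,3]"
```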