Can you look at the logs from the executor or in the UI? They should
show an exception with the reason for the task failure. Also, in the
future, please send this type of e-mail only to the "user@" list and
not to both lists.

- Patrick

On Sat, May 31, 2014 at 3:22 AM, prabeesh k <prabsma...@gmail.com> wrote:
> Hi,
>
> Scenario: read data from HDFS, apply a Hive query to it, and write the result
> back to HDFS.
>
> Schema creation, querying, and saveAsTextFile are all working fine in the
> following modes:
>
> local mode
> single-node Mesos cluster
> multi-node Spark cluster
>
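> Roughly, the pipeline looks like the sketch below. This is only an
> illustration, not my exact code: the paths, table name, and query are
> placeholders, and it assumes Spark SQL's HiveContext API (details vary
> by Spark version).
>
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.sql.hive.HiveContext
>
> object HiveQueryToHdfs {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(new SparkConf().setAppName("HiveQueryToHdfs"))
>     val hive = new HiveContext(sc)
>
>     // Schema creation: point a Hive table at the input data on HDFS.
>     hive.sql("CREATE EXTERNAL TABLE IF NOT EXISTS logs (line STRING) " +
>       "LOCATION 'hdfs:///user/example/input'")
>
>     // Querying works everywhere; writing the result back to HDFS as text
>     // is the step that fails on the multi-node Mesos cluster.
>     val result = hive.sql("SELECT line FROM logs WHERE line LIKE '%ERROR%'")
>     result.rdd.map(_.mkString("\t")).saveAsTextFile("hdfs:///user/example/output")
>
>     sc.stop()
>   }
> }
>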
> Schema creation and querying also work fine on a multi-node Mesos cluster,
> but while trying to write the result back to HDFS using saveAsTextFile, the
> following error occurs:
>
>  14/05/30 10:16:35 INFO DAGScheduler: The failed fetch was from Stage 4
> (mapPartitionsWithIndex at Operator.scala:333); marking it for resubmission
> 14/05/30 10:16:35 INFO DAGScheduler: Executor lost:
> 201405291518-3644595722-5050-17933-1 (epoch 148)
>
> Let me know your thoughts regarding this.
>
> Regards,
> prabeesh
