We are using Spark 1.0 for Spark Streaming on a Spark Standalone cluster and
are seeing the following error.
Job aborted due to stage failure: Task 3475.0:15 failed 4 times, most
recent failure: Exception failure in TID 216394 on host
hslave33102.sjc9.service-now.com: java.lang.Exception: Could not compute
split, block not found
I'm using DStream operations such as map, filter, and reduceByKeyAndWindow,
and doing a foreach operation on the DStream.
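For reference, a minimal sketch of that kind of pipeline (Scala, Spark 1.x API). The socket source, the field layout, and the window durations are assumptions for illustration, not the original code:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._  // pair-DStream ops in Spark 1.x

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingSketch")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Hypothetical source; the post does not say where the stream comes from.
    val lines = ssc.socketTextStream("localhost", 9999)

    val counts = lines
      .map(line => (line.split(" ")(0), 1L))                  // map
      .filter { case (key, _) => key.nonEmpty }                // filter
      .reduceByKeyAndWindow(_ + _, Seconds(60), Seconds(10))   // windowed reduce

    // foreach over each batch's RDD
    counts.foreachRDD(rdd => rdd.take(10).foreach(println))

    ssc.start()
    ssc.awaitTermination()
  }
}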
All the operations being done use the DStream. I do read one RDD into
memory, which is collected and converted into a map used for lookups as part
of the DStream operations. This RDD is loaded only once, and the resulting
map is then applied to the streamed data.
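That pattern might look roughly like the sketch below; the path, the comma-separated layout, and all the names are assumptions, and broadcasting the collected map is one reasonable way to ship it to the tasks:

import org.apache.spark.SparkContext
import org.apache.spark.streaming.dstream.DStream

def withLookup(sc: SparkContext,
               events: DStream[(String, String)]): DStream[(String, String, String)] = {
  // Load the reference RDD once on the driver, collect it into a map, and
  // broadcast it so every task can do lookups without reloading it.
  val lookupMap = sc.textFile("hdfs:///ref/data.txt")       // hypothetical path
    .map { line => val Array(k, v) = line.split(",", 2); (k, v) }
    .collectAsMap()
  val lookup = sc.broadcast(lookupMap)

  events.map { case (key, payload) =>
    (key, payload, lookup.value.getOrElse(key, "unknown"))
  }
}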
Do you mean non-streaming jobs on
Not at all. Don't have any such code.
Here is the log file: streaming.gz
(http://apache-spark-user-list.1001560.n3.nabble.com/file/n11240/streaming.gz)
There are quite a few AskTimeouts happening over a span of about 2 minutes,
followed by block-not-found errors.
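If the AskTimeouts are the trigger, one thing worth trying is raising the Akka timeouts. The property names below are the Spark 1.x ones as I understand them; please double-check them against the configuration docs for your exact version:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("StreamingJob")
  .set("spark.akka.askTimeout", "120") // ask timeout in seconds; default is 30 in Spark 1.x
  .set("spark.akka.timeout", "300")    // broader Akka communication timeout, in seconds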
Thanks
Kanwal
We are using Spark 1.0.0 deployed on a Spark Standalone cluster and I'm
getting the following exception. With the previous version I saw this error
occur along with OutOfMemory errors, which I'm not seeing with Spark 1.0.
Any suggestions?
Job aborted due to stage failure: Task 3748.0:20 failed 4 times
Please see sample code attached at
https://issues.apache.org/jira/browse/SPARK-944.
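As far as I understand, "could not compute split, block not found" usually means an input block was evicted from memory before a (re)computation needed it. One mitigation, sketched below with an assumed socket source, is to keep a disk fallback for received and windowed data so blocks are spilled rather than dropped:

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("StreamingJob")
val ssc = new StreamingContext(conf, Seconds(10))

// Receive with a storage level that spills to disk instead of dropping blocks.
val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)

// Persist windowed data the same way so recomputation never needs an evicted block.
val windowed = lines.window(Seconds(60), Seconds(10))
windowed.persist(StorageLevel.MEMORY_AND_DISK_SER)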
The Spark Streaming job was running on two worker nodes when an error
occurred on one of them. The Spark job still showed as running, but no
progress was being made and no new messages were being processed. Based on
the driver log files I see the following errors.
I would expect the stream reading would
Yes, I'm using Akka as well. But if that were the problem, I should have
been facing this issue in my local setup too; I'm only running into this
error when using the Spark Standalone cluster.
I will try out your suggestion and let you know.
Thanks
Kanwal
I've removed the dependency on Akka in a separate project but am still
running into the same error. In the POM Dependency Hierarchy I do see both
2.4.1 (shaded) and 2.5.0 being included. If there were a conflict with a
project dependency, I would think I should be getting the same error in my
local setup as well.
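For the dependency conflict itself, forcing a single version is the usual fix. Here is a hedged sbt sketch; the protobuf-java coordinates are only a guess at which artifact is showing up as both 2.4.1 (shaded) and 2.5.0, and Maven users would express the same thing with an <exclusions> block in the POM:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-streaming" % "1.0.0"
    exclude("com.google.protobuf", "protobuf-java"),
  "com.google.protobuf" % "protobuf-java" % "2.5.0" // pin one version explicitly
)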