I found the reason; it is about sc (the SparkContext). Thanks
On Tue, Jul 14, 2015 at 9:45 PM, Akhil Das wrote:
> Someone else also reported this error with Spark 1.4.0.
>
> Thanks
> Best Regards
>
> On Tue, Jul 14, 2015 at 6:57 PM, Arthur Chan wrote:
>
>> Hi, below is the log from the worker.
Hi, below is the log from the worker.
15/07/14 17:18:56 ERROR FileAppender: Error writing stream to file
/spark/app-20150714171703-0004/5/stderr
java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:...)
...
15/07/14 18:27:40 INFO Executor: Running task 4.0 in stage 174.0 (TID 4517)
15/07/14 18:27:40 INFO Executor: Running task 5.0 in stage 174.0 (TID 4518)
15/07/14 18:27:40 INFO Executor: Running task 6.0 in stage 174.0 (TID 4519)
15/07/14 18:27:40 INFO Executor: Running task 7.0 in stage 174.0 (TID 4520)
15/07/14 18:27:40 INFO Executor: Running task 8.0 in stage 174.0 (TID 4521)
15/07/14 18:27:40 ERROR Executor: Exception in task 1.0 in stage 174.0 (TID 4514)
java.lang.IllegalStateException: unread block data
at
java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
I got the same problem; maybe the Java serializer is unstable.
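
If the Java serializer is the suspect, one commonly tried change is switching Spark to Kryo. A minimal sketch against the Spark 1.x API ("kryo-test" is a placeholder app name, and whether this helps with "unread block data" depends on the actual root cause):

  // Hypothetical sketch: use Kryo instead of the default Java serializer.
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf()
    .setAppName("kryo-test")
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  val sc = new SparkContext(conf)
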
Same issue, can anyone help please?
I found a solution.
I had HADOOP_MAPRED_HOME set in my environment, which clashes with Spark.
After I set HADOOP_MAPRED_HOME to empty, Spark started working.
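
In case others want to check for the same clash, the environment the driver JVM actually sees can be printed from the spark-shell. A minimal Scala sketch (only HADOOP_MAPRED_HOME is known from this thread to cause trouble; the other variable names are common suspects, listed here as assumptions):

  // Print Hadoop-related environment variables as seen by the driver JVM.
  Seq("HADOOP_MAPRED_HOME", "HADOOP_HOME", "HADOOP_CONF_DIR").foreach { name =>
    println(s"$name=${sys.env.getOrElse(name, "<unset>")}")
  }
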
"Restored" ment reboot slave node with unchanged IP.
"Funny" thing is that for small files spark works fine.
I checked hadoop with hdfs also and I'm able to run wordcount on it without
any problems (i.e. file about 50GB size).
Try to run the spark-shell in standalone mode
(MASTER=spark://yourmasterurl:7077 $SPARK_HOME/bin/spark-shell) and do a
small count (val d = sc.parallelize(1 to 1000).count()). If that fails,
then something is wrong with your cluster setup, as it is saying
Connection refused: node001/10.180.49.2
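
Spelled out, that sanity check is just this, pasted into a spark-shell started against your standalone master (the shuffle line is an extra check beyond the suggestion above, since it also exercises serialization between executors):

  // Trivial job: exercises task launch and result collection.
  val d = sc.parallelize(1 to 1000).count()   // expect 1000
  // Small shuffle: additionally exercises task/block serialization.
  val s = sc.parallelize(1 to 1000).map(x => (x % 10, 1)).reduceByKey(_ + _).count()   // expect 10
  println(s"count=$d, groups=$s")
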
> 14/12/12 20:25:02 WARN scheduler.TaskSetManager: Lost TID 61 (task 1.0:61)
> 14/12/12 20:25:02 WARN scheduler.TaskSetManager: Loss was due to
> java.lang.IllegalStateException
> java.lang.IllegalStateException: unread block data
> at
> java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
Hi,
I get exactly the same error. It runs on my local machine but not on the
cluster. I am running the pi.py example.
Best,
Tassilo
The worker side has an error message like this:
14/10/30 18:29:00 INFO Worker: Asked to launch executor
app-20141030182900-0006/0 for testspark_v1
14/10/30 18:29:01 INFO ExecutorRunner: Launch command: "java" "-cp"
"::/root/spark-1.1.0/conf:/root/spark-1.1.0/assembly/target/scala-2.10/spark-assembly-1.
14/10/30 17:51:53 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID
1, node001, ANY, 1265 bytes)
14/10/30 17:51:53 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on
executor node001: java.lang.IllegalStateException (unread block data)
[duplicate 1]
14/10/30 17:51:53 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID
2, node001
Did you ever find a solution to this problem? I'm having similar issues.
https://issues.apache.org/jira/browse/SPARK-1867
The exception:
Exception in thread "main" org.apache.spark.SparkException: Job aborted:
Task 0.0:1 failed 32 times (most recent failure: Exception failure:
java.lang.IllegalStateException: unread block data)
at
org.apache.spark.schedule