My spark-sql command:

spark-sql --driver-memory 2g \
  --master spark://hadoop04.xx.xx.com:8241 \
  --conf spark.driver.cores=20 \
  --conf spark.cores.max=20 \
  --conf spark.executor.memory=2g \
  --conf spark.driver.memory=2g \
  --conf spark.akka.frameSize=500 \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=file:///newdisk/tmp/spark-events \
  --conf spark.executor.extraJavaOptions="-XX:-UseGCOverheadLimit -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC" \
  --driver-java-options "-XX:-UseGCOverheadLimit -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dumpsparksql.log"
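For reference, here are the same settings expressed programmatically, as I understand them. This is only a sketch: the app name is made up, and the keys and values are copied from the flags above.

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: mirrors the spark-sql flags above; the app name is hypothetical.
val conf = new SparkConf()
  .setAppName("phonesall-insert")
  .setMaster("spark://hadoop04.xx.xx.com:8241")
  .set("spark.driver.cores", "20")
  .set("spark.cores.max", "20")
  .set("spark.executor.memory", "2g")
  // spark.driver.memory normally has to be set before the driver JVM starts
  // (e.g. via --driver-memory); it is listed here only to mirror the command.
  .set("spark.driver.memory", "2g")
  .set("spark.akka.frameSize", "500")
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "file:///newdisk/tmp/spark-events")
  .set("spark.executor.extraJavaOptions", "-XX:-UseGCOverheadLimit -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC")
val sc = new SparkContext(conf)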

spark-sql output:

...
15/07/02 16:59:53 ERROR SparkSQLDriver: Failed in [insert overwrite table phonesall select phoneNumber from phone]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 5.0 failed 4 times, most recent failure: Lost task 2.3 in stage 5.0 (TID 46, hadoop04.xx.xx.com): java.io.StreamCorruptedException: invalid type code: 61
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1373)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1963)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1887)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1963)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1887)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1963)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1887)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1963)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1887)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1963)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1887)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1770)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1346)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:368)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:68)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:94)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 5.0 failed 4 times, most recent failure: Lost task 2.3 in stage 5.0 (TID 46, hadoop04.xx.xx.com): java.io.StreamCorruptedException: invalid type code: 61
        ... (same java.io.StreamCorruptedException stack trace as above)
...
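For reference, the failing statement can also be issued through HiveContext. A minimal sketch (it assumes the phone and phonesall tables already exist in the Hive metastore; the app name is made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Minimal sketch: issues the same statement that fails above.
// Assumes phone and phonesall already exist in the Hive metastore.
val sc = new SparkContext(new SparkConf()
  .setAppName("phonesall-repro")
  .setMaster("spark://hadoop04.xx.xx.com:8241"))
val hive = new HiveContext(sc)
hive.sql("insert overwrite table phonesall select phoneNumber from phone")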

Why does this happen?


