[ https://issues.apache.org/jira/browse/SPARK-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247081#comment-14247081 ]

Patrick Wendell commented on SPARK-4810:
----------------------------------------

Actually, can I suggest we move this to the Spark users list? We use this JIRA 
primarily for tracking identified bugs. For information on how to join the user 
list, see this page:

http://spark.apache.org/community.html

> Failed to run collect
> ---------------------
>
>                 Key: SPARK-4810
>                 URL: https://issues.apache.org/jira/browse/SPARK-4810
>             Project: Spark
>          Issue Type: Question
>         Environment: Spark 1.1.1 prebuilt for hadoop 2.4.0
>            Reporter: newjunwei
>
> My application failed as shown below, and I want to know the possible reason. 
> Could it be caused by not enough memory?
> Environment: Spark 1.1.1 prebuilt for Hadoop 2.4.0, standalone deploy mode.
> There is no problem when running against a local master for testing, or when 
> processing a smaller data set.
> My real data set is definitely large, about 200 million key-value pairs; the 
> smaller data set is about one tenth of that. I retrieve my result with 
> collect, and the result is very large as well. My current theory is that the 
> problem is caused by the many tasks that fail when collecting such a large 
> result. Is that right?
> 2014-12-09 21:51:47,830 WARN org.apache.spark.Logging$class.logWarning(Logging.scala:71) - Lost task 60.1 in stage 1.1 (TID 566, server-21): java.io.IOException: org.apache.spark.SparkException: Failed to get broadcast_4_piece0 of broadcast_4
>         org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:930)
>         org.apache.spark.broadcast.TorrentBroadcast.readObject(TorrentBroadcast.scala:155)
>         sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>         sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         java.lang.reflect.Method.invoke(Method.java:597)
>         java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
>         java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
>         java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
>         java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
>         java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
>         java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
>         java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
>         org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
>         org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:160)
>         java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>         java.lang.Thread.run(Thread.java:662)
> 2014-12-09 21:51:49,460 INFO org.apache.spark.Logging$class.logInfo(Logging.scala:59) - Starting task 60.2 in stage 1.1 (TID 603, server-11, PROCESS_LOCAL, 1295 bytes)
> 2014-12-09 21:51:49,461 INFO org.apache.spark.Logging$class.logInfo(Logging.scala:59) - Lost task 9.3 in stage 1.1 (TID 579) on executor server-11: java.io.IOException (org.apache.spark.SparkException: Failed to get broadcast_4_piece0 of broadcast_4) [duplicate 1]
> 2014-12-09 21:51:49,487 ERROR org.apache.spark.Logging$class.logError(Logging.scala:75) - Task 9 in stage 1.1 failed 4 times; aborting job
> 2014-12-09 21:51:49,494 INFO org.apache.spark.Logging$class.logInfo(Logging.scala:59) - Cancelling stage 1
> 2014-12-09 21:51:49,498 INFO org.apache.spark.Logging$class.logInfo(Logging.scala:59) - Stage 1 was cancelled
> 2014-12-09 21:51:49,511 INFO org.apache.spark.Logging$class.logInfo(Logging.scala:59) - Failed to run collect at StatVideoService.scala:62
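
For reference, the usual way to avoid this class of failure is to keep a result
of this size off the driver rather than collect()-ing it, and/or to raise driver
memory (e.g. spark-submit --driver-memory). Below is a minimal Scala sketch of
the first approach; the app name, RDD contents, and output path are hypothetical
stand-ins, not taken from the reporter's job:

    import org.apache.spark.{SparkConf, SparkContext}

    object CollectAlternatives {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("collect-alternatives"))

        // Hypothetical stand-in for a large key-value RDD such as the
        // reporter's ~200 million pairs.
        val results = sc.parallelize(1 to 1000000).map(i => (i, i.toString))

        // 1. Write the full result to distributed storage; nothing large
        //    ever reaches the driver.
        results.saveAsTextFile("hdfs:///tmp/stat-video-output")

        // 2. If only a bounded sample is needed on the driver:
        val sample = results.take(1000)

        // 3. If the driver must see every record, stream one partition at a
        //    time instead of materializing everything with collect():
        results.toLocalIterator.foreach { case (k, v) =>
          // process one record at a time here
        }

        sc.stop()
      }
    }

With toLocalIterator the driver only needs memory for the largest single
partition, and take(n) runs only as many tasks as needed to produce n records.
Neither changes the broadcast fetch error itself, but removing the oversized
collect may relieve the memory pressure that the log suggests is behind the
lost tasks.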



