Pau Tallada Crespí created SPARK-10924:
------------------------------------------

             Summary: Failed to update accumulators for ShuffleMapTask: Broken pipe
                 Key: SPARK-10924
                 URL: https://issues.apache.org/jira/browse/SPARK-10924
             Project: Spark
          Issue Type: Bug
          Components: PySpark, Shuffle
    Affects Versions: 1.3.1
         Environment: Centos 6.7, HDP 2.2
            Reporter: Pau Tallada Crespí
            Priority: Minor


When running Spark jobs, this error appears many times in the output, but the 
job keeps running and produces results.

I've found similar bugs that display similar error messages, but none of them 
mention "Broken pipe".

15/10/05 11:05:37 ERROR DAGScheduler: Failed to update accumulators for ShuffleMapTask(49, 29)
java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at java.io.DataOutputStream.flush(DataOutputStream.java:123)
        at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:827)
        at org.apache.spark.api.python.PythonAccumulatorParam.addInPlace(PythonRDD.scala:789)
        at org.apache.spark.Accumulable.$plus$plus$eq(Accumulators.scala:81)
        at org.apache.spark.Accumulators$$anonfun$add$2.apply(Accumulators.scala:323)
        at org.apache.spark.Accumulators$$anonfun$add$2.apply(Accumulators.scala:321)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.Accumulators$.add(Accumulators.scala:321)
        at org.apache.spark.scheduler.DAGScheduler.updateAccumulators(DAGScheduler.scala:890)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:974)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1390)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
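For context: the failing frame (PythonAccumulatorParam.addInPlace) is where the driver's JVM writes accumulator updates over a socket to a Python-side process, so the "Broken pipe" suggests the Python peer went away before the write. A minimal standalone Python sketch of that failure mode (no Spark involved, purely illustrative; the function name and payload are made up):

```python
import socket

def write_after_peer_closed():
    """Write to a socket whose peer has closed, mimicking the JVM's
    failed accumulator-update write (EPIPE -> "Broken pipe")."""
    a, b = socket.socketpair()
    b.close()  # peer disappears, as if the Python side died mid-job
    try:
        # A few writes: on some platforms the first one may still be buffered
        # before EPIPE surfaces.
        for _ in range(10):
            a.send(b"accumulator update")
        return None
    except BrokenPipeError as exc:  # Python's wrapper around EPIPE
        return exc
    finally:
        a.close()

err = write_after_peer_closed()
print(type(err).__name__)
```

This only reproduces the OS-level error, not the Spark condition that closes the peer; it is meant to show why the job can keep running, since only the bookkeeping write fails, not the task itself.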




