mridulm commented on code in PR #47197:
URL: https://github.com/apache/spark/pull/47197#discussion_r1670374260


##########
core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:
##########
@@ -328,19 +354,16 @@ private[spark] object TaskMetrics extends Logging {
    */
   def fromAccumulators(accums: Seq[AccumulatorV2[_, _]]): TaskMetrics = {
     val tm = new TaskMetrics
-    val externalAccums = new java.util.ArrayList[AccumulatorV2[Any, Any]]()
     for (acc <- accums) {
       val name = acc.name
-      val tmpAcc = acc.asInstanceOf[AccumulatorV2[Any, Any]]
       if (name.isDefined && tm.nameToAccums.contains(name.get)) {
        val tmAcc = tm.nameToAccums(name.get).asInstanceOf[AccumulatorV2[Any, Any]]
         tmAcc.metadata = acc.metadata
-        tmAcc.merge(tmpAcc)
+        tmAcc.merge(acc.asInstanceOf[AccumulatorV2[Any, Any]])
       } else {
-        externalAccums.add(tmpAcc)
+        tm._externalAccums += acc

Review Comment:
   It has to do with Java memory model guarantees - updates to the list need not be visible when the list is queried under the read/write lock, since the writes did not go through the same barrier when they were made.
   
   (I am reviewing based only on the diff though (afk), so I am not sure whether there are mitigating reasons why this might not apply.)
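   
   A minimal sketch of the hazard (hypothetical class and names, not the actual TaskMetrics code): a writer that appends to a plain buffer without taking the lock establishes no happens-before edge with a reader that later acquires the read lock, so the JMM does not guarantee the reader ever sees the append.
   
   ```scala
   import java.util.concurrent.locks.ReentrantReadWriteLock
   import scala.collection.mutable.ArrayBuffer
   
   // Hypothetical illustration of the visibility hazard; not Spark code.
   class UnsafeRegistry {
     private val lock = new ReentrantReadWriteLock()
     private val accums = ArrayBuffer.empty[String]
   
     // Writer path: appends WITHOUT taking the write lock. No barrier is
     // crossed here, so this update may never become visible to readers.
     def registerUnsynchronized(name: String): Unit = {
       accums += name
     }
   
     // Reader path: takes the read lock, but since the writer never went
     // through the same lock there is no happens-before edge, and the
     // snapshot may miss recent appends.
     def snapshot(): Seq[String] = {
       lock.readLock().lock()
       try accums.toList
       finally lock.readLock().unlock()
     }
   }
   ```
   
   The safe pattern is to route all mutation through the write lock as well, so the unlock/lock pairing on the same ReentrantReadWriteLock provides the required ordering.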


