Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8760#discussion_r52685053
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
    @@ -0,0 +1,253 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.scheduler
    +
    +import java.util.concurrent.TimeUnit
    +
    +import scala.collection.mutable
    +
    +import org.apache.spark.SparkConf
    +import org.apache.spark.Success
    +import org.apache.spark.TaskEndReason
    +import org.apache.spark.annotation.DeveloperApi
    +import org.apache.spark.util.SystemClock
    +import org.apache.spark.util.ThreadUtils
    +import org.apache.spark.util.Utils
    +import org.apache.spark.util.Clock
    +
    +/**
    + * BlacklistTracker is designed to track problematic executors and nodes at the application
    + * level. It is shared by all TaskSets, so that when a new TaskSet arrives, it can benefit
    + * from the experience of previous TaskSets.
    + *
    + * Once a task finishes, the callback method in TaskSetManager should update the
    + * executorIdToFailureStatus map.
    + */
    +private[spark] class BlacklistTracker(
    +    sparkConf: SparkConf,
    +    clock: Clock = new SystemClock()) extends BlacklistCache {
    +  // maintain an ExecutorId --> FailureStatus HashMap
    +  private val executorIdToFailureStatus: mutable.HashMap[String, FailureStatus] =
    +    mutable.HashMap()
    +
    +  // Apply the Strategy pattern here to allow swapping in different blacklist detection logic
    +  private val strategy = BlacklistStrategy(sparkConf)
    +
    +  // A daemon thread to expire blacklisted executors periodically
    +  private val scheduler = ThreadUtils.newDaemonSingleThreadScheduledExecutor(
    +      "spark-scheduler-blacklist-expire-timer")
    +
    +  private val recoverPeriod = sparkConf.getTimeAsSeconds(
    +    "spark.scheduler.blacklist.recoverPeriod", "60s")
    +
    +  def start(): Unit = {
    +    val scheduleTask = new Runnable() {
    +      override def run(): Unit = {
    +        Utils.logUncaughtExceptions(expireExecutorsInBlackList())
    +      }
    +    }
    +    scheduler.scheduleAtFixedRate(scheduleTask, 0L, recoverPeriod, TimeUnit.SECONDS)
    +  }
    +
    +  def stop(): Unit = {
    +    scheduler.shutdown()
    +    scheduler.awaitTermination(10, TimeUnit.SECONDS)
    +  }
    +
    +  // The actual implementation is delegated to the strategy
    +  /** VisibleForTesting */
    +  private[scheduler] def expireExecutorsInBlackList(): Unit = synchronized {
    +    val updated = strategy.expireExecutorsInBlackList(executorIdToFailureStatus, clock)
    +    if (updated) {
    +      invalidateCache()
    +    }
    +  }
    --- End diff --
    
    this is a good point.  Some effort was already made to avoid doing too much work while holding this lock, by caching the set of blacklisted nodes and executors.  But maybe we can do a bit better.
    
    The only reason we need to synchronize at all is the background thread that expires executors from the blacklist -- otherwise this is only called from a `TaskSetManager`, which in turn can only be called from threads holding a lock on the `TaskScheduler`.  So if, instead of updating the cache in the background thread, we have each of the methods check for itself whether the blacklist needs updating, I think we could completely eliminate the need for the lock -- something like the sketch below.
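    
    Here's a rough sketch of the lazy-expiry idea (the class and method names are made up for illustration, not anything in this PR -- only `Clock` / `SystemClock` are the real spark utils):
    
    ```scala
    import scala.collection.mutable
    
    import org.apache.spark.util.{Clock, SystemClock}
    
    // Hypothetical sketch: record the most recent failure time per executor, and
    // expire stale entries lazily on access instead of from a background thread.
    private[spark] class LazyBlacklist(
        expiryMillis: Long,
        clock: Clock = new SystemClock()) {
    
      private val failureTimes = mutable.HashMap[String, Long]()
      private var cachedBlacklist: Set[String] = Set.empty
      private var lastExpiryCheck: Long = 0L
    
      // Called on task failure -- already under the TaskScheduler lock.
      def taskFailed(executorId: String): Unit = {
        failureTimes(executorId) = clock.getTimeMillis()
        cachedBlacklist += executorId
      }
    
      // Called on the scheduling path, also under the TaskScheduler lock, so no
      // extra synchronization is needed: each caller checks for itself whether
      // the cached blacklist is stale and expires old entries before reading it.
      def executorBlacklist(): Set[String] = {
        val now = clock.getTimeMillis()
        if (now - lastExpiryCheck > expiryMillis) {
          failureTimes.retain { case (_, failedAt) => now - failedAt < expiryMillis }
          cachedBlacklist = failureTimes.keySet.toSet
          lastExpiryCheck = now
        }
        cachedBlacklist
      }
    }
    ```
    
    The read path then pays a timestamp comparison instead of taking a lock, and expiry piggybacks on calls that already hold the `TaskScheduler` lock.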
    
    You'd still occasionally be pausing scheduling to run updateFailedExecutors, but even with 1000s of executors that seems pretty minor, and it wouldn't run very often (every 60s by default).  In exchange, we'd avoid the overhead of synchronizing every time a task is scheduled.

