agrawaldevesh commented on a change in pull request #28708:
URL: https://github.com/apache/spark/pull/28708#discussion_r442581482
########## File path: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ##########
@@ -775,7 +802,12 @@ private[spark] class MapOutputTrackerMaster( override def stop(): Unit = { mapOutputRequests.offer(PoisonPill) threadpool.shutdown() - sendTracker(StopMapOutputTracker) + try {

Review comment: I am curious why sendTracker didn't throw an exception before but does now. Did the migration cause this?

########## File path: core/src/main/scala/org/apache/spark/SparkEnv.scala ##########
@@ -367,7 +367,8 @@ object SparkEnv extends Logging { externalShuffleClient } else { None - }, blockManagerInfo)), + }, blockManagerInfo, + mapOutputTracker.asInstanceOf[MapOutputTrackerMaster])),

Review comment: Should there be an `isDriver` check here? I believe the MapOutputTrackerMaster is only available on the driver?

########## File path: core/src/main/scala/org/apache/spark/internal/config/package.scala ##########
@@ -420,6 +420,30 @@ package object config { .booleanConf .createWithDefault(false) + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_ENABLED = + ConfigBuilder("spark.storage.decommission.shuffleBlocks.enabled") + .doc("Whether to transfer shuffle blocks during block manager decommissioning. Requires " + + "a migratable shuffle resolver (like sort based shuffe)") + .version("3.1.0") + .booleanConf + .createWithDefault(false) + + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS = + ConfigBuilder("spark.storage.decommission.shuffleBlocks.maxThreads") + .doc("Maximum number of threads to use in migrating shuffle files.") + .version("3.1.0") + .intConf + .checkValue(_ > 0, "The maximum number of threads should be positive") + .createWithDefault(10)

Review comment: There is precedent with SHUFFLE_MAPOUTPUT_DISPATCHER_NUM_THREADS using 8 threads. Should we keep the same magic constant for consistency?
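The `stop()` change quoted above wraps `sendTracker` in a `try`. A minimal sketch of that tolerant-shutdown pattern in plain Scala (hypothetical names, not Spark's actual code): swallow a failed tracker RPC so the rest of shutdown still runs.

```scala
// Hypothetical sketch of a shutdown step that tolerates a failed RPC:
// the tracker endpoint may already be gone, and stop() should not propagate.
def stopTolerantly(sendTracker: () => Unit, log: String => Unit): Boolean = {
  try {
    sendTracker() // may throw if the remote endpoint has already shut down
    true
  } catch {
    case e: Exception =>
      log(s"Could not tell tracker we are stopping: $e")
      false // swallow: shutdown proceeds regardless
  }
}
```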
########## File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ##########
@@ -33,9 +33,11 @@ import org.apache.spark.util.Utils * task ran on as well as the sizes of outputs for each reducer, for passing on to the reduce tasks.

Review comment: I think this comment above still refers to "block manager address that the task ran on" ... it should be updated.

########## File path: core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ##########
@@ -148,6 +170,82 @@ private[spark] class IndexShuffleBlockResolver( } } + /** + * Write a provided shuffle block as a stream. Used for block migrations. + * ShuffleBlockBatchIds must contain the full range represented in the ShuffleIndexBlock. + * Requires the caller to delete any shuffle index blocks where the shuffle block fails to + * put. + */ + override def putShuffleBlockAsStream(blockId: BlockId, serializerManager: SerializerManager): + StreamCallbackWithID = { + val file = blockId match { + case ShuffleIndexBlockId(shuffleId, mapId, _) => + getIndexFile(shuffleId, mapId) + case ShuffleDataBlockId(shuffleId, mapId, _) => + getDataFile(shuffleId, mapId) + case _ => + throw new Exception(s"Unexpected shuffle block transfer ${blockId} as " +

Review comment: How about a more informative unchecked exception type like IllegalStateException?

########## File path: core/src/main/scala/org/apache/spark/storage/BlockId.scala ##########
@@ -40,6 +40,9 @@ sealed abstract class BlockId { def isRDD: Boolean = isInstanceOf[RDDBlockId] def isShuffle: Boolean = isInstanceOf[ShuffleBlockId] || isInstanceOf[ShuffleBlockBatchId] def isBroadcast: Boolean = isInstanceOf[BroadcastBlockId] + def isInternalShuffle: Boolean = {

Review comment: I kind of think that the name isInternalShuffle is a bit funny :-). What's internal about it? It's more about being a data or an index block?
And in general shuffles are always internal (since they are within the same job), unlike RDD blocks that can be shared across jobs. One way to work around this might be to inline this method. Or I wonder what happens if you change isShuffle to also include shuffle data and index blocks?

########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ##########
@@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries.
+ * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. 
+ } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) + case None => + logError(s"Error ${e} while waiting for block to migrate") + } + } + } + } + + // Shuffles which are either in queue for migrations or migrated + private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]() + + // Shuffles which are queued for migration + private[storage] val shufflesToMigrate = + new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]() + + @volatile private var stopped = false + + private val migrationPeers = + mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]() + + private lazy val blockMigrationExecutor = + ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission") + + private val blockMigrationRunnable = new Runnable { + val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL) + + override def run(): Unit = { + if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) && + !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logWarning("Decommissioning, but no task configured set one or both:\n" + + s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" + + s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}") + stopped = true + } + while (!stopped && !Thread.interrupted()) { + logInfo("Iterating on migrating from the block manager.") + try { + // If enabled we migrate shuffle blocks first as they are more expensive. 
+ if (conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all shuffle blocks") + offloadShuffleBlocks() + logInfo("Done starting workers to migrate shuffle blocks") + } + if (conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all cached RDD blocks") + decommissionRddCacheBlocks() + logInfo("Attempt to replicate all cached blocks done") + } + logInfo(s"Waiting for ${sleepInterval} before refreshing migrations.") + Thread.sleep(sleepInterval) + } catch { + case e: InterruptedException => + logInfo("Interrupted during migration, will not refresh migrations.") + stopped = true + case NonFatal(e) => + logError("Error occurred while trying to replicate for block manager decommissioning.", + e) + stopped = true + } + } + } + } + + lazy val shuffleMigrationPool = ThreadUtils.newDaemonCachedThreadPool( + "migrate-shuffles", + conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS)) + /** + * Tries to offload all shuffle blocks that are registered with the shuffle service locally. + * Note: this does not delete the shuffle files in-case there is an in-progress fetch + * but rather shadows them. + * Requires an Indexed based shuffle resolver. + * Note: if called in testing please call stopOffloadingShuffleBlocks to avoid thread leakage. + */ + private[storage] def offloadShuffleBlocks(): Unit = { + // Update the queue of shuffles to be migrated + logInfo("Offloading shuffle blocks") + val localShuffles = bm.migratableResolver.getStoredShuffles() + val newShufflesToMigrate = localShuffles.&~(migratingShuffles).toSeq + shufflesToMigrate.addAll(newShufflesToMigrate.asJava) + migratingShuffles ++= newShufflesToMigrate + + // Update the threads doing migrations + // TODO: Sort & only start as many threads as min(||blocks||, ||targets||) using location pref Review comment: This TODO appears to have been addressed. 
########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala ########## @@ -489,6 +491,24 @@ class BlockManagerMasterEndpoint( storageLevel: StorageLevel, memSize: Long, diskSize: Long): Boolean = { + logInfo(s"Updating block info on master ${blockId} for ${blockManagerId}") + + if (blockId.isInternalShuffle) { + blockId match { + case ShuffleIndexBlockId(shuffleId, mapId, _) => + // Don't update the map output on just the index block Review comment: I see that `getMigrationBlocks` returns the sub blocks in the order of `[index, data]`, but I wonder if the sub blocks are indeed migrated strictly in order ? Is it ever possible to have a data block updated before an index block ? It seems that we would want both, before we update the mapOutputTracker -- but I am not 100% clear on how that is ensured. ########## File path: core/src/main/scala/org/apache/spark/internal/config/package.scala ########## @@ -420,6 +420,30 @@ package object config { .booleanConf .createWithDefault(false) + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_ENABLED = Review comment: Does it make sense to name the constant similar to the config ? like STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED ? ########## File path: core/src/main/scala/org/apache/spark/internal/config/package.scala ########## @@ -420,6 +420,30 @@ package object config { .booleanConf .createWithDefault(false) + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_ENABLED = + ConfigBuilder("spark.storage.decommission.shuffleBlocks.enabled") + .doc("Whether to transfer shuffle blocks during block manager decommissioning. Requires " + + "a migratable shuffle resolver (like sort based shuffe)") + .version("3.1.0") + .booleanConf + .createWithDefault(false) + + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS = Review comment: Same comment: What do you think of making the constant and the config have consistent naming ? 
########## File path: core/src/main/scala/org/apache/spark/network/netty/NettyBlockTransferService.scala ##########
@@ -168,7 +168,8 @@ private[spark] class NettyBlockTransferService( // Everything else is encoded using our binary protocol. val metadata = JavaUtils.bufferToArray(serializer.newInstance().serialize((level, classTag))) - val asStream = blockData.size() > conf.get(config.MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM) + val asStream = (blockData.size() > conf.get(config.MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM) || + blockId.isInternalShuffle || blockId.isShuffle)

Review comment: Can you add some comments here about why streaming is enabled for shuffle blocks (index/data/range) but not for RDD blocks? I kind of think that the decision to stream or not should continue to be based on a physical property of the block, like its size, but perhaps I am missing something.

########## File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ##########
@@ -178,6 +184,10 @@ private[spark] class HighlyCompressedMapStatus private ( override def location: BlockManagerId = loc + override def updateLocation(bm: BlockManagerId): Unit = {

Review comment: If I understand correctly, this is overridden in both CompressedMapStatus and HighlyCompressedMapStatus. Should it just be moved to the base class MapStatus?

########## File path: core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ##########
@@ -55,6 +58,25 @@ private[spark] class IndexShuffleBlockResolver( def getDataFile(shuffleId: Int, mapId: Long): File = getDataFile(shuffleId, mapId, None) + /** + * Get the shuffle files that are stored locally. Used for block migrations.
+ */ + override def getStoredShuffles(): Set[ShuffleBlockInfo] = { + // Matches ShuffleIndexBlockId name + val pattern = "shuffle_(\\d+)_(\\d+)_.+\\.index".r + val rootDirs = blockManager.diskBlockManager.localDirs + // ExecutorDiskUtil puts things inside one level hashed sub directories + val searchDirs = rootDirs.flatMap(_.listFiles()).filter(_.isDirectory()) ++ rootDirs + val filenames = searchDirs.flatMap(_.list()) + logDebug(s"Got block files ${filenames.toList}") + filenames.flatMap { fname => + pattern.findAllIn(fname).matchData.map { + matched => ShuffleBlockInfo(matched.group(1).toInt, matched.group(2).toLong) + } + }.toSet Review comment: I am wondering why is this a Set instead of a Seq ? ie, when would fileNames have caused a duplicate SuffleBlockInfo ? Can this be huge ? Should this be an Iterator instead ? ########## File path: core/src/main/scala/org/apache/spark/internal/config/package.scala ########## @@ -420,6 +420,30 @@ package object config { .booleanConf .createWithDefault(false) + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_ENABLED = + ConfigBuilder("spark.storage.decommission.shuffleBlocks.enabled") + .doc("Whether to transfer shuffle blocks during block manager decommissioning. Requires " + + "a migratable shuffle resolver (like sort based shuffe)") + .version("3.1.0") + .booleanConf + .createWithDefault(false) + + private[spark] val STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS = + ConfigBuilder("spark.storage.decommission.shuffleBlocks.maxThreads") + .doc("Maximum number of threads to use in migrating shuffle files.") + .version("3.1.0") + .intConf + .checkValue(_ > 0, "The maximum number of threads should be positive") + .createWithDefault(10) + + + private[spark] val STORAGE_RDD_DECOMMISSION_ENABLED = Review comment: Same comment about keeping the config and the constant string naming consistent. 
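The filename matching in `getStoredShuffles` above can be exercised standalone. A self-contained sketch of the same pattern; note that returning a `Set` collapses any duplicate matches into one `(shuffleId, mapId)` entry:

```scala
// Matches ShuffleIndexBlockId file names: shuffle_<shuffleId>_<mapId>_<reduceId>.index
val pattern = "shuffle_(\\d+)_(\\d+)_.+\\.index".r

// Parse index file names back into (shuffleId, mapId) pairs; non-index
// files (e.g. .data files) simply produce no matches.
def storedShuffles(filenames: Seq[String]): Set[(Int, Long)] =
  filenames.flatMap { fname =>
    pattern.findAllIn(fname).matchData.map { m =>
      (m.group(1).toInt, m.group(2).toLong)
    }
  }.toSet
```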
########## File path: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ########## @@ -126,6 +128,10 @@ private[spark] class CompressedMapStatus( override def location: BlockManagerId = loc + override def updateLocation(bm: BlockManagerId): Unit = { Review comment: nit: How about the variable name newLoc instead of bm ? ########## File path: core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ########## @@ -148,6 +170,82 @@ private[spark] class IndexShuffleBlockResolver( } } + /** + * Write a provided shuffle block as a stream. Used for block migrations. + * ShuffleBlockBatchIds must contain the full range represented in the ShuffleIndexBlock. + * Requires the caller to delete any shuffle index blocks where the shuffle block fails to + * put. + */ + override def putShuffleBlockAsStream(blockId: BlockId, serializerManager: SerializerManager): + StreamCallbackWithID = { + val file = blockId match { + case ShuffleIndexBlockId(shuffleId, mapId, _) => + getIndexFile(shuffleId, mapId) + case ShuffleDataBlockId(shuffleId, mapId, _) => + getDataFile(shuffleId, mapId) + case _ => + throw new Exception(s"Unexpected shuffle block transfer ${blockId} as " + + s"${blockId.getClass().getSimpleName()}") + } + val fileTmp = Utils.tempFileWith(file) + val channel = Channels.newChannel( + serializerManager.wrapStream(blockId, + new FileOutputStream(fileTmp))) + + new StreamCallbackWithID { + + override def getID: String = blockId.name + + override def onData(streamId: String, buf: ByteBuffer): Unit = { + while (buf.hasRemaining) { + channel.write(buf) + } + } + + override def onComplete(streamId: String): Unit = { + logTrace(s"Done receiving shuffle block $blockId, now storing on local disk.") + channel.close() + val diskSize = fileTmp.length() + this.synchronized { Review comment: I didn't follow why this is locked ? 
########## File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ########## @@ -242,8 +244,7 @@ private[spark] class BlockManager( private var blockReplicationPolicy: BlockReplicationPolicy = _ - private var blockManagerDecommissioning: Boolean = false - private var decommissionManager: Option[BlockManagerDecommissionManager] = None + @volatile private var decommissioner: Option[BlockManagerDecommissioner] = None Review comment: Just for my understanding: why is decommissioner volatile now ? Would a comment help ? ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,281 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logInfo("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logInfo(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. + } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) Review comment: I think @dongjoon-hyun's comment is still valid: How do we ensure that we only retry a block for a given number of times ? 
########## File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ########## @@ -650,6 +658,23 @@ private[spark] class BlockManager( blockId: BlockId, level: StorageLevel, classTag: ClassTag[_]): StreamCallbackWithID = { + + if (decommissioner.isDefined) { + throw new BlockSavedOnDecommissionedBlockManagerException(blockId) + } + + if (blockId.isShuffle || blockId.isInternalShuffle) { + logInfo(s"Putting shuffle block ${blockId}") + try { + return migratableResolver.putShuffleBlockAsStream(blockId, serializerManager) + } catch { + case e: ClassCastException => throw new SparkException( Review comment: Ah I see: That's why the migratableResolver is a lazy val. Is the blockId pertinent to log here ? I believe that the class cast exception would be thrown when the resolver is not of the correct type on the first access. Would a regular function help here, or perhaps the class cast exception check can be moved inside of the migratableResolver assignment ? ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 Review comment: Should we inline SLEEP_TIME_SECS ? And why sleep 1 second ? Why not like 100ms ? I also wonder if it would complicate the code a lot to switch this to a blocked poll, and use a poison pill when running is made false ? We wouldn't need the "polling" then. ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ########## @@ -650,6 +658,23 @@ private[spark] class BlockManager( blockId: BlockId, level: StorageLevel, classTag: ClassTag[_]): StreamCallbackWithID = { + + if (decommissioner.isDefined) { Review comment: I wonder if it would help readability to have a helper method: isDecommissioning that checks for the presence of the decommissioner. It wasn't like totally obvious to me on first reading it. ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. 
+ * This is notably different from the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. + */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop.
+ } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") Review comment: Should we use `logError(<msg>, e)` here so that we can get the full stack trace of `e` ? ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala ########## @@ -489,6 +491,24 @@ class BlockManagerMasterEndpoint( storageLevel: StorageLevel, memSize: Long, diskSize: Long): Boolean = { + logInfo(s"Updating block info on master ${blockId} for ${blockManagerId}") Review comment: Should these log lines be turned into logDebug ? I feel that they would end up spamming the driver log for each block. ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. 
+ } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) + case None => + logError(s"Error ${e} while waiting for block to migrate") + } + } + } + } + + // Shuffles which are either in queue for migrations or migrated + private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]() + + // Shuffles which are queued for migration + private[storage] val shufflesToMigrate = + new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]() + + @volatile private var stopped = false + + private val migrationPeers = + mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]() + + private lazy val blockMigrationExecutor = + ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission") + + private val blockMigrationRunnable = new Runnable { + val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL) + + override def run(): Unit = { + if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) && + !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logWarning("Decommissioning, but no task configured set one or both:\n" + + s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" + + s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}") + stopped = true + } + while (!stopped && !Thread.interrupted()) { + logInfo("Iterating on migrating from the block manager.") + try { + // If enabled we migrate shuffle blocks first as they are more expensive. 
+ if (conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all shuffle blocks") + offloadShuffleBlocks() + logInfo("Done starting workers to migrate shuffle blocks") + } + if (conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all cached RDD blocks") + decommissionRddCacheBlocks() + logInfo("Attempt to replicate all cached blocks done") + } + logInfo(s"Waiting for ${sleepInterval} before refreshing migrations.") + Thread.sleep(sleepInterval) + } catch { + case e: InterruptedException => + logInfo("Interrupted during migration, will not refresh migrations.") + stopped = true + case NonFatal(e) => + logError("Error occurred while trying to replicate for block manager decommissioning.", + e) + stopped = true + } + } + } + } + + lazy val shuffleMigrationPool = ThreadUtils.newDaemonCachedThreadPool( + "migrate-shuffles", + conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS)) + /** Review comment: new line before this comment block ? ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. 
+ } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) + case None => + logError(s"Error ${e} while waiting for block to migrate") + } + } + } + } + + // Shuffles which are either in queue for migrations or migrated + private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]() + + // Shuffles which are queued for migration + private[storage] val shufflesToMigrate = + new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]() + + @volatile private var stopped = false + + private val migrationPeers = + mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]() + + private lazy val blockMigrationExecutor = + ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission") + + private val blockMigrationRunnable = new Runnable { + val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL) + + override def run(): Unit = { + if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) && + !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logWarning("Decommissioning, but no task configured set one or both:\n" + + s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" + + s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}") + stopped = true + } + while (!stopped && !Thread.interrupted()) { + logInfo("Iterating on migrating from the block manager.") + try { + // If enabled we migrate shuffle blocks first as they are more expensive. + if (conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all shuffle blocks") + offloadShuffleBlocks() Review comment: Since this only starts the offloading threads, should it be named so: startOffloadingShuffleBlocks ? Then the log line below would make more sense. 
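An earlier comment in this thread suggests replacing the poll-and-sleep loop in `ShuffleMigrationRunnable` with a blocking poll plus a poison pill. A minimal, self-contained sketch of that pattern (the `Msg` wrapper and `StopMigrating` sentinel are hypothetical illustration, not the PR's API — the PR's queue holds `ShuffleBlockInfo` directly):

```scala
import java.util.concurrent.LinkedBlockingQueue

object PoisonPillSketch {
  // Hypothetical message type for illustration only.
  sealed trait Msg
  final case class Block(id: String) extends Msg
  case object StopMigrating extends Msg // the poison pill

  // take() blocks until an element arrives, so no sleep/poll cycle is needed;
  // enqueuing StopMigrating wakes the consumer and stops it immediately.
  def consume(queue: LinkedBlockingQueue[Msg], migrate: String => Unit): Unit = {
    var running = true
    while (running) {
      queue.take() match {
        case StopMigrating => running = false
        case Block(id) => migrate(id)
      }
    }
  }
}
```

With this shape, stopping a consumer means offering one pill per thread instead of flipping a `@volatile` flag and waiting out the one-second sleep.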
########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. 
This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. + */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. 
+ val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. + } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) + case None => + logError(s"Error ${e} while waiting for block to migrate") + } + } + } + } + + // Shuffles which are either in queue for migrations or migrated + private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]() + + // Shuffles which are queued for migration + private[storage] val shufflesToMigrate = + new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]() + + @volatile private var stopped = false + + private val migrationPeers = + mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]() + + private lazy val blockMigrationExecutor = + ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission") + + private val blockMigrationRunnable = new Runnable { + val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL) + + override def run(): Unit = { + if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) && + !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + 
logWarning("Decommissioning, but no task configured set one or both:\n" + + s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" + + s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}") + stopped = true + } + while (!stopped && !Thread.interrupted()) { + logInfo("Iterating on migrating from the block manager.") + try { + // If enabled we migrate shuffle blocks first as they are more expensive. Review comment: I think that this comment is not very accurate: The migrations are always going on in parallel. All we do here is to start the shuffle migrations first (But they can totally contend with the (cached) rdd migrations) ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,280 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+ */ + private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable { + @volatile var running = true + override def run(): Unit = { + var migrating: Option[ShuffleBlockInfo] = None + logInfo(s"Starting migration thread for ${peer}") + // Once a block fails to transfer to an executor stop trying to transfer more blocks + try { + while (running && !Thread.interrupted()) { + val migrating = Option(shufflesToMigrate.poll()) + migrating match { + case None => + logDebug("Nothing to migrate") + // Nothing to do right now, but maybe a transfer will fail or a new block + // will finish being committed. + val SLEEP_TIME_SECS = 1 + Thread.sleep(SLEEP_TIME_SECS * 1000L) + case Some(shuffleBlockInfo) => + logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}") + val blocks = + bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo) + logInfo(s"Got migration sub-blocks ${blocks}") + blocks.foreach { case (blockId, buffer) => + logInfo(s"Migrating sub-block ${blockId}") + bm.blockTransferService.uploadBlockSync( + peer.host, + peer.port, + peer.executorId, + blockId, + buffer, + StorageLevel.DISK_ONLY, + null)// class tag, we don't need for shuffle + logDebug(s"Migrated sub block ${blockId}") + } + logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}") + } + } + // This catch is intentionally outside of the while running block. + // if we encounter errors migrating to an executor we want to stop. 
+ } catch { + case e: Exception => + migrating match { + case Some(shuffleMap) => + logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue") + shufflesToMigrate.add(shuffleMap) + case None => + logError(s"Error ${e} while waiting for block to migrate") + } + } + } + } + + // Shuffles which are either in queue for migrations or migrated + private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]() + + // Shuffles which are queued for migration + private[storage] val shufflesToMigrate = + new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]() + + @volatile private var stopped = false + + private val migrationPeers = + mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]() + + private lazy val blockMigrationExecutor = + ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission") + + private val blockMigrationRunnable = new Runnable { + val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL) + + override def run(): Unit = { + if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) && + !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logWarning("Decommissioning, but no task configured set one or both:\n" + + s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" + + s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}") + stopped = true + } + while (!stopped && !Thread.interrupted()) { + logInfo("Iterating on migrating from the block manager.") + try { + // If enabled we migrate shuffle blocks first as they are more expensive. 
+ if (conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all shuffle blocks") + offloadShuffleBlocks() + logInfo("Done starting workers to migrate shuffle blocks") + } + if (conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED)) { + logDebug("Attempting to replicate all cached RDD blocks") + decommissionRddCacheBlocks() + logInfo("Attempt to replicate all cached blocks done") + } + logInfo(s"Waiting for ${sleepInterval} before refreshing migrations.") + Thread.sleep(sleepInterval) + } catch { + case e: InterruptedException => + logInfo("Interrupted during migration, will not refresh migrations.") + stopped = true + case NonFatal(e) => + logError("Error occurred while trying to replicate for block manager decommissioning.", + e) + stopped = true + } + } + } + } + + lazy val shuffleMigrationPool = ThreadUtils.newDaemonCachedThreadPool( + "migrate-shuffles", + conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_MAX_THREADS)) + /** + * Tries to offload all shuffle blocks that are registered with the shuffle service locally. + * Note: this does not delete the shuffle files in-case there is an in-progress fetch + * but rather shadows them. + * Requires an Indexed based shuffle resolver. + * Note: if called in testing please call stopOffloadingShuffleBlocks to avoid thread leakage. 
+ */ + private[storage] def offloadShuffleBlocks(): Unit = { + // Update the queue of shuffles to be migrated + logInfo("Offloading shuffle blocks") + val localShuffles = bm.migratableResolver.getStoredShuffles() + val newShufflesToMigrate = localShuffles.&~(migratingShuffles).toSeq + shufflesToMigrate.addAll(newShufflesToMigrate.asJava) + migratingShuffles ++= newShufflesToMigrate + + // Update the threads doing migrations + // TODO: Sort & only start as many threads as min(||blocks||, ||targets||) using location pref + val livePeerSet = bm.getPeers(false).toSet + val currentPeerSet = migrationPeers.keys.toSet + val deadPeers = currentPeerSet.&~(livePeerSet) Review comment: I am not sure if it is just me, but I found this way of doing set subtraction a bit hard to follow. I am wondering what you think of doing `currentPeerSet.diff(livePeerSet)` ? ########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ########## @@ -0,0 +1,281 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License.
+ */ + +package org.apache.spark.storage + +import java.util.concurrent.ExecutorService + +import scala.collection.JavaConverters._ +import scala.collection.mutable +import scala.util.control.NonFatal + +import org.apache.spark._ +import org.apache.spark.internal.Logging +import org.apache.spark.internal.config +import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo} +import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock +import org.apache.spark.util.ThreadUtils + +/** + * Class to handle block manager decommissioning retries. + * It creates a Thread to retry offloading all RDD cache and Shuffle blocks + */ +private[storage] class BlockManagerDecommissioner( + conf: SparkConf, + bm: BlockManager) extends Logging { + + private val maxReplicationFailuresForDecommission = + conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK) + + /** + * This runnable consumes any shuffle blocks in the queue for migration. This part of a + * producer/consumer where the main migration loop updates the queue of blocks to be migrated + * periodically. On migration failure, the current thread will reinsert the block for another + * thread to consume. Each thread migrates blocks to a different particular executor to avoid + * distribute the blocks as quickly as possible without overwhelming any particular executor. + * + * There is no preference for which peer a given block is migrated to. + * This is notable different than the RDD cache block migration (further down in this file) + * which uses the existing priority mechanism for determining where to replicate blocks to. + * Generally speaking cache blocks are less impactful as they normally represent narrow + * transformations and we normally have less cache present than shuffle data. + * + * The producer/consumer model is chosen for shuffle block migration to maximize + * the chance of migrating all shuffle blocks before the executor is forced to exit. 
+   */
+  private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable {
+    @volatile var running = true
+    override def run(): Unit = {
+      var migrating: Option[ShuffleBlockInfo] = None
+      logInfo(s"Starting migration thread for ${peer}")
+      // Once a block fails to transfer to an executor, stop trying to transfer more blocks
+      try {
+        while (running && !Thread.interrupted()) {
+          val migrating = Option(shufflesToMigrate.poll())
+          migrating match {
+            case None =>
+              logInfo("Nothing to migrate")
+              // Nothing to do right now, but maybe a transfer will fail or a new block
+              // will finish being committed.
+              val SLEEP_TIME_SECS = 1
+              Thread.sleep(SLEEP_TIME_SECS * 1000L)
+            case Some(shuffleBlockInfo) =>
+              logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}")
+              val blocks =
+                bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo)
+              logInfo(s"Got migration sub-blocks ${blocks}")
+              blocks.foreach { case (blockId, buffer) =>
+                logInfo(s"Migrating sub-block ${blockId}")
+                bm.blockTransferService.uploadBlockSync(
+                  peer.host,
+                  peer.port,
+                  peer.executorId,
+                  blockId,
+                  buffer,
+                  StorageLevel.DISK_ONLY,
+                  null) // class tag, not needed for shuffle
+                logInfo(s"Migrated sub-block ${blockId}")
+              }
+              logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}")
+          }
+        }
+        // This catch is intentionally outside of the while-running block:
+        // if we encounter errors migrating to an executor we want to stop.
+      } catch {
+        case e: Exception =>
+          migrating match {
+            case Some(shuffleMap) =>
+              logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue")
+              shufflesToMigrate.add(shuffleMap)
+            case None =>
+              logError(s"Error ${e} while waiting for block to migrate")

Review comment: Same comment as above: please consider passing the full exception `e` into logError.
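The reviewer's point can be illustrated with a small standalone sketch. Spark's `Logging` trait does provide a two-argument `logError(msg, throwable)` overload; the `format` helpers below are hypothetical stand-ins used only to contrast interpolating the exception into the message with passing the `Throwable` itself:

```scala
// Standalone sketch (not Spark code): contrast interpolating an exception
// into a log message with passing the Throwable to the logger.
object LogErrorSketch {
  // Interpolated form: only e.toString survives; the stack trace is lost.
  def format(msg: String): String = s"ERROR $msg"

  // Throwable form: a real logger (e.g. Spark's Logging trait, or Log4j/SLF4J
  // underneath it) appends the full stack trace taken from the Throwable.
  def format(msg: String, t: Throwable): String = {
    val trace = t.getStackTrace.map(f => s"\tat $f").mkString("\n")
    s"ERROR $msg\n$t\n$trace"
  }

  def main(args: Array[String]): Unit = {
    val e = new RuntimeException("connection reset by peer")
    println(format(s"Error ${e} during migration")) // message only
    println(format("Error during migration", e))    // message plus stack trace
  }
}
```

With the second form the log entry retains the full stack trace, which makes intermittent migration failures much easier to diagnose.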
########## File path: core/src/main/scala/org/apache/spark/storage/BlockManagerDecommissioner.scala ##########
@@ -0,0 +1,280 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.storage
+
+import java.util.concurrent.ExecutorService
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable
+import scala.util.control.NonFatal
+
+import org.apache.spark._
+import org.apache.spark.internal.Logging
+import org.apache.spark.internal.config
+import org.apache.spark.shuffle.{MigratableResolver, ShuffleBlockInfo}
+import org.apache.spark.storage.BlockManagerMessages.ReplicateBlock
+import org.apache.spark.util.ThreadUtils
+
+/**
+ * Class to handle block manager decommissioning retries.
+ * It creates a Thread to retry offloading all RDD cache and shuffle blocks.
+ */
+private[storage] class BlockManagerDecommissioner(
+    conf: SparkConf,
+    bm: BlockManager) extends Logging {
+
+  private val maxReplicationFailuresForDecommission =
+    conf.get(config.STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK)
+
+  /**
+   * This runnable consumes any shuffle blocks in the queue for migration. This is part of a
+   * producer/consumer scheme in which the main migration loop periodically updates the queue
+   * of blocks to be migrated. On migration failure, the current thread will reinsert the block
+   * for another thread to consume. Each thread migrates blocks to a different executor in order
+   * to distribute the blocks as quickly as possible without overwhelming any particular executor.
+   *
+   * There is no preference for which peer a given block is migrated to.
+   * This is notably different from the RDD cache block migration (further down in this file),
+   * which uses the existing priority mechanism for determining where to replicate blocks to.
+   * Generally speaking, cache blocks are less impactful as they normally represent narrow
+   * transformations and we normally have less cache present than shuffle data.
+   *
+   * The producer/consumer model is chosen for shuffle block migration to maximize
+   * the chance of migrating all shuffle blocks before the executor is forced to exit.
+   */
+  private class ShuffleMigrationRunnable(peer: BlockManagerId) extends Runnable {
+    @volatile var running = true
+    override def run(): Unit = {
+      var migrating: Option[ShuffleBlockInfo] = None
+      logInfo(s"Starting migration thread for ${peer}")
+      // Once a block fails to transfer to an executor, stop trying to transfer more blocks
+      try {
+        while (running && !Thread.interrupted()) {
+          val migrating = Option(shufflesToMigrate.poll())
+          migrating match {
+            case None =>
+              logDebug("Nothing to migrate")
+              // Nothing to do right now, but maybe a transfer will fail or a new block
+              // will finish being committed.
+              val SLEEP_TIME_SECS = 1
+              Thread.sleep(SLEEP_TIME_SECS * 1000L)
+            case Some(shuffleBlockInfo) =>
+              logInfo(s"Trying to migrate shuffle ${shuffleBlockInfo} to ${peer}")
+              val blocks =
+                bm.migratableResolver.getMigrationBlocks(shuffleBlockInfo)
+              logInfo(s"Got migration sub-blocks ${blocks}")
+              blocks.foreach { case (blockId, buffer) =>
+                logInfo(s"Migrating sub-block ${blockId}")
+                bm.blockTransferService.uploadBlockSync(
+                  peer.host,
+                  peer.port,
+                  peer.executorId,
+                  blockId,
+                  buffer,
+                  StorageLevel.DISK_ONLY,
+                  null) // class tag, not needed for shuffle
+                logDebug(s"Migrated sub-block ${blockId}")
+              }
+              logInfo(s"Migrated ${shuffleBlockInfo} to ${peer}")
+          }
+        }
+        // This catch is intentionally outside of the while-running block:
+        // if we encounter errors migrating to an executor we want to stop.
+      } catch {
+        case e: Exception =>
+          migrating match {
+            case Some(shuffleMap) =>
+              logError(s"Error ${e} during migration, adding ${shuffleMap} back to migration queue")
+              shufflesToMigrate.add(shuffleMap)
+            case None =>
+              logError(s"Error ${e} while waiting for block to migrate")
+          }
+      }
+    }
+  }
+
+  // Shuffles which are either in queue for migration or have been migrated
+  private val migratingShuffles = mutable.HashSet[ShuffleBlockInfo]()
+
+  // Shuffles which are queued for migration
+  private[storage] val shufflesToMigrate =
+    new java.util.concurrent.ConcurrentLinkedQueue[ShuffleBlockInfo]()
+
+  @volatile private var stopped = false
+
+  private val migrationPeers =
+    mutable.HashMap[BlockManagerId, ShuffleMigrationRunnable]()
+
+  private lazy val blockMigrationExecutor =
+    ThreadUtils.newDaemonSingleThreadExecutor("block-manager-decommission")
+
+  private val blockMigrationRunnable = new Runnable {
+    val sleepInterval = conf.get(config.STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL)
+
+    override def run(): Unit = {
+      if (!conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED) &&
+          !conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) {
+        logWarning("Decommissioning, but no migration task configured; set one or both:\n" +
+          s"${config.STORAGE_RDD_DECOMMISSION_ENABLED.key}\n" +
+          s"${config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED.key}")
+        stopped = true
+      }
+      while (!stopped && !Thread.interrupted()) {
+        logInfo("Iterating on migrating from the block manager.")
+        try {
+          // If enabled, we migrate shuffle blocks first as they are more expensive.
+          if (conf.get(config.STORAGE_SHUFFLE_DECOMMISSION_ENABLED)) {
+            logDebug("Attempting to replicate all shuffle blocks")
+            offloadShuffleBlocks()
+            logInfo("Done starting workers to migrate shuffle blocks")
+          }
+          if (conf.get(config.STORAGE_RDD_DECOMMISSION_ENABLED)) {
+            logDebug("Attempting to replicate all cached RDD blocks")
+            decommissionRddCacheBlocks()
+            logInfo("Attempt to replicate all cached blocks done")
+          }
+          logInfo(s"Waiting for ${sleepInterval} before refreshing migrations.")
+          Thread.sleep(sleepInterval)

Review comment: The default setting of STORAGE_DECOMMISSION_REPLICATION_REATTEMPT_INTERVAL is 30 seconds. I think that is too high, considering that we should ideally start migrating new blocks as soon as they are written. Should we reduce it? In addition, the `decommissionRddCacheBlocks` method will (I believe) block synchronously until all RDD blocks have been migrated, which loses further time. When a node is being decommissioned, it does not have a lot of time: spot kills, for example, give about 2 minutes to clean up. So I think we should make sure the decommissioner goes as fast as it needs to. I am wondering if there should be two runnables here: one for shuffle blocks and one for persisted blocks?
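To make the "two runnables" idea concrete, here is a rough standalone sketch (not the PR's code): the two function parameters are hypothetical stand-ins for the PR's `offloadShuffleBlocks()` and `decommissionRddCacheBlocks()`, scheduled on independent threads so that a slow, synchronous RDD replication pass cannot hold up shuffle block migration.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Sketch: run shuffle and RDD cache migration on independent schedules so a
// long synchronous RDD replication pass cannot delay shuffle migration.
// migrateShuffleBlocks/migrateRddBlocks are hypothetical stand-ins for the
// PR's offloadShuffleBlocks() and decommissionRddCacheBlocks().
object DecommissionLoops {
  private val pool = Executors.newScheduledThreadPool(2)

  def start(
      migrateShuffleBlocks: () => Unit,
      migrateRddBlocks: () => Unit,
      intervalMs: Long): Unit = {
    val shuffleTask: Runnable = () => migrateShuffleBlocks()
    val rddTask: Runnable = () => migrateRddBlocks()
    // scheduleWithFixedDelay waits intervalMs AFTER each pass completes, so
    // each loop keeps its own cadence regardless of how long the other takes.
    pool.scheduleWithFixedDelay(shuffleTask, 0L, intervalMs, TimeUnit.MILLISECONDS)
    pool.scheduleWithFixedDelay(rddTask, 0L, intervalMs, TimeUnit.MILLISECONDS)
  }

  def stop(): Unit = pool.shutdownNow()
}
```

One caveat with `scheduleWithFixedDelay`: an uncaught exception suppresses further executions of that task, so each pass would still want its own try/catch, as the PR's loop already does.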
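Circling back to the set-subtraction comment on `offloadShuffleBlocks` above: on Scala sets, `&~` is simply the symbolic alias of `diff`, so the suggested rename is purely a readability win. A quick sketch (the executor IDs are made-up example values):

```scala
// Demonstrates that Set.&~ and Set.diff compute the same set difference.
object SetDiffSketch {
  def main(args: Array[String]): Unit = {
    val currentPeerSet = Set("exec-1", "exec-2", "exec-3")
    val livePeerSet = Set("exec-2", "exec-3", "exec-4")

    // Peers we were migrating to that are no longer alive.
    val deadPeers = currentPeerSet.diff(livePeerSet)

    println(deadPeers)                                   // Set(exec-1)
    assert(deadPeers == (currentPeerSet &~ livePeerSet)) // alias, same result
  }
}
```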