Github user colorant commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1241#discussion_r14696400

    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -72,23 +74,48 @@ private[spark] class BlockManager(

       // Actual storage of where blocks are kept
       private var tachyonInitialized = false
       private[storage] val memoryStore = new MemoryStore(this, maxMemory)
    -  private[storage] val diskStore = new DiskStore(this, diskBlockManager)
    +  private[spark] val diskStore = new DiskStore(this, diskBlockManager)
       private[storage] lazy val tachyonStore: TachyonStore = {
         val storeDir = conf.get("spark.tachyonStore.baseDir", "/tmp_spark_tachyon")
         val appFolderName = conf.get("spark.tachyonStore.folderName")
         val tachyonStorePath = s"$storeDir/$appFolderName/${this.executorId}"
         val tachyonMaster = conf.get("spark.tachyonStore.url", "tachyon://localhost:19998")
         val tachyonBlockManager =
    -      new TachyonBlockManager(shuffleBlockManager, tachyonStorePath, tachyonMaster)
    +      new TachyonBlockManager(this, tachyonStorePath, tachyonMaster)
         tachyonInitialized = true
         new TachyonStore(this, tachyonBlockManager)
       }

    +  val shuffleManager = {
    --- End diff --

    The problem here is that the initialization order of the original ShuffleManager and BlockManager is not defined, and the two have mutual dependencies: ShuffleManager needs to access some members of BlockManager, and some of BlockManager's members access ShuffleManager. So I made it work the same way as the ConnectionManager, i.e. initialized together with the BlockManager. If it is not initialized with the BlockManager, we might need to enforce an initialization order in SparkEnv, and add extra code to make sure that whenever they access each other, the instance in SparkEnv has already been initialized.
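To illustrate the point being made (not Spark's actual classes): when two components reference each other, constructing one inside the other, or resolving the reference lazily, removes the need to enforce a construction order externally. A minimal sketch, with hypothetical names (`rootDir`, `shuffleDir`):

```scala
// Hedged sketch of breaking a mutual dependency via a by-name parameter
// and a lazy val. Names are illustrative, not Spark's real API.
class ShuffleManager(bm: => BlockManager) {
  // Resolved on first use, so BlockManager can finish constructing first.
  lazy val blockManager: BlockManager = bm
  def shuffleDir: String = blockManager.rootDir + "/shuffle"
}

class BlockManager(val rootDir: String) {
  // ShuffleManager is created inside BlockManager, so no external
  // initialization order (e.g. in a SparkEnv-like registry) is needed.
  val shuffleManager = new ShuffleManager(this)
}

object Demo extends App {
  val bm = new BlockManager("/tmp/spark")
  println(bm.shuffleManager.shuffleDir) // /tmp/spark/shuffle
}
```

The alternative the comment mentions, keeping both instances in SparkEnv, would require every cross-access to check that the other instance had already been wired up.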