[ https://issues.apache.org/jira/browse/SPARK-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121125#comment-16121125 ]

Saisai Shao commented on SPARK-21660:
-------------------------------------

Will the YARN NM handle this bad-disk problem and return a good disk for the 
recoveryPath? I would guess YARN should handle this problem.

> Yarn ShuffleService failed to start when the chosen directory becomes read-only
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-21660
>                 URL: https://issues.apache.org/jira/browse/SPARK-21660
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle, YARN
>    Affects Versions: 2.1.1
>            Reporter: lishuming
>
> h3. Background
> In our production environment, disks drop to `read-only` status almost once 
> a month. The current strategy by which the YARN ShuffleService chooses an 
> available directory (disk) to store its shuffle info (DB) is as 
> below (https://github.com/apache/spark/blob/master/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java#L340):
> 1. If the NodeManager's recoveryPath is not empty and the shuffle DB exists 
> under it, return the recoveryPath;
> 2. If the recoveryPath is empty and the shuffle DB exists under one of 
> `yarn.nodemanager.local-dirs`, set the recoveryPath to that existing DB path 
> and return it;
> 3. If the recoveryPath is not empty but the shuffle DB does not exist under 
> it, and the DB exists under `yarn.nodemanager.local-dirs`, move the existing 
> shuffle DB into the recoveryPath and return that path;
> 4. If none of the above applies, choose the first disk of 
> `yarn.nodemanager.local-dirs` as the recoveryPath (the whole flow is 
> sketched below).
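> For reference, a minimal standalone sketch of the four steps above; the 
> class and method names here are illustrative, not the actual Spark code:
> {code:java}
> import java.io.File;
> import java.io.IOException;
> import java.nio.file.Files;
> 
> public class RecoveryPathSketch {
>   /** Picks where the shuffle DB lives, following steps 1-4 above. */
>   static File chooseDbFile(File recoveryPath, File[] localDirs, String dbName)
>       throws IOException {
>     // 1. Recovery path set and DB already under it: use it.
>     if (recoveryPath != null && new File(recoveryPath, dbName).exists()) {
>       return new File(recoveryPath, dbName);
>     }
>     // 2./3. Look for an existing DB under yarn.nodemanager.local-dirs.
>     for (File dir : localDirs) {
>       File existing = new File(dir, dbName);
>       if (existing.exists()) {
>         if (recoveryPath == null) {
>           // 2. No recovery path yet: keep the DB where it is.
>           return existing;
>         }
>         // 3. Recovery path set but DB elsewhere: move the DB into it.
>         File target = new File(recoveryPath, dbName);
>         Files.move(existing.toPath(), target.toPath());
>         return target;
>       }
>     }
>     // 4. Nothing found: fall back to the first local dir. No step checks
>     // whether the chosen directory is writable.
>     File base = (recoveryPath != null) ? recoveryPath : localDirs[0];
>     return new File(base, dbName);
>   }
> }
> {code}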
> None of these steps checks whether the chosen directory (disk) is writable, 
> so in our environment we hit exceptions like this:
> {code:java}
> 2017-06-25 07:15:43,512 ERROR org.apache.spark.network.util.LevelDBProvider: error opening leveldb file /mnt/dfs/12/yarn/local/registeredExecutors.ldb. Creating new file, will not be able to recover state for existing applications
> at org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:48)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:116)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:94)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.<init>(ExternalShuffleBlockHandler.java:66)
> at org.apache.spark.network.yarn.YarnShuffleService.serviceInit(YarnShuffleService.java:167)
> 2017-06-25 07:15:43,514 WARN org.apache.spark.network.util.LevelDBProvider: error deleting /mnt/dfs/12/yarn/local/registeredExecutors.ldb
> 2017-06-25 07:15:43,515 INFO org.apache.hadoop.service.AbstractService: Service spark_shuffle failed in state INITED; cause: java.io.IOException: Unable to create state store
> at org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:77)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:116)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockResolver.<init>(ExternalShuffleBlockResolver.java:94)
> at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.<init>(ExternalShuffleBlockHandler.java:66)
> at org.apache.spark.network.yarn.YarnShuffleService.serviceInit(YarnShuffleService.java:167)
> at org.apache.spark.network.util.LevelDBProvider.initLevelDB(LevelDBProvider.java:75)
> {code}
> h3. Consideration
> 1. In many production environments, `yarn.nodemanager.local-dirs` contains 
> more than one disk, so a better selection strategy could avoid the problem 
> above;
> 2. Can we add a check that the chosen DB directory is writable, to avoid the 
> problem above? (One possible check is sketched below.)
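> To illustrate consideration 2, one possible writability check (illustrative 
> only, not part of Spark). It probes by actually creating a file, since 
> `File.canWrite()` may still return true for a disk that was remounted 
> read-only:
> {code:java}
> import java.io.File;
> import java.io.IOException;
> 
> public class WritableDirCheck {
>   /** Probes writability by creating and deleting a temp file. */
>   static boolean isWritable(File dir) {
>     try {
>       File probe = File.createTempFile("probe", ".tmp", dir);
>       probe.delete();
>       return true;
>     } catch (IOException e) {
>       return false; // e.g. write fails because the disk went read-only
>     }
>   }
> 
>   /** Returns the first writable dir from the configured local dirs, or null. */
>   static File firstWritableDir(File[] localDirs) {
>     for (File dir : localDirs) {
>       if (dir.isDirectory() && isWritable(dir)) {
>         return dir;
>       }
>     }
>     return null;
>   }
> }
> {code}
> Such a check could run both when validating an existing recoveryPath and 
> before falling back to the first local dir in step 4.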


