I recently upgraded from CDH 5.1.2 to CDH 5.3.0. I know, contact Cloudera, but this is actually a generic issue. After the upgrade I brought the DNs back up, and after all of them had checked in I ended up with missing blocks. I tracked this down in the DN logs to an error at startup: the DN fails to create subdirectories in BlockPoolSliceStorage.doUpgrade(). The directory structure changed with HDFS-6482, and the DN now pre-creates all of the subdirectories at startup. If a disk is nearly full, creating those subdirectories consumes the remaining space and then fails partway through. If the configuration tolerates failed drives (dfs.datanode.failed.volumes.tolerated > 0), the DN starts without the now-full disk and reports all of the blocks except the ones on that disk.
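For anyone not familiar with the new layout, here is my understanding of how it maps a block to a directory, as a standalone Java sketch. The bit positions and the "subdir" naming are my reading of HDFS-6482 and the trunk DatanodeUtil code, not something I have verified against the CDH 5.3 bits, and the path in main() is purely illustrative:

    import java.io.File;

    // Sketch only: two fixed levels of 256 subdirectories each, chosen from
    // bits of the block ID, so a block's directory can be computed directly
    // instead of being tracked per replica.
    public class BlockDirSketch {

        static File idToBlockDir(File finalizedDir, long blockId) {
            int d1 = (int) ((blockId >> 16) & 0xFF); // first level: subdir0..subdir255
            int d2 = (int) ((blockId >> 8) & 0xFF);  // second level: subdir0..subdir255
            return new File(finalizedDir, "subdir" + d1 + File.separator + "subdir" + d2);
        }

        public static void main(String[] args) {
            File finalized =
                new File("/data/1/dfs/dn/current/BP-example/current/finalized");
            // Prints the two-level subdir path the block with this ID would live under.
            System.out.println(idToBlockDir(finalized, 1073745123L));
        }
    }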
I didn't find any warning about this in the Apache release notes; one might be useful for people in a similar situation. For the Cloudera folks on this list: there is no warning or note in your upgrade instructions that I could find either. Some questions:

1. How much free space is needed per disk to pre-create the directory structure, and does it depend on the type of filesystem? I calculated 256MB from my reading of the ticket (see the arithmetic after these questions), but I may have misunderstood something.

2. Now that block locations are calculated from the block id, are there restrictions on where blocks can be placed? I assume the location is not verified on read, for backwards compatibility; if that is not true, someone needs to comment on HDFS-1312 that the older utilities can no longer be used. I need to move blocks from the full disks to other locations, so I'm looking for any restrictions on doing that.
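Regarding question 1, here is the arithmetic behind my 256MB figure so someone can check the assumptions. The 4 KB per empty directory is an ext4-style default I assumed, and the total is per block pool, per data directory:

    public class UpgradeSpaceEstimate {
        public static void main(String[] args) {
            long firstLevel = 256;                 // subdir0..subdir255
            long secondLevel = 256;                // subdir0..subdir255 under each of those
            long dirs = firstLevel * secondLevel;  // 65,536 directories per block pool per volume
            long bytesPerDir = 4096;               // assumption: one 4 KB fs block per empty directory
            long total = dirs * bytesPerDir;       // 268,435,456 bytes
            System.out.printf("%,d dirs x %,d bytes = %,d bytes (~%d MB)%n",
                    dirs, bytesPerDir, total, total / (1024 * 1024));
            // = exactly 256 MB before any filesystem metadata overhead; a filesystem
            // that allocates more per directory would need proportionally more.
        }
    }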