anoopsjohn commented on a change in pull request #2006: URL: https://github.com/apache/hbase/pull/2006#discussion_r450656251
########## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterWalManager.java ##########
@@ -360,11 +372,13 @@ public void splitLog(final Set<ServerName> serverNames, PathFilter filter) throw
 }
 /**
- * For meta region open and closed normally on a server, it may leave some meta
- * WAL in the server's wal dir. Since meta region is no long on this server,
- * The SCP won't split those meta wals, just leaving them there. So deleting
- * the wal dir will fail since the dir is not empty. Actually We can safely achive those
- * meta log and Archiving the meta log and delete the dir.
+ * The hbase:meta region may OPEN and CLOSE without issue on a server and then move elsewhere.
+ * On CLOSE, the WAL for the hbase:meta table may not be archived yet (the WAL is only needed if
+ * hbase:meta did not close cleanly). Since the hbase:meta region is no longer on this server,
+ * the ServerCrashProcedure won't split these leftover hbase:meta WALs, just leaving them in
+ * the WAL splitting dir. If we try to delete the WAL splitting dir for the server, it fails since
+ * the dir is not totally empty. We can safely archive these hbase:meta WALs; then the
+ * WAL dir can be deleted.

Review comment:
Ya, this was an issue in the ZK-based split too, and I guess it was fixed later. One question: when the META region was moved to another RS, the meta WAL file would have been closed, right? Later the file would have been archived by the chore; that is how normal WAL files get archived. So is that not happening at all for META WALs? Or is it happening, but in the test the chore did not get a chance to archive it yet? Either way this fix is needed; I am just trying to see whether we have another issue as well.
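To make the archive-then-delete idea in the new javadoc concrete, here is a minimal sketch using plain Hadoop FileSystem calls. The class, the method name, the directory parameters, and the `.meta` suffix check are assumptions for illustration only; the real logic lives in MasterWalManager and HBase's WAL/archive helpers, and behaves differently in detail.

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Illustration only: move any leftover hbase:meta WAL files out of a dead or
 * moved server's WAL directory into an archive directory, then delete the
 * (now empty) WAL directory. Not the actual MasterWalManager API.
 */
public final class MetaWalArchiveSketch {

  // Assumed suffix used by meta WAL file names; verify against the WAL provider in use.
  private static final String META_WAL_SUFFIX = ".meta";

  private MetaWalArchiveSketch() {
  }

  public static void archiveMetaWalsAndDeleteDir(FileSystem fs, Path serverWalDir,
      Path walArchiveDir) throws IOException {
    if (!fs.exists(serverWalDir)) {
      return; // nothing to do
    }
    if (!fs.exists(walArchiveDir) && !fs.mkdirs(walArchiveDir)) {
      throw new IOException("Failed to create archive dir " + walArchiveDir);
    }
    for (FileStatus status : fs.listStatus(serverWalDir)) {
      Path wal = status.getPath();
      // Only meta WALs are expected to be left behind after a clean close;
      // anything else would still need log splitting first.
      if (wal.getName().endsWith(META_WAL_SUFFIX)) {
        Path target = new Path(walArchiveDir, wal.getName());
        if (!fs.rename(wal, target)) {
          throw new IOException("Failed to archive " + wal + " to " + target);
        }
      }
    }
    // Non-recursive delete on purpose: it only succeeds if the dir is really
    // empty, i.e. every remaining WAL has been split or archived.
    if (!fs.delete(serverWalDir, false)) {
      throw new IOException("WAL dir " + serverWalDir + " is not empty, not deleting");
    }
  }
}
```

The non-recursive delete mirrors the situation described in the javadoc: if the leftover meta WAL were not archived first, the delete of the server's WAL dir would fail because the dir is not empty.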