[ https://issues.apache.org/jira/browse/HBASE-12465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14576581#comment-14576581 ]

Clay B. commented on HBASE-12465:
---------------------------------

In case it's of use to Jeffrey, Alicia, et al., the errors we (Biju and Sudarshan) saw were:

Fatal to the running HMaster, as well as to any HMaster subsequently trying to start:
{code}
2015-06-06 02:08:52,374 ERROR org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e/f2806ebb34ab493ebe4b623fac585776_SeqId_385900210_
2015-06-06 02:08:52,374 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Couldn't archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e/f2806ebb34ab493ebe4b623fac585776_SeqId_385900210_ into backup directory: hdfs://cluster1/hbase/archive/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e
2015-06-06 02:08:52,374 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to complete archive of: [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e/f2806ebb34ab493ebe4b623fac585776_SeqId_385900210_]. Those files are still in the original location, and they may slow down reads.
2015-06-06 02:08:52,374 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e]), cannot delete region directory.
        at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkTempDir(MasterFileSystem.java:503)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:149)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:789)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
        at java.lang.Thread.run(Thread.java:745)
2015-06-06 02:08:52,375 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2015-06-06 02:08:52,375 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
[... more shutting down here ...]
2015-06-06 02:28:15,583 DEBUG org.apache.hadoop.hbase.master.ActiveMasterManager: A master is now available
2015-06-06 02:28:15,583 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Registered Active Master=cluster1-bcpc-r2n7.example.com,60000,1433572093679
2015-06-06 02:28:15,588 INFO org.apache.hadoop.conf.Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2015-06-06 02:28:15,968 INFO org.apache.hadoop.conf.Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2015-06-06 02:28:16,164 DEBUG org.apache.hadoop.hbase.util.FSTableDescriptors: Current tableInfoPath = hdfs://cluster1/hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2015-06-06 02:28:16,195 DEBUG org.apache.hadoop.hbase.util.FSTableDescriptors: TableInfo already exists.. Skipping creation
2015-06-06 02:28:17,850 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: ARCHIVING hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf
2015-06-06 02:28:17,860 DEBUG org.apache.hadoop.hbase.backup.HFileArchiver: Archiving [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e]
2015-06-06 02:28:17,885 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://cluster1/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e/f2806ebb34ab493ebe4b623fac585776_SeqId_385900210_ on try #0
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/.tmp/data/default/xyxyTablenamE12345/002cd7fbf10def3bb3149ed85707fabf/e/f2806ebb34ab493ebe4b623fac585776_SeqId_385900210_":userName:supergroup:-rwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:164)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5184)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5146)
[... more file permission failures ...]
{code}
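
The AccessControlException at the end is the root of the cascade: the bulk-loaded HFile left under /hbase/.tmp is owned by the loading user (userName above, devuser in the original report) rather than hbase, so the archiver has no write access to it during checkTempDir and the master aborts. Purely as an illustration (not part of any patch here), a minimal sketch using the Hadoop FileSystem API to list the owner of everything left under /hbase/.tmp; the class name and the hard-coded path are assumptions based on the layout in the log:
{code}
// Hypothetical diagnostic sketch: list owner/permissions of files left under
// /hbase/.tmp so anything not owned by "hbase" stands out. Not part of the ticket.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class TmpOwnerCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Path assumed from the log above; adjust to the cluster's hbase.rootdir.
    Path tmpData = new Path("/hbase/.tmp/data");
    RemoteIterator<LocatedFileStatus> files = fs.listFiles(tmpData, true);
    while (files.hasNext()) {
      LocatedFileStatus f = files.next();
      // Flag files the hbase user does not own; these are the ones the
      // archiver fails on with AccessControlException.
      String marker = "hbase".equals(f.getOwner()) ? "   " : ">> ";
      System.out.println(marker + f.getPath() + " owner=" + f.getOwner()
          + " perms=" + f.getPermission());
    }
  }
}
{code}
Anything flagged in that listing is what the master trips over when it tries to clean /hbase/.tmp on startup.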

> HBase master start fails due to incorrect file creations
> --------------------------------------------------------
>
>                 Key: HBASE-12465
>                 URL: https://issues.apache.org/jira/browse/HBASE-12465
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.96.0
>         Environment: Ubuntu
>            Reporter: Biju Nair
>            Assignee: Alicia Ying Shu
>              Labels: hbase, hbase-bulkload
>
> - Start of HBase master fails due to the following error found in the log.
> 2014-11-11 20:25:58,860 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://YYYY/hbase/.tmp/data/default/tbl/00820520f5cb7839395e83f40c8d97c2/e/52bf9eee7a27460c8d9e2a26fa43c918_SeqId_282271246_ on try #1
> org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase/.tmp/data/default/tbl/00820520f5cb7839395e83f40c8d97c2/e/52bf9eee7a27460c8d9e2a26fa43c918_SeqId_282271246_":devuser:supergroup:-rwxr-xr-x
> - All the files that the HBase master was complaining about were created under a user's user-id instead of the "hbase" user, resulting in incorrect access permissions for the master to act on them.
> - This looks like it was due to a bulk load done using the LoadIncrementalHFiles program.
> - HBASE-12052 is another scenario similar to this one. 
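
Regarding the LoadIncrementalHFiles note in the description above: a common operational workaround (separate from whatever fix this JIRA settles on) is to hand ownership of the leftover .tmp HFiles back to the hbase user so the archiver can complete and the master can start. A minimal sketch along those lines, assuming HDFS superuser privileges and the same hypothetical path as in the earlier sketch:
{code}
// Hypothetical workaround sketch: chown leftover bulk-loaded HFiles under
// /hbase/.tmp back to the hbase user. Requires HDFS superuser privileges;
// the path and the "hbase" user/group names are assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ChownTmpHFiles {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    RemoteIterator<LocatedFileStatus> files =
        fs.listFiles(new Path("/hbase/.tmp/data"), true);
    while (files.hasNext()) {
      LocatedFileStatus f = files.next();
      if (!"hbase".equals(f.getOwner())) {
        // Give the master's user write access so HFileArchiver can move the
        // file into /hbase/archive and delete the region temp directory.
        fs.setOwner(f.getPath(), "hbase", "hbase");
      }
    }
  }
}
{code}
An equivalent hdfs dfs -chown -R run as the HDFS superuser does the same thing; the point is only that the hbase user needs write access on those inodes before checkTempDir can succeed.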



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
