On 2019/04/08 23:11:50, Josh Elser <els...@apache.org> wrote:
>
>
> On 4/7/19 10:44 PM, melank...@synergentl.com wrote:
> >
> >
> > On 2019/04/04 15:15:37, Josh Elser <els...@apache.org> wrote:
> >> Looks like your RegionServer process might have died if you can't
> >> connect to its RPC port.
> >>
> >> Did you look in the RegionServer log for any mention of an ERROR or
> >> FATAL log message?
> >>
> >> On 4/4/19 8:20 AM, melank...@synergentl.com wrote:
> >>> I have installed a single-node Hadoop setup
> >>> (http://intellitech.pro/tutorial-hadoop-first-lab/) and HBase
> >>> (http://intellitech.pro/hbase-installation-on-ubuntu/) successfully. I am
> >>> using a Java agent to connect to HBase. After a random period of time,
> >>> HBase stops working and the Java agent gives the following error message:
> >>>
> >>> Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false,
> >>> msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection
> >>> exception:
> >>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
> >>> Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201,
> >>> details=row 'xxx,00000000001:1553904000000,99999999999999' on table
> >>> 'hbase:meta' at region=hbase:meta,,1.1588230740,
> >>> hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1
> >>> Here are the HBase and ZooKeeper logs:
> >>>
> >>> hbase-hduser-regionserver-db-2.log
> >>>
> >>> [main] zookeeper.ZooKeeperMain: Processing delete
> >>> 2019-03-30 02:11:44,089 DEBUG [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: '/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: null
> >>> hbase-hduser-zookeeper-db-2.log
> >>>
> >>> server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren cxid:0x28e3ad zxid:0xfffffffffffffffe txntype:unknown reqpath:/hbase/splitWAL
> >>> My hbase-site.xml file is as follows:
> >>>
> >>> <configuration>
> >>>   <!-- The path where HBase stores its files. -->
> >>>   <property>
> >>>     <name>hbase.rootdir</name>
> >>>     <value>hdfs://localhost:9000/hbase</value>
> >>>   </property>
> >>>   <property>
> >>>     <name>hbase.zookeeper.quorum</name>
> >>>     <value>localhost</value>
> >>>   </property>
> >>>   <!-- The path where HBase stores its built-in ZooKeeper files. -->
> >>>   <property>
> >>>     <name>hbase.zookeeper.property.dataDir</name>
> >>>     <value>${hbase.tmp.dir}/zookeeper</value>
> >>>   </property>
> >>>   <property>
> >>>     <name>hbase.cluster.distributed</name>
> >>>     <value>true</value>
> >>>   </property>
> >>>   <property>
> >>>     <name>hbase.zookeeper.property.clientPort</name>
> >>>     <value>2181</value>
> >>>   </property>
> >>> </configuration>
> >>> When I restart HBase it starts working again, then stops again after a
> >>> few days. I am wondering what the fix for this would be.
> >>>
> >>> Thanks.
> >>> BR,
> >>> Melanka
> >>>
> > Hi Josh,
> > Sorry for the late reply. I restarted HBase on 05/04/2019 and it was down
> > again on 06/04/2019 at 00:06 AM.
> >
> > The log from hbase-root-regionserver-db-2 is as follows:
> >
> > 2019-04-04 04:42:26,047 DEBUG [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x169d86a879b00bf, packet:: clientPath:null serverPath:null finished:false header:: 67,2 replyHeader:: 67,776370,0 request:: '/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1554352093266,-1 response:: null
> > 2019-04-04 04:42:26,047 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher: regionserver:16201-0x169d86a879b00bf, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/db-2.c.stl-cardio-dev.internal,16201,1554352093266
> > 2019-04-04 04:42:26,047 DEBUG [main-EventThread] zookeeper.ZooKeeperWatcher: regionserver:16201-0x169d86a879b00bf, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
> > 2019-04-04 04:42:26,050 DEBUG [regionserver/db-2.c.xxx-dev.internal/xx.xxx.0.21:16201] zookeeper.ZooKeeper: Closing session: 0x169d86a879b00bf
> > 2019-04-04 04:42:26,050 DEBUG [regionserver/db-2.c.xxx-dev.internal/xx.xx.0.21:16201] zookeeper.ClientCnxn: Closing client for session: 0x169d86a879b00bf
> > 2019-04-04 04:42:26,056 DEBUG [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x169d86a879b00bf, packet:: clientPath:null serverPath:null finished:false header:: 68,-11 replyHeader:: 68,776371,0 request:: null response:: null
> > 2019-04-04 04:42:26,056 DEBUG [regionserver/db-2.c.xxx-dev.internal/xx.xxx.0.21:16201] zookeeper.ClientCnxn: Disconnecting client for session: 0x169d86a879b00bf
> > 2019-04-04 04:42:26,056 INFO [regionserver/db-2.c.xxx-dev.internal/xxx.xxx.0.21:16201] zookeeper.ZooKeeper: Session: 0x169d86a879b00bf closed
> > 2019-04-04 04:42:26,056 INFO [regionserver/db-2.c.xxx-dev.internal/xxx.xxx.0.21:16201] regionserver.HRegionServer: stopping server db-2.c.xxx-dev.internal,16201,1554352093266; zookeeper connection closed.
> > 2019-04-04 04:42:26,056 INFO [regionserver/db-2.c.xxx-dev.internal/xxx.0.21:16201] regionserver.HRegionServer: regionserver/db-2.c.xxx-dev.internal/xxx.0.21:16201 exiting
> > 2019-04-04 04:42:26,056 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
> > java.lang.RuntimeException: HRegionServer Aborted
> >     at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
> >     at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >     at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
> >     at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2831)
> > 2019-04-04 04:42:26,057 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x169d86a879b00bf
> > 2019-04-04 04:42:26,063 INFO [Thread-5] regionserver.ShutdownHook: Shutdown hook starting; hbase.shutdown.hook=true; fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@35a9782c
> > 2019-04-04 04:42:26,067 INFO [Thread-5] regionserver.ShutdownHook: Starting fs shutdown hook thread.
> > 2019-04-04 04:42:26,073 INFO [Thread-5] regionserver.ShutdownHook: Shutdown hook finished.
> >
> > The log from hbase-hduser-regionserver-db-2 is as follows:
> >
> > 2019-04-05 13:48:14,734 DEBUG [main] zookeeper.ZooKeeperMain: Processing delete
> > 2019-04-05 13:48:14,754 DEBUG [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply sessionid:0x169eb8b7c230010, packet:: clientPath:null serverPath:null finished:false header:: 1,2 replyHeader:: 1,783888,0 request:: '/hbase/rs/db-2.c.xxx-dev.internal%2C16201%2C1554434982329,-1 response:: null
>
> Look harder, specifically for a FATAL message. `grep` is your friend.
>
Hi Josh,
There were FATAL logs in the following files.
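I found them with something along these lines (the log directory is from my
install, so adjust for yours):

    # log dir below is from my setup; yours may differ
    grep -l FATAL /usr/local/hbase/logs/*.log
    grep -A 40 FATAL /usr/local/hbase/logs/hbase-root-master-db-2.log
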
hbase-root-master-db-2.log
2019-04-04 04:27:14,029 FATAL [db-2:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/hbase":hduser:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:182)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3973)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3925)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3909)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:786)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2046)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:707)
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:703)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:714)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkTempDir(MasterFileSystem.java:569)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:173)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:141)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:744)
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:208)
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:2035)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/hbase":hduser:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:182)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3973)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3925)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3909)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:786)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:545)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy17.delete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:307)
    at com.sun.proxy.$Proxy18.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2044)
    ... 11 more
2019-04-04 04:27:14,044 FATAL [db-2:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/hbase":hduser:supergroup:drwxr-xr-x
    [stack trace identical to the one above]
hbase-root-regionserver-db-2.log
2019-04-04 04:42:19,337 FATAL [regionserver/db-2.c.xxx-dev.internal/10.128.0.21:16201] regionserver.HRegionServer: enode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/hbase/WALs":hduser:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3007)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2975)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
    at org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:448)
    at org.apache.hadoop.hbase.wal.DefaultWALProvider.getWAL(DefaultWALProvider.java:138)
    at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:241)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:1948)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1250)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1202)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1024)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/hbase/WALs":hduser:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:563)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:307)
    at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3005)
    ... 15 more
2019-04-04 04:42:19,345 FATAL [regionserver/db-2.c.xxx-dev.internal/10.128.0.21:16201] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2019-04-04 04:42:26,056 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:68)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2831)
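Reading these, it looks like a permissions mismatch: HBase is sometimes
being started as root, while the /hbase tree in HDFS is owned by hduser
(drwxr-xr-x), so the master cannot clean up under /hbase and the region
server cannot write to /hbase/WALs. Would the fix simply be to always
start HBase as hduser, something like this (paths are from my install)?

    # restart HBase as the user that owns /hbase in HDFS
    sudo -u hduser /usr/local/hbase/bin/stop-hbase.sh
    sudo -u hduser /usr/local/hbase/bin/start-hbase.sh

Or alternatively, would re-owning the HDFS directory to whichever user
actually starts HBase work as well?

    # run as hduser (the HDFS superuser here) to hand /hbase to root instead
    sudo -u hduser hdfs dfs -chown -R root:supergroup /hbase
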
Thanks.