Hi Ted:
I tried using the HBase shell to test dynamically adding a coprocessor, but it still fails.

[root@aven01 hbase-1.2.0]# hadoop fs -ls /
16/07/25 09:51:24 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 9 items
...
-rwxrwxrwx   1 root      supergroup    1771201 2016-07-23 14:13 /test.jar
...

[root@aven01 hbase-1.2.0]# hbase shell
2016-07-25 09:29:33,769 WARN  [main] util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/hbase-1.2.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.0, r25b281972df2f5b15c426c8963cbf77dd853a5ad, Thu Feb 18 23:01:49 
CST 2016

hbase(main):002:0> disable 'testTbl'
0 row(s) in 0.1020 seconds

hbase(main):003:0> alter 'testTbl', METHOD => 'table_att', 
'Coprocessor'=>'hdfs://aven01:50001/test.jar|org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint|1073741823|'

ERROR: org.apache.hadoop.hbase.DoNotRetryIOException: Class 
org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint cannot be 
loaded Set hbase.table.sanity.checks to false at conf or table descriptor if 
you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1680)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1541)
        at org.apache.hadoop.hbase.master.HMaster.modifyTable(HMaster.java:2028)
        at org.apache.hadoop.hbase.master.MasterRpcServices.modifyTable(MasterRpcServices.java:1170)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55680)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)

Still no new log entries in the region server...
Is it possible that this feature doesn't work in HBase 1.2?
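One thing I notice in the master DEBUG log quoted further down in this thread ("Skipping exempt class ... - delegating directly to parent"): it looks like HBase's coprocessor class loader treats class names starting with org.apache.hadoop as exempt and delegates them to the parent classloader, so a class in that package may never be read from the HDFS jar at all. A minimal sketch of how one might test that locally (illustrative only; it assumes the same conf, jar path and class name as above):

        // Load the class through CoprocessorClassLoader, the same way a
        // region server would when the table descriptor is applied. For an
        // "exempt" prefix such as org.apache.hadoop.*, the lookup is
        // delegated to the parent classloader and the jar is not consulted.
        Path jar = new Path("hdfs://aven01:50001/test.jar");
        ClassLoader cl = CoprocessorClassLoader.getClassLoader(
                jar, Thread.currentThread().getContextClassLoader(), "test", conf);
        Class<?> cp = cl.loadClass(
                "org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint");

If that throws ClassNotFoundException, repackaging the coprocessor outside org.apache.hadoop (for example a hypothetical cn.esgyn.test.TestRegionEndpoint) might be worth trying before concluding that HBase 1.2 is at fault.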

-----Original Message-----
From: Ma, Sheng-Chen (Aven) [mailto:shengchen...@esgyn.cn]
Sent: July 25, 2016 15:30
To: user@hbase.apache.org
Subject: Re: Re: Re: how to Dynamic load of Coprocessors

Hi,
I am now using root to run the code, and there is no exception in the region server log.
Still the same exception: can't load class.

And the strange thing is, even when I use shengchen.ma to run the code (I have restarted HBase & Hadoop), there is no longer any error in the region server log.


-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: July 24, 2016 0:05
To: user@hbase.apache.org
Subject: Re: Re: Re: how to Dynamic load of Coprocessors

Did you run your program as user 'hbase'?

From the region server log snippet it seems you were running as 'shengchen.ma'.

Cheers


On Sat, Jul 23, 2016 at 7:25 AM, Ma, Sheng-Chen (Aven) < shengchen...@esgyn.cn> 
wrote:

> yes, it’s the master log.
>
> 2016-07-23 14:17:53,175 INFO
> [aven01.novalocal,16088,1469275959674_ChoreService_1]
> regionserver.HRegionServer:
> aven01.novalocal,16088,1469275959674-MemstoreFlusherChore requesting 
> flush of hbase:meta,,1.1588230740 because info has an old edit so 
> flush to free WALs after random delay 126800ms
> 2016-07-23 14:17:57,520 WARN  [group-cache-0]
> security.ShellBasedUnixGroupsMapping: got exception trying to get 
> groups for user shengchen.ma ExitCodeException exitCode=1: id:
> shengchen.ma: No such user
>
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>         at org.apache.hadoop.util.Shell.run(Shell.java:455)
>         at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:774)
>         at
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:84)
>         at
> org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
>         at
> org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
>         at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
>         at
> org.apache.hadoop.hbase.security.UserProvider$1.getGroupStrings(UserProvider.java:93)
>         at
> org.apache.hadoop.hbase.security.UserProvider$1.access$000(UserProvider.java:81)
>         at
> org.apache.hadoop.hbase.security.UserProvider$1$1.call(UserProvider.java:108)
>         at
> org.apache.hadoop.hbase.security.UserProvider$1$1.call(UserProvider.java:104)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>         at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> 2016-07-23 14:18:03,207 INFO
> [aven01.novalocal,16088,1469275959674_ChoreService_1]
> regionserver.HRegionServer:
> aven01.novalocal,16088,1469275959674-MemstoreFlusherChore requesting 
> flush of hbase:meta,,1.1588230740 because info has an old edit so 
> flush to free WALs after random delay 71020ms
>
> Got the error. But how do I make test.jar accessible to all users?
>
> [root@aven01 bin]# ./hadoop fs -ls /
> 16/07/23 14:15:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 9 items
> ...
> -rwxrwxrwx   1 root      supergroup    1771201 2016-07-23 14:13 /test.jar
> ...
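>
> (The listing already shows rwxrwxrwx on /test.jar, so HDFS permissions
> look fine. For reference, a sketch of setting them programmatically
> with the Hadoop FileSystem API, assuming the same conf as in the code;
> illustrative only:
>
>         FileSystem fs = FileSystem.get(conf);
>         fs.setPermission(new Path("/test.jar"),
>                 new FsPermission(FsAction.ALL, FsAction.ALL, FsAction.ALL));
>
> If the class still cannot be loaded with these permissions, the problem
> is probably not HDFS access.)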
>
>
> -----Original Message-----
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: July 23, 2016 0:37
> To: user@hbase.apache.org
> Subject: Re: Re: how to Dynamic load of Coprocessors
>
> The pasted log was from the master, right?
>
> Can you take a look at the region server involved?
>
> Cheers
>
> On Fri, Jul 22, 2016 at 9:23 AM, Ma, Sheng-Chen (Aven) < 
> shengchen...@esgyn.cn> wrote:
>
> > Hi Ted:
> > Following is part of the log. I have changed the log level to TRACE,
> > but still can't find any clue.
> >
> > 2016-07-22 12:03:38,261 DEBUG [ProcedureExecutor-1]
> > procedure2.ProcedureExecutor: Procedure completed in 1.6310sec:
> > DisableTableProcedure (table=testTbl) id=109 owner=shengchen.ma 
> > state=FINISHED
> > 2016-07-22 12:03:38,866 TRACE
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RequestHeader call_id: 14 method_name: "IsMasterRunning" request_param:
> > true totalRequestSize: 23 bytes
> > 2016-07-22 12:03:38,866 TRACE
> > [B.defaultRpcServer.handler=24,queue=0,port=39479] ipc.RpcServer: callId:
> > 14 service: MasterService methodName: IsMasterRunning size: 23
> connection:
> > 192.168.3.10:49752 executing as shengchen.ma
> > 2016-07-22 12:03:38,866 TRACE
> > [B.defaultRpcServer.handler=24,queue=0,port=39479] ipc.RpcServer: callId:
> > 14 service: MasterService methodName: IsMasterRunning size: 23
> connection:
> > 192.168.3.10:49752 param: TODO: class 
> > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$IsMasterRunningRequest
> > connection: 192.168.3.10:49752, response is_master_running: true
> > queueTime: 0 processingTime: 0 totalTime: 0
> > 2016-07-22 12:03:38,878 TRACE
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RequestHeader call_id: 15 method_name: "getProcedureResult"
> request_param:
> > true totalRequestSize: 28 bytes
> > 2016-07-22 12:03:38,878 TRACE
> > [B.defaultRpcServer.handler=7,queue=1,port=39479] ipc.RpcServer:
> > callId: 15
> > service: MasterService methodName: getProcedureResult size: 28
> connection:
> > 192.168.3.10:49752 executing as shengchen.ma
> > 2016-07-22 12:03:38,878 DEBUG
> > [B.defaultRpcServer.handler=7,queue=1,port=39479]
> master.MasterRpcServices:
> > Checking to see if procedure is done procId=109
> > 2016-07-22 12:03:38,878 TRACE
> > [B.defaultRpcServer.handler=7,queue=1,port=39479] ipc.RpcServer:
> > callId: 15
> > service: MasterService methodName: getProcedureResult size: 28
> connection:
> > 192.168.3.10:49752 param: TODO: class 
> > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$GetProcedureResultRequest
> > connection: 192.168.3.10:49752, response state: FINISHED start_time:
> > 1469189016526 last_update: 1469189018157 queueTime: 0 processingTime:
> > 0
> > totalTime: 0
> > 2016-07-22 12:03:38,888 TRACE
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RequestHeader call_id: 16 method_name: "IsMasterRunning" request_param:
> > true totalRequestSize: 23 bytes
> > 2016-07-22 12:03:38,888 TRACE
> > [B.defaultRpcServer.handler=9,queue=0,port=39479] ipc.RpcServer:
> > callId: 16
> > service: MasterService methodName: IsMasterRunning size: 23 connection:
> > 192.168.3.10:49752 executing as shengchen.ma
> > 2016-07-22 12:03:38,888 TRACE
> > [B.defaultRpcServer.handler=9,queue=0,port=39479] ipc.RpcServer:
> > callId: 16
> > service: MasterService methodName: IsMasterRunning size: 23 connection:
> > 192.168.3.10:49752 param: TODO: class 
> > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$IsMasterRunningRequest
> > connection: 192.168.3.10:49752, response is_master_running: true
> > queueTime: 0 processingTime: 0 totalTime: 0
> > 2016-07-22 12:03:39,007 TRACE
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RequestHeader call_id: 17 method_name: "ModifyTable" request_param:
> > true
> > totalRequestSize: 505 bytes
> > 2016-07-22 12:03:39,007 TRACE
> > [B.defaultRpcServer.handler=17,queue=2,port=39479] ipc.RpcServer: callId:
> > 17 service: MasterService methodName: ModifyTable size: 505 connection:
> > 192.168.3.10:49752 executing as shengchen.ma
> > 2016-07-22 12:03:39,047 DEBUG
> > [B.defaultRpcServer.handler=17,queue=2,port=39479]
> > util.CoprocessorClassLoader: Skipping exempt class 
> > org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint
> > - delegating directly to parent
> > 2016-07-22 12:03:39,048 DEBUG
> > [B.defaultRpcServer.handler=17,queue=2,port=39479] ipc.RpcServer:
> > B.defaultRpcServer.handler=17,queue=2,port=39479: callId: 17 service:
> > MasterService methodName: ModifyTable size: 505 connection:
> > 192.168.3.10:49752
> > org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> > org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint
> > cannot be loaded Set hbase.table.sanity.checks to false at conf or 
> > table descriptor if you want to bypass sanity checks
> >         at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1680)
> >         at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1541)
> >         at org.apache.hadoop.hbase.master.HMaster.modifyTable(HMaster.java:2028)
> >         at org.apache.hadoop.hbase.master.MasterRpcServices.modifyTable(MasterRpcServices.java:1170)
> >         at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55680)
> >         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
> >         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
> >         at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> >         at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> >         at java.lang.Thread.run(Thread.java:745)
> > 2016-07-22 12:03:39,622 DEBUG
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RpcServer.listener,port=39479: Caught exception while 
> > reading:Connection reset by peer
> > 2016-07-22 12:03:39,622 DEBUG
> > [RpcServer.reader=7,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RpcServer.listener,port=39479: DISCONNECTING client
> > 192.168.3.10:49752 because read count=-1. Number of active
> > connections: 3
> > 2016-07-22 12:03:39,623 WARN
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:17570]
> > server.NIOServerCnxn: Exception causing close of session
> > 0x1561182c97e0009 due to java.io.IOException: Connection reset by 
> > peer
> > 2016-07-22 12:03:39,623 INFO
> > [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:17570]
> > server.NIOServerCnxn: Closed socket connection for client /
> > 192.168.3.10:49751 which had sessionid 0x1561182c97e0009
> > 2016-07-22 12:03:40,127 TRACE
> > [RpcServer.reader=1,bindAddress=aven01.novalocal,port=39479]
> ipc.RpcServer:
> > RequestHeader call_id: 5466 method_name: "RegionServerReport"
> > request_param: true totalRequestSize: 5748 bytes
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] ipc.RpcServer: callId:
> > 5466 service: RegionServerStatusService methodName: 
> > RegionServerReport
> > size: 5.6 K connection: 192.168.0.17:49483 executing as traf
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> > 7e8aa2d93aa716ad2068808d938f0786, existingValue=-1,
> > completeSequenceId=-1
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> > 7e8aa2d93aa716ad2068808d938f0786, family=mt_, existingValue=-1,
> > completeSequenceId=-1
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> > 7e8aa2d93aa716ad2068808d938f0786, family=tddlcf, existingValue=-1,
> > completeSequenceId=-1
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> > 3052710e85ab46a686a22e23bff937e2, existingValue=-1,
> > completeSequenceId=-1
> > 2016-07-22 12:03:40,128 TRACE
> > [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> > 3052710e85ab46a686a22e23bff937e2, family=cpf, existingValue=-1,
> > completeSequenceId=-1
> >
> > -----Original Message-----
> > From: Ted Yu [mailto:yuzhih...@gmail.com]
> > Sent: July 22, 2016 19:48
> > To: user@hbase.apache.org
> > Subject: Re: how to Dynamic load of Coprocessors
> >
> > w.r.t. the DoNotRetryIOException, can you take a look at the region
> > server log where the testTbl region(s) were hosted?
> >
> > See if there is some clue why the sanity check failed.
> >
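> > As a side note, not a fix: the error text itself offers a bypass. If
> > you just want to rule the sanity check out while debugging, a one-line
> > change on the descriptor should do it (assuming the same tableDesc as
> > in your code; illustrative only):
> >
> >         tableDesc.setConfiguration("hbase.table.sanity.checks", "false");
> >
> > The class-loading failure would still need to be understood.
> >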
> > Thanks
> >
> > On Fri, Jul 22, 2016 at 1:12 AM, Ma, Sheng-Chen (Aven) < 
> > shengchen...@esgyn.cn> wrote:
> >
> > > Hi all:
> > > I want to dynamically add a coprocessor so that I don't have to restart HBase.
> > >
> > > Following is my code:
> > >
> > >         // Find test.jar under /coprocessor_jars on HDFS
> > >         Path hdfsPath = null;
> > >         Path path = new Path("/coprocessor_jars");
> > >         FileSystem fs = FileSystem.get(conf);
> > >         FileStatus[] status = fs.listStatus(path);
> > >         Path[] listedPaths = FileUtil.stat2Paths(status);
> > >         for (Path p : listedPaths) {
> > >             if (p.getName().contains("test.jar")) {
> > >                 hdfsPath = p;
> > >             }
> > >         }
> > >         // Attach the endpoint coprocessor to the table descriptor
> > >         HBaseAdmin hadmin = new HBaseAdmin(conf);
> > >         HTableDescriptor tableDesc =
> > >                 hadmin.getTableDescriptor("testTbl".getBytes());
> > >         tableDesc.addCoprocessor(
> > >                 "org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint",
> > >                 hdfsPath, Coprocessor.PRIORITY_USER, null);
> > >         // tableDesc.removeCoprocessor("org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint");
> > >         // Print every table attribute to confirm the coprocessor entry
> > >         for (Entry<ImmutableBytesWritable, ImmutableBytesWritable> entry :
> > >                 tableDesc.getValues().entrySet()) {
> > >             System.out.println(Bytes.toString(entry.getKey().get())
> > >                     + " = " + Bytes.toString(entry.getValue().get()));
> > >         }
> > >         // Disable, modify, then re-enable the table to apply the change
> > >         hadmin.disableTable("testTbl".getBytes());
> > >         hadmin.modifyTable("testTbl", tableDesc);
> > >         hadmin.enableTable("testTbl");
> > >
> > > The sysout prints: coprocessor$1 = hdfs://192.168.0.17:17400/coprocessor_jars/test.jar|org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint|1073741823|
> > >
> > > and the remote side returns the exception:
> > > org.apache.hadoop.hbase.DoNotRetryIOException:
> > > org.apache.hadoop.hbase.DoNotRetryIOException: Class
> > > org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint
> > > cannot be loaded Set hbase.table.sanity.checks to false at conf or
> > > table descriptor if you want to bypass sanity checks
> > >
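> > > For reference, a minimal check (illustrative only; same names as
> > > above) to confirm whether the modified descriptor actually reached
> > > the master:
> > >
> > >         HTableDescriptor d =
> > >                 hadmin.getTableDescriptor("testTbl".getBytes());
> > >         System.out.println(d.hasCoprocessor(
> > >                 "org.apache.hadoop.hbase.coprocessor.transactional.TestRegionEndpoint"));
> > >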
> > > I use HBase 1.2 and test.jar is not under hbase/lib; I just put
> > > test.jar in HDFS.
> > >
> > > If I add test.jar to hbase/lib but do not restart HBase, the above
> > > code still throws the same exception.
> > > If I add test.jar to hbase/lib and restart HBase, the above code
> > > executes successfully.
> > >
> > > But my requirement is to not restart HBase.
> > >
> > > Can someone help me out?
> > >
> > > Thanks
> > >
> > >
> > >
> >
>
