[ https://issues.apache.org/jira/browse/PHOENIX-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16070769#comment-16070769 ]
Kanagha Pradha commented on PHOENIX-3721:
-----------------------------------------

To add more details, the following is the stack trace for the exception:

{noformat}
Mon Jun 26 22:43:50 GMT+00:00 2017, org.apache.hadoop.hbase.client.RpcRetryingCaller@4a48485e, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.TableExistsException): org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
	at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:119)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1888)
	at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2130)
	at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:44918)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
	at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{noformat}

HBaseAdmin.createTable() still throws RemoteWithExtrasException in HBase 0.98 (while in clustered mode). Hence it is uncaught in createSysMutexTable().

> CSV bulk load doesn't work well with SYSTEM.MUTEX
> -------------------------------------------------
>
>                 Key: PHOENIX-3721
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3721
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.10.0
>            Reporter: Sergey Soldatov
>            Priority: Blocker
>
> This is quite strange. I'm using HBase 1.2.4 and the current master branch.
> While running the CSV bulk load in the regular way, I got the following exception:
> {noformat}
> Exception in thread "main" java.sql.SQLException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): SYSTEM.MUTEX
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
> 	at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
> 	at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> 	at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
> 	at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> 	at java.sql.DriverManager.getConnection(DriverManager.java:664)
> 	at java.sql.DriverManager.getConnection(DriverManager.java:208)
> 	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:337)
> 	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:329)
> 	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
> 	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> 	at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): SYSTEM.MUTEX
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:285)
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:106)
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
> 	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
> 	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
> {noformat}
> I checked the code, and it seems that the problem is in the createSysMutexTable function. It expects a TableExistsException (and skips it), but in my case the exception is wrapped in a RemoteException, so it is not skipped and the init fails. The easy fix is to handle RemoteException and check that it wraps a TableExistsException, but it looks a bit ugly.
> [~jamestaylor] [~samarthjain] any thoughts?

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
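Both the comment and the description point at the same gap: createSysMutexTable() only swallows a TableExistsException thrown directly, while a clustered master delivers it wrapped in a RemoteException (or RemoteWithExtrasException). Below is a minimal sketch of the unwrap-and-check fix suggested in the report, not the actual Phoenix patch. It uses local stand-in classes in place of the real Hadoop/HBase exceptions so it runs standalone, and the helper isTableExists is hypothetical; the real org.apache.hadoop.ipc.RemoteException similarly reports the wrapped exception's class name via getClassName().

```java
import java.io.IOException;

// Stand-ins for org.apache.hadoop.hbase.TableExistsException and
// org.apache.hadoop.ipc.RemoteException, so the sketch compiles without
// Hadoop/HBase on the classpath.
class TableExistsException extends IOException {
    TableExistsException(String msg) { super(msg); }
}

class RemoteException extends IOException {
    private final String className;
    RemoteException(String className, String msg) {
        super(msg);
        this.className = className;
    }
    // The real RemoteException also exposes the wrapped exception's class name.
    String getClassName() { return className; }
}

public class SysMutexSketch {

    // Hypothetical helper: treat a createTable() failure as benign only if it
    // means the table already exists, whether the exception is thrown directly
    // or arrives wrapped in a RemoteException from a remote master.
    static boolean isTableExists(IOException e) {
        if (e instanceof TableExistsException) {
            return true;
        }
        if (e instanceof RemoteException) {
            return TableExistsException.class.getName()
                    .equals(((RemoteException) e).getClassName());
        }
        return false;
    }

    public static void main(String[] args) {
        // Thrown directly (e.g. a local/standalone master).
        System.out.println(isTableExists(new TableExistsException("SYSTEM.MUTEX")));  // true
        // Wrapped by the RPC layer in clustered mode.
        System.out.println(isTableExists(
                new RemoteException(TableExistsException.class.getName(), "SYSTEM.MUTEX")));  // true
        // Any other failure must still propagate.
        System.out.println(isTableExists(new IOException("some other failure")));  // false
    }
}
```

A createSysMutexTable() built on such a check would catch IOException around createTable(), ignore the failure when the helper returns true, and rethrow otherwise, which is the "a bit ugly" but workable fix the reporter describes.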