[jira] [Commented] (HBASE-15124) Document the new 'normalization' feature in refguide
[ https://issues.apache.org/jira/browse/HBASE-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16306999#comment-16306999 ]

Romil Choksi commented on HBASE-15124:
--------------------------------------

+1, glad to see it in the hbase book refguide

> Document the new 'normalization' feature in refguide
> ----------------------------------------------------
>
>                 Key: HBASE-15124
>                 URL: https://issues.apache.org/jira/browse/HBASE-15124
>             Project: HBase
>          Issue Type: Task
>          Components: documentation
>    Affects Versions: 1.3.0
>            Reporter: stack
>            Assignee: Romil Choksi
>            Priority: Critical
>             Fix For: 3.0.0
>
>         Attachments: HBASE-15124.master.001.patch
>
>
> A nice new feature is coming in to 1.2.0, normalization. A small bit of doc on it in the refguide would help.
> It should define what normalization is, say a sentence or two on how it works and when it runs, and include the output of the relevant shell commands. A paragraph or so. I can help.
> Marking critical against 1.2.0. Not a blocker.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Created] (HBASE-19516) IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not
Romil Choksi created HBASE-19516:
------------------------------------

             Summary: IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of org.apache.hadoop.hbase.MiniHBaseCluster'
                 Key: HBASE-19516
                 URL: https://issues.apache.org/jira/browse/HBASE-19516
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.0
            Reporter: Romil Choksi

IntegrationTestBulkLoad and IntegrationTestImportTsv run into 'RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of org.apache.hadoop.hbase.MiniHBaseCluster'

{code}
2017-12-14 22:26:00,118 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of org.apache.hadoop.hbase.MiniHBaseCluster
    at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:219)
    at org.apache.hadoop.hbase.HBaseCommonTestingUtility.waitFor(HBaseCommonTestingUtility.java:249)
    at org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3255)
    at org.apache.hadoop.hbase.HBaseTestingUtility.waitUntilAllRegionsAssigned(HBaseTestingUtility.java:3227)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1378)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1409)
    at org.apache.hadoop.hbase.HBaseTestingUtility.createTable(HBaseTestingUtility.java:1326)
    at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.setupTable(IntegrationTestBulkLoad.java:249)
    at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runLoad(IntegrationTestBulkLoad.java:229)
    at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.testBulkLoad(IntegrationTestBulkLoad.java:223)
    at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.runTestFromCommandLine(IntegrationTestBulkLoad.java:792)
    at org.apache.hadoop.hbase.IntegrationTestBase.doWork(IntegrationTestBase.java:155)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad.main(IntegrationTestBulkLoad.java:815)
Caused by: java.lang.RuntimeException: org.apache.hadoop.hbase.DistributedHBaseCluster@1bb564e2 not an instance of org.apache.hadoop.hbase.MiniHBaseCluster
    at org.apache.hadoop.hbase.HBaseTestingUtility.getMiniHBaseCluster(HBaseTestingUtility.java:1069)
    at org.apache.hadoop.hbase.HBaseTestingUtility.getHBaseCluster(HBaseTestingUtility.java:2711)
    at org.apache.hadoop.hbase.HBaseTestingUtility$4.evaluate(HBaseTestingUtility.java:3285)
    at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:191)
    ... 14 more
{code}
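The trace shows the shape of the bug: a helper that only makes sense for a mini (in-process) cluster is reached from an integration test running against a real distributed cluster, and the downcast throws. A minimal model of that pattern, using hypothetical stand-in class names rather than the real HBase classes, plus a defensive variant that branches on the cluster flavour instead of downcasting:

```java
public class ClusterGuard {
    interface Cluster {}                                  // stand-in for o.a.h.hbase.HBaseCluster
    static class MiniCluster implements Cluster {}        // stand-in for MiniHBaseCluster
    static class DistributedCluster implements Cluster {} // stand-in for DistributedHBaseCluster

    // Mirrors the failing shape of getMiniHBaseCluster(): an unconditional
    // downcast that blows up on a real distributed cluster.
    static MiniCluster getMiniCluster(Cluster c) {
        if (c instanceof MiniCluster) {
            return (MiniCluster) c;
        }
        throw new RuntimeException(c + " not an instance of MiniHBaseCluster");
    }

    // Defensive variant: check the cluster flavour and take a code path that
    // works for distributed clusters instead of assuming the mini one.
    static String waitUntilAllRegionsAssigned(Cluster c) {
        return (c instanceof MiniCluster) ? "mini-path" : "distributed-path";
    }
}
```

The point is that test utilities shared between unit and integration tests must not assume `MiniHBaseCluster`; they need a distributed-safe branch.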
[jira] [Updated] (HBASE-19218) Master stuck thinking hbase:namespace is assigned after restart preventing initialization
[ https://issues.apache.org/jira/browse/HBASE-19218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Romil Choksi updated HBASE-19218:
---------------------------------
    Attachment: hbase-site.xml
                hbase-hbase-master-ctr-e134-1499953498516-282290-01-03.hwx.site.log.zip

[~tedyu] Uploaded the master log file and hbase-site.xml

> Master stuck thinking hbase:namespace is assigned after restart preventing initialization
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-19218
>                 URL: https://issues.apache.org/jira/browse/HBASE-19218
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Josh Elser
>            Priority: Critical
>             Fix For: 2.0.0-beta-1
>
>         Attachments: hbase-hbase-master-ctr-e134-1499953498516-282290-01-03.hwx.site.log.zip, hbase-site.xml
>
>
> Our [~romil.choksi] brought this one to my attention after trying to get some cluster tests running.
> The Master seems to have gotten stuck, never initializing, after it thinks that hbase:namespace was already deployed on the cluster when it actually was not.
> On a Master restart, it reads the location out of meta and assumes that it's there (I assume this invalid entry is the issue):
> {noformat}
> 2017-11-08 00:29:17,556 INFO [ctr-e134-1499953498516-282290-01-03:2.masterManager] assignment.RegionStateStore: Load hbase:meta entry region={ENCODED => f147f204a579b885c351bdc0a7ebbf94, NAME => 'hbase:namespace,,1510084256045.f147f204a579b885c351bdc0a7ebbf94.', STARTKEY => '', ENDKEY => ''} regionState=OPENING lastHost=ctr-e134-1499953498516-282290-01-05.hwx.site,16020,1510084579728 regionLocation=ctr-e134-1499953498516-282290-01-05.hwx.site,16020,1510100695534
> {noformat}
> Prior to this, the RS5 went through the ServerCrashProcedure, but it looks like this bailed out unexpectedly:
> {noformat}
> 2017-11-08 00:25:25,187 WARN [ctr-e134-1499953498516-282290-01-03:2.masterManager] master.ServerManager: Expiration of ctr-e134-1499953498516-282290-01-05.hwx.site,16020,1510084579728 but server not online
> 2017-11-08 00:25:25,187 INFO [ProcExecWrkr-5] procedure.ServerCrashProcedure: Start pid=36, state=RUNNABLE:SERVER_CRASH_START; ServerCrashProcedure server=ctr-e134-1499953498516-282290-01-03.hwx.site,16020,1510084580111, splitWal=true, meta=false
> 2017-11-08 00:25:25,188 INFO [ctr-e134-1499953498516-282290-01-03:2.masterManager] master.ServerManager: Processing expiration of ctr-e134-1499953498516-282290-01-05.hwx.site,16020,1510084579728 on ctr-e134-1499953498516-282290-01-03.hwx.site,2,1510100690324
> ...
> 2017-11-08 00:25:27,211 ERROR [ProcExecWrkr-22] procedure2.ProcedureExecutor: CODE-BUG: Uncaught runtime exception: pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=f147f204a579b885c351bdc0a7ebbf94
> java.lang.NullPointerException
>     at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
>     at org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher.addOperationToNode(RemoteProcedureDispatcher.java:171)
>     at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.addToRemoteDispatcher(RegionTransitionProcedure.java:223)
>     at org.apache.hadoop.hbase.master.assignment.AssignProcedure.updateTransition(AssignProcedure.java:252)
>     at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:309)
>     at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:82)
>     at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:845)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1452)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1221)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$800(ProcedureExecutor.java:77)
>     at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1731)
> 2017-11-08 00:25:27,239 FATAL [ProcExecWrkr-22] procedure2.ProcedureExecutor: CODE-BUG: Uncaught runtime exception for pid=37, state=FAILED:SERVER_CRASH_FINISH, exception=java.lang.NullPointerException via CODE-BUG: Uncaught runtime exception: pid=40, ppid=37, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:namespace, region=f147f204a579b885c351bdc0a7ebbf94:java.lang.NullPointerException; ServerCrashProcedure server=ctr-e134-1499953498516-282290-01-05.hwx.site,16020,1510084579728, splitWal=true, meta=false
> java.lang.UnsupportedOperationException:
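The top frame of that NPE is telling: `java.util.concurrent.ConcurrentHashMap` rejects null keys outright, so if the dispatcher looks up a server name that is null (here, a server that has already expired), the `get()` itself throws rather than returning "not found". A small self-contained probe of that behaviour and the obvious null-guard (hypothetical method names, not the real `RemoteProcedureDispatcher` API):

```java
import java.util.concurrent.ConcurrentHashMap;

public class DispatcherLookup {
    static final ConcurrentHashMap<String, Integer> nodes = new ConcurrentHashMap<>();

    // Failing shape: a plain map.get(key). ConcurrentHashMap throws
    // NullPointerException for a null key, so a null server name surfaces as
    // an uncaught NPE inside the procedure executor.
    static Integer lookupUnsafe(String serverName) {
        return nodes.get(serverName);
    }

    // Defensive variant: treat "no server name" the same as "server unknown".
    static Integer lookupSafe(String serverName) {
        return serverName == null ? null : nodes.get(serverName);
    }
}
```

Unlike `HashMap`, which tolerates null keys, `ConcurrentHashMap` documents this NPE, so any map keyed by server names that can go null needs the guard.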
[jira] [Updated] (HBASE-16190) IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
[ https://issues.apache.org/jira/browse/HBASE-16190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Romil Choksi updated HBASE-16190:
---------------------------------
    Attachment: 16190.v1.txt

Attaching a patch here. Verified it by re-running the test and it went fine.

> IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-16190
>                 URL: https://issues.apache.org/jira/browse/HBASE-16190
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Romil Choksi
>            Priority: Minor
>              Labels: integration-test
>         Attachments: 16190.v1.txt, HBASE-16190.patch
>
>
> IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
> {code}
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,661 INFO [main] hbase.IntegrationTestDDLMasterFailover: Runtime is up
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,026 ERROR [main] hbase.IntegrationTestDDLMasterFailover: Found exception in thread: Thread-11
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,384 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Performing Action: CREATE_TABLE
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,027 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Thread-16 stopped
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:20,506 INFO [Thread-30] hbase.IntegrationTestDDLMasterFailover: Performing Action: ADD_COLUMNFAMILY
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|java.lang.IllegalArgumentException: n must be positive
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|at java.util.Random.nextInt(Random.java:300)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.JVMRandom.nextInt(JVMRandom.java:118)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:88)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:74)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$TableAction.selectTable(IntegrationTestDDLMasterFailover.java:212)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$AddColumnFamilyAction.perform(IntegrationTestDDLMasterFailover.java:421)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$Worker.run(IntegrationTestDDLMasterFailover.java:695)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HBASE-16190) IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
[ https://issues.apache.org/jira/browse/HBASE-16190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365563#comment-15365563 ]

Romil Choksi commented on HBASE-16190:
--------------------------------------

[~chenheng] Thanks for the pointer. I do have a simple fix ready with the tableMap.isEmpty() check moved under synchronized. I want to run the test with this change a few times to make sure it's actually fixed.

> IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
> -----------------------------------------------------------------------------------------
>
>                 Key: HBASE-16190
>                 URL: https://issues.apache.org/jira/browse/HBASE-16190
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Romil Choksi
>            Priority: Minor
>              Labels: integration-test
>         Attachments: HBASE-16190.patch
>
>
> IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
> {code}
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,661 INFO [main] hbase.IntegrationTestDDLMasterFailover: Runtime is up
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,026 ERROR [main] hbase.IntegrationTestDDLMasterFailover: Found exception in thread: Thread-11
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,384 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Performing Action: CREATE_TABLE
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,027 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Thread-16 stopped
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:20,506 INFO [Thread-30] hbase.IntegrationTestDDLMasterFailover: Performing Action: ADD_COLUMNFAMILY
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|java.lang.IllegalArgumentException: n must be positive
> 2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|at java.util.Random.nextInt(Random.java:300)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.JVMRandom.nextInt(JVMRandom.java:118)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:88)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:74)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$TableAction.selectTable(IntegrationTestDDLMasterFailover.java:212)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$AddColumnFamilyAction.perform(IntegrationTestDDLMasterFailover.java:421)
> 2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$Worker.run(IntegrationTestDDLMasterFailover.java:695)
> {code}
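The comment above names the root cause: `Random.nextInt(n)` requires n > 0, so picking a random element from a collection that another worker thread just emptied throws "n must be positive" unless the emptiness check and the pick happen under the same lock. A self-contained sketch of that check-then-act fix (hypothetical `pick` helper, not the actual `selectTable` code):

```java
import java.util.List;
import java.util.Random;

public class RandomPick {
    // Unsafe pattern: if (!items.isEmpty()) items.get(rnd.nextInt(items.size()))
    // without a lock — another thread can drain 'items' between the check and
    // the nextInt() call, which then sees size() == 0 and throws
    // IllegalArgumentException ("n must be positive").
    // Safe pattern: both steps inside one synchronized block.
    static <T> T pick(List<T> items, Random rnd) {
        synchronized (items) {
            if (items.isEmpty()) {
                return null; // caller retries instead of crashing the worker
            }
            return items.get(rnd.nextInt(items.size()));
        }
    }
}
```

With the check inside the same critical section as the pick, the size can no longer change between the two operations, which is exactly what moving `tableMap.isEmpty()` under `synchronized` achieves.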
[jira] [Created] (HBASE-16190) IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
Romil Choksi created HBASE-16190:
------------------------------------

             Summary: IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive
                 Key: HBASE-16190
                 URL: https://issues.apache.org/jira/browse/HBASE-16190
             Project: HBase
          Issue Type: Bug
            Reporter: Romil Choksi
            Priority: Minor

IntegrationTestDDLMasterFailover failed with IllegalArgumentException: n must be positive

{code}
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,661 INFO [main] hbase.IntegrationTestDDLMasterFailover: Runtime is up
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,026 ERROR [main] hbase.IntegrationTestDDLMasterFailover: Found exception in thread: Thread-11
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:21,384 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Performing Action: CREATE_TABLE
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:22,027 INFO [Thread-16] hbase.IntegrationTestDDLMasterFailover: Thread-16 stopped
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|2016-07-05 12:19:20,506 INFO [Thread-30] hbase.IntegrationTestDDLMasterFailover: Performing Action: ADD_COLUMNFAMILY
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|java.lang.IllegalArgumentException: n must be positive
2016-07-05 12:19:22,154|beaver.machine|INFO|4569|14008027684|MainThread|at java.util.Random.nextInt(Random.java:300)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.JVMRandom.nextInt(JVMRandom.java:118)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:88)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.commons.lang.math.RandomUtils.nextInt(RandomUtils.java:74)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$TableAction.selectTable(IntegrationTestDDLMasterFailover.java:212)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$AddColumnFamilyAction.perform(IntegrationTestDDLMasterFailover.java:421)
2016-07-05 12:19:22,155|beaver.machine|INFO|4569|14008027684|MainThread|at org.apache.hadoop.hbase.IntegrationTestDDLMasterFailover$Worker.run(IntegrationTestDDLMasterFailover.java:695)
{code}
[jira] [Created] (HBASE-16178) HBase restore command fails on cluster with encrypted HDFS
Romil Choksi created HBASE-16178:
------------------------------------

             Summary: HBase restore command fails on cluster with encrypted HDFS
                 Key: HBASE-16178
                 URL: https://issues.apache.org/jira/browse/HBASE-16178
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.0.0
         Environment: Cluster with Encrypted HDFS
            Reporter: Romil Choksi

The HBase restore command fails to move an hfile into an encryption zone.

{code:title=HDFS namenode log}
2016-07-05 07:27:00,580 INFO ipc.Server (Server.java:logException(2401)) - IPC Server handler 31 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from :53481 Call#130 Retry#0
java.io.IOException: /apps/hbase/staging/hbase__table_29ov3nxj1o__7o65g4lakspqe1mlku17g0n6e2c61v72o632puuntpfcf3tf41n69bfaso00gvlp/cf1/8cf0242072534ee0a7ee8710b9235c3e can't be moved into an encryption zone.
    at org.apache.hadoop.hdfs.server.namenode.EncryptionZoneManager.checkMoveValidity(EncryptionZoneManager.java:272)
    at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.unprotectedRenameTo(FSDirRenameOp.java:187)
    at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:474)
    at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3761)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:986)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:583)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
{code}
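The NameNode rejects the rename because HDFS transparent encryption forbids renaming a file across an encryption zone boundary: a rename only relinks metadata and cannot re-encrypt the bytes under the destination zone's key, so the data must instead be copied through the destination and the source deleted. A local-filesystem analogy of that move-with-fallback pattern (not HBase or HDFS code; `moveWithFallback` is a hypothetical helper):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveOrCopy {
    // Try a cheap atomic rename first; when the filesystem refuses the move
    // (as HDFS does across encryption zones, or java.nio does across mount
    // points), fall back to copy-then-delete, which rewrites the bytes at the
    // destination.
    static void moveWithFallback(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException notMovable) {
            // e.g. AtomicMoveNotSupportedException across filesystems
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(src);
        }
    }
}
```

The trade-off is that the fallback is neither atomic nor free, which is why bulk-load tooling prefers staging files inside the same encryption zone as the destination table whenever it can.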
[jira] [Created] (HBASE-16065) hbase backup set describe command does not inform if the set does not exist
Romil Choksi created HBASE-16065:
------------------------------------

             Summary: hbase backup set describe command does not inform if the set does not exist
                 Key: HBASE-16065
                 URL: https://issues.apache.org/jira/browse/HBASE-16065
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.0.0
            Reporter: Romil Choksi
            Priority: Minor
             Fix For: 2.0.0

The hbase backup set describe command does not report that a set does not exist; it silently prints an empty set instead:

{code}
hbase@hbase-test-rc-7:~> hbase backup set list
test_set={t1,t2}
hbase@cluster-name:~> hbase backup set describe test_set1
test_set1={}
{code}
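The issue is a lookup that conflates "absent" with "empty". A sketch of the behaviour the report asks for, using a plain map as a stand-in for the backup-set store (hypothetical `describe` helper, not the real backup API):

```java
import java.util.Map;
import java.util.Set;

public class BackupSets {
    // 'describe' should distinguish a set that was never created (map returns
    // null) from one that exists but holds no tables (empty set), instead of
    // printing "name={}" for both cases.
    static String describe(Map<String, Set<String>> sets, String name) {
        Set<String> tables = sets.get(name);
        if (tables == null) {
            return "Backup set '" + name + "' does not exist";
        }
        return name + "=" + tables;
    }
}
```

Surfacing the "does not exist" case also catches typos in set names at the command line, which is presumably how `test_set1` was queried above.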
[jira] [Created] (HBASE-16064) HBase shell delete backup command shows HDFS permission error, after successfully deleting the intended backup
Romil Choksi created HBASE-16064:
------------------------------------

             Summary: HBase shell delete backup command shows HDFS permission error, after successfully deleting the intended backup
                 Key: HBASE-16064
                 URL: https://issues.apache.org/jira/browse/HBASE-16064
             Project: HBase
          Issue Type: Bug
    Affects Versions: 2.0.0
            Reporter: Romil Choksi
             Fix For: 2.0.0

The HBase delete backup command shows an error after successfully deleting the intended backup:

{code}
hbase@cluster-name:~$ hbase backup delete backup_1465950334243
2016-06-15 00:36:18,883 INFO [main] util.BackupClientUtil: No data has been found in hdfs://cluster-name:8020/user/hbase/backup_1465950334243/default/table_ttx7w0jgw8.
2016-06-15 00:36:18,894 ERROR [main] util.BackupClientUtil: Cleaning up backup data of backup_1465950334243 at hdfs://cluster-name:8020/user/hbase failed due to Permission denied: user=hbase, access=WRITE, inode="/user/hbase":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
    at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:92)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3822)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1071)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:619)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
{code}

The backup has been successfully deleted, but the backup root dir under /user/hbase still persists:

{code}
hbase@cluster-name:~$ hdfs dfs -ls /user/hbase
Found 6 items
drwx------   - hbase hbase          0 2016-06-15 00:26 /user/hbase/.staging
drwxr-xr-x   - hbase hbase          0 2016-06-15 00:36 /user/hbase/backup_1465950334243
drwxr-xr-x   - hbase hbase          0 2016-06-15 00:26 /user/hbase/hbase-staging
{code}

/user/hbase/backup_1465950334243 is now empty though.
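The log shows the delete emptied the backup directory but then failed removing the directory itself, because removing an entry from `/user/hbase` needs WRITE on that parent, which is owned by `hdfs:hdfs`. A local-filesystem sketch of the missing pre-check (hypothetical `removeDirIfPermitted` helper, not the real `BackupClientUtil` code), so the tool can print a short warning instead of a full `AccessControlException` stack trace:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeCleanup {
    // Deleting a directory entry requires write access on its *parent*.
    // Probe the parent first; if it is not writable, report the leftover
    // directory to the user rather than dumping a permission-denied trace.
    static boolean removeDirIfPermitted(Path dir) throws IOException {
        Path parent = dir.getParent();
        if (parent == null || !Files.isWritable(parent)) {
            return false; // caller logs: "left behind, parent not writable"
        }
        Files.delete(dir); // dir must already be empty at this point
        return true;
    }
}
```

The same probe-before-delete shape applies on HDFS, where the permission model (WRITE on the parent directory governs deletion) is the POSIX-style rule the NameNode enforced above.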
[jira] [Created] (HBASE-16059) Region normalizer failed to trigger merge action where expected
Romil Choksi created HBASE-16059:
------------------------------------

             Summary: Region normalizer failed to trigger merge action where expected
                 Key: HBASE-16059
                 URL: https://issues.apache.org/jira/browse/HBASE-16059
             Project: HBase
          Issue Type: Bug
          Components: master
            Reporter: Romil Choksi
             Fix For: 2.0.0

The region normalizer failed to trigger a merge action where one was expected.

Steps to reproduce:
- Pre-split the test table into 5 regions with split keys 1, 3, 7, 8
- Insert some data for each split: 27K rows for the region starting with key 1, and 100K rows for each of the regions with start keys 3, 7 and 8
- Scan the test table and verify that these regions exist:
  1) STARTKEY => '', ENDKEY => '1'
  2) STARTKEY => '1', ENDKEY => '3'
- Turn on normalization; verify the normalization switch is enabled and that normalization is enabled for the test table
- Run the normalizer a few times
- Scan the test table again and verify that regions 1) and 2) above no longer exist, and that a new region with STARTKEY => '', ENDKEY => '3' has been created in their place

The test fails at the last assertion. Looking into the Master log, I see that a normalization plan was computed for the test table, but the normalizer decides that no normalization is needed and that the regions look good:

{code:title=Master.log}
2016-06-17 00:41:46,895 DEBUG [B.defaultRpcServer.handler=4,queue=1,port=2] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_zrof6ea383, number of regions: 5
2016-06-17 00:41:46,895 DEBUG [B.defaultRpcServer.handler=4,queue=1,port=2] normalizer.SimpleRegionNormalizer: Table table_zrof6ea383, total aggregated regions size: 13
2016-06-17 00:41:46,896 DEBUG [B.defaultRpcServer.handler=4,queue=1,port=2] normalizer.SimpleRegionNormalizer: Table table_zrof6ea383, average region size: 2.6
2016-06-17 00:41:46,896 DEBUG [B.defaultRpcServer.handler=4,queue=1,port=2] normalizer.SimpleRegionNormalizer: No normalization needed, regions look good for table: table_zrof6ea383
{code}
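A simplified model of SimpleRegionNormalizer's merge rule helps explain the log: two adjacent regions are merge candidates when their combined size is below the table's average region size. The individual region sizes are not in the log, so the numbers below are illustrative; but with total size 13 over 5 regions (average 2.6), a split such as {1, 3, 3, 3, 3} gives the small region plus its neighbour a combined size of 4, above the 2.6 average, so no merge plan is produced even though one region is tiny:

```java
public class NormalizerSketch {
    // Simplified merge-candidate rule (not the full SimpleRegionNormalizer,
    // which also special-cases small region counts and skews): merge the first
    // adjacent pair whose combined size is below the table's average.
    static int firstMergeCandidate(long[] sizesMb) {
        double total = 0;
        for (long s : sizesMb) {
            total += s;
        }
        double avg = total / sizesMb.length;
        for (int i = 0; i + 1 < sizesMb.length; i++) {
            if (sizesMb[i] + sizesMb[i + 1] < avg) {
                return i; // merge region i with region i+1
            }
        }
        return -1; // "No normalization needed, regions look good"
    }
}
```

Under this rule the test's expectation (merge the small leading region into its neighbour) only holds when the pair's combined size undercuts the average, which the coarse megabyte-granularity sizes here may simply never do.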
[jira] [Updated] (HBASE-16039) Incremental backup action failed with NPE
[ https://issues.apache.org/jira/browse/HBASE-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Romil Choksi updated HBASE-16039:
---------------------------------
    Description:
Incremental backup action failed with NPE. Creating a full backup went fine, but creating an incremental backup failed:

{code}
hbase@cluster_name:~$ hbase backup create incremental hdfs://cluster-name:8020/user/hbase "table_02uvzkggro"
2016-06-15 06:38:28,605 INFO [main] util.BackupClientUtil: Using existing backup root dir: hdfs://cluster-name:8020/user/hbase
2016-06-15 06:38:30,483 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException):
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.cleanupTargetDir(FullTableBackupProcedure.java:198)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.failBackup(FullTableBackupProcedure.java:276)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:186)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
{code}

From the Master log:

{code}
2016-06-15 06:38:29,875 ERROR [ProcedureExecutorThread-3] master.FullTableBackupProcedure: Unexpected exception in incremental-backup: incremental copy backup_1465972709112
org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
org.apache.hadoop.hbase.backup.impl.BackupException: org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
    at org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:196)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:178)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
Caused by: org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:509)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:476)
    at org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:172)
    ... 9 more
2016-06-15 06:38:29,875 INFO [ProcedureExecutorThread-3-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-06-15 06:38:29,875 ERROR [ProcedureExecutorThread-3] master.FullTableBackupProcedure: BackupId=backup_1465972709112,startts=1465972709342,failedts=1465972709875,failedphase=null,failedmessage=org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
2016-06-15 06:38:29,884 ERROR [ProcedureExecutorThread-3] procedure2.ProcedureExecutor: CODE-BUG: Uncatched runtime exception for procedure: IncrementalTableBackupProcedure (targetRootDir=hdfs://cluster-name:8020/user/hbase) id=100 state=FINISHED
java.lang.NullPointerException
    at
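Note the layering of the failure: the real problem is the `TableInfoMissingException`, but the error-handling path (`failBackup` → `cleanupTargetDir`) then threw a `NullPointerException` of its own, which is what the client ultimately saw. A sketch of the safer error-path shape (hypothetical names, not the real procedure code): guard cleanup so it can never replace the root cause.

```java
public class FailureCleanup {
    // Cleanup on an error path must be wrapped in its own try/catch; if it
    // fails too, attach that failure as suppressed and still report the
    // original exception (here, the missing table descriptor) to the caller.
    static RuntimeException failBackup(RuntimeException rootCause, Runnable cleanup) {
        try {
            cleanup.run();
        } catch (RuntimeException cleanupFailure) {
            rootCause.addSuppressed(cleanupFailure); // keep both, root cause wins
        }
        return rootCause;
    }
}
```

This is the same discipline Java's try-with-resources applies automatically: an exception thrown while closing a resource is suppressed rather than allowed to mask the exception that triggered the close.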
[jira] [Created] (HBASE-16039) Incremental backup action failed with NPE
Romil Choksi created HBASE-16039:
------------------------------------

             Summary: Incremental backup action failed with NPE
                 Key: HBASE-16039
                 URL: https://issues.apache.org/jira/browse/HBASE-16039
             Project: HBase
          Issue Type: Bug
          Components: hbase
    Affects Versions: 2.0.0
            Reporter: Romil Choksi
             Fix For: 2.0.0

Incremental backup action failed with NPE. Creating a full backup went fine, but creating an incremental backup failed:

{code}
hbase@cluster_name:~$ hbase backup create incremental hdfs://cluster-name:8020/user/hbase "table_02uvzkggro"
2016-06-15 06:38:28,605 INFO [main] util.BackupClientUtil: Using existing backup root dir: hdfs://cluster-name:8020/user/hbase
2016-06-15 06:38:30,483 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException):
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.cleanupTargetDir(FullTableBackupProcedure.java:198)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.failBackup(FullTableBackupProcedure.java:276)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:186)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
{code}

From the Master log, around the same timestamp, but the error is not for the same table:

{code}
2016-06-15 06:38:29,875 ERROR [ProcedureExecutorThread-3] master.FullTableBackupProcedure: Unexpected exception in incremental-backup: incremental copy backup_1465972709112
org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
org.apache.hadoop.hbase.backup.impl.BackupException: org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
    at org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:196)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:178)
    at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:54)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:934)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
Caused by: org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:509)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496)
    at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:476)
    at org.apache.hadoop.hbase.backup.util.BackupServerUtil.copyTableRegionInfo(BackupServerUtil.java:172)
    ... 9 more
2016-06-15 06:38:29,875 INFO [ProcedureExecutorThread-3-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-06-15 06:38:29,875 ERROR [ProcedureExecutorThread-3] master.FullTableBackupProcedure: BackupId=backup_1465972709112,startts=1465972709342,failedts=1465972709875,failedphase=null,failedmessage=org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://cluster-name:8020/apps/hbase/data/data/default/table_pjtxpp3r74
2016-06-15 06:38:29,884 ERROR [ProcedureExecutorThread-3]
[jira] [Commented] (HBASE-15584) Revisit handling of BackupState#CANCELLED
[ https://issues.apache.org/jira/browse/HBASE-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15323450#comment-15323450 ] Romil Choksi commented on HBASE-15584: -- [~tedyu] I tried the list_procedures command while I had a couple of full backups in progress. {code} hbase(main):008:0> list_procedures Id Name State Start_Time Last_Update 1 CreateTableProcedure (table=hbase:backup) ROLLEDBACK Wed Jun 08 23:42:00 UTC 2016 Wed Jun 08 23:42:00 UTC 2016 36 FullTableBackupProcedure (targetRootDir=hdfs://hbase-test-secure-rc-7:8020/user/hbase) RUNNABLE Thu Jun 09 21:20:44 UTC 2016 Thu Jun 09 21:20:46 UTC 2016 37 FullTableBackupProcedure (targetRootDir=hdfs://hbase-test-secure-rc-7:8020/user/hbase) RUNNABLE Thu Jun 09 21:20:46 UTC 2016 Thu Jun 09 21:20:46 UTC 2016 3 row(s) in 0.0240 seconds {code} Looking at the list_procedures output alone, I am not sure which procId belongs to which backup process. I could look at the hbase backup history and compare the timestamps, or go through the logs to get the necessary details. In such cases, it would be better for the user to have a cancel option that takes a backupId, so the user doesn't have to figure out the procId. > Revisit handling of BackupState#CANCELLED > - > > Key: HBASE-15584 > URL: https://issues.apache.org/jira/browse/HBASE-15584 > Project: HBase > Issue Type: Sub-task >Reporter: Ted Yu >Priority: Minor > > During review of HBASE-15411, Enis made the following point: > {code} > nobody puts the backup in cancelled state. setCancelled() is not used. So if > I abort a backup, who writes to the system table the new state? > Not sure whether this is a phase 1 patch issue or due to this patch. We can > open a new jira and address it there if you do not want to do it in this > patch. > Also maybe this should be named ABORTED rather than CANCELLED. > {code} > This issue is to decide whether this state should be kept (e.g. 
through > notification from procedure V2 framework in response to abortion). > If it is to be kept, the state should be renamed ABORTED. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-14367) Add normalization support to shell
[ https://issues.apache.org/jira/browse/HBASE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15097039#comment-15097039 ] Romil Choksi commented on HBASE-14367: -- Opened https://issues.apache.org/jira/browse/HBASE-14804 for the create-table issue; it has been resolved. > Add normalization support to shell > -- > > Key: HBASE-14367 > URL: https://issues.apache.org/jira/browse/HBASE-14367 > Project: HBase > Issue Type: Bug > Components: Balancer, shell >Affects Versions: 1.1.2 >Reporter: Lars George >Assignee: Mikhail Antonov > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: HBASE-14367-branch-1.2.v1.patch, > HBASE-14367-branch-1.2.v2.patch, HBASE-14367-branch-1.2.v3.patch, > HBASE-14367-branch-1.v1.patch, HBASE-14367-v1.patch, HBASE-14367.patch > > > https://issues.apache.org/jira/browse/HBASE-13103 adds support for setting a > normalization flag per {{HTableDescriptor}}, along with the server side chore > to do the work. > What is lacking is to easily set this from the shell, right now you need to > use the Java API to modify the descriptor. This issue is to add the flag as a > known attribute key and/or other means to toggle this per table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14925) Develop HBase shell command/tool to list table's region info through command line
Romil Choksi created HBASE-14925: Summary: Develop HBase shell command/tool to list table's region info through command line Key: HBASE-14925 URL: https://issues.apache.org/jira/browse/HBASE-14925 Project: HBase Issue Type: Improvement Components: shell Reporter: Romil Choksi I am going through the hbase shell commands to see if there is anything I can use to get all the region info just for a particular table. I don't see any such command that provides that information. It would be better to have a command that provides region info, start key, end key, etc., taking a table name as the input parameter. This information is available through the HBase UI by clicking on a particular table's link. What is needed is a tool/shell command to get a list of regions for a table, or for all tables, in a structured tabular output that is machine readable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
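As a sketch of the kind of machine-readable tabular output requested here (the region records and field names below are hypothetical, purely for illustration; in practice they would come from scanning hbase:meta or from the Admin API):

```python
# Render hypothetical region records as tab-separated values, one region
# per row, with a header -- a machine-readable alternative to the web UI.
import csv
import io

regions = [
    # (region name, start key, end key) -- made-up values for illustration
    ("t1,,1447301843054.abc123.", "", "row500"),
    ("t1,row500,1447301843054.def456.", "row500", ""),
]

def regions_to_tsv(region_rows):
    """Render region records as tab-separated values with a header row."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    writer.writerow(["region", "start_key", "end_key"])
    writer.writerows(region_rows)
    return buf.getvalue()

print(regions_to_tsv(regions))
```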
[jira] [Created] (HBASE-14867) SimpleRegionNormalizer needs to have better heuristics to trigger merge operation
Romil Choksi created HBASE-14867: Summary: SimpleRegionNormalizer needs to have better heuristics to trigger merge operation Key: HBASE-14867 URL: https://issues.apache.org/jira/browse/HBASE-14867 Project: HBase Issue Type: Bug Components: master Affects Versions: 1.2.0 Reporter: Romil Choksi SimpleRegionNormalizer needs better heuristics for triggering a merge operation. It is not able to trigger a merge action if the table's smallest region has neighboring regions that are larger than the table's average region size, even when there are other smaller regions whose combined size is less than the average region size. For example:
- Consider a table with six regions, say r1 to r6.
- Keep r1 empty and create some data, say 100K rows, for each of the regions r2, r3 and r4. Create a smaller amount of data, say about 27K rows, for each of the regions r5 and r6.
- Run the normalizer. Verify the number of regions for that table and check the master log to see whether any merge action was triggered as a result of normalization.
In such a scenario, it would be better to have a merge action triggered for the two smaller regions r5 and r6, even though neither of them is the smallest region. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
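The behavior described above can be illustrated with a small sketch. This is not the actual SimpleRegionNormalizer code; it is a simplified model of the heuristic as described (only the table's smallest region and its smaller neighbor are considered), with illustrative row counts standing in for region sizes:

```python
# Simplified model of the merge heuristic described above (not the real
# SimpleRegionNormalizer implementation): only the smallest region and its
# smaller neighbor are candidates, and they merge only if the pair stays
# under the table's average region size.

def plan_merge(sizes):
    """Return the pair of adjacent indices to merge, or None."""
    avg = sum(sizes) / len(sizes)
    smallest = min(range(len(sizes)), key=lambda i: sizes[i])
    # Pick the smaller of the smallest region's neighbors.
    neighbors = [i for i in (smallest - 1, smallest + 1) if 0 <= i < len(sizes)]
    neighbor = min(neighbors, key=lambda i: sizes[i])
    if sizes[smallest] + sizes[neighbor] < avg:
        return (min(smallest, neighbor), max(smallest, neighbor))
    return None

# r1 empty; r2-r4 large; r5, r6 small (illustrative row counts).
sizes = [0, 100_000, 100_000, 100_000, 27_000, 27_000]
# avg ~ 59_000; smallest is r1 (0), but its only neighbor r2 (100_000)
# makes the pair exceed the average, so no merge is planned -- even though
# r5 + r6 = 54_000 would fit under the average.
print(plan_merge(sizes))  # -> None
```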
[jira] [Created] (HBASE-14868) Table Attributes in shell output of desc command does not have a closing parenthesis
Romil Choksi created HBASE-14868: Summary: Table Attributes in shell output of desc command does not have a closing parenthesis Key: HBASE-14868 URL: https://issues.apache.org/jira/browse/HBASE-14868 Project: HBase Issue Type: Bug Components: shell Affects Versions: 1.2.0 Reporter: Romil Choksi Assignee: Romil Choksi Priority: Trivial The Table Attributes section in the shell output of the desc command does not have a closing parenthesis: {code} hbase(main):011:0> desc 'table_qa638m7y2r' Table table_qa638m7y2r is ENABLED table_qa638m7y2r, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'} COLUMN FAMILIES DESCRIPTION {NAME => 'cf1', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0610 seconds {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14838) Region Normalizer does not merge empty region of a table
Romil Choksi created HBASE-14838: Summary: Region Normalizer does not merge empty region of a table Key: HBASE-14838 URL: https://issues.apache.org/jira/browse/HBASE-14838 Project: HBase Issue Type: Bug Affects Versions: 1.1.2, 1.2.0 Reporter: Romil Choksi Assignee: Josh Elser Region Normalizer does not merge empty regions of a table. Steps to repro:
- Create an empty table with a few, say 5-6, regions without any data in any of them
- Scan the hbase:meta table, or check the HMaster UI, to verify the regions for the table
- Enable the normalizer switch and normalization for this table
- Run the normalizer via the 'normalize' command from the hbase shell
- Verify the regions for the table by scanning hbase:meta or checking the HMaster web UI
The empty regions are not merged on running the region normalizer. This seems to be an edge case with completely empty regions, since the Normalizer checks for: smallestRegion (in this case size 0) + smallestNeighborOfSmallestRegion (in this case size 0) > avg region size (in this case 0). Thanks to [~elserj] for verifying this from the source code side. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
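A minimal sketch of the check quoted above, showing why a completely empty table never triggers a merge (a simplified model of the described condition, not the actual Normalizer code):

```python
# Edge case described above: with a completely empty table the average
# region size is 0, so "smallest + smallest neighbor < average"
# (0 + 0 < 0) is never true and no merge plan is ever produced.

def should_merge(smallest, neighbor, avg):
    # Merge only when the combined pair would stay under the average size.
    return smallest + neighbor < avg

# Five empty regions: every size is 0, so the average is 0 as well.
sizes = [0, 0, 0, 0, 0]
avg = sum(sizes) / len(sizes)
print(should_merge(sizes[0], sizes[1], avg))  # -> False
```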
[jira] [Commented] (HBASE-14367) Add normalization support to shell
[ https://issues.apache.org/jira/browse/HBASE-14367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003237#comment-15003237 ] Romil Choksi commented on HBASE-14367: -- I am trying to create a new table and set the NORMALIZATION_ENABLED as true, but seems like the argument NORMALIZATION_ENABLED is being ignored. And the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command on that table {code} hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 'true'} An argument ignored (unknown or overridden): NORMALIZATION_ENABLED 0 row(s) in 4.2670 seconds => Hbase::Table - test-table-4 hbase(main):021:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4 COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0430 seconds {code} However, on doing an alter command on that table we can set the NORMALIZATION_ENABLED attribute for that table {code} hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'} Unknown argument ignored: NORMALIZATION_ENABLED Updating all regions with the new schema... 1/1 regions updated. Done. 
0 row(s) in 2.3640 seconds hbase(main):023:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'} COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0190 seconds {code} I think it would be better to have a single step process to enable normalization while creating the table itself, rather than a two step process to alter the table later on to enable normalization > Add normalization support to shell > -- > > Key: HBASE-14367 > URL: https://issues.apache.org/jira/browse/HBASE-14367 > Project: HBase > Issue Type: Bug > Components: Balancer, shell >Affects Versions: 1.1.2 >Reporter: Lars George >Assignee: Mikhail Antonov > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: HBASE-14367-branch-1.2.v1.patch, > HBASE-14367-branch-1.2.v2.patch, HBASE-14367-branch-1.2.v3.patch, > HBASE-14367-branch-1.v1.patch, HBASE-14367-v1.patch, HBASE-14367.patch > > > https://issues.apache.org/jira/browse/HBASE-13103 adds support for setting a > normalization flag per {{HTableDescriptor}}, along with the server side chore > to do the work. > What is lacking is to easily set this from the shell, right now you need to > use the Java API to modify the descriptor. This issue is to add the flag as a > known attribute key and/or other means to toggle this per table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
Romil Choksi created HBASE-14804: Summary: HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute Key: HBASE-14804 URL: https://issues.apache.org/jira/browse/HBASE-14804 Project: HBase Issue Type: Bug Components: shell Affects Versions: 1.1.2 Reporter: Romil Choksi I am trying to create a new table and set the NORMALIZATION_ENABLED as true, but seems like the argument NORMALIZATION_ENABLED is being ignored. And the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command on that table hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 'true'} An argument ignored (unknown or overridden): NORMALIZATION_ENABLED 0 row(s) in 4.2670 seconds => Hbase::Table - test-table-4 hbase(main):021:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4 COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0430 seconds However, on doing an alter command on that table we can set the NORMALIZATION_ENABLED attribute for that table hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'} Unknown argument ignored: NORMALIZATION_ENABLED Updating all regions with the new schema... 1/1 regions updated. Done. 
0 row(s) in 2.3640 seconds hbase(main):023:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'} COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0190 seconds I think it would be better to have a single step process to enable normalization while creating the table itself, rather than a two step process to alter the table later on to enable normalization -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14804) HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
[ https://issues.apache.org/jira/browse/HBASE-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Romil Choksi updated HBASE-14804: - Description: I am trying to create a new table and set the NORMALIZATION_ENABLED as true, but seems like the argument NORMALIZATION_ENABLED is being ignored. And the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command on that table {code} hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 'true'} An argument ignored (unknown or overridden): NORMALIZATION_ENABLED 0 row(s) in 4.2670 seconds => Hbase::Table - test-table-4 hbase(main):021:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4 COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0430 seconds {code} However, on doing an alter command on that table we can set the NORMALIZATION_ENABLED attribute for that table {code} hbase(main):022:0> alter 'test-table-4', {NORMALIZATION_ENABLED => 'true'} Unknown argument ignored: NORMALIZATION_ENABLED Updating all regions with the new schema... 1/1 regions updated. Done. 
0 row(s) in 2.3640 seconds hbase(main):023:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4, {TABLE_ATTRIBUTES => {NORMALIZATION_ENABLED => 'true'} COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0190 seconds {code} I think it would be better to have a single step process to enable normalization while creating the table itself, rather than a two step process to alter the table later on to enable normalization was: I am trying to create a new table and set the NORMALIZATION_ENABLED as true, but seems like the argument NORMALIZATION_ENABLED is being ignored. And the attribute NORMALIZATION_ENABLED is not displayed on doing a desc command on that table hbase(main):020:0> create 'test-table-4', 'cf', {NORMALIZATION_ENABLED => 'true'} An argument ignored (unknown or overridden): NORMALIZATION_ENABLED 0 row(s) in 4.2670 seconds => Hbase::Table - test-table-4 hbase(main):021:0> desc 'test-table-4' Table test-table-4 is ENABLED test-table-4 COLUMN FAMILIES DESCRIPTION {NAME => 'cf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOC KCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 1 row(s) in 0.0430 seconds
[jira] [Commented] (HBASE-14805) status should show the master in shell
[ https://issues.apache.org/jira/browse/HBASE-14805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003447#comment-15003447 ] Romil Choksi commented on HBASE-14805: -- [~enis] Have you tried zk_dump command from hbase shell, it gives out active master, backup masters and region servers {code} hbase(main):060:0> zk_dump HBase is rooted at /hbase-secure Active master address: hbase-dalm20-rc-2.novalocal,2,1447301843054 Backup master addresses: Region server holding hbase:meta: hbase-dalm20-rc-7.novalocal,16020,1447301860073 Region servers: hbase-dalm20-rc-5.novalocal,16020,1447301868926 hbase-dalm20-rc-1.novalocal,16020,1447301859425 hbase-dalm20-rc-2.novalocal,16020,1447301856988 {code} > status should show the master in shell > -- > > Key: HBASE-14805 > URL: https://issues.apache.org/jira/browse/HBASE-14805 > Project: HBase > Issue Type: Improvement >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0, 1.2.0, 1.3.0 > > Attachments: hbase-14805_v1.patch > > > {{status 'simple'}} or {{'detailed'}} only shows the regionservers and > regions, but not the active master. Actually, there is no way to know about > the active masters from the shell it seems. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13663) HMaster fails to restart 'HMaster: Failed to become active master'
Romil Choksi created HBASE-13663: Summary: HMaster fails to restart 'HMaster: Failed to become active master' Key: HBASE-13663 URL: https://issues.apache.org/jira/browse/HBASE-13663 Project: HBase Issue Type: Bug Components: hbase Affects Versions: 1.1.0 Reporter: Romil Choksi HMaster fails to restart 'HMaster: Failed to become active master' from Master log: {code} 2015-05-08 11:25:14,020 FATAL [MasterNOde:16000.activeMasterManager] master.HMaster: Failed to become active master java.lang.NullPointerException at org.apache.hadoop.hbase.master.AssignmentManager.rebuildUserRegions(AssignmentManager.java:2885) at org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:483) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:763) at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182) at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1632) at java.lang.Thread.run(Thread.java:745) 2015-05-08 11:25:14,023 FATAL [MasterNOde:16000.activeMasterManager] master.HMaster: Master server abort: loaded coprocessors are: [] 2015-05-08 11:25:14,023 FATAL [MasterNOde:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown. java.lang.NullPointerException at org.apache.hadoop.hbase.master.AssignmentManager.rebuildUserRegions(AssignmentManager.java:2885) at org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:483) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:763) at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:182) at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1632) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-13653) Uninitialized HRegionServer#walFactory may result in NullPointerException at region server startup
Romil Choksi created HBASE-13653: Summary: Uninitialized HRegionServer#walFactory may result in NullPointerException at region server startup Key: HBASE-13653 URL: https://issues.apache.org/jira/browse/HBASE-13653 Project: HBase Issue Type: Bug Components: hbase Reporter: Romil Choksi hbase --config /tmp/hbaseConf org.apache.hadoop.hbase.IntegrationTestIngest --monkey unbalance causes NPE {code} 2015-05-08 08:44:20,885 ERROR [B.defaultRpcServer.handler=28,queue=1,port=16000] master.ServerManager: Received exception in RPC for warmup server:RegionServer1,16020,1431074656202region: {ENCODED = 40133c823b6d9d9dece99db1aad62730, NAME = 'SYSTEM.SEQUENCE,2\x00\x00\x00,1431070054641.40133c823b6d9d9dece99db1aad62730.', STARTKEY = '2\x00\x00\x00', ENDKEY = '3\x00\x00\x00'}exception: java.io.IOException: java.io.IOException at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2154) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:1825) at org.apache.hadoop.hbase.regionserver.RSRpcServices.warmupRegion(RSRpcServices.java:1559) at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:21997) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112) ... 4 more {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)