[jira] [Created] (HBASE-26827) RegionServer JVM crash when compact mob table
Yi Mei created HBASE-26827:
----------------------------------
Summary: RegionServer JVM crash when compact mob table
Key: HBASE-26827
URL: https://issues.apache.org/jira/browse/HBASE-26827
Project: HBase
Issue Type: Bug
Components: Compaction
Reporter: Yi Mei

When compacting a mob table, the RegionServer JVM may crash, or the compaction may fail, as in the following logs:
{code:java}
2022-03-11T16:18:44,089 ERROR [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45525-shortCompactions-0] regionserver.CompactSplit$CompactionRunner(675): Compaction failed Request=regionName=t1,,1646986716811.964618e679a2434aa7d27018baef8154., storeName=A, fileCount=2, fileSize=2.0 M (1010.2 K, 1010.2 K), priority=1, time=1646986723135
java.io.IOException: Mob compaction failed for region: 964618e679a2434aa7d27018baef8154
	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.performCompaction(DefaultMobStoreCompactor.java:574) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:365) ~[classes/:?]
	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.compact(DefaultMobStoreCompactor.java:225) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:125) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1141) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2442) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:656) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:702) ~[classes/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_292]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_292]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
Caused by: java.io.IOException: Added a key not lexically larger than previous. Current cell = org.apache.hadoop.hbase.PrivateCellUtil$ValueAndTagRewriteByteBufferExtendedCell@565d5bac, prevCell = user/A:filed01/1646986721047/Put/vlen=0/mvcc=0
	at org.apache.hadoop.hbase.util.BloomContext.sanityCheck(BloomContext.java:63) ~[classes/:?]
	at org.apache.hadoop.hbase.util.BloomContext.writeBloom(BloomContext.java:54) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.StoreFileWriter.appendGeneralBloomfilter(StoreFileWriter.java:296) ~[classes/:?]
	at org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(StoreFileWriter.java:315) ~[classes/:?]
	at org.apache.hadoop.hbase.mob.DefaultMobStoreCompactor.performCompaction(DefaultMobStoreCompactor.java:464) ~[classes/:?]
	... 10 more {code}
It is the same problem as [HBASE-25929|https://issues.apache.org/jira/browse/HBASE-25929], because DefaultMobStoreCompactor overrides the performCompaction method of DefaultCompactor.
--
This message was sent by Atlassian Jira (v8.20.1#820001)
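Editorial note for context: the "Added a key not lexically larger than previous" check fires because the store file writer requires cells to arrive in strictly increasing key order; per HBASE-25929, the overridden performCompaction can end up emitting cells whose backing buffers were released, so a later key compares as not larger than the previous one. A minimal, self-contained illustration of such an append-time ordering guard (plain byte arrays, not HBase source):

```java
import java.io.IOException;

// Toy append-time ordering guard, in the spirit of BloomContext.sanityCheck:
// every appended key must be strictly larger than the previous one.
public class OrderedAppendCheck {
    private byte[] prevKey;

    // Lexicographic comparison of byte arrays, treating bytes as unsigned.
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public void append(byte[] key) throws IOException {
        if (prevKey != null && compare(key, prevKey) <= 0) {
            // This is the condition the compaction trips when a cell's backing
            // buffer is corrupted or reused out of order.
            throw new IOException("Added a key not lexically larger than previous.");
        }
        prevKey = key;
    }
}
```

The real check compares full HBase cell keys with a CellComparator; the failure mode is the same: once one out-of-order key arrives, the write is aborted.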
[jira] [Created] (HBASE-26670) HFileLinkCleaner should be added even if snapshot is disabled
Yi Mei created HBASE-26670:
----------------------------------
Summary: HFileLinkCleaner should be added even if snapshot is disabled
Key: HBASE-26670
URL: https://issues.apache.org/jira/browse/HBASE-26670
Project: HBase
Issue Type: Bug
Reporter: Yi Mei

We encountered a problem in our cluster:
1. The cluster has many snapshots, so the archive directory is very large.
2. We deleted some snapshots, but the cleaner ran slowly because there is a race on the synchronized method of SnapshotHFileCleaner.
3. We deleted all snapshots and disabled the snapshot feature (hbase.snapshot.enabled=false), so the cleaner skips the synchronized method in SnapshotHFileCleaner.
4. After the cleaner ran, some back-reference and data files under the archive directory were deleted, but they were still in use by some restored tables. This does not meet expectations.

One solution is to add HFileLinkCleaner even if snapshot is disabled.
--
This message was sent by Atlassian Jira (v8.20.1#820001)
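Editorial sketch of the operator-side mitigation (illustrative configuration; the issue itself proposes a code-side fix so the cleaner is registered unconditionally). Keeping HFileLinkCleaner in the HFile cleaner chain means back-referenced files are protected from deletion even while snapshots are disabled; the plugin list below assumes the standard `hbase.master.hfilecleaner.plugins` property and default cleaner class names:

```xml
<!-- hbase-site.xml: keep HFileLinkCleaner among the HFile cleaner delegates
     even when hbase.snapshot.enabled=false, so files still referenced by
     hfile links (e.g. from restored tables) are not removed from archive. -->
<property>
  <name>hbase.master.hfilecleaner.plugins</name>
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner</value>
</property>
```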
[jira] [Resolved] (HBASE-26646) WALPlayer should obtain token from filesystem
[ https://issues.apache.org/jira/browse/HBASE-26646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-26646. Fix Version/s: 2.5.0 3.0.0-alpha-3 2.4.10 Resolution: Fixed > WALPlayer should obtain token from filesystem > - > > Key: HBASE-26646 > URL: https://issues.apache.org/jira/browse/HBASE-26646 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10 > > > When we use WALPlayer, we got the following exceptions: > {code:java} > 2021-12-27 17:20:13,388 WARN [main] org.apache.hadoop.mapred.YarnChild: > Exception running child : java.io.IOException: Failed on local exception: > java.io.IOException: org.apache.hadoop.security.AccessControlException: > Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host > is: "c4-hadoop-tst-st95.bj/10.132.18.11"; destination host is: > "c4-hadoop-tst-ct01.bj":58300; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:775) > at org.apache.hadoop.ipc.Client.call(Client.java:1488) > at org.apache.hadoop.ipc.Client.call(Client.java:1415) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) > at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:807) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:249) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:107) > at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source) > at 
org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2115) > at > org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1221) > at > org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1217) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1233) > at > org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64) > at > org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168) > at > org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:332) > at > org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:314) > at > org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:302) > at > org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:444) > at > org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(AbstractFSWALProvider.java:497) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.openReader(WALInputFormat.java:161) > at > org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:154) > at > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1885) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26646) WALPlayer should obtain token from filesystem
Yi Mei created HBASE-26646:
----------------------------------
Summary: WALPlayer should obtain token from filesystem
Key: HBASE-26646
URL: https://issues.apache.org/jira/browse/HBASE-26646
Project: HBase
Issue Type: Bug
Reporter: Yi Mei

When we used WALPlayer, we got the following exceptions:
{code:java}
2021-12-27 17:20:13,388 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "c4-hadoop-tst-st95.bj/10.132.18.11"; destination host is: "c4-hadoop-tst-ct01.bj":58300;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:775)
	at org.apache.hadoop.ipc.Client.call(Client.java:1488)
	at org.apache.hadoop.ipc.Client.call(Client.java:1415)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:807)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:249)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:107)
	at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2115)
	at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1221)
	at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1217)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1233)
	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:64)
	at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:168)
	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:332)
	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:314)
	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:302)
	at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:444)
	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.openReader(AbstractFSWALProvider.java:497)
	at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.openReader(WALInputFormat.java:161)
	at org.apache.hadoop.hbase.mapreduce.WALInputFormat$WALRecordReader.initialize(WALInputFormat.java:154)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1885)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) {code}
--
This message was sent by Atlassian Jira (v8.20.1#820001)
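Editorial note on the failure pattern: the mapper tries to open WAL files on a filesystem for which the submitted job carries no delegation token, so RPC authentication falls back through [TOKEN, KERBEROS] and both fail. In Hadoop MapReduce the usual remedy is for the job driver to obtain delegation tokens for every input filesystem before submission (e.g. via TokenCache.obtainTokensForNamenodes). A toy, self-contained sketch of that pattern with stand-in types (not the Hadoop API; all names below are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the job-submission step that gathers delegation tokens for
// every filesystem the job will read. A real driver would use
// org.apache.hadoop.security.Credentials and TokenCache instead.
public class TokenSetup {
    static class Credentials {
        final Map<String, String> tokens = new HashMap<>();
        boolean hasToken(String service) { return tokens.containsKey(service); }
    }

    // Record one token per distinct namenode service found in the input paths.
    static void obtainTokensForInputs(Credentials creds, String[] inputPaths) {
        for (String path : inputPaths) {
            // "hdfs://nn1:8020/hbase/oldWALs" -> service "nn1:8020"
            String service = path.split("/")[2];
            creds.tokens.computeIfAbsent(service, s -> "token-for-" + s);
        }
    }

    public static void main(String[] args) {
        Credentials creds = new Credentials();
        // Without this step, the mapper cannot authenticate to the WAL filesystem.
        obtainTokensForInputs(creds, new String[] { "hdfs://nn1:8020/hbase/oldWALs" });
        System.out.println(creds.hasToken("nn1:8020")); // true
    }
}
```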
[jira] [Resolved] (HBASE-26625) ExportSnapshot tool failed to copy data files for tables with merge region
[ https://issues.apache.org/jira/browse/HBASE-26625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-26625. Fix Version/s: 2.5.0 3.0.0-alpha-3 2.4.10 Resolution: Fixed > ExportSnapshot tool failed to copy data files for tables with merge region > -- > > Key: HBASE-26625 > URL: https://issues.apache.org/jira/browse/HBASE-26625 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10 > > > When export snapshot for a table with merge regions, we found following > exceptions: > {code:java} > 2021-12-24 17:14:41,563 INFO [main] snapshot.ExportSnapshot: Finalize the > Snapshot Export > 2021-12-24 17:14:41,589 INFO [main] snapshot.ExportSnapshot: Verify snapshot > integrity > 2021-12-24 17:14:41,683 ERROR [main] snapshot.ExportSnapshot: Snapshot export > failed > org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent > hfile for: 043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853 > path=A/a=t42=8e935527eb39a2cf9bf0f596754b5853-043a9fe8aa7c469d8324956a57849db5 > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:232) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:195) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:172) > at > org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:156) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.verifySnapshot(ExportSnapshot.java:851) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.doWork(ExportSnapshot.java:1096) > at > org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at > 
org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:280) > at > org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1144) > {code} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26625) ExportSnapshot tool fail to copy data files for tables with merge region
Yi Mei created HBASE-26625:
----------------------------------
Summary: ExportSnapshot tool fail to copy data files for tables with merge region
Key: HBASE-26625
URL: https://issues.apache.org/jira/browse/HBASE-26625
Project: HBase
Issue Type: Bug
Reporter: Yi Mei

When exporting a snapshot for a table with merged regions, we found the following exceptions:
{code:java}
2021-12-24 17:14:41,563 INFO [main] snapshot.ExportSnapshot: Finalize the Snapshot Export
2021-12-24 17:14:41,589 INFO [main] snapshot.ExportSnapshot: Verify snapshot integrity
2021-12-24 17:14:41,683 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Missing parent hfile for: 043a9fe8aa7c469d8324956a57849db5.8e935527eb39a2cf9bf0f596754b5853 path=A/a=t42=8e935527eb39a2cf9bf0f596754b5853-043a9fe8aa7c469d8324956a57849db5
	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:232)
	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.concurrentVisitReferencedFiles(SnapshotReferenceUtil.java:195)
	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:172)
	at org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil.verifySnapshot(SnapshotReferenceUtil.java:156)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.verifySnapshot(ExportSnapshot.java:851)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.doWork(ExportSnapshot.java:1096)
	at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:154)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hbase.util.AbstractHBaseTool.doStaticMain(AbstractHBaseTool.java:280)
	at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1144) {code}
--
This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
[ https://issues.apache.org/jira/browse/HBASE-26615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei resolved HBASE-26615.
Fix Version/s: 2.5.0
               3.0.0-alpha-3
               2.4.10
Resolution: Fixed

> Snapshot referenced data files are deleted when delete a table with merge
> regions
> -------------------------------------------------------------------------
>
> Key: HBASE-26615
> URL: https://issues.apache.org/jira/browse/HBASE-26615
> Project: HBase
> Issue Type: Bug
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-3, 2.4.10
>
> In our cluster, we have a feature: take a snapshot when deleting a table.
> But when we restore such a snapshot, we find that some data files have been deleted.
> The problem is that when deleting a table with merged regions, HBase only
> archives the regions listed in meta; the merged parent regions, which still
> contain data files referenced by the snapshot, are deleted from the file system.
>
--
This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HBASE-26615) Snapshot referenced data files are deleted when delete a table with merge regions
Yi Mei created HBASE-26615:
----------------------------------
Summary: Snapshot referenced data files are deleted when delete a table with merge regions
Key: HBASE-26615
URL: https://issues.apache.org/jira/browse/HBASE-26615
Project: HBase
Issue Type: Bug
Reporter: Yi Mei

In our cluster, we have a feature: take a snapshot when deleting a table. But when we restored such a snapshot, we found that some data files had been deleted. The problem is that, when deleting a table, HBase only archives the regions listed in meta.
--
This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HBASE-26261) Store configuration loss when use update_config
[ https://issues.apache.org/jira/browse/HBASE-26261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei resolved HBASE-26261.
Fix Version/s: 2.4.7
               2.3.7
               3.0.0-alpha-2
               2.5.0
Resolution: Fixed

Pushed to branch-2.3. Thanks [~zhangduo] for reviewing.

> Store configuration loss when use update_config
> -----------------------------------------------
>
> Key: HBASE-26261
> URL: https://issues.apache.org/jira/browse/HBASE-26261
> Project: HBase
> Issue Type: Bug
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7
>
> When using the update_config shell command, some store configuration is lost.
> When the store is initialized, the conf is set by:
> {code:java}
> this.conf = new CompoundConfiguration()
>   .add(confParam)
>   .addBytesMap(region.getTableDescriptor().getValues())
>   .addStringMap(family.getConfiguration())
>   .addBytesMap(family.getValues());
> {code}
> When the configuration is changed, the conf is set by:
> {code:java}
> this.conf = new CompoundConfiguration()
>   .add(conf)
>   .addBytesMap(getColumnFamilyDescriptor().getValues());
> {code}
--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-26270) Provide getConfiguration method for Region and Store interface
[ https://issues.apache.org/jira/browse/HBASE-26270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei resolved HBASE-26270.
Fix Version/s: 2.4.7
               2.3.7
               3.0.0-alpha-2
               2.5.0
Release Note: Provide a 'getReadOnlyConfiguration' for Store and Region interface
Resolution: Fixed

> Provide getConfiguration method for Region and Store interface
> --------------------------------------------------------------
>
> Key: HBASE-26270
> URL: https://issues.apache.org/jira/browse/HBASE-26270
> Project: HBase
> Issue Type: Improvement
> Reporter: Yi Mei
> Assignee: Yi Mei
> Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.3.7, 2.4.7
>
> In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261],
> [~zhangduo] suggested that we should provide a getConfiguration method for
> the Region and Store interfaces.
--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-26270) Provide getConfiguration method for Region and Store interface
Yi Mei created HBASE-26270:
----------------------------------
Summary: Provide getConfiguration method for Region and Store interface
Key: HBASE-26270
URL: https://issues.apache.org/jira/browse/HBASE-26270
Project: HBase
Issue Type: Improvement
Reporter: Yi Mei

In [HBASE-26261|https://issues.apache.org/jira/browse/HBASE-26261], [~zhangduo] suggested that we should provide a getConfiguration method for the Region and Store interfaces.
--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-26261) Store configuration loss when use update_config
Yi Mei created HBASE-26261:
----------------------------------
Summary: Store configuration loss when use update_config
Key: HBASE-26261
URL: https://issues.apache.org/jira/browse/HBASE-26261
Project: HBase
Issue Type: Bug
Reporter: Yi Mei

When using the update_config shell command, some store configuration is lost.

When the store is initialized, the conf is set by:
{code:java}
this.conf = new CompoundConfiguration()
  .add(confParam)
  .addBytesMap(region.getTableDescriptor().getValues())
  .addStringMap(family.getConfiguration())
  .addBytesMap(family.getValues());
{code}
When the configuration is changed, the conf is set by:
{code:java}
this.conf = new CompoundConfiguration()
  .add(conf)
  .addBytesMap(getColumnFamilyDescriptor().getValues());
{code}
--
This message was sent by Atlassian Jira (v8.3.4#803005)
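Editorial sketch of why the second snippet loses settings: CompoundConfiguration is layered, with later-added layers overriding earlier ones, and the reload path omits the per-family string-map layer entirely. A self-contained toy model of that layering (not the HBase class; the property name below is only an example):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of CompoundConfiguration: layers added later override earlier ones.
public class LayeredConf {
    private final List<Map<String, String>> layers = new ArrayList<>();

    LayeredConf add(Map<String, String> layer) { layers.add(layer); return this; }

    String get(String key) {
        String value = null;
        for (Map<String, String> layer : layers) {
            if (layer.containsKey(key)) value = layer.get(key); // last layer wins
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> base = new LinkedHashMap<>();
        base.put("hbase.hstore.blockingStoreFiles", "16");
        Map<String, String> familyConf = new LinkedHashMap<>();
        familyConf.put("hbase.hstore.blockingStoreFiles", "200"); // per-family override

        // Store initialization: base conf plus the family configuration layer.
        LayeredConf init = new LayeredConf().add(base).add(familyConf);
        // Reload path that forgets the family layer (the bug described above).
        LayeredConf reload = new LayeredConf().add(base);

        System.out.println(init.get("hbase.hstore.blockingStoreFiles"));   // 200
        System.out.println(reload.get("hbase.hstore.blockingStoreFiles")); // 16 - override lost
    }
}
```

Rebuilding the configuration with the same layers in the same order as at store initialization restores the lost per-family values.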
[jira] [Resolved] (HBASE-24734) RegionInfo#containsRange should support check meta table
[ https://issues.apache.org/jira/browse/HBASE-24734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-24734. Fix Version/s: 2.4.5 3.0.0-alpha-2 2.3.6 2.5.0 Resolution: Fixed > RegionInfo#containsRange should support check meta table > > > Key: HBASE-24734 > URL: https://issues.apache.org/jira/browse/HBASE-24734 > Project: HBase > Issue Type: Sub-task > Components: HFile, MTTR >Reporter: Michael Stack >Priority: Major > Fix For: 2.5.0, 2.3.6, 3.0.0-alpha-2, 2.4.5 > > > Came across this when we were testing the 'split-to-hfile' feature running > ITBLL: > > {code:java} > 2020-07-10 10:16:49,983 INFO org.apache.hadoop.hbase.regionserver.HRegion: > Closing region hbase:meta,,1.15882307402020-07-10 10:16:49,997 INFO > org.apache.hadoop.hbase.regionserver.HRegion: Closed > hbase:meta,,1.15882307402020-07-10 10:16:49,998 WARN > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler: Fatal error > occurred while opening region hbase:meta,,1.1588230740, > aborting...java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. 
> at > org.apache.hadoop.hbase.client.RegionInfoBuilder$MutableRegionInfo.containsRange(RegionInfoBuilder.java:300) > at > org.apache.hadoop.hbase.regionserver.HStore.tryCommitRecoveredHFile(HStore.java:) > at > org.apache.hadoop.hbase.regionserver.HRegion.loadRecoveredHFilesIfAny(HRegion.java:5442) > at > org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1010) > at > org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:950) >at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7490) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7448) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7424) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7382) > at > org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7333) > at > org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:135) > at > org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at java.base/java.lang.Thread.run(Thread.java:834)2020-07-10 > 10:16:50,005 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: * > ABORTING region server hbasedn149.example.org,16020,1594375563853: Failed to > open region hbase:meta,,1.1588230740 and can not recover > *java.lang.IllegalArgumentException: Invalid range: > IntegrationTestBigLinkedList,,1594350463222.8f89e01a5245e79946e22d8a8ab4698b. > > > IntegrationTestBigLinkedList,\x10\x02J\xA1,1594349535271.be24dc276f686e6dcc7fb9d3f91c8387. > {code} > Seems basic case of wrong comparator. 
Below passes if I use the meta > comparator > {code:java} > @Test > public void testBinaryKeys() throws Exception { > Set set = new TreeSet<>(CellComparatorImpl.COMPARATOR); > final byte [] fam = Bytes.toBytes("col"); > final byte [] qf = Bytes.toBytes("umn"); > final byte [] nb = new byte[0]; > Cell [] keys = { > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u\u,2"), fam, qf, 2, > nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u0001,3"), fam, qf, 3, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,,1"), fam, qf, 1, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,\u1000,5"), fam, qf, 5, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,4"), fam, qf, 4, nb)), > createByteBufferKeyValueFromKeyValue( > new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb)), > }; > // Add to set with bad comparator > Collections.addAll(set, keys); > // This will output the keys incorrectly. > boolean assertion = false; > int count = 0; > try { > for (Cell k: set) { > assertTrue("count=" + count + ", " + k.toString(), count++ == > k.getTimestamp()); > } > } catch (AssertionError e) { > // Expected > assertion = true; > } > assertTrue(assertion); > // M
[jira] [Resolved] (HBASE-25929) RegionServer JVM crash when compaction
[ https://issues.apache.org/jira/browse/HBASE-25929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25929. Fix Version/s: 2.4.4 2.3.6 2.5.0 3.0.0-alpha-1 Resolution: Fixed > RegionServer JVM crash when compaction > -- > > Key: HBASE-25929 > URL: https://issues.apache.org/jira/browse/HBASE-25929 > Project: HBase > Issue Type: Bug > Components: Compaction >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5, 2.4.3 >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.4 > > Attachments: hs_err_pid27712.log, hs_err_pid28814.log > > > In our cluster, we found region servers may be crashed in several cases. > In hs_err_pid27712.log: > {code:java} > Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) > J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] > J 5884 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] > J 4274 C1 > org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V > (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] > J 5211 C2 > org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V > (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] > J 5985 C1 > org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I > (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] > J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 > bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] > J 6004 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; > (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] > J 6000 C1 > org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; > (10 
bytes) @ 0x7f85c9854d14 [0x7f85c9854ba0+0x174] > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 > j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 > j > org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 > j > org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 > j > org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 > j > org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 > j > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 > j > 
org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 > {code} > In hs_err_pid28814.log: > {code:java} > Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free > space=251k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native > code) > V [libjvm.so+0x747fa0] > J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V > (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] > j > org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Obj
[jira] [Created] (HBASE-25929) RegionServer JVM crash when compaction
Yi Mei created HBASE-25929: -- Summary: RegionServer JVM crash when compaction Key: HBASE-25929 URL: https://issues.apache.org/jira/browse/HBASE-25929 Project: HBase Issue Type: Bug Components: Compaction Affects Versions: 2.4.3, 2.3.5, 3.0.0-alpha-1, 2.5.0 Reporter: Yi Mei Assignee: Yi Mei Attachments: hs_err_pid27712.log, hs_err_pid28814.log In our cluster, we found region servers may be crashed in several cases. In hs_err_pid27712.log: {code:java} Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) J 2687 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x7f85c987eda7 [0x7f85c987ed40+0x67] J 5884 C1 org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V (62 bytes) @ 0x7f85c93fd904 [0x7f85c93fd780+0x184] J 4274 C1 org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V (73 bytes) @ 0x7f85c9d57a94 [0x7f85c9d574a0+0x5f4] J 5211 C2 org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V (69 bytes) @ 0x7f85ca039a34 [0x7f85ca0399a0+0x94] J 5985 C1 org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I (59 bytes) @ 0x7f85c9296a34 [0x7f85c92964c0+0x574] J 6011 C1 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 bytes) @ 0x7f85c913e094 [0x7f85c913d4c0+0xbd4] J 6004 C1 org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String; (211 bytes) @ 0x7f85c93737b4 [0x7f85c93722e0+0x14d4] J 6000 C1 org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String; (10 bytes) @ 0x7f85c9854d14 [0x7f85c9854ba0+0x174] j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.getMidpoint(Lorg/apache/hadoop/hbase/CellComparator;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/Cell;Lorg/apache/hadoop/hbase/io/hfile/HFileContext;)Lorg/apache/hadoop/hbase/Cell;+132 j 
org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock()V+102 j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.checkBlockBoundary()V+32 j org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.append(Lorg/apache/hadoop/hbase/Cell;)V+77 j org.apache.hadoop.hbase.regionserver.StoreFileWriter.append(Lorg/apache/hadoop/hbase/Cell;)V+20 j org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$FileDetails;Lorg/apache/hadoop/hbase/regionserver/InternalScanner;Lorg/apache/hadoop/hbase/regionserver/CellSink;JZLorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;ZI)Z+318 j org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$InternalScannerFactory;Lorg/apache/hadoop/hbase/regionserver/compactions/Compactor$CellSinkFactory;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+221 j org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionRequestImpl;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+12 j org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+16 j org.apache.hadoop.hbase.regionserver.HStore.compact(Lorg/apache/hadoop/hbase/regionserver/compactions/CompactionContext;Lorg/apache/hadoop/hbase/regionserver/throttle/ThroughputController;Lorg/apache/hadoop/hbase/security/User;)Ljava/util/List;+194 {code} In hs_err_pid28814.log: {code:java} Stack: [0x7f6d8e69b000,0x7f6d8e6dc000], sp=0x7f6d8e6d9e88, free space=251k Native frames: (J=compiled Java code, 
j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0x747fa0] J 2989 sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V (0 bytes) @ 0x7f751db756e1 [0x7f751db75600+0xe1] j org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36 j org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69 j org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V+39 j org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+31 J 12082 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B (5 bytes) @ 0x7f751ef15fbc [0x7f751ef15dc0+0x1fc] J 16584 C2 org.apache.hadoop.hbase.CellUtil.getCellKeyAs
[jira] [Resolved] (HBASE-25747) Remove unused getWriteAvailable method in OperationQuota
[ https://issues.apache.org/jira/browse/HBASE-25747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25747. Fix Version/s: 2.4.3 2.5.0 3.0.0-alpha-1 Resolution: Fixed > Remove unused getWriteAvailable method in OperationQuota > > > Key: HBASE-25747 > URL: https://issues.apache.org/jira/browse/HBASE-25747 > Project: HBase > Issue Type: Improvement > Components: Quotas >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > The getWriteAvailable method is unused in OperationQuota, because for write > operation, the size is accurate. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25747) Remove unused getWriteAvailable method in OperationQuota
Yi Mei created HBASE-25747: -- Summary: Remove unused getWriteAvailable method in OperationQuota Key: HBASE-25747 URL: https://issues.apache.org/jira/browse/HBASE-25747 Project: HBase Issue Type: Improvement Components: Quotas Reporter: Yi Mei Assignee: Yi Mei The getWriteAvailable method is unused in OperationQuota because, for write operations, the size is known accurately. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25736) Scan should be limited by read capacity unit quota if read size quota is not set
Yi Mei created HBASE-25736: -- Summary: Scan should be limited by read capacity unit quota if read size quota is not set Key: HBASE-25736 URL: https://issues.apache.org/jira/browse/HBASE-25736 Project: HBase Issue Type: Improvement Components: Quotas Reporter: Yi Mei Scan is currently limited by the available quota size, and the quota size only considers the READ_SIZE type: {code:java} long maxQuotaResultSize = Math.min(maxScannerResultSize, quota.getReadAvailable()); {code} If a read size quota is not set, we should limit the result size by the read capacity unit quota to avoid exceeding it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
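The proposed fallback can be sketched as follows. This is only an illustration, not the actual HBase quota code: the method name, parameter names, and the 1 KB-per-read-capacity-unit conversion are assumptions for this sketch.

```java
public class ScanQuotaSketch {
    // Assumption for this sketch: one read capacity unit corresponds to 1 KB read.
    static final long BYTES_PER_READ_CU = 1024;

    // Pick the scan result-size cap: prefer the read-size quota when set (> 0),
    // otherwise derive a byte budget from the remaining read capacity units.
    static long maxQuotaResultSize(long maxScannerResultSize, long readSizeAvailable,
                                   long readCapacityUnitsAvailable) {
        long quotaBytes = readSizeAvailable > 0
            ? readSizeAvailable
            : readCapacityUnitsAvailable * BYTES_PER_READ_CU;
        return Math.min(maxScannerResultSize, quotaBytes);
    }
}
```

With no read-size quota set but 100 capacity units remaining, the scan is capped at 102400 bytes instead of being unlimited.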
[jira] [Resolved] (HBASE-25636) Expose HBCK report as metrics
[ https://issues.apache.org/jira/browse/HBASE-25636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25636. Hadoop Flags: Reviewed Resolution: Fixed > Expose HBCK report as metrics > - > > Key: HBASE-25636 > URL: https://issues.apache.org/jira/browse/HBASE-25636 > Project: HBase > Issue Type: Improvement > Components: metrics >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.3 > > > Currently, we have a HBCK Report page in master UI to show the problems of > HBCK Chore report and CatalogJanitor Consistency report. We can expose these > problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25636) Expose HBCK report as metrics
Yi Mei created HBASE-25636: -- Summary: Expose HBCK report as metrics Key: HBASE-25636 URL: https://issues.apache.org/jira/browse/HBASE-25636 Project: HBase Issue Type: Improvement Reporter: Yi Mei Currently, we have a HBCK Report page in master UI to show the problems of HBCK Chore report and CatalogJanitor Consistency report. We can expose these problems as metrics, so we can configure an alert. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-25097) Wrong RIT page number in Master UI
[ https://issues.apache.org/jira/browse/HBASE-25097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-25097. Fix Version/s: 2.2.7 2.4.0 2.3.3 3.0.0-alpha-1 Assignee: Yi Mei Resolution: Fixed > Wrong RIT page number in Master UI > -- > > Key: HBASE-25097 > URL: https://issues.apache.org/jira/browse/HBASE-25097 > Project: HBase > Issue Type: Bug > Components: UI >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.3, 2.4.0, 2.2.7 > > Attachments: 1.png, 2.png > > > In the following picture, there are 71 RITs in total, 10 per page, so there > should be 8 pages rather than 15: > !1.png! > !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-25097) Wrong RIT page number in Master UI
Yi Mei created HBASE-25097: -- Summary: Wrong RIT page number in Master UI Key: HBASE-25097 URL: https://issues.apache.org/jira/browse/HBASE-25097 Project: HBase Issue Type: Bug Reporter: Yi Mei Attachments: 1.png, 2.png In the following picture, there are 71 RITs in total, 10 per page, so there should be 8 pages rather than 15: !1.png! !2.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
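The expected page count is just a ceiling division of the RIT total by the page size. A minimal sketch of that arithmetic (hypothetical method name, not the actual Master UI code):

```java
public class RitPager {
    // Ceiling division: number of pages needed to show `total` items, `perPage` per page.
    static int pageCount(int total, int perPage) {
        if (perPage <= 0) throw new IllegalArgumentException("perPage must be positive");
        return (total + perPage - 1) / perPage;
    }

    public static void main(String[] args) {
        // 71 regions in transition, 10 per page -> 8 pages, not 15
        System.out.println(pageCount(71, 10));
    }
}
```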
[jira] [Created] (HBASE-25048) [HBCK2] Bypassed parent procedures are not updated in store
Yi Mei created HBASE-25048: -- Summary: [HBCK2] Bypassed parent procedures are not updated in store Key: HBASE-25048 URL: https://issues.apache.org/jira/browse/HBASE-25048 Project: HBase Issue Type: Bug Reporter: Yi Mei See code in [ProcedureExecutor|https://github.com/apache/hbase/blob/master/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/ProcedureExecutor.java#L980]: {code:java} Procedure current = procedure; while (current != null) { LOG.debug("Bypassing {}", current); current.bypass(getEnvironment()); store.update(procedure); // update current procedure long parentID = current.getParentProcId(); current = getProcedure(parentID); } {code} The loop calls store.update(procedure) on every iteration instead of store.update(current), so bypassed parent procedures are never persisted to the store. -- This message was sent by Atlassian Jira (v8.3.4#803005)
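The intended behavior can be sketched with stub types standing in for the real Procedure/ProcedureStore API (the class and method names below are illustrative, not HBase's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class BypassSketch {
    // Stub standing in for org.apache.hadoop.hbase.procedure2.Procedure.
    static class Proc {
        final long id; final Long parentId; boolean bypassed;
        Proc(long id, Long parentId) { this.id = id; this.parentId = parentId; }
    }

    // Walks from the given procedure up through its parents, bypassing each one
    // and recording which procedure id was written to the store. The fix is that
    // each iteration must persist `current`, not the procedure we started from.
    static List<Long> bypassChain(Proc start, Map<Long, Proc> procs) {
        List<Long> storedIds = new ArrayList<>();
        Proc current = start;
        while (current != null) {
            current.bypassed = true;
            storedIds.add(current.id); // i.e. store.update(current), not store.update(procedure)
            current = current.parentId == null ? null : procs.get(current.parentId);
        }
        return storedIds;
    }
}
```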
[jira] [Created] (HBASE-25047) WAL split edits number is negative in RegionServerUI
Yi Mei created HBASE-25047: -- Summary: WAL split edits number is negative in RegionServerUI Key: HBASE-25047 URL: https://issues.apache.org/jira/browse/HBASE-25047 Project: HBase Issue Type: Bug Reporter: Yi Mei Attachments: 2020-09-16 11-38-13屏幕截图.png !2020-09-16 11-38-13屏幕截图.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24653) Show snapshot owner on Master WebUI
Yi Mei created HBASE-24653: -- Summary: Show snapshot owner on Master WebUI Key: HBASE-24653 URL: https://issues.apache.org/jira/browse/HBASE-24653 Project: HBase Issue Type: Improvement Reporter: Yi Mei The Master UI already shows a lot of snapshot information; the owner is also useful for finding out who created a snapshot. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
[ https://issues.apache.org/jira/browse/HBASE-24364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-24364. Fix Version/s: 2.2.5 2.3.0 3.0.0-alpha-1 Resolution: Fixed > [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction > -- > > Key: HBASE-24364 > URL: https://issues.apache.org/jira/browse/HBASE-24364 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.5 > > > I found the following exception when running ITBLL: > {code:java} > 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception > performing action: > java.lang.IllegalArgumentException: There is no data block encoder for given > id '6' > at > org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) > at > org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) > at > org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) > at > org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) > at java.lang.Thread.run(Thread.java:748) > {code} > This is because PREFIX_TREE was removed from DataBlockEncoding: > {code:java} > /** Disable data block encoding. 
*/ > NONE(0, null), > // id 1 is reserved for the BITSET algorithm to be added later > PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), > DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), > FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), > // id 5 is reserved for the COPY_KEY algorithm for benchmarking > // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), > // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), > ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24364) [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction
Yi Mei created HBASE-24364: -- Summary: [Chaos Monkey] Invalid data block encoding in ChangeEncodingAction Key: HBASE-24364 URL: https://issues.apache.org/jira/browse/HBASE-24364 Project: HBase Issue Type: Bug Reporter: Yi Mei I found the following exception when running ITBLL: {code:java} 2020-05-12 11:43:14,201 WARN [ChaosMonkey] policies.Policy: Exception performing action: java.lang.IllegalArgumentException: There is no data block encoder for given id '6' at org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.getEncodingById(DataBlockEncoding.java:168) at org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.lambda$perform$0(ChangeEncodingAction.java:50) at org.apache.hadoop.hbase.chaos.actions.Action.modifyAllTableColumns(Action.java:356) at org.apache.hadoop.hbase.chaos.actions.ChangeEncodingAction.perform(ChangeEncodingAction.java:48) at org.apache.hadoop.hbase.chaos.policies.PeriodicRandomActionPolicy.runOneIteration(PeriodicRandomActionPolicy.java:59) at org.apache.hadoop.hbase.chaos.policies.PeriodicPolicy.run(PeriodicPolicy.java:41) at java.lang.Thread.run(Thread.java:748) {code} This is because PREFIX_TREE was removed from DataBlockEncoding: {code:java} /** Disable data block encoding. */ NONE(0, null), // id 1 is reserved for the BITSET algorithm to be added later PREFIX(2, "org.apache.hadoop.hbase.io.encoding.PrefixKeyDeltaEncoder"), DIFF(3, "org.apache.hadoop.hbase.io.encoding.DiffKeyDeltaEncoder"), FAST_DIFF(4, "org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder"), // id 5 is reserved for the COPY_KEY algorithm for benchmarking // COPY_KEY(5, "org.apache.hadoop.hbase.io.encoding.CopyKeyDataBlockEncoder"), // PREFIX_TREE(6, "org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec"), ROW_INDEX_V1(7, "org.apache.hadoop.hbase.io.encoding.RowIndexCodecV1"); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
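The failure mode can be sketched with a simplified enum mirroring the id gaps above: ids 1, 5, and 6 no longer map to an encoder, so a lookup by id 6 must throw, and a chaos action should only sample from the ids that still exist. This is a simplified stand-in, not the real DataBlockEncoding class:

```java
import java.util.HashMap;
import java.util.Map;

public class EncodingIds {
    // Simplified mirror of the surviving ids in DataBlockEncoding.
    enum Encoding {
        NONE(0), PREFIX(2), DIFF(3), FAST_DIFF(4), ROW_INDEX_V1(7);
        final int id;
        Encoding(int id) { this.id = id; }
    }

    static final Map<Integer, Encoding> BY_ID = new HashMap<>();
    static {
        for (Encoding e : Encoding.values()) BY_ID.put(e.id, e);
    }

    // Mirrors getEncodingById behavior: removed ids (e.g. 6 = PREFIX_TREE) throw.
    static Encoding byId(int id) {
        Encoding e = BY_ID.get(id);
        if (e == null) {
            throw new IllegalArgumentException(
                "There is no data block encoder for given id '" + id + "'");
        }
        return e;
    }
}
```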
[jira] [Created] (HBASE-24103) [Flakey Tests] TestSnapshotScannerHDFSAclController
Yi Mei created HBASE-24103: -- Summary: [Flakey Tests] TestSnapshotScannerHDFSAclController Key: HBASE-24103 URL: https://issues.apache.org/jira/browse/HBASE-24103 Project: HBase Issue Type: Bug Reporter: Yi Mei According to HBASE-24097, TestSnapshotScannerHDFSAclController is still flakey: https://builds.apache.org/job/HBase-Flaky-Tests/job/branch-2/5950/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23824) TestSnapshotScannerHDFSAclController is flakey
Yi Mei created HBASE-23824: -- Summary: TestSnapshotScannerHDFSAclController is flakey Key: HBASE-23824 URL: https://issues.apache.org/jira/browse/HBASE-23824 Project: HBase Issue Type: Bug Reporter: Yi Mei Assignee: Yi Mei See [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests/job/branch-2/lastSuccessfulBuild/artifact/dashboard.html] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23555) TestQuotaThrottle is broken
Yi Mei created HBASE-23555: -- Summary: TestQuotaThrottle is broken Key: HBASE-23555 URL: https://issues.apache.org/jira/browse/HBASE-23555 Project: HBase Issue Type: Bug Reporter: Yi Mei Assignee: Yi Mei TestQuotaThrottle is broken now, and it is annotated with @Ignore because it's flakey, so the Jenkins runs cannot report it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23553) Snapshot referenced data files are deleted in some case
Yi Mei created HBASE-23553: -- Summary: Snapshot referenced data files are deleted in some case Key: HBASE-23553 URL: https://issues.apache.org/jira/browse/HBASE-23553 Project: HBase Issue Type: Bug Reporter: Yi Mei We scanned a snapshot in our cluster and got the following exception: {code:java} java.io.IOException: java.io.IOException: java.io.FileNotFoundException: Unable to open link: org.apache.hadoop.hbase.io.HFileLink locations=[hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/.tmp/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f, hdfs://tjwqsrv-galaxy98/hbase/tjwqsrv-galaxy98/archive/data/default/galaxy_online_fds_object_table/06dd90d8540b56343859b63a6134450c/A/4a6cf05f419a9f61059cb05a962f] at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:867) at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:778) at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:749) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5306) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5271) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5243) at org.apache.hadoop.hbase.client.ClientSideRegionScanner.(ClientSideRegionScanner.java:72) at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:239) at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:150) at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:552) at {code} I checked the namenode logs and found that this file was deleted by the HBase cleaner although a snapshot still referenced it. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-23042. Fix Version/s: 2.2.3 2.3.0 3.0.0 Resolution: Fixed > Parameters are incorrect in procedures jsp > -- > > Key: HBASE-23042 > URL: https://issues.apache.org/jira/browse/HBASE-23042 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3 > > Attachments: 1.png > > > In the procedures jsp, the parameters such as table name and region start/end keys are > wrong; please see the first picture. > This is because all byte params are encoded in base64, which is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
[ https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-23170. Fix Version/s: 2.2.3 2.3.0 3.0.0 Resolution: Fixed > Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME > - > > Key: HBASE-23170 > URL: https://issues.apache.org/jira/browse/HBASE-23170 > Project: HBase > Issue Type: Improvement >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.3 > > > Admin#getRegionServers returns the server names. > ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and > metrics, while the metrics are not useful for Admin#getRegionServers method. > Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] > for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
Yi Mei created HBASE-23170: -- Summary: Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME Key: HBASE-23170 URL: https://issues.apache.org/jira/browse/HBASE-23170 Project: HBase Issue Type: Improvement Reporter: Yi Mei Assignee: Yi Mei Admin#getRegionServers returns the server names. ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and metrics, while the metrics are not useful for Admin#getRegionServers method. Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] for more details. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23042) Parameters are incorrect in procedures jsp
Yi Mei created HBASE-23042: -- Summary: Parameters are incorrect in procedures jsp Key: HBASE-23042 URL: https://issues.apache.org/jira/browse/HBASE-23042 Project: HBase Issue Type: Bug Reporter: Yi Mei In the procedures jsp, the parameters such as table name and region start/end keys are wrong; please see the first picture. This is because all byte params are encoded in base64, which is confusing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23039) HBCK2 bypass -r command does not work
Yi Mei created HBASE-23039: -- Summary: HBCK2 bypass -r command does not work Key: HBASE-23039 URL: https://issues.apache.org/jira/browse/HBASE-23039 Project: HBase Issue Type: Bug Reporter: Yi Mei The recursiveFlag is wrong: it is read from the override option instead of the recursive option: {code:java} boolean overrideFlag = commandLine.hasOption(override.getOpt()); boolean recursiveFlag = commandLine.hasOption(override.getOpt()); {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
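The copy-paste bug is that both booleans consult the override option's key. A stand-in sketch using plain string flags rather than the real commons-cli Options (flag names `-o`/`-r` are assumptions for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class BypassFlags {
    // Each flag must be looked up under its own key; the reported bug
    // copied the override lookup into the recursive lookup.
    static boolean[] parse(String[] args) {
        List<String> a = Arrays.asList(args);
        boolean overrideFlag = a.contains("-o");
        boolean recursiveFlag = a.contains("-r"); // fix: was checking the override key again
        return new boolean[] { overrideFlag, recursiveFlag };
    }
}
```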
[jira] [Resolved] (HBASE-22878) Show table throttle quotas in table jsp
[ https://issues.apache.org/jira/browse/HBASE-22878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22878. Resolution: Fixed > Show table throttle quotas in table jsp > --- > > Key: HBASE-22878 > URL: https://issues.apache.org/jira/browse/HBASE-22878 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.1 > > Attachments: 1.png, 2.png > > > Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
[ https://issues.apache.org/jira/browse/HBASE-22946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22946. Resolution: Fixed > Fix TableNotFound when grant/revoke if AccessController is not loaded > - > > Key: HBASE-22946 > URL: https://issues.apache.org/jira/browse/HBASE-22946 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2 > > > When doing grant, revoke..., a TableNotFoundException will occur if > AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (HBASE-22946) Fix TableNotFound when grant/revoke if AccessController is not loaded
Yi Mei created HBASE-22946: -- Summary: Fix TableNotFound when grant/revoke if AccessController is not loaded Key: HBASE-22946 URL: https://issues.apache.org/jira/browse/HBASE-22946 Project: HBase Issue Type: Sub-task Reporter: Yi Mei When doing grant, revoke..., a TableNotFoundException will occur if AccessController is not configured. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (HBASE-22945) Show quota infos in master UI
Yi Mei created HBASE-22945: -- Summary: Show quota infos in master UI Key: HBASE-22945 URL: https://issues.apache.org/jira/browse/HBASE-22945 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Add a page in the master UI to show the following quota information: whether rpc throttle is enabled; whether exceed throttle quota is enabled; namespace throttles; user throttles. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Resolved] (HBASE-22879) user_permission command failed to show global permission
[ https://issues.apache.org/jira/browse/HBASE-22879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22879. Resolution: Fixed > user_permission command failed to show global permission > > > Key: HBASE-22879 > URL: https://issues.apache.org/jira/browse/HBASE-22879 > Project: HBase > Issue Type: Bug >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2 > > > When using the user_permission command to show global permissions, the following > exception occurred: > {code:java} > hbase(main):001:0> user_permission > User Namespace,Table,Family,Qualifier:Permission > ERROR: failed to coerce > org.apache.hadoop.hbase.security.access.GlobalPermission to > org.apache.hadoop.hbase.security.access.TablePermission > For usage try 'help "user_permission"' > Took 1.1249 seconds > hbase(main):002:0> > {code} -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Created] (HBASE-22879) user_permission command failed to show global permission
Yi Mei created HBASE-22879: -- Summary: user_permission command failed to show global permission Key: HBASE-22879 URL: https://issues.apache.org/jira/browse/HBASE-22879 Project: HBase Issue Type: Bug Reporter: Yi Mei When using the user_permission command to show global permissions, the following exception occurred: {code:java} hbase(main):001:0> user_permission User Namespace,Table,Family,Qualifier:Permission ERROR: failed to coerce org.apache.hadoop.hbase.security.access.GlobalPermission to org.apache.hadoop.hbase.security.access.TablePermission For usage try 'help "user_permission"' Took 1.1249 seconds hbase(main):002:0> {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22878) Show table throttle quotas in table jsp
Yi Mei created HBASE-22878: -- Summary: Show table throttle quotas in table jsp Key: HBASE-22878 URL: https://issues.apache.org/jira/browse/HBASE-22878 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Currently, table jsp shows space quotas but has no throttle quotas. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (HBASE-22842) Tmp directory should not be deleted when master restart used for user scan snapshot feature
[ https://issues.apache.org/jira/browse/HBASE-22842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22842. Resolution: Fixed > Tmp directory should not be deleted when master restart used for user scan > snapshot feature > --- > > Key: HBASE-22842 > URL: https://issues.apache.org/jira/browse/HBASE-22842 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > When creating a table, table directories are first created in the tmp directory and > then moved to the data directory, so HDFS ACLs are set on the following tmp > directories so that they can be inherited: > {code:java} > {hbase-rootdir}/.tmp/data > {hbase-rootdir}/.tmp/data/{namespace} > {hbase-rootdir}/.tmp/data/{namespace}/{table} > {code} > When the master restarts, it deletes the tmp directory, which breaks this > feature. > -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22842) Tmp directory should not be deleted when master restart used for user scan snapshot feature
Yi Mei created HBASE-22842: -- Summary: Tmp directory should not be deleted when master restart used for user scan snapshot feature Key: HBASE-22842 URL: https://issues.apache.org/jira/browse/HBASE-22842 Project: HBase Issue Type: Sub-task Reporter: Yi Mei When creating a table, table directories are first created in the tmp directory and then moved to the data directory, so HDFS ACLs are set on the following tmp directories so that they can be inherited: {hbase-rootdir}/.tmp/data {hbase-rootdir}/.tmp/data/{namespace} {hbase-rootdir}/.tmp/data/{namespace}/{table} When the master restarts, it deletes the tmp directory, which breaks this feature. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
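The three tmp directories that must keep their ACLs across a master restart can be sketched as simple path construction (helper name is hypothetical):

```java
public class TmpAclDirs {
    // Build the tmp data directories that carry inheritable HDFS ACLs
    // for a given namespace and table, rooted at the hbase rootdir.
    static String[] tmpDirs(String rootDir, String ns, String table) {
        String base = rootDir + "/.tmp/data";
        return new String[] { base, base + "/" + ns, base + "/" + ns + "/" + table };
    }
}
```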
[jira] [Resolved] (HBASE-22776) Rename config names in user scan snapshot feature
[ https://issues.apache.org/jira/browse/HBASE-22776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22776. Resolution: Fixed > Rename config names in user scan snapshot feature > - > > Key: HBASE-22776 > URL: https://issues.apache.org/jira/browse/HBASE-22776 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > As discussed in HBASE-22578 and HBASE-22580, the config names are not well chosen. > Also make the SnapshotScannerHDFSAclCleaner load automatically if this feature > is enabled. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22776) Rename config names in user scan snapshot feature
Yi Mei created HBASE-22776: -- Summary: Rename config names in user scan snapshot feature Key: HBASE-22776 URL: https://issues.apache.org/jira/browse/HBASE-22776 Project: HBase Issue Type: Sub-task Reporter: Yi Mei As discussed in HBASE-22578 and HBASE-22580, the config names are not well chosen. Also make the SnapshotScannerHDFSAclCleaner load automatically if this feature is enabled. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Resolved] (HBASE-22580) Add a table attribute to make user scan snapshot feature configurable for table
[ https://issues.apache.org/jira/browse/HBASE-22580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22580. Resolution: Fixed > Add a table attribute to make user scan snapshot feature configurable for > table > --- > > Key: HBASE-22580 > URL: https://issues.apache.org/jira/browse/HBASE-22580 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > If a cluster enables the user scan snapshot feature, it works for all tables. > This feature makes some operations such as grant, revoke and snapshot slower, > and some tables don't use snapshot scans at all. So add a table attribute to > make the feature configurable at the table level: it is disabled by default, > and anyone who wants to use it must first enable the attribute on the > specific table. -- This message was sent by Atlassian JIRA (v7.6.14#76016)
[jira] [Created] (HBASE-22625) document user scan snapshot feature
Yi Mei created HBASE-22625: -- Summary: document user scan snapshot feature Key: HBASE-22625 URL: https://issues.apache.org/jira/browse/HBASE-22625 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Add the design doc in dev-support/design-docs and describe the feature in the reference guide. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-21995) Add a coprocessor to set HDFS ACL for hbase granted user
[ https://issues.apache.org/jira/browse/HBASE-21995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-21995. Resolution: Fixed > Add a coprocessor to set HDFS ACL for hbase granted user > > > Key: HBASE-21995 > URL: https://issues.apache.org/jira/browse/HBASE-21995 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > To give users with HBase grants the ability to scan table snapshots, use HDFS > ACLs to set user read permission on hfiles. > The basic implementation is: > 1. For public directories such as 'data' and 'archive', set other users' > permission to '--x' to make everyone have the permission to access the > directory. > 2. For namespace or table directories such as 'data/ns/table', > 'archive/ns/table' and '.hbase-snapshot/snapshotName', set user 'r-x' acl and > default 'r-x' acl when the following operations happen: > grant to namespace or table / revoke from namespace or table / snapshot table > > For more details, please reference the design doc: > https://docs.google.com/document/d/1D2iAdbrW5CcKc2SthJBXA1n2tTMTftuVaFtxbOWFuqM/edit#heading=h.uwo33s7kz427 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
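The two-tier scheme in the design above can be sketched as follows; the helper names and the exact ACL-entry string format are assumptions for illustration, not the coprocessor's actual API:

```java
public class HdfsAclScheme {
    // Public parent directories (e.g. 'data', 'archive'): everyone may
    // traverse ('--x') so paths underneath remain reachable, but not list/read.
    static String otherPermFor(boolean isPublicParentDir) {
        return isPublicParentDir ? "--x" : "---";
    }

    // Granted users get read+traverse on namespace/table/snapshot directories,
    // plus a matching default ACL so new child files inherit the permission.
    static String[] userAclEntries(String user) {
        return new String[] {
            "user:" + user + ":r-x",
            "default:user:" + user + ":r-x"
        };
    }
}
```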
[jira] [Created] (HBASE-22580) Add a table attribute to make user scan snapshot feature configurable for table
Yi Mei created HBASE-22580: -- Summary: Add a table attribute to make user scan snapshot feature configurable for table Key: HBASE-22580 URL: https://issues.apache.org/jira/browse/HBASE-22580 Project: HBase Issue Type: Sub-task Reporter: Yi Mei If a cluster enables the user scan snapshot feature, it works for all tables. This feature makes some operations such as grant, revoke and snapshot slower, and some tables don't use snapshot scans at all. So add a table attribute to make the feature configurable at the table level: it is disabled by default, and anyone who wants to use it must first enable the attribute on the specific table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
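A per-table toggle of this kind amounts to a default-false lookup of a table attribute. The attribute key below is hypothetical, chosen only for this sketch:

```java
import java.util.Map;

public class ScanSnapshotToggle {
    // Hypothetical attribute key for this sketch (not necessarily the real one).
    static final String ATTR = "acl.sync.to.hdfs.enable";

    // The feature is off unless the table explicitly sets the attribute to "true".
    static boolean enabledFor(Map<String, String> tableAttributes) {
        return Boolean.parseBoolean(tableAttributes.getOrDefault(ATTR, "false"));
    }
}
```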
[jira] [Created] (HBASE-22579) Add a tool to sync HBase permission and HDFS acls when enable user scan snapshot feature
Yi Mei created HBASE-22579: -- Summary: Add a tool to sync HBase permission and HDFS acls when enable user scan snapshot feature Key: HBASE-22579 URL: https://issues.apache.org/jira/browse/HBASE-22579 Project: HBase Issue Type: Sub-task Reporter: Yi Mei When a cluster enables the user scan snapshot feature, a tool is needed to set HDFS ACLs for HBase users who have been granted read permission. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22578) HFileCleaner should not delete empty ns/table directories used for user scan snapshot feature
Yi Mei created HBASE-22578: -- Summary: HFileCleaner should not delete empty ns/table directories used for user scan snapshot feature Key: HBASE-22578 URL: https://issues.apache.org/jira/browse/HBASE-22578 Project: HBase Issue Type: Sub-task Reporter: Yi Mei HBASE-21995 adds a coprocessor that sets HDFS ACLs for HBase users with HBase read permission, so that those users can scan snapshots directly. It creates empty namespace and table directories under the archive directory and sets HDFS ACLs on them after a namespace or table is created; in this way, users can read files under the archive directory. But HFileCleaner deletes empty directories, which breaks this feature. So when the user scan snapshot feature is enabled, HFileCleaner should not delete empty ns/table directories. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HBASE-22513) Admin#getQuota does not work correctly if exceedThrottleQuota is set
[ https://issues.apache.org/jira/browse/HBASE-22513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-22513. Resolution: Fixed > Admin#getQuota does not work correctly if exceedThrottleQuota is set > > > Key: HBASE-22513 > URL: https://issues.apache.org/jira/browse/HBASE-22513 > Project: HBase > Issue Type: Bug > Components: Quotas >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > Fix For: 3.0.0, 2.2.0, 2.3.0 > > > Admin#getQuota get nothing if exceedThrottleQuota is set, because > exceedThrottleQuota is a special row key in quota table and can not be parsed > to a QuotaSettings. > The shell command results are as follows: > {code:java} > hbase(main):018:0> list_quotas > OWNER QUOTAS > 0 row(s) > Took 0.0342 seconds > hbase(main):019:0> scan 'hbase:quota' > ROW COLUMN+CELL > exceedThrottleQuota column=q:s, timestamp=1559199136449, value=\x00 > n.ang column=q:s, timestamp=1559122413584, > value=PBUF\x12\x08*\x06\x08\x04\x10" \x02 > n.ns1 column=q:s, timestamp=1559203286943, > value=PBUF\x12\x10\x1A\x06\x08\x04\x10\x05 \x02*\x06\x08\x04\x10\x05 > \x02\x1A\x0A\x08\x > 80\x80\x80\x80\x80\xC0\x0C\x10\x03 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22513) Admin#getQuota does not work correctly if exceedThrottleQuota is set
Yi Mei created HBASE-22513: -- Summary: Admin#getQuota does not work correctly if exceedThrottleQuota is set Key: HBASE-22513 URL: https://issues.apache.org/jira/browse/HBASE-22513 Project: HBase Issue Type: Bug Reporter: Yi Mei Admin#getQuota returns nothing if exceedThrottleQuota is set, because exceedThrottleQuota is a special row key in the quota table and cannot be parsed into a QuotaSettings. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22271) Implement grant/revoke/delete table acls/delete namespace acls in Procedure
Yi Mei created HBASE-22271: -- Summary: Implement grant/revoke/delete table acls/delete namespace acls in Procedure Key: HBASE-22271 URL: https://issues.apache.org/jira/browse/HBASE-22271 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Use UpdatePermissionProcedure to implement grant/revoke, and RemovePermissionProcedure to delete namespace or table ACLs when the namespace or table is deleted. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22208) Create auth manager and expose it in RS
Yi Mei created HBASE-22208: -- Summary: Create auth manager and expose it in RS Key: HBASE-22208 URL: https://issues.apache.org/jira/browse/HBASE-22208 Project: HBase Issue Type: Sub-task Reporter: Yi Mei In the HBase access control service, the auth manager caches all global, namespace and table permissions, and performs authorization checks against a given user's assigned permissions. The auth manager instance is created when the master, RS and regions load AccessController, and its cache is refreshed when the acl znode changes. We can instead create the auth manager when the master and RS start and expose it, so that a procedure can refresh its cache rather than watching ZK. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22117) Move hasPermission/checkPermissions from region server to master
Yi Mei created HBASE-22117: -- Summary: Move hasPermission/checkPermissions from region server to master Key: HBASE-22117 URL: https://issues.apache.org/jira/browse/HBASE-22117 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Assignee: Yi Mei Create a sub-task to move acl methods: hasPermission/checkPermissions from regionserver to master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22085) User can get self permissions without ADMIN global/namespace/table privilege
Yi Mei created HBASE-22085: -- Summary: User can get self permissions without ADMIN global/namespace/table privilege Key: HBASE-22085 URL: https://issues.apache.org/jira/browse/HBASE-22085 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Currently, getting user permissions requires the ADMIN privilege at the global/namespace/table level. It is reasonable for a user to get their own permissions without any special privilege, while the ADMIN privilege is still required to get other users' permissions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22084) Rename AccessControlLists to AccessControlStorage
Yi Mei created HBASE-22084: -- Summary: Rename AccessControlLists to AccessControlStorage Key: HBASE-22084 URL: https://issues.apache.org/jira/browse/HBASE-22084 Project: HBase Issue Type: Sub-task Reporter: Yi Mei AccessControlLists is a utility class that handles get/put/delete operations on the hbase acl table. The class name is confusing, so shall we rename it to AccessControlStorage? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-22015) UserPermission should be annotated as InterfaceAudience.Public
Yi Mei created HBASE-22015: -- Summary: UserPermission should be annotated as InterfaceAudience.Public Key: HBASE-22015 URL: https://issues.apache.org/jira/browse/HBASE-22015 Project: HBase Issue Type: Sub-task Reporter: Yi Mei HBASE-11318 marked UserPermission as InterfaceAudience.Private. HBASE-11452 introduced AccessControlClient#getUserPermissions, which returns a list of UserPermission even though the UserPermission class is Private. I ran into the same problem when moving the getUserPermissions method to an Admin API in HBASE-21911; otherwise the getUserPermissions API may have to be {code:java} Map> getUserPermissions{code} So shall we mark UserPermission as Public? Discussions are welcomed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21995) Add a coprocessor to set HDFS ACL for hbase granted user
Yi Mei created HBASE-21995: -- Summary: Add a coprocessor to set HDFS ACL for hbase granted user Key: HBASE-21995 URL: https://issues.apache.org/jira/browse/HBASE-21995 Project: HBase Issue Type: Sub-task Reporter: Yi Mei To give users who have been granted HBase read permission access to scan table snapshots, use HDFS ACLs to set user read permission on the hfiles. The basic implementation is: 1. For public directories such as 'data' and 'archive', set other users' permission to '--x' so that everyone can traverse the directory. 2. For namespace or table directories such as 'data/ns/table', 'archive/ns/table' and '.hbase-snapshot/snapshotName', set an 'r-x' user ACL and an 'r-x' default ACL when the following operations happen: grant to namespace or table / revoke from namespace or table / snapshot table For more details, please see the design doc: https://docs.google.com/document/d/1D2iAdbrW5CcKc2SthJBXA1n2tTMTftuVaFtxbOWFuqM/edit#heading=h.uwo33s7kz427 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21977) Skip replay WAL and update seqid when open regions restored from snapshot
Yi Mei created HBASE-21977: -- Summary: Skip replay WAL and update seqid when open regions restored from snapshot Key: HBASE-21977 URL: https://issues.apache.org/jira/browse/HBASE-21977 Project: HBase Issue Type: Sub-task Reporter: Yi Mei TableSnapshotScanner restores a snapshot and then opens the restored regions. When opening these regions, we can skip replaying the WAL and updating the seqid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21974) Change Admin#grant/revoke parameter from UserPermission to user and Permission
Yi Mei created HBASE-21974: -- Summary: Change Admin#grant/revoke parameter from UserPermission to user and Permission Key: HBASE-21974 URL: https://issues.apache.org/jira/browse/HBASE-21974 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Assignee: Yi Mei HBASE-21739 introduced Admin#grant and Admin#revoke; the parameter of the two methods is a UserPermission, which is annotated as InterfaceAudience.Private. It needs to be changed to a String user name plus a Permission (InterfaceAudience.Public). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21958) Make CLUSTER scope quota work right when use rs group
Yi Mei created HBASE-21958: -- Summary: Make CLUSTER scope quota work right when use rs group Key: HBASE-21958 URL: https://issues.apache.org/jira/browse/HBASE-21958 Project: HBase Issue Type: Sub-task Reporter: Yi Mei HBASE-21820 implements CLUSTER scope quotas in a simple way, using [ClusterLimit / RSNum] to divide the cluster limit into machine limits. But when the rs group feature is in use, a namespace's tables live on a subset of the region servers, so the cluster limit should be shared by the rs group, not by all region servers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21911) Move getUserPermissions/hasPermission/checkPermissions from regionserver to master
Yi Mei created HBASE-21911: -- Summary: Move getUserPermissions/hasPermission/checkPermissions from regionserver to master Key: HBASE-21911 URL: https://issues.apache.org/jira/browse/HBASE-21911 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Create a sub-task to move acl methods: getUserPermissions/ hasPermission/ checkPermissions from regionserver to master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21820) Implement CLUSTER quota scope
Yi Mei created HBASE-21820: -- Summary: Implement CLUSTER quota scope Key: HBASE-21820 URL: https://issues.apache.org/jira/browse/HBASE-21820 Project: HBase Issue Type: Sub-task Reporter: Yi Mei There are two kinds of quota scope: CLUSTER and MACHINE. A CLUSTER quota means the quota limit is shared by all machines of the cluster; a MACHINE quota means the quota limit applies to a single region server. Currently, all set-quota operations use MACHINE scope by default, and CLUSTER scope has not been implemented, so open this issue to implement the CLUSTER quota scope. To split a cluster quota limit across machines, the basic idea is: for user, namespace, user-over-namespace and region server quotas, use [ClusterLimit / RSNum] as the machine limit; for table and user-over-table quotas, use [ClusterLimit / TotalTableRegionNum * MachineTableRegionNum] as the machine limit. Suggestions are welcomed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
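The split arithmetic above is easy to sanity-check in isolation. A minimal sketch of the two formulas (class and method names are illustrative, not the actual HBase code):

```java
// Illustrative sketch of the CLUSTER -> MACHINE quota split described above.
public class ClusterQuotaSplit {

  // User, namespace, user-over-namespace and region server quotas:
  // divide the cluster limit evenly across all region servers.
  public static long machineLimitByRsNum(long clusterLimit, int rsNum) {
    return clusterLimit / rsNum;
  }

  // Table and user-over-table quotas: weight each server's share by
  // how many of the table's regions it hosts.
  public static long machineLimitByRegions(long clusterLimit,
      int totalTableRegionNum, int machineTableRegionNum) {
    return clusterLimit / totalTableRegionNum * machineTableRegionNum;
  }
}
```

For example, a 1000 req/sec cluster limit shared by 4 region servers gives each server 250 req/sec, while a table with 10 regions, 3 of which sit on one server, gives that server 300 req/sec.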
[jira] [Created] (HBASE-21783) Support allow exceed user/table/ns rpc throttle quota if region server has available quota
Yi Mei created HBASE-21783: -- Summary: Support allow exceed user/table/ns rpc throttle quota if region server has available quota Key: HBASE-21783 URL: https://issues.apache.org/jira/browse/HBASE-21783 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Currently, all types of rpc throttle quota (including region server, namespace, table and user quotas) are hard limits, which means requests are throttled as soon as they exceed the configured amount. In some situations a user has used up their own quota while the region server still has available quota because other users are not consuming at the same time; in this case we can allow the user to consume the additional quota. So add a configuration named ALLOW_EXCEED to meet this requirement when setting the region server quota. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
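The proposal above amounts to a two-level check: the user quota becomes a soft limit while the region server quota stays hard. A minimal sketch under that assumption (names are hypothetical, not HBase's implementation):

```java
// Hypothetical sketch of the ALLOW_EXCEED idea: the user quota may be
// exceeded as long as the region server's overall quota has headroom.
public class SoftQuotaLimiter {
  private final long userLimit;
  private final long rsLimit;
  private long userConsumed;
  private long rsConsumed;

  public SoftQuotaLimiter(long userLimit, long rsLimit) {
    this.userLimit = userLimit;
    this.rsLimit = rsLimit;
  }

  // Returns true if the request is admitted, false if throttled.
  public boolean tryConsume(long amount, boolean allowExceed) {
    if (rsConsumed + amount > rsLimit) {
      return false; // region server quota stays a hard limit
    }
    if (userConsumed + amount > userLimit && !allowExceed) {
      return false; // user quota is hard unless ALLOW_EXCEED is set
    }
    userConsumed += amount;
    rsConsumed += amount;
    return true;
  }
}
```

With ALLOW_EXCEED off this behaves like today's hard limits; with it on, a request over the user limit is only rejected once the region server quota itself is exhausted.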
[jira] [Resolved] (HBASE-21603) Move access control service from regionserver to master
[ https://issues.apache.org/jira/browse/HBASE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Mei resolved HBASE-21603. Resolution: Won't Fix Split the task into several tasks, see HBASE-21739. > Move access control service from regionserver to master > --- > > Key: HBASE-21603 > URL: https://issues.apache.org/jira/browse/HBASE-21603 > Project: HBase > Issue Type: Sub-task >Reporter: Yi Mei >Assignee: Yi Mei >Priority: Major > > Create a sub task to move access control service from regionserver to master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21739) Move grant/revoke from regionserver to master
Yi Mei created HBASE-21739: -- Summary: Move grant/revoke from regionserver to master Key: HBASE-21739 URL: https://issues.apache.org/jira/browse/HBASE-21739 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Assignee: Yi Mei Create a sub-task to move grant/revoke from regionserver to master. Other access control operations(getUserPermissions/ checkPermissions/ hasPermission) will be moved in another sub-task. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21733) SnapshotQuotaObserverChore should only fetch space quotas
Yi Mei created HBASE-21733: -- Summary: SnapshotQuotaObserverChore should only fetch space quotas Key: HBASE-21733 URL: https://issues.apache.org/jira/browse/HBASE-21733 Project: HBase Issue Type: Bug Reporter: Yi Mei In the SnapshotQuotaObserverChore.getSnapshotsFromTables method, space quotas are fetched with the following filter: {code:java} QuotaFilter filter = new QuotaFilter(); filter.addTypeFilter(QuotaType.SPACE); {code} but the QuotaType filter hasn't been implemented. So if there are throttle quotas in the quota table, it hits an exception like: {code:java} java.lang.IllegalStateException: Expected only one of namespace and tablename to be null at org.apache.hadoop.hbase.quotas.SnapshotQuotaObserverChore.getSnapshotsToComputeSize(SnapshotQuotaObserverChore.java:137) at org.apache.hadoop.hbase.quotas.TestSnapshotQuotaObserverChore.testSnapshotsFromNamespaces(TestSnapshotQuotaObserverChore.java:184) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21713) Support set region server throttle quota
Yi Mei created HBASE-21713: -- Summary: Support set region server throttle quota Key: HBASE-21713 URL: https://issues.apache.org/jira/browse/HBASE-21713 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Support setting a region server throttle quota, which represents the read/write capacity of a region server. Use the following command to set the RS quota: set_quota TYPE => THROTTLE, REGIONSERVER => 'all', THROTTLE_TYPE => WRITE, LIMIT => '2req/sec' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21694) Add append_peer_exclude_tableCFs and remove_peer_exclude_tableCFs shell commands
Yi Mei created HBASE-21694: -- Summary: Add append_peer_exclude_tableCFs and remove_peer_exclude_tableCFs shell commands Key: HBASE-21694 URL: https://issues.apache.org/jira/browse/HBASE-21694 Project: HBase Issue Type: Improvement Reporter: Yi Mei Assignee: Yi Mei To add or remove table-cfs in a peer's exclude-table-cfs list, the current way is to use set_peer_exclude_tableCFs, but one has to first copy all of the peer's exclude-table-cfs and then modify the list, which is error-prone and time-consuming. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21662) Add append_peer_exclude_namespaces and remove_peer_exclude_namespaces shell commands
Yi Mei created HBASE-21662: -- Summary: Add append_peer_exclude_namespaces and remove_peer_exclude_namespaces shell commands Key: HBASE-21662 URL: https://issues.apache.org/jira/browse/HBASE-21662 Project: HBase Issue Type: Task Reporter: Yi Mei To add or remove a namespace in a peer's exclude-namespace list, the current way is to use set_peer_exclude_namespaces, but one has to first copy all of the peer's exclude namespaces and then modify the list, which is error-prone and time-consuming. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21603) Move access control service from regionserver to master
Yi Mei created HBASE-21603: -- Summary: Move access control service from regionserver to master Key: HBASE-21603 URL: https://issues.apache.org/jira/browse/HBASE-21603 Project: HBase Issue Type: Sub-task Reporter: Yi Mei Create a sub task to move access control service from regionserver to master. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21602) Procedure v2 access control
Yi Mei created HBASE-21602: -- Summary: Procedure v2 access control Key: HBASE-21602 URL: https://issues.apache.org/jira/browse/HBASE-21602 Project: HBase Issue Type: Task Reporter: Yi Mei Now the access control service (grant and revoke) is handled by the regionserver hosting the acl region. The grant/revoke process is: the client calls grant, the regionserver with the acl region performs the check, puts the permission row into the acl table, and also writes it to the ZooKeeper acl node (/hbase/acl). Each regionserver watches the acl znode and updates its local acl cache when the node's children change. Create this issue to use the procedure v2 framework and reduce the ZK dependency. Any suggestions are welcomed. Thanks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21578) Fix wrong throttling exception for capacity unit
Yi Mei created HBASE-21578: -- Summary: Fix wrong throttling exception for capacity unit Key: HBASE-21578 URL: https://issues.apache.org/jira/browse/HBASE-21578 Project: HBase Issue Type: Bug Reporter: Yi Mei HBASE-21034 provides a new throttle type, capacity unit, but the throttling exception it produces is confusing: {noformat} 2018-12-11 14:38:41,503 DEBUG [Time-limited test] client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, started=0 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: write size limit exceeded - wait 10sec at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:106) at org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwWriteSizeExceeded(RpcThrottlingException.java:96) at org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:179){noformat} The exception message needs to be made clearer. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21300) Fix the wrong reference file path when restoring snapshots for tables with MOB columns
Yi Mei created HBASE-21300: -- Summary: Fix the wrong reference file path when restoring snapshots for tables with MOB columns Key: HBASE-21300 URL: https://issues.apache.org/jira/browse/HBASE-21300 Project: HBase Issue Type: Bug Affects Versions: 3.0.0, 2.2.0 Reporter: Yi Mei When restoring snapshots for tables with MOB columns, the reference files for mob region are created under hbase root dir, rather than restore dir. Some of the mob reference file paths are as follows: {quote}hdfs:/7ae0d109-3ca4-d0e7-7250-62ed234ab247/mobdir/data/ns_testMob/testMob hdfs:/7ae0d109-3ca4-d0e7-7250-62ed234ab247/mobdir/data/ns_testMob/testMob/057a856eb65753c6e6bdb168ba58a0b2 hdfs:/7ae0d109-3ca4-d0e7-7250-62ed234ab247/mobdir/data/ns_testMob/testMob/057a856eb65753c6e6bdb168ba58a0b2/A hdfs:/7ae0d109-3ca4-d0e7-7250-62ed234ab247/mobdir/data/ns_testMob/testMob/057a856eb65753c6e6bdb168ba58a0b2/A/d41d8cd98f00b204e9800998ecf8427e201810120fc8e2446f174598a7280a81b1134cee hdfs:/7ae0d109-3ca4-d0e7-7250-62ed234ab247/mobdir/data/ns_testMob/testMob/057a856eb65753c6e6bdb168ba58a0b2/A/ns_testMob=testMob=057a856eb65753c6e6bdb168ba58a0b2-d41d8cd98f00b204e9800998ecf8427e201810120fc8e2446f174598a7280a81b1134cee {quote} The restore dir files are as follows: {quote}hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob/testMob hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob/testMob/ecdf66f0d8c09a816faf37336ad262e1 hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob/testMob/ecdf66f0d8c09a816faf37336ad262e1/.regioninfo 
hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob/testMob/ecdf66f0d8c09a816faf37336ad262e1/A hdfs://hbase/.tmpdir-to-restore-snapshot/856e06fa-e018-4e95-9647-2cfbd5161e7e/data/ns_testMob/testMob/ecdf66f0d8c09a816faf37336ad262e1/A/ns_testMob=testMob=ecdf66f0d8c09a816faf37336ad262e1-7208172df03b46518370643aa28ffd05 {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21159) Add shell command to switch throttle on or off
Yi Mei created HBASE-21159: -- Summary: Add shell command to switch throttle on or off Key: HBASE-21159 URL: https://issues.apache.org/jira/browse/HBASE-21159 Project: HBase Issue Type: Sub-task Affects Versions: 3.0.0, 2.2.0 Reporter: Yi Mei Add a shell command to switch throttling on or off. When throttling is off, HBase will not throttle any request. This feature may be useful in production environments. We can use the following commands to switch throttling: throttle_switch true / throttle_switch false -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21064) Support set region server quota and allow exceed user/table/ns quota when rs has available quota
Yi Mei created HBASE-21064: -- Summary: Support set region server quota and allow exceed user/table/ns quota when rs has available quota Key: HBASE-21064 URL: https://issues.apache.org/jira/browse/HBASE-21064 Project: HBase Issue Type: Task Affects Versions: 3.0.0, 2.2.0 Reporter: Yi Mei Assignee: Yi Mei Support setting a region server quota, which represents the read/write capacity of a region server, in the HBase shell. Also add a switch named "ALLOW_EXCEED" to allow exceeding the user/table/ns quota when the region server has available quota. Use the following command to set the RS quota: set_quota TYPE => THROTTLE, REGIONSERVER => 'rs', LIMIT => '10req/sec', ALLOW_EXCEED => true -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HBASE-21034) Add new throttle type: read/write capacity unit
Yi Mei created HBASE-21034: -- Summary: Add new throttle type: read/write capacity unit Key: HBASE-21034 URL: https://issues.apache.org/jira/browse/HBASE-21034 Project: HBase Issue Type: Task Affects Versions: 3.0.0, 2.2.0 Reporter: Yi Mei Add a new throttle type: read/write capacity units, like DynamoDB. One read capacity unit represents one read of up to 1 KB of data per time unit; if the data size is larger than 1 KB, the read consumes additional read capacity units. One write capacity unit represents one write of an item up to 1 KB in size per time unit; if the data size is larger than 1 KB, the write consumes additional write capacity units. For example, 100 read capacity units per second means an HBase user can read 1 KB of data 100 times per second, or 2 KB of data 50 times per second, and so on. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
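The accounting described above boils down to ceiling division by the 1 KB unit size. A minimal sketch (illustrative only, not the HBase implementation):

```java
// Illustrative capacity-unit accounting: each request consumes
// ceil(size / 1 KB) units.
public class CapacityUnits {
  private static final long UNIT_SIZE = 1024; // 1 KB per capacity unit

  public static long unitsFor(long sizeInBytes) {
    // Ceiling division without floating point.
    return (sizeInBytes + UNIT_SIZE - 1) / UNIT_SIZE;
  }
}
```

So a 100-byte read or a full 1 KB read each costs one unit, while a 2 KB read costs two, which is why 100 read capacity units per second covers either 100 reads of 1 KB or 50 reads of 2 KB.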
[jira] [Created] (HBASE-21019) Use StealJobQueue to better utilize handlers in RpcScheduer
Yi Mei created HBASE-21019: -- Summary: Use StealJobQueue to better utilize handlers in RpcScheduer Key: HBASE-21019 URL: https://issues.apache.org/jira/browse/HBASE-21019 Project: HBase Issue Type: Task Affects Versions: 3.0.0, 2.2.0 Reporter: Yi Mei In HBASE-20965, we proposed MasterFifoRpcScheduler to handle RSReport requests in independent handlers, with other requests handled by other handlers. To better utilize the handlers, we can use a StealJobQueue to allow other handlers to steal jobs from the RSReport handlers when there are only a few RSReport handlers. The same applies to SimpleRpcScheduler. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
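The stealing idea can be sketched with two plain queues: a handler prefers its own queue and only falls back to the other queue when idle. This is a simplified single-threaded illustration; names are hypothetical and it is not HBase's actual concurrent StealJobQueue:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified sketch of job stealing between handler pools: a general
// handler drains its own queue first and steals a pending RSReport job
// only when it would otherwise sit idle.
public class StealingHandlerQueue {
  private final Deque<Runnable> ownQueue = new ArrayDeque<>();
  private final Deque<Runnable> stealFrom; // e.g. the RSReport queue

  public StealingHandlerQueue(Deque<Runnable> stealFrom) {
    this.stealFrom = stealFrom;
  }

  public void add(Runnable job) {
    ownQueue.add(job);
  }

  // Prefer local work; steal from the other queue only when empty.
  public Runnable poll() {
    Runnable job = ownQueue.poll();
    return job != null ? job : stealFrom.poll();
  }
}
```

In the real scheduler the queues are concurrent and blocking, but the priority rule is the same: local jobs first, stolen jobs only when idle.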
[jira] [Created] (HBASE-20965) Separate region server report requests to new handlers
Yi Mei created HBASE-20965: -- Summary: Separate region server report requests to new handlers Key: HBASE-20965 URL: https://issues.apache.org/jira/browse/HBASE-20965 Project: HBase Issue Type: Task Reporter: Yi Mei Assignee: Yi Mei Fix For: 3.0.0, 2.2.0 In master rpc scheduler, all rpc requests are executed in a thread pool. This task separates rs report requests to new handlers. -- This message was sent by Atlassian JIRA (v7.6.3#76005)