[jira] [Created] (HBASE-19340) SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
zhaoyuan created HBASE-19340:
Summary: SPLIT_POLICY and FLUSH_POLICY can't be set directly by hbase shell
Key: HBASE-19340
URL: https://issues.apache.org/jira/browse/HBASE-19340
Project: HBase
Issue Type: Bug
Affects Versions: 1.2.6
Reporter: zhaoyuan
Fix For: 1.2.8

Recently I wanted to alter the split policy for a table on my cluster, which runs version 1.2.6. As far as I know, SPLIT_POLICY is an attribute of the table, so I ran the command below in the hbase shell console:

alter 'tablex', SPLIT_POLICY => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'

However, it printed the following, which confused me:

Unknown argument ignored: SPLIT_POLICY
Updating all regions with the new schema...

So I checked the source code, and admin.rb seems to be missing the handling for this argument:

htd.setMaxFileSize(JLong.valueOf(arg.delete(MAX_FILESIZE))) if arg[MAX_FILESIZE]
htd.setReadOnly(JBoolean.valueOf(arg.delete(READONLY))) if arg[READONLY]
...

So I think it may be a bug in hbase-server. Is it?

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
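For illustration, here is a minimal, self-contained sketch (hypothetical class and method names, not HBase code) of the dispatch pattern admin.rb uses: each recognized key is consumed as it is applied, and anything left over is reported as unknown. Since SPLIT_POLICY is not among the consumed keys, it falls through to the warning:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AlterArgs {
    // Mirrors the admin.rb pattern: each recognized key is consumed
    // (arg.delete(KEY)) as the corresponding htd.setXxx call is made;
    // whatever remains in the map is reported as unknown and ignored.
    static List<String> applyKnownArgs(Map<String, String> args) {
        List<String> warnings = new ArrayList<>();
        // Keys consumed by the alter path (abridged list for illustration);
        // note SPLIT_POLICY is not among them.
        for (String known : Arrays.asList("MAX_FILESIZE", "READONLY")) {
            args.remove(known); // htd.setMaxFileSize / htd.setReadOnly would run here
        }
        for (String leftover : args.keySet()) {
            warnings.add("Unknown argument ignored: " + leftover);
        }
        return warnings;
    }

    public static void main(String[] unused) {
        Map<String, String> args = new LinkedHashMap<>();
        args.put("SPLIT_POLICY",
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
        System.out.println(applyKnownArgs(args));
        // prints [Unknown argument ignored: SPLIT_POLICY]
    }
}
```

Until the shell handles the key, a possible workaround on the Java API side should be HTableDescriptor#setRegionSplitPolicyClassName followed by Admin.modifyTable.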
Delete columns by prefix
Hi all,

I have come across a rather old issue https://issues.apache.org/jira/browse/HBASE-5268, which is marked as Won't Fix, and I would like to open a discussion about the topic it describes. I understand the difficulties that a full implementation brings to the get/scan process, but I think the capability described in the JIRA might be beneficial to a lot of use-cases. My question is whether the problem of retrieving the deletion marker for a prefix of a qualifier could be solved by introducing some structure into the qualifier itself. Let me elaborate on this:

- suppose we give the qualifier a structure, say of the form `prefix.suffix`, with a fixed delimiter character (.)
- a delete operation would be allowed only on the prefix part, and would therefore be written as a single "delete prefix" marker on that prefix
- a get operation on a qualifier containing the delimiter character (which might be configurable) would then have to fetch only the row start (to determine whether the row as a whole was deleted) and the delete marker for the qualifier's prefix

I think this implementation would not suffer from the issues described in the original JIRA and would still be practically usable, while being a lot more efficient than listing all the qualifiers actually written and deleting them one by one.

Thanks for any comments or insights.

Best,
Jan
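To make the proposal concrete, here is a minimal sketch (hypothetical names; timestamps and multi-versioning deliberately omitted) of a store where one marker on the prefix covers all `prefix.suffix` qualifiers, so a get needs only the cell itself plus a single marker lookup:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeMap;

// Hypothetical sketch of the proposed scheme: qualifiers take the form
// "prefix.suffix", and a delete is recorded as one marker on the prefix
// instead of one tombstone per written qualifier.
public class PrefixDeleteSketch {
    private final TreeMap<String, String> cells = new TreeMap<>();
    private final Set<String> prefixDeleteMarkers = new HashSet<>();

    void put(String qualifier, String value) {
        cells.put(qualifier, value);
    }

    // One marker covers every qualifier under the prefix.
    void deletePrefix(String prefix) {
        prefixDeleteMarkers.add(prefix);
    }

    // A get only needs the cell plus a single marker lookup on the part
    // before the delimiter -- no scan over sibling qualifiers.
    String get(String qualifier) {
        int dot = qualifier.indexOf('.');
        if (dot >= 0 && prefixDeleteMarkers.contains(qualifier.substring(0, dot))) {
            return null; // covered by a prefix-delete marker
        }
        return cells.get(qualifier);
    }
}
```

A real implementation would additionally have to order markers against puts by timestamp and purge them at compaction time, which this sketch ignores.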
Re: [DISCUSS] Performance degradation in master (compared to 2-alpha-1)
Happy Thanksgiving to you.

Since handling the regression isn't the goal of HBASE-18294, mind logging another JIRA?

I applied patch v6 from HBASE-18294 on branch-2 and observed only a minor conflict (CellUtil.java), meaning the compaction / flush code is mostly the same between branch-2 and master.

In HBASE-19338, a >20% boost was reported with the patch. However, that is not enough to bring performance back.

Cheers

On Thu, Nov 23, 2017 at 7:35 AM, Eshcar Hillel wrote:
> Happy Thanksgiving all,
> In recent benchmarks I ran in HBASE-18294 I discovered major performance
> degradation of master code w.r.t. 2-alpha-1 code. I am running a write-only
> workload (similar to the one reported in HBASE-16417). I am using the same
> hardware and same configuration settings (specifically, I tested both basic
> memstore compaction with optimal parameters, and no memstore
> compaction). While in 2-alpha-1 code I see throughput of ~110 Kops for basic
> compaction and ~80 Kops for no compaction, in the master code I get only
> 60 Kops and 55 Kops, respectively. *This is almost 50% reduction in
> performance*.
> (1) Did anyone else notice such degradation?
> (2) Do we have any systematic automatic/semi-automatic method to track the
> sources of this performance issue?
> Thanks,
> Eshcar
[DISCUSS] Performance degradation in master (compared to 2-alpha-1)
Happy Thanksgiving all,

In recent benchmarks I ran in HBASE-18294 I discovered major performance degradation of master code w.r.t. 2-alpha-1 code. I am running a write-only workload (similar to the one reported in HBASE-16417). I am using the same hardware and same configuration settings (specifically, I tested both basic memstore compaction with optimal parameters, and no memstore compaction). While in 2-alpha-1 code I see throughput of ~110 Kops for basic compaction and ~80 Kops for no compaction, in the master code I get only 60 Kops and 55 Kops, respectively. *This is almost 50% reduction in performance*.

(1) Did anyone else notice such degradation?
(2) Do we have any systematic automatic/semi-automatic method to track the sources of this performance issue?

Thanks,
Eshcar
[jira] [Created] (HBASE-19339) Enable TestAcidGuaranteesWithEagerPolicy and TestAcidGuaranteesWithAdaptivePolicy
Chia-Ping Tsai created HBASE-19339:
Summary: Enable TestAcidGuaranteesWithEagerPolicy and TestAcidGuaranteesWithAdaptivePolicy
Key: HBASE-19339
URL: https://issues.apache.org/jira/browse/HBASE-19339
Project: HBase
Issue Type: Task
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai

It is a follow-up of HBASE-19266.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
[jira] [Created] (HBASE-19338) Performance regression in RegionServerRpcQuotaManager to get ugi
binlijin created HBASE-19338:
Summary: Performance regression in RegionServerRpcQuotaManager to get ugi
Key: HBASE-19338
URL: https://issues.apache.org/jira/browse/HBASE-19338
Project: HBase
Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.0-beta-2
Reporter: binlijin
Assignee: binlijin
Priority: Critical

We found hbase-2.0.0-beta-1.SNAPSHOT has a performance regression with YCSB puts, and here is one finding:

{code}
"RpcServer.default.FPBQ.Fifo.handler=131,queue=17,port=16020" #245 daemon prio=5 os_prio=0 tid=0x7fc82b22e000 nid=0x3a5db waiting for monitor entry [0x7fc50fafa000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
        - waiting to lock <0x7fcaedc20830> (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation)
        at org.apache.hadoop.hbase.security.User$SecureHadoopUser.(User.java:264)
        at org.apache.hadoop.hbase.security.User.getCurrent(User.java:162)
        at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:179)
        at org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager.checkQuota(RegionServerRpcQuotaManager.java:162)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2521)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41560)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:406)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:325)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:305)
{code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
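The stack trace shows handlers blocked on the class-wide lock inside UserGroupInformation.getCurrentUser. One way to avoid per-request contention, sketched below with hypothetical names (the actual fix may instead take the user from the RPC request context), is to resolve the identity once per handler thread and serve later checkQuota-style calls from a lock-free cache:

```java
import java.util.concurrent.ConcurrentHashMap;

public class CachedUserSketch {
    // Stand-in for UserGroupInformation.getCurrentUser(): in Hadoop this
    // method synchronizes on the UserGroupInformation class, so every RPC
    // handler calling it per request contends on one lock.
    static synchronized String expensiveGetCurrentUser() {
        return "ugi-" + Thread.currentThread().getName();
    }

    private static final ConcurrentHashMap<Thread, String> CACHE =
        new ConcurrentHashMap<>();

    // Resolve the identity once per handler thread; every later call on
    // the same thread is a lock-free ConcurrentHashMap lookup.
    static String currentUser() {
        return CACHE.computeIfAbsent(Thread.currentThread(),
            t -> expensiveGetCurrentUser());
    }
}
```

This is only a sketch: a real server would have to bound or evict the cache (keying by live handler threads here), and in HBase the per-request user is typically already available on the RPC call, which avoids the UGI lookup entirely.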
[jira] [Created] (HBASE-19337) AsyncMetaTableAccessor may hang when ScanController.terminate is called multiple times
Guanghao Zhang created HBASE-19337:
Summary: AsyncMetaTableAccessor may hang when ScanController.terminate is called multiple times
Key: HBASE-19337
URL: https://issues.apache.org/jira/browse/HBASE-19337
Project: HBase
Issue Type: Bug
Reporter: Guanghao Zhang

Code in ScanControllerImpl:

{code}
private void preCheck() {
  Preconditions.checkState(Thread.currentThread() == callerThread,
    "The current thread is %s, expected thread is %s, "
        + "you should not call this method outside onNext or onHeartbeat",
    Thread.currentThread(), callerThread);
  Preconditions.checkState(state.equals(ScanControllerState.INITIALIZED),
    "Invalid Stopper state %s", state);
}

@Override
public void terminate() {
  preCheck();
  state = ScanControllerState.TERMINATED;
}
{code}

So if terminate is called on an already terminated scan, it will throw an IllegalStateException.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
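A self-contained recreation of the state check (hypothetical class name, plain if-checks standing in for Guava's Preconditions) makes the failure mode easy to see: the first terminate() flips the state to TERMINATED, so a second call fails preCheck():

```java
public class ScanControllerSketch {
    enum State { INITIALIZED, TERMINATED }

    private final Thread callerThread = Thread.currentThread();
    private State state = State.INITIALIZED;

    private void preCheck() {
        if (Thread.currentThread() != callerThread) {
            throw new IllegalStateException(
                "should not call this method outside onNext or onHeartbeat");
        }
        if (state != State.INITIALIZED) {
            // This is the branch a second terminate() trips over.
            throw new IllegalStateException("Invalid Stopper state " + state);
        }
    }

    public void terminate() {
        preCheck(); // second call sees TERMINATED and throws
        state = State.TERMINATED;
    }
}
```

Making terminate() idempotent, or having the state check tolerate repeated termination, would avoid the exception.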