[hbase] branch master updated: HBASE-22870 reflection fails to access a private nested class

2019-08-17 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 7697d48  HBASE-22870 reflection fails to access a private nested class
7697d48 is described below

commit 7697d48cd7b636396d5f73be43326a4501a9ea43
Author: satanson 
AuthorDate: Sun Aug 18 09:47:06 2019 +0800

HBASE-22870 reflection fails to access a private nested class

Signed-off-by: Reid Chan
---
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 61361be..a0a6b4c 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -2475,9 +2475,11 @@ public class HRegionServer extends HasThread implements
   this.abortMonitor = new Timer("Abort regionserver monitor", true);
   TimerTask abortTimeoutTask = null;
   try {
-      abortTimeoutTask =
+      Constructor<? extends TimerTask> timerTaskCtor =
           Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
-              .asSubclass(TimerTask.class).getDeclaredConstructor().newInstance();
+              .asSubclass(TimerTask.class).getDeclaredConstructor();
+      timerTaskCtor.setAccessible(true);
+      abortTimeoutTask = timerTaskCtor.newInstance();
   } catch (Exception e) {
 LOG.warn("Initialize abort timeout task failed", e);
   }
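The fix above can be sketched outside HBase: instantiating a private nested `TimerTask` subclass via reflection needs `setAccessible(true)` on the declared constructor before `newInstance()` (on JDK 8, without nest-based access control, the one-step `getDeclaredConstructor().newInstance()` chain fails with `IllegalAccessException`). A minimal, self-contained sketch; `AbortTaskDemo` and `PrivateTask` are illustrative stand-ins for `HRegionServer` and `SystemExitWhenAbortTimeout`:

```java
import java.lang.reflect.Constructor;
import java.util.TimerTask;

public class AbortTaskDemo {
  // Stand-in for SystemExitWhenAbortTimeout: a private nested TimerTask.
  private static class PrivateTask extends TimerTask {
    @Override
    public void run() {
      // The real task would force a System.exit(); omitted here.
    }
  }

  // Mirrors the patched code path: resolve the class by name, take its
  // declared (private) no-arg constructor, open it up, then instantiate.
  static TimerTask create(String className) throws Exception {
    Constructor<? extends TimerTask> ctor =
        Class.forName(className).asSubclass(TimerTask.class).getDeclaredConstructor();
    ctor.setAccessible(true); // without this, newInstance() can throw IllegalAccessException
    return ctor.newInstance();
  }

  public static void main(String[] args) throws Exception {
    TimerTask task = create(AbortTaskDemo.class.getName() + "$PrivateTask");
    System.out.println(task.getClass().getSimpleName()); // PrivateTask
  }
}
```

The two-step form also matters because the class name comes from configuration (`ABORT_TIMEOUT_TASK`), so users may plug in their own task class, public or not.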



[hbase] branch branch-2 updated: HBASE-22870 reflection fails to access a private nested class

2019-08-17 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 29ed415  HBASE-22870 reflection fails to access a private nested class
29ed415 is described below

commit 29ed4157b30d9720337d576ad48c28e9d1c47114
Author: satanson 
AuthorDate: Sun Aug 18 09:47:06 2019 +0800

HBASE-22870 reflection fails to access a private nested class

Signed-off-by: Reid Chan
---
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index c4fd11c..604cf11 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -2480,9 +2480,11 @@ public class HRegionServer extends HasThread implements
   this.abortMonitor = new Timer("Abort regionserver monitor", true);
   TimerTask abortTimeoutTask = null;
   try {
-      abortTimeoutTask =
+      Constructor<? extends TimerTask> timerTaskCtor =
           Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
-              .asSubclass(TimerTask.class).getDeclaredConstructor().newInstance();
+              .asSubclass(TimerTask.class).getDeclaredConstructor();
+      timerTaskCtor.setAccessible(true);
+      abortTimeoutTask = timerTaskCtor.newInstance();
   } catch (Exception e) {
 LOG.warn("Initialize abort timeout task failed", e);
   }



[hbase] branch branch-2.2 updated: HBASE-22870 reflection fails to access a private nested class

2019-08-17 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 754c2da  HBASE-22870 reflection fails to access a private nested class
754c2da is described below

commit 754c2da349c7c0aea40f024f52c1f9191bf3cad2
Author: satanson 
AuthorDate: Sun Aug 18 09:47:06 2019 +0800

HBASE-22870 reflection fails to access a private nested class

Signed-off-by: Reid Chan

Conflicts:

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
---
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java  | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 7991a61..81166e2 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1065,9 +1065,11 @@ public class HRegionServer extends HasThread implements
   Timer abortMonitor = new Timer("Abort regionserver monitor", true);
   TimerTask abortTimeoutTask = null;
   try {
-      abortTimeoutTask =
-          Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
-              .asSubclass(TimerTask.class).getDeclaredConstructor().newInstance();
+      Constructor<? extends TimerTask> timerTaskCtor =
+          Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
+              .asSubclass(TimerTask.class).getDeclaredConstructor();
+      timerTaskCtor.setAccessible(true);
+      abortTimeoutTask = timerTaskCtor.newInstance();
   } catch (Exception e) {
 LOG.warn("Initialize abort timeout task failed", e);
   }



[hbase] branch branch-2.1 updated: HBASE-22870 reflection fails to access a private nested class

2019-08-17 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 9e18049  HBASE-22870 reflection fails to access a private nested class
9e18049 is described below

commit 9e180499577bac0a103eb965055d961d19e15acb
Author: satanson 
AuthorDate: Sun Aug 18 09:47:06 2019 +0800

HBASE-22870 reflection fails to access a private nested class

Signed-off-by: Reid Chan

Conflicts:

hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
---
 .../java/org/apache/hadoop/hbase/regionserver/HRegionServer.java  | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 405d75a..587e7f6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1047,9 +1047,11 @@ public class HRegionServer extends HasThread implements
   Timer abortMonitor = new Timer("Abort regionserver monitor", true);
   TimerTask abortTimeoutTask = null;
   try {
-      abortTimeoutTask =
-          Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
-              .asSubclass(TimerTask.class).getDeclaredConstructor().newInstance();
+      Constructor<? extends TimerTask> timerTaskCtor =
+          Class.forName(conf.get(ABORT_TIMEOUT_TASK, SystemExitWhenAbortTimeout.class.getName()))
+              .asSubclass(TimerTask.class).getDeclaredConstructor();
+      timerTaskCtor.setAccessible(true);
+      abortTimeoutTask = timerTaskCtor.newInstance();
   } catch (Exception e) {
 LOG.warn("Initialize abort timeout task failed", e);
   }



[hbase] branch branch-1 updated: HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider

2019-08-21 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 105008e  HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider
105008e is described below

commit 105008e748da81f11d3e111932ce24ee03bca64a
Author: Reid Chan 
AuthorDate: Thu Aug 22 11:15:21 2019 +0800

HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   3 +-
 .../hbase/MockRegionServerServicesWithWALs.java| 296 +
 .../regionserver/TestRegionMergeTransaction.java   |  40 ++-
 .../hbase/regionserver/TestSplitTransaction.java   | 262 +-
 4 files changed, 337 insertions(+), 264 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index b137c97..1157eba 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7198,7 +7198,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
*/
   HRegion createMergedRegionFromMerges(final HRegionInfo mergedRegionInfo,
   final HRegion region_b) throws IOException {
-    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(),
+    WAL mergedRegionWAL = rsServices == null ? getWAL() : rsServices.getWAL(mergedRegionInfo);
+    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), mergedRegionWAL,
 fs.getFileSystem(), this.getBaseConf(), mergedRegionInfo,
 this.getTableDesc(), this.rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount()
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
new file mode 100644
index 000..69290b2
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
@@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import com.google.protobuf.Service;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.ipc.RpcServerInterface;
+import org.apache.hadoop.hbase.master.TableLockManager;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos;
+import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager;
+import org.apache.hadoop.hbase.regionserver.CompactionRequestor;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HeapMemoryManager;
+import org.apache.hadoop.hbase.regionserver.Leases;
+import org.apache.hadoop.hbase.regionserver.MetricsRegionServer;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionServerAccounting;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.regionserver.ServerNonceManager;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALProvider;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Mock region server services with WALProvider, it can be used for testing wal related tests,
+ * like split or merge regions.
+ */
+public class MockRegionServerService
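The one-line change in `createMergedRegionFromMerges` above is the heart of the fix: ask `RegionServerServices` (which routes through the configured `WALProvider`) for the merged region's WAL, and fall back to the parent region's WAL only when no services are present (as in some unit tests). A simplified sketch of that null-guarded lookup; `Wal`, `Services`, and `Region` here are invented stand-in types, not HBase classes:

```java
public class MergedWalDemo {
  interface Wal { String name(); }

  // Stand-in for RegionServerServices.getWAL(regionInfo), which consults
  // the WALProvider to pick the right WAL for a given region.
  interface Services { Wal walFor(String regionName); }

  static class Region {
    private final Wal ownWal;
    private final Services services; // may be null in tests

    Region(Wal ownWal, Services services) {
      this.ownWal = ownWal;
      this.services = services;
    }

    // Mirrors the patch: prefer the provider-aware lookup for the merged
    // region; only fall back to this region's own WAL when services are absent.
    Wal walForMergedRegion(String mergedRegionName) {
      return services == null ? ownWal : services.walFor(mergedRegionName);
    }
  }

  public static void main(String[] args) {
    Wal parentWal = () -> "parent-wal";
    Services services = region -> () -> "wal-for-" + region;

    Region withServices = new Region(parentWal, services);
    Region withoutServices = new Region(parentWal, null);

    System.out.println(withServices.walForMergedRegion("merged").name());    // wal-for-merged
    System.out.println(withoutServices.walForMergedRegion("merged").name()); // parent-wal
  }
}
```

Before the patch, a merged region always inherited the parent's WAL, which is wrong when the provider assigns WALs per region (e.g. multiple WALs per server).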

[hbase] branch branch-1.4 updated: HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider

2019-08-21 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new dde3873  HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider
dde3873 is described below

commit dde3873360bc64d81d1dbdfa5f79bf2096954fbc
Author: Reid Chan 
AuthorDate: Thu Aug 22 11:15:21 2019 +0800

HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   3 +-
 .../hbase/MockRegionServerServicesWithWALs.java| 296 +
 .../regionserver/TestRegionMergeTransaction.java   |  40 ++-
 .../hbase/regionserver/TestSplitTransaction.java   | 262 +-
 4 files changed, 337 insertions(+), 264 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 9ccb677..f16e9e9 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7187,7 +7187,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
*/
   HRegion createMergedRegionFromMerges(final HRegionInfo mergedRegionInfo,
   final HRegion region_b) throws IOException {
-    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(),
+    WAL mergedRegionWAL = rsServices == null ? getWAL() : rsServices.getWAL(mergedRegionInfo);
+    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), mergedRegionWAL,
 fs.getFileSystem(), this.getBaseConf(), mergedRegionInfo,
 this.getTableDesc(), this.rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount()
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
new file mode 100644
index 000..69290b2
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
@@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import com.google.protobuf.Service;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.ipc.RpcServerInterface;
+import org.apache.hadoop.hbase.master.TableLockManager;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos;
+import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager;
+import org.apache.hadoop.hbase.regionserver.CompactionRequestor;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HeapMemoryManager;
+import org.apache.hadoop.hbase.regionserver.Leases;
+import org.apache.hadoop.hbase.regionserver.MetricsRegionServer;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionServerAccounting;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.regionserver.ServerNonceManager;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALProvider;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Mock region server services with WALProvider, it can be used for testing wal related tests,
+ * like split or merge regions.
+ */
+public class MockRegionServerService

[hbase] branch branch-1.3 updated: HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider

2019-08-21 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
 new ca0390c  HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider
ca0390c is described below

commit ca0390cb7b17dc457a52fc210f592ecf0591fbfa
Author: Reid Chan 
AuthorDate: Thu Aug 22 11:15:21 2019 +0800

HBASE-22861 [WAL] Merged region should get its WAL according to WALProvider

Conflicts:

hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   3 +-
 .../hbase/MockRegionServerServicesWithWALs.java| 296 +
 .../regionserver/TestRegionMergeTransaction.java   |  40 ++-
 .../hbase/regionserver/TestSplitTransaction.java   | 258 +-
 4 files changed, 337 insertions(+), 260 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index c988863..5a930d6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7014,7 +7014,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
*/
   HRegion createMergedRegionFromMerges(final HRegionInfo mergedRegionInfo,
   final HRegion region_b) throws IOException {
-    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(),
+    WAL mergedRegionWAL = rsServices == null ? getWAL() : rsServices.getWAL(mergedRegionInfo);
+    HRegion r = HRegion.newHRegion(this.fs.getTableDir(), mergedRegionWAL,
 fs.getFileSystem(), this.getBaseConf(), mergedRegionInfo,
 this.getTableDesc(), this.rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount()
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
new file mode 100644
index 000..69290b2
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
@@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import com.google.protobuf.Service;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.ipc.RpcServerInterface;
+import org.apache.hadoop.hbase.master.TableLockManager;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos;
+import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager;
+import org.apache.hadoop.hbase.regionserver.CompactionRequestor;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HeapMemoryManager;
+import org.apache.hadoop.hbase.regionserver.Leases;
+import org.apache.hadoop.hbase.regionserver.MetricsRegionServer;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionServerAccounting;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.regionserver.ServerNonceManager;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALProvider;
+import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Mock region server services with WALProvider

[hbase] branch branch-1.3 updated: HBASE-22861 [Addendum] Remove unassign(not exist in branch-1.3) method in MockRegionServerServicesWithWALs

2019-08-21 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
 new a4a7c0e  HBASE-22861 [Addendum] Remove unassign(not exist in branch-1.3) method in MockRegionServerServicesWithWALs
a4a7c0e is described below

commit a4a7c0edc259ed49760f752881bd8a81b8255452
Author: Reid Chan 
AuthorDate: Thu Aug 22 12:36:00 2019 +0800

HBASE-22861 [Addendum] Remove unassign(not exist in branch-1.3) method in MockRegionServerServicesWithWALs
---
 .../org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java| 5 -
 1 file changed, 5 deletions(-)

diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
index 69290b2..7f2606f 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServicesWithWALs.java
@@ -199,11 +199,6 @@ public class MockRegionServerServicesWithWALs implements RegionServerServices {
   }
 
   @Override
-  public void unassign(byte[] regionName) throws IOException {
-rss.unassign(regionName);
-  }
-
-  @Override
   public void addToOnlineRegions(Region r) {
 rss.addToOnlineRegions(r);
   }



[hbase] branch branch-1.3 updated: HBASE-22835 Scan/Get with setColumn and the store with ROWCOL bloom filter could throw AssertionError

2019-08-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
 new 957d968  HBASE-22835 Scan/Get with setColumn and the store with ROWCOL bloom filter could throw AssertionError
957d968 is described below

commit 957d968cca24efacae77b65c1761bac93e7b4925
Author: eomiks 
AuthorDate: Fri Aug 23 11:57:59 2019 +0900

HBASE-22835 Scan/Get with setColumn and the store with ROWCOL bloom filter could throw AssertionError

Signed-off-by: Reid Chan
---
 .../hadoop/hbase/regionserver/StoreScanner.java| 227 -
 .../apache/hadoop/hbase/HBaseTestingUtility.java   |  29 +++
 .../hbase/regionserver/TestIsDeleteFailure.java| 146 +
 3 files changed, 305 insertions(+), 97 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index 9d63374..3da0d1c 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -22,7 +22,6 @@ package org.apache.hadoop.hbase.regionserver;
 import java.io.IOException;
 import java.io.InterruptedIOException;
 import java.util.ArrayList;
-import java.util.Collection;
 import java.util.List;
 import java.util.NavigableSet;
 import java.util.concurrent.CountDownLatch;
@@ -42,7 +41,6 @@ import org.apache.hadoop.hbase.client.IsolationLevel;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.executor.ExecutorService;
 import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode;
 import org.apache.hadoop.hbase.regionserver.ScannerContext.LimitScope;
 import org.apache.hadoop.hbase.regionserver.ScannerContext.NextState;
 import org.apache.hadoop.hbase.regionserver.handler.ParallelSeekHandler;
@@ -552,8 +550,7 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
   prevCell = cell;
   topChanged = false;
   ScanQueryMatcher.MatchCode qcode = matcher.match(cell);
-  qcode = optimize(qcode, cell);
-  switch(qcode) {
+  switch (qcode) {
 case INCLUDE:
 case INCLUDE_AND_SEEK_NEXT_ROW:
 case INCLUDE_AND_SEEK_NEXT_COL:
@@ -606,9 +603,9 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
             // the heap.peek() will any way be in the next row. So the SQM.match(cell) need do
             // another compareRow to say the current row is DONE
             matcher.row = null;
-            seekToNextRow(cell);
+            seekOrSkipToNextRow(cell);
           } else if (qcode == ScanQueryMatcher.MatchCode.INCLUDE_AND_SEEK_NEXT_COL) {
-            seekAsDirection(matcher.getKeyForNextColumn(cell));
+            seekOrSkipToNextColumn(cell);
   } else {
 this.heap.next();
   }
@@ -648,7 +645,7 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
           // the heap.peek() will any way be in the next row. So the SQM.match(cell) need do
           // another compareRow to say the current row is DONE
           matcher.row = null;
-          seekToNextRow(cell);
+          seekOrSkipToNextRow(cell);
           NextState stateAfterSeekNextRow = needToReturn(outResult);
           if (stateAfterSeekNextRow != null) {
             return scannerContext.setScannerState(stateAfterSeekNextRow).hasMoreValues();
@@ -656,7 +653,7 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
           break;

         case SEEK_NEXT_COL:
-          seekAsDirection(matcher.getKeyForNextColumn(cell));
+          seekOrSkipToNextColumn(cell);
           NextState stateAfterSeekNextColumn = needToReturn(outResult);
           if (stateAfterSeekNextColumn != null) {
             return scannerContext.setScannerState(stateAfterSeekNextColumn).hasMoreValues();
@@ -713,93 +710,6 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
     return null;
   }

-  /**
-   * See if we should actually SEEK or rather just SKIP to the next Cell (see HBASE-13109).
-   * This method works together with ColumnTrackers and Filters. ColumnTrackers may issue SEEK
-   * hints, such as seek to next column, next row, or seek to an arbitrary seek key.
-   * This method intercepts these qcodes and decides whether a seek is the most efficient _actual_
-   * way to get us to the requested cell (SEEKs are more expensive than SKIP, SKIP, SKIP inside the
-   * current, loaded block).
-   * It does this by looking at the next indexed key of the current HFile. This key
-   * is then compared with the _SEEK_ key, where a SEEK key is an artificial 'last possible key
-   * on the row' (on
[hbase] branch branch-1 updated: HBASE-22880 Move the DirScanPool out and do not use static field

2019-08-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 8961315  HBASE-22880 Move the DirScanPool out and do not use static field
8961315 is described below

commit 896131540a72f5997d4b02c449e1fb6a62d687a1
Author: Dean_19 <35092554+zha...@users.noreply.github.com>
AuthorDate: Sat Aug 24 13:13:32 2019 +0800

HBASE-22880 Move the DirScanPool out and do not use static field

Signed-off-by: Reid Chan
---
 .../org/apache/hadoop/hbase/master/HMaster.java|  31 ++---
 .../hadoop/hbase/master/cleaner/CleanerChore.java  | 146 -
 .../hadoop/hbase/master/cleaner/DirScanPool.java   | 111 
 .../hadoop/hbase/master/cleaner/HFileCleaner.java  |   8 +-
 .../hadoop/hbase/master/cleaner/LogCleaner.java|  10 +-
 .../hadoop/hbase/backup/TestHFileArchiving.java|  33 +++--
 .../example/TestZooKeeperTableArchiveClient.java   |  17 +--
 .../hbase/master/cleaner/TestCleanerChore.java |  49 ---
 .../hbase/master/cleaner/TestHFileCleaner.java |  21 +--
 .../hbase/master/cleaner/TestHFileLinkCleaner.java |  24 +++-
 .../hbase/master/cleaner/TestLogsCleaner.java  |   8 +-
 11 files changed, 252 insertions(+), 206 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index f319ee4..1518f76 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -98,7 +98,7 @@ import org.apache.hadoop.hbase.master.balancer.BalancerChore;
 import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-import org.apache.hadoop.hbase.master.cleaner.CleanerChore;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
 import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
 import org.apache.hadoop.hbase.master.cleaner.LogCleaner;
 import org.apache.hadoop.hbase.master.cleaner.ReplicationZKLockCleanerChore;
@@ -333,6 +333,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
   private SnapshotCleanerChore snapshotCleanerChore = null;
 
   CatalogJanitor catalogJanitorChore;
+  private DirScanPool cleanerPool;
   private ReplicationZKLockCleanerChore replicationZKLockCleanerChore;
   private ReplicationZKNodeCleanerChore replicationZKNodeCleanerChore;
   private LogCleaner logCleaner;
@@ -898,6 +899,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
(System.currentTimeMillis() - masterActiveTime) / 1000.0f));
 this.masterFinishedInitializationTime = System.currentTimeMillis();
 configurationManager.registerObserver(this.balancer);
+configurationManager.registerObserver(this.cleanerPool);
 configurationManager.registerObserver(this.hfileCleaner);
 configurationManager.registerObserver(this.logCleaner);
 
@@ -1237,22 +1239,19 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
this.service.startExecutorService(ExecutorType.MASTER_TABLE_OPERATIONS, 1);
startProcedureExecutor();
 
-    // Initial cleaner chore
-    CleanerChore.initChorePool(conf);
-   // Start log cleaner thread
-   int cleanerInterval = conf.getInt("hbase.master.cleaner.interval", 60 * 1000);
-   this.logCleaner =
-      new LogCleaner(cleanerInterval,
-         this, conf, getMasterFileSystem().getOldLogDir().getFileSystem(conf),
-         getMasterFileSystem().getOldLogDir());
-    getChoreService().scheduleChore(logCleaner);
-
+    // Create cleaner thread pool
+    cleanerPool = new DirScanPool(conf);
+    // Start log cleaner thread
+    int cleanerInterval = conf.getInt("hbase.master.cleaner.interval", 600 * 1000);
+    this.logCleaner = new LogCleaner(cleanerInterval, this, conf,
+      getMasterFileSystem().getOldLogDir().getFileSystem(conf),
+      getMasterFileSystem().getOldLogDir(), cleanerPool);
     //start the hfile archive cleaner thread
     Path archiveDir = HFileArchiveUtil.getArchivePath(conf);
     Map<String, Object> params = new HashMap<>();
     params.put(MASTER, this);
-    this.hfileCleaner = new HFileCleaner(cleanerInterval, this, conf, getMasterFileSystem()
-        .getFileSystem(), archiveDir, params);
+    this.hfileCleaner = new HFileCleaner(cleanerInterval, this, conf,
+      getMasterFileSystem().getFileSystem(), archiveDir, cleanerPool, params);
     getChoreService().scheduleChore(hfileCleaner);
 
     final boolean isSnapshotChoreDisabled = conf.getBoolean(HConstants.SNAPSHOT_CLEANER_DISABLE,
@@ -1306,8 +1305,10 @@ public class HMaster extends HRegionServer implements MasterServices, Server {

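The hunk above replaces the static CleanerChore pool with an instance-owned DirScanPool that HMaster registers as a configuration observer, so the pool can be resized when the configuration is reloaded. The underlying pattern — a thread pool whose size tracks a live configuration value — can be sketched without any HBase dependencies (class and method names below are illustrative, not HBase's actual API):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the DirScanPool idea: a pool owned by an instance
// (not a static field) whose size follows a configuration value.
public class DirScanPoolSketch {
  private final ThreadPoolExecutor pool;

  public DirScanPoolSketch(int initialSize) {
    pool = new ThreadPoolExecutor(initialSize, initialSize, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());
    pool.allowCoreThreadTimeOut(true);
  }

  // Called on configuration reload, mirroring the role of
  // ConfigurationObserver#onConfigurationChange in the patch.
  public void onConfigurationChange(int newSize) {
    if (newSize != pool.getCorePoolSize()) {
      // Grow max first when enlarging, shrink core first when reducing,
      // so corePoolSize <= maximumPoolSize holds at every step.
      if (newSize > pool.getMaximumPoolSize()) {
        pool.setMaximumPoolSize(newSize);
        pool.setCorePoolSize(newSize);
      } else {
        pool.setCorePoolSize(newSize);
        pool.setMaximumPoolSize(newSize);
      }
    }
  }

  public int size() {
    return pool.getCorePoolSize();
  }

  public void shutdown() {
    pool.shutdownNow();
  }

  public static void main(String[] args) {
    DirScanPoolSketch p = new DirScanPoolSketch(2);
    p.onConfigurationChange(4);
    System.out.println(p.size()); // prints 4
    p.shutdown();
  }
}
```

Owning the pool as a field (constructed in HMaster, passed to both cleaners) is what lets two chores share one bounded pool while still allowing tests to create and tear down isolated pools.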
[hbase] branch branch-1.4 updated: HBASE-22880 Move the DirScanPool out and do not use static field

2019-08-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new fdccda3  HBASE-22880 Move the DirScanPool out and do not use static field
fdccda3 is described below

commit fdccda3197bd5c9dcbc38e8b49ae19cc8c5d9b13
Author: Dean_19 <35092554+zha...@users.noreply.github.com>
AuthorDate: Sat Aug 24 13:13:32 2019 +0800

HBASE-22880 Move the DirScanPool out and do not use static field

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/master/HMaster.java|  31 ++---
 .../hadoop/hbase/master/cleaner/CleanerChore.java  | 146 -
 .../hadoop/hbase/master/cleaner/DirScanPool.java   | 111 
 .../hadoop/hbase/master/cleaner/HFileCleaner.java  |   8 +-
 .../hadoop/hbase/master/cleaner/LogCleaner.java|  10 +-
 .../hadoop/hbase/backup/TestHFileArchiving.java|  33 +++--
 .../example/TestZooKeeperTableArchiveClient.java   |  17 +--
 .../hbase/master/cleaner/TestCleanerChore.java |  49 ---
 .../hbase/master/cleaner/TestHFileCleaner.java |  21 +--
 .../hbase/master/cleaner/TestHFileLinkCleaner.java |  24 +++-
 .../hbase/master/cleaner/TestLogsCleaner.java  |   8 +-
 11 files changed, 252 insertions(+), 206 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 0bc6c91..7a49623 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -98,7 +98,7 @@ import org.apache.hadoop.hbase.master.balancer.BalancerChore;
 import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer;
 import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore;
 import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory;
-import org.apache.hadoop.hbase.master.cleaner.CleanerChore;
+import org.apache.hadoop.hbase.master.cleaner.DirScanPool;
 import org.apache.hadoop.hbase.master.cleaner.HFileCleaner;
 import org.apache.hadoop.hbase.master.cleaner.LogCleaner;
 import org.apache.hadoop.hbase.master.cleaner.ReplicationZKLockCleanerChore;
@@ -330,6 +330,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
   private PeriodicDoMetrics periodicDoMetricsChore = null;
 
   CatalogJanitor catalogJanitorChore;
+  private DirScanPool cleanerPool;
   private ReplicationZKLockCleanerChore replicationZKLockCleanerChore;
   private ReplicationZKNodeCleanerChore replicationZKNodeCleanerChore;
   private LogCleaner logCleaner;
@@ -895,6 +896,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
(System.currentTimeMillis() - masterActiveTime) / 1000.0f));
 this.masterFinishedInitializationTime = System.currentTimeMillis();
 configurationManager.registerObserver(this.balancer);
+configurationManager.registerObserver(this.cleanerPool);
 configurationManager.registerObserver(this.hfileCleaner);
 configurationManager.registerObserver(this.logCleaner);
 
@@ -1234,22 +1236,19 @@ public class HMaster extends HRegionServer implements MasterServices, Server {
this.service.startExecutorService(ExecutorType.MASTER_TABLE_OPERATIONS, 1);
startProcedureExecutor();
 
-    // Initial cleaner chore
-    CleanerChore.initChorePool(conf);
-   // Start log cleaner thread
-   int cleanerInterval = conf.getInt("hbase.master.cleaner.interval", 60 * 1000);
-   this.logCleaner =
-      new LogCleaner(cleanerInterval,
-         this, conf, getMasterFileSystem().getOldLogDir().getFileSystem(conf),
-         getMasterFileSystem().getOldLogDir());
-    getChoreService().scheduleChore(logCleaner);
-
+    // Create cleaner thread pool
+    cleanerPool = new DirScanPool(conf);
+    // Start log cleaner thread
+    int cleanerInterval = conf.getInt("hbase.master.cleaner.interval", 600 * 1000);
+    this.logCleaner = new LogCleaner(cleanerInterval, this, conf,
+      getMasterFileSystem().getOldLogDir().getFileSystem(conf),
+      getMasterFileSystem().getOldLogDir(), cleanerPool);
     //start the hfile archive cleaner thread
     Path archiveDir = HFileArchiveUtil.getArchivePath(conf);
     Map<String, Object> params = new HashMap<String, Object>();
     params.put(MASTER, this);
-    this.hfileCleaner = new HFileCleaner(cleanerInterval, this, conf, getMasterFileSystem()
-        .getFileSystem(), archiveDir, params);
+    this.hfileCleaner = new HFileCleaner(cleanerInterval, this, conf,
+      getMasterFileSystem().getFileSystem(), archiveDir, cleanerPool, params);
     getChoreService().scheduleChore(hfileCleaner);
 serviceStarted = true;
 if (LOG.isTraceEnabled()) {
@@ -1291,8 +1290,10 @@ public class HMaster extends HRegionServer implements MasterServices, Server {

[hbase] branch branch-1 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new f240ca0  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
f240ca0 is described below

commit f240ca0e637bbbd8c7854703b57d8ceb95b49166
Author: Aman Poonia 
AuthorDate: Sat Aug 17 15:17:42 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index c18a49a..b346c14 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -107,6 +107,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether split is enabled", se);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether merge is enabled", se);
+    }
+    if (!splitEnabled && !mergeEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 
     List<NormalizationPlan> plans = new ArrayList<NormalizationPlan>();
     List<HRegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
@@ -141,19 +158,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
     while (candidateIdx < tableRegions.size()) {
       HRegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);


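The patch above hoists the split/merge switch probes ahead of plan computation and returns early when both are disabled, so the normalizer no longer scans region sizes for nothing. The guard, reduced to a dependency-free sketch (the static flags stand in for the RPC probes to MasterRpcServices, and the plan computation is deliberately simplified):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the early-exit guard HBASE-22872 adds: check the split and merge
// switches first, and skip plan computation entirely when both are off.
public class NormalizerGuardSketch {

  // Stand-ins for the RPC probes; the real code calls
  // MasterRpcServices#isSplitOrMergeEnabled and defaults to true on failure.
  static boolean splitEnabled = true;
  static boolean mergeEnabled = true;

  static List<String> computePlans(List<Long> regionSizes) {
    if (!splitEnabled && !mergeEnabled) {
      // Both switches off: nothing the normalizer could do, so bail out
      // before touching region sizes (mirrors the patch's early return null).
      return null;
    }
    List<String> plans = new ArrayList<>();
    long total = 0;
    for (long s : regionSizes) {
      total += s;
    }
    long avg = regionSizes.isEmpty() ? 0 : total / regionSizes.size();
    for (long s : regionSizes) {
      if (splitEnabled && s > 2 * avg) {
        plans.add("SPLIT");          // oversized region: split candidate
      } else if (mergeEnabled && s < avg / 2) {
        plans.add("MERGE");          // undersized region: merge candidate
      }
    }
    return plans;
  }

  public static void main(String[] args) {
    splitEnabled = false;
    mergeEnabled = false;
    System.out.println(computePlans(java.util.Arrays.asList(10L, 200L))); // prints null
  }
}
```

The point of the move is purely ordering: the same check used to run after the average-region-size pass, so a cluster with both switches off still paid the full cost of plan computation on every normalizer run.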

[hbase] branch branch-1.4 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 23991cb  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
23991cb is described below

commit 23991cb51e5bac3a27237475f00bb4976a83d5fe
Author: Aman Poonia 
AuthorDate: Sat Aug 17 15:17:42 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index c18a49a..b346c14 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -107,6 +107,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether split is enabled", se);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether merge is enabled", se);
+    }
+    if (!splitEnabled && !mergeEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 
     List<NormalizationPlan> plans = new ArrayList<NormalizationPlan>();
     List<HRegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
@@ -141,19 +158,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
     while (candidateIdx < tableRegions.size()) {
       HRegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch branch-1.3 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
 new 7c43b91  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
7c43b91 is described below

commit 7c43b91c5da8a955cfebe388ad8ea5ada58b9727
Author: Aman Poonia 
AuthorDate: Sat Aug 17 15:17:42 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 30 --
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index 67aaee1..52714b2 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -107,6 +107,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether split is enabled", se);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (ServiceException se) {
+      LOG.debug("Unable to determine whether merge is enabled", se);
+    }
+    if (!splitEnabled && !mergeEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 
     List<NormalizationPlan> plans = new ArrayList<NormalizationPlan>();
     List<HRegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
@@ -137,19 +154,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (ServiceException se) {
-      LOG.debug("Unable to determine whether split is enabled", se);
-    }
     while (candidateIdx < tableRegions.size()) {
       HRegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch branch-2 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 1eac16e  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
1eac16e is described below

commit 1eac16e78f37102f2595fd0c36dbf85224109607
Author: Aman Poonia 
AuthorDate: Mon Aug 26 17:45:01 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 31 --
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index 300c6a7..74b338b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -120,7 +120,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
-
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether split is enabled", e);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether merge is enabled", e);
+    }
+    if (!mergeEnabled && !splitEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
     List<NormalizationPlan> plans = new ArrayList<>();
     List<RegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
       getRegionsOfTable(table);
@@ -178,19 +194,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
     while (candidateIdx < tableRegions.size()) {
       RegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch branch-2.1 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 06ae766  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
06ae766 is described below

commit 06ae7668095fde5af9e9e18aa2cb47f77f4583fa
Author: Aman Poonia 
AuthorDate: Mon Aug 26 17:45:01 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 31 --
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index 300c6a7..74b338b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -120,7 +120,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
-
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether split is enabled", e);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether merge is enabled", e);
+    }
+    if (!mergeEnabled && !splitEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 List<NormalizationPlan> plans = new ArrayList<>();
     List<RegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
       getRegionsOfTable(table);
@@ -178,19 +194,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
     while (candidateIdx < tableRegions.size()) {
       RegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch branch-2.2 updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-26 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new d48225c  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
d48225c is described below

commit d48225c8bfeb4a2ba8d3c5c3a6fa2e8c6592e564
Author: Aman Poonia 
AuthorDate: Mon Aug 26 17:45:01 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 31 --
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index 300c6a7..74b338b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -120,7 +120,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
-
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether split is enabled", e);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether merge is enabled", e);
+    }
+    if (!mergeEnabled && !splitEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 List<NormalizationPlan> plans = new ArrayList<>();
     List<RegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
       getRegionsOfTable(table);
@@ -178,19 +194,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-  LOG.debug("Unable to determine whether split is enabled", e);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
     while (candidateIdx < tableRegions.size()) {
       RegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch master updated: HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

2019-08-27 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 64f8890  HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled
64f8890 is described below

commit 64f88906f7cc7265fe0c42a4c42530dbd660c70b
Author: Aman Poonia 
AuthorDate: Tue Aug 27 21:35:57 2019 +0530

    HBASE-22872 Don't try to create normalization plan unnecessarily when split and merge both are disabled

Signed-off-by: Reid Chan 
---
 .../master/normalizer/SimpleRegionNormalizer.java  | 31 --
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
index a30a13b..b55d2b6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java
@@ -131,7 +131,23 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
       LOG.debug("Normalization of system table " + table + " isn't allowed");
       return null;
     }
-
+    boolean splitEnabled = true, mergeEnabled = true;
+    try {
+      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether split is enabled", e);
+    }
+    try {
+      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
+        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
+    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
+      LOG.debug("Unable to determine whether merge is enabled", e);
+    }
+    if (!mergeEnabled && !splitEnabled) {
+      LOG.debug("Both split and merge are disabled for table: " + table);
+      return null;
+    }
 List<NormalizationPlan> plans = new ArrayList<>();
     List<RegionInfo> tableRegions = masterServices.getAssignmentManager().getRegionStates().
       getRegionsOfTable(table);
@@ -189,19 +205,6 @@ public class SimpleRegionNormalizer implements RegionNormalizer {
     LOG.debug("Table " + table + ", average region size: " + avgRegionSize);
 
     int candidateIdx = 0;
-    boolean splitEnabled = true, mergeEnabled = true;
-    try {
-      splitEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.SPLIT)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
-    try {
-      mergeEnabled = masterRpcServices.isSplitOrMergeEnabled(null,
-        RequestConverter.buildIsSplitOrMergeEnabledRequest(MasterSwitchType.MERGE)).getEnabled();
-    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException e) {
-      LOG.debug("Unable to determine whether split is enabled", e);
-    }
     while (candidateIdx < tableRegions.size()) {
       RegionInfo hri = tableRegions.get(candidateIdx);
       long regionSize = getRegionSize(hri);



[hbase] branch master updated: HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

2019-09-01 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 13b2edc  HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl
13b2edc is described below

commit 13b2edc1be9b389140974ce8d5150aa60407277d
Author: Pankaj 
AuthorDate: Mon Sep 2 08:17:44 2019 +0530

    HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java| 6 ++
 1 file changed, 6 insertions(+)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
index d7a2ccb..4aac38e 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
@@ -77,6 +77,10 @@ public class TableRecordReaderImpl {
* @throws IOException When restarting fails.
*/
   public void restart(byte[] firstRow) throws IOException {
+// Update counter metrics based on current scan before reinitializing it
+if (currentScan != null) {
+  updateCounters();
+}
 currentScan = new Scan(scan);
 currentScan.withStartRow(firstRow);
 currentScan.setScanMetricsEnabled(true);
@@ -219,6 +223,7 @@ public class TableRecordReaderImpl {
   } catch (IOException e) {
 // do not retry if the exception tells us not to do so
 if (e instanceof DoNotRetryIOException) {
+  updateCounters();
   throw e;
 }
 // try to handle all other IOExceptions by restarting
@@ -257,6 +262,7 @@ public class TableRecordReaderImpl {
   updateCounters();
   return false;
 } catch (IOException ioe) {
+  updateCounters();
   if (logScannerActivity) {
 long now = System.currentTimeMillis();
 LOG.info("Mapper took " + (now-timestamp)
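The HBASE-22928 diff above follows one simple discipline: fold the pending scan metrics into the job counters both before the current scan object is discarded (on restart) and on every exception path, so counted work is never lost. A minimal, self-contained sketch of that discipline — the class and member names here are illustrative stand-ins, not HBase API:

```java
import java.io.IOException;

// Illustrative sketch of the HBASE-22928 pattern: flush metrics of the
// scan that is about to be discarded, and flush again on exception paths.
public class CounterFlushSketch {
    private long flushed = 0;      // stands in for the Hadoop job counters
    private long pendingRows = 0;  // stands in for the current scan's ScanMetrics
    private Object currentScan = null;

    private void updateCounters() {
        // Fold pending metrics into the durable counters.
        flushed += pendingRows;
        pendingRows = 0;
    }

    // Mirrors restart(): update counters from the current scan
    // before reinitializing it.
    public void restart() {
        if (currentScan != null) {
            updateCounters();
        }
        currentScan = new Object();
    }

    public void scanRows(long rows, boolean fail) throws IOException {
        try {
            pendingRows += rows;
            if (fail) {
                throw new IOException("scanner expired");
            }
            updateCounters();
        } catch (IOException e) {
            updateCounters(); // the added line: flush before rethrowing
            throw e;
        }
    }

    public long getFlushed() { return flushed; }

    public static void main(String[] args) {
        CounterFlushSketch r = new CounterFlushSketch();
        r.restart();
        try {
            r.scanRows(5, true); // fails, but the 5 rows are still counted
        } catch (IOException expected) {
        }
        System.out.println(r.getFlushed()); // 5
    }
}
```

Here `updateCounters()` plays the role of `TableRecordReaderImpl.updateCounters()`, which merges the current scan's `ScanMetrics` into the task's counters.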



[hbase] branch branch-2 updated: HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

2019-09-01 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 58319b8  HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl
58319b8 is described below

commit 58319b8cc63495f1130a6d4918429eda21914f20
Author: Pankaj 
AuthorDate: Mon Sep 2 08:17:44 2019 +0530

HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java| 6 ++
 1 file changed, 6 insertions(+)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
index 28f4da1..1fa943b 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
@@ -77,6 +77,10 @@ public class TableRecordReaderImpl {
* @throws IOException When restarting fails.
*/
   public void restart(byte[] firstRow) throws IOException {
+// Update counter metrics based on current scan before reinitializing it
+if (currentScan != null) {
+  updateCounters();
+}
 currentScan = new Scan(scan);
 currentScan.withStartRow(firstRow);
 currentScan.setScanMetricsEnabled(true);
@@ -219,6 +223,7 @@ public class TableRecordReaderImpl {
   } catch (IOException e) {
 // do not retry if the exception tells us not to do so
 if (e instanceof DoNotRetryIOException) {
+  updateCounters();
   throw e;
 }
 // try to handle all other IOExceptions by restarting
@@ -257,6 +262,7 @@ public class TableRecordReaderImpl {
   updateCounters();
   return false;
 } catch (IOException ioe) {
+  updateCounters();
   if (logScannerActivity) {
 long now = System.currentTimeMillis();
 LOG.info("Mapper took " + (now-timestamp)



[hbase] branch branch-2.1 updated: HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

2019-09-01 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 0e87e86  HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl
0e87e86 is described below

commit 0e87e86a290c20a0a24680ee47512f480b55b6cb
Author: Pankaj 
AuthorDate: Mon Sep 2 08:17:44 2019 +0530

HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java| 6 ++
 1 file changed, 6 insertions(+)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
index 20c7b94..b9ed629 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
@@ -77,6 +77,10 @@ public class TableRecordReaderImpl {
* @throws IOException When restarting fails.
*/
   public void restart(byte[] firstRow) throws IOException {
+// Update counter metrics based on current scan before reinitializing it
+if (currentScan != null) {
+  updateCounters();
+}
 currentScan = new Scan(scan);
 currentScan.withStartRow(firstRow);
 currentScan.setScanMetricsEnabled(true);
@@ -219,6 +223,7 @@ public class TableRecordReaderImpl {
   } catch (IOException e) {
 // do not retry if the exception tells us not to do so
 if (e instanceof DoNotRetryIOException) {
+  updateCounters();
   throw e;
 }
 // try to handle all other IOExceptions by restarting
@@ -257,6 +262,7 @@ public class TableRecordReaderImpl {
   updateCounters();
   return false;
 } catch (IOException ioe) {
+  updateCounters();
   if (logScannerActivity) {
 long now = System.currentTimeMillis();
 LOG.info("Mapper took " + (now-timestamp)



[hbase] branch branch-2.2 updated: HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

2019-09-01 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new be41a76  HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl
be41a76 is described below

commit be41a767cf0169117d3530fb3723246a4023da33
Author: Pankaj 
AuthorDate: Mon Sep 2 08:17:44 2019 +0530

HBASE-22928 ScanMetrics counter update may not happen in case of exception in TableRecordReaderImpl

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java| 6 ++
 1 file changed, 6 insertions(+)

diff --git a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
index 28f4da1..1fa943b 100644
--- a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
+++ b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
@@ -77,6 +77,10 @@ public class TableRecordReaderImpl {
* @throws IOException When restarting fails.
*/
   public void restart(byte[] firstRow) throws IOException {
+// Update counter metrics based on current scan before reinitializing it
+if (currentScan != null) {
+  updateCounters();
+}
 currentScan = new Scan(scan);
 currentScan.withStartRow(firstRow);
 currentScan.setScanMetricsEnabled(true);
@@ -219,6 +223,7 @@ public class TableRecordReaderImpl {
   } catch (IOException e) {
 // do not retry if the exception tells us not to do so
 if (e instanceof DoNotRetryIOException) {
+  updateCounters();
   throw e;
 }
 // try to handle all other IOExceptions by restarting
@@ -257,6 +262,7 @@ public class TableRecordReaderImpl {
   updateCounters();
   return false;
 } catch (IOException ioe) {
+  updateCounters();
   if (logScannerActivity) {
 long now = System.currentTimeMillis();
 LOG.info("Mapper took " + (now-timestamp)



[hbase] branch branch-1 updated: HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode

2019-09-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 5bf60ec  HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode
5bf60ec is described below

commit 5bf60ec55fdf637d80492b61e6e6d8d605c5ef4a
Author: zbq.dean 
AuthorDate: Mon Sep 16 15:14:39 2019 +0800

HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode

Signed-off-by: Reid Chan 
Signed-off-by: Stack 
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  75 --
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 152 ++-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  59 
 .../hbase/io/hfile/bucket/TestFileIOEngine.java|   2 +-
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 297 +
 5 files changed, 553 insertions(+), 32 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index c5a1b21..98abfc8 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -29,6 +29,7 @@ import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.io.Serializable;
 import java.nio.ByteBuffer;
+import java.security.NoSuchAlgorithmException;
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.HashSet;
@@ -69,6 +70,7 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -242,6 +244,17 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private String ioEngineName;
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
      IOException {
@@ -252,8 +265,7 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
      Configuration conf)
-  throws FileNotFoundException, IOException {
-this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
+  throws IOException {
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -275,6 +287,7 @@ public class BucketCache implements BlockCache, HeapSize {
 ", memoryFactor: " + memoryFactor);
 
 this.cacheCapacity = capacity;
+this.ioEngineName = ioEngineName;
 this.persistencePath = persistencePath;
 this.blockSize = blockSize;
 this.ioErrorsTolerationDuration = ioErrorsTolerationDuration;
@@ -288,14 +301,15 @@ public class BucketCache implements BlockCache, HeapSize {
 this.ramCache = new ConcurrentHashMap();
 
    this.backingMap = new ConcurrentHashMap((int) blockNumCapacity);
-
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
+ioEngine = getIOEngineFromName();
 if (ioEngine.isPersistent() && persistencePath != null) {
   try {
 retrieveFromFile(bucketSizes);
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
 throw new RuntimeException(cnfe);
   }
 }
@@ -359,12 +373,10 @@ public class BucketCache implements BlockCache, HeapSize {
 
   /**
* G
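The (truncated) HBASE-22890 diff above introduces the config key `hbase.bucketcache.persistent.file.integrity.check.algorithm` and a `MessageDigest`-based integrity check, defaulting to MD5. A hedged sketch of the core idea — the `checksum` helper and the in-memory `byte[]` are hypothetical stand-ins for the real `FileIOEngine` reading the persistent cache file:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of the integrity check: hash the cache file's bytes with a
// configurable MessageDigest algorithm and compare digests across restarts.
public class ChecksumSketch {
    static byte[] checksum(byte[] fileBytes, String algorithm)
            throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        return md.digest(fileBytes);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] data = "bucket cache contents".getBytes(StandardCharsets.UTF_8);
        byte[] before = checksum(data, "MD5"); // digest at shutdown
        byte[] after = checksum(data, "MD5");  // digest at restart
        // Unchanged file -> identical digests; any corruption would differ.
        System.out.println(MessageDigest.isEqual(before, after)); // true
    }
}
```

`MessageDigest.isEqual` does the constant-time comparison; swapping `"MD5"` for `"SHA-256"` only changes the configured algorithm name.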

[hbase] branch branch-1.4 updated: HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode

2019-09-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new f18b06d  HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode
f18b06d is described below

commit f18b06da1c2a106b16e2f863f01dc5ec9d4389a8
Author: zbq.dean 
AuthorDate: Mon Sep 16 15:14:39 2019 +0800

HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode

Signed-off-by: Reid Chan 
Signed-off-by: Stack 
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  75 --
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 152 ++-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  59 
 .../hbase/io/hfile/bucket/TestFileIOEngine.java|   2 +-
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 297 +
 5 files changed, 553 insertions(+), 32 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 5a4ac13..af10f2e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -29,6 +29,7 @@ import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.io.Serializable;
 import java.nio.ByteBuffer;
+import java.security.NoSuchAlgorithmException;
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.HashSet;
@@ -69,6 +70,7 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -242,6 +244,17 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private String ioEngineName;
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
      IOException {
@@ -252,8 +265,7 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
      Configuration conf)
-  throws FileNotFoundException, IOException {
-this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
+  throws IOException {
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -275,6 +287,7 @@ public class BucketCache implements BlockCache, HeapSize {
 ", memoryFactor: " + memoryFactor);
 
 this.cacheCapacity = capacity;
+this.ioEngineName = ioEngineName;
 this.persistencePath = persistencePath;
 this.blockSize = blockSize;
 this.ioErrorsTolerationDuration = ioErrorsTolerationDuration;
@@ -288,14 +301,15 @@ public class BucketCache implements BlockCache, HeapSize {
 this.ramCache = new ConcurrentHashMap();
 
    this.backingMap = new ConcurrentHashMap((int) blockNumCapacity);
-
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
+ioEngine = getIOEngineFromName();
 if (ioEngine.isPersistent() && persistencePath != null) {
   try {
 retrieveFromFile(bucketSizes);
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
 throw new RuntimeException(cnfe);
   }
 }
@@ -359,12 +373,10 @@ public class BucketCache implements BlockCache, HeapSize {
 

[hbase] branch branch-1 updated: Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"

2019-09-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 3b6cff5  Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"
3b6cff5 is described below

commit 3b6cff590e99245cd972193d32ef73618cce41b3
Author: Reid Chan 
AuthorDate: Mon Sep 16 17:50:57 2019 +0800

Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"

Reason: There're still some concerns on whether to delete cached data file.

This reverts commit 5bf60ec55fdf637d80492b61e6e6d8d605c5ef4a.
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  75 ++
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 152 +--
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  59 
 .../hbase/io/hfile/bucket/TestFileIOEngine.java|   2 +-
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 297 -
 5 files changed, 32 insertions(+), 553 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 98abfc8..c5a1b21 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -29,7 +29,6 @@ import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.io.Serializable;
 import java.nio.ByteBuffer;
-import java.security.NoSuchAlgorithmException;
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.HashSet;
@@ -70,7 +69,6 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -244,17 +242,6 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
-  private String ioEngineName;
-  private static final String FILE_VERIFY_ALGORITHM =
-"hbase.bucketcache.persistent.file.integrity.check.algorithm";
-  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
-
-  /**
-   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
-   * persistent file integrity, default algorithm is MD5
-   * */
-  private String algorithm;
-
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
      IOException {
@@ -265,7 +252,8 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
      Configuration conf)
-  throws IOException {
+  throws FileNotFoundException, IOException {
+this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -287,7 +275,6 @@ public class BucketCache implements BlockCache, HeapSize {
 ", memoryFactor: " + memoryFactor);
 
 this.cacheCapacity = capacity;
-this.ioEngineName = ioEngineName;
 this.persistencePath = persistencePath;
 this.blockSize = blockSize;
 this.ioErrorsTolerationDuration = ioErrorsTolerationDuration;
@@ -301,15 +288,14 @@ public class BucketCache implements BlockCache, HeapSize {
 this.ramCache = new ConcurrentHashMap();
 
    this.backingMap = new ConcurrentHashMap((int) blockNumCapacity);
-this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
-ioEngine = getIOEngineFromName();
+
 if (ioEngine.isPersistent() && persistencePath != null) {
   try {
 retrieveFromFile(bucketSizes);
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
 th

[hbase] branch branch-1.4 updated: Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"

2019-09-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new a31cd0c  Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"
a31cd0c is described below

commit a31cd0c0df9705db41551fe3403d5aef0da20033
Author: Reid Chan 
AuthorDate: Mon Sep 16 17:50:57 2019 +0800

Revert "HBASE-22890 Verify the files when RegionServer is starting and BucketCache is in file mode"

Reason: There're still some concerns on whether to delete cached data file.

This reverts commit 5bf60ec55fdf637d80492b61e6e6d8d605c5ef4a.
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  75 ++
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java | 152 +--
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  59 
 .../hbase/io/hfile/bucket/TestFileIOEngine.java|   2 +-
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 297 -
 5 files changed, 32 insertions(+), 553 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index af10f2e..5a4ac13 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -29,7 +29,6 @@ import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.io.Serializable;
 import java.nio.ByteBuffer;
-import java.security.NoSuchAlgorithmException;
 import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.HashSet;
@@ -70,7 +69,6 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
-import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -244,17 +242,6 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
-  private String ioEngineName;
-  private static final String FILE_VERIFY_ALGORITHM =
-"hbase.bucketcache.persistent.file.integrity.check.algorithm";
-  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
-
-  /**
-   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
-   * persistent file integrity, default algorithm is MD5
-   * */
-  private String algorithm;
-
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
      IOException {
@@ -265,7 +252,8 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
      Configuration conf)
-  throws IOException {
+  throws FileNotFoundException, IOException {
+this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -287,7 +275,6 @@ public class BucketCache implements BlockCache, HeapSize {
 ", memoryFactor: " + memoryFactor);
 
 this.cacheCapacity = capacity;
-this.ioEngineName = ioEngineName;
 this.persistencePath = persistencePath;
 this.blockSize = blockSize;
 this.ioErrorsTolerationDuration = ioErrorsTolerationDuration;
@@ -301,15 +288,14 @@ public class BucketCache implements BlockCache, HeapSize {
 this.ramCache = new ConcurrentHashMap();
 
    this.backingMap = new ConcurrentHashMap((int) blockNumCapacity);
-this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
-ioEngine = getIOEngineFromName();
+
 if (ioEngine.isPersistent() && persistencePath != null) {
   try {
 retrieveFromFile(bucketSizes);
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
 th

[hbase] branch branch-1 updated: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new b5b871c  HBASE-22890 Verify the file integrity in persistent IOEngine
b5b871c is described below

commit b5b871c13397acbdea82c091f5e2fd43e95e3b13
Author: zbq.dean 
AuthorDate: Fri Sep 20 14:09:34 2019 +0800

HBASE-22890 Verify the file integrity in persistent IOEngine

Signed-off-by Anoop Sam John 
Signed-off-by stack 
Signed-off-by Reid Chan 
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  81 --
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java |  88 ++-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  44 
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 282 +
 4 files changed, 473 insertions(+), 22 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index c5a1b21..1e87a8e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -69,6 +69,8 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -242,6 +244,16 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
      IOException {
@@ -252,8 +264,9 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
      int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
      Configuration conf)
-  throws FileNotFoundException, IOException {
-this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
+  throws IOException {
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
+ioEngine = getIOEngineFromName(ioEngineName, capacity);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -295,7 +308,7 @@ public class BucketCache implements BlockCache, HeapSize {
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
 throw new RuntimeException(cnfe);
   }
 }
@@ -1021,41 +1034,69 @@ public class BucketCache implements BlockCache, HeapSize {
 
   private void persistToFile() throws IOException {
 assert !cacheEnabled;
-FileOutputStream fos = null;
-ObjectOutputStream oos = null;
-try {
+try (ObjectOutputStream oos = new ObjectOutputStream(
+  new FileOutputStream(persistencePath, false))){
   if (!ioEngine.isPersistent()) {
    throw new IOException("Attempt to persist non-persistent cache mappings!");
   }
-  fos = new FileOutputStream(persistencePath, false);
-  oos = new ObjectOutputStream(fos);
+  byte[] checksum = ((PersistentIOEngine) ioEngine).calculateChecksum(algorithm);
+  if (checksum != null) {
+oos.write(ProtobufUtil.PB_MAGIC);
+oos.writeInt(checksum.length);
+oos.write(checksum);
+  }
   oos.writeLong(cacheCapacity);
   oos.writeUTF(ioEngine.getClass().getName());
   oos.writeUTF(backingMap.getClass().getName());
   oos.writeObject(deserialiserMap);
   oos.
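`persistToFile()` in the diff above prefixes the persisted metadata with a magic marker, the checksum length, and the checksum bytes, so a later `retrieveFromFile()` can distinguish an old-format file (no magic) from a checksummed one and detect a changed cache file. A minimal round-trip sketch of that header layout, assuming a 4-byte marker in place of `ProtobufUtil.PB_MAGIC` and in-memory streams in place of the persistence file:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Arrays;

// Sketch of the magic + length + checksum header used by the persistence file.
public class HeaderSketch {
    static final byte[] MAGIC = {'P', 'B', 'U', 'F'}; // stand-in for PB_MAGIC

    static byte[] persist(byte[] checksum) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.write(MAGIC);               // marks the new, checksummed format
            oos.writeInt(checksum.length);  // so the reader knows how much to read
            oos.write(checksum);
            // ... the real code then writes capacity, engine class, backingMap ...
        }
        return bos.toByteArray();
    }

    static byte[] readChecksum(byte[] persisted) throws IOException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(persisted))) {
            byte[] magic = new byte[MAGIC.length];
            ois.readFully(magic);
            if (!Arrays.equals(magic, MAGIC)) {
                throw new IOException("old-format persistence file");
            }
            byte[] checksum = new byte[ois.readInt()];
            ois.readFully(checksum);
            return checksum;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] checksum = {1, 2, 3, 4};
        byte[] roundTrip = readChecksum(persist(checksum));
        System.out.println(Arrays.equals(checksum, roundTrip)); // true
    }
}
```

The stored checksum would then be compared against a freshly computed digest of the cache file; a mismatch means the file changed while the RegionServer was down.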

[hbase] branch branch-1.4 updated: HBASE-22890 Verify the file integrity in persistent IOEngine

2019-09-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 29080ed  HBASE-22890 Verify the file integrity in persistent IOEngine
29080ed is described below

commit 29080eda9859a3689910eab285a3e51481f772ff
Author: zbq.dean 
AuthorDate: Fri Sep 20 14:09:34 2019 +0800

HBASE-22890 Verify the file integrity in persistent IOEngine

Signed-off-by: Anoop Sam John 
Signed-off-by: stack 
Signed-off-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  81 --
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java |  88 ++-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  |  44 
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 282 +
 4 files changed, 473 insertions(+), 22 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 5a4ac13..5c02166 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -69,6 +69,8 @@ import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
 import org.apache.hadoop.hbase.io.hfile.CacheableDeserializerIdManager;
 import org.apache.hadoop.hbase.io.hfile.CachedBlock;
 import org.apache.hadoop.hbase.io.hfile.HFileBlock;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.util.HasThread;
 import org.apache.hadoop.hbase.util.IdReadWriteLock;
@@ -242,6 +244,16 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
   public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
   int writerThreadNum, int writerQLen, String persistencePath) throws FileNotFoundException,
   IOException {
@@ -252,8 +264,9 @@ public class BucketCache implements BlockCache, HeapSize {
   public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
  int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
  Configuration conf)
-  throws FileNotFoundException, IOException {
-this.ioEngine = getIOEngineFromName(ioEngineName, capacity);
+  throws IOException {
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
+ioEngine = getIOEngineFromName(ioEngineName, capacity);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
 if (blockNumCapacity >= Integer.MAX_VALUE) {
@@ -295,7 +308,7 @@ public class BucketCache implements BlockCache, HeapSize {
   } catch (IOException ioex) {
 LOG.error("Can't restore from file because of", ioex);
   } catch (ClassNotFoundException cnfe) {
-LOG.error("Can't restore from file in rebuild because can't deserialise",cnfe);
+LOG.error("Can't restore from file in rebuild because can't deserialise", cnfe);
 throw new RuntimeException(cnfe);
   }
 }
@@ -1021,41 +1034,69 @@ public class BucketCache implements BlockCache, HeapSize {
 
   private void persistToFile() throws IOException {
 assert !cacheEnabled;
-FileOutputStream fos = null;
-ObjectOutputStream oos = null;
-try {
+try (ObjectOutputStream oos = new ObjectOutputStream(
+  new FileOutputStream(persistencePath, false))){
   if (!ioEngine.isPersistent()) {
 throw new IOException("Attempt to persist non-persistent cache mappings!");
   }
-  fos = new FileOutputStream(persistencePath, false);
-  oos = new ObjectOutputStream(fos);
+  byte[] checksum = ((PersistentIOEngine) ioEngine).calculateChecksum(algorithm);
+  if (checksum != null) {
+oos.write(ProtobufUtil.PB_MAGIC);
+oos.writeInt(checksum.length);
+oos.write(checksum);
+  }
   oos.writeLong(cacheCapacity);
   oos.writeUTF(ioEngine.getClass().getName());
   oos.writeUTF(backingMap.getClass().getName());
   oos.writeObject(deserialiserMap);
  
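The `persistToFile()` hunk above prefixes the serialized cache state with a magic marker, the checksum length, and the checksum bytes, so a later restore can verify the file's integrity before trusting the rest of the stream. Below is a minimal, self-contained sketch of that format under stated assumptions: it is not the HBase code, the class and the `MAGIC` marker value are illustrative stand-ins (the real code writes `ProtobufUtil.PB_MAGIC` and delegates digesting to `PersistentIOEngine.calculateChecksum`), and the payload is a plain byte array rather than real cache mappings.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch of a checksum-prefixed persistence format: magic marker, digest
// length, and digest bytes precede the payload, so the reader can detect a
// corrupt or foreign file up front. Names here are illustrative, not HBase's.
public class ChecksumedPersistence {
  static final byte[] MAGIC = {'P', 'B', 'U', 'F'}; // stand-in for ProtobufUtil.PB_MAGIC

  static byte[] digest(byte[] payload, String algorithm) throws Exception {
    return MessageDigest.getInstance(algorithm).digest(payload);
  }

  static void persist(File f, byte[] payload, String algorithm) throws Exception {
    try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(f, false))) {
      byte[] checksum = digest(payload, algorithm);
      oos.write(MAGIC);            // marker first, as in the hunk above
      oos.writeInt(checksum.length);
      oos.write(checksum);
      oos.writeObject(payload);    // then the real state
    }
  }

  static byte[] restore(File f, String algorithm) throws Exception {
    try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
      byte[] magic = new byte[MAGIC.length];
      ois.readFully(magic);
      if (!Arrays.equals(magic, MAGIC)) {
        throw new IOException("bad magic, refusing to restore");
      }
      byte[] expected = new byte[ois.readInt()];
      ois.readFully(expected);
      byte[] payload = (byte[]) ois.readObject();
      if (!Arrays.equals(expected, digest(payload, algorithm))) {
        throw new IOException("checksum mismatch, persistent file is corrupt");
      }
      return payload;
    }
  }

  public static void main(String[] args) throws Exception {
    File f = File.createTempFile("bucketcache", ".persist");
    f.deleteOnExit();
    byte[] state = "cache-mappings".getBytes(StandardCharsets.UTF_8);
    persist(f, state, "MD5");
    System.out.println(new String(restore(f, "MD5"), StandardCharsets.UTF_8)); // prints "cache-mappings"
  }
}
```

Note the round trip through `ObjectOutputStream`/`ObjectInputStream`: raw bytes written with `write()` after the stream header are read back symmetrically with `readFully()`, which is why the checksum prefix can coexist with the serialized objects that follow it.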

[hbase] branch master updated: HBASE-22975 Add read and write QPS metrics at server level and table level (#615)

2019-09-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new a8e3d23  HBASE-22975 Add read and write QPS metrics at server level and table level (#615)
a8e3d23 is described below

commit a8e3d23cca4d98d1c288bca0da056f92c49b453f
Author: zbq.dean 
AuthorDate: Mon Sep 23 12:51:25 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level (#615)

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 +++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 102 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  16 
 .../hbase/regionserver/MetricsRegionServer.java|  33 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 237 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..d0085ff
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
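The interface above only declares the per-table read/write update hooks; the commit's `MetricsTableQueryMeterImpl` (not shown in full here) backs them with meters in the metrics registry. As a hedged sketch of the shape such an implementation takes, the class below keeps one counter per table in concurrent maps. It is an assumption-laden simplification: `TableName` is reduced to a `String` key, and `LongAdder` counters replace the registry-backed rate meters the real implementation uses.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative per-table query meter: lazily creates one read and one write
// counter per table, mirroring the update methods of MetricsTableQueryMeter.
// This is a sketch, not the HBase MetricsTableQueryMeterImpl.
public class SimpleTableQueryMeter {
  private final Map<String, LongAdder> readMeters = new ConcurrentHashMap<>();
  private final Map<String, LongAdder> writeMeters = new ConcurrentHashMap<>();

  public void updateTableReadQueryMeter(String tableName, long count) {
    readMeters.computeIfAbsent(tableName, t -> new LongAdder()).add(count);
  }

  public void updateTableReadQueryMeter(String tableName) {
    updateTableReadQueryMeter(tableName, 1L);
  }

  public void updateTableWriteQueryMeter(String tableName, long count) {
    writeMeters.computeIfAbsent(tableName, t -> new LongAdder()).add(count);
  }

  public void updateTableWriteQueryMeter(String tableName) {
    updateTableWriteQueryMeter(tableName, 1L);
  }

  public long readCount(String tableName) {
    LongAdder a = readMeters.get(tableName);
    return a == null ? 0L : a.sum();
  }

  public long writeCount(String tableName) {
    LongAdder a = writeMeters.get(tableName);
    return a == null ? 0L : a.sum();
  }
}
```

`computeIfAbsent` on a `ConcurrentHashMap` makes meter creation race-free, which matters because region handler threads for many tables update these meters concurrently.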
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 980388f..5a3f3b9 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -20,6 +20,8 @@ import java.util.HashMap;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -171,4 +173,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.ja

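The in-line comment in the `getMetrics` hunk above ("source is registered in supers constructor, sometimes called before the whole initialization") names a classic Java pitfall: a superclass constructor can invoke an overridable method before the subclass's fields are assigned, so the override must tolerate null fields, which is exactly what the `metricsAdapter != null` guard does. The sketch below reproduces the pitfall in isolation; the class names are illustrative, not HBase's.

```java
// Demonstrates why getMetrics() guards its field with a null check:
// Base's constructor dispatches to the Derived override before Derived's
// field initializers have run, so the field reads as null at that point.
class Base {
  Base() {
    snapshot();           // virtual call during construction
  }
  void snapshot() {}
}

class Derived extends Base {
  private final StringBuilder adapter = new StringBuilder("ready");

  static String lastSeen;

  @Override
  void snapshot() {
    // Without this guard, the constructor-time call would NPE on 'adapter'.
    if (adapter != null) {
      lastSeen = adapter.toString();
    } else {
      lastSeen = "skipped: called before initialization";
    }
  }
}

public class ConstructorCallPitfall {
  public static void main(String[] args) {
    Derived d = new Derived();             // constructor-time call hits the null branch
    System.out.println(Derived.lastSeen);  // prints "skipped: called before initialization"
    d.snapshot();                          // post-construction call sees the field
    System.out.println(Derived.lastSeen);  // prints "ready"
  }
}
```

Even a `final` field with an initializer reads as its default value (`null`) during the superclass constructor, which is why the guard is needed even though the field is assigned unconditionally.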
[hbase] branch branch-1 updated: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 771e184  HBASE-22975 Add read and write QPS metrics at server level and table level
771e184 is described below

commit 771e18437605c80ba37d8b126c2ffa668ca56805
Author: zbq.dean 
AuthorDate: Mon Sep 23 13:51:51 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 ++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 107 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  20 
 .../hbase/regionserver/MetricsRegionServer.java|  34 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 247 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..fcce6e3
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 2c052f2..ec6e932 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -22,6 +22,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -172,4 +174,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoo

[hbase] branch branch-1.4 updated: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 8fa1505  HBASE-22975 Add read and write QPS metrics at server level and table level
8fa1505 is described below

commit 8fa1505ca4e0912c211551b6d7ae5cae9ab17a0d
Author: zbq.dean 
AuthorDate: Mon Sep 23 13:51:51 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 ++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 107 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  20 
 .../hbase/regionserver/MetricsRegionServer.java|  34 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 247 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..fcce6e3
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 2c052f2..ec6e932 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -22,6 +22,8 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -172,4 +174,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoo

[hbase] branch branch-2 updated: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 0c5b6df  HBASE-22975 Add read and write QPS metrics at server level and table level
0c5b6df is described below

commit 0c5b6df52e8c973d7993294d50028399ad66c1af
Author: zbq.dean 
AuthorDate: Mon Sep 23 14:18:29 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 +++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 102 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  16 
 .../hbase/regionserver/MetricsRegionServer.java|  34 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 238 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..d0085ff
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 980388f..5a3f3b9 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -20,6 +20,8 @@ import java.util.HashMap;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -171,4 +173,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java b/hbas

[hbase] branch branch-2.1 updated: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new e040010  HBASE-22975 Add read and write QPS metrics at server level and table level
e040010 is described below

commit e040010ed0391d29b36bc088dbaba76502bd85f2
Author: zbq.dean 
AuthorDate: Mon Sep 23 14:18:29 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 +++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 102 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  16 
 .../hbase/regionserver/MetricsRegionServer.java|  34 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 238 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..d0085ff
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 980388f..5a3f3b9 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -20,6 +20,8 @@ import java.util.HashMap;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -171,4 +173,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java b/

[hbase] branch branch-2.2 updated: HBASE-22975 Add read and write QPS metrics at server level and table level

2019-09-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new fd4cf24  HBASE-22975 Add read and write QPS metrics at server level and table level
fd4cf24 is described below

commit fd4cf240c3896e1e29bca2c18859845a722b4ad7
Author: zbq.dean 
AuthorDate: Mon Sep 23 14:18:29 2019 +0800

HBASE-22975 Add read and write QPS metrics at server level and table level

Signed-off-by: Reid Chan 
---
 .../hbase/regionserver/MetricsTableQueryMeter.java |  53 +++
 .../regionserver/MetricsTableLatenciesImpl.java|  13 +++
 .../regionserver/MetricsTableQueryMeterImpl.java   | 102 +
 .../apache/hadoop/hbase/regionserver/HRegion.java  |  16 
 .../hbase/regionserver/MetricsRegionServer.java|  34 +++
 .../regionserver/RegionServerTableMetrics.java |  20 
 6 files changed, 238 insertions(+)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
new file mode 100644
index 000..d0085ff
--- /dev/null
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeter.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.TableName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Query Per Second for each table in a RegionServer.
+ */
+@InterfaceAudience.Private
+public interface MetricsTableQueryMeter {
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableReadQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table read QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableReadQueryMeter(TableName tableName);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   * @param count Number of occurrences to record
+   */
+  void updateTableWriteQueryMeter(TableName tableName, long count);
+
+  /**
+   * Update table write QPS
+   * @param tableName The table the metric is for
+   */
+  void updateTableWriteQueryMeter(TableName tableName);
+}
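The interface above is small enough to sketch with plain JDK types. The following is a hypothetical, simplified stand-in for illustration only: String table names replace TableName, and LongAdder counters replace the meters that the real MetricsTableQueryMeterImpl registers with the HBase metrics registry.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Simplified sketch of the MetricsTableQueryMeter contract: per-table
// read/write query counters. Class and accessor names are invented here;
// the real implementation wires table-level meters into the metrics registry.
class SimpleTableQueryMeter {
  private final Map<String, LongAdder> readCounts = new ConcurrentHashMap<>();
  private final Map<String, LongAdder> writeCounts = new ConcurrentHashMap<>();

  /** Record {@code count} read queries against the table. */
  void updateTableReadQueryMeter(String tableName, long count) {
    readCounts.computeIfAbsent(tableName, k -> new LongAdder()).add(count);
  }

  /** Record a single read query (the no-count overload). */
  void updateTableReadQueryMeter(String tableName) {
    updateTableReadQueryMeter(tableName, 1L);
  }

  /** Record {@code count} write queries against the table. */
  void updateTableWriteQueryMeter(String tableName, long count) {
    writeCounts.computeIfAbsent(tableName, k -> new LongAdder()).add(count);
  }

  /** Record a single write query. */
  void updateTableWriteQueryMeter(String tableName) {
    updateTableWriteQueryMeter(tableName, 1L);
  }

  long readCount(String tableName) {
    LongAdder a = readCounts.get(tableName);
    return a == null ? 0L : a.sum();
  }

  long writeCount(String tableName) {
    LongAdder a = writeCounts.get(tableName);
    return a == null ? 0L : a.sum();
  }
}
```

The two overloads per direction mirror the interface: hot paths that already know a batch size call the count variant, single-operation paths call the one-argument form.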
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 980388f..5a3f3b9 100644
--- a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -20,6 +20,8 @@ import java.util.HashMap;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.metrics.BaseSourceImpl;
 import org.apache.hadoop.metrics2.MetricHistogram;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry;
 import org.apache.yetus.audience.InterfaceAudience;
 
@@ -171,4 +173,15 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   public void updateScanTime(String tableName, long t) {
 getOrCreateTableHistogram(tableName).updateScanTime(t);
   }
+
+  @Override
+  public void getMetrics(MetricsCollector metricsCollector, boolean all) {
+MetricsRecordBuilder mrb = metricsCollector.addRecord(metricsName);
+// source is registered in supers constructor, sometimes called before the whole initialization.
+metricsRegistry.snapshot(mrb, all);
+if (metricsAdapter != null) {
+  // snapshot MetricRegistry as well
+  metricsAdapter.snapshotAllMetrics(registry, mrb);
+}
+  }
 }
diff --git a/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableQueryMeterImpl.java b/

[hbase] branch 1.4 deleted (was a8e3d23)

2019-09-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a change to branch 1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git.


 was a8e3d23  HBASE-22975 Add read and write QPS metrics at server level and table level (#615)

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[hbase] branch master updated: HBASE-23017 Verify the file integrity in persistent IOEngine

2019-10-10 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 16da123  HBASE-23017 Verify the file integrity in persistent IOEngine
16da123 is described below

commit 16da123df45af712f604cff32897d6c1166b86b4
Author: Baiqiang Zhao 
AuthorDate: Fri Oct 11 14:38:00 2019 +0800

HBASE-23017 Verify the file integrity in persistent IOEngine

Signed-off-by: Anoop Sam John 
Signed-off-by: Reid Chan 
---
 .../src/main/protobuf/BucketCacheEntry.proto   |   1 +
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  22 ++
 .../hbase/io/hfile/bucket/BucketProtoUtils.java|  14 +-
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java |  12 +-
 .../hbase/io/hfile/bucket/FileMmapIOEngine.java|  11 +-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  | 116 ++
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 247 +
 7 files changed, 411 insertions(+), 12 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto 
b/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
index d78acc0..038c6ca 100644
--- a/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
@@ -31,6 +31,7 @@ message BucketCacheEntry {
   required string map_class = 3;
   map deserializers = 4;
   required BackingMap backing_map = 5;
+  optional bytes checksum = 6;
 }
 
 message BackingMap {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 99abfea..7d8f582 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -238,6 +238,16 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
  int writerThreadNum, int writerQLen, String persistencePath) throws IOException {
 this(ioEngineName, capacity, blockSize, bucketSizes, writerThreadNum, writerQLen,
@@ -247,6 +257,7 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
  int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
  Configuration conf) throws IOException {
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
 this.ioEngine = getIOEngineFromName(ioEngineName, capacity, persistencePath);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
@@ -1131,6 +1142,13 @@ public class BucketCache implements BlockCache, HeapSize {
   }
 
  private void parsePB(BucketCacheProtos.BucketCacheEntry proto) throws IOException {
+if (proto.hasChecksum()) {
+  ((PersistentIOEngine) ioEngine).verifyFileIntegrity(proto.getChecksum().toByteArray(),
+algorithm);
+} else {
+  // if has not checksum, it means the persistence file is old format
+  LOG.info("Persistent file is old format, it does not support verifying file integrity!");
+}
 verifyCapacityAndClasses(proto.getCacheCapacity(), proto.getIoClass(), proto.getMapClass());
 backingMap = BucketProtoUtils.fromPB(proto.getDeserializersMap(), proto.getBackingMap());
   }
@@ -1235,6 +1253,10 @@ public class BucketCache implements BlockCache, HeapSize {
 return this.bucketAllocator.getUsedSize();
   }
 
+  protected String getAlgorithm() {
+return algorithm;
+  }
+
   /**
* Evicts all blocks for a specific HFile.
* 
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
index 69b8370..f3d63d4 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.io.hfile.BlockPriority;
 import org.apache.hadoo
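The patch above persists a digest of the cache file and compares it on restart. The MessageDigest core of that idea can be sketched independently of HBase; the class and method names below are invented for illustration and are not the PersistentIOEngine API.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of checksum-based file integrity verification:
// hash the persisted bytes with a configurable algorithm (MD5 by default
// in the patch) and compare against the digest stored alongside them.
class FileIntegrity {
  /** Digest of raw bytes with the given algorithm (e.g. "MD5", "SHA-256"). */
  static byte[] checksum(byte[] data, String algorithm) {
    try {
      return MessageDigest.getInstance(algorithm).digest(data);
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalArgumentException("unknown digest algorithm: " + algorithm, e);
    }
  }

  /** Digest of a file's contents; reads the whole file into memory. */
  static byte[] checksum(Path file, String algorithm) {
    try {
      return checksum(Files.readAllBytes(file), algorithm);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  /** True iff the file's current digest matches the stored one. */
  static boolean verify(Path file, byte[] expected, String algorithm) {
    // MessageDigest.isEqual is a constant-time comparison.
    return MessageDigest.isEqual(expected, checksum(file, algorithm));
  }
}
```

If the stored checksum is absent (the "old format" branch in parsePB above), verification is simply skipped rather than failed.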

[hbase] branch branch-2 updated: HBASE-23017 Verify the file integrity in persistent IOEngine

2019-10-10 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 273c0ba  HBASE-23017 Verify the file integrity in persistent IOEngine
273c0ba is described below

commit 273c0ba69fa681b543513a1a4cb1fed052d11bba
Author: Baiqiang Zhao 
AuthorDate: Fri Oct 11 14:38:00 2019 +0800

HBASE-23017 Verify the file integrity in persistent IOEngine

Signed-off-by: Anoop Sam John 
Signed-off-by: Reid Chan 
---
 .../src/main/protobuf/BucketCacheEntry.proto   |   1 +
 .../hadoop/hbase/io/hfile/bucket/BucketCache.java  |  22 ++
 .../hbase/io/hfile/bucket/BucketProtoUtils.java|  14 +-
 .../hadoop/hbase/io/hfile/bucket/FileIOEngine.java |  12 +-
 .../hbase/io/hfile/bucket/FileMmapIOEngine.java|  11 +-
 .../hbase/io/hfile/bucket/PersistentIOEngine.java  | 116 ++
 .../io/hfile/bucket/TestVerifyBucketCacheFile.java | 247 +
 7 files changed, 411 insertions(+), 12 deletions(-)

diff --git a/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto 
b/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
index d78acc0..038c6ca 100644
--- a/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/BucketCacheEntry.proto
@@ -31,6 +31,7 @@ message BucketCacheEntry {
   required string map_class = 3;
   map deserializers = 4;
   required BackingMap backing_map = 5;
+  optional bytes checksum = 6;
 }
 
 message BackingMap {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 99abfea..7d8f582 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -238,6 +238,16 @@ public class BucketCache implements BlockCache, HeapSize {
   /** In-memory bucket size */
   private float memoryFactor;
 
+  private static final String FILE_VERIFY_ALGORITHM =
+"hbase.bucketcache.persistent.file.integrity.check.algorithm";
+  private static final String DEFAULT_FILE_VERIFY_ALGORITHM = "MD5";
+
+  /**
+   * Use {@link java.security.MessageDigest} class's encryption algorithms to check
+   * persistent file integrity, default algorithm is MD5
+   * */
+  private String algorithm;
+
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
  int writerThreadNum, int writerQLen, String persistencePath) throws IOException {
 this(ioEngineName, capacity, blockSize, bucketSizes, writerThreadNum, writerQLen,
@@ -247,6 +257,7 @@ public class BucketCache implements BlockCache, HeapSize {
  public BucketCache(String ioEngineName, long capacity, int blockSize, int[] bucketSizes,
  int writerThreadNum, int writerQLen, String persistencePath, int ioErrorsTolerationDuration,
  Configuration conf) throws IOException {
+this.algorithm = conf.get(FILE_VERIFY_ALGORITHM, DEFAULT_FILE_VERIFY_ALGORITHM);
 this.ioEngine = getIOEngineFromName(ioEngineName, capacity, persistencePath);
 this.writerThreads = new WriterThread[writerThreadNum];
 long blockNumCapacity = capacity / blockSize;
@@ -1131,6 +1142,13 @@ public class BucketCache implements BlockCache, HeapSize {
   }
 
  private void parsePB(BucketCacheProtos.BucketCacheEntry proto) throws IOException {
+if (proto.hasChecksum()) {
+  ((PersistentIOEngine) ioEngine).verifyFileIntegrity(proto.getChecksum().toByteArray(),
+algorithm);
+} else {
+  // if has not checksum, it means the persistence file is old format
+  LOG.info("Persistent file is old format, it does not support verifying file integrity!");
+}
 verifyCapacityAndClasses(proto.getCacheCapacity(), proto.getIoClass(), proto.getMapClass());
 backingMap = BucketProtoUtils.fromPB(proto.getDeserializersMap(), proto.getBackingMap());
   }
@@ -1235,6 +1253,10 @@ public class BucketCache implements BlockCache, HeapSize {
 return this.bucketAllocator.getUsedSize();
   }
 
+  protected String getAlgorithm() {
+return algorithm;
+  }
+
   /**
* Evicts all blocks for a specific HFile.
* 
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
index 69b8370..f3d63d4 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketProtoUtils.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.io.hfile.BlockPriority;
 import org.apache.hadoo

[hbase] branch master updated: HBASE-23144 Compact_rs throw wrong number of arguments

2019-10-11 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new f0b2212  HBASE-23144 Compact_rs throw wrong number of arguments
f0b2212 is described below

commit f0b22120a09f27c730b4a06c34a9b3f24433bd49
Author: Karthik Palanisamy 
AuthorDate: Fri Oct 11 00:08:57 2019 -0700

HBASE-23144 Compact_rs throw wrong number of arguments

Signed-off-by: Reid Chan 
---
 hbase-shell/src/main/ruby/hbase/admin.rb  |  6 +-
 hbase-shell/src/test/ruby/hbase/admin_test.rb | 10 ++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/hbase-shell/src/main/ruby/hbase/admin.rb b/hbase-shell/src/main/ruby/hbase/admin.rb
index b854eaf..5f4f16d 100644
--- a/hbase-shell/src/main/ruby/hbase/admin.rb
+++ b/hbase-shell/src/main/ruby/hbase/admin.rb
@@ -115,7 +115,11 @@ module Hbase
 
 # Requests to compact all regions on the regionserver
 def compact_regionserver(servername, major = false)
-  @admin.compactRegionServer(ServerName.valueOf(servername), major)
+  if major
+@admin.majorCompactRegionServer(ServerName.valueOf(servername))
+  else
+@admin.compactRegionServer(ServerName.valueOf(servername))
+  end
 end
 
 
#--
diff --git a/hbase-shell/src/test/ruby/hbase/admin_test.rb b/hbase-shell/src/test/ruby/hbase/admin_test.rb
index 1461c7f..e001445 100644
--- a/hbase-shell/src/test/ruby/hbase/admin_test.rb
+++ b/hbase-shell/src/test/ruby/hbase/admin_test.rb
@@ -107,6 +107,16 @@ module Hbase
 command(:flush, s.toString)
   end
 end
+
#---
+define_test 'compact all regions by server name' do
+  servers = admin.list_liveservers
+  servers.each do |s|
+command(:compact_rs, s.to_s)
+# major compact
+command(:compact_rs, s.to_s, true)
+break
+  end
+end
 
 
#---
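The Ruby fix above works around an Admin API change: the overload that took a boolean no longer exists, so passing `major` through produced the "wrong number of arguments" error, and the shell now dispatches on the flag instead. A hypothetical Java stand-in for that dispatch; `AdminStub` and its method bodies are invented for illustration, though `compactRegionServer`/`majorCompactRegionServer` mirror the real Admin method names.

```java
// Sketch of the dispatch the shell wrapper now performs: instead of
// forwarding the `major` flag as an extra argument (a wrong-arity call
// against the new API), pick one of two single-argument methods.
class CompactDispatch {
  interface AdminStub {
    String compactRegionServer(String serverName);
    String majorCompactRegionServer(String serverName);
  }

  static String compactRegionserver(AdminStub admin, String serverName, boolean major) {
    return major
        ? admin.majorCompactRegionServer(serverName)
        : admin.compactRegionServer(serverName);
  }
}
```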
 



[hbase] branch master updated: HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

2019-10-11 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 5848e14  HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file
5848e14 is described below

commit 5848e149f43c2b176baa25c0fa14744a88c9c217
Author: zbq.dean 
AuthorDate: Fri Sep 20 17:28:43 2019 +0800

HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

Signed-off-by: Reid Chan 
---
 .../main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 7d8f582..790b93d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -1094,6 +1094,7 @@ public class BucketCache implements BlockCache, HeapSize {
   }
   parsePB(BucketCacheProtos.BucketCacheEntry.parseDelimitedFrom(in));
  bucketAllocator = new BucketAllocator(cacheCapacity, bucketSizes, backingMap, realCacheSize);
+  blockNumber.add(backingMap.size());
 }
   }
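The one-line fix above re-seeds the block counter from the restored backing map; without it, metrics report zero cached blocks after a restart until new writes arrive. A minimal sketch of the idea, with invented names standing in for the BucketCache fields:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.LongAdder;

// Sketch of the HBASE-23056 fix: when a persisted backing map is reloaded,
// the live block counter must be re-initialized from its size, since the
// counter itself is not part of the persisted state.
class RetrieveCounter {
  final LongAdder blockNumber = new LongAdder();
  Map<String, Long> backingMap = new HashMap<>();

  void retrieveFromFile(Map<String, Long> persisted) {
    backingMap = persisted;
    // The fix: seed the counter with the number of restored blocks.
    blockNumber.add(backingMap.size());
  }
}
```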
 



[hbase] branch branch-2 updated: HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

2019-10-11 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 595bcda  HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file
595bcda is described below

commit 595bcda9c34fa6e24b6cce2ab4be4ee057983b1f
Author: zbq.dean 
AuthorDate: Fri Sep 20 17:28:43 2019 +0800

HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

Signed-off-by: Reid Chan 
---
 .../main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 7d8f582..790b93d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -1094,6 +1094,7 @@ public class BucketCache implements BlockCache, HeapSize {
   }
   parsePB(BucketCacheProtos.BucketCacheEntry.parseDelimitedFrom(in));
  bucketAllocator = new BucketAllocator(cacheCapacity, bucketSizes, backingMap, realCacheSize);
+  blockNumber.add(backingMap.size());
 }
   }
 



[hbase] branch branch-1 updated: HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

2019-10-11 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 4a0442d  HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file
4a0442d is described below

commit 4a0442d14eddc509fb5ca6c5a369c3a7c1b25064
Author: zbq.dean 
AuthorDate: Fri Sep 20 17:31:13 2019 +0800

HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

Signed-off-by: Reid Chan 
---
 .../main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index daf96ef..041179a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -1120,6 +1120,7 @@ public class BucketCache implements BlockCache, HeapSize {
   bucketAllocator = allocator;
   deserialiserMap = deserMap;
   backingMap = backingMapFromFile;
+  blockNumber.set(backingMap.size());
 } finally {
   if (ois != null) {
 ois.close();



[hbase] branch branch-1.4 updated: HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

2019-10-11 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new a7fff50  HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file
a7fff50 is described below

commit a7fff5043a233ce649b8707059be906f9344b9a3
Author: zbq.dean 
AuthorDate: Fri Sep 20 17:31:13 2019 +0800

HBASE-23056 Block count is 0 when BucketCache using persistent IOEngine and retrieve from file

Signed-off-by: Reid Chan 
---
 .../main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java   | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
index 5c241f4..40d6602 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
@@ -1120,6 +1120,7 @@ public class BucketCache implements BlockCache, HeapSize {
   bucketAllocator = allocator;
   deserialiserMap = deserMap;
   backingMap = backingMapFromFile;
+  blockNumber.set(backingMap.size());
 } finally {
   if (ois != null) {
 ois.close();



[hbase] branch master updated: HBASE-21048 Get LogLevel is not working from console in secure environment

2019-04-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 16146a1  HBASE-21048 Get LogLevel is not working from console in secure environment
16146a1 is described below

commit 16146a18394e25c6c83c97812e7cb16db4166bf7
Author: Wei-Chiu Chuang 
AuthorDate: Sat Apr 13 18:58:35 2019 +0800

HBASE-21048 Get LogLevel is not working from console in secure environment

Signed-off-by: Reid Chan 
Amend author: Reid Chan 
---
 hbase-http/pom.xml |   5 +
 .../org/apache/hadoop/hbase/http/log/LogLevel.java | 211 ++--
 .../apache/hadoop/hbase/http/log/TestLogLevel.java | 352 ++---
 3 files changed, 495 insertions(+), 73 deletions(-)

diff --git a/hbase-http/pom.xml b/hbase-http/pom.xml
index 985d75f..65d4f5b 100644
--- a/hbase-http/pom.xml
+++ b/hbase-http/pom.xml
@@ -298,6 +298,11 @@
   mockito-core
   test
 
+
+  org.apache.hadoop
+  hadoop-minikdc
+  test
+
   
   
 
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
index 6f619ae..7182a0b 100644
--- a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
+++ b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
@@ -26,61 +26,230 @@ import java.net.URL;
 import java.net.URLConnection;
 import java.util.Objects;
 import java.util.regex.Pattern;
+
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
+
 import org.apache.commons.logging.impl.Jdk14Logger;
 import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.http.HttpServer;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
 import org.apache.hadoop.util.ServletUtil;
+import org.apache.hadoop.util.Tool;
 import org.apache.log4j.LogManager;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.slf4j.impl.Log4jLoggerAdapter;
 
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.base.Charsets;
+
 /**
  * Change log level in runtime.
  */
 @InterfaceAudience.Private
 public final class LogLevel {
   private static final String USAGES = "\nUsage: General options are:\n"
-  + "\t[-getlevel  ]\n"
-  + "\t[-setlevel   ]\n";
+  + "\t[-getlevel  \n"
+  + "\t[-setlevel";
 
+  public static final String PROTOCOL_HTTP = "http";
   /**
* A command line implementation
*/
-  public static void main(String[] args) {
-if (args.length == 3 && "-getlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]);
-  return;
-}
-else if (args.length == 4 && "-setlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]
-  + "&level=" + args[3]);
-  return;
-}
+  public static void main(String[] args) throws Exception {
+CLI cli = new CLI(new Configuration());
+System.exit(cli.run(args));
+  }
 
+  /**
+   * Valid command line options.
+   */
+  private enum Operations {
+GETLEVEL,
+SETLEVEL,
+UNKNOWN
+  }
+
+  private static void printUsage() {
 System.err.println(USAGES);
 System.exit(-1);
   }
 
-  private static void process(String urlstring) {
-try {
-  URL url = new URL(urlstring);
-  System.out.println("Connecting to " + url);
-  URLConnection connection = url.openConnection();
+  @VisibleForTesting
+  static class CLI extends Configured implements Tool {
+private Operations operation = Operations.UNKNOWN;
+private String hostName;
+private String className;
+private String level;
+
+CLI(Configuration conf) {
+  setConf(conf);
+}
+
+@Override
+public int run(String[] args) throws Exception {
+  try {
+parseArguments(args);
+sendLogLevelRequest();
+  } catch (HadoopIllegalArgumentException e) {
+printUsage();
+  }
+  return 0;
+}
+
+/**
+ * Send HTTP request to the daemon.
+ * @throws HadoopIllegalArgumentException if arguments are invalid.
+ * @throws Exce

[hbase] branch branch-2 updated: HBASE-22240 [backport] HBASE-19762 Fix Checkstyle errors in hbase-http

2019-04-15 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 0da8b2c  HBASE-22240 [backport] HBASE-19762 Fix Checkstyle errors in hbase-http
0da8b2c is described below

commit 0da8b2ce135ca20cdfe287cadd955362bb4457b4
Author: Jan Hentschel 
AuthorDate: Wed Feb 7 18:52:59 2018 +0100

HBASE-22240 [backport] HBASE-19762 Fix Checkstyle errors in hbase-http

Signed-off-by: Reid Chan 
---
 .../resources/hbase/checkstyle-suppressions.xml|   7 +-
 hbase-http/pom.xml |   7 ++
 .../hbase/http/ClickjackingPreventionFilter.java   |  45 
 .../org/apache/hadoop/hbase/http/HtmlQuoting.java  |  56 ++
 .../org/apache/hadoop/hbase/http/HttpConfig.java   |   5 +-
 .../hadoop/hbase/http/HttpRequestLogAppender.java  |   2 +-
 .../org/apache/hadoop/hbase/http/HttpServer.java   | 120 ++---
 .../org/apache/hadoop/hbase/http/InfoServer.java   |  51 -
 .../apache/hadoop/hbase/http/NoCacheFilter.java|   8 +-
 .../apache/hadoop/hbase/http/ProfileServlet.java   |   4 +-
 .../apache/hadoop/hbase/http/conf/ConfServlet.java |   7 +-
 .../org/apache/hadoop/hbase/http/log/LogLevel.java |  30 +++---
 .../org/apache/hadoop/hbase/util/ProcessUtils.java |   4 +-
 .../hbase/http/HttpServerFunctionalTest.java   |  27 ++---
 .../apache/hadoop/hbase/http/TestGlobalFilter.java |  44 
 .../apache/hadoop/hbase/http/TestHtmlQuoting.java  |   7 +-
 .../apache/hadoop/hbase/http/TestHttpServer.java   | 103 --
 .../apache/hadoop/hbase/http/TestPathFilter.java   |  41 ---
 .../hadoop/hbase/http/TestServletFilter.java   |  38 +++
 .../hadoop/hbase/http/TestSpnegoHttpServer.java|  59 +-
 .../apache/hadoop/hbase/http/log/TestLogLevel.java |  18 ++--
 .../hadoop/hbase/http/resource/JerseyResource.java |   2 +-
 .../hadoop/hbase/http/ssl/KeyStoreTestUtil.java|  13 ++-
 23 files changed, 376 insertions(+), 322 deletions(-)

diff --git a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
index 77eefc2..090f1d9 100644
--- a/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
+++ b/hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml
@@ -1,7 +1,7 @@
 
 https://checkstyle.org/dtds/suppressions_1_2.dtd">
+"-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
+"https://checkstyle.org/dtds/suppressions_1_2.dtd">
 ";
-  static final Pattern TAG = Pattern.compile("<[^>]*>");
+  private static final String MARKER = "";
+  private static final Pattern TAG = Pattern.compile("<[^>]*>");
 
   /**
* A servlet implementation
@@ -98,9 +96,8 @@ public final class LogLevel {
 private static final long serialVersionUID = 1L;
 
 @Override
-public void doGet(HttpServletRequest request, HttpServletResponse response
-) throws ServletException, IOException {
-
+public void doGet(HttpServletRequest request, HttpServletResponse response)
+throws ServletException, IOException {
   // Do the authorization
   if (!HttpServer.hasAdministratorAccess(getServletContext(), request,
   response)) {
@@ -175,8 +172,7 @@ public final class LogLevel {
 + "Set the specified log level for the specified log name." + 
"\n" + "\n"
 + "\n" + "\n" + "\n" + "\n" + "\n";
 
-private static void process(org.apache.log4j.Logger log, String level,
-PrintWriter out) throws IOException {
+private static void process(org.apache.log4j.Logger log, String level, PrintWriter out) {
   if (level != null) {
 if (!level.equals(org.apache.log4j.Level.toLevel(level).toString())) {
   out.println(MARKER + "" + "Bad level : 
" + level
@@ -192,14 +188,18 @@ public final class LogLevel {
 }
 
 private static void process(java.util.logging.Logger log, String level,
-PrintWriter out) throws IOException {
+PrintWriter out) {
   if (level != null) {
 log.setLevel(java.util.logging.Level.parse(level));
 out.println(MARKER + "Setting Level to " + level + " ...");
   }
 
   java.util.logging.Level lev;
-  for(; (lev = log.getLevel()) == null; log = log.getParent());
+
+  while ((lev = log.getLevel()) == null) {
+log = log.getParent();
+  }
+
   out.println(MARKER + "Effective level: " + lev + "");
 }
   }
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/util/ProcessUtils.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/u

[hbase] branch branch-2 updated: HBASE-21048 Get LogLevel is not working from console in secure environment

2019-04-15 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new dca30ce  HBASE-21048 Get LogLevel is not working from console in secure environment
dca30ce is described below

commit dca30ce620c7704913a6f0ea9a3c5c2194a09468
Author: Wei-Chiu Chuang 
AuthorDate: Sat Apr 13 18:58:35 2019 +0800

HBASE-21048 Get LogLevel is not working from console in secure environment

Signed-off-by: Reid Chan 
Amend author: Reid Chan 
---
 hbase-http/pom.xml |   5 +
 .../org/apache/hadoop/hbase/http/log/LogLevel.java | 211 ++--
 .../apache/hadoop/hbase/http/log/TestLogLevel.java | 352 ++---
 3 files changed, 495 insertions(+), 73 deletions(-)

diff --git a/hbase-http/pom.xml b/hbase-http/pom.xml
index ada1892..d607161 100644
--- a/hbase-http/pom.xml
+++ b/hbase-http/pom.xml
@@ -298,6 +298,11 @@
   mockito-core
   test
 
+
+  org.apache.hadoop
+  hadoop-minikdc
+  test
+
   
   
 
diff --git a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
index 6f619ae..7182a0b 100644
--- a/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
+++ b/hbase-http/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
@@ -26,61 +26,230 @@ import java.net.URL;
 import java.net.URLConnection;
 import java.util.Objects;
 import java.util.regex.Pattern;
+
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServlet;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
+
 import org.apache.commons.logging.impl.Jdk14Logger;
 import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.http.HttpServer;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
 import org.apache.hadoop.util.ServletUtil;
+import org.apache.hadoop.util.Tool;
 import org.apache.log4j.LogManager;
 import org.apache.yetus.audience.InterfaceAudience;
 import org.apache.yetus.audience.InterfaceStability;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.slf4j.impl.Log4jLoggerAdapter;
 
+import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hbase.thirdparty.com.google.common.base.Charsets;
+
 /**
  * Change log level in runtime.
  */
 @InterfaceAudience.Private
 public final class LogLevel {
   private static final String USAGES = "\nUsage: General options are:\n"
-  + "\t[-getlevel  ]\n"
-  + "\t[-setlevel   ]\n";
+  + "\t[-getlevel  \n"
+  + "\t[-setlevel";
 
+  public static final String PROTOCOL_HTTP = "http";
   /**
* A command line implementation
*/
-  public static void main(String[] args) {
-if (args.length == 3 && "-getlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]);
-  return;
-}
-else if (args.length == 4 && "-setlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]
-  + "&level=" + args[3]);
-  return;
-}
+  public static void main(String[] args) throws Exception {
+CLI cli = new CLI(new Configuration());
+System.exit(cli.run(args));
+  }
 
+  /**
+   * Valid command line options.
+   */
+  private enum Operations {
+GETLEVEL,
+SETLEVEL,
+UNKNOWN
+  }
+
+  private static void printUsage() {
 System.err.println(USAGES);
 System.exit(-1);
   }
 
-  private static void process(String urlstring) {
-try {
-  URL url = new URL(urlstring);
-  System.out.println("Connecting to " + url);
-  URLConnection connection = url.openConnection();
+  @VisibleForTesting
+  static class CLI extends Configured implements Tool {
+private Operations operation = Operations.UNKNOWN;
+private String hostName;
+private String className;
+private String level;
+
+CLI(Configuration conf) {
+  setConf(conf);
+}
+
+@Override
+public int run(String[] args) throws Exception {
+  try {
+parseArguments(args);
+sendLogLevelRequest();
+  } catch (HadoopIllegalArgumentException e) {
+printUsage();
+  }
+  return 0;
+}
+
+/**
+ * Send HTTP request to the daemon.
+ * @throws HadoopIllegalArgumentException if arguments are invalid.
+ * @throws Exce

[hbase] branch branch-1 updated: HBASE-21048 Get LogLevel is not working from console in secure environment

2019-04-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 4af4afc  HBASE-21048 Get LogLevel is not working from console in 
secure environment
4af4afc is described below

commit 4af4afc94f2836400b716dfaeef2c661064eb4fe
Author: Wei-Chiu Chuang 
AuthorDate: Tue Apr 16 13:58:46 2019 -0700

HBASE-21048 Get LogLevel is not working from console in secure environment

Signed-off-by: Reid Chan 
---
 .../org/apache/hadoop/hbase/http/log/LogLevel.java | 236 +++--
 .../apache/hadoop/hbase/http/log/TestLogLevel.java | 374 +
 2 files changed, 514 insertions(+), 96 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
index 7701a25..328e1b1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/http/log/LogLevel.java
@@ -17,6 +17,9 @@
  */
 package org.apache.hadoop.hbase.http.log;
 
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Charsets;
+
 import java.io.BufferedReader;
 import java.io.IOException;
 import java.io.InputStreamReader;
@@ -34,59 +37,223 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Jdk14Logger;
 import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.HadoopIllegalArgumentException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.classification.InterfaceAudience;
 import org.apache.hadoop.hbase.classification.InterfaceStability;
 import org.apache.hadoop.hbase.http.HttpServer;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;
 import org.apache.hadoop.util.ServletUtil;
+import org.apache.hadoop.util.Tool;
 
 /**
  * Change log level in runtime.
  */
 @InterfaceStability.Evolving
 public class LogLevel {
-  public static final String USAGES = "\nUsage: General options are:\n"
-  + "\t[-getlevel  ]\n"
-  + "\t[-setlevel   ]\n";
+  private static final String USAGES = "\nUsage: General options are:\n"
+  + "\t[-getlevel  \n"
+  + "\t[-setlevel";
 
+  public static final String PROTOCOL_HTTP = "http";
   /**
* A command line implementation
*/
-  public static void main(String[] args) {
-if (args.length == 3 && "-getlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]);
-  return;
-}
-else if (args.length == 4 && "-setlevel".equals(args[0])) {
-  process("http://" + args[1] + "/logLevel?log=" + args[2]
-  + "&level=" + args[3]);
-  return;
-}
+  public static void main(String[] args) throws Exception {
+CLI cli = new CLI(new Configuration());
+System.exit(cli.run(args));
+  }
+
+  /**
+   * Valid command line options.
+   */
+  private enum Operations {
+GETLEVEL,
+SETLEVEL,
+UNKNOWN
+  }
 
+  private static void printUsage() {
 System.err.println(USAGES);
 System.exit(-1);
   }
 
-  private static void process(String urlstring) {
-try {
-  URL url = new URL(urlstring);
-  System.out.println("Connecting to " + url);
-  URLConnection connection = url.openConnection();
+  @VisibleForTesting
+  static class CLI extends Configured implements Tool {
+private Operations operation = Operations.UNKNOWN;
+private String hostName;
+private String className;
+private String level;
+
+CLI(Configuration conf) {
+  setConf(conf);
+}
+
+@Override
+public int run(String[] args) throws Exception {
+  try {
+parseArguments(args);
+sendLogLevelRequest();
+  } catch (HadoopIllegalArgumentException e) {
+printUsage();
+  }
+  return 0;
+}
+
+/**
+ * Send HTTP request to the daemon.
+ * @throws HadoopIllegalArgumentException if arguments are invalid.
+ * @throws Exception if unable to connect
+ */
+private void sendLogLevelRequest()
+throws HadoopIllegalArgumentException, Exception {
+  switch (operation) {
+case GETLEVEL:
+  doGetLevel();
+  break;
+case SETLEVEL:
+  doSetLevel();
+  break;
+default:
+  throw new HadoopIllegalArgumentException(
+  "Expect either -getlevel or -setlevel");
+  }
+}
+
+public void parseArguments(String[] ar
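The HBASE-21048 patches above replace the old static `main` with a `Configured`/`Tool`-style CLI that parses arguments into an operation enum and then dispatches. The following is a minimal, dependency-free sketch of that dispatch pattern; it is not the Hadoop `Tool` API itself, and `buildUrl` is a hypothetical helper that returns the URL instead of opening a connection, so the logic is illustratable without a running daemon.

```java
// Dependency-free sketch of the LogLevel CLI dispatch pattern:
// parse args into an operation enum, then dispatch on it.
public final class LogLevelCliSketch {

    enum Operation { GETLEVEL, SETLEVEL, UNKNOWN }

    // Mirrors the old main()'s argument checks from the removed code above.
    static Operation parse(String[] args) {
        if (args.length == 3 && "-getlevel".equals(args[0])) {
            return Operation.GETLEVEL;
        }
        if (args.length == 4 && "-setlevel".equals(args[0])) {
            return Operation.SETLEVEL;
        }
        return Operation.UNKNOWN;
    }

    // Hypothetical helper: builds the request URL rather than sending it.
    static String buildUrl(String[] args) {
        switch (parse(args)) {
            case GETLEVEL:
                return "http://" + args[1] + "/logLevel?log=" + args[2];
            case SETLEVEL:
                return "http://" + args[1] + "/logLevel?log=" + args[2]
                    + "&level=" + args[3];
            default:
                // Matches the patch's behavior of rejecting unknown operations.
                throw new IllegalArgumentException("Expect either -getlevel or -setlevel");
        }
    }

    public static void main(String[] args) {
        System.out.println(buildUrl(new String[] {"-getlevel", "host:9870", "org.example.Foo"}));
    }
}
```

The real patch additionally wraps the request in Kerberos-aware authentication (`AuthenticatedURL`/`KerberosAuthenticator`), which is the part this sketch deliberately omits.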

[hbase] branch branch-1 updated: HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

2019-06-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 2ff4e4f  HBASE-22559 [RPC] set guard against 
CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
2ff4e4f is described below

commit 2ff4e4f09630a8c04e57d16a71362708dd165532
Author: Reid Chan 
AuthorDate: Fri Jun 14 11:24:55 2019 +0800

HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
---
 .../main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
index 15c416c..d46786b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
@@ -133,6 +133,20 @@ public abstract class RpcExecutor {
 this.abortable = abortable;
 
 float callQueuesHandlersFactor = 
this.conf.getFloat(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 0);
+if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0 ||
+Float.compare(0.0f, callQueuesHandlersFactor) > 0) {
+  LOG.warn(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY +
+" is *ILLEGAL*, it should be in range [0.0, 1.0]");
+  // For callQueuesHandlersFactor > 1.0, we just set it 1.0f.
+  if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0) {
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " 1.0f");
+callQueuesHandlersFactor = 1.0f;
+  } else {
+// But for callQueuesHandlersFactor < 0.0, following method 
#computeNumCallQueues
+// will compute max(1, -x) => 1 which has same effect of default value.
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " default value 
0.0f");
+  }
+}
 this.numCallQueues = computeNumCallQueues(handlerCount, 
callQueuesHandlersFactor);
 this.queues = new ArrayList<>(this.numCallQueues);
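The guard added by HBASE-22559 can be summarized as: values above 1.0 are clamped to 1.0, while negative values are only warned about, because the downstream queue computation already maps them to a single queue. The sketch below isolates that behavior; `computeNumCallQueues` here is a simplified `max(1, round(...))` stand-in, an assumption rather than the exact HBase formula.

```java
// Standalone sketch of the call-queue handler factor guard from HBASE-22559.
public final class CallQueueFactorGuard {

    // Mirrors the patch: clamp factors above 1.0; leave negatives to the
    // downstream max(1, ...) computation, as the patch's comment explains.
    static float sanitize(float factor) {
        if (Float.compare(factor, 1.0f) > 0) {
            return 1.0f;
        }
        return factor;
    }

    // Simplified stand-in for RpcExecutor#computeNumCallQueues (assumption).
    static int computeNumCallQueues(int handlerCount, float factor) {
        return Math.max(1, Math.round(handlerCount * factor));
    }

    public static void main(String[] args) {
        // Factor too large: clamped to 1.0, so every handler gets a queue.
        System.out.println(computeNumCallQueues(30, sanitize(1.5f)));
        // Negative factor: max(1, -9) collapses to a single queue.
        System.out.println(computeNumCallQueues(30, sanitize(-0.3f)));
    }
}
```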
 



[hbase] branch branch-1.4 updated: HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

2019-06-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 3a8f7c9  HBASE-22559 [RPC] set guard against 
CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
3a8f7c9 is described below

commit 3a8f7c9a3642ffca859bd6f363ff3831a459e7e2
Author: Reid Chan 
AuthorDate: Fri Jun 14 11:24:55 2019 +0800

HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

Signed-off-by Andrew Purtell 
---
 .../main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
index 15c416c..d46786b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
@@ -133,6 +133,20 @@ public abstract class RpcExecutor {
 this.abortable = abortable;
 
 float callQueuesHandlersFactor = 
this.conf.getFloat(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 0);
+if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0 ||
+Float.compare(0.0f, callQueuesHandlersFactor) > 0) {
+  LOG.warn(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY +
+" is *ILLEGAL*, it should be in range [0.0, 1.0]");
+  // For callQueuesHandlersFactor > 1.0, we just set it 1.0f.
+  if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0) {
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " 1.0f");
+callQueuesHandlersFactor = 1.0f;
+  } else {
+// But for callQueuesHandlersFactor < 0.0, following method 
#computeNumCallQueues
+// will compute max(1, -x) => 1 which has same effect of default value.
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " default value 
0.0f");
+  }
+}
 this.numCallQueues = computeNumCallQueues(handlerCount, 
callQueuesHandlersFactor);
 this.queues = new ArrayList<>(this.numCallQueues);
 



[hbase] branch master updated: HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

2019-06-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new ab44531  HBASE-22559 [RPC] set guard against 
CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
ab44531 is described below

commit ab4453158a41e29bb75e219079584997be756cbe
Author: Reid Chan 
AuthorDate: Fri Jun 14 11:35:34 2019 +0800

HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

Signed-off-by: Andrew Purtell 
---
 .../main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
index f63b243..3de5fa1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
@@ -116,6 +116,20 @@ public abstract class RpcExecutor {
 this.abortable = abortable;
 
 float callQueuesHandlersFactor = 
this.conf.getFloat(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 0.1f);
+if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0 ||
+Float.compare(0.0f, callQueuesHandlersFactor) > 0) {
+  LOG.warn(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY +
+" is *ILLEGAL*, it should be in range [0.0, 1.0]");
+  // For callQueuesHandlersFactor > 1.0, we just set it 1.0f.
+  if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0) {
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " 1.0f");
+callQueuesHandlersFactor = 1.0f;
+  } else {
+// But for callQueuesHandlersFactor < 0.0, following method 
#computeNumCallQueues
+// will compute max(1, -x) => 1 which has same effect of default value.
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " default value 
0.0f");
+  }
+}
 this.numCallQueues = computeNumCallQueues(handlerCount, 
callQueuesHandlersFactor);
 this.queues = new ArrayList<>(this.numCallQueues);
 



[hbase] branch branch-2 updated: HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

2019-06-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new e2d8911  HBASE-22559 [RPC] set guard against 
CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
e2d8911 is described below

commit e2d891172a0fa03e44d7e7ca5e8cce9435940d1e
Author: Reid Chan 
AuthorDate: Fri Jun 14 11:35:34 2019 +0800

HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

Signed-off-by: Andrew Purtell 
---
 .../main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
index 7470758..a532b6e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
@@ -116,6 +116,20 @@ public abstract class RpcExecutor {
 this.abortable = abortable;
 
 float callQueuesHandlersFactor = 
this.conf.getFloat(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 0.1f);
+if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0 ||
+Float.compare(0.0f, callQueuesHandlersFactor) > 0) {
+  LOG.warn(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY +
+" is *ILLEGAL*, it should be in range [0.0, 1.0]");
+  // For callQueuesHandlersFactor > 1.0, we just set it 1.0f.
+  if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0) {
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " 1.0f");
+callQueuesHandlersFactor = 1.0f;
+  } else {
+// But for callQueuesHandlersFactor < 0.0, following method 
#computeNumCallQueues
+// will compute max(1, -x) => 1 which has same effect of default value.
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " default value 
0.0f");
+  }
+}
 this.numCallQueues = computeNumCallQueues(handlerCount, 
callQueuesHandlersFactor);
 this.queues = new ArrayList<>(this.numCallQueues);
 



[hbase] branch branch-2.2 updated: HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

2019-06-13 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 0b17c7f  HBASE-22559 [RPC] set guard against 
CALL_QUEUE_HANDLER_FACTOR_CONF_KEY
0b17c7f is described below

commit 0b17c7f7241d0d3e738d2ab234cf842823fe3480
Author: Reid Chan 
AuthorDate: Fri Jun 14 11:35:34 2019 +0800

HBASE-22559 [RPC] set guard against CALL_QUEUE_HANDLER_FACTOR_CONF_KEY

Signed-off-by: Andrew Purtell 
---
 .../main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java | 14 ++
 1 file changed, 14 insertions(+)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
index 7470758..a532b6e 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcExecutor.java
@@ -116,6 +116,20 @@ public abstract class RpcExecutor {
 this.abortable = abortable;
 
 float callQueuesHandlersFactor = 
this.conf.getFloat(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 0.1f);
+if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0 ||
+Float.compare(0.0f, callQueuesHandlersFactor) > 0) {
+  LOG.warn(CALL_QUEUE_HANDLER_FACTOR_CONF_KEY +
+" is *ILLEGAL*, it should be in range [0.0, 1.0]");
+  // For callQueuesHandlersFactor > 1.0, we just set it 1.0f.
+  if (Float.compare(callQueuesHandlersFactor, 1.0f) > 0) {
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " 1.0f");
+callQueuesHandlersFactor = 1.0f;
+  } else {
+// But for callQueuesHandlersFactor < 0.0, following method 
#computeNumCallQueues
+// will compute max(1, -x) => 1 which has same effect of default value.
+LOG.warn("Set " + CALL_QUEUE_HANDLER_FACTOR_CONF_KEY + " default value 
0.0f");
+  }
+}
 this.numCallQueues = computeNumCallQueues(handlerCount, 
callQueuesHandlersFactor);
 this.queues = new ArrayList<>(this.numCallQueues);
 



[hbase] branch branch-1 updated: HBASE-22562 Remove dead code: skipControl

2019-06-14 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 84af2a4  HBASE-22562 Remove dead code: skipControl
84af2a4 is described below

commit 84af2a46109da2b47d3b935d847b78149587af5f
Author: Josh Elser 
AuthorDate: Wed Jun 12 19:15:14 2019 -0400

HBASE-22562 Remove dead code: skipControl

Signed-off-by: Reid Chan 
---
 .../throttle/PressureAwareCompactionThroughputController.java| 9 -
 .../throttle/PressureAwareFlushThroughputController.java | 6 --
 .../regionserver/throttle/PressureAwareThroughputController.java | 8 
 3 files changed, 23 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
index 2dc5817..b24555a 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
@@ -146,13 +146,4 @@ public class PressureAwareCompactionThroughputController 
extends PressureAwareTh
 + throughputDesc(getMaxThroughput()) + ", activeCompactions=" + 
activeOperations.size()
 + "]";
   }
-
-  @Override
-  protected boolean skipControl(long deltaSize, long controlSize) {
-if (deltaSize < controlSize) {
-  return true;
-} else {
-  return false;
-}
-  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
index f301a27..ccb60ff 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
@@ -127,10 +127,4 @@ public class PressureAwareFlushThroughputController 
extends PressureAwareThrough
 return "DefaultFlushController [maxThroughput=" + 
throughputDesc(getMaxThroughput())
 + ", activeFlushNumber=" + activeOperations.size() + "]";
   }
-
-  @Override
-  protected boolean skipControl(long deltaSize, long controlSize) {
-// for flush, we control the flow no matter whether the flush size is small
-return false;
-  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
index 8867611..854d245 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
@@ -137,14 +137,6 @@ public abstract class PressureAwareThroughputController 
extends Configured imple
 return sleepTime;
   }
 
-  /**
-   * Check whether to skip control given delta size and control size
-   * @param deltaSize Delta size since last control
-   * @param controlSize Size limit to perform control
-   * @return a boolean indicates whether to skip this control
-   */
-  protected abstract boolean skipControl(long deltaSize, long controlSize);
-
   @Override
   public void finish(String opName) {
 ActiveOperation operation = activeOperations.remove(opName);



[hbase] branch branch-1.4 updated: HBASE-22562 Remove dead code: skipControl

2019-06-14 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 7ba63c8  HBASE-22562 Remove dead code: skipControl
7ba63c8 is described below

commit 7ba63c8dbd6de65df89067d08710ab3a1365b85d
Author: Josh Elser 
AuthorDate: Wed Jun 12 19:15:14 2019 -0400

HBASE-22562 Remove dead code: skipControl

Signed-off-by: Reid Chan 
---
 .../throttle/PressureAwareCompactionThroughputController.java| 9 -
 .../throttle/PressureAwareFlushThroughputController.java | 6 --
 .../regionserver/throttle/PressureAwareThroughputController.java | 8 
 3 files changed, 23 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
index c0d3b74..1681cff 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareCompactionThroughputController.java
@@ -141,13 +141,4 @@ public class PressureAwareCompactionThroughputController 
extends PressureAwareTh
 + throughputDesc(getMaxThroughput()) + ", activeCompactions=" + 
activeOperations.size()
 + "]";
   }
-
-  @Override
-  protected boolean skipControl(long deltaSize, long controlSize) {
-if (deltaSize < controlSize) {
-  return true;
-} else {
-  return false;
-}
-  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
index f301a27..ccb60ff 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareFlushThroughputController.java
@@ -127,10 +127,4 @@ public class PressureAwareFlushThroughputController 
extends PressureAwareThrough
 return "DefaultFlushController [maxThroughput=" + 
throughputDesc(getMaxThroughput())
 + ", activeFlushNumber=" + activeOperations.size() + "]";
   }
-
-  @Override
-  protected boolean skipControl(long deltaSize, long controlSize) {
-// for flush, we control the flow no matter whether the flush size is small
-return false;
-  }
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
index 8867611..854d245 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
@@ -137,14 +137,6 @@ public abstract class PressureAwareThroughputController 
extends Configured imple
 return sleepTime;
   }
 
-  /**
-   * Check whether to skip control given delta size and control size
-   * @param deltaSize Delta size since last control
-   * @param controlSize Size limit to perform control
-   * @return a boolean indicates whether to skip this control
-   */
-  protected abstract boolean skipControl(long deltaSize, long controlSize);
-
   @Override
   public void finish(String opName) {
 ActiveOperation operation = activeOperations.remove(opName);



[hbase] branch branch-2.1 updated: HBASE-22581 user with "CREATE" permission can grant, but not revoke permissions on created table

2019-06-17 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 02f9c8b  HBASE-22581 user with "CREATE" permission can grant, but not 
revoke permissions on created table
02f9c8b is described below

commit 02f9c8b3b40e32e78b8885c43b5e0b272eceab83
Author: Istvan Toth 
AuthorDate: Fri Jun 14 08:41:51 2019 +0200

HBASE-22581 user with "CREATE" permission can grant, but not revoke 
permissions on created table

Signed-off-by: Reid Chan 
---
 .../hbase/security/access/AccessControlLists.java  |  9 -
 .../security/access/TestAccessController.java  | 42 ++
 2 files changed, 50 insertions(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
index 219625b..5883120 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java
@@ -259,7 +259,14 @@ public class AccessControlLists {
 Delete d = new Delete(userPermissionRowKey(userPerm));
 d.addColumns(ACL_LIST_FAMILY, userPermissionKey(userPerm));
 try {
-  t.delete(d);
+  /**
+   * We need to run the ACL delete in superuser context, to have
+   * similar authorization logic to addUserPermission().
+   * This ensures behaviour is consistent with pre 2.1.1 and 2.2+.
+   * The permission authorization has already happened here.
+   * See the TODO comment in addUserPermission for details
+   */
+  t.delete(new ArrayList<>(Arrays.asList(d)));
 } finally {
   t.close();
 }
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
index 481e4f7..1f2724c 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
@@ -3133,4 +3133,46 @@ public class TestAccessController extends SecureTestUtil 
{
 verifyAllowed(action, SUPERUSER);
 verifyDenied(action, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER, 
USER_ADMIN);
   }
+
+  @Test
+  public void testTableAdmin() throws Exception {
+
+// Create a user with table admin permissions only
+User userTableAdmin = User.createUserForTesting(conf, "table_admin", new 
String[0]);
+grantOnTable(TEST_UTIL, userTableAdmin.getShortName(), TEST_TABLE, null, 
null,
+  Permission.Action.ADMIN);
+
+AccessTestAction grantAction = new AccessTestAction() {
+  @Override
+  public Object run() throws Exception {
+try (Connection conn = ConnectionFactory.createConnection(conf);
+Table acl = conn.getTable(AccessControlLists.ACL_TABLE_NAME)) {
+  BlockingRpcChannel service = 
acl.coprocessorService(TEST_TABLE.getName());
+  AccessControlService.BlockingInterface protocol =
+  AccessControlService.newBlockingStub(service);
+  AccessControlUtil.grant(null, protocol, USER_NONE.getShortName(), 
TEST_TABLE, null, null,
+false, Action.READ);
+}
+return null;
+  }
+};
+
+AccessTestAction revokeAction = new AccessTestAction() {
+  @Override
+  public Object run() throws Exception {
+try (Connection conn = ConnectionFactory.createConnection(conf);
+Table acl = conn.getTable(AccessControlLists.ACL_TABLE_NAME)) {
+  BlockingRpcChannel service = 
acl.coprocessorService(TEST_TABLE.getName());
+  AccessControlService.BlockingInterface protocol =
+  AccessControlService.newBlockingStub(service);
+  AccessControlUtil.revoke(null, protocol, USER_NONE.getShortName(), 
TEST_TABLE, null, null,
+Action.READ);
+}
+return null;
+  }
+};
+
+verifyAllowed(userTableAdmin, grantAction);
+verifyAllowed(userTableAdmin, revokeAction);
+  }
 }



[hbase] branch branch-1 updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new be9f0dd  HBASE-22596 [Chore] Separate the execution period between 
CompactionChecker and PeriodicMemStoreFlusher
be9f0dd is described below

commit be9f0dd58833827a7ce038f66343dd00236dc559
Author: Reid Chan 
AuthorDate: Thu Jun 20 10:49:11 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker 
and PeriodicMemStoreFlusher


Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index ab76e90..573a64a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -341,6 +341,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -548,6 +553,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -911,8 +918,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemstoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemstoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list
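The HBASE-22596 patch above splits one shared wake period into two independently configurable ones, each falling back to the old shared default when its key is unset. A minimal sketch of that read-with-fallback pattern, with a plain `Map` standing in for Hadoop's `Configuration` (the `getInt` helper and class name are illustrative, not HBase API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the read-with-fallback pattern in the patch; a plain Map stands in
// for Hadoop's Configuration, and the key names mirror the ones introduced above.
public class ChorePeriods {
    static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
    static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";

    // Mimics Configuration.getInt(key, defaultValue): parse if present, else default.
    static int getInt(Map<String, String> conf, String key, int defaultValue) {
        String v = conf.get(key);
        return v == null ? defaultValue : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(PERIOD_COMPACTION, "30000"); // operator tunes only the compaction check

        int threadWakeFrequency = 10 * 1000;  // the shared period both chores used before
        int compactionCheckFrequency = getInt(conf, PERIOD_COMPACTION, threadWakeFrequency);
        int flushCheckFrequency = getInt(conf, PERIOD_FLUSH, threadWakeFrequency);

        // The unset key falls back to the old shared period, so behavior is
        // unchanged unless one of the new keys is explicitly configured.
        System.out.println(compactionCheckFrequency + " " + flushCheckFrequency); // prints "30000 10000"
    }
}
```

Because both new keys default to `hbase.regionserver.threadwakefrequency`, clusters that never set them keep the pre-patch behavior.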



[hbase] branch branch-1.4 updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 0bc7be7  HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher
0bc7be7 is described below

commit 0bc7be7ab4bb519901990779b1987536711bea3b
Author: Reid Chan 
AuthorDate: Thu Jun 20 10:49:11 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index f26661b..af0c680 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -330,6 +330,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -537,6 +542,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -900,8 +907,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemstoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemstoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list



[hbase] branch master updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new c7c6140  HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher
c7c6140 is described below

commit c7c6140396528f8f9d4dff43035a516e7ba2f22a
Author: Reid Chan 
AuthorDate: Tue Jun 18 11:09:34 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 401c1b2..157f186 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -357,6 +357,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -576,6 +581,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -2018,8 +2025,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemStoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemStoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list



[hbase] branch branch-2 updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new d30a7ca  HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher
d30a7ca is described below

commit d30a7ca75804129486d6236d26e228e33e226baa
Author: Reid Chan 
AuthorDate: Tue Jun 18 11:09:34 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index cd94c78..786ff11 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -357,6 +357,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -577,6 +582,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -2010,8 +2017,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemStoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemStoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list



[hbase] branch branch-2.2 updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 59fda38  HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher
59fda38 is described below

commit 59fda3890c0368eb8f99b030963f0be6665d48eb
Author: Reid Chan 
AuthorDate: Tue Jun 18 11:09:34 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 57f33cd..8ecd283 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -355,6 +355,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -572,6 +577,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -2018,8 +2025,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemStoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemStoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list



[hbase] branch branch-2.1 updated: HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

2019-06-19 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 060f52d  HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher
060f52d is described below

commit 060f52de5abc915d32d1f96bd5c9531d31fc4a2b
Author: Reid Chan 
AuthorDate: Tue Jun 18 11:09:34 2019 +0800

HBASE-22596 [Chore] Separate the execution period between CompactionChecker and PeriodicMemStoreFlusher

Signed-off-by: Zach York 
Signed-off-by: Xu Cang 
---
 .../org/apache/hadoop/hbase/regionserver/HRegionServer.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 446a19b..ef470f0 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -342,6 +342,11 @@ public class HRegionServer extends HasThread implements
   protected final int threadWakeFrequency;
   protected final int msgInterval;
 
+  private static final String PERIOD_COMPACTION = "hbase.regionserver.compaction.check.period";
+  private final int compactionCheckFrequency;
+  private static final String PERIOD_FLUSH = "hbase.regionserver.flush.check.period";
+  private final int flushCheckFrequency;
+
   protected final int numRegionsToReport;
 
   // Stub to do region server status calls against the master.
@@ -556,6 +561,8 @@ public class HRegionServer extends HasThread implements
     this.numRetries = this.conf.getInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER,
         HConstants.DEFAULT_HBASE_CLIENT_RETRIES_NUMBER);
     this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    this.compactionCheckFrequency = conf.getInt(PERIOD_COMPACTION, this.threadWakeFrequency);
+    this.flushCheckFrequency = conf.getInt(PERIOD_FLUSH, this.threadWakeFrequency);
     this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
 
     this.sleeper = new Sleeper(this.msgInterval, this);
@@ -1995,8 +2002,8 @@ public class HRegionServer extends HasThread implements
 
     // Background thread to check for compactions; needed if region has not gotten updates
     // in a while. It will take care of not checking too frequently on store-by-store basis.
-    this.compactionChecker = new CompactionChecker(this, this.threadWakeFrequency, this);
-    this.periodicFlusher = new PeriodicMemStoreFlusher(this.threadWakeFrequency, this);
+    this.compactionChecker = new CompactionChecker(this, this.compactionCheckFrequency, this);
+    this.periodicFlusher = new PeriodicMemStoreFlusher(this.flushCheckFrequency, this);
     this.leases = new Leases(this.threadWakeFrequency);
 
     // Create the thread to clean the moved regions list



[hbase] branch branch-1 updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new ebbb0e2  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)
ebbb0e2 is described below

commit ebbb0e29873007de72befb5f025635ba8b85bb3d
Author: Reid Chan 
AuthorDate: Sun Jul 7 23:15:10 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index f6035c1..555b5d5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -93,7 +93,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -117,7 +117,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();
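The one-line bug fixed above is easy to model: the batch-update path delegated to the single-operation metric, so the `BatchPut`/`BatchDelete` table metrics never moved while `Put`/`Delete` were over-counted. A small illustrative counter sketch (class and counter names are hypothetical, not HBase's real metrics classes):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the HBASE-22656 bug: before the fix, updatePutBatch()
// called updatePut(), inflating "Put" and leaving "BatchPut" permanently at zero.
public class TableMetricsSketch {
    final Map<String, Long> counters = new HashMap<>();

    void inc(String name) {
        counters.merge(name, 1L, Long::sum);
    }

    void updatePut() { inc("Put"); }

    // Fixed version: batch updates go to their own counter.
    void updatePutBatch() { inc("BatchPut"); }

    public static void main(String[] args) {
        TableMetricsSketch m = new TableMetricsSketch();
        m.updatePut();       // one single put
        m.updatePutBatch();  // one batch put
        System.out.println(m.counters.getOrDefault("Put", 0L) + " "
            + m.counters.getOrDefault("BatchPut", 0L)); // prints "1 1"
    }
}
```

With the buggy delegation, the same two calls would have printed `2 0`, which is exactly the symptom the JIRA reports: batch metrics never updated.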



[hbase] branch branch-1.4 updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 0f02038  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)
0f02038 is described below

commit 0f02038a139f7164d4dd4ae31a9ef4240ca8d033
Author: Reid Chan 
AuthorDate: Sun Jul 7 23:15:10 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated (#358)

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index f6035c1..555b5d5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -93,7 +93,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -117,7 +117,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();



[hbase] branch master updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 605f8a1  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated
605f8a1 is described below

commit 605f8a15bb7dabb23f1d397ca28ca0696a390497
Author: Reid Chan 
AuthorDate: Fri Jul 5 13:51:04 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index 21534ce..56135df 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -101,7 +101,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -125,7 +125,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();



[hbase] branch branch-2 updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 8222631  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated
8222631 is described below

commit 8222631ee30b2d8859086d52873be4262e31f287
Author: Reid Chan 
AuthorDate: Fri Jul 5 13:51:04 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index e6f65e7..3396549 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -98,7 +98,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -122,7 +122,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();



[hbase] branch branch-2.2 updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 5a2cc9c  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated
5a2cc9c is described below

commit 5a2cc9cfc73cf3fc67ddb1e6ffb4f58789dddb3d
Author: Reid Chan 
AuthorDate: Fri Jul 5 13:51:04 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index e6f65e7..3396549 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -98,7 +98,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -122,7 +122,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();



[hbase] branch branch-2.1 updated: HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-07 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 1fb3fa2  HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated
1fb3fa2 is described below

commit 1fb3fa22fbdb0e8c491b3f507ca21a5188bb610d
Author: Reid Chan 
AuthorDate: Fri Jul 5 13:51:04 2019 +0800

HBASE-22656 [Metrics] Table metrics 'BatchPut' and 'BatchDelete' are never updated

Signed-off-by: Michael Stack 
---
 .../org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
index df50fa8..9a73bd1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServer.java
@@ -93,7 +93,7 @@ public class MetricsRegionServer {
 
   public void updatePutBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updatePut(tn, t);
+  tableMetrics.updatePutBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowPut();
@@ -117,7 +117,7 @@ public class MetricsRegionServer {
 
   public void updateDeleteBatch(TableName tn, long t) {
 if (tableMetrics != null && tn != null) {
-  tableMetrics.updateDelete(tn, t);
+  tableMetrics.updateDeleteBatch(tn, t);
 }
 if (t > 1000) {
   serverSource.incrSlowDelete();



[hbase] branch master updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new fe450b5  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
fe450b5 is described below

commit fe450b50c1c0060773f5e3f3b884da2cf45beadc
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 04:40:32 2019 +0200

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

* HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize.

* HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize. Deprecated old attribute and introduced a new one

* HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize. Removed unnecessary import

* HBASE-22610 added two import configs and removed one

Signed-off-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4d62992..4ee4977 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,20 +78,38 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
+  /**
+   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;
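The rename above keeps the old key working through Hadoop's `Configuration.addDeprecation`: reads of the old key are redirected to the new one, with a warning when the old key is still set. That resolve-new-key-first, fall-back-and-warn behavior can be sketched without Hadoop, using a `Map` as a stand-in for `Configuration` (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the config-key deprecation pattern in the patch: prefer the new
// key, honor the old one for backward compatibility, warn when it is used.
public class DeprecatedKeySketch {
    static final String OLD_KEY = "hbase.offheapcache.minblocksize";
    static final String NEW_KEY = "hbase.blockcache.minblocksize";

    static String resolveBlockSize(Map<String, String> conf, String defaultValue) {
        if (conf.containsKey(NEW_KEY)) {
            return conf.get(NEW_KEY); // new key always wins
        }
        if (conf.containsKey(OLD_KEY)) {
            // Old key still honored, but operators are nudged to migrate.
            System.err.println("Config key " + OLD_KEY + " is deprecated; use " + NEW_KEY);
            return conf.get(OLD_KEY);
        }
        return defaultValue;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(OLD_KEY, "131072"); // legacy site file still sets the old key
        System.out.println(resolveBlockSize(conf, "65536")); // prints "131072"
    }
}
```

`Configuration.addDeprecation` implements this mapping once, centrally, so every later `conf.get`/`conf.getInt` of either key behaves consistently without each call site re-checking both names.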



[hbase] branch branch-2 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new e95bdf4  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
e95bdf4 is described below

commit e95bdf415cea2190083150e3f8f6cea8995550d6
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 04:40:32 2019 +0200

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

* Deprecated old attribute and introduced a new one

* Removed unnecessary import

* Added two import configs and removed one

Signed-off-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4d62992..4ee4977 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,20 +78,38 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
+  /**
+   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;
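The patch above relies on Hadoop's configuration key-deprecation mechanism: `Configuration.addDeprecation(oldKey, newKey)` makes a value supplied under the old key remain visible through the new one. A minimal, self-contained sketch of that idea — an illustration, not Hadoop's actual `Configuration` implementation — looks like this:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of config-key deprecation: reads and writes of either key
 * resolve through a deprecation table, so a value stored under the old key
 * is still visible through its replacement.
 */
public class DeprecationSketch {
    // deprecated old key -> replacement new key
    private static final Map<String, String> DEPRECATIONS = new HashMap<>();
    private final Map<String, String> props = new HashMap<>();

    /** Register old -> new, mirroring the intent of Configuration.addDeprecation(). */
    public static void addDeprecation(String oldKey, String newKey) {
        DEPRECATIONS.put(oldKey, newKey);
    }

    public void set(String key, String value) {
        // Writes to a deprecated key land under its replacement.
        props.put(DEPRECATIONS.getOrDefault(key, key), value);
    }

    public String get(String key) {
        // Reads of a deprecated key are redirected the same way.
        return props.get(DEPRECATIONS.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        addDeprecation("hbase.offheapcache.minblocksize", "hbase.blockcache.minblocksize");
        DeprecationSketch conf = new DeprecationSketch();
        conf.set("hbase.offheapcache.minblocksize", "65536"); // old key in a site config
        System.out.println(conf.get("hbase.blockcache.minblocksize")); // prints 65536
    }
}
```

The `createBlockCache` change in the diff adds the matching runtime warning so operators learn to migrate before the old key is removed.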



[hbase] branch branch-2.2 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 87b0040  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
87b0040 is described below

commit 87b00408c133f75bcf8ab97834d27a879e46b54a
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 10:55:09 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Co-authored-by: Reid Chan 
Signed-off-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 98b3c4f..01fb130 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -71,20 +71,38 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
+  /**
+   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     LruBlockCache onHeapCache = createOnHeapCache(conf);
     if (onHeapCache == null) {
       return null;



[hbase] branch master updated: Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 0e34dcb  Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"
0e34dcb is described below

commit 0e34dcbf4b98140cf00945aa5e235f9c2dca6959
Author: Reid Chan 
AuthorDate: Tue Jul 23 11:04:13 2019 +0800

Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"

Reason: Deprecated a wrong parameter.

This reverts commit fe450b50c1c0060773f5e3f3b884da2cf45beadc.
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4ee4977..4d62992 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,38 +78,20 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
+   * TODO: this config point is completely wrong, as it's used to determine the
+   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
-  /**
-   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
-   */
-  @Deprecated
-  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
-
-  /**
-   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
-   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
-   * compatibility.
-   */
-  static {
-    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
-  }
-
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
-    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
-      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
-        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
-        BLOCKCACHE_BLOCKSIZE_KEY);
-    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;
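The "Deprecated a wrong parameter" mistake that forced this revert is visible in the reverted hunk: the `DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY` constant was declared with the NEW key's name (`hbase.blockcache.minblocksize`) instead of the old one, so the registered mapping was a self-mapping and the genuine old key was never aliased. A short sketch (illustrative only, not HBase code) of the effect:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the bug behind this revert: the "deprecated" constant was
 * mistakenly given the new key's value, so the deprecation table ends up
 * with a no-op new->new entry and the real old key stays unmapped.
 */
public class WrongDeprecationSketch {
    static final Map<String, String> deprecations = new HashMap<>();

    public static void main(String[] args) {
        String newKey = "hbase.blockcache.minblocksize";

        // Reverted commit: deprecated constant accidentally equal to the new key.
        String deprecatedKey = "hbase.blockcache.minblocksize"; // should have been hbase.offheapcache.minblocksize
        deprecations.put(deprecatedKey, newKey); // self-mapping, effectively a no-op

        // A site file that still sets the genuine old key gets no redirection:
        System.out.println(deprecations.containsKey("hbase.offheapcache.minblocksize")); // prints false
    }
}
```

The re-applied commits below fix the constant to `hbase.offheapcache.minblocksize`, restoring the intended old-to-new mapping.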



[hbase] branch branch-2 updated: Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new be5e3de  Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"
be5e3de is described below

commit be5e3de8afb277733f57d18d5d13e08241e0e888
Author: Reid Chan 
AuthorDate: Tue Jul 23 11:04:13 2019 +0800

Revert "HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize"

Reason: Deprecated a wrong parameter.

This reverts commit e95bdf415cea2190083150e3f8f6cea8995550d6.
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4ee4977..4d62992 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,38 +78,20 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
+   * TODO: this config point is completely wrong, as it's used to determine the
+   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
-  /**
-   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
-   */
-  @Deprecated
-  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
-
-  /**
-   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
-   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
-   * compatibility.
-   */
-  static {
-    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
-  }
-
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
-    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
-      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
-        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
-        BLOCKCACHE_BLOCKSIZE_KEY);
-    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;



[hbase] branch branch-2 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new fa4466d  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
fa4466d is described below

commit fa4466d3457dc12f903c7631bbf9f5f6acce1608
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 11:17:27 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Signed-off-by: Reid Chan 
Co-authored-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4d62992..2b97320 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,20 +78,38 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
+  /**
+   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;



[hbase] branch master updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 06f5c43  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
06f5c43 is described below

commit 06f5c43de340da62e765a753c10caba5465eeae2
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 11:17:27 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Signed-off-by: Reid Chan 
Co-authored-by: Reid Chan 
---
 .../hadoop/hbase/io/hfile/BlockCacheFactory.java   | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
index 4d62992..2b97320 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
@@ -78,20 +78,38 @@ public final class BlockCacheFactory {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
 
   private static final String EXTERNAL_BLOCKCACHE_CLASS_KEY = "hbase.blockcache.external.class";
 
+  /**
+   * @deprecated use {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
   private BlockCacheFactory() {
   }
 
   public static BlockCache createBlockCache(Configuration conf) {
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
     if (l1Cache == null) {
       return null;



[hbase] branch branch-2.1 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 49fcd92  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
49fcd92 is described below

commit 49fcd92bdf46fc130f20202c3f6693e88272ab04
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 12:05:31 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Co-authored-by: Reid Chan 
Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/io/hfile/CacheConfig.java  | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
index a022552..499130a 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
@@ -131,10 +131,8 @@ public class CacheConfig {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
@@ -145,6 +143,21 @@ public class CacheConfig {
   private static final boolean DROP_BEHIND_CACHE_COMPACTION_DEFAULT = true;
 
   /**
+   * @deprecated use {@link CacheConfig#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
+  /**
    * Enum of all built in external block caches.
    * This is used for config.
    */
@@ -646,6 +659,11 @@ public class CacheConfig {
     if (GLOBAL_BLOCK_CACHE_INSTANCE != null) {
       return GLOBAL_BLOCK_CACHE_INSTANCE;
     }
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key {} is deprecated now, instead please use {}. In future release "
+        + "we will remove the deprecated config.", DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
+        BLOCKCACHE_BLOCKSIZE_KEY);
+    }
     if (blockCacheDisabled) {
       return null;
     }



[hbase] branch branch-1 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 0d8f9b5  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
0d8f9b5 is described below

commit 0d8f9b515b9b422d846a70799d40aea3d5eac264
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 12:19:08 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Co-authored-by: Reid Chan 
Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/io/hfile/CacheConfig.java  | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
index d020eb0..87eae04 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
@@ -129,10 +129,8 @@ public class CacheConfig {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
@@ -142,6 +140,21 @@ public class CacheConfig {
   private static final boolean DROP_BEHIND_CACHE_COMPACTION_DEFAULT = true;
 
   /**
+   * @deprecated use {@link CacheConfig#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
+  /**
    * Enum of all built in external block caches.
    * This is used for config.
    */
@@ -675,6 +688,11 @@ public class CacheConfig {
   public static synchronized BlockCache instantiateBlockCache(Configuration conf) {
     if (GLOBAL_BLOCK_CACHE_INSTANCE != null) return GLOBAL_BLOCK_CACHE_INSTANCE;
     if (blockCacheDisabled) return null;
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key " + DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY +
+        " is deprecated now, instead please use " + BLOCKCACHE_BLOCKSIZE_KEY + ". "
+        + "In future release we will remove the deprecated config.");
+    }
     LruBlockCache l1 = getL1(conf);
     // blockCacheDisabled is set as a side-effect of getL1Internal(), so check it again after the call.
     if (blockCacheDisabled) return null;



[hbase] branch branch-1.4 updated: HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

2019-07-22 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 4502b55  HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize
4502b55 is described below

commit 4502b55bdc863868bc452b7876c15c0ad7400c1e
Author: syedmurtazahassan 
AuthorDate: Tue Jul 23 12:19:08 2019 +0800

HBASE-22610 [BucketCache] Rename hbase.offheapcache.minblocksize

Co-authored-by: Reid Chan 
Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/io/hfile/CacheConfig.java  | 24 +++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
index d020eb0..87eae04 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
@@ -129,10 +129,8 @@ public class CacheConfig {
   /**
    * The target block size used by blockcache instances. Defaults to
    * {@link HConstants#DEFAULT_BLOCKSIZE}.
-   * TODO: this config point is completely wrong, as it's used to determine the
-   * target block size of BlockCache instances. Rename.
    */
-  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+  public static final String BLOCKCACHE_BLOCKSIZE_KEY = "hbase.blockcache.minblocksize";
 
   private static final String EXTERNAL_BLOCKCACHE_KEY = "hbase.blockcache.use.external";
   private static final boolean EXTERNAL_BLOCKCACHE_DEFAULT = false;
@@ -142,6 +140,21 @@ public class CacheConfig {
   private static final boolean DROP_BEHIND_CACHE_COMPACTION_DEFAULT = true;
 
   /**
+   * @deprecated use {@link CacheConfig#BLOCKCACHE_BLOCKSIZE_KEY} instead.
+   */
+  @Deprecated
+  static final String DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY = "hbase.offheapcache.minblocksize";
+
+  /**
+   * The config point hbase.offheapcache.minblocksize is completely wrong, which is replaced by
+   * {@link BlockCacheFactory#BLOCKCACHE_BLOCKSIZE_KEY}. Keep the old config key here for backward
+   * compatibility.
+   */
+  static {
+    Configuration.addDeprecation(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY, BLOCKCACHE_BLOCKSIZE_KEY);
+  }
+
+  /**
    * Enum of all built in external block caches.
    * This is used for config.
    */
@@ -675,6 +688,11 @@ public class CacheConfig {
   public static synchronized BlockCache instantiateBlockCache(Configuration conf) {
     if (GLOBAL_BLOCK_CACHE_INSTANCE != null) return GLOBAL_BLOCK_CACHE_INSTANCE;
     if (blockCacheDisabled) return null;
+    if (conf.get(DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY) != null) {
+      LOG.warn("The config key " + DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY +
+        " is deprecated now, instead please use " + BLOCKCACHE_BLOCKSIZE_KEY + ". "
+        + "In future release we will remove the deprecated config.");
+    }
     LruBlockCache l1 = getL1(conf);
     // blockCacheDisabled is set as a side-effect of getL1Internal(), so check it again after the call.
     if (blockCacheDisabled) return null;



[hbase] branch master updated: HBASE-22628 Document the custom WAL directory (hbase.wal.dir) usage

2019-07-23 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 7ebf80f  HBASE-22628 Document the custom WAL directory (hbase.wal.dir) usage
7ebf80f is described below

commit 7ebf80fe1df3113fd577259536688d11a77f3d04
Author: Pankaj 
AuthorDate: Wed Jul 24 08:15:46 2019 +0530

HBASE-22628 Document the custom WAL directory (hbase.wal.dir) usage

Signed-off-by Reid Chan 
---
 src/main/asciidoc/_chapters/architecture.adoc | 20 
 1 file changed, 20 insertions(+)

diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 3a94740..d330d85 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2491,6 +2491,26 @@ For example: If source cluster FS client configurations are copied to the destin
 
 NOTE: `DefaultSourceFSConfigurationProvider` supports only `xml` type files. It loads source cluster FS client configuration only once, so if source cluster FS client configuration files are updated, every peer(s) cluster RS must be restarted to reload the configuration.
 
+[[arch.custom.wal.dir]]
+=== Custom WAL Directory
+HBASE-17437 added support for specifying a WAL directory outside the HBase root directory or even in a different FileSystem since 1.3.3/2.0+. Some FileSystems (such as Amazon S3) don’t support append or consistent writes, in such scenario WAL directory needs to be configured in a different FileSystem to avoid loss of writes.
+
+Following configurations were added to accomplish this:
+. `hbase.wal.dir`
++
+This defines where the root WAL directory is located, could be on a different FileSystem than the root directory. WAL directory can not be set to a subdirectory of the root directory. The default value of this is the root directory if unset.
++
+. `hbase.rootdir.perms`
++
+Configures FileSystem permissions to set on the root directory. This is '700' by default.
++
+. `hbase.wal.dir.perms`
++
+Configures FileSystem permissions to set on the WAL directory FileSystem. This is '700' by default.
++
+
+NOTE: While migrating to custom WAL dir (outside the HBase root directory or a different FileSystem) existing WAL files must be copied manually to new WAL dir, otherwise it may lead to data loss/inconsistency as HMaster has no information about previous WAL directory.
+
 [[arch.hdfs]]
 == HDFS
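The three settings documented in the patch above could be applied together in `hbase-site.xml` roughly as follows (the NameNode host and WAL path are illustrative placeholders, not values from the commit):

```xml
<configuration>
  <!-- Put WALs on a separate, append-capable filesystem;
       must not be a subdirectory of hbase.rootdir -->
  <property>
    <name>hbase.wal.dir</name>
    <value>hdfs://namenode.example.com:8020/hbasewal</value>
  </property>
  <!-- Filesystem permissions on the root directory (default 700) -->
  <property>
    <name>hbase.rootdir.perms</name>
    <value>700</value>
  </property>
  <!-- Filesystem permissions on the WAL directory (default 700) -->
  <property>
    <name>hbase.wal.dir.perms</name>
    <value>700</value>
  </property>
</configuration>
```

As the NOTE in the patch warns, existing WAL files must be copied to the new location manually when migrating.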
 



[hbase] branch master updated: HBASE-22628 [Addendum] Document the custom WAL directory (hbase.wal.dir) usage

2019-07-25 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new b83d0c0  HBASE-22628 [Addendum] Document the custom WAL directory (hbase.wal.dir) usage
b83d0c0 is described below

commit b83d0c035ffc2d71c11d02bdf4138abf309f2984
Author: Pankaj 
AuthorDate: Thu Jul 25 12:18:05 2019 +0530

HBASE-22628 [Addendum] Document the custom WAL directory (hbase.wal.dir) usage

Signed-off-by: Reid Chan 
---
 src/main/asciidoc/_chapters/architecture.adoc | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index d330d85..886ac08 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2495,19 +2495,19 @@ NOTE: `DefaultSourceFSConfigurationProvider` supports 
only `xml` type files. It
 === Custom WAL Directory
 HBASE-17437 added support for specifying a WAL directory outside the HBase 
root directory or even in a different FileSystem since 1.3.3/2.0+. Some 
FileSystems (such as Amazon S3) don’t support append or consistent writes, in 
such scenario WAL directory needs to be configured in a different FileSystem to 
avoid loss of writes.
 
-Following configurations were added to accomplish this:
+Following configurations are added to accomplish this:
+
 . `hbase.wal.dir`
 +
 This defines where the root WAL directory is located, could be on a different 
FileSystem than the root directory. WAL directory can not be set to a 
subdirectory of the root directory. The default value of this is the root 
directory if unset.
-+
+
 . `hbase.rootdir.perms`
 +
 Configures FileSystem permissions to set on the root directory. This is '700' 
by default.
-+
+
 . `hbase.wal.dir.perms`
 +
 Configures FileSystem permissions to set on the WAL directory FileSystem. This 
is '700' by default.
-+
 
 NOTE: While migrating to custom WAL dir (outside the HBase root directory or a 
different FileSystem) existing WAL files must be copied manually to new WAL 
dir, otherwise it may lead to data loss/inconsistency as HMaster has no 
information about previous WAL directory.
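
For readers applying the documentation above, here is a minimal hbase-site.xml sketch of the three settings it describes. The HDFS URI is a placeholder, not a recommendation; permissions shown are the documented defaults.

```xml
<!-- Sketch only: hdfs://namenode:8020/hbasewal is a placeholder URI -->
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://namenode:8020/hbasewal</value>
</property>
<property>
  <name>hbase.wal.dir.perms</name>
  <value>700</value>
</property>
<property>
  <name>hbase.rootdir.perms</name>
  <value>700</value>
</property>
```

As the NOTE above warns, existing WAL files must be copied into the new directory manually before restarting with this configuration.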
 



[hbase] branch master updated: HBASE-22702 [Log] 'Group not found for table' is chatty


2019-07-25 Thread reidchan

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new de98fee  HBASE-22702 [Log] 'Group not found for table' is chatty
de98fee is described below

commit de98fee288814c2b62666acda1fb538dd69cbc82
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 3f3e642..9709fb5 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -207,7 +207,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 if (!misplacedRegions.contains(region)) {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -239,7 +239,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : misplacedRegions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -278,7 +278,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : regions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -340,7 +340,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   ServerName assignedServer = region.getValue();
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -379,7 +379,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 try {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   targetRSGInfo = rsGroupInfoManager.getRSGroup(groupName);
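
The change above demotes the routine "group not found" fallback from INFO to DEBUG at every call site. A standalone sketch of the same pattern follows, with a plain map standing in for rsGroupInfoManager and java.util.logging used instead of HBase's logger; both substitutions and all names are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the fallback pattern: when no rsgroup mapping exists for a
// table, log at a debug-equivalent level (FINE) instead of INFO, then
// fall back to the default group. The map stands in for rsGroupInfoManager.
public class GroupFallback {
    private static final Logger LOG = Logger.getLogger(GroupFallback.class.getName());
    static final String DEFAULT_GROUP = "default";

    static String resolveGroup(Map<String, String> groupsByTable, String table) {
        String groupName = groupsByTable.get(table);
        if (groupName == null) {
            // FINE keeps this routine, expected fallback out of normal logs
            LOG.log(Level.FINE, "Group not found for table " + table + ", using default");
            groupName = DEFAULT_GROUP;
        }
        return groupName;
    }

    public static void main(String[] args) {
        Map<String, String> groups = new HashMap<>();
        groups.put("t1", "groupA");
        System.out.println(resolveGroup(groups, "t1")); // groupA
        System.out.println(resolveGroup(groups, "t2")); // default
    }
}
```

Because missing mappings are a normal condition handled by a default, the message is diagnostic rather than actionable, which is the usual criterion for choosing DEBUG over INFO.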



[hbase] branch branch-2 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new cab9211  HBASE-22702 [Log] 'Group not found for table' is chatty
cab9211 is described below

commit cab92111e4f2713295a70fecd5da1e6513366ad1
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 45f5e8f..31cb7ce 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -203,7 +203,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 if (!misplacedRegions.contains(region)) {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -235,7 +235,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : misplacedRegions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -274,7 +274,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : regions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -336,7 +336,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   ServerName assignedServer = region.getValue();
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -385,7 +385,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 try {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   targetRSGInfo = rsGroupInfoManager.getRSGroup(groupName);



[hbase] branch branch-2.2 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new a899c54  HBASE-22702 [Log] 'Group not found for table' is chatty
a899c54 is described below

commit a899c54cbc1344ba8ee0b14088d3562b9362c002
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 45f5e8f..31cb7ce 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -203,7 +203,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 if (!misplacedRegions.contains(region)) {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -235,7 +235,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : misplacedRegions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -274,7 +274,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : regions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -336,7 +336,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   ServerName assignedServer = region.getValue();
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -385,7 +385,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 try {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   targetRSGInfo = rsGroupInfoManager.getRSGroup(groupName);



[hbase] branch branch-2.1 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 141ccc4  HBASE-22702 [Log] 'Group not found for table' is chatty
141ccc4 is described below

commit 141ccc4fd2062274f3fdb8c9fec59c5343fcec06
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 8b40d25..60296dc 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -197,7 +197,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 if (!misplacedRegions.contains(region)) {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -232,7 +232,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : misplacedRegions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -277,7 +277,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : regions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -339,7 +339,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   ServerName assignedServer = region.getValue();
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -389,7 +389,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 try {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   targetRSGInfo = rsGroupInfoManager.getRSGroup(groupName);



[hbase] branch branch-2.0 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-2.0
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.0 by this push:
 new e3e1c3f  HBASE-22702 [Log] 'Group not found for table' is chatty
e3e1c3f is described below

commit e3e1c3fad6986fe5dd720853959e59eb5e224f7b
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 2324018..28b9e75 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -191,7 +191,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 if (!misplacedRegions.contains(region)) {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -226,7 +226,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : misplacedRegions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -271,7 +271,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   for (RegionInfo region : regions) {
 String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -333,7 +333,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
   ServerName assignedServer = region.getValue();
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = rsGroupInfoManager.getRSGroup(groupName);
@@ -383,7 +383,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer {
 try {
   String groupName = 
rsGroupInfoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   targetRSGInfo = rsGroupInfoManager.getRSGroup(groupName);



[hbase] branch branch-1.4 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 620ed11  HBASE-22702 [Log] 'Group not found for table' is chatty
620ed11 is described below

commit 620ed116a4eb4d66fd727069e01758cdf7157ea7
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 5e51128..f1b5fde 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -207,7 +207,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
 if (!misplacedRegions.contains(region)) {
   String groupName = infoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -232,7 +232,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   for (HRegionInfo region : misplacedRegions) {
 String groupName = infoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = infoManager.getRSGroup(groupName);
@@ -283,7 +283,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   for (HRegionInfo region : regions) {
 String groupName = infoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -346,7 +346,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   ServerName assignedServer = region.getValue();
   String groupName = infoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = infoManager.getRSGroup(groupName);
@@ -384,7 +384,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
 try {
   String groupName = infoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   info = infoManager.getRSGroup(groupName);



[hbase] branch branch-1 updated: HBASE-22702 [Log] 'Group not found for table' is chatty

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new cbb9614  HBASE-22702 [Log] 'Group not found for table' is chatty
cbb9614 is described below

commit cbb96144333efb8e855a50826252231955b8ee06
Author: syedmurtazahassan 
AuthorDate: Thu Jul 25 17:13:40 2019 +0200

HBASE-22702 [Log] 'Group not found for table' is chatty

Signed-off-by: Reid Chan 
---
 .../apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
index 5e51128..f1b5fde 100644
--- 
a/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
+++ 
b/hbase-rsgroup/src/main/java/org/apache/hadoop/hbase/rsgroup/RSGroupBasedLoadBalancer.java
@@ -207,7 +207,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
 if (!misplacedRegions.contains(region)) {
   String groupName = infoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   groupToRegion.put(groupName, region);
@@ -232,7 +232,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   for (HRegionInfo region : misplacedRegions) {
 String groupName = infoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 RSGroupInfo info = infoManager.getRSGroup(groupName);
@@ -283,7 +283,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   for (HRegionInfo region : regions) {
 String groupName = infoManager.getRSGroupOfTable(region.getTable());
 if (groupName == null) {
-  LOG.info("Group not found for table " + region.getTable() + ", using 
default");
+  LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
   groupName = RSGroupInfo.DEFAULT_GROUP;
 }
 regionMap.put(groupName, region);
@@ -346,7 +346,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
   ServerName assignedServer = region.getValue();
   String groupName = infoManager.getRSGroupOfTable(regionInfo.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + regionInfo.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + regionInfo.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   RSGroupInfo info = infoManager.getRSGroup(groupName);
@@ -384,7 +384,7 @@ public class RSGroupBasedLoadBalancer implements 
RSGroupableBalancer, LoadBalanc
 try {
   String groupName = infoManager.getRSGroupOfTable(region.getTable());
   if (groupName == null) {
-LOG.info("Group not found for table " + region.getTable() + ", 
using default");
+LOG.debug("Group not found for table " + region.getTable() + ", 
using default");
 groupName = RSGroupInfo.DEFAULT_GROUP;
   }
   info = infoManager.getRSGroup(groupName);



[hbase] branch branch-1 updated: HBASE-22658 region_mover.rb should choose same rsgroup servers as target servers

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 1930856  HBASE-22658 region_mover.rb should choose same rsgroup 
servers as target servers
1930856 is described below

commit 1930856da6fb613715616aa37de75ed0186be3c8
Author: liang.feng 
AuthorDate: Tue Jul 16 22:15:40 2019 +0800

HBASE-22658 region_mover.rb should choose same rsgroup servers as target 
servers

Co-authored-by: Reid Chan 
Signed-off-by: stack 
Signed-off-by: Andrew Purtell 
Signed-off-by: Reid Chan 
---
 bin/region_mover.rb | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/bin/region_mover.rb b/bin/region_mover.rb
index 03b7c01..3e4f020 100644
--- a/bin/region_mover.rb
+++ b/bin/region_mover.rb
@@ -41,6 +41,10 @@ import org.apache.commons.logging.LogFactory
 import org.apache.hadoop.hbase.protobuf.ProtobufUtil
 import org.apache.hadoop.hbase.ServerName
 import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin
+import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient
+import org.apache.hadoop.hbase.client.ConnectionFactory
+import org.apache.hadoop.hbase.net.Address
 
 # Name of this script
 NAME = "region_mover"
@@ -299,6 +303,17 @@ def unloadRegions(options, hostname, port)
   # Get an admin instance
   admin = HBaseAdmin.new(config)
   servers = getServers(admin)
+  # If rsgroup is enabled, get the servers that belong to the same rsgroup as the given server
+  if isEnableRSGroup(admin)
+$LOG.info("RegionServer group is enabled.")
+begin
+  conn = ConnectionFactory.createConnection(config)
+  rsgroupAdmin = RSGroupAdminClient.new(conn)
+  servers = getSameRSGroupServers(servers, rsgroupAdmin, hostname, port)
+ensure
+  conn.close()
+end
+  end
   # Remove the server we are unloading from from list of servers.
   # Side-effect is the servername that matches this hostname 
   servername = stripServer(servers, hostname, port)
@@ -432,6 +447,29 @@ def getFilename(options, targetServer, port)
   return filename
 end
 
+# Get servers in the same regionserver group as the given server
+def getSameRSGroupServers(servers, rsgroupAdmin, hostname, port)
+  results = []
+  rsgroup = rsgroupAdmin.getRSGroupOfServer(Address.fromParts(hostname,
+java.lang.Integer.parseInt(port)))
+  # rsgroup is either the default group or a named one; it can't be nil
+  $LOG.info("Getting servers list from group: " + rsgroup.getName())
+  rsservers = rsgroup.getServers()
+  servers.each do |server|
+servername = ServerName.parseServerName(server)
+tmp = Address.fromParts(servername.getHostname(), servername.getPort())
+if rsservers.contains(tmp)
+  results << servername.getServerName()
+end
+  end
+  return results
+end
+
+# Determine whether rsgroup has been enabled
+def isEnableRSGroup(admin)
+  coprocessors = java.util.Arrays.asList(admin.getMasterCoprocessors());
+  return coprocessors.contains("RSGroupAdminEndpoint")
+end
 
 # Do command-line parsing
 options = {}
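
The Ruby getSameRSGroupServers helper above narrows the candidate server list to members of the unloading server's rsgroup. Below is a rough Java translation of that filtering step, with HBase's ServerName and Address types replaced by plain strings; the names and string formats are illustrative only.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: keep only the servers whose host:port address appears in the
// rsgroup's server set, mirroring the loop in getSameRSGroupServers.
public class SameGroupFilter {
    static List<String> sameGroupServers(List<String> servers, Set<String> groupAddresses) {
        List<String> results = new ArrayList<>();
        for (String server : servers) {
            // server is "host,port,startcode"; the group set holds "host:port"
            String[] parts = server.split(",");
            String address = parts[0] + ":" + parts[1];
            if (groupAddresses.contains(address)) {
                results.add(server);
            }
        }
        return results;
    }

    public static void main(String[] args) {
        Set<String> group = new HashSet<>(Arrays.asList("rs1:16020", "rs2:16020"));
        List<String> servers = Arrays.asList(
            "rs1,16020,1563000000000", "rs3,16020,1563000000001");
        // only rs1 is in the group, so only it survives the filter
        System.out.println(sameGroupServers(servers, group));
    }
}
```

This keeps region_mover.rb from picking a target server outside the source server's rsgroup, which would otherwise trigger a corrective move by the rsgroup balancer later.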



[hbase] branch branch-1.4 updated: HBASE-22658 region_mover.rb should choose same rsgroup servers as target servers

2019-07-25 Thread reidchan

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 3c4d591  HBASE-22658 region_mover.rb should choose same rsgroup 
servers as target servers
3c4d591 is described below

commit 3c4d5911a1cf127056d6c695fe787d8e7eb071a6
Author: liang.feng 
AuthorDate: Tue Jul 16 22:15:40 2019 +0800

HBASE-22658 region_mover.rb should choose same rsgroup servers as target 
servers

Co-authored-by: Reid Chan 
Signed-off-by: stack 
Signed-off-by: Andrew Purtell 
Signed-off-by: Reid Chan 
---
 bin/region_mover.rb | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/bin/region_mover.rb b/bin/region_mover.rb
index 03b7c01..3e4f020 100644
--- a/bin/region_mover.rb
+++ b/bin/region_mover.rb
@@ -41,6 +41,10 @@ import org.apache.commons.logging.LogFactory
 import org.apache.hadoop.hbase.protobuf.ProtobufUtil
 import org.apache.hadoop.hbase.ServerName
 import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin
+import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient
+import org.apache.hadoop.hbase.client.ConnectionFactory
+import org.apache.hadoop.hbase.net.Address
 
 # Name of this script
 NAME = "region_mover"
@@ -299,6 +303,17 @@ def unloadRegions(options, hostname, port)
   # Get an admin instance
   admin = HBaseAdmin.new(config)
   servers = getServers(admin)
+  # If rsgroup is enabled, get servers belonging to the same rsgroup as the given server
+  if isEnableRSGroup(admin)
+    $LOG.info("RegionServer group is enabled.")
+    begin
+      conn = ConnectionFactory.createConnection(config)
+      rsgroupAdmin = RSGroupAdminClient.new(conn)
+      servers = getSameRSGroupServers(servers, rsgroupAdmin, hostname, port)
+    ensure
+      conn.close()
+    end
+  end
   # Remove the server we are unloading from from list of servers.
   # Side-effect is the servername that matches this hostname 
   servername = stripServer(servers, hostname, port)
@@ -432,6 +447,29 @@ def getFilename(options, targetServer, port)
   return filename
 end
 
+# Get servers in the same regionserver group as the given server
+def getSameRSGroupServers(servers, rsgroupAdmin, hostname, port)
+  results = []
+  rsgroup = rsgroupAdmin.getRSGroupOfServer(Address.fromParts(hostname,
+java.lang.Integer.parseInt(port)))
+  # rsgroup must be default or others, can't be nil
+  $LOG.info("Getting servers list from group: " + rsgroup.getName())
+  rsservers = rsgroup.getServers()
+  servers.each do |server|
+    servername = ServerName.parseServerName(server)
+    tmp = Address.fromParts(servername.getHostname(), servername.getPort())
+    if rsservers.contains(tmp)
+      results << servername.getServerName()
+    end
+  end
+  return results
+end
+
+# Determine whether rsgroup has been enabled
+def isEnableRSGroup(admin)
+  coprocessors = java.util.Arrays.asList(admin.getMasterCoprocessors());
+  return coprocessors.contains("RSGroupAdminEndpoint")
+end
 
 # Do command-line parsing
 options = {}
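The server-selection logic that the patch adds to region_mover.rb can be sketched in plain Ruby, without the JRuby/HBase classes. This is an illustrative sketch only: the "host,port,startcode" server-name layout and the rsgroup represented as a list of "host:port" strings are stand-ins for ServerName and Address, not the real API.

```ruby
# Minimal sketch of rsgroup-aware target selection, assuming server
# names of the form "host,port,startcode" and an rsgroup given as a
# plain list of "host:port" addresses (stand-ins for illustration).
def same_rsgroup_servers(servers, rsgroup_addresses)
  results = []
  servers.each do |server|
    host, port, _startcode = server.split(',')
    # Keep only servers whose address is a member of the rsgroup
    results << server if rsgroup_addresses.include?("#{host}:#{port}")
  end
  results
end

group   = ['rs1:16020', 'rs2:16020']
servers = ['rs1,16020,1', 'rs2,16020,2', 'rs3,16020,3']
puts same_rsgroup_servers(servers, group).inspect
# => ["rs1,16020,1", "rs2,16020,2"]
```

As in the patch, the filter runs only when the RSGroupAdminEndpoint coprocessor is loaded; otherwise the full server list is used unchanged.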



[hbase] branch master updated: HBASE-22609 [Docs] More detail documentation about 'hbase.server.thread.wakefrequency'

2019-08-04 Thread reidchan

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new a62fdcc  HBASE-22609 [Docs] More detail documentation about 
'hbase.server.thread.wakefrequency'
a62fdcc is described below

commit a62fdccd3b66ff2740d1a034a84f52796baf4b8b
Author: Reid Chan 
AuthorDate: Mon Aug 5 14:51:00 2019 +0800

HBASE-22609 [Docs] More detail documentation about 
'hbase.server.thread.wakefrequency'

Signed-off-by: stack 
---
 hbase-common/src/main/resources/hbase-default.xml | 26 +++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/hbase-common/src/main/resources/hbase-default.xml 
b/hbase-common/src/main/resources/hbase-default.xml
index d9f5854..877cd74 100644
--- a/hbase-common/src/main/resources/hbase-default.xml
+++ b/hbase-common/src/main/resources/hbase-default.xml
@@ -606,8 +606,7 @@ possible configurations would overwhelm and obscure the 
important.
   Then the cluster's availability is at least 99% when 
balancing.
   
   
-hbase.balancer.period
-
+hbase.balancer.period
 30
 Period at which the region balancer runs in the 
Master.
   
@@ -631,8 +630,27 @@ possible configurations would overwhelm and obscure the 
important.
   
 hbase.server.thread.wakefrequency
 1
-Time to sleep in between searches for work (in milliseconds).
-Used as sleep interval by service threads such as log roller.
+In master side, this config is the period used for FS related 
behaviors:
+  checking if hdfs is out of safe mode, setting or checking hbase.version 
file,
+  setting or checking hbase.id file. Using default value should be fine.
+  In regionserver side, this config is used in several places: flushing 
check interval,
+  compaction check interval, wal rolling check interval. Specially, admin 
can tune
+  flushing and compaction check interval by 
hbase.regionserver.flush.check.period
+  and hbase.regionserver.compaction.check.period. (in 
milliseconds)
+  
+  
+hbase.regionserver.flush.check.period
+${hbase.server.thread.wakefrequency}
+It determines the flushing check period of PeriodicFlusher in 
regionserver.
+  If unset, it uses hbase.server.thread.wakefrequency as default value.
+  (in milliseconds)
+  
+  
+hbase.regionserver.compaction.check.period
+${hbase.server.thread.wakefrequency}
+It determines the compaction check period of 
CompactionChecker in regionserver.
+  If unset, it uses hbase.server.thread.wakefrequency as default value.
+  (in milliseconds)
   
   
 hbase.server.versionfile.writeattempts
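The documented fallback (hbase.regionserver.flush.check.period and hbase.regionserver.compaction.check.period default to hbase.server.thread.wakefrequency when unset) is a layered-default lookup. A plain-Ruby sketch of that resolution order follows; the hash stands in for the site configuration, and the 10_000 ms shipped default is an assumption, since the archive truncates the actual value in hbase-default.xml.

```ruby
# Resolve a check period: use the specific key if set, otherwise fall
# back to hbase.server.thread.wakefrequency, otherwise its shipped
# default (10_000 ms assumed here; the archived XML truncates it).
WAKE_FREQUENCY_DEFAULT = 10_000

def check_period(conf, key)
  conf[key] || conf['hbase.server.thread.wakefrequency'] || WAKE_FREQUENCY_DEFAULT
end

conf = { 'hbase.server.thread.wakefrequency' => 1_000 }
puts check_period(conf, 'hbase.regionserver.flush.check.period')  # => 1000
conf['hbase.regionserver.flush.check.period'] = 500
puts check_period(conf, 'hbase.regionserver.flush.check.period')  # => 500
```

Tuning the specific key thus never affects the master-side uses of the wake frequency, which is the point of splitting the properties.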



[hbase] branch branch-1 updated: HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

2019-08-13 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new f887207  HBASE-22774 [WAL] RegionGroupingStrategy loses its function 
after split
f887207 is described below

commit f887207322ee4907c46580b628cb828e3f5539c9
Author: Reid Chan 
AuthorDate: Wed Aug 14 10:27:55 2019 +0800

HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

Signed-off-by: Peter Somogyi 
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   6 +-
 .../hadoop/hbase/wal/BoundedGroupingStrategy.java  |   2 +-
 .../hbase/wal/NamespaceGroupingStrategy.java   |   7 +-
 .../hadoop/hbase/wal/RegionGroupingProvider.java   |   3 +-
 .../hbase/regionserver/TestSplitTransaction.java   | 306 -
 5 files changed, 306 insertions(+), 18 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index c1c101a..b137c97 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7180,9 +7180,11 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // Move the files from the temporary .splits to the final /table/region 
directory
 fs.commitDaughterRegion(hri);
 
+// rsServices can be null in UT
+WAL daughterWAL = rsServices == null ? getWAL() : rsServices.getWAL(hri);
 // Create the daughter HRegion instance
-HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(), 
fs.getFileSystem(),
-this.getBaseConf(), hri, this.getTableDesc(), rsServices);
+HRegion r = HRegion.newHRegion(this.fs.getTableDir(), daughterWAL,
+  fs.getFileSystem(), this.getBaseConf(), hri, this.getTableDesc(), 
rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount() / 2);
 r.writeRequestsCount.set(this.getWriteRequestsCount() / 2);
 return r;
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
index 06f8792..b3366c2 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
@@ -72,7 +72,7 @@ public class BoundedGroupingStrategy implements 
RegionGroupingStrategy{
 int regionGroupNumber = config.getInt(NUM_REGION_GROUPS, 
DEFAULT_NUM_REGION_GROUPS);
 groupNames = new String[regionGroupNumber];
 for (int i = 0; i < regionGroupNumber; i++) {
-  groupNames[i] = providerId + GROUP_NAME_DELIMITER + "regiongroup-" + i;
+  groupNames[i] = "regiongroup-" + i;
 }
   }
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
index 6193592..7b100cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
@@ -31,7 +31,6 @@ import 
org.apache.hadoop.hbase.wal.RegionGroupingProvider.RegionGroupingStrategy
  */
 @InterfaceAudience.Private
 public class NamespaceGroupingStrategy implements RegionGroupingStrategy {
-  private String providerId;
 
   @Override
   public String group(byte[] identifier, byte[] namespace) {
@@ -41,12 +40,10 @@ public class NamespaceGroupingStrategy implements 
RegionGroupingStrategy {
 } else {
   namespaceString = Bytes.toString(namespace);
 }
-return providerId + GROUP_NAME_DELIMITER + namespaceString;
+return namespaceString;
   }
 
   @Override
-  public void init(Configuration config, String providerId) {
-this.providerId = providerId;
-  }
+  public void init(Configuration config, String providerId) {}
 
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
index b853c5b..96cef6f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
@@ -62,7 +62,6 @@ public class RegionGroupingProvider implements WALProvider {
* Map identifiers to a group number.
*/
   public static interface RegionGroupingStrategy {
-String GROUP_NAME_DELIMITER = ".";
 
 /**
  * Given an identifier and a namespace, pick a group.
@@ -252,7 +251,7 @@ public class RegionGroupingProvider implements WALProvider {
 publ
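The patch above only drops the providerId prefix from group names; the bounded strategy still maps every region onto one of a fixed number of WAL groups named "regiongroup-<i>". A plain-Ruby sketch of that mapping is below; the hash function used here is illustrative, not HBase's actual identifier hash.

```ruby
# Map a region identifier onto one of num_groups fixed WAL groups,
# mirroring the "regiongroup-<i>" names of BoundedGroupingStrategy.
# Ruby's String#hash stands in for HBase's real hashing.
def bounded_group(identifier, num_groups)
  "regiongroup-#{identifier.hash.abs % num_groups}"
end

groups = (1..100).map { |i| bounded_group("region-#{i}", 4) }.uniq.sort
puts groups.inspect  # at most 4 distinct group names
```

Because the mapping is deterministic within a process, a region always lands in the same group, which is what lets the recovered daughter regions find their WAL after the split fix above.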

[hbase] branch branch-1.4 updated: HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

2019-08-13 Thread reidchan

reidchan pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new ae3c106  HBASE-22774 [WAL] RegionGroupingStrategy loses its function 
after split
ae3c106 is described below

commit ae3c106131bca3f1b25d8e53e1409872a104a2a1
Author: Reid Chan 
AuthorDate: Wed Aug 14 10:27:55 2019 +0800

HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

Signed-off-by: Peter Somogyi 
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   6 +-
 .../hadoop/hbase/wal/BoundedGroupingStrategy.java  |   2 +-
 .../hbase/wal/NamespaceGroupingStrategy.java   |   7 +-
 .../hadoop/hbase/wal/RegionGroupingProvider.java   |   3 +-
 .../hbase/regionserver/TestSplitTransaction.java   | 306 -
 5 files changed, 306 insertions(+), 18 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index a458f39..9ccb677 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -7169,9 +7169,11 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // Move the files from the temporary .splits to the final /table/region 
directory
 fs.commitDaughterRegion(hri);
 
+// rsServices can be null in UT
+WAL daughterWAL = rsServices == null ? getWAL() : rsServices.getWAL(hri);
 // Create the daughter HRegion instance
-HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(), 
fs.getFileSystem(),
-this.getBaseConf(), hri, this.getTableDesc(), rsServices);
+HRegion r = HRegion.newHRegion(this.fs.getTableDir(), daughterWAL,
+  fs.getFileSystem(), this.getBaseConf(), hri, this.getTableDesc(), 
rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount() / 2);
 r.writeRequestsCount.set(this.getWriteRequestsCount() / 2);
 return r;
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
index 06f8792..b3366c2 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
@@ -72,7 +72,7 @@ public class BoundedGroupingStrategy implements 
RegionGroupingStrategy{
 int regionGroupNumber = config.getInt(NUM_REGION_GROUPS, 
DEFAULT_NUM_REGION_GROUPS);
 groupNames = new String[regionGroupNumber];
 for (int i = 0; i < regionGroupNumber; i++) {
-  groupNames[i] = providerId + GROUP_NAME_DELIMITER + "regiongroup-" + i;
+  groupNames[i] = "regiongroup-" + i;
 }
   }
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
index 6193592..7b100cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
@@ -31,7 +31,6 @@ import 
org.apache.hadoop.hbase.wal.RegionGroupingProvider.RegionGroupingStrategy
  */
 @InterfaceAudience.Private
 public class NamespaceGroupingStrategy implements RegionGroupingStrategy {
-  private String providerId;
 
   @Override
   public String group(byte[] identifier, byte[] namespace) {
@@ -41,12 +40,10 @@ public class NamespaceGroupingStrategy implements 
RegionGroupingStrategy {
 } else {
   namespaceString = Bytes.toString(namespace);
 }
-return providerId + GROUP_NAME_DELIMITER + namespaceString;
+return namespaceString;
   }
 
   @Override
-  public void init(Configuration config, String providerId) {
-this.providerId = providerId;
-  }
+  public void init(Configuration config, String providerId) {}
 
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
index b853c5b..96cef6f 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
@@ -62,7 +62,6 @@ public class RegionGroupingProvider implements WALProvider {
* Map identifiers to a group number.
*/
   public static interface RegionGroupingStrategy {
-String GROUP_NAME_DELIMITER = ".";
 
 /**
  * Given an identifier and a namespace, pick a group.
@@ -252,7 +251,7 @@ public class RegionGroupingProvider implements WALProvider {
  

[hbase] branch branch-1.3 updated: HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

2019-08-13 Thread reidchan

reidchan pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
 new d2b310b  HBASE-22774 [WAL] RegionGroupingStrategy loses its function 
after split
d2b310b is described below

commit d2b310bca8140515b436aa34b2bda5c8d0236803
Author: Reid Chan 
AuthorDate: Wed Aug 14 10:27:55 2019 +0800

HBASE-22774 [WAL] RegionGroupingStrategy loses its function after split

Signed-off-by: Peter Somogyi 

Conflicts:

hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
---
 .../apache/hadoop/hbase/regionserver/HRegion.java  |   6 +-
 .../hadoop/hbase/wal/BoundedGroupingStrategy.java  |   2 +-
 .../hbase/wal/NamespaceGroupingStrategy.java   |   7 +-
 .../hadoop/hbase/wal/RegionGroupingProvider.java   |   3 +-
 .../hbase/regionserver/TestSplitTransaction.java   | 304 -
 5 files changed, 303 insertions(+), 19 deletions(-)

diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index 33e953f..c988863 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -6996,9 +6996,11 @@ public class HRegion implements HeapSize, 
PropagatingConfigurationObserver, Regi
 // Move the files from the temporary .splits to the final /table/region 
directory
 fs.commitDaughterRegion(hri);
 
+// rsServices can be null in UT
+WAL daughterWAL = rsServices == null ? getWAL() : rsServices.getWAL(hri);
 // Create the daughter HRegion instance
-HRegion r = HRegion.newHRegion(this.fs.getTableDir(), this.getWAL(), 
fs.getFileSystem(),
-this.getBaseConf(), hri, this.getTableDesc(), rsServices);
+HRegion r = HRegion.newHRegion(this.fs.getTableDir(), daughterWAL,
+  fs.getFileSystem(), this.getBaseConf(), hri, this.getTableDesc(), 
rsServices);
 r.readRequestsCount.set(this.getReadRequestsCount() / 2);
 r.writeRequestsCount.set(this.getWriteRequestsCount() / 2);
 return r;
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
index 06f8792..b3366c2 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedGroupingStrategy.java
@@ -72,7 +72,7 @@ public class BoundedGroupingStrategy implements 
RegionGroupingStrategy{
 int regionGroupNumber = config.getInt(NUM_REGION_GROUPS, 
DEFAULT_NUM_REGION_GROUPS);
 groupNames = new String[regionGroupNumber];
 for (int i = 0; i < regionGroupNumber; i++) {
-  groupNames[i] = providerId + GROUP_NAME_DELIMITER + "regiongroup-" + i;
+  groupNames[i] = "regiongroup-" + i;
 }
   }
 
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
index 6193592..7b100cd 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/NamespaceGroupingStrategy.java
@@ -31,7 +31,6 @@ import 
org.apache.hadoop.hbase.wal.RegionGroupingProvider.RegionGroupingStrategy
  */
 @InterfaceAudience.Private
 public class NamespaceGroupingStrategy implements RegionGroupingStrategy {
-  private String providerId;
 
   @Override
   public String group(byte[] identifier, byte[] namespace) {
@@ -41,12 +40,10 @@ public class NamespaceGroupingStrategy implements 
RegionGroupingStrategy {
 } else {
   namespaceString = Bytes.toString(namespace);
 }
-return providerId + GROUP_NAME_DELIMITER + namespaceString;
+return namespaceString;
   }
 
   @Override
-  public void init(Configuration config, String providerId) {
-this.providerId = providerId;
-  }
+  public void init(Configuration config, String providerId) {}
 
 }
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
index a725989..e96f4d7 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java
@@ -61,7 +61,6 @@ public class RegionGroupingProvider implements WALProvider {
* Map identifiers to a group number.
*/
   public static interface RegionGroupingStrategy {
-String GROUP_NAME_DELIMITER = ".";
 
 /**
  * Given an identif

[hbase] branch master updated: HBASE-22873 Typo in block caching docs

2019-08-17 Thread reidchan

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new 7903f55  HBASE-22873 Typo in block caching docs
7903f55 is described below

commit 7903f55c18d561d7451e91b8a014936a3116f142
Author: Shuai Lin 
AuthorDate: Sat Aug 17 19:10:46 2019 +0800

HBASE-22873 Typo in block caching docs

Signed-off-by: Reid Chan 
---
 src/main/asciidoc/_chapters/architecture.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index a2fe2f1..848b8fd 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -837,7 +837,7 @@ Here are two use cases:
   Setting block caching on such a table is a waste of memory and CPU cycles, 
more so that it will generate more garbage to pick up by the JVM.
   For more information on monitoring GC, see <>.
 * Mapping a table: In a typical MapReduce job that takes a table in input, 
every row will be read only once so there's no need to put them into the block 
cache.
-  The Scan object has the option of turning this off via the setCaching method 
(set it to false). You can still keep block caching turned on on this table if 
you need fast random read access.
+  The Scan object has the option of turning this off via the setCacheBlocks 
method (set it to false). You can still keep block caching turned on on this 
table if you need fast random read access.
   An example would be counting the number of rows in a table that serves live 
traffic, caching every block of that table would create massive churn and would 
surely evict data that's currently in use.
 
 [[data.blocks.in.fscache]]



[hbase] branch branch-1 updated: HBASE-25031 [Flaky Test] TestReplicationDisableInactivePeer#testDisableInactivePeer (#2402)

2020-09-27 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 982aa33  HBASE-25031 [Flaky Test] 
TestReplicationDisableInactivePeer#testDisableInactivePeer (#2402)
982aa33 is described below

commit 982aa33cda77225c29afb918b8812b441e44a80c
Author: Reid Chan 
AuthorDate: Mon Sep 28 12:23:59 2020 +0800

HBASE-25031 [Flaky Test] 
TestReplicationDisableInactivePeer#testDisableInactivePeer (#2402)

Signed-off-by: Viraj Jasani
---
 .../hadoop/hbase/replication/TestReplicationDisableInactivePeer.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java
index d73d7f8..9896b1b 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java
@@ -62,6 +62,7 @@ public class TestReplicationDisableInactivePeer extends 
TestReplicationBase {
 // disable and start the peer
 admin.disablePeer("2");
 utility2.startMiniHBaseCluster(1, 2);
+htable2 = utility2.getConnection().getTable(tableName);
 Get get = new Get(rowkey);
 for (int i = 0; i < NB_RETRIES; i++) {
   Result res = htable2.get(get);



[hbase] branch branch-1 updated (982aa33 -> ccb4e89)

2020-09-27 Thread reidchan

reidchan pushed a change to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 982aa33  HBASE-25031 [Flaky Test] 
TestReplicationDisableInactivePeer#testDisableInactivePeer (#2402)
 add ccb4e89  HBASE-25030 [Flaky Test] 
TestRestartCluster#testClusterRestart (#2401)

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java | 1 +
 1 file changed, 1 insertion(+)



[hbase] branch branch-1 updated: HBASE-25025 [Flaky Test][branch-1] TestFromClientSide#testCheckAndDeleteWithCompareOp (#2396)

2020-09-28 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 93b76fd  HBASE-25025 [Flaky Test][branch-1] 
TestFromClientSide#testCheckAndDeleteWithCompareOp (#2396)
93b76fd is described below

commit 93b76fdb322130b474a122494a3844893e628443
Author: Reid Chan 
AuthorDate: Tue Sep 29 14:38:22 2020 +0800

HBASE-25025 [Flaky Test][branch-1] 
TestFromClientSide#testCheckAndDeleteWithCompareOp (#2396)

Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/client/TestFromClientSide.java| 92 +-
 1 file changed, 55 insertions(+), 37 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
index 0e715a9..4c2ec1f 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
@@ -5067,73 +5067,91 @@ public class TestFromClientSide {
 FAMILY);
 
TEST_UTIL.waitTableAvailable(TableName.valueOf("testCheckAndDeleteWithCompareOp"),
 1);
 
-Put put2 = new Put(ROW);
-put2.add(FAMILY, QUALIFIER, value2);
-table.put(put2);
+Put  = new Put(ROW);
+.add(FAMILY, QUALIFIER, value2);
 
-Put put3 = new Put(ROW);
-put3.add(FAMILY, QUALIFIER, value3);
+Put  = new Put(ROW);
+.add(FAMILY, QUALIFIER, value3);
 
 Delete delete = new Delete(ROW);
 delete.deleteColumns(FAMILY, QUALIFIER);
 
 // cell = "", using "" to compare only LESS/LESS_OR_EQUAL/NOT_EQUAL
 // turns out "match"
+table.put();
+assertTrue(verifyPut(table, , value2));
 boolean ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, 
CompareOp.GREATER, value1, delete);
-assertEquals(ok, false);
+//  is less than , > || >= should be false
+assertFalse(ok);
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.EQUAL, value1, 
delete);
-assertEquals(ok, false);
+assertFalse(ok);
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, 
CompareOp.GREATER_OR_EQUAL, value1, delete);
-assertEquals(ok, false);
+assertFalse(ok);
+//  is less than , < || <= should be true
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.LESS, value1, 
delete);
-assertEquals(ok, true);
-table.put(put2);
+assertTrue(ok);
+table.put();
+assertTrue(verifyPut(table, , value2));
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.LESS_OR_EQUAL, 
value1, delete);
-assertEquals(ok, true);
-table.put(put2);
-
-assertEquals(ok, true);
+assertTrue(ok);
 
 // cell = "", using "" to compare only 
LARGER/LARGER_OR_EQUAL/NOT_EQUAL
 // turns out "match"
-table.put(put3);
+table.put();
+assertTrue(verifyPut(table, , value3));
+//  is larger than ,  < || <= should be false
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.LESS, value4, 
delete);
-
-assertEquals(ok, false);
+assertFalse(ok);
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.LESS_OR_EQUAL, 
value4, delete);
-
-assertEquals(ok, false);
+assertFalse(ok);
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.EQUAL, value4, 
delete);
-
-assertEquals(ok, false);
+assertFalse(ok);
+//  is larger than , (> || >= || !=) should be true
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.GREATER, 
value4, delete);
-
-assertEquals(ok, true);
-table.put(put3);
+assertTrue(ok);
+table.put();
+assertTrue(verifyPut(table, , value3));
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, 
CompareOp.GREATER_OR_EQUAL, value4, delete);
-assertEquals(ok, true);
-table.put(put3);
+assertTrue(ok);
+table.put();
+assertTrue(verifyPut(table, , value3));
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.NOT_EQUAL, 
value4, delete);
-
-assertEquals(ok, true);
+assertTrue(ok);
 
 // cell = "", using "" to compare only 
GREATER_OR_EQUAL/LESS_OR_EQUAL/EQUAL
 // turns out "match"
-table.put(put2);
+//  equals to , != should all be false
+table.put();
+assertTrue(verifyPut(table, , value2));
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.GREATER, 
value2, delete);
-assertEquals(ok, false);
+assertFalse(ok);
 ok = table.checkAndDelete(ROW, FAMILY, QUALIFIER, CompareOp.NOT_EQUAL, 
value2, delete);
-assertEquals(ok, false);
+assertFalse(ok);
 ok = tabl
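The assertions being tightened in this test all reduce to one rule: checkAndDelete evaluates the given CompareOp between the stored cell value and the supplied value, and performs the delete only when the comparison matches. A plain-Ruby sketch of the operator evaluation is below, using lexicographic string order as a stand-in for HBase's byte comparison; which operand is the stored cell versus the supplied value is exactly the subtlety the test comments pin down, so the sketch shows only the operator table itself.

```ruby
# Evaluate a CompareOp-style operator over two strings using
# lexicographic order (Ruby's <=> on String compares bytewise here,
# standing in for HBase's byte-array comparison).
OPS = {
  'LESS'             => ->(c) { c <  0 },
  'LESS_OR_EQUAL'    => ->(c) { c <= 0 },
  'EQUAL'            => ->(c) { c == 0 },
  'NOT_EQUAL'        => ->(c) { c != 0 },
  'GREATER_OR_EQUAL' => ->(c) { c >= 0 },
  'GREATER'          => ->(c) { c >  0 },
}

def compare_matches?(op, left, right)
  OPS.fetch(op).call(left <=> right)
end

puts compare_matches?('LESS', 'value1', 'value2')     # => true
puts compare_matches?('EQUAL', 'value2', 'value2')    # => true
puts compare_matches?('GREATER', 'value1', 'value2')  # => false
```

Re-putting the row and verifying it between checks, as the patched test does, makes each comparison start from a known cell value instead of whatever the previous delete left behind.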

[hbase] branch branch-1 updated: HBASE-25114 [Flake Test][branch-1] TestFromClientSide#testCacheOnWriteEvictOnClose (#2470)

2020-09-29 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new e719a5b  HBASE-25114 [Flake Test][branch-1] 
TestFromClientSide#testCacheOnWriteEvictOnClose (#2470)
e719a5b is described below

commit e719a5b589b3eb2fd7bff47e6332aa3d61e3b2f6
Author: Reid Chan 
AuthorDate: Wed Sep 30 00:17:40 2020 +0800

HBASE-25114 [Flake Test][branch-1] 
TestFromClientSide#testCacheOnWriteEvictOnClose (#2470)

Signed-off-by: Viraj Jasani 
---
 .../test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
index 4c2ec1f..c10ff07 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
@@ -5386,7 +5386,7 @@ public class TestFromClientSide {
 waitForStoreFileCount(store, 1, 1); // wait 10 seconds max
 assertEquals(1, store.getStorefilesCount());
 // evicted two data blocks and two index blocks and compaction does not 
cache new blocks
-expectedBlockCount = 0;
+expectedBlockCount -= 4;
 assertEquals(expectedBlockCount, cache.getBlockCount());
 expectedBlockHits += 2;
 assertEquals(expectedBlockMiss, cache.getStats().getMissCount());



[hbase] branch branch-1 updated: HBASE-25122 [Flake Test][branch-1] TestExportSnapshotWithTemporaryDirectory (#2472)

2020-09-30 Thread reidchan

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new 6edeb5d  HBASE-25122 [Flake Test][branch-1] 
TestExportSnapshotWithTemporaryDirectory (#2472)
6edeb5d is described below

commit 6edeb5d1ecfd8b0155ff3c9022c6de2101565177
Author: Reid Chan 
AuthorDate: Wed Sep 30 18:55:28 2020 +0800

HBASE-25122 [Flake Test][branch-1] TestExportSnapshotWithTemporaryDirectory 
(#2472)

* Remove unused imports

Signed-off-by: Viraj Jasani 
---
 .../hadoop/hbase/snapshot/TestExportSnapshot.java   |  6 ++
 .../TestExportSnapshotWithTemporaryDirectory.java   | 21 +++--
 2 files changed, 9 insertions(+), 18 deletions(-)

diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
index f045ada..e96a1de 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
@@ -40,8 +40,6 @@ import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.master.snapshot.SnapshotManager;
 import 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription;
 import 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo;
@@ -67,7 +65,7 @@ import org.junit.rules.TestRule;
 public class TestExportSnapshot {
   @Rule public final TestRule timeout = CategoryBasedTimeout.builder().
   withTimeout(this.getClass()).withLookingForStuckThread(true).build();
-  private static final Log LOG = LogFactory.getLog(TestExportSnapshot.class);
+  protected static final Log LOG = LogFactory.getLog(TestExportSnapshot.class);
 
  protected final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
 
@@ -79,7 +77,7 @@ public class TestExportSnapshot {
   private TableName tableName;
   private Admin admin;
 
-  public static void setUpBaseConf(Configuration conf) {
+  public static void setUpBaseConf(Configuration conf) throws Exception  {
 conf.setBoolean(SnapshotManager.HBASE_SNAPSHOT_ENABLED, true);
 conf.setInt("hbase.regionserver.msginterval", 100);
 conf.setInt("hbase.client.pause", 250);
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
index d50f262..b9c69d7 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotWithTemporaryDirectory.java
@@ -17,10 +17,6 @@
  */
 package org.apache.hadoop.hbase.snapshot;
 
-import java.io.File;
-import java.nio.file.Paths;
-import java.util.UUID;
-import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.testclassification.MediumTests;
@@ -31,24 +27,21 @@ import org.junit.experimental.categories.Category;
 @Category({MediumTests.class})
 public class TestExportSnapshotWithTemporaryDirectory extends TestExportSnapshot {
 
-  protected static String TEMP_DIR = Paths.get("").toAbsolutePath().toString() + Path.SEPARATOR
-      + UUID.randomUUID().toString();
-
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
-setUpBaseConf(TEST_UTIL.getConfiguration());
+Configuration conf = TEST_UTIL.getConfiguration();
+TestExportSnapshot.setUpBaseConf(conf);
 TEST_UTIL.startMiniCluster(3);
+Path rootDir = TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterFileSystem().getRootDir();
+LOG.info("Root dir: " + rootDir);
+conf.set(SnapshotDescriptionUtils.SNAPSHOT_WORKING_DIR,
+  new Path(rootDir.getParent(), ".tmpdir").toUri().toString());
 TEST_UTIL.startMiniMapReduceCluster();
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
 TestExportSnapshot.tearDownAfterClass();
-FileUtils.deleteDirectory(new File(TEMP_DIR));
   }
 
-  public static void setUpBaseConf(Configuration conf) {
-TestExportSnapshot.setUpBaseConf(conf);
-conf.set(SnapshotDescriptionUtils.SNAPSHOT_WORKING_DIR,  "file://" + new Path(TEMP_DIR, ".tmpdir").toUri());
-  }
-}
\ No newline at end of file
+}
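The fix above moves the snapshot working directory off a random local temp dir and onto a sibling of the cluster root dir (`new Path(rootDir.getParent(), ".tmpdir")`), so the export works within one filesystem. A minimal sketch of just that path derivation, using `java.nio.file` as a stand-in for Hadoop's `Path` (class and method names here are illustrative, not HBase API):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the path derivation the test now performs: the snapshot
// working dir is placed next to the cluster root dir (a ".tmpdir"
// sibling), mirroring new Path(rootDir.getParent(), ".tmpdir").
class SnapshotWorkingDirSketch {
    static String workingDir(String rootDir) {
        Path root = Paths.get(rootDir);
        // Resolve ".tmpdir" against the parent, i.e. a sibling of rootDir.
        return root.getParent().resolve(".tmpdir").toString();
    }

    public static void main(String[] args) {
        System.out.println(workingDir("/hbase/data/root"));
    }
}
```

Because the working dir is derived from the root dir at class setup, it lives on the same filesystem as the snapshot itself, which is what made the temporary-directory test stable.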



[hbase] branch branch-1 updated: HBASE-24849 Branch-1 Backport : HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll (#2194)

2020-10-16 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new e066951  HBASE-24849 Branch-1 Backport : HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll (#2194)
e066951 is described below

commit e06695112a358706344cc8682f2713b43daab340
Author: WenFeiYi 
AuthorDate: Fri Oct 16 20:42:18 2020 +0800

HBASE-24849 Branch-1 Backport : HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll (#2194)

Signed-off-by: Reid Chan 
---
 .../hadoop/hbase/regionserver/LogRoller.java   | 108 ++-
 .../hadoop/hbase/regionserver/TestLogRoller.java   | 114 +
 2 files changed, 194 insertions(+), 28 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
index fd208c2..08d5a33 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.hbase.RemoteExceptionHandler;
 import org.apache.hadoop.hbase.Server;
 import org.apache.hadoop.hbase.regionserver.wal.FSHLog;
 import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
 import org.apache.hadoop.hbase.wal.WAL;
 import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -56,23 +57,27 @@ public class LogRoller extends HasThread {
   private static final Log LOG = LogFactory.getLog(LogRoller.class);
   private final ReentrantLock rollLock = new ReentrantLock();
   private final AtomicBoolean rollLog = new AtomicBoolean(false);
-  private final ConcurrentHashMap<WAL, Boolean> walNeedsRoll =
-      new ConcurrentHashMap<WAL, Boolean>();
+  private final ConcurrentHashMap<WAL, RollController> wals =
+      new ConcurrentHashMap<>();
   private final Server server;
   protected final RegionServerServices services;
-  private volatile long lastrolltime = System.currentTimeMillis();
   // Period to roll log.
-  private final long rollperiod;
+  private final long rollPeriod;
   private final int threadWakeFrequency;
   // The interval to check low replication on hlog's pipeline
-  private long checkLowReplicationInterval;
+  private final long checkLowReplicationInterval;
 
   public void addWAL(final WAL wal) {
-if (null == walNeedsRoll.putIfAbsent(wal, Boolean.FALSE)) {
+if (null == wals.putIfAbsent(wal, new RollController(wal))) {
   wal.registerWALActionsListener(new WALActionsListener.Base() {
 @Override
public void logRollRequested(WALActionsListener.RollRequestReason reason) {
-  walNeedsRoll.put(wal, Boolean.TRUE);
+  RollController controller = wals.get(wal);
+  if (controller == null) {
+wals.putIfAbsent(wal, new RollController(wal));
+controller = wals.get(wal);
+  }
+  controller.requestRoll();
  // TODO logs will contend with each other here, replace with e.g. DelayedQueue
   synchronized(rollLog) {
 rollLog.set(true);
@@ -84,8 +89,8 @@ public class LogRoller extends HasThread {
   }
 
   public void requestRollAll() {
-for (WAL wal : walNeedsRoll.keySet()) {
-  walNeedsRoll.put(wal, Boolean.TRUE);
+for (RollController controller : wals.values()) {
+  controller.requestRoll();
 }
 synchronized(rollLog) {
   rollLog.set(true);
@@ -98,7 +103,7 @@ public class LogRoller extends HasThread {
 super("LogRoller");
 this.server = server;
 this.services = services;
-this.rollperiod = this.server.getConfiguration().
+this.rollPeriod = this.server.getConfiguration().
   getLong("hbase.regionserver.logroll.period", 360);
 this.threadWakeFrequency = this.server.getConfiguration().
   getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
@@ -120,9 +125,9 @@ public class LogRoller extends HasThread {
*/
   void checkLowReplication(long now) {
 try {
-  for (Entry<WAL, Boolean> entry : walNeedsRoll.entrySet()) {
+  for (Entry<WAL, RollController> entry : wals.entrySet()) {
 WAL wal = entry.getKey();
-boolean neeRollAlready = entry.getValue();
+boolean neeRollAlready = entry.getValue().needsRoll(now);
 if(wal instanceof FSHLog && !neeRollAlready) {
   FSHLog hlog = (FSHLog)wal;
   if ((now - hlog.getLastTimeCheckLowReplication())
@@ -139,11 +144,16 @@ public class LogRoller extends HasThread {
   @Override
   public void run() {
 while (!server.isStopped()) {
-  long now = System.currentTimeMillis();
+  long now = EnvironmentEdgeManager.currentTime();
  checkLowReplication(now);
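The patch above replaces the single `walNeedsRoll` boolean map with a per-WAL `RollController`, so a roll requested for one WAL no longer forces a roll of every WAL. A hedged, self-contained sketch of that idea (field and method names are illustrative; the real `RollController` also tracks rolling state and more):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

// Per-WAL roll bookkeeping: each WAL independently tracks whether a roll
// was requested and when it last rolled, so only the WAL that needs a
// roll is rolled.
class RollControllerSketch {
    static final long ROLL_PERIOD_MS = 3_600_000L; // hypothetical roll period

    static class Controller {
        final AtomicBoolean rollRequested = new AtomicBoolean(false);
        final AtomicLong lastRollTime = new AtomicLong(0L);

        void requestRoll() { rollRequested.set(true); }

        // A WAL needs a roll if one was explicitly requested,
        // or if its own roll period has expired.
        boolean needsRoll(long now) {
            return rollRequested.get() || now - lastRollTime.get() > ROLL_PERIOD_MS;
        }
    }

    // Mirrors the wals map in the patch: one controller per WAL.
    static final ConcurrentHashMap<String, Controller> wals = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        long now = 1_000L;
        for (String name : new String[] {"wal-0", "wal-1"}) {
            Controller c = wals.computeIfAbsent(name, k -> new Controller());
            c.lastRollTime.set(now);
        }
        wals.get("wal-0").requestRoll();
        // Only wal-0 needs rolling; wal-1 is left alone.
        System.out.println(wals.get("wal-0").needsRoll(now));
        System.out.println(wals.get("wal-1").needsRoll(now));
    }
}
```

The roller thread can then iterate the map and roll only the controllers whose `needsRoll(now)` is true, which is the behavior the diff's `checkLowReplication` loop relies on.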

[hbase] branch branch-1 updated: HBASE-25195 [branch-1] getNumOpenConnections is not effective (#2557)

2020-10-18 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new d3ac342  HBASE-25195 [branch-1] getNumOpenConnections is not effective (#2557)
d3ac342 is described below

commit d3ac3420e5b9c7ba39497fc9c86e5a47c0332eb1
Author: Reid Chan 
AuthorDate: Mon Oct 19 10:54:15 2020 +0800

HBASE-25195 [branch-1] getNumOpenConnections is not effective (#2557)

Signed-off-by: Viraj Jasani 
---
 .../org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
index 6ba7ea2..2f3745f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerWrapperImpl.java
@@ -65,10 +65,10 @@ public class MetricsHBaseServerWrapperImpl implements MetricsHBaseServerWrapper
 
   @Override
   public int getNumOpenConnections() {
-if (!isServerStarted() || this.server.connectionList == null) {
+if (!isServerStarted()) {
   return 0;
 }
-return server.connectionList.size();
+return server.numConnections;
   }
 
   @Override
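The one-line fix above stops sizing a possibly-null `connectionList` and instead reads the server's maintained `numConnections` counter. A sketch of the counter pattern (names are illustrative, not the actual `RpcServer` fields): bump a dedicated counter on accept/close so the metric read is O(1) and never depends on which connection-tracking structure a given configuration allocates.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Counter-based connection metric: increment on accept, decrement on
// close, and report the counter directly instead of sizing a list that
// may be null or absent in some server configurations.
class ConnectionCountSketch {
    private final AtomicInteger numConnections = new AtomicInteger();

    void onConnectionOpened() { numConnections.incrementAndGet(); }

    void onConnectionClosed() { numConnections.decrementAndGet(); }

    int getNumOpenConnections() { return numConnections.get(); }
}
```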



[hbase] branch master updated: HBASE-25189 [Metrics] Add checkAndPut and checkAndDelete latency metrics at table level (#2549)

2020-10-25 Thread reidchan
This is an automated email from the ASF dual-hosted git repository.

reidchan pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/master by this push:
 new e5d4e2f  HBASE-25189 [Metrics] Add checkAndPut and checkAndDelete latency metrics at table level (#2549)
e5d4e2f is described below

commit e5d4e2fc8138cba0c4a1da2b42b51042da3d9c7e
Author: Reid Chan 
AuthorDate: Sun Oct 25 17:46:14 2020 +0800

HBASE-25189 [Metrics] Add checkAndPut and checkAndDelete latency metrics at table level (#2549)

Signed-off-by: Viraj Jasani 
---
 .../hbase/regionserver/MetricsTableLatencies.java  | 25 +++
 .../regionserver/MetricsTableLatenciesImpl.java| 36 ++
 .../hbase/regionserver/MetricsRegionServer.java| 15 +++--
 .../hadoop/hbase/regionserver/RSRpcServices.java   |  9 --
 .../regionserver/RegionServerTableMetrics.java | 12 
 .../regionserver/TestMetricsRegionServer.java  | 17 ++
 6 files changed, 94 insertions(+), 20 deletions(-)

diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatencies.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatencies.java
index 231bad1..2aeb82b 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatencies.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatencies.java
@@ -53,6 +53,9 @@ public interface MetricsTableLatencies {
   String DELETE_BATCH_TIME = "deleteBatchTime";
   String INCREMENT_TIME = "incrementTime";
   String APPEND_TIME = "appendTime";
+  String CHECK_AND_DELETE_TIME = "checkAndDeleteTime";
+  String CHECK_AND_PUT_TIME = "checkAndPutTime";
+  String CHECK_AND_MUTATE_TIME = "checkAndMutateTime";
 
   /**
* Update the Put time histogram
@@ -125,4 +128,26 @@ public interface MetricsTableLatencies {
* @param t time it took
*/
   void updateScanTime(String tableName, long t);
+
+  /**
+   * Update the CheckAndDelete time histogram.
+   * @param nameAsString The table the metric is for
+   * @param time time it took
+   */
+  void updateCheckAndDelete(String nameAsString, long time);
+
+  /**
+   * Update the CheckAndPut time histogram.
+   * @param nameAsString The table the metric is for
+   * @param time time it took
+   */
+  void updateCheckAndPut(String nameAsString, long time);
+
+  /**
+   * Update the CheckAndMutate time histogram.
+   * @param nameAsString The table the metric is for
+   * @param time time it took
+   */
+  void updateCheckAndMutate(String nameAsString, long time);
+
 }
diff --git a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
index 5a3f3b9..5e13a61 100644
--- a/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
+++ b/hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsTableLatenciesImpl.java
@@ -47,6 +47,9 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
 final MetricHistogram deleteBatchTimeHisto;
 final MetricHistogram scanTimeHisto;
 final MetricHistogram scanSizeHisto;
+final MetricHistogram checkAndDeleteTimeHisto;
+final MetricHistogram checkAndPutTimeHisto;
+final MetricHistogram checkAndMutateTimeHisto;
 
 TableHistograms(DynamicMetricsRegistry registry, TableName tn) {
   getTimeHisto = registry.newTimeHistogram(qualifyMetricsName(tn, GET_TIME));
@@ -60,6 +63,12 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
   qualifyMetricsName(tn, DELETE_BATCH_TIME));
   scanTimeHisto = registry.newTimeHistogram(qualifyMetricsName(tn, SCAN_TIME));
   scanSizeHisto = registry.newSizeHistogram(qualifyMetricsName(tn, SCAN_SIZE));
+  checkAndDeleteTimeHisto =
+    registry.newTimeHistogram(qualifyMetricsName(tn, CHECK_AND_DELETE_TIME));
+  checkAndPutTimeHisto =
+    registry.newTimeHistogram(qualifyMetricsName(tn, CHECK_AND_PUT_TIME));
+  checkAndMutateTimeHisto =
+    registry.newTimeHistogram(qualifyMetricsName(tn, CHECK_AND_MUTATE_TIME));
 }
 
 public void updatePut(long time) {
@@ -97,6 +106,18 @@ public class MetricsTableLatenciesImpl extends BaseSourceImpl implements Metrics
 public void updateScanTime(long t) {
   scanTimeHisto.add(t);
 }
+
+public void updateCheckAndDeleteTime(long t) {
+  checkAndDeleteTimeHisto.add(t);
+}
+
+public void updateCheckAndPutTime(long t) {
+  checkAndPutTimeHisto.add(t);
+}
+
+public void updateCheckAndMutateTime(long t) {
+  checkAndMutateTimeHisto.add(t);