[jira] [Updated] (HBASE-23603) Update Apache POM to version 21 for hbase-connectors

2020-01-22 Thread Jan Hentschel (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-23603:
--
Fix Version/s: connector-1.0.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Update Apache POM to version 21 for hbase-connectors
> 
>
> Key: HBASE-23603
> URL: https://issues.apache.org/jira/browse/HBASE-23603
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Trivial
> Fix For: connector-1.0.1
>
>
> {{hbase-connectors}} currently uses the Apache POM version 18. The latest 
> version is 21.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-connectors] HorizonNet merged pull request #58: HBASE-23603 Updated Apache POM to version 22

2020-01-22 Thread GitBox
HorizonNet merged pull request #58: HBASE-23603 Updated Apache POM to version 22
URL: https://github.com/apache/hbase-connectors/pull/58
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021829#comment-17021829
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #250 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/250/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/250//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/250//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/250//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public. 
> We need to use the Java API RSGroupAdminClient to manage rsgroups, 
> so RSGroupAdminClient should be public.
>  





[jira] [Commented] (HBASE-23647) Make MasterRegistry the default registry impl

2020-01-22 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021801#comment-17021801
 ] 

HBase QA commented on HBASE-23647:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 47 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-18095/client-locate-meta-no-zookeeper Compile 
Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
46s{color} | {color:green} HBASE-18095/client-locate-meta-no-zookeeper passed 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} HBASE-18095/client-locate-meta-no-zookeeper passed 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
33s{color} | {color:green} HBASE-18095/client-locate-meta-no-zookeeper passed 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} HBASE-18095/client-locate-meta-no-zookeeper passed 
{color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
56s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
50s{color} | {color:green} HBASE-18095/client-locate-meta-no-zookeeper passed 
{color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} hbase-common: The patch generated 0 new + 4 
unchanged - 1 fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} The patch passed checkstyle in hbase-client {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} hbase-server: The patch generated 0 new + 451 
unchanged - 2 fixed = 451 total (was 453) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 2s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
7s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}176m  2s{color} 
| {color:red} hbase-server in the patch failed. {color} |

[GitHub] [hbase] Apache-HBase commented on issue #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
Apache-HBase commented on issue #1039: HBASE-23647: Make MasterRegistry the 
default impl.
URL: https://github.com/apache/hbase/pull/1039#issuecomment-577523063
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 51s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
47 new or modified test files.  |
   ||| _ HBASE-18095/client-locate-meta-no-zookeeper Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 46s |  
HBASE-18095/client-locate-meta-no-zookeeper passed  |
   | +1 :green_heart: |  compile  |   1m 45s |  
HBASE-18095/client-locate-meta-no-zookeeper passed  |
   | +1 :green_heart: |  checkstyle  |   2m 33s |  
HBASE-18095/client-locate-meta-no-zookeeper passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  
HBASE-18095/client-locate-meta-no-zookeeper passed  |
   | +0 :ok: |  spotbugs  |   4m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 50s |  
HBASE-18095/client-locate-meta-no-zookeeper passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 28s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 46s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  hbase-common: The patch 
generated 0 new + 4 unchanged - 1 fixed = 4 total (was 5)  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  The patch passed checkstyle 
in hbase-client  |
   | +1 :green_heart: |  checkstyle  |   1m 35s |  hbase-server: The patch 
generated 0 new + 451 unchanged - 2 fixed = 451 total (was 453)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 14s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   7m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  7s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   1m 54s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 176m  2s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 258m 29s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.TestZooKeeper |
   |   | hadoop.hbase.replication.TestReplicationSmallTests |
   |   | hadoop.hbase.client.TestFromClientSide |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1039/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1039 |
   | JIRA Issue | HBASE-23647 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 14c22c9438e5 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-1039/out/precommit/personality/provided.sh
 |
   | git revision | HBASE-18095/client-locate-meta-no-zookeeper / d9bb034c94 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1039/6/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1039/6/testReport/
 |
   | Max. process+thread count | 5220 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1039/6/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

--

[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369943132
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/StartMiniClusterOption.java
 ##
 @@ -46,6 +46,14 @@
* can find the active/primary master with {@link 
MiniHBaseCluster#getMaster()}.
*/
   private final int numMasters;
+
+  /**
+   * Number of masters that always remain standby. These set of masters never 
transition to active
+   * even if an active master does not exist. These are needed for testing 
scenarios where there are
+   * no active masters in the cluster but the cluster connection (backed by 
master registry) should
+   * still work.
+   */
+  private final int numAlwaysStandByMasters;
 
 Review comment:
   Oh, so a hack to get around some test scenarios. Ok. Good.
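   The `numAlwaysStandByMasters` knob is wired through `StartMiniClusterOption`'s builder. As a minimal self-contained sketch of that builder pattern (toy class name and a reduced option set, not the real HBase API):

```java
// Toy model of the StartMiniClusterOption builder; the real class lives in
// hbase-server test code and carries many more options.
final class MiniClusterOptionSketch {
    private final int numMasters;
    private final int numAlwaysStandByMasters;
    private final int numRegionServers;

    private MiniClusterOptionSketch(int numMasters, int numAlwaysStandByMasters,
                                    int numRegionServers) {
        this.numMasters = numMasters;
        this.numAlwaysStandByMasters = numAlwaysStandByMasters;
        this.numRegionServers = numRegionServers;
    }

    int numMasters() { return numMasters; }
    int numAlwaysStandByMasters() { return numAlwaysStandByMasters; }
    int numRegionServers() { return numRegionServers; }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private int numMasters = 1;
        // Standby-only masters default to none, so existing tests are unaffected.
        private int numAlwaysStandByMasters = 0;
        private int numRegionServers = 1;

        Builder numMasters(int n) { this.numMasters = n; return this; }
        Builder numAlwaysStandByMasters(int n) { this.numAlwaysStandByMasters = n; return this; }
        Builder numRegionServers(int n) { this.numRegionServers = n; return this; }

        MiniClusterOptionSketch build() {
            return new MiniClusterOptionSketch(numMasters, numAlwaysStandByMasters,
                numRegionServers);
        }
    }
}
```

   Usage mirrors the test setup quoted later in this thread: `builder().numAlwaysStandByMasters(1).numMasters(1).numRegionServers(1).build()`.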


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369933775
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
 ##
 @@ -268,14 +268,19 @@ public static Configuration 
createClusterConf(Configuration baseConf, String clu
* used to communicate with distant clusters
* @param conf configuration object to configure
* @param key string that contains the 3 required configuratins
-   * @throws IOException
*/
   private static void applyClusterKeyToConf(Configuration conf, String key)
-  throws IOException{
+  throws IOException {
 ZKConfig.ZKClusterKey zkClusterKey = ZKConfig.transformClusterKey(key);
 conf.set(HConstants.ZOOKEEPER_QUORUM, zkClusterKey.getQuorumString());
 conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 
zkClusterKey.getClientPort());
 conf.set(HConstants.ZOOKEEPER_ZNODE_PARENT, zkClusterKey.getZnodeParent());
+// Without the right registry, the above configs are useless. Also, we 
don't use setClass()
+// here because the ConnectionRegistry* classes are not resolvable from 
this module.
+// This will be broken if ZkConnectionRegistry class gets renamed or 
moved. Is there a better
+// way?
 
 Review comment:
   This code is going to waste away. If a user chooses the zk registry, does this code 
still apply? 
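   For context, the cluster key being applied above is the three-part string `<quorum>:<clientPort>:<znodeParent>`. A hedged, self-contained sketch of that parse (the real logic is `ZKConfig.transformClusterKey`; the class and field names below are invented for illustration):

```java
// Illustrative parser for the three-part cluster key that
// applyClusterKeyToConf consumes. Not the real ZKConfig implementation.
final class ClusterKeySketch {
    final String quorum;        // e.g. "zk1,zk2,zk3"
    final int clientPort;       // e.g. 2181
    final String znodeParent;   // e.g. "/hbase"

    private ClusterKeySketch(String quorum, int clientPort, String znodeParent) {
        this.quorum = quorum;
        this.clientPort = clientPort;
        this.znodeParent = znodeParent;
    }

    // Split from the right so commas in the quorum segment cannot
    // confuse the parse.
    static ClusterKeySketch parse(String key) {
        int lastColon = key.lastIndexOf(':');
        int midColon = lastColon > 0 ? key.lastIndexOf(':', lastColon - 1) : -1;
        if (midColon < 0) {
            throw new IllegalArgumentException("Bad cluster key: " + key);
        }
        String quorum = key.substring(0, midColon);
        int port = Integer.parseInt(key.substring(midColon + 1, lastColon));
        String parent = key.substring(lastColon + 1);
        return new ClusterKeySketch(quorum, port, parent);
    }
}
```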




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369940494
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,17 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. RegionServers can continue be up 
independent of
+//   masters' availability.
 
 Review comment:
   Is this not possible with the Master Registry?




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369944879
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/AlwaysStandByMasterManager.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.monitoring.MonitoredTask;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation of ActiveMasterManager that never transitions it's master 
to active state. It
+ * always remains as a stand by master. With the master registry 
implementation (HBASE-18095) it is
+ * expected to have at least one active / standby master always running at any 
point in time since
+ * they serve as the gateway for client connections.
+ *
+ * With this implementation, tests can simulate the scenario of not having an 
active master yet the
+ * client connections to the cluster succeed.
 
 Review comment:
   Is it going to be hell for clients when no Master is up in the cluster?
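   The "always standby" idea boils down to a master whose leadership routine never claims the active role. A toy sketch under invented names (the real class extends ActiveMasterManager and coordinates through ZooKeeper):

```java
// Toy model of the always-standby behavior; interface and class names are
// invented for illustration, not the real HBase API.
interface MasterLeadershipSketch {
    /** Returns true once this master has become the active master. */
    boolean tryBecomeActive();
}

final class AlwaysStandBySketch implements MasterLeadershipSketch {
    @Override
    public boolean tryBecomeActive() {
        // Never compete for the active-master role: stay standby forever, so
        // a test can run a cluster with no active master while the
        // master-registry endpoint this process serves stays reachable.
        return false;
    }
}
```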




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369941187
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
 ##
 @@ -3058,6 +3061,26 @@ private void initConnection() throws IOException {
 this.asyncConnection = 
ClusterConnectionFactory.createAsyncClusterConnection(conf, null, user);
   }
 
+  /**
+   * Resets the connections so that the next time getConnection() is called, a 
new connection is
+   * created. This is needed in cases where the entire cluster / all the 
masters are shutdown and
+   * the connection is not valid anymore.
+   * TODO: There should be a more coherent way of doing this. Unfortunately 
the way tests are
+   *   written, not all start() stop() calls go through this class. Most tests 
directly operate on
+   *   the underlying mini/local hbase cluster. That makes it difficult for 
this wrapper class to
+   *   maintain the connection state automatically. Cleaning this is a much 
bigger refactor.
 
 Review comment:
   Good comment
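   The resetConnections() pattern described in the javadoc above can be modeled as a lazily created, invalidatable holder. A self-contained sketch with invented names:

```java
import java.util.function.Supplier;

// Sketch of the resetConnections() idea: create the connection lazily and
// allow it to be invalidated after a full cluster restart, so the next
// getConnection() call builds a fresh one. Names are illustrative.
final class ConnectionHolderSketch<T> {
    private final Supplier<T> factory;
    private T connection;

    ConnectionHolderSketch(Supplier<T> factory) { this.factory = factory; }

    synchronized T getConnection() {
        if (connection == null) {
            connection = factory.get();  // lazily (re)create
        }
        return connection;
    }

    /** Drop the cached connection, e.g. after all masters were shut down. */
    synchronized void reset() {
        connection = null;
    }
}
```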




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369945441
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java
 ##
 @@ -96,7 +96,7 @@ public void 
testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNode
 TEST_UTIL.startMiniZKCluster();
 TEST_UTIL.createRootDir();
 final LocalHBaseCluster cluster =
-new LocalHBaseCluster(conf, NUM_MASTERS, NUM_RS, HMaster.class,
+new LocalHBaseCluster(conf, NUM_MASTERS, 0, NUM_RS, HMaster.class,
 
 Review comment:
   We seem to pass '0' as AlwaysMasters. Should we add an overload that defaults it to 
zero? Just to keep the AlwaysMasters out of view when not needed.
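   The overload being suggested is straightforward constructor delegation: keep the old signature and forward with zero always-standby masters so existing call sites stay untouched. A toy sketch (class and field names invented):

```java
// Sketch of the suggested overload: the legacy two-count constructor
// delegates with zero always-standby masters. Not the real LocalHBaseCluster.
final class LocalClusterSketch {
    final int masters;
    final int alwaysStandByMasters;
    final int regionServers;

    // Legacy signature: callers that predate always-standby masters.
    LocalClusterSketch(int masters, int regionServers) {
        this(masters, 0, regionServers);  // default: no always-standby masters
    }

    LocalClusterSketch(int masters, int alwaysStandByMasters, int regionServers) {
        this.masters = masters;
        this.alwaysStandByMasters = alwaysStandByMasters;
        this.regionServers = regionServers;
    }
}
```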




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369945016
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAlwaysStandByHMaster.java
 ##
 @@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static junit.framework.TestCase.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.StartMiniClusterOption;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({MediumTests.class, MasterTests.class})
+public class TestAlwaysStandByHMaster {
+
+  @ClassRule
+  public static final HBaseClassTestRule CLASS_RULE =
+  HBaseClassTestRule.forClass(TestAlwaysStandByHMaster.class);
+  private static final HBaseTestingUtility TEST_UTIL = new 
HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setup() throws Exception {
+StartMiniClusterOption option = StartMiniClusterOption.builder().
+numAlwaysStandByMasters(1).numMasters(1).numRegionServers(1).build();
+TEST_UTIL.startMiniCluster(option);
+  }
+
+  public static void teardown() throws Exception {
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * Tests that the AlwaysStandByHMaster does not transition to active state 
even if no active
+   * master exists.
+   */
+  @Test  public void testAlwaysStandBy() throws Exception {
+// Make sure there is an active master.
+assertNotNull(TEST_UTIL.getMiniHBaseCluster().getMaster());
+assertEquals(2, TEST_UTIL.getMiniHBaseCluster().getMasterThreads().size());
+// Kill the only active master.
+TEST_UTIL.getMiniHBaseCluster().stopMaster(0).join();
+// Wait for 5s to make sure the always standby doesn't transition to 
active state.
+
assertFalse(TEST_UTIL.getMiniHBaseCluster().waitForActiveAndReadyMaster(5000));
+// Add a new master.
+HMaster newActive = 
TEST_UTIL.getMiniHBaseCluster().startMaster().getMaster();
+
assertTrue(TEST_UTIL.getMiniHBaseCluster().waitForActiveAndReadyMaster(5000));
+// Newly added master should be the active.
+assertEquals(newActive.getServerName(),
+TEST_UTIL.getMiniHBaseCluster().getMaster().getServerName());
+  }
+}
 
 Review comment:
   Good




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369939988
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
 ##
 @@ -170,27 +171,36 @@ public LocalHBaseCluster(final Configuration conf, final 
int noMasters,
 this.masterClass = (Class)
   conf.getClass(HConstants.MASTER_IMPL, masterClass);
 // Start the HMasters.
-for (int i = 0; i < noMasters; i++) {
+int i;
+for (i = 0; i < noMasters; i++) {
   addMaster(new Configuration(conf), i);
 }
-
-// Populate the master address host ports in the config. This is needed if 
a master based
-// registry is configured for client metadata services (HBASE-18095)
-List masterHostPorts = new ArrayList<>();
-getMasters().forEach(masterThread ->
-
masterHostPorts.add(masterThread.getMaster().getServerName().getAddress().toString()));
-conf.set(HConstants.MASTER_ADDRS_KEY, String.join(",", masterHostPorts));
-
+for (int j = 0; j < noAlwaysStandByMasters; j++) {
+  Configuration c = new Configuration(conf);
+  c.set(HConstants.MASTER_IMPL, 
"org.apache.hadoop.hbase.master.AlwaysStandByHMaster");
+  addMaster(c, i + j);
+}
 // Start the HRegionServers.
 this.regionServerClass =
   (Class)conf.getClass(HConstants.REGION_SERVER_IMPL,
regionServerClass);
 
-for (int i = 0; i < noRegionServers; i++) {
-  addRegionServer(new Configuration(conf), i);
+for (int j = 0; j < noRegionServers; j++) {
+  addRegionServer(new Configuration(conf), j);
 }
   }
 
+  /**
+   * Populates the master address host ports in the config. This is needed if 
a master based
+   * registry is configured for client metadata services (HBASE-18095)
+   */
+  private void refreshMasterAddrsConfig() {
+List masterHostPorts = new ArrayList<>();
+getMasters().forEach(masterThread ->
+
masterHostPorts.add(masterThread.getMaster().getServerName().getAddress().toString()));
+conf.set(HConstants.MASTER_ADDRS_KEY, String.join(",", masterHostPorts));
 
 Review comment:
   I could be obnoxious here but won't. Better to do as you have done here and 
just move the code, not change it.




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369945155
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
 ##
 @@ -133,11 +134,19 @@ public void 
testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
 util.startMiniZKCluster();
 util.createRootDir();
 final LocalHBaseCluster cluster =
-new LocalHBaseCluster(conf, NUM_MASTERS, NUM_RS, HMaster.class,
+new LocalHBaseCluster(conf, NUM_MASTERS, 0, NUM_RS, HMaster.class,
 MiniHBaseCluster.MiniHBaseClusterRegionServer.class);
 final int MASTER_INDEX = 0;
 final MasterThread master = cluster.getMasters().get(MASTER_INDEX);
 master.start();
+// Switching to master registry exposed a race in the master bootstrap 
that can result in a
+// lost shutdown command (HBASE-8422). The race is essentially because the 
server manager in
+// HMaster is not initialized by the time shutdown() RPC (below) is made to
+// the master. The reason it was not happening earlier is because the 
connection creation with
+// ZK registry is so slow that by then the server manager is init'ed thus 
masking the problem.
+// For now, I'm putting a wait() here to workaround the issue, I think the 
fix for it is a
+// little delicate and needs to be done separately.
+Waiter.waitFor(conf, 5000, () -> master.getMaster().getServerManager() != 
null);
 
 Review comment:
   Good
   
   BTW, does test suite run faster now?
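   The Waiter.waitFor workaround above is a simple bounded poll: keep checking the predicate until it holds or the timeout elapses. A minimal self-contained sketch in that spirit (names invented; not the real HBase Waiter utility):

```java
import java.util.function.BooleanSupplier;

// Minimal polling helper in the spirit of Waiter.waitFor(conf, timeout, pred):
// spin until the predicate holds or the timeout elapses. Illustrative only.
final class WaitSketch {
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;  // treat interruption like a timeout
            }
        }
        return condition.getAsBoolean();  // one last check at the deadline
    }
}
```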




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369944404
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaRegionLocationCache.java
 ##
 @@ -60,7 +60,7 @@ public static void setUp() throws Exception {
 TEST_UTIL.getConfiguration().setInt(HConstants.META_REPLICAS_NUM, 3);
 TEST_UTIL.startMiniCluster(3);
 REGISTRY = 
ConnectionRegistryFactory.getRegistry(TEST_UTIL.getConfiguration());
-RegionReplicaTestHelper.waitUntilAllMetaReplicasHavingRegionLocation(
+RegionReplicaTestHelper.waitUntilAllMetaReplicasAreReady(TEST_UTIL,
 
 Review comment:
   We pass in a TEST_UTIL AND a Configuration? Do we always get the 
configuration from TEST_UTIL? If so, just pass TEST_UTIL? Otherwise, ignore 
this comment.




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369944209
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java
 ##
 @@ -43,7 +43,7 @@
   @Parameterized.Parameters
   public static Collection parameters() {
 return Arrays.asList(new Object[][] {
-{ ZKConnectionRegistry.class, 1}
+{ MasterRegistry.class, 1}
 
 Review comment:
   What you thinking? No tests of old registry in Master?




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369940617
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,17 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. RegionServers can continue be up 
independent of
+//   masters' availability.
+// - Configuration management for region servers (cluster internal) is 
much simpler when adding
+//   new masters etc.
 
 Review comment:
   I don't understand this one.




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369935638
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
 ##
 @@ -170,27 +171,36 @@ public LocalHBaseCluster(final Configuration conf, final 
int noMasters,
 this.masterClass = (Class)
   conf.getClass(HConstants.MASTER_IMPL, masterClass);
 // Start the HMasters.
-for (int i = 0; i < noMasters; i++) {
+int i;
+for (i = 0; i < noMasters; i++) {
   addMaster(new Configuration(conf), i);
 }
-
-// Populate the master address host ports in the config. This is needed if 
a master based
-// registry is configured for client metadata services (HBASE-18095)
-List masterHostPorts = new ArrayList<>();
-getMasters().forEach(masterThread ->
-
masterHostPorts.add(masterThread.getMaster().getServerName().getAddress().toString()));
-conf.set(HConstants.MASTER_ADDRS_KEY, String.join(",", masterHostPorts));
-
+for (int j = 0; j < noAlwaysStandByMasters; j++) {
+  Configuration c = new Configuration(conf);
+  c.set(HConstants.MASTER_IMPL, 
"org.apache.hadoop.hbase.master.AlwaysStandByHMaster");
 
 Review comment:
   Late to the game, but what is an AlwaysStandByHMaster rather than a 
StandByMaster? Why the Always? Maybe it will make sense later in the review.




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369943408
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/StartMiniClusterOption.java
 ##
 @@ -46,6 +46,14 @@
* can find the active/primary master with {@link 
MiniHBaseCluster#getMaster()}.
*/
   private final int numMasters;
+
+  /**
+   * Number of masters that always remain standby. These set of masters never 
transition to active
+   * even if an active master does not exist. These are needed for testing 
scenarios where there are
+   * no active masters in the cluster but the cluster connection (backed by 
master registry) should
+   * still work.
+   */
+  private final int numAlwaysStandByMasters;
 
 Review comment:
   Would be good to keep them 'hidden' as much as we can.
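   One way to keep a test-only knob like this "hidden" is to default it to zero and expose it only through the builder, so existing callers never see it. The sketch below is illustrative only; MiniClusterOption and its method names are stand-ins, not the actual StartMiniClusterOption API.

```java
public class MiniClusterOption {
  private final int numMasters;
  private final int numAlwaysStandByMasters;  // assumed name, mirrors the PR

  private MiniClusterOption(int numMasters, int numAlwaysStandByMasters) {
    this.numMasters = numMasters;
    this.numAlwaysStandByMasters = numAlwaysStandByMasters;
  }

  public int getNumMasters() { return numMasters; }
  public int getNumAlwaysStandByMasters() { return numAlwaysStandByMasters; }

  public static Builder builder() { return new Builder(); }

  public static class Builder {
    private int numMasters = 1;
    // Hidden default: no standby-only masters unless a test asks for them.
    private int numAlwaysStandByMasters = 0;

    public Builder numMasters(int n) { this.numMasters = n; return this; }
    public Builder numAlwaysStandByMasters(int n) {
      this.numAlwaysStandByMasters = n; return this;
    }
    public MiniClusterOption build() {
      return new MiniClusterOption(numMasters, numAlwaysStandByMasters);
    }
  }

  public static void main(String[] args) {
    // Existing callers never mention the new option and keep the old behavior.
    MiniClusterOption legacy = builder().numMasters(2).build();
    System.out.println(legacy.getNumAlwaysStandByMasters());
  }
}
```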




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369944939
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/AlwaysStandByMasterManager.java
 ##
 @@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.monitoring.MonitoredTask;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKWatcher;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.zookeeper.KeeperException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation of ActiveMasterManager that never transitions it's master 
to active state. It
+ * always remains as a stand by master. With the master registry 
implementation (HBASE-18095) it is
+ * expected to have at least one active / standby master always running at any 
point in time since
+ * they serve as the gateway for client connections.
+ *
+ * With this implementation, tests can simulate the scenario of not having an 
active master yet the
+ * client connections to the cluster succeed.
+ */
+@InterfaceAudience.Private
+class AlwaysStandByMasterManager extends ActiveMasterManager {
+  private static final Logger LOG = 
LoggerFactory.getLogger(AlwaysStandByMasterManager.class);
+
+  AlwaysStandByMasterManager(ZKWatcher watcher, ServerName sn, Server master) {
+super(watcher, sn, master);
+  }
+
+  /**
+   * An implementation that never transitions to an active master.
+   */
+  boolean blockUntilBecomingActiveMaster(int checkInterval, MonitoredTask 
startupStatus) {
+while (!(master.isAborted() || master.isStopped())) {
+  startupStatus.setStatus("Forever looping to stay as a standby master.");
+  try {
+activeMasterServerName = null;
+try {
+  if (MasterAddressTracker.getMasterAddress(watcher) != null) {
+clusterHasActiveMaster.set(true);
+  }
+  Threads.sleepWithoutInterrupt(100);
+} catch (IOException e) {
+  // pass, we will get notified when some other active master creates 
the znode.
+}
+  } catch (KeeperException e) {
+master.abort("Received an unexpected KeeperException, aborting", e);
+return false;
+  }
+  synchronized (this.clusterHasActiveMaster) {
+while (clusterHasActiveMaster.get() && !master.isStopped()) {
+  try {
+clusterHasActiveMaster.wait(checkInterval);
+  } catch (InterruptedException e) {
+// We expect to be interrupted when a master dies,
+//  will fall out if so
+LOG.debug("Interrupted waiting for master to die", e);
+  }
+}
+if (clusterShutDown.get()) {
+  this.master.stop(
+  "Cluster went down before this master became active");
+}
+  }
+}
+return false;
+  }
+}
 
 Review comment:
   Good.




[GitHub] [hbase] saintstack commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369943797
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminMasterSwitch.java
 ##
 @@ -48,8 +48,6 @@ public void testSwitch() throws IOException, 
InterruptedException {
 assertEquals(TEST_UTIL.getHBaseCluster().getRegionServerThreads().size(),
   
admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.SERVERS_NAME)).join()
 .getServersName().size());
-// stop the old master, and start a new one
-TEST_UTIL.getMiniHBaseCluster().startMaster();
 
 Review comment:
   No need of a new master to complete test?




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369938900
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
 ##
 @@ -1126,7 +1126,7 @@ public HFileFilter(FileSystem fs) {
 
 @Override
 protected boolean accept(Path p, @CheckForNull Boolean isDir) {
-  if (!StoreFileInfo.isHFile(p)) {
+  if (!StoreFileInfo.isHFile(p) && !StoreFileInfo.isMobFile(p)) {
 
 Review comment:
   Created: https://issues.apache.org/jira/browse/HBASE-23724




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369938849
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
 ##
 @@ -52,36 +53,34 @@
   private static final Logger LOG = 
LoggerFactory.getLogger(StoreFileInfo.class);
 
   /**
-   * A non-capture group, for hfiles, so that this can be embedded.
-   * HFiles are uuid ([0-9a-z]+). Bulk loaded hfiles has (_SeqId_[0-9]+_) has 
suffix.
-   * The mob del file has (_del) as suffix.
+   * A non-capture group, for hfiles, so that this can be embedded. HFiles are 
uuid ([0-9a-z]+).
+   * Bulk loaded hfiles has (_SeqId_[0-9]+_) has suffix. The mob del file has 
(_del) as suffix.
*/
   public static final String HFILE_NAME_REGEX = 
"[0-9a-f]+(?:(?:_SeqId_[0-9]+_)|(?:_del))?";
 
 Review comment:
   Created: https://issues.apache.org/jira/browse/HBASE-23724




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369938761
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
 ##
 @@ -442,6 +438,31 @@ public static boolean isHFile(final String fileName) {
 return m.matches() && m.groupCount() > 0;
   }
 
+  public static boolean isMobFile(final Path path) {
+String fileName = path.getName();
+String[] parts = fileName.split(MobUtils.SEP);
+if (parts.length != 2) {
+  return false;
+}
+Matcher m = HFILE_NAME_PATTERN.matcher(parts[0]);
+Matcher mm = HFILE_NAME_PATTERN.matcher(parts[1]);
+return m.matches() && mm.matches();
+  }
+
+  public static boolean isMobRefFile(final Path path) {
+String fileName = path.getName();
+int lastIndex = fileName.lastIndexOf(MobUtils.SEP);
+if (lastIndex < 0) {
+  return false;
+}
+String[] parts = new String[2];
+parts[0] = fileName.substring(0, lastIndex);
+parts[1] = fileName.substring(lastIndex + 1);
+String name = parts[0] + "." + parts[1];
+Matcher m = REF_NAME_PATTERN.matcher(name);
 
 Review comment:
   Created: https://issues.apache.org/jira/browse/HBASE-23724




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369938825
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
 ##
 @@ -442,6 +438,31 @@ public static boolean isHFile(final String fileName) {
 return m.matches() && m.groupCount() > 0;
   }
 
+  public static boolean isMobFile(final Path path) {
+String fileName = path.getName();
+String[] parts = fileName.split(MobUtils.SEP);
+if (parts.length != 2) {
+  return false;
+}
+Matcher m = HFILE_NAME_PATTERN.matcher(parts[0]);
+Matcher mm = HFILE_NAME_PATTERN.matcher(parts[1]);
 
 Review comment:
   Created: https://issues.apache.org/jira/browse/HBASE-23724




[jira] [Created] (HBASE-23724) Change code in StoreFileInfo to use regex matcher for mob files.

2020-01-22 Thread Vladimir Rodionov (Jira)
Vladimir Rodionov created HBASE-23724:
-

 Summary: Change code in StoreFileInfo to use regex matcher for mob 
files.
 Key: HBASE-23724
 URL: https://issues.apache.org/jira/browse/HBASE-23724
 Project: HBase
  Issue Type: Sub-task
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


Currently it sits on top of another regex with additional logic added. The code 
should be simplified.
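The simplification could be a single anchored pattern instead of split() plus two separate matchers. A standalone sketch of that idea follows; HFILE_NAME_REGEX is copied from the StoreFileInfo diff quoted earlier, but the "_" separator is an assumption made here for illustration (the real MobUtils.SEP may differ), and edge cases around names containing the "_SeqId_" suffix would need care in the real fix.

```java
import java.util.regex.Pattern;

public class MobFileNameMatcher {
  // Copied from the StoreFileInfo diff quoted in this thread.
  static final String HFILE_NAME_REGEX = "[0-9a-f]+(?:(?:_SeqId_[0-9]+_)|(?:_del))?";
  // Assumption for illustration only; the real MobUtils.SEP may differ.
  static final String SEP = "_";
  // One anchored pattern: <hfile-name-part><SEP><hfile-name-part>.
  static final Pattern MOB_FILE_PATTERN =
      Pattern.compile("^" + HFILE_NAME_REGEX + Pattern.quote(SEP) + HFILE_NAME_REGEX + "$");

  public static boolean isMobFile(String fileName) {
    return MOB_FILE_PATTERN.matcher(fileName).matches();
  }

  public static void main(String[] args) {
    System.out.println(isMobFile("0123ab_cdef99")); // two hex parts joined by SEP
    System.out.println(isMobFile("0123ab"));        // plain hfile name, not a mob file
  }
}
```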



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369938008
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -967,85 +673,40 @@ public static boolean 
isMobFileExpired(ColumnFamilyDescriptor column, long curre
   }
 
   /**
-   * fill out partition id based on compaction policy and date, threshold...
-   * @param id Partition id to be filled out
-   * @param firstDayOfCurrentMonth The first day in the current month
-   * @param firstDayOfCurrentWeek The first day in the current week
-   * @param dateStr Date string from the mob file
-   * @param policy Mob compaction policy
-   * @param calendar Calendar object
-   * @param threshold Mob compaciton threshold configured
-   * @return true if the file needs to be excluded from compaction
+   * Gets encoded region name from a MOB file name
+   * @param mobFileName MOB file name
+   * @return encoded region name or null
*/
-  public static boolean fillPartitionId(final CompactionPartitionId id,
-  final Date firstDayOfCurrentMonth, final Date firstDayOfCurrentWeek, 
final String dateStr,
-  final MobCompactPartitionPolicy policy, final Calendar calendar, final 
long threshold) {
-
-boolean skipCompcation = false;
-id.setThreshold(threshold);
-if (threshold <= 0) {
-  id.setDate(dateStr);
-  return skipCompcation;
+  public static String getEncodedRegionName(String mobFileName) {
+int index = mobFileName.lastIndexOf(MobFileName.REGION_SEP);
+if (index < 0) {
+  return null;
 }
+return mobFileName.substring(index + 1);
+  }
 
-long finalThreshold;
-Date date;
-try {
-  date = MobUtils.parseDate(dateStr);
-} catch (ParseException e)  {
-  LOG.warn("Failed to parse date " + dateStr, e);
-  id.setDate(dateStr);
-  return true;
-}
+  /**
+   * Get list of referenced MOB files from a given collection of store files
+   * @param storeFiles store files
+   * @param mobDir MOB file directory
+   * @return list of MOB file paths
+   */
 
-/* The algorithm works as follows:
- *For monthly policy:
- *   1). If the file's date is in past months, apply 4 * 7 * threshold
- *   2). If the file's date is in past weeks, apply 7 * threshold
- *   3). If the file's date is in current week, exclude it from the 
compaction
- *For weekly policy:
- *   1). If the file's date is in past weeks, apply 7 * threshold
- *   2). If the file's date in currently, apply threshold
- *For daily policy:
- *   1). apply threshold
- */
-if (policy == MobCompactPartitionPolicy.MONTHLY) {
-  if (date.before(firstDayOfCurrentMonth)) {
-// Check overflow
-if (threshold < (Long.MAX_VALUE / MONTHLY_THRESHOLD_MULTIPLIER)) {
-  finalThreshold = MONTHLY_THRESHOLD_MULTIPLIER * threshold;
-} else {
-  finalThreshold = Long.MAX_VALUE;
-}
-id.setThreshold(finalThreshold);
+  public static List getReferencedMobFiles(Collection 
storeFiles, Path mobDir) {
 
-// set to the date for the first day of that month
-id.setDate(MobUtils.formatDate(MobUtils.getFirstDayOfMonth(calendar, 
date)));
-return skipCompcation;
+Set mobSet = new HashSet();
+for (HStoreFile sf : storeFiles) {
+  byte[] value = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+  if (value != null && value.length > 1) {
+String s = Bytes.toString(value);
+String[] all = s.split(",");
+Collections.addAll(mobSet, all);
   }
 }
-
-if ((policy == MobCompactPartitionPolicy.MONTHLY) ||
-(policy == MobCompactPartitionPolicy.WEEKLY)) {
-  // Check if it needs to apply weekly multiplier
-  if (date.before(firstDayOfCurrentWeek)) {
-// Check overflow
-if (threshold < (Long.MAX_VALUE / WEEKLY_THRESHOLD_MULTIPLIER)) {
-  finalThreshold = WEEKLY_THRESHOLD_MULTIPLIER * threshold;
-} else {
-  finalThreshold = Long.MAX_VALUE;
-}
-id.setThreshold(finalThreshold);
-
-id.setDate(MobUtils.formatDate(MobUtils.getFirstDayOfWeek(calendar, 
date)));
-return skipCompcation;
-  } else if (policy == MobCompactPartitionPolicy.MONTHLY) {
-skipCompcation = true;
-  }
+List retList = new ArrayList();
+for (String name : mobSet) {
+  retList.add(new Path(mobDir, name));
 
 Review comment:
   Created: https://issues.apache.org/jira/browse/HBASE-23723




[jira] [Created] (HBASE-23723) Add tests for MOB compaction on a table created from snapshot

2020-01-22 Thread Vladimir Rodionov (Jira)
Vladimir Rodionov created HBASE-23723:
-

 Summary: Add tests for MOB compaction on a table created from 
snapshot
 Key: HBASE-23723
 URL: https://issues.apache.org/jira/browse/HBASE-23723
 Project: HBase
  Issue Type: Sub-task
 Environment: How does the code handle the snapshot naming convention for MOB 
files?
Reporter: Vladimir Rodionov








[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369935727
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestMobFileName.java
 ##
 @@ -47,6 +47,7 @@
   private Date date;
   private String dateStr;
   private byte[] startKey;
+  private String regionName = "region";
 
 Review comment:
   MobFileName code is not used during upgrades from older versions and does not 
have any code that handles old files.




[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021779#comment-17021779
 ] 

Bharath Vissapragada commented on HBASE-23330:
--

That's a fair point. I'll get it working for all connection types.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delgation based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.
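The plain-text HTTP endpoint idea quoted above can be sketched with the JDK's built-in HttpServer. This is not the actual HBase master servlet; the /cluster-id path and the id value are illustrative assumptions, and a real implementation would hang off the master's existing info server.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ClusterIdEndpoint {
  /** Starts an HTTP server exposing clusterId as text/plain on /cluster-id. */
  public static HttpServer serve(String clusterId) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
    server.createContext("/cluster-id", exchange -> {
      byte[] body = clusterId.getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().set("Content-Type", "text/plain; charset=utf-8");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
    return server;
  }

  /** Fetches the cluster ID from a running server; no authentication required. */
  public static String fetch(int port) throws Exception {
    HttpRequest req = HttpRequest.newBuilder(
        URI.create("http://127.0.0.1:" + port + "/cluster-id")).build();
    return HttpClient.newHttpClient()
        .send(req, HttpResponse.BodyHandlers.ofString()).body();
  }

  public static void main(String[] args) throws Exception {
    HttpServer server = serve("d3b07384-example-cluster-id");
    try {
      System.out.println(fetch(server.getAddress().getPort()));
    } finally {
      server.stop(0);
    }
  }
}
```

Keeping the endpoint unauthenticated is exactly the trade-off the quoted comments debate: the cluster ID must be readable before SASL auth can happen.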





[jira] [Commented] (HBASE-23711) Add test for MinVersions and KeepDeletedCells TTL

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021773#comment-17021773
 ] 

Hudson commented on HBASE-23711:


Results for branch branch-2.1
[build #1781 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1781/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1781//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1781//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1781//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add test for MinVersions and KeepDeletedCells TTL
> -
>
> Key: HBASE-23711
> URL: https://issues.apache.org/jira/browse/HBASE-23711
> Project: HBase
>  Issue Type: Test
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-23711-master.v01.patch
>
>
> Recently I was researching how HBase handles the interactions between setting 
> MinVersions and KeepDeletedCells = TTL, and I wrote a test to prove my 
> assumptions about the behavior were correct. There doesn't seem to be an 
> equivalent existing test in TestMinVersions, so I thought I'd contribute it. 





[GitHub] [hbase] saintstack commented on issue #1012: HBASE-21065 Try ROW_INDEX_V1 encoding on meta table (fix bloomfilters…

2020-01-22 Thread GitBox
saintstack commented on issue #1012: HBASE-21065 Try ROW_INDEX_V1 encoding on 
meta table (fix bloomfilters…
URL: https://github.com/apache/hbase/pull/1012#issuecomment-577503341
 
 
   Fix conflict. Tests should pass now since HBASE-23705 went in.




[jira] [Resolved] (HBASE-23705) Add CellComparator to HFileContext

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23705.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Merged to master and cherry-picked to branch-2. Thanks for reviews 
[~anoop.hbase], [~janh], and [~ramkrishna.s.vasude...@gmail.com]

> Add CellComparator to HFileContext
> --
>
> Key: HBASE-23705
> URL: https://issues.apache.org/jira/browse/HBASE-23705
> Project: HBase
>  Issue Type: Sub-task
>  Components: io
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> The HFileContext is present when reading and writing files. It is populated 
> at read time using HFile trailer content and file metadata. At write time, we 
> create it up front.
> Interesting is that though CellComparator is written to the HFile trailer, 
> and parse of the Trailer creates an HFileInfo which builds the HFileContext 
> at read time, the HFileContext does not expose what CellComparator to use 
> decoding and seeking. Around the codebase there are various compensations 
> made for this lack with decoders that actually have a decoding context (with 
> a reference to the hfilecontext), hard-coding use of the default 
> CellComparator. StoreFileInfo will use default if not passed a comparator 
> (even though we'd just read the trailer and even though it has reference to 
> filecontext) and HFile does similar. What CellComparator to use in a given 
> context is confused.
> Let me fix this situation removing ambiguity. It will also fix bugs in parent 
> issue where UTs are failing because wrong CellComparator is being used.





[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369927056
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestMobCompaction.java
 ##
 @@ -0,0 +1,375 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeepDeletedCells;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.master.MobFileCleanerChore;
+import org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Ignore;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+/**
+Reproduction for MOB data loss
 
 Review comment:
   We already have an integration test - IntegrationTestMobCompaction. I converted 
TestMobCompaction into MobStressToolRunner and it is no longer a unit test. It 
is easier to run MobStressTool than the integration tests, due to different code 
initialization paths. It is legacy code, but I want to keep it for some time. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] saintstack merged pull request #1062: HBASE-23705 Add CellComparator to HFileContext

2020-01-22 Thread GitBox
saintstack merged pull request #1062: HBASE-23705 Add CellComparator to 
HFileContext
URL: https://github.com/apache/hbase/pull/1062
 
 
   




[jira] [Commented] (HBASE-23722) TestCustomSaslAuthenticationProvider failing in nightlies

2020-01-22 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021747#comment-17021747
 ] 

Michael Stack commented on HBASE-23722:
---

Just ran into this (if that helps). Thanks for filing.

> TestCustomSaslAuthenticationProvider failing in nightlies
> -
>
> Key: HBASE-23722
> URL: https://issues.apache.org/jira/browse/HBASE-23722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
>
> {noformat}
> 2020-01-22 21:15:57,250 DEBUG 
> [hconnection-0x5f0ea4c6-metaLookup-shared--pool15-t14] 
> client.RpcRetryingCallerImpl(132): Call exception, tries=10, retries=16, 
> started=38409 ms ago, cancelled=false, msg=Call to 
> a8b44f950ced/172.17.0.3:42595 failed on local exception: java.io.IOException: 
> java.lang.NullPointerException, details=row 
> 'testPositiveAuthentication,r1,99' on table 'hbase:meta' at 
> region=hbase:meta,,1.1588230740, hostname=a8b44f950ced,42595,1579726988645, 
> seqNum=-1, see https://s.apache.org/timeout, exception=java.io.IOException: 
> Call to a8b44f950ced/172.17.0.3:42595 failed on local exception: 
> java.io.IOException: java.lang.NullPointerException
>   at sun.reflect.GeneratedConstructorAccessor40.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:220)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:91)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
>   at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
>   at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callMethod(AbstractRpcClient.java:422)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:316)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:91)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:571)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:42810)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:332)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:242)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.rpcCall(ScannerCallable.java:58)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:396)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:370)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
>   at 
> org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: java.lang.NullPointerException
>   at org.apache.hadoop.hbase.ipc.IPCUtil.toIOE(IPCUtil.java:154)
>   ... 17 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.security.provider.BuiltInProviderSelector.selectProvider(BuiltInProviderSelector.java:128)
>   at 
> org.apache.hadoop.hbase.security.provider.TestCustomSaslAuthenticationProvider$InMemoryProviderSelector.selectProvider(TestCustomSaslAuthenticationProvider.java:390)
>   at 
> org.apache.hadoop.hbase.security.provider.SaslClientAuthenticationProviders.selectProvider(SaslClientAuthenticationProviders.java:214)
>   at 
> org.apache.hadoop.hbase.ipc.RpcConnection.<init>(RpcConnection.java:106)
>   at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.<init>(BlockingRpcConnection.java:219)
>   at 
> org.apache.h

[GitHub] [hbase] saintstack commented on issue #1062: HBASE-23705 Add CellComparator to HFileContext

2020-01-22 Thread GitBox
saintstack commented on issue #1062: HBASE-23705 Add CellComparator to 
HFileContext
URL: https://github.com/apache/hbase/pull/1062#issuecomment-577493606
 
 
   The unit test failure is a known issue (HBASE-23722). The checkstyle complaint is the 
method-too-large one. Merging.




[jira] [Updated] (HBASE-23722) TestCustomSaslAuthenticationProvider failing in nightlies

2020-01-22 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-23722:
---
Parent Issue: HBASE-23347  (was: HBASE-23709)

> TestCustomSaslAuthenticationProvider failing in nightlies
> -
>
> Key: HBASE-23722
> URL: https://issues.apache.org/jira/browse/HBASE-23722
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
>

[jira] [Commented] (HBASE-23709) Unwrap the real user to properly dispatch proxy-user auth'n

2020-01-22 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021710#comment-17021710
 ] 

Josh Elser commented on HBASE-23709:


Filed HBASE-23722 to track the failing UT.

> Unwrap the real user to properly dispatch proxy-user auth'n
> ---
>
> Key: HBASE-23709
> URL: https://issues.apache.org/jira/browse/HBASE-23709
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Jan Hentschel
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> Currently {{TestSecureRESTServer}} fails consistently on branch-2 and should 
> be fixed.





[jira] [Created] (HBASE-23722) TestCustomSaslAuthenticationProvider failing in nightlies

2020-01-22 Thread Josh Elser (Jira)
Josh Elser created HBASE-23722:
--

 Summary: TestCustomSaslAuthenticationProvider failing in nightlies
 Key: HBASE-23722
 URL: https://issues.apache.org/jira/browse/HBASE-23722
 Project: HBase
  Issue Type: Sub-task
Reporter: Josh Elser
Assignee: Josh Elser



[GitHub] [hbase] joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL

2020-01-22 Thread GitBox
joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#issuecomment-577480036
 
 
   I just need to write a UT to cover this stuff. Just realized I hadn't done 
that.




[jira] [Created] (HBASE-23721) Add further constraint on edits to hbase:meta to prevent possibly damaging edits

2020-01-22 Thread Michael Stack (Jira)
Michael Stack created HBASE-23721:
-

 Summary: Add further constraint on edits to hbase:meta to prevent 
possibly damaging edits
 Key: HBASE-23721
 URL: https://issues.apache.org/jira/browse/HBASE-23721
 Project: HBase
  Issue Type: Improvement
Reporter: Michael Stack


HBASE-23055 "Alter hbase:meta" enables editing hbase:meta in hbase-2.3.0. The 
patch includes a constraint that prevents an operator from accidentally deleting a 
critical column family. This issue is about adding further constraints to 
prevent possibly damaging configs. Per [~zhangduo], the operator "...should not set 
TTL on the column families, and also, you should not change the versions of 
REPLICATION_BARRIER_FAMILY..."





[GitHub] [hbase] saintstack commented on a change in pull request #1043: HBASE-23055 Alter hbase:meta

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #1043: HBASE-23055 Alter 
hbase:meta
URL: https://github.com/apache/hbase/pull/1043#discussion_r369904784
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableProcedure.java
 ##
 @@ -59,10 +62,19 @@
   private TableDescriptor modifiedTableDescriptor;
   private boolean deleteColumnFamilyInModify;
   private boolean shouldCheckDescriptor;
+  /**
+   * List of column families that cannot be deleted from the hbase:meta table.
+   * They are critical to cluster operation. This is a bit of an odd place to
+   * keep this list but then this is the tooling that does add/remove. Keeping
+   * it local!
+   */
+  private static final List UNDELETABLE_META_COLUMNFAMILIES =
 
 Review comment:
   Filed HBASE-23721




[jira] [Resolved] (HBASE-23720) [create-release] Update yetus version used from 0.11.0 to 0.11.1

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23720.
---
Resolution: Fixed

Pushed the few lines I changed. Resolving.

> [create-release] Update yetus version used from 0.11.0 to 0.11.1
> 
>
> Key: HBASE-23720
> URL: https://issues.apache.org/jira/browse/HBASE-23720
> Project: HBase
>  Issue Type: Bug
>  Components: RC
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: 
> 0001-HBASE-23720-create-release-Update-yetus-version-used.patch
>
>
> Update the yetus version used by the create-release scripts. Was pointing to 
> 0.11.0 but that is no longer in apache repo so the create-release fails.





[jira] [Updated] (HBASE-23720) [create-release] Update yetus version used from 0.11.0 to 0.11.1

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23720:
--
Attachment: 0001-HBASE-23720-create-release-Update-yetus-version-used.patch

> [create-release] Update yetus version used from 0.11.0 to 0.11.1
> 
>
> Key: HBASE-23720
> URL: https://issues.apache.org/jira/browse/HBASE-23720
> Project: HBase
>  Issue Type: Bug
>  Components: RC
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: 
> 0001-HBASE-23720-create-release-Update-yetus-version-used.patch
>
>
> Update the yetus version used by the create-release scripts. Was pointing to 
> 0.11.0 but that is no longer in apache repo so the create-release fails.





[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021673#comment-17021673
 ] 

Andrew Kyle Purtell commented on HBASE-23330:
-

Breaking auth for thrift clients would be the wrong call, though, imho, if you 
are saying that is what would happen.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.
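The servlet suggestion quoted above can be sketched with the JDK's built-in HttpServer. Everything here is illustrative: the /cluster-id path, the plain-text response, and the standalone server are assumptions; in HBase the real endpoint would hang off the master's existing HTTP(S) info server.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ClusterIdEndpoint {
  /** Starts a tiny HTTP server that serves the cluster ID as plain text, unauthenticated. */
  public static HttpServer start(int port, String clusterId) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
    server.createContext("/cluster-id", exchange -> {
      byte[] body = clusterId.getBytes(StandardCharsets.UTF_8);
      exchange.getResponseHeaders().set("Content-Type", "text/plain; charset=utf-8");
      exchange.sendResponseHeaders(200, body.length);
      try (OutputStream os = exchange.getResponseBody()) {
        os.write(body);
      }
    });
    server.start();
    return server;
  }
}
```

Because the endpoint carries no secrets and requires no authentication, a client could fetch the cluster ID before opening an authenticated RPC connection, which is the bootstrapping order the delegation-token flow needs.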





[jira] [Comment Edited] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021668#comment-17021668
 ] 

Bharath Vissapragada edited comment on HBASE-23330 at 1/23/20 2:02 AM:
---

{quote}Is that true in all active branches?
{quote}
Yes, checked in master/branch-1/branch-2.
{quote}Yes, this is possible, and I don't think we have a way of determining 
how common it might be.
{quote}
It looks like there is no feature parity between the thrift client and the regular 
async client. For example: most functions in ThriftAdmin are not implemented, 
hence my question. If we don't worry about thrift connections, I don't think we 
need to do stuff like exposing the clusterID over a servlet etc. The patch can be 
much simpler.


was (Author: bharathv):
{quote}Is that true in all active branches?
{quote}
Yes, checked in master/branch-1/branch-2.
{quote}Yes, this is possible, and I don't think we have a way of determining 
how common it might be.
{quote}
It looks like there is a big feature parity between thrift client vs the 
regular async client. For example: most functions in ThriftAdmin are not 
implemented, hence my question. If we don't worry about thrift connections, I 
don't think we need to do stuff like exposing clusterID over a servlet etc. The 
patch can be much simpler.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.





[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021668#comment-17021668
 ] 

Bharath Vissapragada commented on HBASE-23330:
--

{quote}Is that true in all active branches?
{quote}
Yes, checked in master/branch-1/branch-2.
{quote}Yes, this is possible, and I don't think we have a way of determining 
how common it might be.
{quote}
It looks like there is a big feature parity between thrift client vs the 
regular async client. For example: most functions in ThriftAdmin are not 
implemented, hence my question. If we don't worry about thrift connections, I 
don't think we need to do stuff like exposing clusterID over a servlet etc. The 
patch can be much simpler.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation-based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.





[jira] [Created] (HBASE-23720) [create-release] Update yetus version used from 0.11.0 to 0.11.1

2020-01-22 Thread Michael Stack (Jira)
Michael Stack created HBASE-23720:
-

 Summary: [create-release] Update yetus version used from 0.11.0 to 
0.11.1
 Key: HBASE-23720
 URL: https://issues.apache.org/jira/browse/HBASE-23720
 Project: HBase
  Issue Type: Bug
  Components: RC
Reporter: Michael Stack
Assignee: Michael Stack
 Fix For: 3.0.0


Update the yetus version used by the create-release scripts. Was pointing to 
0.11.0 but that is no longer in apache repo so the create-release fails.





[jira] [Commented] (HBASE-23711) Add test for MinVersions and KeepDeletedCells TTL

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021661#comment-17021661
 ] 

Hudson commented on HBASE-23711:


Results for branch branch-2.2
[build #762 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/762/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/762//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/762//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/762//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add test for MinVersions and KeepDeletedCells TTL
> -
>
> Key: HBASE-23711
> URL: https://issues.apache.org/jira/browse/HBASE-23711
> Project: HBase
>  Issue Type: Test
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-23711-master.v01.patch
>
>
> Recently I was researching how HBase handles the interactions between setting 
> MinVersions and KeepDeletedCells = TTL, and I wrote a test to prove my 
> assumptions about the behavior were correct. There doesn't seem to be an 
> equivalent existing test in TestMinVersions, so I thought I'd contribute it. 





[GitHub] [hbase] bharathv commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
bharathv commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369894075
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,19 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. For example, if all the masters 
are down, region
+//   servers can abort at the same time, because the internal connection 
is master dependent and
+//   fails. This is an operational nightmare. Using the ZK based registry 
means that the region
+//   servers are now on the look out for new masters, if they are spun up.
+// - Configuration management for region servers (cluster internal) is 
much simpler when adding
+//   new masters etc.
+// - We need to retain ZKConnectionRegistry for replication use anyway, so 
we just extend it for
+//   other internal connections too.
+conf.set(HConstants.CLIENT_CONNECTION_REGISTRY_IMPL_CONF_KEY,
+"org.apache.hadoop.hbase.client.ZKConnectionRegistry");
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bharathv commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
bharathv commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369893660
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
 ##
 @@ -268,14 +268,19 @@ public static Configuration 
createClusterConf(Configuration baseConf, String clu
* used to communicate with distant clusters
* @param conf configuration object to configure
   * @param key string that contains the 3 required configurations
-   * @throws IOException
*/
   private static void applyClusterKeyToConf(Configuration conf, String key)
-  throws IOException{
+  throws IOException {
 ZKConfig.ZKClusterKey zkClusterKey = ZKConfig.transformClusterKey(key);
 conf.set(HConstants.ZOOKEEPER_QUORUM, zkClusterKey.getQuorumString());
 conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 
zkClusterKey.getClientPort());
 conf.set(HConstants.ZOOKEEPER_ZNODE_PARENT, zkClusterKey.getZnodeParent());
+// Without the right registry, the above configs are useless. Also, we 
don't use setClass()
+// here because the ConnectionRegistry* classes are not resolvable from 
this module.
+// This will be broken if ZkConnectionRegistry class gets renamed or 
moved. Is there a better
+// way?
 
 Review comment:
   > fall through to alternate or recovery code
   
   There is no alternate or recovery from that point, no? The same error is 
propagated while creating the registry instance, so I don't think we need to do 
it again here.
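The cluster key that `applyClusterKeyToConf` consumes (via `ZKConfig.transformClusterKey` in the quoted diff) packs three settings into one string. A minimal sketch of that parsing, assuming the common `quorum:clientPort:znodeParent` layout; `splitClusterKey` is an illustrative helper, not the HBase API:

```java
public class ClusterKeySketch {
    // Split "quorum:clientPort:znodeParent" into its three parts.
    static String[] splitClusterKey(String key) {
        // limit 3 keeps any further ':' inside the znode path intact
        String[] parts = key.split(":", 3);
        if (parts.length != 3) {
            throw new IllegalArgumentException("Bad cluster key: " + key);
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] p = splitClusterKey("zk1,zk2,zk3:2181:/hbase");
        System.out.println(p[0] + " | " + p[1] + " | " + p[2]);
        // prints: zk1,zk2,zk3 | 2181 | /hbase
    }
}
```

The three parts map onto ZOOKEEPER_QUORUM, ZOOKEEPER_CLIENT_PORT, and ZOOKEEPER_ZNODE_PARENT, which is why the diff notes those configs are useless without the matching registry implementation.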
   




[GitHub] [hbase] bharathv commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
bharathv commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369894052
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,19 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. For example, if all the masters 
are down, region
+//   servers can abort at the same time, because the internal connection 
is master dependent and
+//   fails. This is an operational nightmare. Using the ZK based registry 
means that the region
+//   servers are now on the look out for new masters, if they are spun up.
 
 Review comment:
   Clarified.




[GitHub] [hbase] bharathv commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
bharathv commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369897054
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
 ##
 @@ -133,11 +133,19 @@ public void 
testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
 util.startMiniZKCluster();
 util.createRootDir();
 final LocalHBaseCluster cluster =
-new LocalHBaseCluster(conf, NUM_MASTERS, NUM_RS, HMaster.class,
+new LocalHBaseCluster(conf, NUM_MASTERS, 0, NUM_RS, HMaster.class,
 MiniHBaseCluster.MiniHBaseClusterRegionServer.class);
 final int MASTER_INDEX = 0;
 final MasterThread master = cluster.getMasters().get(MASTER_INDEX);
 master.start();
+// Switching to master registry exposed a race in the master bootstrap 
that can result in a
+// lost shutdown command (essentially HBASE-8422). The race is essentially 
because the
+// server manager in HMaster is not initialized by the time shutdown() RPC 
(below) is made to
+// the master. The reason it was not happening earlier is because the 
connection creation with
+// ZK registry is so slow that by then the server manager is init'ed thus 
masking the problem.
+// For now, I'm putting a sleep here to workaround the issue, I think the 
fix for it is a little
+// delicate and needs to be done separately.
+Thread.sleep(5000);
 
 Review comment:
   That is cleaner, done.
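The fixed `Thread.sleep(5000)` discussed above was replaced with something cleaner; the general shape is a bounded poll rather than a fixed sleep. A minimal sketch, assuming a simple deadline/poll loop (HBase tests typically use the `Waiter` utility for this; the helper name and predicate here are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

public class WaitForCondition {
    // Poll the condition every pollMs until it holds or timeoutMs elapses.
    public static boolean waitFor(long timeoutMs, long pollMs, BooleanSupplier condition)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() <= deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition became true before the deadline
            }
            Thread.sleep(pollMs);
        }
        return false; // timed out; the caller decides whether to fail the test
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean serverManagerInited = new AtomicBoolean(false);
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            serverManagerInited.set(true); // stands in for master bootstrap finishing
        }).start();
        boolean ok = waitFor(5000, 50, serverManagerInited::get);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

Unlike a fixed sleep, this returns as soon as the condition holds and only costs the full timeout when the race actually occurs.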




[GitHub] [hbase] Apache-HBase commented on issue #1082: HBASE-23069 periodic dependency bump for Sep 2019

2020-01-22 Thread GitBox
Apache-HBase commented on issue #1082: HBASE-23069 periodic dependency bump for 
Sep 2019
URL: https://github.com/apache/hbase/pull/1082#issuecomment-577465072
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m  3s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -0 :warning: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 36s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   3m 34s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 20s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m 19s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 54s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedjars  |   5m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 38s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   3m 18s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 204m 19s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 272m 53s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.security.provider.TestCustomSaslAuthenticationProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1082/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1082 |
   | Optional Tests | dupname asflicense javac javadoc unit shadedjars 
hadoopcheck xml compile |
   | uname | Linux 0c03b78b67a8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1082/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / 62e340901f |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1082/2/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1082/2/testReport/
 |
   | Max. process+thread count | 4722 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1082/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021655#comment-17021655
 ] 

Andrew Kyle Purtell commented on HBASE-23330:
-

bq.  After staring at the code, I realized that all the calls to get a token 
happen after creating a connection.

Is that true in all active branches? 

bq. If we already have the connection created by then, we can use the regular 
"registry" way of fetching the cluster ID, no? 

Sure.

bq. Does anyone use delegation tokens with a ThriftConnection?

Yes, this is possible, and I don't think we have a way of determining how 
common it might be.

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.
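Andrew's suggestion above — a plain-text cluster ID endpoint that requires no prior authentication — can be sketched with the JDK's built-in `HttpServer` standing in for the master's info server. The class name and the `/clusterid` path are assumptions for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ClusterIdEndpoint {
    public static HttpServer start(String clusterId) throws Exception {
        // Port 0 asks the OS for any free port; a real master would reuse
        // its existing HTTP(S) info port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/clusterid", exchange -> {
            // Serve the cluster ID as plain text, no auth handshake required.
            byte[] body = clusterId.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The SPNEGO concern in the description is exactly about this context: if SPNEGO filters wrap the whole HTTP endpoint, such a path would need to be explicitly excluded from authentication.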





[jira] [Comment Edited] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021652#comment-17021652
 ] 

Bharath Vissapragada edited comment on HBASE-23330 at 1/23/20 1:22 AM:
---

[~ghelmling]/[~apurtell] After staring at the code, I realized that all the 
calls to get a token happen *after* creating a connection. For example, look 
at all the callers of {{TokenUtil#getAuthToken()}} (Also, refer to 
TableMapReduceUtil#initCredentials() call path). Now my point is, if we 
already have the connection created by then, we can use the regular "registry" 
way of fetching the cluster ID, no? The only case where it does not work is if the 
connection is of type {{ThriftConnection}}. Does anyone use delegation tokens 
with a ThriftConnection?

Fwiw, I figured out a way to bypass spnego authentication and implemented a 
servlet that can plumb the cluster ID. After doing that and fixing the actual 
problem, I figured that it is not actually needed at all. Did I miss something? 


was (Author: bharathv):
[~ghelmling]/[~apurtell] After staring at the code, I realized that all the 
calls to get a token happen *after* creating a connection. For example, look 
at all the callers of {{TokenUtil#getAuthToken()}} (Also, refer to 
TableMapReduceUtil#initCredentials() call path). Now my point is, if we 
already have the connection created by then, we can use the regular "registry" 
way of fetching the cluster ID, no? The only case where it does not work is if the 
connection is of type {ThriftConnection}. Does anyone use delegation tokens 
with a ThriftConnection?

Fwiw, I figured out a way to bypass spnego authentication and implemented a 
servlet that can plumb the cluster ID. After doing that and fixing the actual 
problem, I figured that it is not actually needed at all. Did I miss something? 

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.





[jira] [Commented] (HBASE-23330) Expose cluster ID for clients using it for delegation token based auth

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021652#comment-17021652
 ] 

Bharath Vissapragada commented on HBASE-23330:
--

[~ghelmling]/[~apurtell] After staring at the code, I realized that all the 
calls to get a token happen *after* creating a connection. For example, look 
at all the callers of {{TokenUtil#getAuthToken()}} (Also, refer to 
TableMapReduceUtil#initCredentials() call path). Now my point is, if we 
already have the connection created by then, we can use the regular "registry" 
way of fetching the cluster ID, no? The only case where it does not work is if the 
connection is of type {ThriftConnection}. Does anyone use delegation tokens 
with a ThriftConnection?

Fwiw, I figured out a way to bypass spnego authentication and implemented a 
servlet that can plumb the cluster ID. After doing that and fixing the actual 
problem, I figured that it is not actually needed at all. Did I miss something? 

>   Expose cluster ID for clients using it for delegation token based auth
> 
>
> Key: HBASE-23330
> URL: https://issues.apache.org/jira/browse/HBASE-23330
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, master
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> As Gary Helming noted in HBASE-18095, some clients use Cluster ID for 
> delegation based auth. 
> {quote}
> There is an additional complication here for token-based authentication. When 
> a delegation token is used for SASL authentication, the client uses the 
> cluster ID obtained from Zookeeper to select the token identifier to use. So 
> there would also need to be some Zookeeper-less, unauthenticated way to 
> obtain the cluster ID as well.
> {quote}
> Once we move ZK out of the picture, cluster ID sits behind an end point that 
> needs to be authenticated. Figure out a way to expose this to clients.
> One suggestion in the comments (from Andrew)
> {quote}
>  Cluster ID lookup is most easily accomplished with a new servlet on the 
> HTTP(S) endpoint on the masters, serving the cluster ID as plain text. It 
> can't share the RPC server endpoint when SASL is enabled because any 
> interaction with that endpoint must be authenticated. This is ugly but 
> alternatives seem worse. One alternative would be a second RPC port for APIs 
> that do not / cannot require prior authentication.
> {quote}
> There could be implications if SPNEGO is enabled on these http(s) end points. 
> We need to make sure that it is handled.





[jira] [Commented] (HBASE-23055) Alter hbase:meta

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021635#comment-17021635
 ] 

Hudson commented on HBASE-23055:


Results for branch branch-2
[build #2424 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Alter hbase:meta
> 
>
> Key: HBASE-23055
> URL: https://issues.apache.org/jira/browse/HBASE-23055
> Project: HBase
>  Issue Type: Task
>  Components: meta
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> hbase:meta is currently hardcoded. Its schema cannot be change.
> This issue is about allowing edits to hbase:meta schema. It will allow our 
> being able to set encodings such as the block-with-indexes which will help 
> quell CPU usage on host carrying hbase:meta. A dynamic hbase:meta is first 
> step on road to being able to split meta.





[jira] [Commented] (HBASE-23683) Make HBaseInterClusterReplicationEndpoint more extensible

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021638#comment-17021638
 ] 

Hudson commented on HBASE-23683:


Results for branch branch-2
[build #2424 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Make HBaseInterClusterReplicationEndpoint more extensible
> -
>
> Key: HBASE-23683
> URL: https://issues.apache.org/jira/browse/HBASE-23683
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> *HBaseInterClusterReplicationEndpoint* currently creates the cluster 
> connection and sink manager instances inside its _init_ method and assigns 
> those to private class variables. Then any potential custom extension of 
> *HBaseInterClusterReplicationEndpoint* that requires custom implementations 
> of connection and/or sink manager would need to resort to _java reflection_ 
> for effectively replacing those instances, such as below:
> {noformat}
> ...
> ClusterConnection conn = (ClusterConnection)ConnectionFactory.
>   createConnection(context.getConfiguration(), 
> User.create(replicationUgi));
> ReplicationSinkManager sinkManager = new ReplicationSinkManager(conn, 
> ctx.getPeerId(),
>   this, context.getConfiguration());
> try {
>   Field field = this.getClass().getSuperclass().getDeclaredField("conn");
>   field.setAccessible(true);
>   field.set(this, conn);
>   field = 
> this.getClass().getSuperclass().getDeclaredField("replicationSinkMgr");
>   field.setAccessible(true);
>   field.set(this, sinkManager);
> } catch (Exception e) {
>   throw new IOException(e);
> }
> ...
> {noformat}
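The reflection workaround above exists only because `conn` and `replicationSinkMgr` are private and assigned inside `init`. The extensibility being asked for amounts to protected factory methods a subclass can override. A minimal sketch with illustrative stand-in types (not the actual HBase classes or signatures):

```java
public class EndpointExtensibility {
    static class Conn {
        final String label;
        Conn(String label) { this.label = label; }
    }

    static class EndpointBase {
        protected Conn conn;

        public void init() {
            // Subclass hook replaces the hard-wired private assignment,
            // so no Field.setAccessible(true) gymnastics are needed.
            this.conn = createConnection();
        }

        protected Conn createConnection() {
            return new Conn("default");
        }
    }

    static class CustomEndpoint extends EndpointBase {
        @Override
        protected Conn createConnection() {
            return new Conn("custom"); // custom connection without reflection
        }
    }

    public static void main(String[] args) {
        EndpointBase e = new CustomEndpoint();
        e.init();
        System.out.println(e.conn.label); // prints: custom
    }
}
```

The same pattern applies to the sink manager: a protected `createSinkManager`-style hook lets an extension swap the implementation while keeping the base class's life cycle intact.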





[jira] [Commented] (HBASE-23709) Unwrap the real user to properly dispatch proxy-user auth'n

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021637#comment-17021637
 ] 

Hudson commented on HBASE-23709:


Results for branch branch-2
[build #2424 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Unwrap the real user to properly dispatch proxy-user auth'n
> ---
>
> Key: HBASE-23709
> URL: https://issues.apache.org/jira/browse/HBASE-23709
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Jan Hentschel
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> Currently {{TestSecureRESTServer}} fails consistently on branch-2 and should 
> be fixed.





[jira] [Commented] (HBASE-23711) Add test for MinVersions and KeepDeletedCells TTL

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021636#comment-17021636
 ] 

Hudson commented on HBASE-23711:


Results for branch branch-2
[build #2424 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2424//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add test for MinVersions and KeepDeletedCells TTL
> -
>
> Key: HBASE-23711
> URL: https://issues.apache.org/jira/browse/HBASE-23711
> Project: HBase
>  Issue Type: Test
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 1.6.0, 2.2.3, 2.1.9
>
> Attachments: HBASE-23711-master.v01.patch
>
>
> Recently I was researching how HBase handles the interactions between setting 
> MinVersions and KeepDeletedCells = TTL, and I wrote a test to prove my 
> assumptions about the behavior were correct. There doesn't seem to be an 
> equivalent existing test in TestMinVersions, so I thought I'd contribute it. 





[jira] [Commented] (HBASE-23718) [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021633#comment-17021633
 ] 

Michael Stack commented on HBASE-23718:
---

An amendment reset gson back to 2.8.5 since 2.8.6 is built against jdk9. The 
gson artifact is used by branch-1. Needs to be built with JDK 1.7.

> [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.
> -
>
> Key: HBASE-23718
> URL: https://issues.apache.org/jira/browse/HBASE-23718
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: hbase-thirdparty-3.2.0
>
>






[jira] [Updated] (HBASE-23718) [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23718:
--
Release Note: 
guava: 28.1-jre => 28.2-jre
error_prone: 2.3.3 => 2.3.4
netty: 4.1.42.Final => 4.1.44.Final
protobuf: 3.9.2 => 3.11.1
maven-assembly-plugin: 3.1.1 => 3.2.0


  was:
gson: 2.8.5 => 2.8.6
guava: 28.1-jre => 28.2-jre
error_prone: 2.3.3 => 2.3.4
netty: 4.1.42.Final => 4.1.44.Final
protobuf: 3.9.2 => 3.11.1
maven-assembly-plugin: 3.1.1 => 3.2.0



> [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.
> -
>
> Key: HBASE-23718
> URL: https://issues.apache.org/jira/browse/HBASE-23718
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: hbase-thirdparty-3.2.0
>
>






[GitHub] [hbase] wchevreuil commented on issue #1078: HBASE-23715 MasterFileSystem should not create MasterProcWALs dir on …

2020-01-22 Thread GitBox
wchevreuil commented on issue #1078: HBASE-23715 MasterFileSystem should not 
create MasterProcWALs dir on …
URL: https://github.com/apache/hbase/pull/1078#issuecomment-577441070
 
 
   Test failure is unrelated; TestCustomSaslAuthenticationProvider seems flaky, 
as it's failing for me even without this change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.

2020-01-22 Thread GitBox
wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] 
Provide client side method for removing ghost regions in meta.
URL: 
https://github.com/apache/hbase-operator-tools/pull/45#discussion_r369868118
 
 

 ##
 File path: 
hbase-hbck2/src/main/java/org/apache/hbase/FsRegionsMetaRecoverer.java
 ##
 @@ -70,61 +78,203 @@ public FsRegionsMetaRecoverer(Configuration 
configuration) throws IOException {
 this.fs = fileSystem;
   }
 
-  private List getTableRegionsDirs(String table) throws Exception {
+  private List getTableRegionsDirs(String table) throws IOException {
 String hbaseRoot = this.config.get(HConstants.HBASE_DIR);
 Path tableDir = FSUtils.getTableDir(new Path(hbaseRoot), 
TableName.valueOf(table));
 return FSUtils.getRegionDirs(fs, tableDir);
   }
 
   public Map> reportTablesMissingRegions(final 
List namespacesOrTables)
   throws IOException {
-final Map> result = new HashMap<>();
-List tableNames = 
MetaTableAccessor.getTableStates(this.conn).keySet().stream()
-  .filter(tableName -> {
-if(namespacesOrTables==null || namespacesOrTables.isEmpty()){
-  return true;
-} else {
-  Optional findings = namespacesOrTables.stream().filter(
-name -> (name.indexOf(":") > 0) ?
-  tableName.equals(TableName.valueOf(name)) :
-  tableName.getNamespaceAsString().equals(name)).findFirst();
-  return findings.isPresent();
-}
-  }).collect(Collectors.toList());
-tableNames.stream().forEach(tableName -> {
-  try {
-result.put(tableName,
-  
findMissingRegionsInMETA(tableName.getNameWithNamespaceInclAsString()));
-  } catch (Exception e) {
-LOG.warn("Can't get missing regions from meta", e);
-  }
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return missingChecker.reportTablesRegions(namespacesOrTables, 
this::findMissingRegionsInMETA);
+  }
+
+  public Map>
+  reportTablesExtraRegions(final List namespacesOrTables) throws 
IOException {
+InternalMetaChecker extraChecker = new InternalMetaChecker<>();
+return extraChecker.reportTablesRegions(namespacesOrTables, 
this::findExtraRegionsInMETA);
+  }
+
+  List findMissingRegionsInMETA(String table) throws IOException {
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return missingChecker.checkRegionsInMETA(table, (regions, dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(dirs, regions, r -> r.getEncodedName(), d -> 
d.getName());
 });
-return result;
-  }
-
-  List findMissingRegionsInMETA(String table) throws Exception {
-final List missingRegions = new ArrayList<>();
-final List regionsDirs = getTableRegionsDirs(table);
-TableName tableName = TableName.valueOf(table);
-List regionInfos = MetaTableAccessor.
-  getTableRegions(this.conn, tableName, false);
-HashSet regionsInMeta = regionInfos.stream().map(info ->
-  info.getEncodedName()).collect(Collectors.toCollection(HashSet::new));
-for(final Path regionDir : regionsDirs){
-  if (!regionsInMeta.contains(regionDir.getName())) {
-LOG.debug(regionDir + "is not in META.");
-missingRegions.add(regionDir);
-  }
-}
-return missingRegions;
   }
 
-  public void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
+  List findExtraRegionsInMETA(String table) throws IOException {
+InternalMetaChecker extraChecker = new InternalMetaChecker<>();
+return extraChecker.checkRegionsInMETA(table, (regions,dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(regions, dirs, d -> d.getName(), r -> 
r.getEncodedName());
+});
+  }
+
+  void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
 RegionInfo info = HRegionFileSystem.loadRegionInfoFileContent(fs, region);
 MetaTableAccessor.addRegionToMeta(conn, info);
   }
 
-  @Override public void close() throws IOException {
+  List addMissingRegionsInMeta(List regionsPath) throws 
IOException {
+List reAddedRegionsEncodedNames = new ArrayList<>();
+for(Path regionPath : regionsPath){
+  this.putRegionInfoFromHdfsInMeta(regionPath);
+  reAddedRegionsEncodedNames.add(regionPath.getName());
+}
+return reAddedRegionsEncodedNames;
+  }
+
+  public Pair, List> 
addMissingRegionsInMetaForTables(
+  List nameSpaceOrTable) throws IOException {
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return 
missingChecker.processRegionsMetaCleanup(this::reportTablesMissingRegions,
+  this::addMissingRegionsInMeta, nameSpaceOrTable);
+  }
+
+  public Pair, List> 
removeExtraRegionsFromMetaForTables(
+List nameSpaceOrTable) throws IOException {
+if(nameSpaceOrTable.size()>0) {
+  InternalMetaChecker extraChecker = new 
InternalMetaChecker<>();
+  return 
extraChecker.processRegionsMetaCleanup(th
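The `complement` helper introduced in the hunk above pairs two collections through separate key extractors, returning the elements of one collection that have no counterpart in the other (e.g. region dirs on the filesystem with no matching encoded name in meta). A minimal self-contained sketch of that idea — the name, signature, and argument order here are assumptions for illustration, not the actual HBCK2 `ListUtils` API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ComplementSketch {

    // Returns elements of 'left' whose key (via leftKey) does not appear
    // among the keys of 'right' (via rightKey).
    static <L, R, K> List<L> complement(List<L> left, List<R> right,
            Function<L, K> leftKey, Function<R, K> rightKey) {
        Set<K> rightKeys = right.stream().map(rightKey).collect(Collectors.toSet());
        return left.stream()
            .filter(l -> !rightKeys.contains(leftKey.apply(l)))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Region dirs found on the filesystem vs. encoded names present in meta.
        List<String> dirs = Arrays.asList("abc", "def", "ghi");
        List<String> inMeta = Arrays.asList("abc", "ghi");
        // "def" exists on disk but is missing from meta.
        System.out.println(complement(dirs, inMeta, d -> d, r -> r)); // prints [def]
    }
}
```

The same helper, with the roles of the two collections swapped, yields the "extra regions in meta" case shown in `findExtraRegionsInMETA`.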

[GitHub] [hbase] Apache-HBase commented on issue #936: HBASE-17115 Define UI admins via an ACL

2020-01-22 Thread GitBox
Apache-HBase commented on issue #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#issuecomment-577439989
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 45s |  master passed  |
   | +1 :green_heart: |  compile  |   3m 16s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  master passed  |
   | +0 :ok: |  refguide  |   6m 27s |  branch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | +1 :green_heart: |  shadedjars  |   4m 56s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   3m 39s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 32s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  20m 51s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 17s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 29s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +0 :ok: |  refguide  |   6m 17s |  patch has no errors when building the 
reference guide. See footer for rendered docs, which you should manually 
inspect.  |
   | +1 :green_heart: |  shadedjars  |   4m 58s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  17m 12s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | -1 :x: |  javadoc  |   0m 16s |  hbase-http generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   3m  0s |  root generated 1 new + 3 unchanged - 0 
fixed = 4 total (was 3)  |
   | +1 :green_heart: |  findbugs  |  23m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 203m 40s |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 20s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 328m 10s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.security.provider.TestCustomSaslAuthenticationProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/936 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile refguide |
   | uname | Linux bb3c9a4ff87d 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-936/out/precommit/personality/provided.sh
 |
   | git revision | master / 11b7ecb3af |
   | Default Java | 1.8.0_181 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/branch-site/book.html
 |
   | refguide | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/patch-site/book.html
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/diff-javadoc-javadoc-hbase-http.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/diff-javadoc-javadoc-root.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/artifact/out/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/testReport/
 |
   | Max. process+thread count | 4688 (vs. ulimit of 1) |
   | modules | C: hbase-http hbase-server . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-936/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://y

[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.

2020-01-22 Thread GitBox
wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] 
Provide client side method for removing ghost regions in meta.
URL: 
https://github.com/apache/hbase-operator-tools/pull/45#discussion_r369867148
 
 

 ##
 File path: 
hbase-hbck2/src/main/java/org/apache/hbase/FsRegionsMetaRecoverer.java
 ##
 @@ -70,61 +78,203 @@ public FsRegionsMetaRecoverer(Configuration 
configuration) throws IOException {
 this.fs = fileSystem;
   }
 
-  private List getTableRegionsDirs(String table) throws Exception {
+  private List getTableRegionsDirs(String table) throws IOException {
 String hbaseRoot = this.config.get(HConstants.HBASE_DIR);
 Path tableDir = FSUtils.getTableDir(new Path(hbaseRoot), 
TableName.valueOf(table));
 return FSUtils.getRegionDirs(fs, tableDir);
   }
 
   public Map> reportTablesMissingRegions(final 
List namespacesOrTables)
   throws IOException {
-final Map> result = new HashMap<>();
-List tableNames = 
MetaTableAccessor.getTableStates(this.conn).keySet().stream()
-  .filter(tableName -> {
-if(namespacesOrTables==null || namespacesOrTables.isEmpty()){
-  return true;
-} else {
-  Optional findings = namespacesOrTables.stream().filter(
-name -> (name.indexOf(":") > 0) ?
-  tableName.equals(TableName.valueOf(name)) :
-  tableName.getNamespaceAsString().equals(name)).findFirst();
-  return findings.isPresent();
-}
-  }).collect(Collectors.toList());
-tableNames.stream().forEach(tableName -> {
-  try {
-result.put(tableName,
-  
findMissingRegionsInMETA(tableName.getNameWithNamespaceInclAsString()));
-  } catch (Exception e) {
-LOG.warn("Can't get missing regions from meta", e);
-  }
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return missingChecker.reportTablesRegions(namespacesOrTables, 
this::findMissingRegionsInMETA);
+  }
+
+  public Map>
+  reportTablesExtraRegions(final List namespacesOrTables) throws 
IOException {
+InternalMetaChecker extraChecker = new InternalMetaChecker<>();
+return extraChecker.reportTablesRegions(namespacesOrTables, 
this::findExtraRegionsInMETA);
+  }
+
+  List findMissingRegionsInMETA(String table) throws IOException {
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return missingChecker.checkRegionsInMETA(table, (regions, dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(dirs, regions, r -> r.getEncodedName(), d -> 
d.getName());
 });
-return result;
-  }
-
-  List findMissingRegionsInMETA(String table) throws Exception {
-final List missingRegions = new ArrayList<>();
-final List regionsDirs = getTableRegionsDirs(table);
-TableName tableName = TableName.valueOf(table);
-List regionInfos = MetaTableAccessor.
-  getTableRegions(this.conn, tableName, false);
-HashSet regionsInMeta = regionInfos.stream().map(info ->
-  info.getEncodedName()).collect(Collectors.toCollection(HashSet::new));
-for(final Path regionDir : regionsDirs){
-  if (!regionsInMeta.contains(regionDir.getName())) {
-LOG.debug(regionDir + "is not in META.");
-missingRegions.add(regionDir);
-  }
-}
-return missingRegions;
   }
 
-  public void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
+  List findExtraRegionsInMETA(String table) throws IOException {
+InternalMetaChecker extraChecker = new InternalMetaChecker<>();
+return extraChecker.checkRegionsInMETA(table, (regions,dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(regions, dirs, d -> d.getName(), r -> 
r.getEncodedName());
+});
+  }
+
+  void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
 RegionInfo info = HRegionFileSystem.loadRegionInfoFileContent(fs, region);
 MetaTableAccessor.addRegionToMeta(conn, info);
   }
 
-  @Override public void close() throws IOException {
+  List addMissingRegionsInMeta(List regionsPath) throws 
IOException {
+List reAddedRegionsEncodedNames = new ArrayList<>();
+for(Path regionPath : regionsPath){
+  this.putRegionInfoFromHdfsInMeta(regionPath);
+  reAddedRegionsEncodedNames.add(regionPath.getName());
+}
+return reAddedRegionsEncodedNames;
+  }
+
+  public Pair, List> 
addMissingRegionsInMetaForTables(
+  List nameSpaceOrTable) throws IOException {
+InternalMetaChecker missingChecker = new InternalMetaChecker<>();
+return 
missingChecker.processRegionsMetaCleanup(this::reportTablesMissingRegions,
+  this::addMissingRegionsInMeta, nameSpaceOrTable);
+  }
+
+  public Pair, List> 
removeExtraRegionsFromMetaForTables(
+List nameSpaceOrTable) throws IOException {
+if(nameSpaceOrTable.size()>0) {
+  InternalMetaChecker extraChecker = new 
InternalMetaChecker<>();
+  return 
extraChecker.processRegionsMetaCleanup(th

[GitHub] [hbase] apurtell commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
apurtell commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369862230
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,19 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. For example, if all the masters 
are down, region
+//   servers can abort at the same time, because the internal connection 
is master dependent and
+//   fails. This is an operational nightmare. Using the ZK based registry 
means that the region
+//   servers are now on the look out for new masters, if they are spun up.
+// - Configuration management for region servers (cluster internal) is 
much simpler when adding
+//   new masters etc.
+// - We need to retain ZKConnectionRegistry for replication use anyway, so 
we just extend it for
+//   other internal connections too.
+conf.set(HConstants.CLIENT_CONNECTION_REGISTRY_IMPL_CONF_KEY,
+"org.apache.hadoop.hbase.client.ZKConnectionRegistry");
 
 Review comment:
   Ok, but these class names can be constant strings at least. 




[GitHub] [hbase] apurtell commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
apurtell commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369861495
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
 ##
 @@ -268,14 +268,19 @@ public static Configuration 
createClusterConf(Configuration baseConf, String clu
* used to communicate with distant clusters
* @param conf configuration object to configure
* @param key string that contains the 3 required configurations
-   * @throws IOException
*/
   private static void applyClusterKeyToConf(Configuration conf, String key)
-  throws IOException{
+  throws IOException {
 ZKConfig.ZKClusterKey zkClusterKey = ZKConfig.transformClusterKey(key);
 conf.set(HConstants.ZOOKEEPER_QUORUM, zkClusterKey.getQuorumString());
 conf.setInt(HConstants.ZOOKEEPER_CLIENT_PORT, 
zkClusterKey.getClientPort());
 conf.set(HConstants.ZOOKEEPER_ZNODE_PARENT, zkClusterKey.getZnodeParent());
+// Without the right registry, the above configs are useless. Also, we 
don't use setClass()
+// here because the ConnectionRegistry* classes are not resolvable from 
this module.
+// This will be broken if ZkConnectionRegistry class gets renamed or 
moved. Is there a better
+// way?
 
 Review comment:
   You could use findClass and, if there's an exception, fall through to 
alternate or recovery code. Anyway, agreed, a reference to a class constant is 
not called for here. 




[GitHub] [hbase] apurtell commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
apurtell commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369862135
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
 ##
 @@ -789,8 +789,19 @@ public boolean 
registerService(com.google.protobuf.Service instance) {
 return true;
   }
 
-  private Configuration unsetClientZookeeperQuorum() {
+  private Configuration cleanupConfiguration() {
 Configuration conf = this.conf;
+// We use ZKConnectionRegistry for all the internal communication, 
primarily for these reasons:
+// - Decouples RS and master life cycles. For example, if all the masters 
are down, region
+//   servers can abort at the same time, because the internal connection 
is master dependent and
+//   fails. This is an operational nightmare. Using the ZK based registry 
means that the region
+//   servers are now on the look out for new masters, if they are spun up.
 
 Review comment:
   The tense here is confusing. 
   
   "Using the ZK based registry means the region servers continue to be 
independent of master availability..." ?




[GitHub] [hbase] apurtell commented on a change in pull request #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
apurtell commented on a change in pull request #1039: HBASE-23647: Make 
MasterRegistry the default impl.
URL: https://github.com/apache/hbase/pull/1039#discussion_r369863370
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java
 ##
 @@ -133,11 +133,19 @@ public void 
testMasterShutdownBeforeStartingAnyRegionServer() throws Exception {
 util.startMiniZKCluster();
 util.createRootDir();
 final LocalHBaseCluster cluster =
-new LocalHBaseCluster(conf, NUM_MASTERS, NUM_RS, HMaster.class,
+new LocalHBaseCluster(conf, NUM_MASTERS, 0, NUM_RS, HMaster.class,
 MiniHBaseCluster.MiniHBaseClusterRegionServer.class);
 final int MASTER_INDEX = 0;
 final MasterThread master = cluster.getMasters().get(MASTER_INDEX);
 master.start();
+// Switching to master registry exposed a race in the master bootstrap 
that can result in a
+// lost shutdown command (essentially HBASE-8422). The race is essentially 
because the
+// server manager in HMaster is not initialized by the time shutdown() RPC 
(below) is made to
+// the master. The reason it was not happening earlier is because the 
connection creation with
+// ZK registry is so slow that by then the server manager is init'ed thus 
masking the problem.
+// For now, I'm putting a sleep here to work around the issue; I think the 
fix for it is a little
+// delicate and needs to be done separately.
+Thread.sleep(5000);
 
 Review comment:
   Is it possible to use Waiter#waitFor to wait on some condition variable or 
predicate? Hard-coded sleeps in tests tend to fall over on Apache Jenkins 
because of load issues, i.e. the sleeps can never be long enough...


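The reviewer's suggestion above — replace a fixed `Thread.sleep(5000)` with polling on a condition up to a timeout — can be sketched generically. This is a minimal stand-in, not HBase's actual `Waiter#waitFor` API; the helper name and signature are assumptions for illustration:

```java
import java.util.function.BooleanSupplier;

public class WaitForSketch {

    // Polls 'condition' every 'intervalMs' until it holds or 'timeoutMs'
    // elapses; returns whether the condition was eventually satisfied.
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier condition)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs);
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~200ms; waitFor returns long before
        // the 5s timeout, unlike a hard-coded 5s sleep.
        boolean ok = waitFor(5000, 10, () -> System.currentTimeMillis() - start > 200);
        System.out.println(ok); // prints true
    }
}
```

In the test above, the predicate would check something like "server manager is initialized" instead of an elapsed-time condition, so the test only waits as long as the cluster actually needs.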


[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369857412
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobConstants.java
 ##
 @@ -55,33 +55,22 @@
   public static final long DEFAULT_MOB_CACHE_EVICT_PERIOD = 3600L;
 
   public final static String TEMP_DIR_NAME = ".tmp";
-  public final static String BULKLOAD_DIR_NAME = ".bulkload";
-  public final static byte[] MOB_TABLE_LOCK_SUFFIX = Bytes.toBytes(".mobLock");
-  public final static String EMPTY_STRING = "";
-  /**
-   * If the size of a mob file is less than this value, it's regarded as a 
small file and needs to
-   * be merged in mob compaction. The default value is 1280MB.
-   */
-  public static final String MOB_COMPACTION_MERGEABLE_THRESHOLD =
-"hbase.mob.compaction.mergeable.threshold";
-  public static final long DEFAULT_MOB_COMPACTION_MERGEABLE_THRESHOLD = 10 * 
128 * 1024 * 1024;
+
   /**
-   * The max number of del files that is allowed in the mob file compaction. 
In the mob
-   * compaction, when the number of existing del files is larger than this 
value, they are merged
-   * until number of del files is not larger this value. The default value is 
3.
+   * The max number of MOB table regions that is allowed in a batch of the 
mob compaction.
+   * By setting this number to a custom value, users can control the overall 
effect
+   * of a major compaction of a large MOB-enabled table.
*/
-  public static final String MOB_DELFILE_MAX_COUNT = 
"hbase.mob.delfile.max.count";
 
 Review comment:
   MobFileCleanerChore already checks old configs and logs warnings. I will 
restore all the removed old configs in MobConstants.




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369855291
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobConstants.java
 ##
 @@ -55,33 +55,22 @@
   public static final long DEFAULT_MOB_CACHE_EVICT_PERIOD = 3600L;
 
   public final static String TEMP_DIR_NAME = ".tmp";
-  public final static String BULKLOAD_DIR_NAME = ".bulkload";
 
 Review comment:
   It is not a public API; these constants are not client-usable, although they are 
client-visible. I removed them because they are no longer used in the HBase code.




[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369854559
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCompactionChore.java
 ##
 @@ -0,0 +1,224 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.CompactionState;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableState;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+
+
+/**
+ * Periodic MOB compaction chore.
+ * It runs MOB compaction on region servers in parallel, thus
+ * utilizing distributed cluster resources. To avoid possible major
+ * compaction storms, one can specify the maximum number of regions to be compacted
+ * in parallel by setting the configuration parameter: 
+ * 'hbase.mob.major.compaction.region.batch.size', which by default is 0 
(unlimited).
+ *
+ */
+@InterfaceAudience.Private
+public class MobFileCompactionChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MobFileCompactionChore.class);
+  private Configuration conf;
+  private HMaster master;
+  private int regionBatchSize = 0;// not set - compact all
+
+  public MobFileCompactionChore(HMaster master) {
+super(master.getServerName() + "-MobFileCompactionChore", master,
+
master.getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+  MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD),
+
master.getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+  MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD),
+TimeUnit.SECONDS);
+this.master = master;
+this.conf = master.getConfiguration();
+this.regionBatchSize =
+
master.getConfiguration().getInt(MobConstants.MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE,
+  MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE);
+
+  }
+
+  @VisibleForTesting
+  public MobFileCompactionChore(Configuration conf, int batchSize) {
+this.conf = conf;
+this.regionBatchSize = batchSize;
+  }
+
+  @Override
+  protected void chore() {
+
+boolean reported = false;
+
+try (Connection conn = ConnectionFactory.createConnection(conf);
+Admin admin = conn.getAdmin();) {
+
+  TableDescriptors htds = master.getTableDescriptors();
+  Map map = htds.getAll();
+  for (TableDescriptor htd : map.values()) {
+if (!master.getTableStateManager().isTableState(htd.getTableName(),
+  TableState.State.ENABLED)) {
+  LOG.debug("Skipping MOB compaction on table {} because it is not 
ENABLED",
+htd.getTableName());
+  continue;
+} else {
+  LOG.debug("Starting MOB compaction on table {}", htd.getTableName());
+}
+for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+  try {
+if (hcd.isMobEnabled()) {
+  if (!reported) {
+master.reportMobCompactionStart(htd.getTableName());
+reported = true;
+  }
+  LOG.info(" Major compacting {} cf={}", htd.getTableName(), 
hcd.getNameAsString());
+  if (regionBatchSize == 

[GitHub] [hbase] bharathv commented on issue #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
bharathv commented on issue #1039: HBASE-23647: Make MasterRegistry the default 
impl.
URL: https://github.com/apache/hbase/pull/1039#issuecomment-577427617
 
 
   Reposting from the jira comment:
   
   @apurtell  TestFromClientSide was known to be flaky (even before this 
patch). So I'm fairly certain it has nothing to do with the current jira.
   
   One thing @ndimiduk  and I noticed here is that switching to JUnit 4.13 
(HBASE-23664) has exacerbated the problem. Based on some initial debugging, it 
looks like it has something to do with leaking FileSystem/DFSClient objects in 
the HBase code: restarting the minicluster in the same JUnit test runner JVM 
(which TestFromClientSide does) causes a leak of HDFS lease renewer threads. 
Somehow JUnit 4.12 was masking the problem.


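The leak described above is characteristic of a cached-client pattern: a static cache hands out shared client instances, each owning a background renewer thread that only stops on `close()`. Dropping the reference (e.g. restarting a minicluster) does not stop the thread. A minimal self-contained sketch of that pattern — this is an illustration, not Hadoop's actual `FileSystem` cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CachedClientSketch {
    static final AtomicInteger LIVE_RENEWERS = new AtomicInteger();
    static final Map<String, CachedClientSketch> CACHE = new ConcurrentHashMap<>();

    private final Thread renewer;

    private CachedClientSketch() {
        // Daemon thread standing in for the HDFS lease renewer.
        renewer = new Thread(() -> {
            try {
                while (!Thread.interrupted()) Thread.sleep(50);
            } catch (InterruptedException ignored) { }
        });
        renewer.setDaemon(true);
        renewer.start();
        LIVE_RENEWERS.incrementAndGet();
    }

    // Like a cached factory: returns a shared instance per key.
    static CachedClientSketch get(String key) {
        return CACHE.computeIfAbsent(key, k -> new CachedClientSketch());
    }

    void close() {
        renewer.interrupt();
        LIVE_RENEWERS.decrementAndGet();
        CACHE.values().remove(this);
    }

    public static void main(String[] args) {
        // Two "cluster restarts" that never close() leak two renewer threads...
        get("cluster-1");
        CACHE.clear();          // dropping the reference does not stop the thread
        get("cluster-1");
        System.out.println(LIVE_RENEWERS.get()); // prints 2
        // ...whereas an explicit close() releases the thread.
        get("cluster-1").close();
        System.out.println(LIVE_RENEWERS.get()); // prints 1
    }
}
```

This is why restarting the minicluster inside one test-runner JVM accumulates renewer threads: each restart creates fresh clients while the previous ones are never closed.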


[jira] [Commented] (HBASE-23647) Make MasterRegistry the default registry impl

2020-01-22 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021607#comment-17021607
 ] 

Bharath Vissapragada commented on HBASE-23647:
--

[~apurtell] {{TestFromClientSide}} was known to be flaky (even before this 
patch). So I'm fairly certain it has nothing to do with the current jira.

One thing [~ndimiduk] and I noticed here is that switching to JUnit 4.13 
(HBASE-23664) has exacerbated the problem. Based on some initial debugging it 
looks like it has something to do with leaking FileSystem/DFSClient objects in 
the HBase code and restarting the minicluster in the same JUnit test runner JVM 
(what TestFromClientSide does) causes a leak of hdfs lease renewer threads. 
Somehow JUnit 4.12 was masking the problem.

> Make MasterRegistry the default registry impl
> -
>
> Key: HBASE-23647
> URL: https://issues.apache.org/jira/browse/HBASE-23647
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> Now that we are close to getting the registry implementation patch in, the 
> idea here is to make it the default implementation in 3.0.0 and this means
> - No known bugs with the implementation
> - No known performance issues
> - Entire nightly test suite is green (and without flakes).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL

2020-01-22 Thread GitBox
joshelser commented on issue #936: HBASE-17115 Define UI admins via an ACL
URL: https://github.com/apache/hbase/pull/936#issuecomment-577413620
 
 
   Turns out I'm still missing the critical bits for everything besides the 
`/logs/` http endpoint. Looking in hadoop to see how they do this because I 
have no idea how the existing AdminAuthorizedServlet is supposed to be used (I 
would have thought it should be a Filter..)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23647) Make MasterRegistry the default registry impl

2020-01-22 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021578#comment-17021578
 ] 

Andrew Kyle Purtell commented on HBASE-23647:
-

So the only test failure now is TestFromClientSide? Can this be excluded as not 
related or a flake?

> Make MasterRegistry the default registry impl
> -
>
> Key: HBASE-23647
> URL: https://issues.apache.org/jira/browse/HBASE-23647
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 3.0.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Major
>
> Now that we are close to getting the registry implementation patch in, the 
> idea here is to make it the default implementation in 3.0.0 and this means
> - No known bugs with the implementation
> - No known performance issues
> - Entire nightly test suite is green (and without flakes).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] apurtell commented on issue #1039: HBASE-23647: Make MasterRegistry the default impl.

2020-01-22 Thread GitBox
apurtell commented on issue #1039: HBASE-23647: Make MasterRegistry the default 
impl.
URL: https://github.com/apache/hbase/pull/1039#issuecomment-577412722
 
 
   So the only test failure now is TestFromClientSide? Can this be excluded as 
not related or a flake?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23717) [hbase-thirdparty] Change pom version from 3.1.2-SNAPSHOT to 3.2.0-SNAPSHOT

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23717:
--
Fix Version/s: (was: thirdparty-3.2.0)
   hbase-thirdparty-3.2.0

> [hbase-thirdparty] Change pom version from 3.1.2-SNAPSHOT to 3.2.0-SNAPSHOT
> ---
>
> Key: HBASE-23717
> URL: https://issues.apache.org/jira/browse/HBASE-23717
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Affects Versions: thirdparty-3.2.0
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: hbase-thirdparty-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23716) [hbase-thirdparty] Make release 3.2.0

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23716:
--
Fix Version/s: (was: thirdparty-3.2.0)
   hbase-thirdparty-3.2.0

> [hbase-thirdparty] Make release 3.2.0
> -
>
> Key: HBASE-23716
> URL: https://issues.apache.org/jira/browse/HBASE-23716
> Project: HBase
>  Issue Type: Bug
>  Components: thirdparty
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Michael Stack
>Priority: Major
> Fix For: hbase-thirdparty-3.2.0
>
>
> Update the dependencies in hbase-thirdparty. In particular, pb goes from 3.7 
> to 3.11 with a few perf improvements.
> Would like to update the hbase-thirdparty to include in the coming 
> hbase-2.3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23718) [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23718:
--
Fix Version/s: (was: thirdparty-3.2.0)
   hbase-thirdparty-3.2.0

> [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.
> -
>
> Key: HBASE-23718
> URL: https://issues.apache.org/jira/browse/HBASE-23718
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: hbase-thirdparty-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-operator-tools] jatsakthi commented on issue #47: HBASE-23180 hbck2 testing tool

2020-01-22 Thread GitBox
jatsakthi commented on issue #47: HBASE-23180 hbck2 testing tool
URL: 
https://github.com/apache/hbase-operator-tools/pull/47#issuecomment-577398625
 
 
   @petersomogyi can I get your +1 here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #1062: HBASE-23705 Add CellComparator to HFileContext

2020-01-22 Thread GitBox
Apache-HBase commented on issue #1062: HBASE-23705 Add CellComparator to 
HFileContext
URL: https://github.com/apache/hbase/pull/1062#issuecomment-577396513
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
20 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 46s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 51s |  master passed  |
   | +1 :green_heart: |  checkstyle  |  59m 15s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m  2s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  master passed  |
   | +0 :ok: |  spotbugs  |   5m 24s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m  2s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   6m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  hbase-common: The patch 
generated 0 new + 17 unchanged - 32 fixed = 17 total (was 49)  |
   | +1 :green_heart: |  checkstyle  |   1m 25s |  hbase-server: The patch 
generated 0 new + 55 unchanged - 155 fixed = 55 total (was 210)  |
   | -1 :x: |  checkstyle  |   0m 20s |  hbase-mapreduce: The patch generated 1 
new + 0 unchanged - 18 fixed = 1 total (was 18)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   5m 36s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  18m 44s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   7m  2s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 12s |  hbase-common in the patch passed.  
|
   | -1 :x: |  unit  | 169m  4s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  unit  |  18m 44s |  hbase-mapreduce in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 326m 28s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.security.provider.TestCustomSaslAuthenticationProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1062/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1062 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 486397b1d226 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1062/out/precommit/personality/provided.sh
 |
   | git revision | master / 11b7ecb3af |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1062/5/artifact/out/diff-checkstyle-hbase-mapreduce.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1062/5/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1062/5/testReport/
 |
   | Max. process+thread count | 5323 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server hbase-mapreduce U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1062/5/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (HBASE-18095) Provide an option for clients to find the server hosting META that does not involve the ZooKeeper client

2020-01-22 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021525#comment-17021525
 ] 

Hudson commented on HBASE-18095:


Results for branch HBASE-18095/client-locate-meta-no-zookeeper
[build #47 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/47/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/47//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/47//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18095%252Fclient-locate-meta-no-zookeeper/47//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Provide an option for clients to find the server hosting META that does not 
> involve the ZooKeeper client
> 
>
> Key: HBASE-18095
> URL: https://issues.apache.org/jira/browse/HBASE-18095
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Reporter: Andrew Kyle Purtell
>Assignee: Bharath Vissapragada
>Priority: Major
> Attachments: HBASE-18095.master-v1.patch, HBASE-18095.master-v2.patch
>
>
> Clients are required to connect to ZooKeeper to find the location of the 
> regionserver hosting the meta table region. Site configuration provides the 
> client a list of ZK quorum peers and the client uses an embedded ZK client to 
> query meta location. Timeouts and retry behavior of this embedded ZK client 
> are managed orthogonally to HBase layer settings and in some cases the ZK 
> cannot manage what in theory the HBase client can, i.e. fail fast upon outage 
> or network partition.
> We should consider new configuration settings that provide a list of 
> well-known master and backup master locations, and with this information the 
> client can contact any of the master processes directly. Any master in either 
> active or passive state will track meta location and respond to requests for 
> it with its cached last known location. If this location is stale, the client 
> can ask again with a flag set that requests the master refresh its location 
> cache and return the up-to-date location. Every client interaction with the 
> cluster thus uses only HBase RPC as transport, with appropriate settings 
> applied to the connection. The configuration toggle that enables this 
> alternative meta location lookup should be false by default.
> This removes the requirement that HBase clients embed the ZK client and 
> contact the ZK service directly at the beginning of the connection lifecycle. 
> This has several benefits. ZK service need not be exposed to clients, and 
> their potential abuse, yet no benefit ZK provides the HBase server cluster is 
> compromised. Normalizing HBase client and ZK client timeout settings and 
> retry behavior - in some cases, impossible, i.e. for fail-fast - is no longer 
> necessary. 
> And, from [~ghelmling]: There is an additional complication here for 
> token-based authentication. When a delegation token is used for SASL 
> authentication, the client uses the cluster ID obtained from Zookeeper to 
> select the token identifier to use. So there would also need to be some 
> Zookeeper-less, unauthenticated way to obtain the cluster ID as well. 
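The lookup flow proposed in the description can be sketched as follows. These are hypothetical names, not the actual HBase RPC API: the client tries each configured master (active or backup) for its cached meta location, and sets a refresh flag when the previously returned location turned out to be stale.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical endpoint: one configured master, active or backup.
interface MasterEndpoint {
  // reload=true asks the master to refresh its cached meta location first.
  Optional<String> getMetaLocation(boolean reload);
}

class MetaLocator {
  private final List<MasterEndpoint> masters; // active + backup masters from site config

  MetaLocator(List<MasterEndpoint> masters) {
    this.masters = masters;
  }

  /** staleHint=true means the last location this method returned did not work. */
  String locateMeta(boolean staleHint) {
    // Any master, active or passive, tracks the meta location and can answer.
    for (MasterEndpoint m : masters) {
      Optional<String> loc = m.getMetaLocation(staleHint);
      if (loc.isPresent()) {
        return loc.get();
      }
    }
    throw new IllegalStateException("no configured master returned a meta location");
  }
}
```

The point of the sketch is that every step uses only the HBase RPC transport: no embedded ZooKeeper client is involved in finding meta.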



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23718) [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23718.
---
Hadoop Flags: Reviewed
Assignee: Michael Stack
  Resolution: Fixed

Merged. Resolving. Thanks for review [~psomogyi]

> [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.
> -
>
> Key: HBASE-23718
> URL: https://issues.apache.org/jira/browse/HBASE-23718
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: thirdparty-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-thirdparty] saintstack commented on a change in pull request #10: HBASE-23718 [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread GitBox
saintstack commented on a change in pull request #10: HBASE-23718 
[hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.
URL: https://github.com/apache/hbase-thirdparty/pull/10#discussion_r369803580
 
 

 ##
 File path: pom.xml
 ##
 @@ -127,7 +127,7 @@
 ${compileSource}
 3.3.3
 org.apache.hbase.thirdparty
-3.9.2
+3.11.1
 
 Review comment:
   It was not up in the mvn repos when I was at this last night 
https://mvnrepository.com/artifact/com.google.protobuf/protobuf-java ... and no 
java commits in 3.11.2 anyways: 
https://github.com/protocolbuffers/protobuf/compare/v3.11.2...3.11.x


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-thirdparty] saintstack merged pull request #10: HBASE-23718 [hbase-thirdparty] Update libs; pb from 3.9 to 3.11, etc.

2020-01-22 Thread GitBox
saintstack merged pull request #10: HBASE-23718 [hbase-thirdparty] Update libs; 
pb from 3.9 to 3.11, etc.
URL: https://github.com/apache/hbase-thirdparty/pull/10
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (HBASE-23717) [hbase-thirdparty] Change pom version from 3.1.2-SNAPSHOT to 3.2.0-SNAPSHOT

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-23717.
---
Hadoop Flags: Reviewed
  Resolution: Fixed

Merged. Resolving.

> [hbase-thirdparty] Change pom version from 3.1.2-SNAPSHOT to 3.2.0-SNAPSHOT
> ---
>
> Key: HBASE-23717
> URL: https://issues.apache.org/jira/browse/HBASE-23717
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Affects Versions: thirdparty-3.2.0
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: thirdparty-3.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369801379
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMobStoreCompaction.java
 ##
 @@ -158,6 +161,9 @@ public void testSmallerValue() throws Exception {
 
   /**
* During compaction, the mob threshold size is changed.
+   * The test is no longer valid. Major MOB compaction must be triggered by the user.
+   * HRegion does not provide a public API to trigger major compaction by the user.
+   * This test will move to the mob sub-package.
 
 Review comment:
   Test actually passes. Moved to mob sub-package


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-thirdparty] saintstack commented on issue #9: HBASE-23717 [hbase-thirdparty] Change pomp version from 3.1.2-SNAPSHO…

2020-01-22 Thread GitBox
saintstack commented on issue #9: HBASE-23717 [hbase-thirdparty] Change pomp 
version from 3.1.2-SNAPSHO…
URL: https://github.com/apache/hbase-thirdparty/pull/9#issuecomment-577384113
 
 
   Thanks @HorizonNet and @petersomogyi  (Fixed the title on merge)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-thirdparty] saintstack merged pull request #9: HBASE-23717 [hbase-thirdparty] Change pomp version from 3.1.2-SNAPSHO…

2020-01-22 Thread GitBox
saintstack merged pull request #9: HBASE-23717 [hbase-thirdparty] Change pomp 
version from 3.1.2-SNAPSHO…
URL: https://github.com/apache/hbase-thirdparty/pull/9
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23069) periodic dependency bump for Sep 2019

2020-01-22 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021497#comment-17021497
 ] 

Michael Stack commented on HBASE-23069:
---

Undid ruby update and the httpclient changes.

The first one fails shell tests because of a missing method:
{{
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.575 
s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
[ERROR] org.apache.hadoop.hbase.client.TestShell.testRunShellTests  Time 
elapsed: 0.558 s  <<< ERROR!
org.jruby.embed.EvalFailedException: (LoadError) load error: hbase_constants -- 
java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
at 
org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:40)
Caused by: org.jruby.exceptions.LoadError: (LoadError) load error: 
hbase_constants -- java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
Caused by: java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
at 
org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:40)}}

The httpclient update removes any '//' from our encoded REST URLs -- they are
illegal -- which breaks our URL parsing.

> periodic dependency bump for Sep 2019
> -
>
> Key: HBASE-23069
> URL: https://issues.apache.org/jira/browse/HBASE-23069
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies, hbase-thirdparty
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> we should do a pass to see if there are any dependencies we can bump. (also 
> follow-on we should automate this check)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23069) periodic dependency bump for Sep 2019

2020-01-22 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23069:
--
Release Note: 
caffeine: 2.6.2 => 2.8.1
commons-codec: 1.10 => 1.13
commons-io: 2.5 => 2.6
disrupter: 3.3.6 => 3.4.2
httpcore: 4.4.6 => 4.4.13
jackson: 2.9.10 => 2.10.1
jackson.databind: 2.9.10.1 => 2.10.1
jetty: 9.3.27.v20190418 => 9.3.28.v20191105
protobuf.plugin: 0.5.0 => 0.6.1
zookeeper: 3.4.10 => 3.4.14
slf4j: 1.7.25 => 1.7.30
rat: 0.12 => 0.13
asciidoctor: 1.5.5 => 1.5.8.1
asciidoctor.pdf: 1.5.0-alpha.15 => 1.5.0-rc.2
error-prone: 2.3.3 => 2.3.4

  was:
caffeine: 2.6.2 => 2.8.1
commons-codec: 1.10 => 1.13
commons-io: 2.5 => 2.6
disrupter: 3.3.6 => 3.4.2
httpclient: 4.5.3 => 4.5.11
httpcore: 4.4.6 => 4.4.13
jackson: 2.9.10 => 2.10.1
jackson.databind: 2.9.10.1 => 2.10.1
jetty: 9.3.27.v20190418 => 9.3.28.v20191105
jruby: 9.1.17.0 => 9.2.9.0
protobuf.plugin: 0.5.0 => 0.6.1
zookeeper: 3.4.10 => 3.4.14
slf4j: 1.7.25 => 1.7.30
rat: 0.12 => 0.13
asciidoctor: 1.5.5 => 1.5.8.1
asciidoctor.pdf: 1.5.0-alpha.15 => 1.5.0-rc.2
error-prone: 2.3.3 => 2.3.4


> periodic dependency bump for Sep 2019
> -
>
> Key: HBASE-23069
> URL: https://issues.apache.org/jira/browse/HBASE-23069
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies, hbase-thirdparty
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.3.0, 1.6.0
>
>
> we should do a pass to see if there are any dependencies we can bump. (also 
> follow-on we should automate this check)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on issue #1082: HBASE-23069 periodic dependency bump for Sep 2019

2020-01-22 Thread GitBox
saintstack commented on issue #1082: HBASE-23069 periodic dependency bump for 
Sep 2019
URL: https://github.com/apache/hbase/pull/1082#issuecomment-577381759
 
 
   New push walks back the ruby change and the httpclient changes.
   
   The first one fails shell tests because of a missing method:
   ```
   
   [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
21.575 s <<< FAILURE! - in org.apache.hadoop.hbase.client.TestShell
   [ERROR] org.apache.hadoop.hbase.client.TestShell.testRunShellTests  Time 
elapsed: 0.558 s  <<< ERROR!
   org.jruby.embed.EvalFailedException: (LoadError) load error: hbase_constants 
-- java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
at 
org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:40)
   Caused by: org.jruby.exceptions.LoadError: (LoadError) load error: 
hbase_constants -- java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
   Caused by: java.lang.NoSuchMethodError: 
org.jcodings.Encoding.caseMap(Lorg/jcodings/IntHolder;[BLorg/jcodings/IntHolder;I[BII)I
at 
org.apache.hadoop.hbase.client.TestShell.testRunShellTests(TestShell.java:40)
   ```
   
   The httpclient update removes any '//' from our encoded REST URLs, which
breaks our URL parsing.
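The class of problem described above can be shown with a small self-contained helper. This is my own illustrative code, not httpclient internals: a normalizer that drops empty path segments collapses "//" in a URL path, which breaks any parser that relies on the empty segment being preserved.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: dropping empty path segments is what turns "/a//b"
// into "/a/b", so an encoded REST URL that deliberately carries "//"
// no longer round-trips through such a normalizer.
class PathNormalizer {
  static String collapseEmptySegments(String path) {
    List<String> kept = new ArrayList<>();
    for (String segment : path.split("/")) {
      if (!segment.isEmpty()) { // "//" yields an empty segment; it is dropped
        kept.add(segment);
      }
    }
    return "/" + String.join("/", kept);
  }
}
```

A parser expecting the original segment count would see one fewer segment after normalization, which matches the parse failure described.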


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] VladRodionov commented on a change in pull request #921: HBASE-22749: Distributed MOB compactions

2020-01-22 Thread GitBox
VladRodionov commented on a change in pull request #921: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/921#discussion_r369786238
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanerChore.java
 ##
 @@ -0,0 +1,310 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.mob;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+
+/**
+ * The class MobFileCleanerChore runs the cleaner regularly to remove expired
+ * and obsolete (no longer referenced) mob files.
+ */
+@SuppressWarnings("deprecation")
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+  
+  static {
+Configuration.addDeprecation(MobConstants.DEPRECATED_MOB_CLEANER_PERIOD, 
+  MobConstants.MOB_CLEANER_PERIOD);
+  }
+
+  public MobFileCleanerChore(HMaster master) {
+super(master.getServerName() + "-ExpiredMobFileCleanerChore", master,
+master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+  MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+  MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+TimeUnit.SECONDS);
+this.master = master;
+cleaner = new ExpiredMobFileCleaner();
+cleaner.setConf(master.getConfiguration());
+checkObsoleteConfigurations();
+  }
+
+  private void checkObsoleteConfigurations() {
+Configuration conf = master.getConfiguration();
+
+    if (conf.get("hbase.mob.compaction.mergeable.threshold") != null) {
+      LOG.warn("'hbase.mob.compaction.mergeable.threshold' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.delfile.max.count") != null) {
+      LOG.warn("'hbase.mob.delfile.max.count' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.threads.max") != null) {
+      LOG.warn("'hbase.mob.compaction.threads.max' is obsolete and not used anymore.");
+    }
+    if (conf.get("hbase.mob.compaction.batch.size") != null) {
+      LOG.warn("'hbase.mob.compaction.batch.size' is obsolete and not used anymore.");
+    }
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+this.master = null;
+  }
+
+  @Override
+  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION",
+  justification = "Intentional")
+
+  protected void chore() {
+TableDescriptors htds = master.getTableDescriptors();
+
+Map<String, TableDescriptor> map = null;
+try {
+  map = htds.getAll();
+} catch (IOException e) {
+  LOG.error("M
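The static initializer in `MobFileCleanerChore` above registers the old period key with Hadoop's `Configuration.addDeprecation`, so values set under the deprecated name are still honored under the new one. The sketch below is a toy, Hadoop-free model of that redirection, using hypothetical key names; it is not Hadoop's actual implementation, which also handles warnings and multiple replacement keys.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for Hadoop Configuration's key-deprecation handling.
// Illustration only: a value written under a deprecated key stays
// visible when read back under the replacement key, and vice versa.
public class DeprecationDemo {
    private static final Map<String, String> DEPRECATIONS = new HashMap<>();
    private final Map<String, String> props = new HashMap<>();

    static void addDeprecation(String oldKey, String newKey) {
        DEPRECATIONS.put(oldKey, newKey);
    }

    void set(String key, String value) {
        // Writes to a deprecated key are redirected to its replacement.
        props.put(DEPRECATIONS.getOrDefault(key, key), value);
    }

    String get(String key, String defaultValue) {
        // Reads of a deprecated key fall through to the replacement key.
        return props.getOrDefault(DEPRECATIONS.getOrDefault(key, key), defaultValue);
    }

    public static void main(String[] args) {
        // Hypothetical key names, for illustration only.
        addDeprecation("hbase.mob.cleaner.period.deprecated", "hbase.mob.cleaner.period");
        DeprecationDemo conf = new DeprecationDemo();
        conf.set("hbase.mob.cleaner.period.deprecated", "86400");
        System.out.println(conf.get("hbase.mob.cleaner.period", "0")); // prints 86400
    }
}
```

This is why the chore constructor can read `MobConstants.MOB_CLEANER_PERIOD` directly: clusters still configuring the deprecated key keep working without extra lookup logic.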

[GitHub] [hbase-operator-tools] joshelser commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.

2020-01-22 Thread GitBox
joshelser commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.
URL: https://github.com/apache/hbase-operator-tools/pull/45#discussion_r369783729
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/FsRegionsMetaRecoverer.java
 ##
 @@ -70,61 +78,203 @@ public FsRegionsMetaRecoverer(Configuration configuration) throws IOException {
 this.fs = fileSystem;
   }
 
-  private List<Path> getTableRegionsDirs(String table) throws Exception {
+  private List<Path> getTableRegionsDirs(String table) throws IOException {
 String hbaseRoot = this.config.get(HConstants.HBASE_DIR);
 Path tableDir = FSUtils.getTableDir(new Path(hbaseRoot), TableName.valueOf(table));
 return FSUtils.getRegionDirs(fs, tableDir);
   }
 
   public Map<TableName, List<Path>> reportTablesMissingRegions(final List<String> namespacesOrTables)
   throws IOException {
-final Map<TableName, List<Path>> result = new HashMap<>();
-List<TableName> tableNames = MetaTableAccessor.getTableStates(this.conn).keySet().stream()
-  .filter(tableName -> {
-if(namespacesOrTables==null || namespacesOrTables.isEmpty()){
-  return true;
-} else {
-  Optional<String> findings = namespacesOrTables.stream().filter(
-name -> (name.indexOf(":") > 0) ?
-  tableName.equals(TableName.valueOf(name)) :
-  tableName.getNamespaceAsString().equals(name)).findFirst();
-  return findings.isPresent();
-}
-  }).collect(Collectors.toList());
-tableNames.stream().forEach(tableName -> {
-  try {
-result.put(tableName,
-  findMissingRegionsInMETA(tableName.getNameWithNamespaceInclAsString()));
-  } catch (Exception e) {
-LOG.warn("Can't get missing regions from meta", e);
-  }
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.reportTablesRegions(namespacesOrTables, this::findMissingRegionsInMETA);
+  }
+
+  public Map<TableName, List<String>>
+  reportTablesExtraRegions(final List<String> namespacesOrTables) throws IOException {
+InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+return extraChecker.reportTablesRegions(namespacesOrTables, this::findExtraRegionsInMETA);
+  }
+
+  List<Path> findMissingRegionsInMETA(String table) throws IOException {
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.checkRegionsInMETA(table, (regions, dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(dirs, regions, r -> r.getEncodedName(), d -> d.getName());
 });
-return result;
-  }
-
-  List<Path> findMissingRegionsInMETA(String table) throws Exception {
-final List<Path> missingRegions = new ArrayList<>();
-final List<Path> regionsDirs = getTableRegionsDirs(table);
-TableName tableName = TableName.valueOf(table);
-List<RegionInfo> regionInfos = MetaTableAccessor.
-  getTableRegions(this.conn, tableName, false);
-HashSet<String> regionsInMeta = regionInfos.stream().map(info ->
-  info.getEncodedName()).collect(Collectors.toCollection(HashSet::new));
-for(final Path regionDir : regionsDirs){
-  if (!regionsInMeta.contains(regionDir.getName())) {
-LOG.debug(regionDir + "is not in META.");
-missingRegions.add(regionDir);
-  }
-}
-return missingRegions;
   }
 
-  public void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
+  List<String> findExtraRegionsInMETA(String table) throws IOException {
+InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+return extraChecker.checkRegionsInMETA(table, (regions,dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(regions, dirs, d -> d.getName(), r -> r.getEncodedName());
+});
+  }
+
+  void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
 RegionInfo info = HRegionFileSystem.loadRegionInfoFileContent(fs, region);
 MetaTableAccessor.addRegionToMeta(conn, info);
   }
 
-  @Override public void close() throws IOException {
+  List<String> addMissingRegionsInMeta(List<Path> regionsPath) throws IOException {
+List<String> reAddedRegionsEncodedNames = new ArrayList<>();
+for(Path regionPath : regionsPath){
+  this.putRegionInfoFromHdfsInMeta(regionPath);
+  reAddedRegionsEncodedNames.add(regionPath.getName());
+}
+return reAddedRegionsEncodedNames;
+  }
+
+  public Pair, List> addMissingRegionsInMetaForTables(
+  List<String> nameSpaceOrTable) throws IOException {
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.processRegionsMetaCleanup(this::reportTablesMissingRegions,
+  this::addMissingRegionsInMeta, nameSpaceOrTable);
+  }
+
+  public Pair, List> removeExtraRegionsFromMetaForTables(
+List<String> nameSpaceOrTable) throws IOException {
+if(nameSpaceOrTable.size()>0) {
+  InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+  return extraChecker.processRegionsMetaCleanup(thi
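The refactor in this hunk funnels both the "missing regions" check (region dirs on the filesystem with no hbase:meta row) and the "extra regions" check (meta rows with no region dir) through a single complement operation over two lists keyed by encoded region name. The helper below is a hypothetical, self-contained stand-in for the `ListUtils.complement` used in the patch; the real class in the PR may have a different signature.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Hypothetical sketch of the complement logic the patch relies on:
// keep the elements of 'left' whose key does not occur among the keys of 'right'.
public class ComplementDemo {
    static <A, B, K> List<A> complement(List<A> left, List<B> right,
            Function<A, K> leftKey, Function<B, K> rightKey) {
        Set<K> rightKeys = new HashSet<>();
        for (B b : right) {
            rightKeys.add(rightKey.apply(b));
        }
        List<A> out = new ArrayList<>();
        for (A a : left) {
            if (!rightKeys.contains(leftKey.apply(a))) {
                out.add(a); // present on this side only, e.g. a region dir with no meta row
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Encoded region names from region dirs vs rows known to hbase:meta.
        List<String> dirs = List.of("r1", "r2", "r3");
        List<String> meta = List.of("r2");
        // Regions on disk but missing from meta:
        System.out.println(complement(dirs, meta, d -> d, m -> m)); // prints [r1, r3]
        // Regions in meta but with no dir on disk:
        System.out.println(complement(meta, dirs, m -> m, d -> d)); // prints []
    }
}
```

Swapping the argument order flips the direction of the diff, which is exactly why one generic checker can back both `findMissingRegionsInMETA` and `findExtraRegionsInMETA`.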

[GitHub] [hbase-operator-tools] joshelser commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.

2020-01-22 Thread GitBox
joshelser commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.
URL: https://github.com/apache/hbase-operator-tools/pull/45#discussion_r369781637
 
 

 ##
 File path: hbase-hbck2/README.md
 ##
 @@ -148,6 +148,29 @@ Command:
to finish parent and children. This is SLOW, and dangerous so use
selectively. Does not always work.
 
+ extraRegionsInMeta ...
+   Options:
+    -f, --fix    fix meta by removing all extra regions found.
+   Reports regions present on hbase:meta, but with no related
+   directories on the file system. Needs hbase:meta to be online.
+   For each table name passed as parameter, performs diff
+   between regions available in hbase:meta and region dirs on the given
+   file system. Extra regions would get deleted from Meta
+   if passed the --fix option.
+   NOTE: Before deciding on using the "--fix" option, it's worth checking if
+   reported extra regions overlap with existing valid regions.
+   If so, then "extraRegionsInMeta --fix" is indeed the optimal solution.
+   Otherwise, the "assigns" command is the simpler solution, as it recreates
+   region dirs in the filesystem, if missing.
+   An example triggering extra regions report for tables 'table_1'
+   and 'table_2', under default namespace:
+ $ HBCK2 extraRegionsInMeta default:table_1 default:table_2
 
 Review comment:
   Ok! Just double checking.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.

2020-01-22 Thread GitBox
wchevreuil commented on a change in pull request #45: HBASE-23371 [HBCK2] Provide client side method for removing ghost regions in meta.
URL: https://github.com/apache/hbase-operator-tools/pull/45#discussion_r369770407
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/FsRegionsMetaRecoverer.java
 ##
 @@ -70,61 +78,203 @@ public FsRegionsMetaRecoverer(Configuration configuration) throws IOException {
 this.fs = fileSystem;
   }
 
-  private List<Path> getTableRegionsDirs(String table) throws Exception {
+  private List<Path> getTableRegionsDirs(String table) throws IOException {
 String hbaseRoot = this.config.get(HConstants.HBASE_DIR);
 Path tableDir = FSUtils.getTableDir(new Path(hbaseRoot), TableName.valueOf(table));
 return FSUtils.getRegionDirs(fs, tableDir);
   }
 
   public Map<TableName, List<Path>> reportTablesMissingRegions(final List<String> namespacesOrTables)
   throws IOException {
-final Map<TableName, List<Path>> result = new HashMap<>();
-List<TableName> tableNames = MetaTableAccessor.getTableStates(this.conn).keySet().stream()
-  .filter(tableName -> {
-if(namespacesOrTables==null || namespacesOrTables.isEmpty()){
-  return true;
-} else {
-  Optional<String> findings = namespacesOrTables.stream().filter(
-name -> (name.indexOf(":") > 0) ?
-  tableName.equals(TableName.valueOf(name)) :
-  tableName.getNamespaceAsString().equals(name)).findFirst();
-  return findings.isPresent();
-}
-  }).collect(Collectors.toList());
-tableNames.stream().forEach(tableName -> {
-  try {
-result.put(tableName,
-  findMissingRegionsInMETA(tableName.getNameWithNamespaceInclAsString()));
-  } catch (Exception e) {
-LOG.warn("Can't get missing regions from meta", e);
-  }
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.reportTablesRegions(namespacesOrTables, this::findMissingRegionsInMETA);
+  }
+
+  public Map<TableName, List<String>>
+  reportTablesExtraRegions(final List<String> namespacesOrTables) throws IOException {
+InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+return extraChecker.reportTablesRegions(namespacesOrTables, this::findExtraRegionsInMETA);
+  }
+
+  List<Path> findMissingRegionsInMETA(String table) throws IOException {
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.checkRegionsInMETA(table, (regions, dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(dirs, regions, r -> r.getEncodedName(), d -> d.getName());
 });
-return result;
-  }
-
-  List<Path> findMissingRegionsInMETA(String table) throws Exception {
-final List<Path> missingRegions = new ArrayList<>();
-final List<Path> regionsDirs = getTableRegionsDirs(table);
-TableName tableName = TableName.valueOf(table);
-List<RegionInfo> regionInfos = MetaTableAccessor.
-  getTableRegions(this.conn, tableName, false);
-HashSet<String> regionsInMeta = regionInfos.stream().map(info ->
-  info.getEncodedName()).collect(Collectors.toCollection(HashSet::new));
-for(final Path regionDir : regionsDirs){
-  if (!regionsInMeta.contains(regionDir.getName())) {
-LOG.debug(regionDir + "is not in META.");
-missingRegions.add(regionDir);
-  }
-}
-return missingRegions;
   }
 
-  public void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
+  List<String> findExtraRegionsInMETA(String table) throws IOException {
+InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+return extraChecker.checkRegionsInMETA(table, (regions,dirs) -> {
+  ListUtils utils = new ListUtils<>();
+  return utils.complement(regions, dirs, d -> d.getName(), r -> r.getEncodedName());
+});
+  }
+
+  void putRegionInfoFromHdfsInMeta(Path region) throws IOException {
 RegionInfo info = HRegionFileSystem.loadRegionInfoFileContent(fs, region);
 MetaTableAccessor.addRegionToMeta(conn, info);
   }
 
-  @Override public void close() throws IOException {
+  List<String> addMissingRegionsInMeta(List<Path> regionsPath) throws IOException {
+List<String> reAddedRegionsEncodedNames = new ArrayList<>();
+for(Path regionPath : regionsPath){
+  this.putRegionInfoFromHdfsInMeta(regionPath);
+  reAddedRegionsEncodedNames.add(regionPath.getName());
+}
+return reAddedRegionsEncodedNames;
+  }
+
+  public Pair, List> addMissingRegionsInMetaForTables(
+  List<String> nameSpaceOrTable) throws IOException {
+InternalMetaChecker<Path> missingChecker = new InternalMetaChecker<>();
+return missingChecker.processRegionsMetaCleanup(this::reportTablesMissingRegions,
+  this::addMissingRegionsInMeta, nameSpaceOrTable);
+  }
+
+  public Pair, List> removeExtraRegionsFromMetaForTables(
+List<String> nameSpaceOrTable) throws IOException {
+if(nameSpaceOrTable.size()>0) {
+  InternalMetaChecker<String> extraChecker = new InternalMetaChecker<>();
+  return extraChecker.processRegionsMetaCleanup(th

  1   2   >