[hbase] branch branch-1.4 updated: HBASE-23337 Release scripts should rely on maven for deploy. (#887)

2019-12-03 Thread busbey
This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-1.4
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.4 by this push:
 new 75e044f  HBASE-23337 Release scripts should rely on maven for deploy. 
(#887)
75e044f is described below

commit 75e044f2ad685a9a6a472c34bb7361a9b602e248
Author: Sean Busbey 
AuthorDate: Mon Dec 2 06:39:24 2019 -0600

HBASE-23337 Release scripts should rely on maven for deploy. (#887)

- switch to nexus-staging-maven-plugin for asf-release
- cleaned up some tabs in the root pom

(differs from master because there are no release scripts here.)

Signed-off-by: stack 
(cherry picked from commit 97e01070001ef81558b4dae3a3610d0c73651cb9)
---
 pom.xml | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/pom.xml b/pom.xml
index 5206916..92fd177 100644
--- a/pom.xml
+++ b/pom.xml
@@ -2163,6 +2163,28 @@
 ${hbase-surefire.cygwin-argLine}
   
 
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>nexus-staging-maven-plugin</artifactId>
+            <version>1.6.8</version>
+            <extensions>true</extensions>
+            <configuration>
+              <nexusUrl>https://repository.apache.org/</nexusUrl>
+              <serverId>apache.releases.https</serverId>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+
     <profile>
       <id>release</id>



[hbase] branch branch-1 updated: HBASE-23337 Release scripts should rely on maven for deploy. (#887)

2019-12-03 Thread busbey
This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1 by this push:
 new af2ac03  HBASE-23337 Release scripts should rely on maven for deploy. 
(#887)
af2ac03 is described below

commit af2ac03e3a1656cdd5ef9de05667c1f71406b35f
Author: Sean Busbey 
AuthorDate: Mon Dec 2 06:39:24 2019 -0600

HBASE-23337 Release scripts should rely on maven for deploy. (#887)

- switch to nexus-staging-maven-plugin for asf-release
- cleaned up some tabs in the root pom

(differs from master because there are no release scripts here.)

Signed-off-by: stack 
(cherry picked from commit 97e01070001ef81558b4dae3a3610d0c73651cb9)
---
 pom.xml | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/pom.xml b/pom.xml
index 92c435d..ec70ba8 100644
--- a/pom.xml
+++ b/pom.xml
@@ -2157,6 +2157,28 @@
 ${hbase-surefire.cygwin-argLine}
   
 
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>nexus-staging-maven-plugin</artifactId>
+            <version>1.6.8</version>
+            <extensions>true</extensions>
+            <configuration>
+              <nexusUrl>https://repository.apache.org/</nexusUrl>
+              <serverId>apache.releases.https</serverId>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+
     <profile>
       <id>release</id>



[hbase] branch branch-2.1 updated: HBASE-23337 Release scripts should rely on maven for deploy. (#887)

2019-12-03 Thread busbey
This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new afcde36  HBASE-23337 Release scripts should rely on maven for deploy. 
(#887)
afcde36 is described below

commit afcde366eb62bdedf0c3afde28f0460ae8760096
Author: Sean Busbey 
AuthorDate: Mon Dec 2 06:39:24 2019 -0600

HBASE-23337 Release scripts should rely on maven for deploy. (#887)

- switch to nexus-staging-maven-plugin for asf-release
- cleaned up some tabs in the root pom

(differs from master because there are no release scripts here.)

Signed-off-by: stack 
(cherry picked from commit 97e01070001ef81558b4dae3a3610d0c73651cb9)
---
 pom.xml | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/pom.xml b/pom.xml
index db40fa7..0909884 100755
--- a/pom.xml
+++ b/pom.xml
@@ -2355,6 +2355,28 @@
 ${hbase-surefire.cygwin-argLine}
   
 
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>nexus-staging-maven-plugin</artifactId>
+            <version>1.6.8</version>
+            <extensions>true</extensions>
+            <configuration>
+              <nexusUrl>https://repository.apache.org/</nexusUrl>
+              <serverId>apache.releases.https</serverId>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+
     <profile>
       <id>release</id>



[hbase] branch branch-2.2 updated: HBASE-23337 Release scripts should rely on maven for deploy. (#887)

2019-12-03 Thread busbey
This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new f2370f7  HBASE-23337 Release scripts should rely on maven for deploy. 
(#887)
f2370f7 is described below

commit f2370f75e3c487ce3a0dc28a259471b122ee6494
Author: Sean Busbey 
AuthorDate: Mon Dec 2 06:39:24 2019 -0600

HBASE-23337 Release scripts should rely on maven for deploy. (#887)

- switch to nexus-staging-maven-plugin for asf-release
- cleaned up some tabs in the root pom

(differs from master because there are no release scripts here.)

Signed-off-by: stack 
(cherry picked from commit 97e01070001ef81558b4dae3a3610d0c73651cb9)
---
 pom.xml | 24 +++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 6bbe439..b51f2d1 100755
--- a/pom.xml
+++ b/pom.xml
@@ -2349,6 +2349,28 @@
 ${hbase-surefire.cygwin-argLine}
   
 
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>nexus-staging-maven-plugin</artifactId>
+            <version>1.6.8</version>
+            <extensions>true</extensions>
+            <configuration>
+              <nexusUrl>https://repository.apache.org/</nexusUrl>
+              <serverId>apache.releases.https</serverId>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+
     <profile>
       <id>release</id>
@@ -2597,7 +2619,7 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-auth</artifactId>
-      	<version>${hadoop-two.version}</version>
+      <version>${hadoop-two.version}</version>
       <exclusions>
         <exclusion>
           <groupId>com.google.guava</groupId>



[hbase] branch branch-2.1 updated: HBASE-23345 Table need to replication unless all of cfs are excluded

2019-12-03 Thread zghao
This is an automated email from the ASF dual-hosted git repository.

zghao pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new fcc0071  HBASE-23345 Table need to replication unless all of cfs are 
excluded
fcc0071 is described below

commit fcc0071cda2da34a140f1de03f03855205f503a1
Author: ddupg 
AuthorDate: Thu Nov 28 18:57:56 2019 +0800

HBASE-23345 Table need to replication unless all of cfs are excluded

Signed-off-by: Guanghao Zhang 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  25 ++-
 .../replication/TestReplicationPeerConfig.java | 202 ++
 .../hadoop/hbase/replication/ReplicationUtils.java |  42 +---
 .../hbase/replication/TestReplicationUtil.java | 235 -
 .../master/replication/ModifyPeerProcedure.java|   7 +-
 .../master/replication/ReplicationPeerManager.java |   2 +-
 .../replication/UpdatePeerConfigProcedure.java |   6 +-
 .../NamespaceTableCfWALEntryFilter.java|   2 +-
 .../org/apache/hadoop/hbase/util/HBaseFsck.java|   7 +-
 .../TestReplicationWALEntryFilters.java| 116 +-
 10 files changed, 287 insertions(+), 357 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index e0d9a4c..7c0f115 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -366,22 +366,31 @@ public class ReplicationPeerConfig {
    * @return true if the table need replicate to the peer cluster
    */
   public boolean needToReplicate(TableName table) {
+    String namespace = table.getNamespaceAsString();
     if (replicateAllUserTables) {
-      if (excludeNamespaces != null && excludeNamespaces.contains(table.getNamespaceAsString())) {
+      // replicate all user tables, but filter by exclude namespaces and table-cfs config
+      if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
         return false;
       }
-      if (excludeTableCFsMap != null && excludeTableCFsMap.containsKey(table)) {
-        return false;
+      // trap here, must check existence first since HashMap allows null value.
+      if (excludeTableCFsMap == null || !excludeTableCFsMap.containsKey(table)) {
+        return true;
       }
-      return true;
+      Collection<String> cfs = excludeTableCFsMap.get(table);
+      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // otherwise, we may still need to replicate the table but filter out some families.
+      return cfs != null && !cfs.isEmpty();
     } else {
-      if (namespaces != null && namespaces.contains(table.getNamespaceAsString())) {
-        return true;
+      // Not replicate all user tables, so filter by namespaces and table-cfs config
+      if (namespaces == null && tableCFsMap == null) {
+        return false;
       }
-      if (tableCFsMap != null && tableCFsMap.containsKey(table)) {
+      // First filter by namespaces config
+      // If table's namespace in peer config, all the tables data are applicable for replication
+      if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return false;
+      return tableCFsMap != null && tableCFsMap.containsKey(table);
     }
   }
 }
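The decision table in the patched `needToReplicate` is subtle, especially the exclude-table-cfs branch. The following is a hedged, self-contained sketch of that logic, not the HBase API: `PeerFilterSketch` is a hypothetical class that uses plain `String` keys instead of `TableName` so it runs standalone, but mirrors the branching above line for line.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical, simplified stand-in for ReplicationPeerConfig.needToReplicate.
public class PeerFilterSketch {
  boolean replicateAllUserTables = true;
  Set<String> excludeNamespaces;                       // null means "exclude none"
  Map<String, Collection<String>> excludeTableCFsMap;  // a null value means "whole table excluded"
  Set<String> namespaces;
  Map<String, Collection<String>> tableCFsMap;

  boolean needToReplicate(String namespace, String table) {
    if (replicateAllUserTables) {
      if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
        return false;
      }
      // check existence first: HashMap allows null values, so get() == null is ambiguous
      if (excludeTableCFsMap == null || !excludeTableCFsMap.containsKey(table)) {
        return true;
      }
      Collection<String> cfs = excludeTableCFsMap.get(table);
      // null or empty: every family is excluded, skip the table entirely;
      // non-empty: only some families are excluded, the table still needs replication
      return cfs != null && !cfs.isEmpty();
    } else {
      if (namespaces == null && tableCFsMap == null) {
        return false;
      }
      if (namespaces != null && namespaces.contains(namespace)) {
        return true;
      }
      return tableCFsMap != null && tableCFsMap.containsKey(table);
    }
  }

  public static void main(String[] args) {
    PeerFilterSketch p = new PeerFilterSketch();
    p.excludeTableCFsMap = new HashMap<>();
    p.excludeTableCFsMap.put("t1", null);           // all of t1's families excluded
    p.excludeTableCFsMap.put("t2", List.of("cf1")); // only cf1 excluded
    System.out.println(p.needToReplicate("default", "t1")); // false
    System.out.println(p.needToReplicate("default", "t2")); // true
    System.out.println(p.needToReplicate("default", "t3")); // true
  }
}
```

This is exactly the behavior the commit subject describes: a table listed in the exclude map still needs replication unless all of its column families are excluded (a null or empty collection).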
diff --git a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index 881ef45..d67a3f8 100644
--- a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,10 +17,17 @@
  */
 package org.apache.hadoop.hbase.replication;
 
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.testclassification.ClientTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.util.BuilderStyleTest;
+import org.junit.Assert;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -32,6 +39,9 @@ public class TestReplicationPeerConfig {
   public static final HBaseClassTestRule CLASS_RULE =
   HBaseClassTestRule.forClass(TestReplicationPeerConfig.class);
 
+  private static TableName TABLE_A = TableName.valueOf("replication", "testA");
+  private static TableName TABLE_B = 

[hbase] branch branch-2.1 updated: HBASE-23356 When construct StoreScanner throw exceptions it is possible to left some KeyValueScanner not closed. (#891)

2019-12-03 Thread binlijin
This is an automated email from the ASF dual-hosted git repository.

binlijin pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 0cd1fa9  HBASE-23356 When construct StoreScanner throw exceptions it 
is possible to left some KeyValueScanner not closed. (#891)
0cd1fa9 is described below

commit 0cd1fa97116f37f3a8284d2e64711e693842e878
Author: binlijin 
AuthorDate: Wed Dec 4 10:34:07 2019 +0800

HBASE-23356 When construct StoreScanner throw exceptions it is possible to 
left some KeyValueScanner not closed. (#891)

Signed-off-by: GuangxuCheng  
---
 .../apache/hadoop/hbase/regionserver/HStore.java   | 60 +++---
 .../hadoop/hbase/regionserver/StoreScanner.java|  7 ++-
 2 files changed, 47 insertions(+), 20 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
index 7ab6c80..9bc46bc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
@@ -1244,18 +1244,34 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
       this.lock.readLock().unlock();
     }
 
-    // First the store file scanners
+    try {
+      // First the store file scanners
+
+      // TODO this used to get the store files in descending order,
+      // but now we get them in ascending order, which I think is
+      // actually more correct, since memstore get put at the end.
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(storeFilesToScan, cacheBlocks, usePread, isCompaction, false,
+          matcher, readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      scanners.addAll(memStoreScanners);
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
+    }
+  }
 
-    // TODO this used to get the store files in descending order,
-    // but now we get them in ascending order, which I think is
-    // actually more correct, since memstore get put at the end.
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    scanners.addAll(memStoreScanners);
-    return scanners;
+  private static void clearAndClose(List<KeyValueScanner> scanners) {
+    if (scanners == null) {
+      return;
+    }
+    for (KeyValueScanner s : scanners) {
+      s.close();
+    }
+    scanners.clear();
   }
 
   /**
@@ -1309,15 +1325,21 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
         this.lock.readLock().unlock();
       }
     }
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(files,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    if (memStoreScanners != null) {
-      scanners.addAll(memStoreScanners);
+    try {
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(files, cacheBlocks, usePread, isCompaction, false, matcher,
+          readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      if (memStoreScanners != null) {
+        scanners.addAll(memStoreScanners);
+      }
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
     }
-    return scanners;
   }
 
   /**
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index 05d146f..51b6597 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -236,9 +236,10 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
 
     store.addChangedReaderObserver(this);
 
+    List<KeyValueScanner> scanners = null;
     try {
       // Pass columns to try to filter out unnecessary StoreFiles.
-      List<KeyValueScanner> scanners = selectScannersFrom(store,
+      scanners = selectScannersFrom(store,
         store.getScanners(cacheBlocks, scanUsePread, false, matcher, scan.getStartRow(),
           scan.includeStartRow(), scan.getStopRow(), 
[hbase] branch branch-2.2 updated: HBASE-23356 When construct StoreScanner throw exceptions it is possible to left some KeyValueScanner not closed. (#891)

2019-12-03 Thread binlijin
This is an automated email from the ASF dual-hosted git repository.

binlijin pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new d4a179f  HBASE-23356 When construct StoreScanner throw exceptions it 
is possible to left some KeyValueScanner not closed. (#891)
d4a179f is described below

commit d4a179f905dcd79c9f422aa855204f895f68dcba
Author: binlijin 
AuthorDate: Wed Dec 4 10:34:07 2019 +0800

HBASE-23356 When construct StoreScanner throw exceptions it is possible to 
left some KeyValueScanner not closed. (#891)

Signed-off-by: GuangxuCheng  
---
 .../apache/hadoop/hbase/regionserver/HStore.java   | 60 +++---
 .../hadoop/hbase/regionserver/StoreScanner.java|  7 ++-
 2 files changed, 47 insertions(+), 20 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
index 33ebcac..1af777b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
@@ -1266,18 +1266,34 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
       this.lock.readLock().unlock();
     }
 
-    // First the store file scanners
+    try {
+      // First the store file scanners
+
+      // TODO this used to get the store files in descending order,
+      // but now we get them in ascending order, which I think is
+      // actually more correct, since memstore get put at the end.
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(storeFilesToScan, cacheBlocks, usePread, isCompaction, false,
+          matcher, readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      scanners.addAll(memStoreScanners);
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
+    }
+  }
 
-    // TODO this used to get the store files in descending order,
-    // but now we get them in ascending order, which I think is
-    // actually more correct, since memstore get put at the end.
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    scanners.addAll(memStoreScanners);
-    return scanners;
+  private static void clearAndClose(List<KeyValueScanner> scanners) {
+    if (scanners == null) {
+      return;
+    }
+    for (KeyValueScanner s : scanners) {
+      s.close();
+    }
+    scanners.clear();
   }
 
   /**
@@ -1331,15 +1347,21 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
         this.lock.readLock().unlock();
       }
     }
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(files,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    if (memStoreScanners != null) {
-      scanners.addAll(memStoreScanners);
+    try {
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(files, cacheBlocks, usePread, isCompaction, false, matcher,
+          readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      if (memStoreScanners != null) {
+        scanners.addAll(memStoreScanners);
+      }
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
     }
-    return scanners;
   }
 
   /**
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index 67c01fa..725d8e6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -236,9 +236,10 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
 
     store.addChangedReaderObserver(this);
 
+    List<KeyValueScanner> scanners = null;
     try {
       // Pass columns to try to filter out unnecessary StoreFiles.
-      List<KeyValueScanner> scanners = selectScannersFrom(store,
+      scanners = selectScannersFrom(store,
         store.getScanners(cacheBlocks, scanUsePread, false, matcher, scan.getStartRow(),
           scan.includeStartRow(), scan.getStopRow(), 

[hbase] branch branch-2 updated: HBASE-23356 When construct StoreScanner throw exceptions it is possible to left some KeyValueScanner not closed. (#891)

2019-12-03 Thread binlijin
This is an automated email from the ASF dual-hosted git repository.

binlijin pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 2e8ea9e  HBASE-23356 When construct StoreScanner throw exceptions it 
is possible to left some KeyValueScanner not closed. (#891)
2e8ea9e is described below

commit 2e8ea9e2720b3e2bf114d12c5f68b1b5f9ff2b34
Author: binlijin 
AuthorDate: Wed Dec 4 10:34:07 2019 +0800

HBASE-23356 When construct StoreScanner throw exceptions it is possible to 
left some KeyValueScanner not closed. (#891)

Signed-off-by: GuangxuCheng  
---
 .../apache/hadoop/hbase/regionserver/HStore.java   | 60 +++---
 .../hadoop/hbase/regionserver/StoreScanner.java|  7 ++-
 2 files changed, 47 insertions(+), 20 deletions(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
index 8406ec8..c0bd0dd 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
@@ -1270,18 +1270,34 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
       this.lock.readLock().unlock();
     }
 
-    // First the store file scanners
+    try {
+      // First the store file scanners
+
+      // TODO this used to get the store files in descending order,
+      // but now we get them in ascending order, which I think is
+      // actually more correct, since memstore get put at the end.
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(storeFilesToScan, cacheBlocks, usePread, isCompaction, false,
+          matcher, readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      scanners.addAll(memStoreScanners);
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
+    }
+  }
 
-    // TODO this used to get the store files in descending order,
-    // but now we get them in ascending order, which I think is
-    // actually more correct, since memstore get put at the end.
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    scanners.addAll(memStoreScanners);
-    return scanners;
+  private static void clearAndClose(List<KeyValueScanner> scanners) {
+    if (scanners == null) {
+      return;
+    }
+    for (KeyValueScanner s : scanners) {
+      s.close();
+    }
+    scanners.clear();
   }
 
   /**
@@ -1335,15 +1351,21 @@ public class HStore implements Store, HeapSize, StoreConfigInformation, Propagat
         this.lock.readLock().unlock();
       }
     }
-    List<StoreFileScanner> sfScanners = StoreFileScanner.getScannersForStoreFiles(files,
-      cacheBlocks, usePread, isCompaction, false, matcher, readPt);
-    List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
-    scanners.addAll(sfScanners);
-    // Then the memstore scanners
-    if (memStoreScanners != null) {
-      scanners.addAll(memStoreScanners);
+    try {
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+        .getScannersForStoreFiles(files, cacheBlocks, usePread, isCompaction, false, matcher,
+          readPt);
+      List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
+      scanners.addAll(sfScanners);
+      // Then the memstore scanners
+      if (memStoreScanners != null) {
+        scanners.addAll(memStoreScanners);
+      }
+      return scanners;
+    } catch (Throwable t) {
+      clearAndClose(memStoreScanners);
+      throw t instanceof IOException ? (IOException) t : new IOException(t);
     }
-    return scanners;
   }
 
   /**
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
index 67c01fa..725d8e6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -236,9 +236,10 @@ public class StoreScanner extends NonReversedNonLazyKeyValueScanner
 
     store.addChangedReaderObserver(this);
 
+    List<KeyValueScanner> scanners = null;
     try {
       // Pass columns to try to filter out unnecessary StoreFiles.
-      List<KeyValueScanner> scanners = selectScannersFrom(store,
+      scanners = selectScannersFrom(store,
         store.getScanners(cacheBlocks, scanUsePread, false, matcher, scan.getStartRow(),
           scan.includeStartRow(), scan.getStopRow(), 

[hbase] branch master updated (0f166ed -> 580d65e)

2019-12-03 Thread binlijin
This is an automated email from the ASF dual-hosted git repository.

binlijin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase.git.


from 0f166ed  HBASE-22096 /storeFile.jsp shows CorruptHFileException when 
the storeFile is a reference file (#888)
 add 580d65e  HBASE-23356 When construct StoreScanner throw exceptions it 
is possible to left some KeyValueScanner not closed. (#891)

No new revisions were added by this update.

Summary of changes:
 .../apache/hadoop/hbase/regionserver/HStore.java   | 60 +++---
 .../hadoop/hbase/regionserver/StoreScanner.java|  7 ++-
 2 files changed, 47 insertions(+), 20 deletions(-)
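The shape of the HBASE-23356 fix pushed to all of the branches above is a close-on-failure pattern: if assembling the combined scanner list throws partway through, every scanner already handed in must still be closed so that nothing leaks. A hedged, self-contained sketch follows; `Scanner` and `combine` are hypothetical stand-ins for HBase's `KeyValueScanner` and `HStore.getScanners`, not the real API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the close-on-failure pattern from the patch above.
public class ScannerCleanupSketch {
  public interface Scanner {
    void close();
  }

  // Mirrors the patched HStore.getScanners: on any failure while building the
  // list, close and clear the memstore scanners before rethrowing.
  public static List<Scanner> combine(List<Scanner> memStoreScanners, boolean simulateFailure)
      throws IOException {
    try {
      List<Scanner> scanners = new ArrayList<>(memStoreScanners.size() + 1);
      if (simulateFailure) {
        // stands in for StoreFileScanner.getScannersForStoreFiles(...) throwing
        throw new RuntimeException("failed to open store file scanners");
      }
      scanners.addAll(memStoreScanners);
      return scanners;
    } catch (Throwable t) {
      clearAndClose(memStoreScanners);
      // preserve the checked signature while propagating any kind of failure
      throw t instanceof IOException ? (IOException) t : new IOException(t);
    }
  }

  public static void clearAndClose(List<Scanner> scanners) {
    if (scanners == null) {
      return;
    }
    for (Scanner s : scanners) {
      s.close();
    }
    scanners.clear();
  }
}
```

Catching `Throwable` rather than `IOException` alone is the point of the design: even an unexpected `RuntimeException` or `Error` during scanner construction must not leave the already-open scanners unclosed.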



[hbase] branch branch-2.2 updated: HBASE-23345 Table need to replication unless all of cfs are excluded

2019-12-03 Thread zghao
This is an automated email from the ASF dual-hosted git repository.

zghao pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 7ccb6f7  HBASE-23345 Table need to replication unless all of cfs are 
excluded
7ccb6f7 is described below

commit 7ccb6f7141e17cb0712d40a98d5bea9c11083242
Author: ddupg 
AuthorDate: Thu Nov 28 18:57:56 2019 +0800

HBASE-23345 Table need to replication unless all of cfs are excluded

Signed-off-by: Guanghao Zhang 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  25 ++-
 .../replication/TestReplicationPeerConfig.java | 202 ++
 .../hadoop/hbase/replication/ReplicationUtils.java |  38 
 .../hbase/replication/TestReplicationUtil.java | 235 -
 .../master/replication/ModifyPeerProcedure.java|   7 +-
 .../master/replication/ReplicationPeerManager.java |   2 +-
 .../replication/UpdatePeerConfigProcedure.java |   6 +-
 .../NamespaceTableCfWALEntryFilter.java|   2 +-
 .../org/apache/hadoop/hbase/util/HBaseFsck.java|   7 +-
 .../TestReplicationWALEntryFilters.java| 116 +-
 10 files changed, 278 insertions(+), 362 deletions(-)

diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index e0d9a4c..7c0f115 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -366,22 +366,31 @@ public class ReplicationPeerConfig {
    * @return true if the table need replicate to the peer cluster
    */
   public boolean needToReplicate(TableName table) {
+    String namespace = table.getNamespaceAsString();
     if (replicateAllUserTables) {
-      if (excludeNamespaces != null && excludeNamespaces.contains(table.getNamespaceAsString())) {
+      // replicate all user tables, but filter by exclude namespaces and table-cfs config
+      if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
         return false;
       }
-      if (excludeTableCFsMap != null && excludeTableCFsMap.containsKey(table)) {
-        return false;
+      // trap here, must check existence first since HashMap allows null value.
+      if (excludeTableCFsMap == null || !excludeTableCFsMap.containsKey(table)) {
+        return true;
       }
-      return true;
+      Collection<String> cfs = excludeTableCFsMap.get(table);
+      // if cfs is null or empty then we can make sure that we do not need to replicate this table,
+      // otherwise, we may still need to replicate the table but filter out some families.
+      return cfs != null && !cfs.isEmpty();
     } else {
-      if (namespaces != null && namespaces.contains(table.getNamespaceAsString())) {
-        return true;
+      // Not replicate all user tables, so filter by namespaces and table-cfs config
+      if (namespaces == null && tableCFsMap == null) {
+        return false;
       }
-      if (tableCFsMap != null && tableCFsMap.containsKey(table)) {
+      // First filter by namespaces config
+      // If table's namespace in peer config, all the tables data are applicable for replication
+      if (namespaces != null && namespaces.contains(namespace)) {
         return true;
       }
-      return false;
+      return tableCFsMap != null && tableCFsMap.containsKey(table);
     }
   }
 }
diff --git a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index 881ef45..d67a3f8 100644
--- a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,10 +17,17 @@
  */
 package org.apache.hadoop.hbase.replication;
 
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.testclassification.ClientTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.util.BuilderStyleTest;
+import org.junit.Assert;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -32,6 +39,9 @@ public class TestReplicationPeerConfig {
   public static final HBaseClassTestRule CLASS_RULE =
   HBaseClassTestRule.forClass(TestReplicationPeerConfig.class);
 
+  private static TableName TABLE_A = TableName.valueOf("replication", "testA");
+  private static TableName TABLE_B = 

[hbase] branch branch-2 updated: HBASE-23345 Table need to replication unless all of cfs are excluded

2019-12-03 Thread zghao
This is an automated email from the ASF dual-hosted git repository.

zghao pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 997684f  HBASE-23345 Table need to replication unless all of cfs are 
excluded
997684f is described below

commit 997684f24d6bf53ab48cb09237eb5a891037dc91
Author: ddupg 
AuthorDate: Thu Nov 28 18:57:56 2019 +0800

HBASE-23345 Table need to replication unless all of cfs are excluded

Signed-off-by: Guanghao Zhang 
---
 .../hbase/replication/ReplicationPeerConfig.java   |  25 ++-
 .../replication/TestReplicationPeerConfig.java | 202 ++
 .../hadoop/hbase/replication/ReplicationUtils.java |  38 
 .../hbase/replication/TestReplicationUtil.java | 235 -
 .../master/replication/ModifyPeerProcedure.java|   7 +-
 .../master/replication/ReplicationPeerManager.java |   2 +-
 .../replication/UpdatePeerConfigProcedure.java |   6 +-
 .../NamespaceTableCfWALEntryFilter.java|   2 +-
 .../org/apache/hadoop/hbase/util/HBaseFsck.java|   7 +-
 .../TestReplicationWALEntryFilters.java| 116 +-
 10 files changed, 278 insertions(+), 362 deletions(-)

diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
index e0d9a4c..7c0f115 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerConfig.java
@@ -366,22 +366,31 @@ public class ReplicationPeerConfig {
* @return true if the table need replicate to the peer cluster
*/
   public boolean needToReplicate(TableName table) {
+String namespace = table.getNamespaceAsString();
 if (replicateAllUserTables) {
-  if (excludeNamespaces != null && 
excludeNamespaces.contains(table.getNamespaceAsString())) {
+  // replicate all user tables, but filter by exclude namespaces and 
table-cfs config
+  if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
 return false;
   }
-  if (excludeTableCFsMap != null && excludeTableCFsMap.containsKey(table)) 
{
-return false;
+  // trap here, must check existence first since HashMap allows null value.
+  if (excludeTableCFsMap == null || 
!excludeTableCFsMap.containsKey(table)) {
+return true;
   }
-  return true;
+  Collection<String> cfs = excludeTableCFsMap.get(table);
+  // if cfs is null or empty then we can make sure that we do not need to 
replicate this table,
+  // otherwise, we may still need to replicate the table but filter out 
some families.
+  return cfs != null && !cfs.isEmpty();
 } else {
-  if (namespaces != null && 
namespaces.contains(table.getNamespaceAsString())) {
-return true;
+  // Not replicate all user tables, so filter by namespaces and table-cfs 
config
+  if (namespaces == null && tableCFsMap == null) {
+return false;
   }
-  if (tableCFsMap != null && tableCFsMap.containsKey(table)) {
+  // First filter by namespaces config
+  // If table's namespace in peer config, all the tables data are 
applicable for replication
+  if (namespaces != null && namespaces.contains(namespace)) {
 return true;
   }
-  return false;
+  return tableCFsMap != null && tableCFsMap.containsKey(table);
 }
   }
 }
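The control flow of the new needToReplicate() added above can be sketched as a standalone class. This is an illustrative re-implementation using plain String table names in place of HBase's TableName, with hypothetical field names mirroring ReplicationPeerConfig; it is not the actual HBase API.

```java
import java.util.*;

// Sketch of the HBASE-23345 needToReplicate() decision logic.
// A null collection/map means "not configured", as in ReplicationPeerConfig.
public class ReplicationFilterSketch {
    Set<String> namespaces;
    Map<String, Set<String>> tableCFsMap;
    Set<String> excludeNamespaces;
    Map<String, Set<String>> excludeTableCFsMap;
    boolean replicateAllUserTables;

    boolean needToReplicate(String namespace, String table) {
        if (replicateAllUserTables) {
            // replicate all user tables, filtered by the exclude configs
            if (excludeNamespaces != null && excludeNamespaces.contains(namespace)) {
                return false;
            }
            if (excludeTableCFsMap == null || !excludeTableCFsMap.containsKey(table)) {
                return true;
            }
            // a null/empty CF set excludes the whole table; a non-empty set
            // excludes only some families, so the table still replicates
            Set<String> cfs = excludeTableCFsMap.get(table);
            return cfs != null && !cfs.isEmpty();
        } else {
            if (namespaces == null && tableCFsMap == null) {
                return false;
            }
            if (namespaces != null && namespaces.contains(namespace)) {
                return true;
            }
            return tableCFsMap != null && tableCFsMap.containsKey(table);
        }
    }

    public static void main(String[] args) {
        ReplicationFilterSketch cfg = new ReplicationFilterSketch();
        cfg.replicateAllUserTables = true;
        cfg.excludeTableCFsMap = new HashMap<>();
        cfg.excludeTableCFsMap.put("t1", new HashSet<>());                       // whole table excluded
        cfg.excludeTableCFsMap.put("t2", new HashSet<>(Collections.singleton("a"))); // only CF "a" excluded
        System.out.println(cfg.needToReplicate("default", "t1")); // false
        System.out.println(cfg.needToReplicate("default", "t2")); // true: other CFs remain
        System.out.println(cfg.needToReplicate("default", "t3")); // true: not excluded at all
    }
}
```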
diff --git 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
index 881ef45..d67a3f8 100644
--- 
a/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
+++ 
b/hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
@@ -17,10 +17,17 @@
  */
 package org.apache.hadoop.hbase.replication;
 
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
 import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.testclassification.ClientTests;
 import org.apache.hadoop.hbase.testclassification.SmallTests;
 import org.apache.hadoop.hbase.util.BuilderStyleTest;
+import org.junit.Assert;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -32,6 +39,9 @@ public class TestReplicationPeerConfig {
   public static final HBaseClassTestRule CLASS_RULE =
   HBaseClassTestRule.forClass(TestReplicationPeerConfig.class);
 
+  private static TableName TABLE_A = TableName.valueOf("replication", "testA");
+  private static TableName TABLE_B = 

[hbase] branch branch-2.1 updated: HBASE-22096 /storeFile.jsp shows CorruptHFileException when the storeFile is a reference file (addendum)

2019-12-03 Thread brfrn169
This is an automated email from the ASF dual-hosted git repository.

brfrn169 pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
 new 75764fa  HBASE-22096 /storeFile.jsp shows CorruptHFileException when 
the storeFile is a reference file (addendum)
75764fa is described below

commit 75764fa2a9dbb3c9769d7e37ef6479e71e2fb754
Author: Toshihiro Suzuki 
AuthorDate: Wed Dec 4 09:26:15 2019 +0900

HBASE-22096 /storeFile.jsp shows CorruptHFileException when the storeFile 
is a reference file (addendum)

Signed-off-by: Sean Busbey 
---
 .../src/main/resources/hbase-webapps/regionserver/storeFile.jsp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp 
b/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
index b538cb7..5c0a4e1 100644
--- a/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
@@ -54,7 +54,7 @@
  printer.setConf(conf);
  String[] options = {"-s"};
  printer.parseOptions(options);
- StoreFileInfo sfi = new StoreFileInfo(conf, fs, new Path(storeFile), 
true);
+ StoreFileInfo sfi = new StoreFileInfo(conf, fs, new Path(storeFile));
  printer.processFile(sfi.getFileStatus().getPath(), true);
  String text = byteStream.toString();%>
  <%=



[hbase] branch branch-2.2 updated: HBASE-22096 /storeFile.jsp shows CorruptHFileException when the storeFile is a reference file (addendum)

2019-12-03 Thread brfrn169
This is an automated email from the ASF dual-hosted git repository.

brfrn169 pushed a commit to branch branch-2.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.2 by this push:
 new 3b0d223  HBASE-22096 /storeFile.jsp shows CorruptHFileException when 
the storeFile is a reference file (addendum)
3b0d223 is described below

commit 3b0d22375d375abf8208e74b290250db3dfc23d3
Author: Toshihiro Suzuki 
AuthorDate: Wed Dec 4 09:26:15 2019 +0900

HBASE-22096 /storeFile.jsp shows CorruptHFileException when the storeFile 
is a reference file (addendum)

Signed-off-by: Sean Busbey 
---
 .../src/main/resources/hbase-webapps/regionserver/storeFile.jsp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp 
b/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
index b538cb7..5c0a4e1 100644
--- a/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/regionserver/storeFile.jsp
@@ -54,7 +54,7 @@
  printer.setConf(conf);
  String[] options = {"-s"};
  printer.parseOptions(options);
- StoreFileInfo sfi = new StoreFileInfo(conf, fs, new Path(storeFile), 
true);
+ StoreFileInfo sfi = new StoreFileInfo(conf, fs, new Path(storeFile));
  printer.processFile(sfi.getFileStatus().getPath(), true);
  String text = byteStream.toString();%>
  <%=



[hbase] branch branch-2 updated: HBASE-23337 Release scripts should rely on maven for deploy. (#887)

2019-12-03 Thread busbey
This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 0770b07  HBASE-23337 Release scripts should rely on maven for deploy. 
(#887)
0770b07 is described below

commit 0770b0768f913618e78b30d6b913dc1426004610
Author: Sean Busbey 
AuthorDate: Mon Dec 2 06:39:24 2019 -0600

HBASE-23337 Release scripts should rely on maven for deploy. (#887)

- switch to nexus-staging-maven-plugin for asf-release
- cleaned up some tabs in the root pom

(differs from master because there are no release scripts here.)

Signed-off-by: stack 
(cherry picked from commit 97e01070001ef81558b4dae3a3610d0c73651cb9)
---
 pom.xml | 24 +++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index a217cbc..4470ac5 100755
--- a/pom.xml
+++ b/pom.xml
@@ -2308,6 +2308,28 @@
 ${hbase-surefire.cygwin-argLine}
   
 
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.sonatype.plugins</groupId>
+            <artifactId>nexus-staging-maven-plugin</artifactId>
+            <version>1.6.8</version>
+            <extensions>true</extensions>
+            <configuration>
+              <nexusUrl>https://repository.apache.org/</nexusUrl>
+              <serverId>apache.releases.https</serverId>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
 
 
   release
@@ -2556,7 +2578,7 @@
   
 <groupId>org.apache.hadoop</groupId>
 <artifactId>hadoop-auth</artifactId>
-	<version>${hadoop-two.version}</version>
+        <version>${hadoop-two.version}</version>
 <exclusions>
   <exclusion>
     <groupId>com.google.guava</groupId>



[hbase-connectors] branch master updated: HBASE-23295 HBaseContext should use most recent delegation token (#47)

2019-12-03 Thread meszibalu
This is an automated email from the ASF dual-hosted git repository.

meszibalu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new 75e4136  HBASE-23295 HBaseContext should use most recent delegation 
token (#47)
75e4136 is described below

commit 75e41365207408f5b47d5925469a49fd60078b5e
Author: István Adamcsik 
AuthorDate: Tue Dec 3 16:15:50 2019 +0100

HBASE-23295 HBaseContext should use most recent delegation token (#47)

Signed-off-by: Balazs Meszaros 
---
 .../apache/hadoop/hbase/spark/HBaseContext.scala   |  13 +-
 .../hadoop/hbase/spark/TestJavaHBaseContext.java   | 134 ++---
 .../hadoop/hbase/spark/HBaseContextSuite.scala |  11 +-
 3 files changed, 67 insertions(+), 91 deletions(-)

diff --git 
a/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
 
b/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
index e50a3e8..890e67f 100644
--- 
a/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
+++ 
b/spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/HBaseContext.scala
@@ -65,13 +65,11 @@ class HBaseContext(@transient val sc: SparkContext,
val tmpHdfsConfgFile: String = null)
   extends Serializable with Logging {
 
-  @transient var credentials = 
UserGroupInformation.getCurrentUser().getCredentials()
   @transient var tmpHdfsConfiguration:Configuration = config
   @transient var appliedCredentials = false
   @transient val job = Job.getInstance(config)
   TableMapReduceUtil.initCredentials(job)
   val broadcastedConf = sc.broadcast(new SerializableWritable(config))
-  val credentialsConf = sc.broadcast(new 
SerializableWritable(job.getCredentials))
 
   LatestHBaseContextCache.latest = this
 
@@ -233,21 +231,12 @@ class HBaseContext(@transient val sc: SparkContext,
   }
 
   def applyCreds[T] (){
-credentials = UserGroupInformation.getCurrentUser().getCredentials()
-
-if (log.isDebugEnabled) {
-  logDebug("appliedCredentials:" + appliedCredentials + ",credentials:" + 
credentials)
-}
-
-if (!appliedCredentials && credentials != null) {
+if (!appliedCredentials) {
   appliedCredentials = true
 
   @transient val ugi = UserGroupInformation.getCurrentUser
-  ugi.addCredentials(credentials)
   // specify that this is a proxy user
   ugi.setAuthenticationMethod(AuthenticationMethod.PROXY)
-
-  ugi.addCredentials(credentialsConf.value.value)
 }
   }
 
diff --git 
a/spark/hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
 
b/spark/hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
index 4134ee6..865a3a3 100644
--- 
a/spark/hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
+++ 
b/spark/hbase-spark/src/test/java/org/apache/hadoop/hbase/spark/TestJavaHBaseContext.java
@@ -52,8 +52,10 @@ import org.apache.spark.api.java.JavaRDD;
 import org.apache.spark.api.java.JavaSparkContext;
 import org.apache.spark.api.java.function.Function;
 import org.junit.After;
+import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.Before;
+import org.junit.BeforeClass;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -70,11 +72,10 @@ public class TestJavaHBaseContext implements Serializable {
   public static final HBaseClassTestRule TIMEOUT =
   HBaseClassTestRule.forClass(TestJavaHBaseContext.class);
 
-  private transient JavaSparkContext jsc;
-  HBaseTestingUtility htu;
-  protected static final Logger LOG = 
LoggerFactory.getLogger(TestJavaHBaseContext.class);
-
-
+  private static transient JavaSparkContext JSC;
+  private static HBaseTestingUtility TEST_UTIL;
+  private static JavaHBaseContext HBASE_CONTEXT;
+  private static final Logger LOG = 
LoggerFactory.getLogger(TestJavaHBaseContext.class);
 
   byte[] tableName = Bytes.toBytes("t1");
   byte[] columnFamily = Bytes.toBytes("c");
@@ -82,56 +83,57 @@ public class TestJavaHBaseContext implements Serializable {
   String columnFamilyStr = Bytes.toString(columnFamily);
   String columnFamilyStr1 = Bytes.toString(columnFamily1);
 
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
 
-  @Before
-  public void setUp() {
-jsc = new JavaSparkContext("local", "JavaHBaseContextSuite");
+JSC = new JavaSparkContext("local", "JavaHBaseContextSuite");
+TEST_UTIL = new HBaseTestingUtility();
+Configuration conf = TEST_UTIL.getConfiguration();
 
-File tempDir = Files.createTempDir();
-tempDir.deleteOnExit();
+HBASE_CONTEXT = new JavaHBaseContext(JSC, conf);
 
-htu = new HBaseTestingUtility();
-try {
-  LOG.info("cleaning up test dir");
+LOG.info("cleaning up test dir");
 
-  htu.cleanupTestDir();
+

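The HBASE-23295 change above stops broadcasting a Credentials snapshot captured when the HBaseContext is constructed, because that snapshot goes stale once delegation tokens are renewed; applyCreds now works with the current user at call time. The pattern can be sketched without Hadoop on the classpath; the class and field names below are illustrative, not the HBaseContext API.

```java
import java.util.function.Supplier;

// Holds the "current" token, standing in for the current user's credentials.
class TokenStore {
    static volatile String currentToken = "token-v1";
}

// Captures a snapshot at construction time -- stays "token-v1" forever,
// which is the bug the commit removes.
class StaleContext {
    final String token = TokenStore.currentToken;
    String applyCreds() { return token; }
}

// Resolves the token at the moment it is applied, so renewals are picked up.
class FreshContext {
    final Supplier<String> token = () -> TokenStore.currentToken;
    String applyCreds() { return token.get(); }
}

public class DelegationTokenSketch {
    public static void main(String[] args) {
        StaleContext stale = new StaleContext();
        FreshContext fresh = new FreshContext();
        TokenStore.currentToken = "token-v2"; // a token renewal happens
        System.out.println(stale.applyCreds()); // token-v1 (stale snapshot)
        System.out.println(fresh.applyCreds()); // token-v2 (most recent)
    }
}
```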
[hbase-site] branch asf-site updated: INFRA-10751 Empty commit

2019-12-03 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/hbase-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 062366d  INFRA-10751 Empty commit
062366d is described below

commit 062366d6d34b5e386116f9f81f90f2877c0ba9ef
Author: jenkins 
AuthorDate: Tue Dec 3 14:43:06 2019 +

INFRA-10751 Empty commit



[hbase-connectors] branch master updated (8f7e56c -> 3106694)

2019-12-03 Thread meszibalu
This is an automated email from the ASF dual-hosted git repository.

meszibalu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-connectors.git.


from 8f7e56c  HBASE-23348 Spark's createTable method throws an exception 
while the table is being split (#50)
 add 3106694  HBASE-23351 updating hbase version to 2.2.2 (#52)

No new revisions were added by this update.

Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



svn commit: r37066 [2/4] - /release/hbase/2.1.8/

2019-12-03 Thread zhangduo


Added: release/hbase/2.1.8/CHANGES.md
==
--- release/hbase/2.1.8/CHANGES.md (added)
+++ release/hbase/2.1.8/CHANGES.md Tue Dec  3 08:01:47 2019
@@ -0,0 +1,1342 @@
+# HBASE Changelog
+
+
+## Release 2.1.8 - Unreleased (as of 2019-11-19)
+
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-19450](https://issues.apache.org/jira/browse/HBASE-19450) | Add log 
about average execution time for ScheduledChore |  Minor | Operability |
+| [HBASE-23283](https://issues.apache.org/jira/browse/HBASE-23283) | Provide 
clear and consistent logging about the period of enabled chores |  Minor | 
Operability |
+| [HBASE-23245](https://issues.apache.org/jira/browse/HBASE-23245) | All 
MutableHistogram implementations should remove maxExpected |  Major | metrics |
+| [HBASE-23228](https://issues.apache.org/jira/browse/HBASE-23228) | Allow for 
jdk8 specific modules on branch-1 in precommit/nightly testing |  Critical | 
build, test |
+| [HBASE-23082](https://issues.apache.org/jira/browse/HBASE-23082) | Backport 
low-latency snapshot tracking for space quotas to 2.x |  Major | Quotas |
+| [HBASE-23238](https://issues.apache.org/jira/browse/HBASE-23238) | 
Additional test and checks for null references on ScannerCallableWithReplicas | 
 Minor | . |
+| [HBASE-23221](https://issues.apache.org/jira/browse/HBASE-23221) | Polish 
the WAL interface after HBASE-23181 |  Major | regionserver, wal |
+| [HBASE-23207](https://issues.apache.org/jira/browse/HBASE-23207) | Log a 
region open journal |  Minor | . |
+| [HBASE-23172](https://issues.apache.org/jira/browse/HBASE-23172) | HBase 
Canary region success count metrics reflect column family successes, not region 
successes |  Minor | canary |
+| [HBASE-20626](https://issues.apache.org/jira/browse/HBASE-20626) | Change 
the value of "Requests Per Second" on WEBUI |  Major | metrics, UI |
+| [HBASE-23093](https://issues.apache.org/jira/browse/HBASE-23093) | Avoid 
Optional Anti-Pattern where possible |  Minor | . |
+| [HBASE-23114](https://issues.apache.org/jira/browse/HBASE-23114) | Use 
archiveArtifacts in Jenkinsfiles |  Trivial | . |
+| [HBASE-23095](https://issues.apache.org/jira/browse/HBASE-23095) | Reuse 
FileStatus in StoreFileInfo |  Major | mob, snapshots |
+| [HBASE-23116](https://issues.apache.org/jira/browse/HBASE-23116) | 
LoadBalancer should log table name when balancing per table |  Minor | . |
+| [HBASE-22874](https://issues.apache.org/jira/browse/HBASE-22874) | Define a 
public interface for Canary and move existing implementation to LimitedPrivate 
|  Critical | canary |
+| [HBASE-23038](https://issues.apache.org/jira/browse/HBASE-23038) | Provide 
consistent and clear logging about disabling chores |  Minor | master, 
regionserver |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component |
+|:---- |:---- | :--- |:---- |
+| [HBASE-23318](https://issues.apache.org/jira/browse/HBASE-23318) | 
LoadTestTool doesn't start |  Minor | . |
+| [HBASE-23294](https://issues.apache.org/jira/browse/HBASE-23294) | 
ReplicationBarrierCleaner should delete all the barriers for a removed region 
which does not belong to any serial replication peer |  Major | master, 
Replication |
+| [HBASE-23290](https://issues.apache.org/jira/browse/HBASE-23290) | shell 
processlist command is broken |  Major | shell |
+| [HBASE-18439](https://issues.apache.org/jira/browse/HBASE-18439) | 
Subclasses of o.a.h.h.chaos.actions.Action all use the same logger |  Minor | 
integration tests |
+| [HBASE-23262](https://issues.apache.org/jira/browse/HBASE-23262) | Cannot 
load Master UI |  Major | master, UI |
+| [HBASE-22980](https://issues.apache.org/jira/browse/HBASE-22980) | 
HRegionPartioner getPartition() method incorrectly partitions the regions of 
the table. |  Major | mapreduce |
+| [HBASE-21458](https://issues.apache.org/jira/browse/HBASE-21458) | Error: 
Could not find or load main class org.apache.hadoop.hbase.util.GetJavaProperty 
|  Minor | build, Client |
+| [HBASE-23243](https://issues.apache.org/jira/browse/HBASE-23243) | [pv2] 
Filter out SUCCESS procedures; on decent-sized cluster, plethora overwhelms 
problems |  Major | proc-v2, UI |
+| [HBASE-23247](https://issues.apache.org/jira/browse/HBASE-23247) | [hbck2] 
Schedule SCPs for 'Unknown Servers' |  Major | hbck2 |
+| [HBASE-23241](https://issues.apache.org/jira/browse/HBASE-23241) | 
TestExecutorService sometimes fail |  Major | test |
+| [HBASE-23244](https://issues.apache.org/jira/browse/HBASE-23244) | NPEs 
running Canary |  Major | canary |
+| [HBASE-23231](https://issues.apache.org/jira/browse/HBASE-23231) | 
ReplicationSource do not update metrics after refresh |  Major | wal |
+| [HBASE-23175](https://issues.apache.org/jira/browse/HBASE-23175) | Yarn 
unable to acquire delegation token for HBase Spark jobs |  Major | security, 
spark |
+| 

svn commit: r37066 [3/4] - /release/hbase/2.1.8/

2019-12-03 Thread zhangduo
Propchange: release/hbase/2.1.8/CHANGES.md
--
svn:executable = *

Added: release/hbase/2.1.8/RELEASENOTES.md
==
--- release/hbase/2.1.8/RELEASENOTES.md (added)
+++ release/hbase/2.1.8/RELEASENOTES.md Tue Dec  3 08:01:47 2019
@@ -0,0 +1,1453 @@
+# RELEASENOTES
+
+
+# HBASE  2.1.8 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-19450](https://issues.apache.org/jira/browse/HBASE-19450) | *Minor* | 
**Add log about average execution time for ScheduledChore**
+
+
+HBase internal chores now log a moving average of how long execution of each 
chore takes at `INFO` level for the logger 
`org.apache.hadoop.hbase.ScheduledChore`.
+
+Such messages will happen at most once per five minutes.
+
+
+---
+
+* [HBASE-23250](https://issues.apache.org/jira/browse/HBASE-23250) | *Minor* | 
**Log message about CleanerChore delegate initialization should be at INFO**
+
+CleanerChore delegate initialization is now logged at INFO level instead of 
DEBUG
+
+
+---
+
+* [HBASE-23243](https://issues.apache.org/jira/browse/HBASE-23243) | *Major* | 
**[pv2] Filter out SUCCESS procedures; on decent-sized cluster, plethora 
overwhelms problems**
+
+The 'Procedures & Locks' tab in Master UI only displays problematic Procedures 
now (RUNNABLE, WAITING-TIMEOUT, etc.). It no longer notes procedures whose 
state is SUCCESS.
+
+
+---
+
+* [HBASE-23227](https://issues.apache.org/jira/browse/HBASE-23227) | *Blocker* 
| **Upgrade jackson-databind to 2.9.10.1**
+
+
+
+the Apache HBase REST Proxy now uses Jackson Databind version 2.9.10.1 to 
address the following CVEs
+
+  - CVE-2019-16942
+  - CVE-2019-16943
+
+Users of prior releases with Jackson Databind 2.9.10 are advised to either 
upgrade to this release or to upgrade their local Jackson Databind jar directly.
+
+
+---
+
+* [HBASE-23222](https://issues.apache.org/jira/browse/HBASE-23222) | 
*Critical* | **Better logging and mitigation for MOB compaction failures**
+
+
+
+The MOB compaction process in the HBase Master now logs more about its 
activity.
+
+In the event that you run into the problems described in HBASE-22075, there is 
a new HFileCleanerDelegate that will stop all removal of MOB hfiles from the 
archive area. It can be configured by adding 
`org.apache.hadoop.hbase.mob.ManualMobMaintHFileCleaner` to the list configured 
for `hbase.master.hfilecleaner.plugins`. This new cleaner delegate will cause 
your archive area to grow unbounded; you will have to manually prune files 
which may be prohibitively complex. Consider if your use case will allow you to 
mitigate by disabling mob compactions instead.
+
+Caveats:
+* Be sure the list of cleaner delegates still includes the default cleaners 
you will likely need: ttl, snapshot, and hlink.
+* Be mindful that if you enable this cleaner delegate then there will be *no* 
automated process for removing these mob hfiles. You should see a single region 
per table in `%hbase_root%/archive` that accumulates files over time. You will 
have to determine which of these files are safe or not to remove.
+* You should list this cleaner delegate after the snapshot and hlink delegates 
so that you can enable sufficient logging to determine when an archived mob 
hfile is needed by those subsystems. When set to `TRACE` logging, the 
CleanerChore logger will include archive retention decision justifications.
+* If your use case creates a large number of uniquely named tables, this new 
delegate will cause memory pressure on the master.
+
+
+---
+
+* [HBASE-23172](https://issues.apache.org/jira/browse/HBASE-23172) | *Minor* | 
**HBase Canary region success count metrics reflect column family successes, 
not region successes**
+
+Added a comment to make clear that read/write success counts are tallying 
column family success counts, not region success counts.
+
+Additionally, the region read and write latencies previously only stored the 
latencies of the last column family of the region reads/writes. This has been 
fixed by using a map of each region to a list of read and write latency values.
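The per-region latency fix described in the HBASE-23172 note can be sketched as follows. The names here are illustrative, not the Canary's actual fields: the point is that each column-family probe appends to a per-region list instead of overwriting a single value with the last family's latency.

```java
import java.util.*;

// Sketch: keep per-region latency *lists* so no column family's
// measurement is lost.
public class CanaryLatencySketch {
    static final Map<String, List<Long>> readLatencies = new HashMap<>();

    static void recordRead(String region, long latencyMs) {
        // appends rather than replaces, retaining every CF's latency
        readLatencies.computeIfAbsent(region, r -> new ArrayList<>()).add(latencyMs);
    }

    public static void main(String[] args) {
        // two column families of the same region report separately;
        // both latencies are retained rather than only the last one
        recordRead("region-1", 12L);
        recordRead("region-1", 34L);
        System.out.println(readLatencies.get("region-1")); // [12, 34]
    }
}
```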
+
+
+---
+
+* [HBASE-23177](https://issues.apache.org/jira/browse/HBASE-23177) | *Major* | 
**If fail to open reference because FNFE, make it plain it is a Reference**
+
+Changes the message on the FNFE exception thrown when the file a Reference 
points to is missing; the message now includes detail on Reference as well as 
pointed-to file so can connect how FNFE relates to region open.
+
+
+---
+
+* [HBASE-20626](https://issues.apache.org/jira/browse/HBASE-20626) | *Major* | 
**Change the value of "Requests Per Second" on WEBUI**
+
+Use 'totalRowActionRequestCount' to calculate QPS on web UI.
+
+
+---
+
+* [HBASE-22874](https://issues.apache.org/jira/browse/HBASE-22874) | 
*Critical* | **Define 

svn commit: r37066 [1/4] - /release/hbase/2.1.8/

2019-12-03 Thread zhangduo
Author: zhangduo
Date: Tue Dec  3 08:01:47 2019
New Revision: 37066

Log:
release HBase 2.1.8

Added:
release/hbase/2.1.8/
release/hbase/2.1.8/CHANGES.md   (with props)
release/hbase/2.1.8/RELEASENOTES.md   (with props)
release/hbase/2.1.8/api_compare_2.1.7_to_2.1.8RC0.html
release/hbase/2.1.8/hbase-2.1.8-bin.tar.gz   (with props)
release/hbase/2.1.8/hbase-2.1.8-bin.tar.gz.asc   (with props)
release/hbase/2.1.8/hbase-2.1.8-bin.tar.gz.sha512
release/hbase/2.1.8/hbase-2.1.8-client-bin.tar.gz   (with props)
release/hbase/2.1.8/hbase-2.1.8-client-bin.tar.gz.asc   (with props)
release/hbase/2.1.8/hbase-2.1.8-client-bin.tar.gz.sha512
release/hbase/2.1.8/hbase-2.1.8-src.tar.gz   (with props)
release/hbase/2.1.8/hbase-2.1.8-src.tar.gz.asc   (with props)
release/hbase/2.1.8/hbase-2.1.8-src.tar.gz.sha512



svn commit: r37066 [4/4] - /release/hbase/2.1.8/

2019-12-03 Thread zhangduo
Added: release/hbase/2.1.8/api_compare_2.1.7_to_2.1.8RC0.html
==
--- release/hbase/2.1.8/api_compare_2.1.7_to_2.1.8RC0.html (added)
+++ release/hbase/2.1.8/api_compare_2.1.7_to_2.1.8RC0.html Tue Dec  3 08:01:47 
2019
@@ -0,0 +1,681 @@
+
+
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+
+
+
+
+
+<title>hbase: rel/2.1.7 to 2.1.8RC0 compatibility report</title>
+
+body {
+font-family:Arial, sans-serif;
+background-color:White;
+color:Black;
+}
+hr {
+color:Black;
+background-color:Black;
+height:1px;
+border:0;
+}
+h1 {
+margin-bottom:0px;
+padding-bottom:0px;
+font-size:1.625em;
+}
+h2 {
+margin-bottom:0px;
+padding-bottom:0px;
+font-size:1.25em;
+white-space:nowrap;
+}
+div.symbols {
+color:#003E69;
+}
+div.symbols i {
+color:Brown;
+}
+span.section {
+font-weight:bold;
+cursor:pointer;
+color:#003E69;
+white-space:nowrap;
+margin-left:0.3125em;
+}
+span:hover.section {
+color:#336699;
+}
+span.sect_aff {
+cursor:pointer;
+padding-left:1.55em;
+font-size:0.875em;
+color:#cc3300;
+}
+span.ext {
+font-weight:normal;
+}
+span.jar {
+color:#cc3300;
+font-size:0.875em;
+font-weight:bold;
+}
+div.jar_list {
+padding-left:0.4em;
+font-size:0.94em;
+}
+span.pkg_t {
+color:#408080;
+font-size:0.875em;
+}
+span.pkg {
+color:#408080;
+font-size:0.875em;
+font-weight:bold;
+}
+span.cname {
+color:Green;
+font-size:0.875em;
+font-weight:bold;
+}
+span.iname_b {
+font-weight:bold;
+}
+span.iname_a {
+color:#33;
+font-weight:bold;
+font-size:0.94em;
+}
+span.sym_p {
+font-weight:normal;
+white-space:normal;
+}
+span.sym_pd {
+white-space:normal;
+}
+span.sym_p span, span.sym_pd span {
+white-space:nowrap;
+}
+span.attr {
+color:Black;
+font-weight:normal;
+}
+span.deprecated {
+color:Red;
+font-weight:bold;
+font-family:Monaco, monospace;
+}
+div.affect {
+padding-left:1em;
+padding-bottom:10px;
+font-size:0.87em;
+font-style:italic;
+line-height:0.9em;
+}
+div.affected {
+padding-left:2em;
+padding-top:10px;
+}
+table.ptable {
+border-collapse:collapse;
+border:1px outset black;
+margin-left:0.95em;
+margin-top:3px;
+margin-bottom:3px;
+width:56.25em;
+}
+table.ptable td {
+border:1px solid Gray;
+padding:3px;
+font-size:0.875em;
+text-align:left;
+vertical-align:top;
+max-width:28em;
+word-wrap:break-word;
+}
+table.ptable th {
+background-color:#ee;
+font-weight:bold;
+color:#33;
+font-family:Verdana, Arial;
+font-size:0.875em;
+border:1px solid Gray;
+text-align:center;
+vertical-align:top;
+white-space:nowrap;
+padding:3px;
+}
+table.summary {
+border-collapse:collapse;
+border:1px outset black;
+}
+table.summary th {
+background-color:#ee;
+font-weight:normal;
+text-align:left;
+font-size:0.94em;
+white-space:nowrap;
+border:1px inset Gray;
+padding:3px;
+}
+table.summary td {
+text-align:right;
+white-space:nowrap;
+border:1px inset Gray;
+padding:3px 5px 3px 10px;
+}
+span.mngl {
+padding-left:1em;
+font-size:0.875em;
+cursor:text;
+color:#44;
+font-weight:bold;
+}
+span.pleft {
+padding-left:2.5em;
+}
+span.color_p {
+font-style:italic;
+color:Brown;
+}
+span.param {
+font-style:italic;
+}
+span.focus_p {
+font-style:italic;
+background-color:#DCDCDC;
+}
+span.ttype {
+font-weight:normal;
+}
+span.nowrap {
+white-space:nowrap;
+}
+span.value {
+white-space:nowrap;
+font-weight:bold;
+}
+.passed {
+background-color:#CCFFCC;
+font-weight:normal;
+}
+.warning {
+background-color:#F4F4AF;
+font-weight:normal;
+}
+.failed {
+background-color:#FF;
+font-weight:normal;
+}
+.new {
+background-color:#C6DEFF;
+font-weight:normal;
+}
+
+.compatible {
+background-color:#CCFFCC;
+font-weight:normal;
+}
+.almost_compatible {
+background-color:#FFDAA3;
+font-weight:normal;
+}
+.incompatible {
+background-color:#FF;
+font-weight:normal;
+}
+.gray {
+background-color:#DCDCDC;
+font-weight:normal;
+}
+
+.top_ref {
+font-size:0.69em;
+}
+.footer {
+font-size:0.8125em;
+}
+.tabset {
+float:left;
+}
+a.tab {
+border:1px solid Black;
+float:left;
+margin:0px 5px -1px 0px;
+padding:3px 5px 3px 5px;
+position:relative;
+font-size:0.875em;
+background-color:#DDD;
+text-decoration:none;
+color:Black;
+}
+a.disabled:hover
+{
+color:Black;
+background:#EEE;
+}
+a.active:hover
+{
+color:Black;
+background:White;
+}
+a.active {
+border-bottom-color:White;
+background-color:White;
+}
+div.tab {
+border-top:1px 

svn commit: r37065 - /release/hbase/2.1.7/

2019-12-03 Thread zhangduo
Author: zhangduo
Date: Tue Dec  3 08:00:13 2019
New Revision: 37065

Log:
release HBase 2.1.8

Removed:
release/hbase/2.1.7/