hbase git commit: HBASE-19890 Canary usage should document hbase.canary.sink.class config

2018-04-04 Thread psomogyi
Repository: hbase
Updated Branches:
  refs/heads/master 7abaf22a1 -> 87ab7e712


HBASE-19890 Canary usage should document hbase.canary.sink.class config


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/87ab7e71
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/87ab7e71
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/87ab7e71

Branch: refs/heads/master
Commit: 87ab7e712df4b9c9b24665488a69190310e747d9
Parents: 7abaf22
Author: Peter Somogyi 
Authored: Tue Apr 3 10:44:29 2018 +0200
Committer: Peter Somogyi 
Committed: Wed Apr 4 09:08:31 2018 +0200

--
 src/main/asciidoc/_chapters/ops_mgt.adoc | 7 +++
 1 file changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/87ab7e71/src/main/asciidoc/_chapters/ops_mgt.adoc
--
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc 
b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 8d49ef8..ce327fa 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -108,6 +108,13 @@ Usage: hbase canary [opts] [table1 [table2]...] | 
[regionserver1 [regionserver2]
-D= assigning or override the configuration params
 
 
+[NOTE]
+The `Sink` class is instantiated via the `hbase.canary.sink.class` configuration property,
+which also determines the Monitor class that will be used. If this property is not set,
+RegionServerStdOutSink will be used by default. The configured Sink must match the
+parameters passed to the _canary_ command; for example, to use table parameters you have to
+set the `hbase.canary.sink.class` property to `org.apache.hadoop.hbase.tool.Canary$RegionStdOutSink`.
+
 This tool will return non-zero error codes to the user so that it can integrate with 
other monitoring tools, such as Nagios.
 The error code definitions are:
 
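Tying the note above to the documented usage line, a hypothetical invocation might look as follows; the table name `mytable` is illustrative, and the property name and Sink class are the ones quoted in the patch:

```shell
$ hbase canary -D hbase.canary.sink.class=org.apache.hadoop.hbase.tool.Canary\$RegionStdOutSink mytable
```

Note the escaped `$` in the inner-class name, which shells would otherwise treat as a variable reference.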



hbase git commit: HBASE-20256 Document commands that do not work with 1.2 shell

2018-04-04 Thread psomogyi
Repository: hbase
Updated Branches:
  refs/heads/master 87ab7e712 -> aed7834dd


HBASE-20256 Document commands that do not work with 1.2 shell


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/aed7834d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/aed7834d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/aed7834d

Branch: refs/heads/master
Commit: aed7834dd11f294e1d28ce9feed7362973fe8328
Parents: 87ab7e7
Author: Peter Somogyi 
Authored: Wed Mar 28 10:37:47 2018 +0200
Committer: Peter Somogyi 
Committed: Wed Apr 4 09:13:14 2018 +0200

--
 src/main/asciidoc/_chapters/upgrading.adoc | 10 ++
 1 file changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/aed7834d/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index acabf6c..38a67d4 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -446,6 +446,16 @@ The following permission related changes either altered 
semantics or defaults:
 
 A number of admin commands are known to not work when used from a pre-HBase 
2.0 client. This includes an HBase Shell that has the library jars from 
pre-HBase 2.0. You will need to plan for an outage of admin APIs and 
commands until you can also update to the needed client version.
 
+The following client operations do not work against an HBase 2.0+ cluster when 
executed from a pre-HBase 2.0 client:
+
+* list_procedures
+* split
+* merge_region
+* list_quotas
+* enable_table_replication
+* disable_table_replication
+* Snapshot related commands
+
 .Deprecated in 1.0 admin commands have been removed.
 
 The following commands that were deprecated in 1.0 have been removed. Where 
applicable the replacement command is listed.



hbase git commit: HBASE-20328 Fix local backup master start command in documentation

2018-04-04 Thread psomogyi
Repository: hbase
Updated Branches:
  refs/heads/master aed7834dd -> 0c0fe05bc


HBASE-20328 Fix local backup master start command in documentation

Signed-off-by: Umesh Agashe 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/0c0fe05b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/0c0fe05b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/0c0fe05b

Branch: refs/heads/master
Commit: 0c0fe05bc410bcfcccaa19d4be96834cc28f9317
Parents: aed7834
Author: Yuki Tawara 
Authored: Tue Apr 3 00:10:42 2018 +0900
Committer: Peter Somogyi 
Committed: Wed Apr 4 10:06:19 2018 +0200

--
 src/main/asciidoc/_chapters/getting_started.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/0c0fe05b/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 1cdc0a2..47e0d96 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -371,7 +371,7 @@ The following command starts 3 backup servers using ports 
16002/16012, 16003/160
 +
 
 
-$ ./bin/local-master-backup.sh 2 3 5
+$ ./bin/local-master-backup.sh start 2 3 5
 
 +
 To kill a backup master without killing the entire cluster, you need to find 
its process ID (PID). The PID is stored in a file with a name like 
_/tmp/hbase-USER-X-master.pid_.

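The port pairs quoted above (16002/16012, 16003/16013, 16005/16015) follow from adding each offset argument to the default master web/RPC base ports; a minimal sketch of that arithmetic, assuming base ports 16000 and 16010 as implied by the quoted doc text:

```shell
# Each backup-master offset N yields ports (16000 + N) and (16010 + N),
# matching the pairs named in the getting_started.adoc hunk above.
for offset in 2 3 5; do
  echo "offset $offset -> ports $((16000 + offset))/$((16010 + offset))"
done
```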


hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.3 0db4bd3aa -> 090adcd37


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/090adcd3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/090adcd3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/090adcd3

Branch: refs/heads/branch-1.3
Commit: 090adcd375e5df8d24e16f88c15cc2bfda383808
Parents: 0db4bd3
Author: Pankaj Kumar 
Authored: Wed Apr 4 14:43:02 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 14:43:02 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/090adcd3/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git 
a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
 
b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 8fa1b8a..6b0aad1 100644
--- 
a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ 
b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -109,13 +109,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(Bytes.toStringBinary((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(Bytes.toStringBinary((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(Bytes.toStringBinary(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(Bytes.toStringBinary(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

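The intent of the fix above can be sketched outside HBase: a family-only Delete carries a KeyValue with an empty qualifier, and before the patch the `':'` separator was appended unconditionally, so the REST row spec came out as `cf:` (column with empty qualifier) instead of `cf` (whole family). The `columnSpec` helper below is a hypothetical simplification of the spec-building loop, not the actual `RemoteHTable` method (the real code walks a map of families to qualifiers):

```java
public class ColumnSpecSketch {
    // Append ':' and the qualifier only when a non-empty qualifier is present,
    // mirroring the getQualifierLength() != 0 check introduced by the patch.
    static String columnSpec(String family, String qualifier) {
        StringBuilder sb = new StringBuilder(family);
        if (qualifier != null && !qualifier.isEmpty()) {
            sb.append(':').append(qualifier);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(columnSpec("cf", "q1")); // delete one column
        System.out.println(columnSpec("cf", ""));   // delete the whole family
    }
}
```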
http://git-wip-us.apache.org/repos/asf/hbase/blob/090adcd3/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git 
a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
 
b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 121ff65..cd33edd 100644
--- 
a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ 
b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -330,18 +330,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -371,6 +380,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20231 Not able to delete column family from a row using RemoteHTable

2018-04-04 Thread ashishsinghi
Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 76f599de9 -> 8eac32fe9


HBASE-20231 Not able to delete column family from a row using RemoteHTable

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8eac32fe
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8eac32fe
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8eac32fe

Branch: refs/heads/branch-1.2
Commit: 8eac32fe92cc960490a9f560133b5be2c05558b4
Parents: 76f599d
Author: Pankaj Kumar 
Authored: Wed Apr 4 14:44:12 2018 +0530
Committer: Ashish Singhi 
Committed: Wed Apr 4 14:44:12 2018 +0530

--
 .../hadoop/hbase/rest/client/RemoteHTable.java  |  7 +--
 .../hbase/rest/client/TestRemoteTable.java  | 22 
 2 files changed, 27 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8eac32fe/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
--
diff --git 
a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
 
b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
index 8c5c168..e878794 100644
--- 
a/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
+++ 
b/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -109,13 +109,16 @@ public class RemoteHTable implements Table {
   Iterator ii = quals.iterator();
   while (ii.hasNext()) {
 sb.append(Bytes.toStringBinary((byte[])e.getKey()));
-sb.append(':');
 Object o = ii.next();
 // Puts use byte[] but Deletes use KeyValue
 if (o instanceof byte[]) {
+  sb.append(':');
   sb.append(Bytes.toStringBinary((byte[])o));
 } else if (o instanceof KeyValue) {
-  sb.append(Bytes.toStringBinary(((KeyValue)o).getQualifier()));
+  if (((KeyValue) o).getQualifierLength() != 0) {
+sb.append(':');
+sb.append(Bytes.toStringBinary(((KeyValue) o).getQualifier()));
+  }
 } else {
   throw new RuntimeException("object type not handled");
 }

http://git-wip-us.apache.org/repos/asf/hbase/blob/8eac32fe/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
--
diff --git 
a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
 
b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
index 121ff65..cd33edd 100644
--- 
a/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
+++ 
b/hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -330,18 +330,27 @@ public class TestRemoteTable {
 Put put = new Put(ROW_3);
 put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
 put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+put.add(COLUMN_3, QUALIFIER_1, VALUE_1);
+put.add(COLUMN_3, QUALIFIER_2, VALUE_2);
 remoteTable.put(put);
 
 Get get = new Get(ROW_3);
 get.addFamily(COLUMN_1);
 get.addFamily(COLUMN_2);
+get.addFamily(COLUMN_3);
 Result result = remoteTable.get(get);
 byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
 byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+byte[] value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+byte[] value4 = result.getValue(COLUMN_3, QUALIFIER_2);
 assertNotNull(value1);
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNotNull(value2);
 assertTrue(Bytes.equals(VALUE_2, value2));
+assertNotNull(value3);
+assertTrue(Bytes.equals(VALUE_1, value3));
+assertNotNull(value4);
+assertTrue(Bytes.equals(VALUE_2, value4));
 
 Delete delete = new Delete(ROW_3);
 delete.addColumn(COLUMN_2, QUALIFIER_2);
@@ -371,6 +380,19 @@ public class TestRemoteTable {
 assertTrue(Bytes.equals(VALUE_1, value1));
 assertNull(value2);
 
+// Delete column family from row
+delete = new Delete(ROW_3);
+delete.addFamily(COLUMN_3);
+remoteTable.delete(delete);
+
+get = new Get(ROW_3);
+get.addFamily(COLUMN_3);
+result = remoteTable.get(get);
+value3 = result.getValue(COLUMN_3, QUALIFIER_1);
+value4 = result.getValue(COLUMN_3, QUALIFIER_2);
+assertNull(value3);
+assertNull(value4);
+
 delete = new Delete(ROW_3);
 remoteTable.delete(delete);
 



hbase git commit: HBASE-20301 Remove the meaningless plus sign from table.jsp

2018-04-04 Thread chia7712
Repository: hbase
Updated Branches:
  refs/heads/branch-1.4 0ccdffe95 -> 382c5f079


HBASE-20301 Remove the meaningless plus sign from table.jsp

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/382c5f07
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/382c5f07
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/382c5f07

Branch: refs/heads/branch-1.4
Commit: 382c5f0791af7d7489f580a3abb258075579ad37
Parents: 0ccdffe
Author: Chia-Ping Tsai 
Authored: Wed Mar 28 14:46:35 2018 +0800
Committer: Chia-Ping Tsai 
Committed: Wed Apr 4 20:11:14 2018 +0800

--
 hbase-server/src/main/resources/hbase-webapps/master/table.jsp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/382c5f07/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
--
diff --git a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp 
b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
index 86a5a76..44f0a64 100644
--- a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
@@ -665,7 +665,7 @@ ShowDetailName&Start/End Key

hbase git commit: HBASE-20301 Remove the meaningless plus sign from table.jsp

2018-04-04 Thread chia7712
Repository: hbase
Updated Branches:
  refs/heads/branch-1 2eae8104d -> 2f683cd43


HBASE-20301 Remove the meaningless plus sign from table.jsp

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2f683cd4
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2f683cd4
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2f683cd4

Branch: refs/heads/branch-1
Commit: 2f683cd4386e99381fcab769ead21e1385f494e9
Parents: 2eae810
Author: Chia-Ping Tsai 
Authored: Wed Mar 28 14:46:35 2018 +0800
Committer: Chia-Ping Tsai 
Committed: Wed Apr 4 20:10:46 2018 +0800

--
 hbase-server/src/main/resources/hbase-webapps/master/table.jsp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2f683cd4/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
--
diff --git a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp 
b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
index 5fa068c..2d77e57 100644
--- a/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
+++ b/hbase-server/src/main/resources/hbase-webapps/master/table.jsp
@@ -672,7 +672,7 @@ ShowDetailName&Start/End Key

[02/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

2018-04-04 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/export_control.html
--
diff --git a/export_control.html b/export_control.html
index 24c0446..561a436 100644
--- a/export_control.html
+++ b/export_control.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – 
   Export Control
@@ -321,7 +321,7 @@ for more details.
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/index.html
--
diff --git a/index.html b/index.html
index 2d5468c..bf7d736 100644
--- a/index.html
+++ b/index.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Apache HBase™ Home
 
@@ -425,7 +425,7 @@ Apache HBase is an open-source, distributed, versioned, 
non-relational database
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/integration.html
--
diff --git a/integration.html b/integration.html
index abadd64..b8cd2d8 100644
--- a/integration.html
+++ b/integration.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – CI Management
 
@@ -281,7 +281,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/issue-tracking.html
--
diff --git a/issue-tracking.html b/issue-tracking.html
index d3c0079..7cfa5eb 100644
--- a/issue-tracking.html
+++ b/issue-tracking.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Issue Management
 
@@ -278,7 +278,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/license.html
--
diff --git a/license.html b/license.html
index 73bfa25..0ca53c8 100644
--- a/license.html
+++ b/license.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Project Licenses
 
@@ -481,7 +481,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/mail-lists.html
--
diff --git a/mail-lists.html b/mail-lists.html
index 6820fd5..33c4f9a 100644
--- a/mail-lists.html
+++ b/mail-lists.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Project Mailing Lists
 
@@ -331,7 +331,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/metrics.html
--
diff --git a/metrics.html b/metrics.html
index 62f2c84..776c015 100644
--- a/metrics.html
+++ b/metrics.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase –  
   Apache HBase (TM) Metrics
@@ -449,7 +449,7 @@ export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS 
-Dcom.sun.management.jmxrem
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/old_news.html
--
diff --git a/old_news.html b/old_news.html
ind

hbase-site git commit: INFRA-10751 Empty commit

2018-04-04 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 174c22ea2 -> a0fbd6a82


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/a0fbd6a8
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/a0fbd6a8
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/a0fbd6a8

Branch: refs/heads/asf-site
Commit: a0fbd6a823c9f5c583c24874b24f028879e938c3
Parents: 174c22e
Author: jenkins 
Authored: Wed Apr 4 14:48:00 2018 +
Committer: jenkins 
Committed: Wed Apr 4 14:48:00 2018 +

--

--




[21/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

2018-04-04 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/book.html
--
diff --git a/book.html b/book.html
index 233b28d..0621ea8 100644
--- a/book.html
+++ b/book.html
@@ -901,7 +901,7 @@ The following command starts 3 backup servers using ports 
16002/16012, 16003/160
 
 
 
-$ ./bin/local-master-backup.sh 2 3 5
+$ ./bin/local-master-backup.sh start 2 3 5
 
 
 
@@ -6751,6 +6751,12 @@ Quitting...
 
 The metric 'blockCacheEvictionCount' published on a per-region server basis 
no longer includes blocks removed from the cache due to the invalidation of the 
hfiles they are from (e.g. via compaction).
 
+
+The metric 'totalRequestCount' increments once per request; previously it 
incremented by the number of Actions carried in the request; e.g. 
if a request was a multi made of four Gets and two Puts, 
we’d increment 'totalRequestCount' by six; now we increment by one 
regardless. Expect to see lower values for this metric in hbase-2.0.0.
+
+
+The 'readRequestCount' now counts reads that return a non-empty row, whereas 
older hbases incremented 'readRequestCount' whether or not the read returned a 
Result. This change will flatten the profile of the read-requests graphs if 
many requests are for non-existent rows. A YCSB read-heavy workload can do this 
depending on how the database was loaded.
+
 
 
 
@@ -6763,6 +6769,16 @@ Quitting...
 
 
 
+
+The following metrics have been added:
+
+
+
+
+'totalRowActionRequestCount' is a count of region row actions summing reads 
and writes.
+
+
+
 
 ZooKeeper configs no longer read from zoo.cfg
 HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related 
configuration settings. If you previously relied on the 
'hbase.config.read.zookeeper.config' config for this functionality, you should 
migrate any needed settings to the hbase-site.xml file while adding the prefix 
'hbase.zookeeper.property.' to each property name.
@@ -6786,6 +6802,34 @@ Quitting...
 A number of admin commands are known to not work when used from a pre-HBase 
2.0 client. This includes an HBase Shell that has the library jars from 
pre-HBase 2.0. You will need to plan for an outage of use of admin APIs and 
commands until you can also update to the needed client version.
 
 
+The following client operations do not work against HBase 2.0+ cluster when 
executed from a pre-HBase 2.0 client:
+
+
+
+
+list_procedures
+
+
+split
+
+
+merge_region
+
+
+list_quotas
+
+
+enable_table_replication
+
+
+disable_table_replication
+
+
+Snapshot related commands
+
+
+
+
 Deprecated in 1.0 admin commands have been removed.
 The following commands that were deprecated in 1.0 have been removed. Where 
applicable the replacement command is listed.
 
@@ -14702,8 +14746,11 @@ If writing to the WAL fails, the entire operation to 
modify the data fails.
 
 
 HBase uses an implementation of the https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/wal/WAL.html";>WAL
 interface.
-Usually, there is only one instance of a WAL per RegionServer.
-The RegionServer records Puts and Deletes to it, before recording them to the 
MemStore for the affected Store.
+Usually, there is only one instance of a WAL per RegionServer. An exception
+is the RegionServer that is carrying hbase:meta; the meta 
table gets its
+own dedicated WAL.
+The RegionServer records Puts and Deletes to its WAL before recording
these Mutations in the MemStore for the affected Store.
 
 
 
@@ -14723,14 +14770,46 @@ You will likely find references to the HLog in 
documentation tailored to these o
 
 
 
-The WAL resides in HDFS in the /hbase/WALs/ directory (prior to 
HBase 0.94, they were stored in /hbase/.logs/), with subdirectories 
per region.
+The WAL resides in HDFS in the /hbase/WALs/ directory, with 
subdirectories per region.
+
+
+For more general information about the concept of write ahead logs, see the 
Wikipedia
+http://en.wikipedia.org/wiki/Write-ahead_logging";>Write-Ahead Log 
article.
+
+
+
+70.6.2. WAL 
Providers
+
+In HBase, there are a number of WAL implementations (or 'Providers'). Each is known
+by a short name label (that unfortunately is not always descriptive). You set the provider in
+hbase-site.xml by passing the WAL provider short-name as the value of the
+hbase.wal.provider property (set the provider for hbase:meta 
using the
+hbase.wal.meta_provider property).
+
+
+
+
+asyncfs: The default. New since hbase-2.0.0 
(HBASE-15536, HBASE-14790). This AsyncFSWAL provider, as it identifies 
itself in RegionServer logs, is built on a new non-blocking dfsclient 
implementation. It is currently resident in the hbase codebase but intent is to 
move it back up into HDFS itself. WALs edits are written concurrently 
("fan-out") style to each of the WAL-block replicas on each DataNode rather 
than in a chained pipeline as the default client does. Latencies should be 
better. See https://www.slideshare.net/HBaseCon/apache-hbase-improvements-and-practices-at-xiaomi";>Apache
 HBase Improe

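The provider selection described above can be sketched as an hbase-site.xml fragment; the property names come from the text, and `asyncfs` is the value the text names as the default (shown here only as an illustration of setting it explicitly):

```xml
<!-- hbase-site.xml: pick the WAL provider by its short-name label -->
<property>
  <name>hbase.wal.provider</name>
  <value>asyncfs</value>
</property>
<!-- Optionally pick a separate provider for the hbase:meta table's WAL -->
<property>
  <name>hbase.wal.meta_provider</name>
  <value>asyncfs</value>
</property>
```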
[15/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

2018-04-04 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
index 8d5fac5..55727dc 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/package-tree.html
@@ -704,20 +704,20 @@
 
 java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true";
 title="class or interface in java.lang">Enum (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true";
 title="class or interface in java.lang">Comparable, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true";
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.regionserver.TimeRangeTracker.Type
+org.apache.hadoop.hbase.regionserver.BloomType
+org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.FactoryStorage
 org.apache.hadoop.hbase.regionserver.HRegion.FlushResult.Result
+org.apache.hadoop.hbase.regionserver.Region.Operation
+org.apache.hadoop.hbase.regionserver.ScanType
+org.apache.hadoop.hbase.regionserver.TimeRangeTracker.Type
 org.apache.hadoop.hbase.regionserver.SplitLogWorker.TaskExecutor.Status
-org.apache.hadoop.hbase.regionserver.DefaultHeapMemoryTuner.StepDirection
-org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.FactoryStorage
 org.apache.hadoop.hbase.regionserver.ScannerContext.LimitScope
-org.apache.hadoop.hbase.regionserver.FlushType
-org.apache.hadoop.hbase.regionserver.ScannerContext.NextState
 org.apache.hadoop.hbase.regionserver.ChunkCreator.ChunkType
-org.apache.hadoop.hbase.regionserver.ScanType
-org.apache.hadoop.hbase.regionserver.BloomType
-org.apache.hadoop.hbase.regionserver.Region.Operation
-org.apache.hadoop.hbase.regionserver.CompactingMemStore.IndexType
 org.apache.hadoop.hbase.regionserver.MemStoreCompactionStrategy.Action
+org.apache.hadoop.hbase.regionserver.ScannerContext.NextState
+org.apache.hadoop.hbase.regionserver.CompactingMemStore.IndexType
+org.apache.hadoop.hbase.regionserver.FlushType
+org.apache.hadoop.hbase.regionserver.DefaultHeapMemoryTuner.StepDirection
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
index b377318..3bd22b5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/querymatcher/package-tree.html
@@ -130,8 +130,8 @@
 
 java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true";
 title="class or interface in java.lang">Enum (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true";
 title="class or interface in java.lang">Comparable, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true";
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher.DropDeletesInOutput
 org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.MatchCode
+org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher.DropDeletesInOutput
 org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker.DeleteResult
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/replication/regionserver/package-tree.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/replication/regionserver/package-tree.html 
b/devapidocs/org/apache/hadoop/hbase/replication/regionserver/package-tree.html
index 732825f..5efcbb8 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/replication/regionserver/package-tree.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/replication/regionserver/package-tree.html
@@ -199,8 +199,8 @@
 
 java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Enum.html?is-external=true";
 title="class or interface in java.lang">Enum (implements java.lang.https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true";
 title="class or interface in java.lang">Comparable, java.io.https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html?is-external=true";
 title="class or interface in java.io">Serializable)
 
-org.apache.hadoop.hbase.replication.regionse

[05/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

2018-04-04 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.Iter.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.Iter.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.Iter.html
index 81dd9f9..fc92a63 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.Iter.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.Iter.html
@@ -123,910 +123,913 @@
 115  Iterator ii = 
quals.iterator();
 116  while (ii.hasNext()) {
 117
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-118sb.append(':');
-119Object o = ii.next();
-120// Puts use byte[] but 
Deletes use KeyValue
-121if (o instanceof byte[]) {
-122  
sb.append(toURLEncodedBytes((byte[])o));
+118Object o = ii.next();
+119// Puts use byte[] but 
Deletes use KeyValue
+120if (o instanceof byte[]) {
+121  sb.append(':');
+122  
sb.append(toURLEncodedBytes((byte[]) o));
 123} else if (o instanceof 
KeyValue) {
-124  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-125} else {
-126  throw new 
RuntimeException("object type not handled");
-127}
-128if (ii.hasNext()) {
-129  sb.append(',');
+124  if (((KeyValue) 
o).getQualifierLength() != 0) {
+125sb.append(':');
+126
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+127  }
+128} else {
+129  throw new 
RuntimeException("object type not handled");
 130}
-131  }
-132}
-133if (i.hasNext()) {
-134  sb.append(',');
+131if (ii.hasNext()) {
+132  sb.append(',');
+133}
+134  }
 135}
-136  }
-137}
-138if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-139  sb.append('/');
-140  sb.append(startTime);
-141  if (startTime != endTime) {
-142sb.append(',');
-143sb.append(endTime);
-144  }
-145} else if (endTime != Long.MAX_VALUE) 
{
-146  sb.append('/');
-147  sb.append(endTime);
-148}
-149if (maxVersions > 1) {
-150  sb.append("?v=");
-151  sb.append(maxVersions);
-152}
-153return sb.toString();
-154  }
-155
-156  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-157StringBuilder sb = new 
StringBuilder();
-158sb.append('/');
-159sb.append(Bytes.toString(name));
-160sb.append("/multiget/");
-161if (rows == null || rows.length == 0) 
{
-162  return sb.toString();
-163}
-164sb.append("?");
-165for(int i=0; i

[19/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index dda72e8..3d9e14a 100644
--- a/checkstyle.rss
+++ b/checkstyle.rss
@@ -26,7 +26,7 @@ under the License.
 ©2007 - 2018 The Apache Software Foundation
 
   File: 3600,
- Errors: 15914,
+ Errors: 15913,
  Warnings: 0,
  Infos: 0
   
@@ -13971,7 +13971,7 @@ under the License.
   0
 
 
-  88
+  87
 
   
   

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/coc.html
--
diff --git a/coc.html b/coc.html
index 62dd9f8..d922ab3 100644
--- a/coc.html
+++ b/coc.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – 
   Code of Conduct Policy
@@ -365,7 +365,7 @@ email to mailto:priv...@hbase.apache.org";>the priv
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/dependencies.html
--
diff --git a/dependencies.html b/dependencies.html
index 936adbb..dea81bf 100644
--- a/dependencies.html
+++ b/dependencies.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Project Dependencies
 
@@ -430,7 +430,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/dependency-convergence.html
--
diff --git a/dependency-convergence.html b/dependency-convergence.html
index faedc46..0c9c22d 100644
--- a/dependency-convergence.html
+++ b/dependency-convergence.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Reactor Dependency Convergence
 
@@ -1095,7 +1095,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/dependency-info.html
--
diff --git a/dependency-info.html b/dependency-info.html
index 81994f9..7e867d6 100644
--- a/dependency-info.html
+++ b/dependency-info.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Dependency Information
 
@@ -303,7 +303,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/dependency-management.html
--
diff --git a/dependency-management.html b/dependency-management.html
index 9e26e21..6b6d817 100644
--- a/dependency-management.html
+++ b/dependency-management.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Project Dependency Management
 
@@ -959,7 +959,7 @@
 https://www.apache.org/";>The Apache Software 
Foundation.
 All rights reserved.  
 
-  Last Published: 
2018-04-03
+  Last Published: 
2018-04-04
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/constant-values.html
--
diff --git a/devapidocs/constant-values.html b/devapidocs/constant-values.html
index dc53779..4473943 100644
--- a/devapidocs/constant-values.html
+++ b/devapidocs/constant-values.html
@@ -3768,21 +3768,21 @@
 
 public static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String
 date
-"Tue Apr  3 14:41:01 UTC 2018"
+"Wed Apr  4 14:41:07 UTC 2018"
 
 
 
 
 public static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String
 revision
-"219625233c1e

[11/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionInfo.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionInfo.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionInfo.html
index 7aeb6fd..d2efdfe 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionInfo.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionInfo.html
@@ -55,3756 +55,3746 @@
 047import 
java.util.concurrent.locks.ReentrantReadWriteLock;
 048import java.util.function.Function;
 049import 
javax.management.MalformedObjectNameException;
-050import javax.management.ObjectName;
-051import javax.servlet.http.HttpServlet;
-052import 
org.apache.commons.lang3.RandomUtils;
-053import 
org.apache.commons.lang3.StringUtils;
-054import 
org.apache.commons.lang3.SystemUtils;
-055import 
org.apache.hadoop.conf.Configuration;
-056import org.apache.hadoop.fs.FileSystem;
-057import org.apache.hadoop.fs.Path;
-058import 
org.apache.hadoop.hbase.Abortable;
-059import 
org.apache.hadoop.hbase.CacheEvictionStats;
-060import 
org.apache.hadoop.hbase.ChoreService;
-061import 
org.apache.hadoop.hbase.ClockOutOfSyncException;
-062import 
org.apache.hadoop.hbase.CoordinatedStateManager;
-063import 
org.apache.hadoop.hbase.DoNotRetryIOException;
-064import 
org.apache.hadoop.hbase.HBaseConfiguration;
-065import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-066import 
org.apache.hadoop.hbase.HConstants;
-067import 
org.apache.hadoop.hbase.HealthCheckChore;
-068import 
org.apache.hadoop.hbase.MetaTableAccessor;
-069import 
org.apache.hadoop.hbase.NotServingRegionException;
-070import 
org.apache.hadoop.hbase.PleaseHoldException;
-071import 
org.apache.hadoop.hbase.ScheduledChore;
-072import 
org.apache.hadoop.hbase.ServerName;
-073import 
org.apache.hadoop.hbase.Stoppable;
-074import 
org.apache.hadoop.hbase.TableDescriptors;
-075import 
org.apache.hadoop.hbase.TableName;
-076import 
org.apache.hadoop.hbase.YouAreDeadException;
-077import 
org.apache.hadoop.hbase.ZNodeClearer;
-078import 
org.apache.hadoop.hbase.client.ClusterConnection;
-079import 
org.apache.hadoop.hbase.client.Connection;
-080import 
org.apache.hadoop.hbase.client.ConnectionUtils;
-081import 
org.apache.hadoop.hbase.client.RegionInfo;
-082import 
org.apache.hadoop.hbase.client.RegionInfoBuilder;
-083import 
org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
-084import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-085import 
org.apache.hadoop.hbase.client.locking.EntityLock;
-086import 
org.apache.hadoop.hbase.client.locking.LockServiceClient;
-087import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
-088import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-089import 
org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination;
-090import 
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;
-091import 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-092import 
org.apache.hadoop.hbase.exceptions.RegionMovedException;
-093import 
org.apache.hadoop.hbase.exceptions.RegionOpeningException;
-094import 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
-095import 
org.apache.hadoop.hbase.executor.ExecutorService;
-096import 
org.apache.hadoop.hbase.executor.ExecutorType;
-097import 
org.apache.hadoop.hbase.fs.HFileSystem;
-098import 
org.apache.hadoop.hbase.http.InfoServer;
-099import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-100import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-101import 
org.apache.hadoop.hbase.io.hfile.HFile;
-102import 
org.apache.hadoop.hbase.io.util.MemorySizeUtil;
-103import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
-104import 
org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
-105import 
org.apache.hadoop.hbase.ipc.RpcClient;
-106import 
org.apache.hadoop.hbase.ipc.RpcClientFactory;
-107import 
org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-108import 
org.apache.hadoop.hbase.ipc.RpcServer;
-109import 
org.apache.hadoop.hbase.ipc.RpcServerInterface;
-110import 
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
-111import 
org.apache.hadoop.hbase.ipc.ServerRpcController;
-112import 
org.apache.hadoop.hbase.log.HBaseMarkers;
-113import 
org.apache.hadoop.hbase.master.HMaster;
-114import 
org.apache.hadoop.hbase.master.LoadBalancer;
-115import 
org.apache.hadoop.hbase.master.RegionState.State;
-116import 
org.apache.hadoop.hbase.mob.MobCacheConfig;
-117import 
org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
-118import 
org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
-119import 
org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
-120import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-121import 
org.apache.hadoop.hbase.quo

[16/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
index 3377a50..4ba1d04 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
@@ -1780,7 +1780,7 @@ extends 
 
 TOTAL_ROW_ACTION_REQUEST_COUNT
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String TOTAL_ROW_ACTION_REQUEST_COUNT
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String TOTAL_ROW_ACTION_REQUEST_COUNT
 
 See Also:
 Constant
 Field Values
@@ -1793,7 +1793,7 @@ extends 
 
 TOTAL_ROW_ACTION_REQUEST_COUNT_DESC
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String TOTAL_ROW_ACTION_REQUEST_COUNT_DESC
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String TOTAL_ROW_ACTION_REQUEST_COUNT_DESC
 
 See Also:
 Constant
 Field Values
@@ -1806,7 +1806,7 @@ extends 
 
 READ_REQUEST_COUNT
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String READ_REQUEST_COUNT
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String READ_REQUEST_COUNT
 
 See Also:
 Constant
 Field Values
@@ -1819,7 +1819,7 @@ extends 
 
 READ_REQUEST_COUNT_DESC
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String READ_REQUEST_COUNT_DESC
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String READ_REQUEST_COUNT_DESC
 
 See Also:
 Constant
 Field Values
@@ -1832,7 +1832,7 @@ extends 
 
 FILTERED_READ_REQUEST_COUNT
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String FILTERED_READ_REQUEST_COUNT
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String FILTERED_READ_REQUEST_COUNT
 
 See Also:
 Constant
 Field Values
@@ -1845,7 +1845,7 @@ extends 
 
 FILTERED_READ_REQUEST_COUNT_DESC
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String FILTERED_READ_REQUEST_COUNT_DESC
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String FILTERED_READ_REQUEST_COUNT_DESC
 
 See Also:
 Constant
 Field Values
@@ -1858,7 +1858,7 @@ extends 
 
 WRITE_REQUEST_COUNT
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String WRITE_REQUEST_COUNT
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String WRITE_REQUEST_COUNT
 
 See Also:
 Constant
 Field Values
@@ -1871,7 +1871,7 @@ extends 
 
 WRITE_REQUEST_COUNT_DESC
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String WRITE_REQUEST_COUNT_DESC
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String WRITE_REQUEST_COUNT_DESC
 
 See Also:
 Constant
 Field Values
@@ -1884,7 +1884,7 @@ extends 
 
 CHECK_MUTATE_FAILED_COUNT
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String CHECK_MUTATE_FAILED_COUNT
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String CHECK_MUTATE_FAILED_COUNT
 
 See Also:
 Constant
 Field Values
@@ -1897,7 +1897,7 @@ extends 
 
 CHECK_MUTATE_FAILED_COUNT_DESC
-static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String CHECK_MUTATE_FAILED_COUNT_DESC
+static final https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String CHECK_MUTATE_FAILED_COUNT_DESC
 
 See Also:
 C

[18/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
index 3a6187e..46a36b5 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
@@ -122,7 +122,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-protected static final class HRegionServer.MovedRegionsCleaner
+protected static final class HRegionServer.MovedRegionsCleaner
 extends ScheduledChore
 implements Stoppable
 Creates a Chore thread to clean the moved region 
cache.
@@ -242,7 +242,7 @@ implements 
 
 regionServer
-private HRegionServer regionServer
+private HRegionServer regionServer
 
 
 
@@ -251,7 +251,7 @@ implements 
 
 stoppable
-Stoppable stoppable
+Stoppable stoppable
 
 
 
@@ -268,7 +268,7 @@ implements 
 
 MovedRegionsCleaner
-private MovedRegionsCleaner(HRegionServer regionServer,
+private MovedRegionsCleaner(HRegionServer regionServer,
 Stoppable stoppable)
 
 
@@ -286,7 +286,7 @@ implements 
 
 create
-static HRegionServer.MovedRegionsCleaner create(HRegionServer rs)
+static HRegionServer.MovedRegionsCleaner create(HRegionServer rs)
 
 
 
@@ -295,7 +295,7 @@ implements 
 
 chore
-protected void chore()
+protected void chore()
 Description copied from 
class: ScheduledChore
 The task to execute on each scheduled execution of the 
Chore
 
@@ -310,7 +310,7 @@ implements 
 
 stop
-public void stop(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String why)
+public void stop(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String why)
 Description copied from 
interface: Stoppable
 Stop this service.
  Implementers should favor logging errors over throwing 
RuntimeExceptions.
@@ -328,7 +328,7 @@ implements 
 
 isStopped
-public boolean isStopped()
+public boolean isStopped()
 
 Specified by:
 isStopped in
 interface Stoppable

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
 
b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
index 907aacc..a9b1c72 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
@@ -122,7 +122,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-static class HRegionServer.PeriodicMemStoreFlusher
+static class HRegionServer.PeriodicMemStoreFlusher
 extends ScheduledChore
 
 
@@ -228,7 +228,7 @@ extends 
 
 server
-final HRegionServer server
+final HRegionServer server
 
 
 
@@ -237,7 +237,7 @@ extends 
 
 RANGE_OF_DELAY
-static final int RANGE_OF_DELAY
+static final int RANGE_OF_DELAY
 
 See Also:
 Constant
 Field Values
@@ -250,7 +250,7 @@ extends 
 
 MIN_DELAY_TIME
-static final int MIN_DELAY_TIME
+static final int MIN_DELAY_TIME
 
 See Also:
 Constant
 Field Values
@@ -271,7 +271,7 @@ extends 
 
 PeriodicMemStoreFlusher
-public PeriodicMemStoreFlusher(int cacheFlushInterval,
+public PeriodicMemStoreFlusher(int cacheFlushInterval,
HRegionServer server)
 
 
@@ -289,7 +289,7 @@ extends 
 
 chore
-protected void chore()
+protected void chore()
 Description copied from 
class: ScheduledChore
 The task to execute on each scheduled execution of the 
Chore
 



[01/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

Repository: hbase-site
Updated Branches:
  refs/heads/asf-site 35735602f -> 174c22ea2


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.html
--
diff --git 
a/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.html
 
b/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.html
index 035cf41..6a16c3c 100644
--- 
a/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.html
+++ 
b/testdevapidocs/src-html/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.html
@@ -35,97 +35,97 @@
 027import 
org.apache.hadoop.hbase.ServerName;
 028import 
org.apache.hadoop.hbase.client.ClusterConnection;
 029import 
org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint;
-030import 
org.apache.hadoop.hbase.replication.ReplicationPeers;
-031import 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSinkManager.SinkPeer;
-032import 
org.apache.hadoop.hbase.testclassification.ReplicationTests;
-033import 
org.apache.hadoop.hbase.testclassification.SmallTests;
-034import org.junit.Before;
-035import org.junit.ClassRule;
-036import org.junit.Test;
-037import 
org.junit.experimental.categories.Category;
-038
-039import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
-040
-041import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
-042
-043@Category({ReplicationTests.class, 
SmallTests.class})
-044public class TestReplicationSinkManager 
{
-045
-046  @ClassRule
-047  public static final HBaseClassTestRule 
CLASS_RULE =
-048  
HBaseClassTestRule.forClass(TestReplicationSinkManager.class);
-049
-050  private static final String 
PEER_CLUSTER_ID = "PEER_CLUSTER_ID";
-051
-052  private ReplicationPeers 
replicationPeers;
-053  private HBaseReplicationEndpoint 
replicationEndpoint;
-054  private ReplicationSinkManager 
sinkManager;
-055
-056  @Before
-057  public void setUp() {
-058replicationPeers = 
mock(ReplicationPeers.class);
-059replicationEndpoint = 
mock(HBaseReplicationEndpoint.class);
-060sinkManager = new 
ReplicationSinkManager(mock(ClusterConnection.class),
-061  PEER_CLUSTER_ID, 
replicationEndpoint, new Configuration());
-062  }
-063
-064  @Test
-065  public void testChooseSinks() {
-066List serverNames = 
Lists.newArrayList();
-067for (int i = 0; i < 20; i++) {
-068  
serverNames.add(mock(ServerName.class));
-069}
-070
-071
when(replicationEndpoint.getRegionServers())
-072  .thenReturn(serverNames);
+030import 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSinkManager.SinkPeer;
+031import 
org.apache.hadoop.hbase.testclassification.ReplicationTests;
+032import 
org.apache.hadoop.hbase.testclassification.SmallTests;
+033import org.junit.Before;
+034import org.junit.ClassRule;
+035import org.junit.Test;
+036import 
org.junit.experimental.categories.Category;
+037
+038import 
org.apache.hbase.thirdparty.com.google.common.collect.Lists;
+039
+040import 
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService;
+041
+042@Category({ReplicationTests.class, 
SmallTests.class})
+043public class TestReplicationSinkManager 
{
+044
+045  @ClassRule
+046  public static final HBaseClassTestRule 
CLASS_RULE =
+047  
HBaseClassTestRule.forClass(TestReplicationSinkManager.class);
+048
+049  private static final String 
PEER_CLUSTER_ID = "PEER_CLUSTER_ID";
+050
+051  private HBaseReplicationEndpoint 
replicationEndpoint;
+052  private ReplicationSinkManager 
sinkManager;
+053
+054  @Before
+055  public void setUp() {
+056replicationEndpoint = 
mock(HBaseReplicationEndpoint.class);
+057sinkManager = new 
ReplicationSinkManager(mock(ClusterConnection.class),
+058  PEER_CLUSTER_ID, 
replicationEndpoint, new Configuration());
+059  }
+060
+061  @Test
+062  public void testChooseSinks() {
+063List serverNames = 
Lists.newArrayList();
+064int totalServers = 20;
+065for (int i = 0; i < totalServers; 
i++) {
+066  
serverNames.add(mock(ServerName.class));
+067}
+068
+069
when(replicationEndpoint.getRegionServers())
+070  .thenReturn(serverNames);
+071
+072sinkManager.chooseSinks();
 073
-074sinkManager.chooseSinks();
-075
-076assertEquals(2, 
sinkManager.getNumSinks());
-077
-078  }
-079
-080  @Test
-081  public void 
testChooseSinks_LessThanRatioAvailable() {
-082List serverNames = 
Lists.newArrayList(mock(ServerName.class),
-083  mock(ServerName.class));
-084
-085
when(replicationEndpoint.getRegionServers())
-086  .thenReturn(serverNames);
-087
-088sinkManager.chooseSinks();
-089
-090assertEquals(1, 
sinkManager.getN

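The TestReplicationSinkManager diff above asserts that `chooseSinks()` keeps a fixed fraction of the advertised region servers: 20 servers yield 2 sinks, and 2 servers (the "less than ratio available" case) still yield 1. A minimal sketch of that ratio rule, assuming the 0.1 ratio implied by those assertions — `SinkRatioSketch` and `numSinks` are illustrative names, not HBase API:

```java
public class SinkRatioSketch {
    // Assumed fraction of region servers used as sinks; inferred from the
    // 20 -> 2 expectation in testChooseSinks, not read from HBase source.
    static final float RATIO = 0.1f;

    // Keep ratio * total servers, but never fewer than one sink.
    static int numSinks(int totalServers) {
        return Math.max(1, (int) (totalServers * RATIO));
    }

    public static void main(String[] args) {
        System.out.println(numSinks(20)); // 2, as asserted in testChooseSinks
        System.out.println(numSinks(2));  // 1, the "less than ratio available" case
    }
}
```

The `Math.max(1, …)` floor matches the second test's expectation: even when the ratio rounds down to zero, at least one sink is retained so replication can proceed.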
[06/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.CheckAndMutateBuilderImpl.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.CheckAndMutateBuilderImpl.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.CheckAndMutateBuilderImpl.html
index 81dd9f9..fc92a63 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.CheckAndMutateBuilderImpl.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.CheckAndMutateBuilderImpl.html
@@ -123,910 +123,913 @@
 115  Iterator ii = 
quals.iterator();
 116  while (ii.hasNext()) {
 117
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-118sb.append(':');
-119Object o = ii.next();
-120// Puts use byte[] but 
Deletes use KeyValue
-121if (o instanceof byte[]) {
-122  
sb.append(toURLEncodedBytes((byte[])o));
+118Object o = ii.next();
+119// Puts use byte[] but 
Deletes use KeyValue
+120if (o instanceof byte[]) {
+121  sb.append(':');
+122  
sb.append(toURLEncodedBytes((byte[]) o));
 123} else if (o instanceof 
KeyValue) {
-124  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-125} else {
-126  throw new 
RuntimeException("object type not handled");
-127}
-128if (ii.hasNext()) {
-129  sb.append(',');
+124  if (((KeyValue) 
o).getQualifierLength() != 0) {
+125sb.append(':');
+126
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+127  }
+128} else {
+129  throw new 
RuntimeException("object type not handled");
 130}
-131  }
-132}
-133if (i.hasNext()) {
-134  sb.append(',');
+131if (ii.hasNext()) {
+132  sb.append(',');
+133}
+134  }
 135}
-136  }
-137}
-138if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-139  sb.append('/');
-140  sb.append(startTime);
-141  if (startTime != endTime) {
-142sb.append(',');
-143sb.append(endTime);
-144  }
-145} else if (endTime != Long.MAX_VALUE) 
{
-146  sb.append('/');
-147  sb.append(endTime);
-148}
-149if (maxVersions > 1) {
-150  sb.append("?v=");
-151  sb.append(maxVersions);
-152}
-153return sb.toString();
-154  }
-155
-156  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-157StringBuilder sb = new 
StringBuilder();
-158sb.append('/');
-159sb.append(Bytes.toString(name));
-160sb.append("/multiget/");
-161if (rows == null || rows.length == 0) 
{
-162  return sb.toString();
-163}
-164sb.append("?");
-165for(int i=0; i

[10/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
index 7aeb6fd..d2efdfe 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.MovedRegionsCleaner.html
@@ -55,3756 +55,3746 @@
 047import 
java.util.concurrent.locks.ReentrantReadWriteLock;
 048import java.util.function.Function;
 049import 
javax.management.MalformedObjectNameException;
-050import javax.management.ObjectName;
-051import javax.servlet.http.HttpServlet;
-052import 
org.apache.commons.lang3.RandomUtils;
-053import 
org.apache.commons.lang3.StringUtils;
-054import 
org.apache.commons.lang3.SystemUtils;
-055import 
org.apache.hadoop.conf.Configuration;
-056import org.apache.hadoop.fs.FileSystem;
-057import org.apache.hadoop.fs.Path;
-058import 
org.apache.hadoop.hbase.Abortable;
-059import 
org.apache.hadoop.hbase.CacheEvictionStats;
-060import 
org.apache.hadoop.hbase.ChoreService;
-061import 
org.apache.hadoop.hbase.ClockOutOfSyncException;
-062import 
org.apache.hadoop.hbase.CoordinatedStateManager;
-063import 
org.apache.hadoop.hbase.DoNotRetryIOException;
-064import 
org.apache.hadoop.hbase.HBaseConfiguration;
-065import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-066import 
org.apache.hadoop.hbase.HConstants;
-067import 
org.apache.hadoop.hbase.HealthCheckChore;
-068import 
org.apache.hadoop.hbase.MetaTableAccessor;
-069import 
org.apache.hadoop.hbase.NotServingRegionException;
-070import 
org.apache.hadoop.hbase.PleaseHoldException;
-071import 
org.apache.hadoop.hbase.ScheduledChore;
-072import 
org.apache.hadoop.hbase.ServerName;
-073import 
org.apache.hadoop.hbase.Stoppable;
-074import 
org.apache.hadoop.hbase.TableDescriptors;
-075import 
org.apache.hadoop.hbase.TableName;
-076import 
org.apache.hadoop.hbase.YouAreDeadException;
-077import 
org.apache.hadoop.hbase.ZNodeClearer;
-078import 
org.apache.hadoop.hbase.client.ClusterConnection;
-079import 
org.apache.hadoop.hbase.client.Connection;
-080import 
org.apache.hadoop.hbase.client.ConnectionUtils;
-081import 
org.apache.hadoop.hbase.client.RegionInfo;
-082import 
org.apache.hadoop.hbase.client.RegionInfoBuilder;
-083import 
org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
-084import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-085import 
org.apache.hadoop.hbase.client.locking.EntityLock;
-086import 
org.apache.hadoop.hbase.client.locking.LockServiceClient;
-087import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
-088import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-089import 
org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination;
-090import 
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;
-091import 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-092import 
org.apache.hadoop.hbase.exceptions.RegionMovedException;
-093import 
org.apache.hadoop.hbase.exceptions.RegionOpeningException;
-094import 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
-095import 
org.apache.hadoop.hbase.executor.ExecutorService;
-096import 
org.apache.hadoop.hbase.executor.ExecutorType;
-097import 
org.apache.hadoop.hbase.fs.HFileSystem;
-098import 
org.apache.hadoop.hbase.http.InfoServer;
-099import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-100import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-101import 
org.apache.hadoop.hbase.io.hfile.HFile;
-102import 
org.apache.hadoop.hbase.io.util.MemorySizeUtil;
-103import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
-104import 
org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
-105import 
org.apache.hadoop.hbase.ipc.RpcClient;
-106import 
org.apache.hadoop.hbase.ipc.RpcClientFactory;
-107import 
org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-108import 
org.apache.hadoop.hbase.ipc.RpcServer;
-109import 
org.apache.hadoop.hbase.ipc.RpcServerInterface;
-110import 
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
-111import 
org.apache.hadoop.hbase.ipc.ServerRpcController;
-112import 
org.apache.hadoop.hbase.log.HBaseMarkers;
-113import 
org.apache.hadoop.hbase.master.HMaster;
-114import 
org.apache.hadoop.hbase.master.LoadBalancer;
-115import 
org.apache.hadoop.hbase.master.RegionState.State;
-116import 
org.apache.hadoop.hbase.mob.MobCacheConfig;
-117import 
org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
-118import 
org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
-119import 
org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
-120import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-121import 
org.apa

[14/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/devapidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index fa7374b..3f8505d 100644
--- a/devapidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/devapidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -766,7 +766,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name)
 Constructor
 
@@ -777,7 +777,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 org.apache.hadoop.conf.Configuration conf,
 https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name)
 Constructor
@@ -789,7 +789,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 org.apache.hadoop.conf.Configuration conf,
 byte[] name)
 Constructor
@@ -822,7 +822,7 @@ implements 
 
 buildMultiRowSpec
-protected https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String buildMultiRowSpec(byte[][] rows,
+protected https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String buildMultiRowSpec(byte[][] rows,
int maxVersions)
 
 
@@ -832,7 +832,7 @@ implements 
 
 buildResultFromModel
-protected Result[] buildResultFromModel(CellSetModel model)
+protected Result[] buildResultFromModel(CellSetModel model)
 
 
 
@@ -841,7 +841,7 @@ implements 
 
 buildModelFromPut
-protected CellSetModel buildModelFromPut(Put put)
+protected CellSetModel buildModelFromPut(Put put)
 
 
 
@@ -850,7 +850,7 @@ implements 
 
 getTableName
-public byte[] getTableName()
+public byte[] getTableName()
 
 
 
@@ -859,7 +859,7 @@ implements 
 
 getName
-public TableName getName()
+public TableName getName()
 Description copied from 
interface: Table
 Gets the fully qualified table name instance of this 
table.
 
@@ -874,7 +874,7 @@ implements 
 
 getConfiguration
-public org.apache.hadoop.conf.Configuration getConfiguration()
+public org.apache.hadoop.conf.Configuration getConfiguration()
 Description copied from 
interface: Table
 Returns the Configuration object used by this 
instance.
  
@@ -893,7 +893,7 @@ implements 
 getTableDescriptor
 https://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true";
 title="class or interface in java.lang">@Deprecated
-public HTableDescriptor getTableDescriptor()
+public HTableDescriptor getTableDescriptor()
 throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Deprecated. 
 Description copied from 
interface: Table
@@ -912,7 +912,7 @@ public 
 
 close
-public void close()
+public void close()
throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Releases any resources held or pending changes in internal 
buffers.
@@ -934,7 +934,7 @@ public 
 
 get
-public Result get(Get get)
+public Result get(Get get)
throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Extracts certain cells from a given row.
@@ -958,7 +958,7 @@ public 
 
 get
-public Result[] get(https://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true";
 title="class or interface in java.util">List gets)
+public Result[] get(https://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true";
 title="class or interface in java.util">List gets)
  throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Extracts specified cells from the given rows, as a 
batch.
@@ -984,7 +984,7 @@ public 
 
 getResults
-private Result[] getResults(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String spec)
+private Result[] getResults(https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">Str

[09/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
index 7aeb6fd..d2efdfe 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.PeriodicMemStoreFlusher.html
@@ -55,3756 +55,3746 @@
 047import 
java.util.concurrent.locks.ReentrantReadWriteLock;
 048import java.util.function.Function;
 049import 
javax.management.MalformedObjectNameException;
-050import javax.management.ObjectName;
-051import javax.servlet.http.HttpServlet;
-052import 
org.apache.commons.lang3.RandomUtils;
-053import 
org.apache.commons.lang3.StringUtils;
-054import 
org.apache.commons.lang3.SystemUtils;
-055import 
org.apache.hadoop.conf.Configuration;
-056import org.apache.hadoop.fs.FileSystem;
-057import org.apache.hadoop.fs.Path;
-058import 
org.apache.hadoop.hbase.Abortable;
-059import 
org.apache.hadoop.hbase.CacheEvictionStats;
-060import 
org.apache.hadoop.hbase.ChoreService;
-061import 
org.apache.hadoop.hbase.ClockOutOfSyncException;
-062import 
org.apache.hadoop.hbase.CoordinatedStateManager;
-063import 
org.apache.hadoop.hbase.DoNotRetryIOException;
-064import 
org.apache.hadoop.hbase.HBaseConfiguration;
-065import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-066import 
org.apache.hadoop.hbase.HConstants;
-067import 
org.apache.hadoop.hbase.HealthCheckChore;
-068import 
org.apache.hadoop.hbase.MetaTableAccessor;
-069import 
org.apache.hadoop.hbase.NotServingRegionException;
-070import 
org.apache.hadoop.hbase.PleaseHoldException;
-071import 
org.apache.hadoop.hbase.ScheduledChore;
-072import 
org.apache.hadoop.hbase.ServerName;
-073import 
org.apache.hadoop.hbase.Stoppable;
-074import 
org.apache.hadoop.hbase.TableDescriptors;
-075import 
org.apache.hadoop.hbase.TableName;
-076import 
org.apache.hadoop.hbase.YouAreDeadException;
-077import 
org.apache.hadoop.hbase.ZNodeClearer;
-078import 
org.apache.hadoop.hbase.client.ClusterConnection;
-079import 
org.apache.hadoop.hbase.client.Connection;
-080import 
org.apache.hadoop.hbase.client.ConnectionUtils;
-081import 
org.apache.hadoop.hbase.client.RegionInfo;
-082import 
org.apache.hadoop.hbase.client.RegionInfoBuilder;
-083import 
org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
-084import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-085import 
org.apache.hadoop.hbase.client.locking.EntityLock;
-086import 
org.apache.hadoop.hbase.client.locking.LockServiceClient;
-087import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
-088import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-089import 
org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination;
-090import 
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;
-091import 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-092import 
org.apache.hadoop.hbase.exceptions.RegionMovedException;
-093import 
org.apache.hadoop.hbase.exceptions.RegionOpeningException;
-094import 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
-095import 
org.apache.hadoop.hbase.executor.ExecutorService;
-096import 
org.apache.hadoop.hbase.executor.ExecutorType;
-097import 
org.apache.hadoop.hbase.fs.HFileSystem;
-098import 
org.apache.hadoop.hbase.http.InfoServer;
-099import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-100import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-101import 
org.apache.hadoop.hbase.io.hfile.HFile;
-102import 
org.apache.hadoop.hbase.io.util.MemorySizeUtil;
-103import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
-104import 
org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
-105import 
org.apache.hadoop.hbase.ipc.RpcClient;
-106import 
org.apache.hadoop.hbase.ipc.RpcClientFactory;
-107import 
org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-108import 
org.apache.hadoop.hbase.ipc.RpcServer;
-109import 
org.apache.hadoop.hbase.ipc.RpcServerInterface;
-110import 
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
-111import 
org.apache.hadoop.hbase.ipc.ServerRpcController;
-112import 
org.apache.hadoop.hbase.log.HBaseMarkers;
-113import 
org.apache.hadoop.hbase.master.HMaster;
-114import 
org.apache.hadoop.hbase.master.LoadBalancer;
-115import 
org.apache.hadoop.hbase.master.RegionState.State;
-116import 
org.apache.hadoop.hbase.mob.MobCacheConfig;
-117import 
org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
-118import 
org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
-119import 
org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
-120import 
org.apache.hadoop.hbase.quotas.QuotaUtil;

[22/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index 81dd9f9..fc92a63 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -123,910 +123,913 @@
 115  Iterator ii = 
quals.iterator();
 116  while (ii.hasNext()) {
 117
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-118sb.append(':');
-119Object o = ii.next();
-120// Puts use byte[] but 
Deletes use KeyValue
-121if (o instanceof byte[]) {
-122  
sb.append(toURLEncodedBytes((byte[])o));
+118Object o = ii.next();
+119// Puts use byte[] but 
Deletes use KeyValue
+120if (o instanceof byte[]) {
+121  sb.append(':');
+122  
sb.append(toURLEncodedBytes((byte[]) o));
 123} else if (o instanceof 
KeyValue) {
-124  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-125} else {
-126  throw new 
RuntimeException("object type not handled");
-127}
-128if (ii.hasNext()) {
-129  sb.append(',');
+124  if (((KeyValue) 
o).getQualifierLength() != 0) {
+125sb.append(':');
+126
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+127  }
+128} else {
+129  throw new 
RuntimeException("object type not handled");
 130}
-131  }
-132}
-133if (i.hasNext()) {
-134  sb.append(',');
+131if (ii.hasNext()) {
+132  sb.append(',');
+133}
+134  }
 135}
-136  }
-137}
-138if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-139  sb.append('/');
-140  sb.append(startTime);
-141  if (startTime != endTime) {
-142sb.append(',');
-143sb.append(endTime);
-144  }
-145} else if (endTime != Long.MAX_VALUE) 
{
-146  sb.append('/');
-147  sb.append(endTime);
-148}
-149if (maxVersions > 1) {
-150  sb.append("?v=");
-151  sb.append(maxVersions);
-152}
-153return sb.toString();
-154  }
-155
-156  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-157StringBuilder sb = new 
StringBuilder();
-158sb.append('/');
-159sb.append(Bytes.toString(name));
-160sb.append("/multiget/");
-161if (rows == null || rows.length == 0) 
{
-162  return sb.toString();
-163}
-164sb.append("?");
-165for(int i=0; i[08/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.html
index 7aeb6fd..d2efdfe 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.html
@@ -55,3756 +55,3746 @@
 047import 
java.util.concurrent.locks.ReentrantReadWriteLock;
 048import java.util.function.Function;
 049import 
javax.management.MalformedObjectNameException;
-050import javax.management.ObjectName;
-051import javax.servlet.http.HttpServlet;
-052import 
org.apache.commons.lang3.RandomUtils;
-053import 
org.apache.commons.lang3.StringUtils;
-054import 
org.apache.commons.lang3.SystemUtils;
-055import 
org.apache.hadoop.conf.Configuration;
-056import org.apache.hadoop.fs.FileSystem;
-057import org.apache.hadoop.fs.Path;
-058import 
org.apache.hadoop.hbase.Abortable;
-059import 
org.apache.hadoop.hbase.CacheEvictionStats;
-060import 
org.apache.hadoop.hbase.ChoreService;
-061import 
org.apache.hadoop.hbase.ClockOutOfSyncException;
-062import 
org.apache.hadoop.hbase.CoordinatedStateManager;
-063import 
org.apache.hadoop.hbase.DoNotRetryIOException;
-064import 
org.apache.hadoop.hbase.HBaseConfiguration;
-065import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-066import 
org.apache.hadoop.hbase.HConstants;
-067import 
org.apache.hadoop.hbase.HealthCheckChore;
-068import 
org.apache.hadoop.hbase.MetaTableAccessor;
-069import 
org.apache.hadoop.hbase.NotServingRegionException;
-070import 
org.apache.hadoop.hbase.PleaseHoldException;
-071import 
org.apache.hadoop.hbase.ScheduledChore;
-072import 
org.apache.hadoop.hbase.ServerName;
-073import 
org.apache.hadoop.hbase.Stoppable;
-074import 
org.apache.hadoop.hbase.TableDescriptors;
-075import 
org.apache.hadoop.hbase.TableName;
-076import 
org.apache.hadoop.hbase.YouAreDeadException;
-077import 
org.apache.hadoop.hbase.ZNodeClearer;
-078import 
org.apache.hadoop.hbase.client.ClusterConnection;
-079import 
org.apache.hadoop.hbase.client.Connection;
-080import 
org.apache.hadoop.hbase.client.ConnectionUtils;
-081import 
org.apache.hadoop.hbase.client.RegionInfo;
-082import 
org.apache.hadoop.hbase.client.RegionInfoBuilder;
-083import 
org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
-084import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-085import 
org.apache.hadoop.hbase.client.locking.EntityLock;
-086import 
org.apache.hadoop.hbase.client.locking.LockServiceClient;
-087import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
-088import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-089import 
org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination;
-090import 
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;
-091import 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-092import 
org.apache.hadoop.hbase.exceptions.RegionMovedException;
-093import 
org.apache.hadoop.hbase.exceptions.RegionOpeningException;
-094import 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
-095import 
org.apache.hadoop.hbase.executor.ExecutorService;
-096import 
org.apache.hadoop.hbase.executor.ExecutorType;
-097import 
org.apache.hadoop.hbase.fs.HFileSystem;
-098import 
org.apache.hadoop.hbase.http.InfoServer;
-099import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-100import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-101import 
org.apache.hadoop.hbase.io.hfile.HFile;
-102import 
org.apache.hadoop.hbase.io.util.MemorySizeUtil;
-103import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
-104import 
org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
-105import 
org.apache.hadoop.hbase.ipc.RpcClient;
-106import 
org.apache.hadoop.hbase.ipc.RpcClientFactory;
-107import 
org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-108import 
org.apache.hadoop.hbase.ipc.RpcServer;
-109import 
org.apache.hadoop.hbase.ipc.RpcServerInterface;
-110import 
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
-111import 
org.apache.hadoop.hbase.ipc.ServerRpcController;
-112import 
org.apache.hadoop.hbase.log.HBaseMarkers;
-113import 
org.apache.hadoop.hbase.master.HMaster;
-114import 
org.apache.hadoop.hbase.master.LoadBalancer;
-115import 
org.apache.hadoop.hbase.master.RegionState.State;
-116import 
org.apache.hadoop.hbase.mob.MobCacheConfig;
-117import 
org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
-118import 
org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
-119import 
org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
-120import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-121import 
org.apache.hadoop.hbase.quotas.RegionServerRpcQuotaManager;
-122import 
org.apache.hadoop.hbase.quotas.Regio

[20/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/checkstyle-aggregate.html
--
diff --git a/checkstyle-aggregate.html b/checkstyle-aggregate.html
index 5628338..8131ea5 100644
--- a/checkstyle-aggregate.html
+++ b/checkstyle-aggregate.html
@@ -7,7 +7,7 @@
   
 
 
-
+
 
 Apache HBase – Checkstyle Results
 
@@ -274,7 +274,7 @@
 3600
 0
 0
-15914
+15913
 
 Files
 
@@ -6517,7 +6517,7 @@
 org/apache/hadoop/hbase/regionserver/HRegionServer.java
 0
 0
-88
+87
 
 org/apache/hadoop/hbase/regionserver/HRegionServerCommandLine.java
 0
@@ -10219,7 +10219,7 @@
 
 
 http://checkstyle.sourceforge.net/config_blocks.html#NeedBraces";>NeedBraces
-1939
+1938
  Error
 
 coding
@@ -10312,12 +10312,12 @@
 http://checkstyle.sourceforge.net/config_javadoc.html#JavadocTagContinuationIndentation";>JavadocTagContinuationIndentation
 
 offset: "2"
-784
+798
  Error
 
 
 http://checkstyle.sourceforge.net/config_javadoc.html#NonEmptyAtclauseDescription";>NonEmptyAtclauseDescription
-3847
+3833
  Error
 
 misc
@@ -14526,7 +14526,7 @@
 
  Error
 javadoc
-NonEmptyAtclauseDescription
+JavadocTagContinuationIndentation
 Javadoc comment at column 26 has parse error. Missed HTML close tag 'arg'. 
Sometimes it means that close tag missed for one of previous tags.
 44
 
@@ -15162,7 +15162,7 @@
 
  Error
 javadoc
-NonEmptyAtclauseDescription
+JavadocTagContinuationIndentation
 Javadoc comment at column 4 has parse error. Missed HTML close tag 'pre'. 
Sometimes it means that close tag missed for one of previous tags.
 59
 
@@ -16917,7 +16917,7 @@
 
  Error
 javadoc
-NonEmptyAtclauseDescription
+JavadocTagContinuationIndentation
 Javadoc comment at column 19 has parse error. Details: no viable 
alternative at input '\n   *   List

[25/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/174c22ea
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/174c22ea
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/174c22ea

Branch: refs/heads/asf-site
Commit: 174c22ea2b32f6b39c31c8875b6cb4bf0778c9d6
Parents: 3573560
Author: jenkins 
Authored: Wed Apr 4 14:47:39 2018 +
Committer: jenkins 
Committed: Wed Apr 4 14:47:39 2018 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 36603 +
 .../hadoop/hbase/rest/client/RemoteHTable.html  |   116 +-
 .../hadoop/hbase/rest/client/RemoteHTable.html  |  1799 +-
 book.html   |   264 +-
 bulk-loads.html | 4 +-
 checkstyle-aggregate.html   | 13588 +++---
 checkstyle.rss  | 4 +-
 coc.html| 4 +-
 dependencies.html   | 4 +-
 dependency-convergence.html | 4 +-
 dependency-info.html| 4 +-
 dependency-management.html  | 4 +-
 devapidocs/constant-values.html |36 +-
 devapidocs/index-all.html   | 4 -
 .../hadoop/hbase/backup/package-tree.html   | 6 +-
 .../hadoop/hbase/client/package-tree.html   |20 +-
 .../hadoop/hbase/executor/package-tree.html | 2 +-
 .../hadoop/hbase/filter/package-tree.html   |10 +-
 .../hadoop/hbase/io/hfile/package-tree.html | 2 +-
 ...alancedQueueRpcExecutor.FastPathHandler.html |12 +-
 .../apache/hadoop/hbase/ipc/package-tree.html   | 4 +-
 .../hadoop/hbase/mapreduce/package-tree.html| 4 +-
 .../hbase/master/balancer/package-tree.html | 2 +-
 .../hadoop/hbase/master/package-tree.html   | 4 +-
 .../hbase/master/procedure/package-tree.html| 4 +-
 .../hadoop/hbase/monitoring/package-tree.html   | 2 +-
 .../org/apache/hadoop/hbase/package-tree.html   |16 +-
 .../hadoop/hbase/procedure2/package-tree.html   | 4 +-
 .../hadoop/hbase/quotas/package-tree.html   | 6 +-
 .../HRegionServer.CompactionChecker.html|14 +-
 .../HRegionServer.MovedRegionInfo.html  |16 +-
 .../HRegionServer.MovedRegionsCleaner.html  |16 +-
 .../HRegionServer.PeriodicMemStoreFlusher.html  |12 +-
 .../hbase/regionserver/HRegionServer.html   |   652 +-
 .../regionserver/MetricsRegionServerSource.html |   462 +-
 .../hadoop/hbase/regionserver/package-tree.html |18 +-
 .../regionserver/querymatcher/package-tree.html | 2 +-
 .../replication/regionserver/package-tree.html  | 2 +-
 .../RemoteHTable.CheckAndMutateBuilderImpl.html |28 +-
 .../rest/client/RemoteHTable.Scanner.Iter.html  |12 +-
 .../hbase/rest/client/RemoteHTable.Scanner.html |18 +-
 .../hadoop/hbase/rest/client/RemoteHTable.html  |   124 +-
 .../hadoop/hbase/security/package-tree.html | 2 +-
 .../hadoop/hbase/thrift/package-tree.html   | 2 +-
 .../apache/hadoop/hbase/util/package-tree.html  |10 +-
 .../apache/hadoop/hbase/wal/package-tree.html   | 2 +-
 .../org/apache/hadoop/hbase/Version.html| 6 +-
 ...alancedQueueRpcExecutor.FastPathHandler.html |85 +-
 .../ipc/FastPathBalancedQueueRpcExecutor.html   |85 +-
 .../HRegionServer.CompactionChecker.html|  7306 ++--
 .../HRegionServer.MovedRegionInfo.html  |  7306 ++--
 .../HRegionServer.MovedRegionsCleaner.html  |  7306 ++--
 .../HRegionServer.PeriodicMemStoreFlusher.html  |  7306 ++--
 .../hbase/regionserver/HRegionServer.html   |  7306 ++--
 .../regionserver/MetricsRegionServerSource.html |   602 +-
 .../ReplicationSinkManager.SinkPeer.html| 2 +-
 .../regionserver/ReplicationSinkManager.html| 2 +-
 .../RemoteHTable.CheckAndMutateBuilderImpl.html |  1799 +-
 .../rest/client/RemoteHTable.Scanner.Iter.html  |  1799 +-
 .../hbase/rest/client/RemoteHTable.Scanner.html |  1799 +-
 .../hadoop/hbase/rest/client/RemoteHTable.html  |  1799 +-
 export_control.html | 4 +-
 index.html  | 4 +-
 integration.html| 4 +-
 issue-tracking.html | 4 +-
 license.html| 4 +-
 mail-lists.html | 4 +-
 metrics.html| 4 +-
 old_news.html   | 4 +-
 plugin-management.html  | 4 +-
 plugins.html| 4 +-
 poweredbyhbase.html  

[03/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index 81dd9f9..fc92a63 100644
--- a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -123,910 +123,913 @@
 115  Iterator ii = 
quals.iterator();
 116  while (ii.hasNext()) {
 117
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-118sb.append(':');
-119Object o = ii.next();
-120// Puts use byte[] but 
Deletes use KeyValue
-121if (o instanceof byte[]) {
-122  
sb.append(toURLEncodedBytes((byte[])o));
+118Object o = ii.next();
+119// Puts use byte[] but 
Deletes use KeyValue
+120if (o instanceof byte[]) {
+121  sb.append(':');
+122  
sb.append(toURLEncodedBytes((byte[]) o));
 123} else if (o instanceof 
KeyValue) {
-124  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-125} else {
-126  throw new 
RuntimeException("object type not handled");
-127}
-128if (ii.hasNext()) {
-129  sb.append(',');
+124  if (((KeyValue) 
o).getQualifierLength() != 0) {
+125sb.append(':');
+126
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+127  }
+128} else {
+129  throw new 
RuntimeException("object type not handled");
 130}
-131  }
-132}
-133if (i.hasNext()) {
-134  sb.append(',');
+131if (ii.hasNext()) {
+132  sb.append(',');
+133}
+134  }
 135}
-136  }
-137}
-138if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-139  sb.append('/');
-140  sb.append(startTime);
-141  if (startTime != endTime) {
-142sb.append(',');
-143sb.append(endTime);
-144  }
-145} else if (endTime != Long.MAX_VALUE) 
{
-146  sb.append('/');
-147  sb.append(endTime);
-148}
-149if (maxVersions > 1) {
-150  sb.append("?v=");
-151  sb.append(maxVersions);
-152}
-153return sb.toString();
-154  }
-155
-156  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-157StringBuilder sb = new 
StringBuilder();
-158sb.append('/');
-159sb.append(Bytes.toString(name));
-160sb.append("/multiget/");
-161if (rows == null || rows.length == 0) 
{
-162  return sb.toString();
-163}
-164sb.append("?");
-165for(int i=0; i

[12/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.CompactionChecker.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.CompactionChecker.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.CompactionChecker.html
index 7aeb6fd..d2efdfe 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.CompactionChecker.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/HRegionServer.CompactionChecker.html
@@ -55,3756 +55,3746 @@
 047import 
java.util.concurrent.locks.ReentrantReadWriteLock;
 048import java.util.function.Function;
 049import 
javax.management.MalformedObjectNameException;
-050import javax.management.ObjectName;
-051import javax.servlet.http.HttpServlet;
-052import 
org.apache.commons.lang3.RandomUtils;
-053import 
org.apache.commons.lang3.StringUtils;
-054import 
org.apache.commons.lang3.SystemUtils;
-055import 
org.apache.hadoop.conf.Configuration;
-056import org.apache.hadoop.fs.FileSystem;
-057import org.apache.hadoop.fs.Path;
-058import 
org.apache.hadoop.hbase.Abortable;
-059import 
org.apache.hadoop.hbase.CacheEvictionStats;
-060import 
org.apache.hadoop.hbase.ChoreService;
-061import 
org.apache.hadoop.hbase.ClockOutOfSyncException;
-062import 
org.apache.hadoop.hbase.CoordinatedStateManager;
-063import 
org.apache.hadoop.hbase.DoNotRetryIOException;
-064import 
org.apache.hadoop.hbase.HBaseConfiguration;
-065import 
org.apache.hadoop.hbase.HBaseInterfaceAudience;
-066import 
org.apache.hadoop.hbase.HConstants;
-067import 
org.apache.hadoop.hbase.HealthCheckChore;
-068import 
org.apache.hadoop.hbase.MetaTableAccessor;
-069import 
org.apache.hadoop.hbase.NotServingRegionException;
-070import 
org.apache.hadoop.hbase.PleaseHoldException;
-071import 
org.apache.hadoop.hbase.ScheduledChore;
-072import 
org.apache.hadoop.hbase.ServerName;
-073import 
org.apache.hadoop.hbase.Stoppable;
-074import 
org.apache.hadoop.hbase.TableDescriptors;
-075import 
org.apache.hadoop.hbase.TableName;
-076import 
org.apache.hadoop.hbase.YouAreDeadException;
-077import 
org.apache.hadoop.hbase.ZNodeClearer;
-078import 
org.apache.hadoop.hbase.client.ClusterConnection;
-079import 
org.apache.hadoop.hbase.client.Connection;
-080import 
org.apache.hadoop.hbase.client.ConnectionUtils;
-081import 
org.apache.hadoop.hbase.client.RegionInfo;
-082import 
org.apache.hadoop.hbase.client.RegionInfoBuilder;
-083import 
org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
-084import 
org.apache.hadoop.hbase.client.TableDescriptorBuilder;
-085import 
org.apache.hadoop.hbase.client.locking.EntityLock;
-086import 
org.apache.hadoop.hbase.client.locking.LockServiceClient;
-087import 
org.apache.hadoop.hbase.conf.ConfigurationManager;
-088import 
org.apache.hadoop.hbase.conf.ConfigurationObserver;
-089import 
org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination;
-090import 
org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;
-091import 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
-092import 
org.apache.hadoop.hbase.exceptions.RegionMovedException;
-093import 
org.apache.hadoop.hbase.exceptions.RegionOpeningException;
-094import 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
-095import 
org.apache.hadoop.hbase.executor.ExecutorService;
-096import 
org.apache.hadoop.hbase.executor.ExecutorType;
-097import 
org.apache.hadoop.hbase.fs.HFileSystem;
-098import 
org.apache.hadoop.hbase.http.InfoServer;
-099import 
org.apache.hadoop.hbase.io.hfile.BlockCache;
-100import 
org.apache.hadoop.hbase.io.hfile.CacheConfig;
-101import 
org.apache.hadoop.hbase.io.hfile.HFile;
-102import 
org.apache.hadoop.hbase.io.util.MemorySizeUtil;
-103import 
org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
-104import 
org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
-105import 
org.apache.hadoop.hbase.ipc.RpcClient;
-106import 
org.apache.hadoop.hbase.ipc.RpcClientFactory;
-107import 
org.apache.hadoop.hbase.ipc.RpcControllerFactory;
-108import 
org.apache.hadoop.hbase.ipc.RpcServer;
-109import 
org.apache.hadoop.hbase.ipc.RpcServerInterface;
-110import 
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
-111import 
org.apache.hadoop.hbase.ipc.ServerRpcController;
-112import 
org.apache.hadoop.hbase.log.HBaseMarkers;
-113import 
org.apache.hadoop.hbase.master.HMaster;
-114import 
org.apache.hadoop.hbase.master.LoadBalancer;
-115import 
org.apache.hadoop.hbase.master.RegionState.State;
-116import 
org.apache.hadoop.hbase.mob.MobCacheConfig;
-117import 
org.apache.hadoop.hbase.procedure.RegionServerProcedureManagerHost;
-118import 
org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
-119import 
org.apache.hadoop.hbase.quotas.FileSystemUtilizationChore;
-120import 
org.apache.hadoop.hbase.quotas.QuotaUtil;
-121import 
org.apache.hadoop

[13/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.FastPathHandler.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.FastPathHandler.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.FastPathHandler.html
index 23eeb7d..a27d588 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.FastPathHandler.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.FastPathHandler.html
@@ -88,48 +88,49 @@
 080// if an empty queue of CallRunners 
so we are available for direct handoff when one comes in.
 081final Deque 
fastPathHandlerStack;
 082// Semaphore to coordinate loading of 
fastpathed loadedTask and our running it.
-083private Semaphore semaphore = new 
Semaphore(0);
-084// The task we get when 
fast-pathing.
-085private CallRunner 
loadedCallRunner;
-086
-087FastPathHandler(String name, double 
handlerFailureThreshhold, BlockingQueue q,
-088final AtomicInteger 
activeHandlerCount,
-089final 
Deque fastPathHandlerStack) {
-090  super(name, 
handlerFailureThreshhold, q, activeHandlerCount);
-091  this.fastPathHandlerStack = 
fastPathHandlerStack;
-092}
-093
-094@Override
-095protected CallRunner getCallRunner() 
throws InterruptedException {
-096  // Get a callrunner if one in the 
Q.
-097  CallRunner cr = this.q.poll();
-098  if (cr == null) {
-099// Else, if a 
fastPathHandlerStack present and no callrunner in Q, register ourselves for
-100// the fastpath handoff done via 
fastPathHandlerStack.
-101if (this.fastPathHandlerStack != 
null) {
-102  
this.fastPathHandlerStack.push(this);
-103  this.semaphore.acquire();
-104  cr = this.loadedCallRunner;
-105  this.loadedCallRunner = null;
-106} else {
-107  // No fastpath available. Block 
until a task comes available.
-108  cr = super.getCallRunner();
-109}
-110  }
-111  return cr;
-112}
-113
-114/**
-115 * @param task Task gotten via 
fastpath.
-116 * @return True if we successfully 
loaded our task
-117 */
-118boolean loadCallRunner(final 
CallRunner cr) {
-119  this.loadedCallRunner = cr;
-120  this.semaphore.release();
-121  return true;
-122}
-123  }
-124}
+083// UNFAIR synchronization.
+084private Semaphore semaphore = new 
Semaphore(0);
+085// The task we get when 
fast-pathing.
+086private CallRunner 
loadedCallRunner;
+087
+088FastPathHandler(String name, double 
handlerFailureThreshhold, BlockingQueue q,
+089final AtomicInteger 
activeHandlerCount,
+090final 
Deque fastPathHandlerStack) {
+091  super(name, 
handlerFailureThreshhold, q, activeHandlerCount);
+092  this.fastPathHandlerStack = 
fastPathHandlerStack;
+093}
+094
+095@Override
+096protected CallRunner getCallRunner() 
throws InterruptedException {
+097  // Get a callrunner if one in the 
Q.
+098  CallRunner cr = this.q.poll();
+099  if (cr == null) {
+100// Else, if a 
fastPathHandlerStack present and no callrunner in Q, register ourselves for
+101// the fastpath handoff done via 
fastPathHandlerStack.
+102if (this.fastPathHandlerStack != 
null) {
+103  
this.fastPathHandlerStack.push(this);
+104  this.semaphore.acquire();
+105  cr = this.loadedCallRunner;
+106  this.loadedCallRunner = null;
+107} else {
+108  // No fastpath available. Block 
until a task comes available.
+109  cr = super.getCallRunner();
+110}
+111  }
+112  return cr;
+113}
+114
+115/**
+116 * @param cr Task gotten via 
fastpath.
+117 * @return True if we successfully 
loaded our task
+118 */
+119boolean loadCallRunner(final 
CallRunner cr) {
+120  this.loadedCallRunner = cr;
+121  this.semaphore.release();
+122  return true;
+123}
+124  }
+125}
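The diff above rearranges `FastPathHandler`, whose core idea is a direct handoff: an idle handler parks itself on a shared stack and blocks on a private semaphore, and a dispatcher pops a parked handler and loads a task into it, bypassing the queue. A standalone sketch of that pattern (names simplified; `String` stands in for `CallRunner`, and this is an illustration, not the actual HBase class):

```java
import java.util.Deque;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicReference;

public class Main {
    static class Handler {
        final Semaphore semaphore = new Semaphore(0); // coordinates the handoff
        volatile String loadedTask;                   // task set by the dispatcher

        // Dispatcher side: load a task and wake the parked handler.
        boolean loadTask(String task) {
            this.loadedTask = task;
            this.semaphore.release();
            return true;
        }

        // Handler side: prefer a queued task; otherwise park for a direct handoff.
        String getTask(BlockingQueue<String> q, Deque<Handler> stack) throws InterruptedException {
            String t = q.poll();
            if (t == null) {
                stack.push(this);    // advertise availability for fast-pathing
                semaphore.acquire(); // block until loadTask() releases us
                t = loadedTask;
                loadedTask = null;
            }
            return t;
        }
    }

    static String demo() {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(4); // empty: forces the fast path
        Deque<Handler> stack = new ConcurrentLinkedDeque<>();
        Handler h = new Handler();
        AtomicReference<String> got = new AtomicReference<>();
        Thread worker = new Thread(() -> {
            try {
                got.set(h.getTask(q, stack));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        try {
            worker.start();
            while (stack.isEmpty()) {
                Thread.sleep(5);            // wait for the handler to park itself
            }
            stack.pop().loadTask("task-1"); // direct handoff; the queue is never touched
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return got.get();
    }

    public static void main(String[] args) {
        System.out.println("fast-path delivered: " + demo()); // fast-path delivered: task-1
    }
}
```

The semaphore's release/acquire pair gives the handoff its happens-before edge, which is why the parked handler reliably sees the loaded task.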
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.html
index 23eeb7d..a27d588 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/ipc/FastPathBalancedQueueRpcExecutor.html
@@ -88,48 +88,49 @@
 080// if an empty queue of C

[24/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index 4b723bf..d691d6b 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,16 +5,16 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.15, based on Prawn 2.2.2)
 /Producer (Apache HBase Team)
-/ModDate (D:20180403144547+00'00')
-/CreationDate (D:20180403144547+00'00')
+/ModDate (D:20180404144554+00'00')
+/CreationDate (D:20180404144554+00'00')
 >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 26 0 R
-/Outlines 4533 0 R
-/PageLabels 4759 0 R
+/Outlines 4543 0 R
+/PageLabels 4769 0 R
 /PageMode /UseOutlines
 /OpenAction [7 0 R /FitH 842.89]
 /ViewerPreferences << /DisplayDocTitle true
@@ -23,8 +23,8 @@ endobj
 endobj
 3 0 obj
 << /Type /Pages
-/Count 710
-/Kids [7 0 R 12 0 R 14 0 R 16 0 R 18 0 R 20 0 R 22 0 R 24 0 R 44 0 R 47 0 R 50 
0 R 54 0 R 61 0 R 65 0 R 67 0 R 69 0 R 76 0 R 79 0 R 81 0 R 87 0 R 90 0 R 92 0 
R 94 0 R 101 0 R 107 0 R 112 0 R 114 0 R 130 0 R 135 0 R 142 0 R 151 0 R 159 0 
R 168 0 R 179 0 R 183 0 R 185 0 R 189 0 R 198 0 R 207 0 R 215 0 R 224 0 R 229 0 
R 238 0 R 246 0 R 255 0 R 268 0 R 275 0 R 285 0 R 293 0 R 301 0 R 308 0 R 316 0 
R 322 0 R 328 0 R 335 0 R 343 0 R 354 0 R 363 0 R 375 0 R 383 0 R 391 0 R 398 0 
R 407 0 R 415 0 R 425 0 R 433 0 R 440 0 R 449 0 R 461 0 R 470 0 R 477 0 R 485 0 
R 493 0 R 502 0 R 509 0 R 514 0 R 518 0 R 523 0 R 527 0 R 543 0 R 554 0 R 558 0 
R 573 0 R 578 0 R 583 0 R 585 0 R 587 0 R 590 0 R 592 0 R 594 0 R 602 0 R 608 0 
R 613 0 R 618 0 R 625 0 R 635 0 R 643 0 R 647 0 R 651 0 R 653 0 R 664 0 R 674 0 
R 681 0 R 693 0 R 704 0 R 713 0 R 721 0 R 727 0 R 730 0 R 734 0 R 738 0 R 741 0 
R 744 0 R 746 0 R 749 0 R 754 0 R 756 0 R 761 0 R 765 0 R 770 0 R 774 0 R 777 0 
R 783 0 R 785 0 R 790 0 R 798 0 R 800 0 
 R 803 0 R 806 0 R 810 0 R 813 0 R 828 0 R 835 0 R 844 0 R 855 0 R 861 0 R 871 
0 R 882 0 R 885 0 R 889 0 R 892 0 R 897 0 R 906 0 R 914 0 R 918 0 R 922 0 R 927 
0 R 931 0 R 933 0 R 948 0 R 959 0 R 964 0 R 970 0 R 973 0 R 981 0 R 989 0 R 994 
0 R 1000 0 R 1005 0 R 1007 0 R 1009 0 R 1011 0 R 1021 0 R 1029 0 R 1033 0 R 
1040 0 R 1047 0 R 1055 0 R 1059 0 R 1065 0 R 1070 0 R 1078 0 R 1082 0 R 1087 0 
R 1089 0 R 1095 0 R 1102 0 R 1104 0 R  0 R 1122 0 R 1126 0 R 1128 0 R 1130 
0 R 1134 0 R 1137 0 R 1142 0 R 1145 0 R 1157 0 R 1161 0 R 1167 0 R 1175 0 R 
1180 0 R 1184 0 R 1188 0 R 1190 0 R 1193 0 R 1196 0 R 1199 0 R 1203 0 R 1207 0 
R 1211 0 R 1216 0 R 1220 0 R 1223 0 R 1225 0 R 1235 0 R 1238 0 R 1246 0 R 1255 
0 R 1261 0 R 1265 0 R 1267 0 R 1277 0 R 1280 0 R 1286 0 R 1295 0 R 1298 0 R 
1305 0 R 1313 0 R 1315 0 R 1317 0 R 1326 0 R 1328 0 R 1330 0 R 1333 0 R 1335 0 
R 1337 0 R 1339 0 R 1341 0 R 1344 0 R 1348 0 R 1353 0 R 1355 0 R 1357 0 R 1359 
0 R 1364 0 R 1372 0 R 1377 0 R 1380 0 R 1382 0 R 1385 0 R
  1389 0 R 1393 0 R 1396 0 R 1398 0 R 1400 0 R 1403 0 R 1409 0 R 1414 0 R 1422 
0 R 1436 0 R 1450 0 R 1454 0 R 1459 0 R 1472 0 R 1477 0 R 1492 0 R 1500 0 R 
1504 0 R 1512 0 R 1527 0 R 1541 0 R 1553 0 R 1558 0 R 1564 0 R 1573 0 R 1579 0 
R 1584 0 R 1592 0 R 1595 0 R 1605 0 R 1611 0 R 1614 0 R 1627 0 R 1629 0 R 1635 
0 R 1639 0 R 1641 0 R 1649 0 R 1657 0 R 1661 0 R 1663 0 R 1665 0 R 1677 0 R 
1683 0 R 1692 0 R 1698 0 R 1712 0 R 1717 0 R 1726 0 R 1734 0 R 1740 0 R 1745 0 
R 1751 0 R 1754 0 R 1757 0 R 1762 0 R 1766 0 R 1773 0 R 1777 0 R 1782 0 R 1791 
0 R 1796 0 R 1801 0 R 1803 0 R 1811 0 R 1818 0 R 1824 0 R 1829 0 R 1833 0 R 
1836 0 R 1841 0 R 1846 0 R 1854 0 R 1856 0 R 1858 0 R 1861 0 R 1869 0 R 1872 0 
R 1879 0 R 1888 0 R 1891 0 R 1896 0 R 1898 0 R 1901 0 R 1906 0 R 1909 0 R 1911 
0 R 1914 0 R 1917 0 R 1920 0 R 1931 0 R 1936 0 R 1941 0 R 1943 0 R 1952 0 R 
1959 0 R 1967 0 R 1973 0 R 1978 0 R 1980 0 R 1989 0 R 1998 0 R 2009 0 R 2015 0 
R 2022 0 R 2024 0 R 2029 0 R 2031 0 R 2033 0 R 2036 0 R 2039 0
  R 2042 0 R 2047 0 R 2051 0 R 2062 0 R 2065 0 R 2070 0 R 2073 0 R 2075 0 R 
2080 0 R 2090 0 R 2092 0 R 2094 0 R 2096 0 R 2098 0 R 2101 0 R 2103 0 R 2105 0 
R 2108 0 R 2110 0 R 2112 0 R 2117 0 R 2122 0 R 2131 0 R 2133 0 R 2135 0 R 2142 
0 R 2144 0 R 2149 0 R 2151 0 R 2153 0 R 2160 0 R 2165 0 R 2169 0 R 2173 0 R 
2178 0 R 2180 0 R 2182 0 R 2186 0 R 2189 0 R 2191 0 R 2193 0 R 2197 0 R 2199 0 
R 2202 0 R 2204 0 R 2206 0 R 2208 0 R 2215 0 R 2218 0 R 2223 0 R 2225 0 R 2227 
0 R 2229 0 R 2231 0 R 2239 0 R 2250 0 R 2264 0 R 2275 0 R 2279 0 R 2284 0 R 
2288 0 R 2291 0 R 2296 0 R 2301 0 R 2303 0 R 2306 0 R 2308 0 R 2310 0 R 2312 0 
R 2317 0 R 2319 0 R 2332 0 R 2335 0 R 2343 0 R 2349 0 R 2361 0 R 2375 0 R 2388 
0 R 2405 0 R 2409 0 R 2411 0 R 2415 0 R 2433 0 R 2440 0 R 2452 0 R 2456 0 R 
2460 0 R 2469 0 R 2481 0 R 2486 0 R 2496 0 R 2509 0 R 2529 0 R 2538 0 R 2541 0 
R 2550 0 R 2567 0 R 2574 0 R 2577 0 R 2582 0 R 2586 0 R 2589 0 R 2598 0 R 2606 
0 R 2610 0 R 2612 0 R 2616 0 R 2630 0

[23/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html 
b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
index 1eecc9b..d05b567 100644
--- a/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
+++ b/apidocs/org/apache/hadoop/hbase/rest/client/RemoteHTable.html
@@ -611,7 +611,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name)
 Constructor
 
@@ -622,7 +622,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 org.apache.hadoop.conf.Configuration conf,
 https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name)
 Constructor
@@ -634,7 +634,7 @@ implements 
 
 RemoteHTable
-public RemoteHTable(Client client,
+public RemoteHTable(Client client,
 org.apache.hadoop.conf.Configuration conf,
 byte[] name)
 Constructor
@@ -667,7 +667,7 @@ implements 
 
 buildMultiRowSpec
-protected https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String buildMultiRowSpec(byte[][] rows,
+protected https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String buildMultiRowSpec(byte[][] rows,
int maxVersions)
 
 
@@ -677,7 +677,7 @@ implements 
 
 buildResultFromModel
-protected Result[] buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModel model)
+protected Result[] buildResultFromModel(org.apache.hadoop.hbase.rest.model.CellSetModel model)
 
 
 
@@ -686,7 +686,7 @@ implements 
 
 buildModelFromPut
-protected org.apache.hadoop.hbase.rest.model.CellSetModel buildModelFromPut(Put put)
+protected org.apache.hadoop.hbase.rest.model.CellSetModel buildModelFromPut(Put put)
 
 
 
@@ -695,7 +695,7 @@ implements 
 
 getTableName
-public byte[] getTableName()
+public byte[] getTableName()
 
 
 
@@ -704,7 +704,7 @@ implements 
 
 getName
-public TableName getName()
+public TableName getName()
 Description copied from 
interface: Table
 Gets the fully qualified table name instance of this 
table.
 
@@ -719,7 +719,7 @@ implements 
 
 getConfiguration
-public org.apache.hadoop.conf.Configuration getConfiguration()
+public org.apache.hadoop.conf.Configuration getConfiguration()
 Description copied from 
interface: Table
 Returns the Configuration object used by this 
instance.
  
@@ -738,7 +738,7 @@ implements 
 getTableDescriptor
 https://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true";
 title="class or interface in java.lang">@Deprecated
-public HTableDescriptor getTableDescriptor()
+public HTableDescriptor getTableDescriptor()
 throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Deprecated. 
 Description copied from 
interface: Table
@@ -757,7 +757,7 @@ public 
 
 close
-public void close()
+public void close()
throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Releases any resources held or pending changes in internal 
buffers.
@@ -779,7 +779,7 @@ public 
 
 get
-public Result get(Get get)
+public Result get(Get get)
throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Extracts certain cells from a given row.
@@ -803,7 +803,7 @@ public 
 
 get
-public Result[] get(https://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true";
 title="class or interface in java.util">List gets)
+public Result[] get(https://docs.oracle.com/javase/8/docs/api/java/util/List.html?is-external=true";
 title="class or interface in java.util">List gets)
  throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: Table
 Extracts specified cells from the given rows, as a 
batch.
@@ -829,7 +829,7 @@ public 
 
 exists
-public boolean exists(Get get)
+public boolean exists(Get get)
throws https://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface i

[07/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
index ef127bf..2cdcb06 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.html
@@ -246,309 +246,313 @@
 238  String MIN_STORE_FILE_AGE = 
"minStoreFileAge";
 239  String AVG_STORE_FILE_AGE = 
"avgStoreFileAge";
 240  String NUM_REFERENCE_FILES = 
"numReferenceFiles";
-241  String MAX_STORE_FILE_AGE_DESC = "Max 
age of store files hosted on this region server";
-242  String MIN_STORE_FILE_AGE_DESC = "Min 
age of store files hosted on this region server";
-243  String AVG_STORE_FILE_AGE_DESC = 
"Average age of store files hosted on this region server";
-244  String NUM_REFERENCE_FILES_DESC = 
"Number of reference file on this region server";
+241  String MAX_STORE_FILE_AGE_DESC = "Max 
age of store files hosted on this RegionServer";
+242  String MIN_STORE_FILE_AGE_DESC = "Min 
age of store files hosted on this RegionServer";
+243  String AVG_STORE_FILE_AGE_DESC = 
"Average age of store files hosted on this RegionServer";
+244  String NUM_REFERENCE_FILES_DESC = 
"Number of reference file on this RegionServer";
 245  String STOREFILE_SIZE_DESC = "Size of 
storefiles being served.";
 246  String TOTAL_REQUEST_COUNT = 
"totalRequestCount";
 247  String TOTAL_REQUEST_COUNT_DESC =
-248  "Total number of requests this 
RegionServer has answered.";
-249  String TOTAL_ROW_ACTION_REQUEST_COUNT = 
"totalRowActionRequestCount";
-250  String 
TOTAL_ROW_ACTION_REQUEST_COUNT_DESC =
-251  "Total number of region requests 
this RegionServer has answered, count by row-level action";
-252  String READ_REQUEST_COUNT = 
"readRequestCount";
-253  String READ_REQUEST_COUNT_DESC =
-254  "Number of read requests this 
region server has answered.";
-255  String FILTERED_READ_REQUEST_COUNT = 
"filteredReadRequestCount";
-256  String FILTERED_READ_REQUEST_COUNT_DESC 
=
-257"Number of filtered read requests 
this region server has answered.";
-258  String WRITE_REQUEST_COUNT = 
"writeRequestCount";
-259  String WRITE_REQUEST_COUNT_DESC =
-260  "Number of mutation requests this 
region server has answered.";
-261  String CHECK_MUTATE_FAILED_COUNT = 
"checkMutateFailedCount";
-262  String CHECK_MUTATE_FAILED_COUNT_DESC 
=
-263  "Number of Check and Mutate calls 
that failed the checks.";
-264  String CHECK_MUTATE_PASSED_COUNT = 
"checkMutatePassedCount";
-265  String CHECK_MUTATE_PASSED_COUNT_DESC 
=
-266  "Number of Check and Mutate calls 
that passed the checks.";
-267  String STOREFILE_INDEX_SIZE = 
"storeFileIndexSize";
-268  String STOREFILE_INDEX_SIZE_DESC = 
"Size of indexes in storefiles on disk.";
-269  String STATIC_INDEX_SIZE = 
"staticIndexSize";
-270  String STATIC_INDEX_SIZE_DESC = 
"Uncompressed size of the static indexes.";
-271  String STATIC_BLOOM_SIZE = 
"staticBloomSize";
-272  String STATIC_BLOOM_SIZE_DESC =
-273  "Uncompressed size of the static 
bloom filters.";
-274  String NUMBER_OF_MUTATIONS_WITHOUT_WAL 
= "mutationsWithoutWALCount";
-275  String 
NUMBER_OF_MUTATIONS_WITHOUT_WAL_DESC =
-276  "Number of mutations that have been 
sent by clients with the write ahead logging turned off.";
-277  String DATA_SIZE_WITHOUT_WAL = 
"mutationsWithoutWALSize";
-278  String DATA_SIZE_WITHOUT_WAL_DESC =
-279  "Size of data that has been sent by 
clients with the write ahead logging turned off.";
-280  String PERCENT_FILES_LOCAL = 
"percentFilesLocal";
-281  String PERCENT_FILES_LOCAL_DESC =
-282  "The percent of HFiles that are 
stored on the local hdfs data node.";
-283  String 
PERCENT_FILES_LOCAL_SECONDARY_REGIONS = "percentFilesLocalSecondaryRegions";
-284  String 
PERCENT_FILES_LOCAL_SECONDARY_REGIONS_DESC =
-285"The percent of HFiles used by 
secondary regions that are stored on the local hdfs data node.";
-286  String SPLIT_QUEUE_LENGTH = 
"splitQueueLength";
-287  String SPLIT_QUEUE_LENGTH_DESC = 
"Length of the queue for splits.";
-288  String COMPACTION_QUEUE_LENGTH = 
"compactionQueueLength";
-289  String LARGE_COMPACTION_QUEUE_LENGTH = 
"largeCompactionQueueLength";
-290  String SMALL_COMPACTION_QUEUE_LENGTH = 
"smallCompactionQueueLength";
-291  String COMPACTION_QUEUE_LENGTH_DESC = 
"Length of the queue for compactions.";
-292  String 
LARGE_COMPACTION_QUEUE_LENGTH_DESC = "Length of the queue for compactions with 
input size "
-293  + "larger than throttle threshold 
(2.5GB by default)";
-294  String 
SMALL_COMPACTION_QUEUE_LENGTH_DESC = "Length of the queue for compactions with 
input size "
-295  + "smaller

[04/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.html
--
diff --git 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.html
 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.html
index 81dd9f9..fc92a63 100644
--- 
a/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.html
+++ 
b/devapidocs/src-html/org/apache/hadoop/hbase/rest/client/RemoteHTable.Scanner.html
@@ -123,910 +123,913 @@
 115  Iterator ii = 
quals.iterator();
 116  while (ii.hasNext()) {
 117
sb.append(toURLEncodedBytes((byte[])e.getKey()));
-118sb.append(':');
-119Object o = ii.next();
-120// Puts use byte[] but 
Deletes use KeyValue
-121if (o instanceof byte[]) {
-122  
sb.append(toURLEncodedBytes((byte[])o));
+118Object o = ii.next();
+119// Puts use byte[] but 
Deletes use KeyValue
+120if (o instanceof byte[]) {
+121  sb.append(':');
+122  
sb.append(toURLEncodedBytes((byte[]) o));
 123} else if (o instanceof 
KeyValue) {
-124  
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue)o)));
-125} else {
-126  throw new 
RuntimeException("object type not handled");
-127}
-128if (ii.hasNext()) {
-129  sb.append(',');
+124  if (((KeyValue) 
o).getQualifierLength() != 0) {
+125sb.append(':');
+126
sb.append(toURLEncodedBytes(CellUtil.cloneQualifier((KeyValue) o)));
+127  }
+128} else {
+129  throw new 
RuntimeException("object type not handled");
 130}
-131  }
-132}
-133if (i.hasNext()) {
-134  sb.append(',');
+131if (ii.hasNext()) {
+132  sb.append(',');
+133}
+134  }
 135}
-136  }
-137}
-138if (startTime >= 0 && 
endTime != Long.MAX_VALUE) {
-139  sb.append('/');
-140  sb.append(startTime);
-141  if (startTime != endTime) {
-142sb.append(',');
-143sb.append(endTime);
-144  }
-145} else if (endTime != Long.MAX_VALUE) 
{
-146  sb.append('/');
-147  sb.append(endTime);
-148}
-149if (maxVersions > 1) {
-150  sb.append("?v=");
-151  sb.append(maxVersions);
-152}
-153return sb.toString();
-154  }
-155
-156  protected String 
buildMultiRowSpec(final byte[][] rows, int maxVersions) {
-157StringBuilder sb = new 
StringBuilder();
-158sb.append('/');
-159sb.append(Bytes.toString(name));
-160sb.append("/multiget/");
-161if (rows == null || rows.length == 0) 
{
-162  return sb.toString();
-163}
-164sb.append("?");
-165for(int i=0; i

[17/25] hbase-site git commit: Published site at 0c0fe05bc410bcfcccaa19d4be96834cc28f9317.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/174c22ea/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.html 
b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.html
index a55d21c..be246fb 100644
--- a/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.html
+++ b/devapidocs/org/apache/hadoop/hbase/regionserver/HRegionServer.html
@@ -123,7 +123,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.LimitedPrivate(value="Tools")
-public class HRegionServer
+public class HRegionServer
 extends HasThread
 implements RegionServerServices, LastSequenceId, 
ConfigurationObserver
 HRegionServer makes a set of HRegions available to clients. 
It checks in with
@@ -363,58 +363,52 @@ implements msgInterval 
 
 
-private https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html?is-external=true";
 title="class or interface in javax.management">ObjectName
-mxBean
-MX Bean for RegionServerInfo
-
-
-
 (package private) ServerNonceManager
 nonceManager
 Nonce manager.
 
 
-
+
 private ScheduledChore
 nonceManagerChore
 The nonce manager chore.
 
 
-
+
 protected int
 numRegionsToReport 
 
-
+
 (package private) int
 numRetries 
 
-
+
 (package private) https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/AtomicBoolean.html?is-external=true";
 title="class or interface in 
java.util.concurrent.atomic">AtomicBoolean
 online 
 
-
+
 protected https://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,HRegion>
 onlineRegions
 Map of regions currently being served by this region 
server.
 
 
-
+
 private int
 operationTimeout 
 
-
+
 private JvmPauseMonitor
 pauseMonitor 
 
-
+
 (package private) ScheduledChore
 periodicFlusher 
 
-
+
 private RemoteProcedureResultReporter
 procedureResultReporter 
 
-
+
 protected https://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,https://docs.oracle.com/javase/8/docs/api/java/net/InetSocketAddress.html?is-external=true";
 title="class or interface in java.net">InetSocketAddress[]>
 regionFavoredNodesMap
 Map of encoded region names to the DataNode locations they 
should be hosted on
@@ -422,177 +416,177 @@ implements 
+
 static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String
 REGIONSERVER
 region server process name
 
 
-
+
 private RegionServerAccounting
 regionServerAccounting 
 
-
+
 protected https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true";
 title="class or interface in 
java.util.concurrent">ConcurrentMapBoolean>
 regionsInTransitionInRS 
 
-
+
 protected ReplicationSinkService
 replicationSinkHandler 
 
-
+
 protected ReplicationSourceService
 replicationSourceHandler 
 
-
+
 private org.apache.hadoop.fs.Path
 rootDir 
 
-
+
 (package private) https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ConcurrentMap.html?is-external=true";
 title="class or interface in java.util.concurrent">ConcurrentMapString,https://docs.oracle.com/javase/8/docs/api/java/lang/Integer.html?is-external=true";
 title="class or interface in java.lang">Integer>
 rowlocks 
 
-
+
 (package private) RpcClient
 rpcClient 
 
-
+
 private RpcControllerFactory
 rpcControllerFactory 
 
-
+
 private RpcRetryingCallerFactory
 rpcRetryingCallerFactory 
 
-
+
 protected RSRpcServices
 rpcServices 
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String
 RS_HOSTNAME_DISABLE_MASTER_REVERSEDNS_KEY 
 
-
+
 (package private) static https://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String
 RS_HOSTNAME_KEY 
 
-
+
 private RegionServerCoprocessorHost
 rsHost 
 
-
+
 private RegionServerProcedureManagerHost
 rspmHost 
 
-
+
 private RegionServerRpcQuotaManager
 rsQuotaManager 
 
-
+
 private RegionServerSpaceQuotaManager
 rsSpaceQuotaManager 
 
-
+
 private 
org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos.RegionServerStatusService.BlockingInterface
 rssStub 
 
-
+
 protected SecureBulkLoadManager
 sec

hbase git commit: HBASE-20337 Update the doc on how to setup shortcircuit reads; its stale

Repository: hbase
Updated Branches:
  refs/heads/master 0c0fe05bc -> d60decd95


HBASE-20337 Update the doc on how to setup shortcircuit reads; its stale


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d60decd9
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d60decd9
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d60decd9

Branch: refs/heads/master
Commit: d60decd959d4556caa54a3f355e246372d0147e5
Parents: 0c0fe05
Author: Michael Stack 
Authored: Tue Apr 3 10:27:38 2018 -0700
Committer: Michael Stack 
Committed: Wed Apr 4 11:20:58 2018 -0700

--
 src/main/asciidoc/_chapters/schema_design.adoc | 26 ++---
 1 file changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d60decd9/src/main/asciidoc/_chapters/schema_design.adoc
--
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc 
b/src/main/asciidoc/_chapters/schema_design.adoc
index 4cd7656..12d449b 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1148,16 +1148,36 @@ Detect regionserver failure as fast as reasonable. Set 
the following parameters:
 - `dfs.namenode.avoid.read.stale.datanode = true`
 - `dfs.namenode.avoid.write.stale.datanode = true`
 
+[[shortcircuit.reads]]
 ===  Optimize on the Server Side for Low Latency
-
-* Skip the network for local blocks. In `hbase-site.xml`, set the following 
parameters:
+Skip the network for local blocks when the RegionServer goes to read from HDFS 
by exploiting HDFS's
+link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html[Short-Circuit
 Local Reads] facility.
+Note how setup must be done both at the datanode and on the dfsclient ends of 
the connection -- i.e. at the RegionServer
+and how both ends need to have loaded the hadoop native `.so` library.
+After configuring your hadoop setting _dfs.client.read.shortcircuit_ to _true_ 
and configuring
+the _dfs.domain.socket.path_ path for the datanode and dfsclient to share and 
restarting, next configure
+the regionserver/dfsclient side.
+
+* In `hbase-site.xml`, set the following parameters:
 - `dfs.client.read.shortcircuit = true`
-- `dfs.client.read.shortcircuit.buffer.size = 131072` (Important to avoid OOME)
+- `dfs.client.read.shortcircuit.skip.checksum = true` so we don't double 
checksum (HBase does its own checksumming to save on i/os. See 
<> for more on this.)
+- `dfs.domain.socket.path` to match what was set for the datanodes.
+- `dfs.client.read.shortcircuit.buffer.size = 131072` Important to avoid OOME 
-- hbase has a default it uses if unset, see 
`hbase.dfs.client.read.shortcircuit.buffer.size`; its default is 131072.
 * Ensure data locality. In `hbase-site.xml`, set 
`hbase.hstore.min.locality.to.skip.major.compact = 0.7` (Meaning that 0.7 \<= n 
\<= 1)
 * Make sure DataNodes have enough handlers for block transfers. In 
`hdfs-site.xml`, set the following parameters:
 - `dfs.datanode.max.xcievers >= 8192`
 - `dfs.datanode.handler.count =` number of spindles
 
Check the RegionServer logs after restart. You should only see complaints if 
something is misconfigured.
Otherwise, short-circuit read operates quietly in the background. It does not 
provide metrics, so
there is no direct visibility into how effective it is, but read latencies should 
show a marked
improvement, especially with good data locality, lots of random reads, and a 
dataset larger than the available cache.
+
+For more on short-circuit reads, see Colin's old blog on rollout,
+link:http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/[How
 Improved Short-Circuit Local Reads Bring Better Performance and Security to 
Hadoop].
+The link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] issue also 
makes for an
+interesting read showing the HDFS community at its best (caveat a few 
comments).
+
 ===  JVM Tuning
 
   Tune JVM GC for low collection latencies
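The configuration steps in the diff above can be collected in one place for review. A sketch only: it assembles the recommended client-side `hbase-site.xml` properties as a map and prints them as property elements; the socket path argument is an assumed example value (not from the commit) and must match whatever `dfs.domain.socket.path` the datanodes were configured with.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    static Map<String, String> shortCircuitProps(String domainSocketPath) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("dfs.client.read.shortcircuit", "true");
        p.put("dfs.client.read.shortcircuit.skip.checksum", "true"); // HBase checksums on its own
        p.put("dfs.domain.socket.path", domainSocketPath);           // shared with the datanode
        p.put("dfs.client.read.shortcircuit.buffer.size", "131072"); // important to avoid OOME
        return p;
    }

    public static void main(String[] args) {
        // "/var/lib/hadoop-hdfs/dn_socket" is a placeholder path, not a prescribed value.
        shortCircuitProps("/var/lib/hadoop-hdfs/dn_socket").forEach((k, v) ->
            System.out.printf("  <property><name>%s</name><value>%s</value></property>%n", k, v));
    }
}
```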



hbase git commit: HBASE-20337 Update the doc on how to setup shortcircuit reads; its stale; ADDENDUM

Repository: hbase
Updated Branches:
  refs/heads/master d60decd95 -> 8bc723477


HBASE-20337 Update the doc on how to setup shortcircuit reads; its stale; 
ADDENDUM


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8bc72347
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8bc72347
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8bc72347

Branch: refs/heads/master
Commit: 8bc723477b60ce1ed0a71081630459621cb0f284
Parents: d60decd
Author: Michael Stack 
Authored: Wed Apr 4 11:25:25 2018 -0700
Committer: Michael Stack 
Committed: Wed Apr 4 11:25:25 2018 -0700

--
 src/main/asciidoc/_chapters/schema_design.adoc | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8bc72347/src/main/asciidoc/_chapters/schema_design.adoc
--
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc 
b/src/main/asciidoc/_chapters/schema_design.adoc
index 12d449b..a25b85e 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1173,6 +1173,11 @@ Otherwise, shortcircuit read operates quietly in 
background. It does not provide
 no optics on how effective it is but read latencies should show a marked 
improvement, especially if
 good data locality, lots of random reads, and dataset is larger than available 
cache.
 
+Other advanced configurations that you might play with, especially if 
shortcircuit functionality
+is complaining in the logs, include 
`dfs.client.read.shortcircuit.streams.cache.size` and
+`dfs.client.socketcache.capacity`. Documentation is sparse on these options. 
You'll have to
+read the source code.
+
 For more on short-circuit reads, see Colin's old blog on rollout,
 
link:http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/[How
 Improved Short-Circuit Local Reads Bring Better Performance and Security to 
Hadoop].
 The link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] issue also 
makes for an



hbase git commit: HBASE-20305 adding options to skip deletes/puts on target when running SyncTable

Repository: hbase
Updated Branches:
  refs/heads/master 8bc723477 -> f574fd478


HBASE-20305 adding options to skip deletes/puts on target when running SyncTable

Signed-off-by: tedyu 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f574fd47
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f574fd47
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f574fd47

Branch: refs/heads/master
Commit: f574fd478211f795aa4a3365ae2ba2d368fea54d
Parents: 8bc7234
Author: wellington 
Authored: Wed Mar 28 22:12:01 2018 +0100
Committer: tedyu 
Committed: Wed Apr 4 12:38:18 2018 -0700

--
 .../hadoop/hbase/mapreduce/SyncTable.java   |  36 ++-
 .../hadoop/hbase/mapreduce/TestSyncTable.java   | 248 ++-
 2 files changed, 272 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f574fd47/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
index 9b4625b..32b7561 100644
--- 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
@@ -63,7 +63,9 @@ public class SyncTable extends Configured implements Tool {
   static final String TARGET_TABLE_CONF_KEY = "sync.table.target.table.name";
   static final String SOURCE_ZK_CLUSTER_CONF_KEY = "sync.table.source.zk.cluster";
   static final String TARGET_ZK_CLUSTER_CONF_KEY = "sync.table.target.zk.cluster";
-  static final String DRY_RUN_CONF_KEY="sync.table.dry.run";
+  static final String DRY_RUN_CONF_KEY = "sync.table.dry.run";
+  static final String DO_DELETES_CONF_KEY = "sync.table.do.deletes";
+  static final String DO_PUTS_CONF_KEY = "sync.table.do.puts";
 
   Path sourceHashDir;
   String sourceTableName;
@@ -72,6 +74,8 @@ public class SyncTable extends Configured implements Tool {
   String sourceZkCluster;
   String targetZkCluster;
   boolean dryRun;
+  boolean doDeletes = true;
+  boolean doPuts = true;
 
   Counters counters;
 
@@ -128,6 +132,8 @@ public class SyncTable extends Configured implements Tool {
   jobConf.set(TARGET_ZK_CLUSTER_CONF_KEY, targetZkCluster);
 }
 jobConf.setBoolean(DRY_RUN_CONF_KEY, dryRun);
+jobConf.setBoolean(DO_DELETES_CONF_KEY, doDeletes);
+jobConf.setBoolean(DO_PUTS_CONF_KEY, doPuts);
 
 TableMapReduceUtil.initTableMapperJob(targetTableName, tableHash.initScan(),
     SyncMapper.class, null, null, job);
@@ -162,6 +168,8 @@ public class SyncTable extends Configured implements Tool {
 Table sourceTable;
 Table targetTable;
 boolean dryRun;
+boolean doDeletes = true;
+boolean doPuts = true;
 
 HashTable.TableHash sourceTableHash;
 HashTable.TableHash.Reader sourceHashReader;
@@ -186,7 +194,9 @@ public class SyncTable extends Configured implements Tool {
   TableOutputFormat.OUTPUT_CONF_PREFIX);
   sourceTable = openTable(sourceConnection, conf, SOURCE_TABLE_CONF_KEY);
   targetTable = openTable(targetConnection, conf, TARGET_TABLE_CONF_KEY);
-  dryRun = conf.getBoolean(SOURCE_TABLE_CONF_KEY, false);
+  dryRun = conf.getBoolean(DRY_RUN_CONF_KEY, false);
+  doDeletes = conf.getBoolean(DO_DELETES_CONF_KEY, true);
+  doPuts = conf.getBoolean(DO_PUTS_CONF_KEY, true);
 
   sourceTableHash = HashTable.TableHash.read(conf, sourceHashDir);
   LOG.info("Read source hash manifest: " + sourceTableHash);
@@ -473,7 +483,7 @@ public class SyncTable extends Configured implements Tool {
   context.getCounter(Counter.TARGETMISSINGCELLS).increment(1);
   matchingRow = false;
 
-  if (!dryRun) {
+  if (!dryRun && doPuts) {
 if (put == null) {
   put = new Put(rowKey);
 }
@@ -488,7 +498,7 @@ public class SyncTable extends Configured implements Tool {
   context.getCounter(Counter.SOURCEMISSINGCELLS).increment(1);
   matchingRow = false;
 
-  if (!dryRun) {
+  if (!dryRun && doDeletes) {
 if (delete == null) {
   delete = new Delete(rowKey);
 }
@@ -515,7 +525,7 @@ public class SyncTable extends Configured implements Tool {
 context.getCounter(Counter.DIFFERENTCELLVALUES).increment(1);
 matchingRow = false;
 
-if (!dryRun) {
+if (!dryRun && doPuts) {
   // overwrite target cell
   if (put == null) {
 put = new Put(rowKey);
@@ -696,6 +706,10 @@ public class SyncTable extends Configured implements Tool {
 Syst
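The heart of this change is the gating logic visible in the hunks above: a detected cell difference results in a write to the target only when the job is not a dry run and the corresponding operation type (`sync.table.do.puts` / `sync.table.do.deletes`, both defaulting to true) is enabled. A standalone sketch of that decision, with a hypothetical class name; the conditions mirror the `!dryRun && doPuts` and `!dryRun && doDeletes` checks in the diff:

```java
public class SyncGateSketch {
    // A put is applied to the target only when this is not a dry run
    // AND puts are enabled (sync.table.do.puts, default true).
    static boolean shouldPut(boolean dryRun, boolean doPuts) {
        return !dryRun && doPuts;
    }

    // Same gating for deletes (sync.table.do.deletes, default true).
    static boolean shouldDelete(boolean dryRun, boolean doDeletes) {
        return !dryRun && doDeletes;
    }

    public static void main(String[] args) {
        System.out.println(shouldPut(false, true));     // defaults: puts applied
        System.out.println(shouldDelete(false, false)); // doDeletes=false: delete skipped
        System.out.println(shouldPut(true, true));      // dry run: nothing is written
    }
}
```

Note that dry run still wins over both flags, so counters can be inspected without mutating the target table.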

hbase git commit: HBASE-17730 Add documentation to upgrade coprocessors for 2.0

Repository: hbase
Updated Branches:
  refs/heads/master f574fd478 -> dcc840e8a


HBASE-17730 Add documentation to upgrade coprocessors for 2.0


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dcc840e8
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dcc840e8
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dcc840e8

Branch: refs/heads/master
Commit: dcc840e8a5c4d3421edcc2595d886ae8380bcb15
Parents: f574fd4
Author: Apekshit Sharma 
Authored: Tue Apr 3 16:38:44 2018 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 4 13:08:22 2018 -0700

--
 ...tion_instead_of_inheritance-HBASE-17732.adoc | 21 ++
 src/main/asciidoc/_chapters/upgrading.adoc  | 43 
 2 files changed, 64 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/dcc840e8/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
--
diff --git 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
index a61b37b..2476f8a 100644
--- 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
+++ 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
@@ -1,3 +1,24 @@
+
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
 = Coprocessor Design Improvements (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
 
 Author: Apekshit Sharma

http://git-wip-us.apache.org/repos/asf/hbase/blob/dcc840e8/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 38a67d4..68adb14 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -546,6 +546,48 @@ The Java client API for HBase has a number of changes that break both source and
 This would be a good place to link to an appendix on migrating applications
 
 
+[[upgrade2.0.coprocessors]]
+=== Upgrading Coprocessors to 2.0
+Coprocessors have changed substantially in 2.0, ranging from top-level design changes in class
+hierarchies to changed/removed methods, interfaces, etc.
+(Parent jira: link:https://issues.apache.org/jira/browse/HBASE-18169[HBASE-18169 Coprocessor fix
+and cleanup before 2.0.0 release]). Some of the reasons for such widespread changes:
+
+. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor and
+Region instead of HRegion (link:https://issues.apache.org/jira/browse/HBASE-18241[HBASE-18241]
+Change client.Table and client.Admin to not use HTableDescriptor).
+. Design refactor so implementers need to fill out less boilerplate and so we can do more
+compile-time checking (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
+. Purge Protocol Buffers from Coprocessor API
+(link:https://issues.apache.org/jira/browse/HBASE-18859[HBASE-18859],
+link:https://issues.apache.org/jira/browse/HBASE-16769[HBASE-16769], etc)
+. Cut back on what we expose to Coprocessors, removing hooks on internals that were too private to
+expose (e.g. link:https://issues.apache.org/jira/browse/HBASE-18453[HBASE-18453]
+CompactionRequest should not be exposed to user directly;
+link:https://issues.apache.org/jira/browse/HBASE-18298[HBASE-18298] RegionServerServices Interface
+cleanup for CP expose; etc)
+
+To use coprocessors in 2.0, they should be rebuilt against the new API; otherwise they will fail to
+load and HBase processes will die.
+
+Suggested order of changes to upgrade the coprocessors:
+
+. Directly implement observer interfaces instead of extending Base*Observer classes
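The first migration step — implementing observer interfaces directly rather than extending Base*Observer classes — can be illustrated with a simplified, hypothetical stand-in for the observer interface (the real org.apache.hadoop.hbase.coprocessor interfaces carry environment and context arguments). The 2.0-style interfaces ship default no-op methods, so a coprocessor implements the interface and overrides only the hooks it needs:

```java
// Hypothetical, simplified stand-in for a 2.0-style observer interface:
// default methods replace the no-op bodies that Base*Observer classes used to provide.
interface RegionObserverSketch {
    default String preAppendValue(String value) {
        return value; // default hook is a pass-through no-op
    }
}

// 1.x style required "extends BaseRegionObserver"; 2.0 style implements the
// interface directly and overrides only the hooks of interest.
public class TrimmingObserver implements RegionObserverSketch {
    @Override
    public String preAppendValue(String value) {
        return value.trim(); // example hook behavior
    }

    public static void main(String[] args) {
        System.out.println(new TrimmingObserver().preAppendValue("  x  ")); // prints "x"
    }
}
```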

hbase git commit: HBASE-17730 Add license header to design doc.

Repository: hbase
Updated Branches:
  refs/heads/branch-2.0 d7cb0bd41 -> c9f3eb00a


HBASE-17730 Add license header to design doc.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/c9f3eb00
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/c9f3eb00
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/c9f3eb00

Branch: refs/heads/branch-2.0
Commit: c9f3eb00ab93d7b97de43c0bc3601371c77d0ba2
Parents: d7cb0bd
Author: Apekshit Sharma 
Authored: Wed Apr 4 13:11:53 2018 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 4 13:12:33 2018 -0700

--
 ...tion_instead_of_inheritance-HBASE-17732.adoc | 21 
 1 file changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/c9f3eb00/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
--
diff --git 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
index a61b37b..2476f8a 100644
--- 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
+++ 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
@@ -1,3 +1,24 @@
+
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
 = Coprocessor Design Improvements (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
 
 Author: Apekshit Sharma



hbase git commit: HBASE-17730 Add license header to design doc.

Repository: hbase
Updated Branches:
  refs/heads/branch-2 a761f175a -> 1e0d90953


HBASE-17730 Add license header to design doc.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1e0d9095
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1e0d9095
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1e0d9095

Branch: refs/heads/branch-2
Commit: 1e0d9095325774d8fab4982ede911356f3b9907f
Parents: a761f17
Author: Apekshit Sharma 
Authored: Wed Apr 4 13:11:53 2018 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 4 13:11:53 2018 -0700

--
 ...tion_instead_of_inheritance-HBASE-17732.adoc | 21 
 1 file changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/1e0d9095/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
--
diff --git 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
index a61b37b..2476f8a 100644
--- 
a/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
+++ 
b/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc
@@ -1,3 +1,24 @@
+
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
 = Coprocessor Design Improvements (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
 
 Author: Apekshit Sharma



hbase git commit: HBASE-19488 Move to using Apache commons CollectionUtils

Repository: hbase
Updated Branches:
  refs/heads/master dcc840e8a -> d866e7c65


HBASE-19488 Move to using Apache commons CollectionUtils

Signed-off-by: Apekshit Sharma 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d866e7c6
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d866e7c6
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d866e7c6

Branch: refs/heads/master
Commit: d866e7c658e04a5b3d221137d41e3e277f9991c4
Parents: dcc840e
Author: BELUGA BEHR 
Authored: Wed Apr 4 14:12:19 2018 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 4 14:12:19 2018 -0700

--
 .../hadoop/hbase/client/RowMutations.java   |  5 +-
 .../org/apache/hadoop/hbase/util/Bytes.java |  1 +
 .../hadoop/hbase/util/CollectionUtils.java  | 80 
 .../replication/ZKReplicationPeerStorage.java   |  5 +-
 .../replication/ZKReplicationQueueStorage.java  | 24 --
 .../hadoop/hbase/regionserver/HRegion.java  |  2 +-
 .../hbase/regionserver/RSRpcServices.java   |  2 +-
 .../hadoop/hbase/regionserver/StoreScanner.java |  6 +-
 .../hbase/regionserver/wal/FSWALEntry.java  |  2 +-
 9 files changed, 30 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d866e7c6/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
index 1eb3151..4b426cf 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
@@ -22,8 +22,9 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.CollectionUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -47,7 +48,7 @@ public class RowMutations implements Row {
*/
   public static RowMutations of(List<? extends Mutation> mutations) throws IOException {
     if (CollectionUtils.isEmpty(mutations)) {
-      throw new IllegalArgumentException("Can't instantiate a RowMutations by empty list");
+      throw new IllegalArgumentException("Cannot instantiate a RowMutations by empty list");
     }
     return new RowMutations(mutations.get(0).getRow(), mutations.size())
         .add(mutations);

http://git-wip-us.apache.org/repos/asf/hbase/blob/d866e7c6/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
index b7912fd..a315fd2 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
@@ -38,6 +38,7 @@ import java.util.Comparator;
 import java.util.Iterator;
 import java.util.List;
 
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
 import org.apache.hadoop.hbase.KeyValue;

http://git-wip-us.apache.org/repos/asf/hbase/blob/d866e7c6/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
index 8bbb6f1..bfe41d8 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
@@ -19,10 +19,6 @@
 package org.apache.hadoop.hbase.util;
 
 import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.List;
 import java.util.concurrent.ConcurrentMap;
 import java.util.function.Supplier;
 
@@ -34,82 +30,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 @InterfaceAudience.Private
 public class CollectionUtils {
 
-  private static final List<Object> EMPTY_LIST = Collections.unmodifiableList(new ArrayList<>(0));
-
-
-  @SuppressWarnings("unchecked")
-  public static <T> Collection<T> nullSafe(Collection<T> in) {
-    if (in == null) {
-      return (Collection<T>) EMPTY_LIST;
-    }
-    return in;
-  }
-
-  /* size */
-
-  public static <T> int nullSafeSize(Collection<T> collection) {
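The removed homegrown helpers and the Apache Commons replacement share the same contract: a null collection is treated as empty. Sketched here in plain Java with a hypothetical class name (no third-party dependency) to show why the `nullSafe*` methods could be dropped in favor of `org.apache.commons.collections.CollectionUtils.isEmpty`:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;

public class IsEmptyDemo {
    // Same contract as org.apache.commons.collections.CollectionUtils.isEmpty:
    // null-safe, so callers never need the nullSafe()/nullSafeSize() wrappers.
    static boolean isEmpty(Collection<?> c) {
        return c == null || c.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isEmpty(null));                    // prints "true"
        System.out.println(isEmpty(Collections.emptyList())); // prints "true"
        System.out.println(isEmpty(Arrays.asList(1, 2)));     // prints "false"
    }
}
```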

hbase git commit: HBASE-19488 Move to using Apache commons CollectionUtils

Repository: hbase
Updated Branches:
  refs/heads/branch-2 1e0d90953 -> 039bc7357


HBASE-19488 Move to using Apache commons CollectionUtils

Signed-off-by: Apekshit Sharma 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/039bc735
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/039bc735
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/039bc735

Branch: refs/heads/branch-2
Commit: 039bc73571d2cc89378749573dfeec74c247b0b9
Parents: 1e0d909
Author: BELUGA BEHR 
Authored: Wed Apr 4 14:12:19 2018 -0700
Committer: Apekshit Sharma 
Committed: Wed Apr 4 14:16:33 2018 -0700

--
 .../hadoop/hbase/client/RowMutations.java   |  5 +-
 .../org/apache/hadoop/hbase/util/Bytes.java |  1 +
 .../hadoop/hbase/util/CollectionUtils.java  | 80 
 .../replication/ZKReplicationPeerStorage.java   |  5 +-
 .../replication/ZKReplicationQueueStorage.java  | 24 --
 .../hadoop/hbase/regionserver/HRegion.java  |  2 +-
 .../hbase/regionserver/RSRpcServices.java   |  2 +-
 .../hadoop/hbase/regionserver/StoreScanner.java |  6 +-
 .../hbase/regionserver/wal/FSWALEntry.java  |  2 +-
 9 files changed, 30 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/039bc735/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
index 1eb3151..4b426cf 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java
@@ -22,8 +22,9 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.CollectionUtils;
 import org.apache.yetus.audience.InterfaceAudience;
 
 /**
@@ -47,7 +48,7 @@ public class RowMutations implements Row {
*/
   public static RowMutations of(List<? extends Mutation> mutations) throws IOException {
     if (CollectionUtils.isEmpty(mutations)) {
-      throw new IllegalArgumentException("Can't instantiate a RowMutations by empty list");
+      throw new IllegalArgumentException("Cannot instantiate a RowMutations by empty list");
     }
     return new RowMutations(mutations.get(0).getRow(), mutations.size())
         .add(mutations);

http://git-wip-us.apache.org/repos/asf/hbase/blob/039bc735/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
--
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
index b7912fd..a315fd2 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
@@ -38,6 +38,7 @@ import java.util.Comparator;
 import java.util.Iterator;
 import java.util.List;
 
+import org.apache.commons.collections.CollectionUtils;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparator;
 import org.apache.hadoop.hbase.KeyValue;

http://git-wip-us.apache.org/repos/asf/hbase/blob/039bc735/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
--
diff --git 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
index 8bbb6f1..bfe41d8 100644
--- 
a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
+++ 
b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CollectionUtils.java
@@ -19,10 +19,6 @@
 package org.apache.hadoop.hbase.util;
 
 import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.List;
 import java.util.concurrent.ConcurrentMap;
 import java.util.function.Supplier;
 
@@ -34,82 +30,6 @@ import org.apache.yetus.audience.InterfaceAudience;
 @InterfaceAudience.Private
 public class CollectionUtils {
 
-  private static final List<Object> EMPTY_LIST = Collections.unmodifiableList(new ArrayList<>(0));
-
-
-  @SuppressWarnings("unchecked")
-  public static <T> Collection<T> nullSafe(Collection<T> in) {
-    if (in == null) {
-      return (Collection<T>) EMPTY_LIST;
-    }
-    return in;
-  }
-
-  /* size */
-
-  public static <T> int nullSafeSize(Collection<T> collection

hbase git commit: HBASE-14348 Update download mirror link

Repository: hbase
Updated Branches:
  refs/heads/master d866e7c65 -> 5fed7fd3d


HBASE-14348 Update download mirror link


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/5fed7fd3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/5fed7fd3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/5fed7fd3

Branch: refs/heads/master
Commit: 5fed7fd3d2f62623cc01bff4a0b0e30eae86beb4
Parents: d866e7c
Author: Lars Francke 
Authored: Wed Apr 4 14:30:06 2018 -0700
Committer: Michael Stack 
Committed: Wed Apr 4 14:30:06 2018 -0700

--
 README.txt   |  2 +-
 src/main/asciidoc/_chapters/getting_started.adoc |  2 +-
 src/site/asciidoc/old_news.adoc  | 12 ++--
 src/site/resources/doap_Hbase.rdf|  2 +-
 src/site/site.xml|  2 +-
 src/site/xdoc/index.xml  |  2 +-
 src/site/xdoc/old_news.xml   | 12 ++--
 7 files changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/5fed7fd3/README.txt
--
diff --git a/README.txt b/README.txt
index 4ebb504..7a17227 100755
--- a/README.txt
+++ b/README.txt
@@ -26,7 +26,7 @@ notice here [9].
 1. http://hbase.apache.org
 2. http://research.google.com/archive/bigtable.html
 3. http://hadoop.apache.org
-4. http://www.apache.org/dyn/closer.cgi/hbase/
+4. http://www.apache.org/dyn/closer.lua/hbase/
 5. https://hbase.apache.org/source-repository.html
 6. https://hbase.apache.org/issue-tracking.html
 7. http://hbase.apache.org/license.html

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fed7fd3/src/main/asciidoc/_chapters/getting_started.adoc
--
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 47e0d96..9e0bbdf 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -52,7 +52,7 @@ See <> for information about supported JDK 
versions.
 === Get Started with HBase
 
 .Procedure: Download, Configure, and Start HBase in Standalone Mode
-. Choose a download site from this list of link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
+. Choose a download site from this list of link:https://www.apache.org/dyn/closer.lua/hbase/[Apache Download Mirrors].
   Click on the suggested top link.
   This will take you to a mirror of _HBase Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.

http://git-wip-us.apache.org/repos/asf/hbase/blob/5fed7fd3/src/site/asciidoc/old_news.adoc
--
diff --git a/src/site/asciidoc/old_news.adoc b/src/site/asciidoc/old_news.adoc
index 4ae3d7a..9c8d088 100644
--- a/src/site/asciidoc/old_news.adoc
+++ b/src/site/asciidoc/old_news.adoc
@@ -57,7 +57,7 @@ October 25th, 2012:: link:http://www.meetup.com/HBase-NYC/events/81728932/[Strat
 
 September 11th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/80621872/[Contributor's Pow-Wow at HortonWorks HQ.]
 
-August 8th, 2012:: link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache HBase 0.94.1 is available for download]
+August 8th, 2012:: link:https://www.apache.org/dyn/closer.lua/hbase/[Apache HBase 0.94.1 is available for download]
 
 June 15th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/59829652/[Birds-of-a-feather] in San Jose, day after:: link:http://hadoopsummit.org[Hadoop Summit]
 
@@ -69,9 +69,9 @@ March 27th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/56021562/[Me
 
 January 19th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/46702842/[Meetup @ EBay]
 
-January 23rd, 2012:: Apache HBase 0.92.0 released. link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+January 23rd, 2012:: Apache HBase 0.92.0 released. link:https://www.apache.org/dyn/closer.lua/hbase/[Download it!]
 
-December 23rd, 2011:: Apache HBase 0.90.5 released. link:https://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+December 23rd, 2011:: Apache HBase 0.90.5 released. link:https://www.apache.org/dyn/closer.lua/hbase/[Download it!]
 
 November 29th, 2011:: link:http://www.meetup.com/hackathon/events/41025972/[Developer Pow-Wow in SF] at Salesforce HQ
 
@@ -83,9 +83,9 @@ June 30th, 2011:: link:http://www.meetup.com/hbaseusergroup/events/20572251/[HBa
 
 June 8th, 2011:: link:http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon[HBase Hackathon] in Berlin to coincide with:: link:http://berlinbuzzwords.de/[Berlin Buzzwords]
 
-May 19th, 2011:

hbase git commit: HBASE-16499 slow replication for small HBase clusters - addendum for updating in the document

Repository: hbase
Updated Branches:
  refs/heads/master 5fed7fd3d -> e2b0490d1


HBASE-16499 slow replication for small HBase clusters - addendum for updating in the document

Signed-off-by: Ashish Singhi 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e2b0490d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e2b0490d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e2b0490d

Branch: refs/heads/master
Commit: e2b0490d18f7cc03aa59475a1b423597ddc481fb
Parents: 5fed7fd
Author: Ashish Singhi 
Authored: Thu Apr 5 11:16:52 2018 +0530
Committer: Ashish Singhi 
Committed: Thu Apr 5 11:16:52 2018 +0530

--
 src/main/asciidoc/_chapters/upgrading.adoc | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e2b0490d/src/main/asciidoc/_chapters/upgrading.adoc
--
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc 
b/src/main/asciidoc/_chapters/upgrading.adoc
index 68adb14..f5cdff3 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -390,6 +390,7 @@ The following configuration settings changed their default value. Where applicab
 * hbase.client.max.perserver.tasks is now 2. Previously it was 5.
 * hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
 * hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was IncreasingToUpperBoundRegionSplitPolicy.
+* replication.source.ratio is now 0.5. Previously it was 0.1.
 
 [[upgrade2.0.regions.on.master]]
 ."Master hosting regions" feature broken and unsupported