hbase git commit: HBASE-18467 more debug

2017-08-26 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/HBASE-18467 2989a0be1 -> fe61f0da8


HBASE-18467 more debug


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/fe61f0da
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/fe61f0da
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/fe61f0da

Branch: refs/heads/HBASE-18467
Commit: fe61f0da8dddff392a19e933f8ffc0684d332691
Parents: 2989a0b
Author: Sean Busbey 
Authored: Sun Aug 27 01:30:40 2017 -0500
Committer: Sean Busbey 
Committed: Sun Aug 27 01:30:40 2017 -0500

--
 dev-support/Jenkinsfile | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/fe61f0da/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 28ca8b7..50142bf 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -409,8 +409,10 @@ END
echo " ${change.author}"
echo ""
// Workaround for JENKINS-46358
-   writeFile file: 'tmp_commit_file', text: "${msg}"
-   sh '''grep -o -E 'HBASE-[0-9]+' 'tmp_commit_file' >matched_jiras'''
+   writeFile file: 'tmp_commit_file', text: msg
+   echo "finished writing commit to a file."
+   sh "grep -o -E 'HBASE-[0-9]+' 'tmp_commit_file' >matched_jiras"
+   echo "finished filtering via grep."
def jiras = readFile(file: 'matched_jiras').split()
if (jiras.length == 0) {
  echo "[WARN] no JIRA key found in message, TODO email 
committer"



hbase git commit: HBASE-18467 why are we serializing the changeset

2017-08-26 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/HBASE-18467 d1f1c45ad -> 2989a0be1


HBASE-18467 why are we serializing the changeset


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2989a0be
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2989a0be
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2989a0be

Branch: refs/heads/HBASE-18467
Commit: 2989a0be1960721c5a26a279d9188f159512eee6
Parents: d1f1c45
Author: Sean Busbey 
Authored: Sun Aug 27 00:48:54 2017 -0500
Committer: Sean Busbey 
Committed: Sun Aug 27 00:48:54 2017 -0500

--
 dev-support/Jenkinsfile | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2989a0be/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 95d2b67..28ca8b7 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -394,7 +394,6 @@ END
echo ""
echo "[INFO] There are ${currentBuild.changeSets.size()} change 
sets."
def seenJiras = []
-   CharSequence pattern = /HBASE-[0-9]+/
for ( changelist in currentBuild.changeSets ) {
  if ( changelist.isEmptySet() ) {
echo "[DEBUG] change set was empty, skipping JIRA comments."
@@ -410,8 +409,8 @@ END
echo " ${change.author}"
echo ""
// Workaround for JENKINS-46358
-   writeFile file: 'tmp_commit_file', text: msg
-   sh "grep -o -E '${pattern}' 'tmp_commit_file' >matched_jiras"
+   writeFile file: 'tmp_commit_file', text: "${msg}"
+   sh '''grep -o -E 'HBASE-[0-9]+' 'tmp_commit_file' >matched_jiras'''
def jiras = readFile(file: 'matched_jiras').split()
if (jiras.length == 0) {
  echo "[WARN] no JIRA key found in message, TODO email 
committer"



hbase git commit: HBASE-18467 wait, I can just use the shell.

2017-08-26 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/HBASE-18467 ea7baa560 -> d1f1c45ad


HBASE-18467 wait, I can just use the shell.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/d1f1c45a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/d1f1c45a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/d1f1c45a

Branch: refs/heads/HBASE-18467
Commit: d1f1c45ad70dddfe2a1c557e561fa2af236c8ae6
Parents: ea7baa5
Author: Sean Busbey 
Authored: Sun Aug 27 00:38:38 2017 -0500
Committer: Sean Busbey 
Committed: Sun Aug 27 00:38:38 2017 -0500

--
 dev-support/Jenkinsfile | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/d1f1c45a/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 27850a4..95d2b67 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -395,9 +395,6 @@ END
echo "[INFO] There are ${currentBuild.changeSets.size()} change 
sets."
def seenJiras = []
CharSequence pattern = /HBASE-[0-9]+/
-   def foobar = { CharSequence foo, CharSequence bar ->
- org.codehaus.groovy.runtime.StringGroovyMethods.find(foo,bar)
-   }
for ( changelist in currentBuild.changeSets ) {
  if ( changelist.isEmptySet() ) {
echo "[DEBUG] change set was empty, skipping JIRA comments."
@@ -412,9 +409,14 @@ END
echo "  ${change.commitId}"
echo " ${change.author}"
echo ""
-   // For now, only match the first occurrance of an HBase jira id, due to JENKINS-46358
-   currentIssue = foobar(msg, pattern)
-   if (currentIssue != null ) {
+   // Workaround for JENKINS-46358
+   writeFile file: 'tmp_commit_file', text: msg
+   sh "grep -o -E '${pattern}' 'tmp_commit_file' >matched_jiras"
+   def jiras = readFile(file: 'matched_jiras').split()
+   if (jiras.length == 0) {
+ echo "[WARN] no JIRA key found in message, TODO email 
committer"
+   }
+   for (currentIssue in jiras) {
  echo "[DEBUG] found jira key: ${currentIssue}"
  if ( currentIssue in seenJiras ) {
echo "[DEBUG] already commented on ${currentIssue}."
@@ -423,8 +425,6 @@ END
jiraComment issueKey: currentIssue, body: comment
seenJiras << currentIssue
  }
-   } else {
-  echo "[WARN] no JIRA key found in message, TODO email 
committer"
}
  }
}
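
(The loop introduced above deduplicates against seenJiras so each issue is commented on at most once per build. A rough Java sketch of that dedup-and-comment flow follows; postComment is a hypothetical stand-in for the Jenkins jiraComment step, not a real API.)

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class JiraCommenter {
  private final Set<String> seenJiras = new HashSet<>();

  // Hypothetical stand-in for the pipeline's jiraComment step.
  private void postComment(String issueKey, String body) {
    System.out.println("commenting on " + issueKey + ": " + body);
  }

  /** Comments once per distinct JIRA key found in a change's message. */
  public void commentOnce(List<String> jiras, String comment) {
    if (jiras.isEmpty()) {
      System.out.println("[WARN] no JIRA key found in message, TODO email committer");
      return;
    }
    for (String currentIssue : jiras) {
      System.out.println("[DEBUG] found jira key: " + currentIssue);
      if (!seenJiras.add(currentIssue)) {
        System.out.println("[DEBUG] already commented on " + currentIssue + ".");
        continue;
      }
      postComment(currentIssue, comment);
    }
  }
}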



[03/50] [abbrv] hbase git commit: HBASE-18687 Add @since 2.0.0 to new classes; AMENDMENT2

2017-08-26 Thread busbey
HBASE-18687 Add @since 2.0.0 to new classes; AMENDMENT2


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/439191ec
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/439191ec
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/439191ec

Branch: refs/heads/HBASE-18467
Commit: 439191ece6e024dd736cff000109748672219d18
Parents: 6859d4e
Author: Michael Stack 
Authored: Fri Aug 25 14:44:01 2017 -0700
Committer: Michael Stack 
Committed: Fri Aug 25 14:44:01 2017 -0700

--
 .../java/org/apache/hadoop/hbase/backup/BackupClientFactory.java   | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/439191ec/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
--
diff --git a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
index 6db39f8..22e69a3 100644
--- a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
+++ b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient;
 import org.apache.hadoop.hbase.backup.impl.TableBackupClient;
 import org.apache.hadoop.hbase.client.Connection;
 
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+
 @InterfaceAudience.Private
 public class BackupClientFactory {
 



[08/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
deleted file mode 100644
index 9d8b8f0..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.IOException;
-
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException;
-import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.ParsedLine;
-import org.apache.hadoop.hbase.util.Bytes;
-
-/**
- * 
- * Just shows a simple example of how the attributes can be extracted and added
- * to the puts
- */
-public class TsvImporterCustomTestMapperForOprAttr extends TsvImporterMapper {
-  @Override
-  protected void populatePut(byte[] lineBytes, ParsedLine parsed, Put put, int i)
-  throws BadTsvLineException, IOException {
-KeyValue kv;
-kv = new KeyValue(lineBytes, parsed.getRowKeyOffset(), parsed.getRowKeyLength(),
-parser.getFamily(i), 0, parser.getFamily(i).length, parser.getQualifier(i), 0,
-parser.getQualifier(i).length, ts, KeyValue.Type.Put, lineBytes, parsed.getColumnOffset(i),
-parsed.getColumnLength(i));
-if (parsed.getIndividualAttributes() != null) {
-  String[] attributes = parsed.getIndividualAttributes();
-  for (String attr : attributes) {
-String[] split = attr.split(ImportTsv.DEFAULT_ATTRIBUTES_SEPERATOR);
-if (split == null || split.length <= 1) {
-  throw new BadTsvLineException("Invalid attributes seperator specified" + attributes);
-} else {
-  if (split[0].length() <= 0 || split[1].length() <= 0) {
-throw new BadTsvLineException("Invalid attributes seperator specified" + attributes);
-  }
-  put.setAttribute(split[0], Bytes.toBytes(split[1]));
-}
-  }
-}
-put.add(kv);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
index f641887..a81d268 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/namespace/TestNamespaceAuditor.java
@@ -65,7 +65,6 @@ import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.RegionObserver;
 import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.RegionServerObserver;
-import org.apache.hadoop.hbase.mapreduce.TableInputFormatBase;
 import org.apache.hadoop.hbase.master.HMaster;
 import org.apache.hadoop.hbase.master.MasterCoprocessorHost;
 import org.apache.hadoop.hbase.master.TableNamespaceManager;
@@ -336,7 +335,7 @@ public class TestNamespaceAuditor {
 byte[] columnFamily = Bytes.toBytes("info");
 HTableDescriptor tableDescOne = new HTableDescriptor(tableTwo);
 tableDescOne.addFamily(new HColumnDescriptor(columnFamily));
-ADMIN.createTable(tableDescOne, Bytes.toBytes("1"), Bytes.toBytes("2000"), initialRegions);
+ADMIN.createTable(tableDescOne, Bytes.toBytes("0"), Bytes.toBytes("9"), initialRegions);
 Connection connection = ConnectionFactory.createConnection(UTIL.getConfiguration());
 try (Table table = connection.getTable(tableTwo)) {
   UTIL.loadNumericRows(table, Bytes.toBy

[15/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
deleted file mode 100644
index e669f14..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
+++ /dev/null
@@ -1,406 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase;
-
-import java.io.IOException;
-import java.util.concurrent.TimeUnit;
-
-import org.apache.commons.cli.CommandLine;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.client.TableSnapshotScanner;
-import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
-import org.apache.hadoop.hbase.mapreduce.TableMapper;
-import org.apache.hadoop.hbase.util.AbstractHBaseTool;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.mapreduce.Counters;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.StringUtils;
-import org.apache.hadoop.util.ToolRunner;
-
-import org.apache.hadoop.hbase.shaded.com.google.common.base.Stopwatch;
-
-/**
- * A simple performance evaluation tool for single client and MR scans
- * and snapshot scans.
- */
-@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
-public class ScanPerformanceEvaluation extends AbstractHBaseTool {
-
-  private static final String HBASE_COUNTER_GROUP_NAME = "HBase Counters";
-
-  private String type;
-  private String file;
-  private String tablename;
-  private String snapshotName;
-  private String restoreDir;
-  private String caching;
-
-  @Override
-  public void setConf(Configuration conf) {
-super.setConf(conf);
-Path rootDir;
-try {
-  rootDir = FSUtils.getRootDir(conf);
-  rootDir.getFileSystem(conf);
-} catch (IOException ex) {
-  throw new RuntimeException(ex);
-}
-  }
-
-  @Override
-  protected void addOptions() {
-this.addRequiredOptWithArg("t", "type", "the type of the test. One of the 
following: streaming|scan|snapshotscan|scanmapreduce|snapshotscanmapreduce");
-this.addOptWithArg("f", "file", "the filename to read from");
-this.addOptWithArg("tn", "table", "the tablename to read from");
-this.addOptWithArg("sn", "snapshot", "the snapshot name to read from");
-this.addOptWithArg("rs", "restoredir", "the directory to restore the 
snapshot");
-this.addOptWithArg("ch", "caching", "scanner caching value");
-  }
-
-  @Override
-  protected void processOptions(CommandLine cmd) {
-type = cmd.getOptionValue("type");
-file = cmd.getOptionValue("file");
-tablename = cmd.getOptionValue("table");
-snapshotName = cmd.getOptionValue("snapshot");
-restoreDir = cmd.getOptionValue("restoredir");
-caching = cmd.getOptionValue("caching");
-  }
-
-  protected void testHdfsStreaming(Path filename) throws IOException {
-byte[] buf = new byte[1024];
-FileSystem fs = filename.getFileSystem(getConf());
-
-// read the file from start to finish
-Stopwatch fileOpenTimer = Stopwatch.createUnstarted();
-Stopwatch streamTimer = Stopwatch.createUnstarted();
-
-fileOpenTimer.start();
-FSDataInputStream in = fs.open(filename);
-fileOpenTimer.stop();
-
-long totalBytes = 0;
-streamTimer

[27/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
--
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
new file mode 100644
index 000..6b5cbe2
--- /dev/null
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
@@ -0,0 +1,915 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.lang.reflect.Constructor;
+import java.security.SecureRandom;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Properties;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicReference;
+
+import javax.crypto.spec.SecretKeySpec;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.crypto.Cipher;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.security.EncryptionUtil;
+import org.apache.hadoop.hbase.security.HBaseKerberosUtils;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.access.AccessControlClient;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.util.test.LoadTestDataGenerator;
+import org.apache.hadoop.hbase.util.test.LoadTestDataGeneratorWithACL;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A command-line utility that reads, writes, and verifies data. Unlike
+ * {@link org.apache.hadoop.hbase.PerformanceEvaluation}, this tool validates 
the data written,
+ * and supports simultaneously writing and reading the same set of keys.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+public class LoadTestTool extends AbstractHBaseTool {
+
+  private static final Log LOG = LogFactory.getLog(LoadTestTool.class);
+  private static final String COLON = ":";
+
+  /** Table name for the test */
+  private TableName tableName;
+
+  /** Column families for the test */
+  private byte[][] families;
+
+  /** Table name to use of not overridden on the command line */
+  protected static final String DEFAULT_TABLE_NAME = "cluster_test";
+
+  /** The default data size if not specified */
+  protected static final int DEFAULT_DATA_SIZE = 64;
+
+  /** The number of reader/writer threads if not specified */
+  protected static final int DEFAULT_NUM_THREADS = 20;
+
+  /** Usage string for the load option */
+  protected static final String OPT_USAGE_LOAD =
+  ":" +
+  "[:<#threads=" + DEFAULT_NUM_THREADS + ">]";
+
+  /** Usage string for the read option */
+  protected static final String OPT_USAGE_READ =
+  "[:<#threads=" + DEFAULT_NUM_THREADS + ">]";
+
+  /** Usage string for the update option */
+  protected static final String OPT_USAGE_UPDATE =
+  "[:<#threads=" + DEFAULT_NUM_THREADS
+  + ">][:<#whether to ignore nonce collisions=0>]";
+
+  protected static final String OPT_USAGE_BLOOM = "Bloom filter type, one of " 
+
+  Arrays.toString(BloomType.values());
+
+  protected static final String OPT

[11/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
deleted file mode 100644
index efcf91e..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
+++ /dev/null
@@ -1,571 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.UUID;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configurable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.TableNotFoundException;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.hfile.CacheConfig;
-import org.apache.hadoop.hbase.io.hfile.HFile;
-import org.apache.hadoop.hbase.io.hfile.HFileScanner;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
-import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapred.Utils.OutputFileUtils.OutputFilesFilter;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.rules.ExpectedException;
-
-@Category({VerySlowMapReduceTests.class, LargeTests.class})
-public class TestImportTsv implements Configurable {
-
-  private static final Log LOG = LogFactory.getLog(TestImportTsv.class);
-  protected static final String NAME = TestImportTsv.class.getSimpleName();
-  protected static HBaseTestingUtility util = new HBaseTestingUtility();
-
-  // Delete the tmp directory after running doMROnTableTest. Boolean. Default 
is true.
-  protected static final String DELETE_AFTER_LOAD_CONF = NAME + 
".deleteAfterLoad";
-
-  /**
-   * Force use of combiner in doMROnTableTest. Boolean. Default is true.
-   */
-  protected static final String FORCE_COMBINER_CONF = NAME + ".forceCombiner";
-
-  private final String FAMILY = "FAM";
-  private TableName tn;
-  private Map args;
-
-  @Rule
-  public ExpectedException exception = ExpectedException.none();
-
-  public Configuration getConf() {
-return util.getConfiguration();
-  }
-
-  public void setConf(Configuration conf) {
-throw new IllegalArgumentException("setConf not supported");
-  }
-
-  @BeforeClass
-  public static void provisionCluster() throws Exception {
-util.startMiniCluster();
-  }
-
-  @AfterClass
-  public static void releaseCluster() throws Exception {
-util.shutdownMiniCluster();
-  }
-
-  @Before
-  public void setup() throws Exception {
-tn = TableName.valueOf("test-" + UUID.randomUUID());
-args = new HashMap<>();
-// Prepare the

[25/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java
--
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java
deleted file mode 100644
index 1d4d37b..000
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java
+++ /dev/null
@@ -1,134 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.IOException;
-import java.util.List;
-
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.Tag;
-import org.apache.hadoop.util.ReflectionUtils;
-
-/**
- * Facade to create Cells for HFileOutputFormat. The created Cells are of 
Put type.
- */
-@InterfaceAudience.Public
-public class CellCreator {
-
-  public static final String VISIBILITY_EXP_RESOLVER_CLASS =
-  "hbase.mapreduce.visibility.expression.resolver.class";
-
-  private VisibilityExpressionResolver visExpResolver;
-
-  public CellCreator(Configuration conf) {
-Class clazz = conf.getClass(
-VISIBILITY_EXP_RESOLVER_CLASS, 
DefaultVisibilityExpressionResolver.class,
-VisibilityExpressionResolver.class);
-this.visExpResolver = ReflectionUtils.newInstance(clazz, conf);
-this.visExpResolver.init();
-  }
-
-  /**
-   * @param row row key
-   * @param roffset row offset
-   * @param rlength row length
-   * @param family family name
-   * @param foffset family offset
-   * @param flength family length
-   * @param qualifier column qualifier
-   * @param qoffset qualifier offset
-   * @param qlength qualifier length
-   * @param timestamp version timestamp
-   * @param value column value
-   * @param voffset value offset
-   * @param vlength value length
-   * @return created Cell
-   * @throws IOException
-   */
-  public Cell create(byte[] row, int roffset, int rlength, byte[] family, int 
foffset, int flength,
-  byte[] qualifier, int qoffset, int qlength, long timestamp, byte[] 
value, int voffset,
-  int vlength) throws IOException {
-return create(row, roffset, rlength, family, foffset, flength, qualifier, 
qoffset, qlength,
-timestamp, value, voffset, vlength, (List)null);
-  }
-
-  /**
-   * @param row row key
-   * @param roffset row offset
-   * @param rlength row length
-   * @param family family name
-   * @param foffset family offset
-   * @param flength family length
-   * @param qualifier column qualifier
-   * @param qoffset qualifier offset
-   * @param qlength qualifier length
-   * @param timestamp version timestamp
-   * @param value column value
-   * @param voffset value offset
-   * @param vlength value length
-   * @param visExpression visibility expression to be associated with cell
-   * @return created Cell
-   * @throws IOException
-   */
-  @Deprecated
-  public Cell create(byte[] row, int roffset, int rlength, byte[] family, int 
foffset, int flength,
-  byte[] qualifier, int qoffset, int qlength, long timestamp, byte[] 
value, int voffset,
-  int vlength, String visExpression) throws IOException {
-List visTags = null;
-if (visExpression != null) {
-  visTags = this.visExpResolver.createVisibilityExpTags(visExpression);
-}
-return new KeyValue(row, roffset, rlength, family, foffset, flength, 
qualifier, qoffset,
-qlength, timestamp, KeyValue.Type.Put, value, voffset, vlength, 
visTags);
-  }
-
-  /**
-   * @param row row key
-   * @param roffset row offset
-   * @param rlength row length
-   * @param family family name
-   * @param foffset family offset
-   * @param flength family length
-   * @param qualifier column qualifier
-   * @param qoffset qualifier offset
-   * @param qlength qualifier length
-   * @param timestamp version timestamp
-   * @param value column value
-   * @param voffset value offset
-   * @param vlength value length
-   * @param tags
-   * @re

[34/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableOutputFormatConnectionExhaust.java
--
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableOutputFormatConnectionExhaust.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableOutputFormatConnectionExhaust.java
new file mode 100644
index 000..835117c
--- /dev/null
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableOutputFormatConnectionExhaust.java
@@ -0,0 +1,104 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.RecordWriter;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import java.io.IOException;
+
+import static org.junit.Assert.fail;
+
+/**
+ * Spark creates many instances of TableOutputFormat within a single process.  
We need to make
+ * sure we can have many instances and not leak connections.
+ *
+ * This test creates a few TableOutputFormats and shouldn't fail due to ZK 
connection exhaustion.
+ */
+@Category(MediumTests.class)
+public class TestTableOutputFormatConnectionExhaust {
+
+  private static final Log LOG =
+  LogFactory.getLog(TestTableOutputFormatConnectionExhaust.class);
+
+  private final static HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  static final String TABLE = "TestTableOutputFormatConnectionExhaust";
+  static final String FAMILY = "family";
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+// Default in ZookeeperMiniCluster is 1000, setting artificially low to 
trigger exhaustion.
+// need min of 7 to properly start the default mini HBase cluster
+UTIL.getConfiguration().setInt(HConstants.ZOOKEEPER_MAX_CLIENT_CNXNS, 10);
+UTIL.startMiniCluster();
+  }
+
+  @AfterClass
+  public static void afterClass() throws Exception {
+UTIL.shutdownMiniCluster();
+  }
+
+  @Before
+  public void before() throws IOException {
+LOG.info("before");
+UTIL.ensureSomeRegionServersAvailable(1);
+LOG.info("before done");
+  }
+
+  /**
+   * Open and close a TableOutputFormat.  The closing the RecordWriter should 
release HBase
+   * Connection (ZK) resources, and will throw exception if they are exhausted.
+   */
+  static void openCloseTableOutputFormat(int iter)  throws IOException {
+LOG.info("Instantiating TableOutputFormat connection  " + iter);
+JobConf conf = new JobConf();
+conf.addResource(UTIL.getConfiguration());
+conf.set(TableOutputFormat.OUTPUT_TABLE, TABLE);
+TableMapReduceUtil.initTableMapJob(TABLE, FAMILY, TableMap.class,
+ImmutableBytesWritable.class, ImmutableBytesWritable.class, conf);
+TableOutputFormat tof = new TableOutputFormat();
+RecordWriter rw = tof.getRecordWriter(null, conf, TABLE, null);
+rw.close(null);
+  }
+
+  @Test
+  public void testConnectionExhaustion() throws IOException {
+int MAX_INSTANCES = 5; // fails on iteration 3 if zk connections leak
+for (int i = 0; i < MAX_INSTANCES; i++) {
+  final int iter = i;
+  try {
+openCloseTableOutputFormat(iter);
+  } catch (Exception e) {
+LOG.error("Exception encountered", e);
+fail("Failed on iteration " + i);
+  }
+}
+  }
+
+}

http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
 
b/hbase-mapreduce/src/tes

[48/50] [abbrv] hbase git commit: HBASE-18688 Upgrade commons-codec to 1.10

2017-08-26 Thread busbey
HBASE-18688 Upgrade commons-codec to 1.10

Change-Id: I764495e969c99c39b77e2e7541612ee828257126


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f386a9a3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f386a9a3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f386a9a3

Branch: refs/heads/HBASE-18467
Commit: f386a9a3756f935f5feec6e4264d651d666aef5e
Parents: 664b6be
Author: Apekshit Sharma 
Authored: Fri Aug 25 14:09:01 2017 -0700
Committer: Apekshit Sharma 
Committed: Sat Aug 26 02:00:21 2017 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f386a9a3/pom.xml
--
diff --git a/pom.xml b/pom.xml
index e610c22..370166b 100755
--- a/pom.xml
+++ b/pom.xml
@@ -1379,7 +1379,7 @@
 
 1.7.7
 1.4
-    <commons-codec.version>1.9</commons-codec.version>
+    <commons-codec.version>1.10</commons-codec.version>
 
 2.5
 2.6



[13/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
deleted file mode 100644
index 87522b6..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
+++ /dev/null
@@ -1,1495 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNotSame;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-import java.io.IOException;
-import java.lang.reflect.Field;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Map.Entry;
-import java.util.Random;
-import java.util.Set;
-import java.util.concurrent.Callable;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.LocatedFileStatus;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.RemoteIterator;
-import org.apache.hadoop.hbase.ArrayBackedTag;
-import org.apache.hadoop.hbase.CategoryBasedTimeout;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.CompatibilitySingletonFactory;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.HDFSBlocksDistribution;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.HadoopShims;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.PerformanceEvaluation;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.Tag;
-import org.apache.hadoop.hbase.TagType;
-import org.apache.hadoop.hbase.TagUtil;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RegionLocator;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.io.compress.Compression;
-import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
-import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.io.hfile.CacheConfig;
-import org.apache.hadoop.hbase.io.hfile.HFile;
-import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
-import org.apache.hadoop.hbase.io.hfile.HFileScanner;
-import org.apache.hadoop.hbase.regionserver.BloomType;
-import org.apache.hadoop.hbase.regionserver.HRegion;
-import org.apache.hadoop.hbase.regionserver.Store;
-import org.apache.hadoop.hbase.regionserver.StoreFile;
-import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
-import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.hbase.util.ReflectionUtils;
-import org.apache.hadoop.hbase.util.Writables;
-import org.apache.hadoop.hdfs.DistributedFileSystem;
-import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
-import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicy

[09/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
--
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
deleted file mode 100644
index 0f49333..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
+++ /dev/null
@@ -1,287 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.NavigableMap;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Reducer;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.BeforeClass;
-
-
-/**
- * 
- * Tests various scan start and stop row scenarios. This is set in a scan and
- * tested in a MapReduce job to see if that is handed over and done properly
- * too.
- * 
- * 
- * This test is broken into two parts in order to side-step the test timeout
- * period of 900, as documented in HBASE-8326.
- * 
- */
-public abstract class TestTableInputFormatScanBase {
-
-  private static final Log LOG = 
LogFactory.getLog(TestTableInputFormatScanBase.class);
-  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
-
-  static final TableName TABLE_NAME = TableName.valueOf("scantest");
-  static final byte[][] INPUT_FAMILYS = {Bytes.toBytes("content1"), 
Bytes.toBytes("content2")};
-  static final String KEY_STARTROW = "startRow";
-  static final String KEY_LASTROW = "stpRow";
-
-  private static Table table = null;
-
-  @BeforeClass
-  public static void setUpBeforeClass() throws Exception {
-// test intermittently fails under hadoop2 (2.0.2-alpha) if shortcircuit-read (scr) is on.
-// this turns it off for this test.  TODO: Figure out why scr breaks recovery.
-System.setProperty("hbase.tests.use.shortcircuit.reads", "false");
-
-// switch TIF to log at DEBUG level
-TEST_UTIL.enableDebug(TableInputFormat.class);
-TEST_UTIL.enableDebug(TableInputFormatBase.class);
-// start mini hbase cluster
-TEST_UTIL.startMiniCluster(3);
-// create and fill table
-table = TEST_UTIL.createMultiRegionTable(TABLE_NAME, INPUT_FAMILYS);
-TEST_UTIL.loadTable(table, INPUT_FAMILYS, null, false);
-  }
-
-  @AfterClass
-  public static void tearDownAfterClass() throws Exception {
-TEST_UTIL.shutdownMiniCluster();
-  }
-
-  /**
-   * Pass the key and value to reduce.
-   */
-  public static class ScanMapper
-  extends TableMapper {
-
-/**
- * Pass the key and value to reduce.
- *
- * @param key  The key, here "aaa", "aab" etc.
- * @param value  The value is the same as the key.
- * @param context  The task context.
- * @throws IOException When reading the rows fails.
- */
-@Override
-public void map(ImmutableBytesWritable key, Result value,
-  Context context)
-throws IOException, InterruptedException {
-  if (value.size() != 2) {
-throw new IOException("There should be two input columns");
-  }
-  Map>>
-cfMap = value.getMap();
-
-  if (!cfMap.containsKey(INPUT_FAMILYS[0]) |

[36/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
--
diff --git a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
new file mode 100644
index 000..23a70a9
--- /dev/null
+++ b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -0,0 +1,2627 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import static org.codehaus.jackson.map.SerializationConfig.Feature.SORT_PROPERTIES_ALPHABETICALLY;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.lang.reflect.Constructor;
+import java.math.BigDecimal;
+import java.math.MathContext;
+import java.text.DecimalFormat;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Date;
+import java.util.LinkedList;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Queue;
+import java.util.Random;
+import java.util.TreeMap;
+import java.util.NoSuchElementException;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Append;
+import org.apache.hadoop.hbase.client.AsyncConnection;
+import org.apache.hadoop.hbase.client.AsyncTable;
+import org.apache.hadoop.hbase.client.BufferedMutator;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Consistency;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RawAsyncTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.RowMutations;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterAllFilter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.io.hfile.RandomDistribution;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.CompactingMemStore;
+import org.apache.hadoop.hbase.regionserver.TestHRegionFileSystem;
+import org.apache.hadoop.hbase.trace.HBaseHTraceConfiguration;
+import org.apache.hadoop.hbase.trace.SpanReceiverHost;
+import org.apache.hadoop.hbase.util.*;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.mapreduce.lib.reduce.LongSumRedu

[50/50] [abbrv] hbase git commit: HBASE-18467 WIP run all stages and build jira comments.

2017-08-26 Thread busbey
HBASE-18467 WIP run all stages and build jira comments.

Currently blocked by JENKINS-46358

HBASE-18467 use single find as a work around.

HBASE-18467 trying to get StringGroovyMethods instead of DefaultGroovyMethods

HBASE-18467 still trying to get the StringGroovyMethods version.

HBASE-18467 move pattern into a variable because groovy is horrible.

HBASE-18467 move the try block's start to cover more.

HBASE-18467 switch to using the java class for Pattern.

HBASE-18467 just call the groovy implementation directly.

HBASE-18467 has to be the charsequence version, not the string version. :rolling_eyes_cat:

HBASE-18467 maybe declared types for CharSequence?

HBASE-18467 indirect.

HBASE-18467

HBASE-18467


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ea7baa56
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ea7baa56
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ea7baa56

Branch: refs/heads/HBASE-18467
Commit: ea7baa560c1333c3c2292ca2cec6a1cb65ce8627
Parents: f53051b
Author: Sean Busbey 
Authored: Wed Aug 9 00:48:46 2017 -0500
Committer: Sean Busbey 
Committed: Sun Aug 27 00:23:23 2017 -0500

--
 dev-support/Jenkinsfile| 143 ++--
 dev-support/hbase_nightly_yetus.sh |   7 ++
 2 files changed, 143 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/ea7baa56/dev-support/Jenkinsfile
--
diff --git a/dev-support/Jenkinsfile b/dev-support/Jenkinsfile
index 1f01a47..27850a4 100644
--- a/dev-support/Jenkinsfile
+++ b/dev-support/Jenkinsfile
@@ -17,7 +17,9 @@
 pipeline {
   agent {
 node {
-  label 'Hadoop'
+//  label 'Hadoop'
+// temp go to ubuntu since it seems like no one uses those
+  label 'ubuntu'
 }
   }
   triggers {
@@ -128,7 +130,18 @@ curl -L  -o personality.sh "${env.PROJET_PERSONALITY}"
   steps {
 unstash 'yetus'
// TODO should this be a download from master, similar to how the personality is?
-sh "${env.BASEDIR}/dev-support/hbase_nightly_yetus.sh"
+sh '''#!/usr/bin/env bash
+  rm -f "${OUTPUTDIR}/success" "${OUTPUTDIR}/failure"
+  declare commentfile
+  if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then
+commentfile="${OUTPUTDIR}/success"
+echo '(/) *{color:green}+1 general checks{color}*' >> 
"${commentfile}"
+  else
+commentfile="${OUTPUTDIR}/failure"
+echo '(x) *{color:red}-1 general checks{color}*' >> 
"${commentfile}"
+  fi
+  echo "-- For more information [see general 
report|${BUILD_URL}/General_Nightly_Build_Report/]" >> "${commentfile}"
+'''
   }
   post {
 always {
@@ -159,13 +172,22 @@ curl -L  -o personality.sh "${env.PROJET_PERSONALITY}"
   }
   steps {
 unstash 'yetus'
-sh """#!/usr/bin/env bash
+sh '''#!/usr/bin/env bash
   # for branch-1.1 we don't do jdk8 findbugs, so do it here
-  if [ "${env.BRANCH_NAME}" == "branch-1.1" ]; then
+  if [ "${BRANCH_NAME}" == "branch-1.1" ]; then
 TESTS+=",findbugs"
   fi
-  "${env.BASEDIR}/dev-support/hbase_nightly_yetus.sh"
-"""
+  declare commentfile
+  rm -f "${OUTPUTDIR}/success" "${OUTPUTDIR}/failure"
+  if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then
+commentfile="${OUTPUTDIR}/success"
+echo '(/) *{color:green}+1 jdk7 checks{color}*' >> "${commentfile}"
+  else
+commentfile="${OUTPUTDIR}/failure"
+echo '(x) *{color:red}-1 jdk7 checks{color}*' >> "${commentfile}"
+  fi
+  echo "-- For more information [see jdk7 
report|${BUILD_URL}/JDK7_Nightly_Build_Report/]" >> "${commentfile}"
+'''
   }
   post {
 always {
@@ -215,7 +237,18 @@ curl -L  -o personality.sh "${env.PROJET_PERSONALITY}"
   }
   steps {
 unstash 'yetus'
-sh "${env.BASEDIR}/dev-support/hbase_nightly_yetus.sh"
+sh '''#!/usr/bin/env bash
+  declare commentfile
+  rm -f "${OUTPUTDIR}/success" "${OUTPUTDIR}/failure"
+  if "${BASEDIR}/dev-support/hbase_nightly_yetus.sh" ; then
+commentfile="${OUTPUTDIR}/success"
+echo '(/) *{color:green}+1 jdk8 checks{color}*' >> "${commentfile}"
+  else
+commentfile="${OUTPUTDIR}/failure"
+echo '(x) *{color:red}-1 jdk8 checks{color}*' >> "${commentfile}"
+  fi
+  echo "-- For more information [see jdk8 report|${BUILD_URL}/JDK8_Nightly_Build_Report/]" >> "${commentfile}"
+'''
   }
   post {
 always {
@@ -287,6 +320,7 @@ curl -L  -o personalit

[29/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
new file mode 100644
index 000..13b6a96
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
@@ -0,0 +1,287 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+
+
+/**
+ * 
+ * Tests various scan start and stop row scenarios. This is set in a scan and
+ * tested in a MapReduce job to see if that is handed over and done properly
+ * too.
+ * 
+ * 
+ * This test is broken into two parts in order to side-step the test timeout
+ * period of 900, as documented in HBASE-8326.
+ * 
+ */
+public abstract class TestTableInputFormatScanBase {
+
+  private static final Log LOG = 
LogFactory.getLog(TestTableInputFormatScanBase.class);
+  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  static final TableName TABLE_NAME = TableName.valueOf("scantest");
+  static final byte[][] INPUT_FAMILYS = {Bytes.toBytes("content1"), 
Bytes.toBytes("content2")};
+  static final String KEY_STARTROW = "startRow";
+  static final String KEY_LASTROW = "stpRow";
+
+  private static Table table = null;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+// test intermittently fails under hadoop2 (2.0.2-alpha) if 
shortcircuit-read (scr) is on.
+// this turns it off for this test.  TODO: Figure out why scr breaks 
recovery.
+System.setProperty("hbase.tests.use.shortcircuit.reads", "false");
+
+// switch TIF to log at DEBUG level
+TEST_UTIL.enableDebug(TableInputFormat.class);
+TEST_UTIL.enableDebug(TableInputFormatBase.class);
+// start mini hbase cluster
+TEST_UTIL.startMiniCluster(3);
+// create and fill table
+table = TEST_UTIL.createMultiRegionTable(TABLE_NAME, INPUT_FAMILYS);
+TEST_UTIL.loadTable(table, INPUT_FAMILYS, null, false);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * Pass the key and value to reduce.
+   */
+  public static class ScanMapper
+  extends TableMapper<ImmutableBytesWritable, ImmutableBytesWritable> {
+
+/**
+ * Pass the key and value to reduce.
+ *
+ * @param key  The key, here "aaa", "aab" etc.
+ * @param value  The value is the same as the key.
+ * @param context  The task context.
+ * @throws IOException When reading the rows fails.
+ */
+@Override
+public void map(ImmutableBytesWritable key, Result value,
+  Context context)
+throws IOException, InterruptedException {
+  if (value.size() != 2) {
+throw new IOException("There should be two input columns");
+  }
+  Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+cfMap = value.getMap();
+
+  if (!cfMap.containsKey(INPUT_FAMILY

[19/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
deleted file mode 100644
index bf11473..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
+++ /dev/null
@@ -1,412 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.mapreduce;
-
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HDFSBlocksDistribution;
-import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;
-import org.apache.hadoop.hbase.HRegionInfo;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.classification.InterfaceStability;
-import org.apache.hadoop.hbase.client.ClientSideRegionScanner;
-import org.apache.hadoop.hbase.client.IsolationLevel;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MapReduceProtos.TableSnapshotRegionSplit;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest;
-import org.apache.hadoop.hbase.regionserver.HRegion;
-import org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper;
-import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;
-import org.apache.hadoop.hbase.snapshot.SnapshotManifest;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.io.Writable;
-
-import java.io.ByteArrayOutputStream;
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.UUID;
-
-/**
- * Hadoop MR API-agnostic implementation for mapreduce over table snapshots.
- */
-@InterfaceAudience.Private
-@InterfaceStability.Evolving
-public class TableSnapshotInputFormatImpl {
-  // TODO: Snapshots files are owned in fs by the hbase user. There is no
-  // easy way to delegate access.
-
-  public static final Log LOG = 
LogFactory.getLog(TableSnapshotInputFormatImpl.class);
-
-  private static final String SNAPSHOT_NAME_KEY = 
"hbase.TableSnapshotInputFormat.snapshot.name";
-  // key for specifying the root dir of the restored snapshot
-  protected static final String RESTORE_DIR_KEY = 
"hbase.TableSnapshotInputFormat.restore.dir";
-
-  /** See {@link #getBestLocations(Configuration, HDFSBlocksDistribution)} */
-  private static final String LOCALITY_CUTOFF_MULTIPLIER =
-"hbase.tablesnapshotinputformat.locality.cutoff.multiplier";
-  private static final float DEFAULT_LOCALITY_CUTOFF_MULTIPLIER = 0.8f;
-
-  /**
-   * Implementation class for InputSplit logic common between mapred and 
mapreduce.
-   */
-  public static class InputSplit implements Writable {
-
-private TableDescriptor htd;
-private HRegionInfo regionInfo;
-private String[] locations;
-private String scan;
-private String restoreDir;
-
-// constructor for mapreduce framework / Writable
-public InputSplit() {}
-
-public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, 
List locations,
-Scan scan, Path restoreDir) {
-  this.htd = htd;
-  this.regionInfo = regionInfo;
-  if (locations == null || locations.isEmpty()) {
-this.locations = new String[0];
- 
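
For orientation, the usual caller of this implementation class is TableMapReduceUtil.initTableSnapshotMapperJob, which (roughly) restores the snapshot into a temporary directory and builds the splits described above. A minimal, hedged sketch follows; the snapshot name, restore path and identity mapper are placeholders.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class SnapshotScanJobSketch {
      public static Job buildJob() throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "snapshot-scan-sketch");
        // "my_snapshot" and the restore directory are placeholders; the boolean asks
        // for dependency jars to be shipped with the job.
        TableMapReduceUtil.initTableSnapshotMapperJob("my_snapshot", new Scan(),
            IdentityTableMapper.class, ImmutableBytesWritable.class, Result.class,
            job, true, new Path("/tmp/snapshot-restore"));
        job.setOutputFormatClass(NullOutputFormat.class);
        return job;
      }
    }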

[30/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
new file mode 100644
index 000..694a359
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
@@ -0,0 +1,264 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.testclassification.MapReduceTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+/**
+ * Test Map/Reduce job over HBase tables. The map/reduce process we're testing
+ * on our tables is simple - take every row in the table, reverse the value of
+ * a particular cell, and write it back to the table.
+ */
+@Category({MapReduceTests.class, LargeTests.class})
+public class TestMultithreadedTableMapper {
+  private static final Log LOG = 
LogFactory.getLog(TestMultithreadedTableMapper.class);
+  private static final HBaseTestingUtility UTIL =
+  new HBaseTestingUtility();
+  static final TableName MULTI_REGION_TABLE_NAME = TableName.valueOf("mrtest");
+  static final byte[] INPUT_FAMILY = Bytes.toBytes("contents");
+  static final byte[] OUTPUT_FAMILY = Bytes.toBytes("text");
+  static final int NUMBER_OF_THREADS = 10;
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+// Up the handlers; this test needs more than usual.
+
UTIL.getConfiguration().setInt(HConstants.REGION_SERVER_HIGH_PRIORITY_HANDLER_COUNT,
 10);
+UTIL.startMiniCluster();
+Table table =
+UTIL.createMultiRegionTable(MULTI_REGION_TABLE_NAME, new byte[][] { 
INPUT_FAMILY,
+OUTPUT_FAMILY });
+UTIL.loadTable(table, INPUT_FAMILY, false);
+UTIL.waitUntilAllRegionsAssigned(MULTI_REGION_TABLE_NAME);
+  }
+
+  @AfterClass
+  public static void afterClass() throws Exception {
+UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * Pass the given key and processed record reduce
+   */
+  public static class ProcessContentsMapper
+  extends TableMapper<ImmutableBytesWritable, Put> {
+
+/**
+ * Pass the key, and reversed value to reduce
+ *
+ * @param key
+ * @param value
+ * @param context
+ * @throws IOException
+ */
+@Override
+public void map(ImmutableBytesWritable key, Result value,
+Context context)
+throws IOException, InterruptedException {
+  if (value.size() != 1) {
+throw new IOException("There should only be one input column");
+  }
+  Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+  cf = value.getMap();
+  if(!cf.containsKey(INPUT_FAMILY)) {
+throw new IOException("Wrong input columns. Missing: '" +
+Bytes.toString(INPUT_FAMILY) + "'.");
+  }
+  // Get the
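
The multithreaded wrapper this test drives is configured through its static helpers. A hedged job-setup sketch is below; the table, family and thread count follow the constants above, while the identity mapper stands in for the test's ProcessContentsMapper purely for illustration.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
    import org.apache.hadoop.hbase.mapreduce.MultithreadedTableMapper;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class MultithreadedMapperJobSketch {
      public static Job buildJob() throws Exception {
        Job job = Job.getInstance(HBaseConfiguration.create(), "mt-mapper-sketch");
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("contents"));  // INPUT_FAMILY above
        // The wrapper mapper fans incoming rows out to a pool of threads per task.
        TableMapReduceUtil.initTableMapperJob("mrtest", scan, MultithreadedTableMapper.class,
            ImmutableBytesWritable.class, Result.class, job);
        MultithreadedTableMapper.setMapperClass(job, IdentityTableMapper.class);
        MultithreadedTableMapper.setNumberOfThreads(job, 10);  // NUMBER_OF_THREADS above
        job.setOutputFormatClass(NullOutputFormat.class);
        return job;
      }
    }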

[07/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
deleted file mode 100644
index ad832e3..000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
+++ /dev/null
@@ -1,968 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements. See the NOTICE file distributed with this
- * work for additional information regarding copyright ownership. The ASF
- * licenses this file to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.hadoop.hbase.util;
-
-import java.io.IOException;
-import java.io.InterruptedIOException;
-import java.lang.reflect.Constructor;
-import java.net.InetAddress;
-import java.security.SecureRandom;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Locale;
-import java.util.Properties;
-import java.util.Random;
-import java.util.concurrent.atomic.AtomicReference;
-
-import javax.crypto.spec.SecretKeySpec;
-
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.HBaseInterfaceAudience;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.io.compress.Compression;
-import org.apache.hadoop.hbase.io.crypto.Cipher;
-import org.apache.hadoop.hbase.io.crypto.Encryption;
-import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.regionserver.BloomType;
-import org.apache.hadoop.hbase.security.EncryptionUtil;
-import org.apache.hadoop.hbase.security.User;
-import org.apache.hadoop.hbase.security.access.AccessControlClient;
-import org.apache.hadoop.hbase.security.access.Permission;
-import org.apache.hadoop.hbase.util.test.LoadTestDataGenerator;
-import org.apache.hadoop.hbase.util.test.LoadTestDataGeneratorWithACL;
-import org.apache.hadoop.security.SecurityUtil;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.util.ToolRunner;
-
-/**
- * A command-line utility that reads, writes, and verifies data. Unlike
- * {@link org.apache.hadoop.hbase.PerformanceEvaluation}, this tool validates 
the data written,
- * and supports simultaneously writing and reading the same set of keys.
- */
-@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
-public class LoadTestTool extends AbstractHBaseTool {
-
-  private static final Log LOG = LogFactory.getLog(LoadTestTool.class);
-  private static final String COLON = ":";
-
-  /** Table name for the test */
-  private TableName tableName;
-
-  /** Column families for the test */
-  private byte[][] families;
-
-  /** Table name to use of not overridden on the command line */
-  protected static final String DEFAULT_TABLE_NAME = "cluster_test";
-
-  /** Column family used by the test */
-  public static byte[] DEFAULT_COLUMN_FAMILY = Bytes.toBytes("test_cf");
-
-  /** Column families used by the test */
-  public static final byte[][] DEFAULT_COLUMN_FAMILIES = { 
DEFAULT_COLUMN_FAMILY };
-
-  /** The default data size if not specified */
-  protected static final int DEFAULT_DATA_SIZE = 64;
-
-  /** The number of reader/writer threads if not specified */
-  protected static final int DEFAULT_NUM_THREADS = 20;
-
-  /** Usage string for the load option */
-  protected static final String OPT_USAGE_LOAD =
-  ":" +
-  "[:<#threads=" + DEFAULT_NUM_THREADS + ">]";
-
-  /** Usage string for the read option */
-  protected static final String OPT_USAGE_READ =
-  "[:<#threads=" + DEFAULT_NUM_THREADS + ">]";
-
-  /** Usage string for t
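
As a rough usage sketch, the tool is normally driven through ToolRunner. The argument values below are illustrative only, and the flag spellings are an assumption based on the tool's help output rather than anything shown in this excerpt.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.util.LoadTestTool;
    import org.apache.hadoop.util.ToolRunner;

    public class LoadTestToolSketch {
      public static void main(String[] args) throws Exception {
        // Illustrative values: 3 columns per key of ~1024 bytes written by 10 threads,
        // 100% of keys read back and verified by 20 threads, over 100000 keys.
        String[] toolArgs = {
            "-tn", "cluster_test",
            "-num_keys", "100000",
            "-write", "3:1024:10",
            "-read", "100:20"
        };
        System.exit(ToolRunner.run(HBaseConfiguration.create(), new LoadTestTool(), toolArgs));
      }
    }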

[21/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
deleted file mode 100644
index c72a0c3..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
+++ /dev/null
@@ -1,786 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.Collections;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellComparator;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Mutation;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.mapreduce.Counters;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.GenericOptionsParser;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-
-import org.apache.hadoop.hbase.shaded.com.google.common.base.Throwables;
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.Iterators;
-
-public class SyncTable extends Configured implements Tool {
-
-  private static final Log LOG = LogFactory.getLog(SyncTable.class);
-
-  static final String SOURCE_HASH_DIR_CONF_KEY = "sync.table.source.hash.dir";
-  static final String SOURCE_TABLE_CONF_KEY = "sync.table.source.table.name";
-  static final String TARGET_TABLE_CONF_KEY = "sync.table.target.table.name";
-  static final String SOURCE_ZK_CLUSTER_CONF_KEY = 
"sync.table.source.zk.cluster";
-  static final String TARGET_ZK_CLUSTER_CONF_KEY = 
"sync.table.target.zk.cluster";
-  static final String DRY_RUN_CONF_KEY="sync.table.dry.run";
-
-  Path sourceHashDir;
-  String sourceTableName;
-  String targetTableName;
-
-  String sourceZkCluster;
-  String targetZkCluster;
-  boolean dryRun;
-
-  Counters counters;
-
-  public SyncTable(Configuration conf) {
-super(conf);
-  }
-
-  public Job createSubmittableJob(String[] args) throws IOException {
-FileSystem fs = sourceHashDir.getFileSystem(getConf());
-if (!fs.exists(sourceHashDir)) {
-  throw new IOException("Source hash dir not found: " + sourceHashDir);
-}
-
-HashTable.TableHash tableHash = HashTable.TableHash.read(getConf(), 
sourceHashDir);
-LOG.info("Read source hash manifest: " + tableHash);
-LOG.info("Read " + tableHash.partitions.size() + " partition keys");
-if (!tableHash.tableName.equals(sourceTableName)) {
-  LOG.warn("Table name mismatch - manifest indicates hash was taken from: "
-  + tableHash.tableName + " but job is reading from: " + 
sourceTableName);
-}
-if (tableHash.numHashFiles != tableHash.partitions.size() + 1) {
-  throw new RuntimeException("Hash data appears corrupt. The number of of 
hash files created"
-  + " should be 1 more than the number of partition keys.  However, 
the manifest file "
-  + " says numHashFiles=" + tableHash.numHashFiles + " but the number 
of partition keys"
-  + " found in the partitions file is " + tableHash.partitions.siz

[26/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java
deleted file mode 100644
index 43560fd..000
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java
+++ /dev/null
@@ -1,121 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapred;
-
-import java.io.IOException;
-
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.mapred.FileOutputFormat;
-import org.apache.hadoop.mapred.JobClient;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-
-/**
- * A job with a map to count rows.
- * Map outputs table rows IF the input row has columns that have content.
- * Uses a org.apache.hadoop.mapred.lib.IdentityReducer
- */
-@InterfaceAudience.Public
-public class RowCounter extends Configured implements Tool {
-  // Name of this 'program'
-  static final String NAME = "rowcounter";
-
-  /**
-   * Mapper that runs the count.
-   */
-  static class RowCounterMapper
-  implements TableMap<ImmutableBytesWritable, Result> {
-private static enum Counters {ROWS}
-
-public void map(ImmutableBytesWritable row, Result values,
-OutputCollector<ImmutableBytesWritable, Result> output,
-Reporter reporter)
-throws IOException {
-// Count every row containing data, whether it's in qualifiers or 
values
-reporter.incrCounter(Counters.ROWS, 1);
-}
-
-public void configure(JobConf jc) {
-  // Nothing to do.
-}
-
-public void close() throws IOException {
-  // Nothing to do.
-}
-  }
-
-  /**
-   * @param args
-   * @return the JobConf
-   * @throws IOException
-   */
-  public JobConf createSubmittableJob(String[] args) throws IOException {
-JobConf c = new JobConf(getConf(), getClass());
-c.setJobName(NAME);
-// Columns are space delimited
-StringBuilder sb = new StringBuilder();
-final int columnoffset = 2;
-for (int i = columnoffset; i < args.length; i++) {
-  if (i > columnoffset) {
-sb.append(" ");
-  }
-  sb.append(args[i]);
-}
-// Second argument is the table name.
-TableMapReduceUtil.initTableMapJob(args[1], sb.toString(),
-  RowCounterMapper.class, ImmutableBytesWritable.class, Result.class, c);
-c.setNumReduceTasks(0);
-// First arg is the output directory.
-FileOutputFormat.setOutputPath(c, new Path(args[0]));
-return c;
-  }
-
-  static int printUsage() {
-System.out.println(NAME +
-  "[...]");
-return -1;
-  }
-
-  public int run(final String[] args) throws Exception {
-// Make sure there are at least 3 parameters
-if (args.length < 3) {
-  System.err.println("ERROR: Wrong number of parameters: " + args.length);
-  return printUsage();
-}
-JobClient.runJob(createSubmittableJob(args));
-return 0;
-  }
-
-  /**
-   * @param args
-   * @throws Exception
-   */
-  public static void main(String[] args) throws Exception {
-int errCode = ToolRunner.run(HBaseConfiguration.create(), new 
RowCounter(), args);
-System.exit(errCode);
-  }
-}
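
A small usage sketch consistent with createSubmittableJob above (output directory first, then the table, then space-separated family:qualifier columns); the concrete values are placeholders.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapred.RowCounter;
    import org.apache.hadoop.util.ToolRunner;

    public class RowCounterSketch {
      public static void main(String[] args) throws Exception {
        // args[0] = output dir, args[1] = table, remaining args = columns to require.
        String[] toolArgs = { "/tmp/rowcounter-out", "usertable", "info:name" };
        System.exit(ToolRunner.run(HBaseConfiguration.create(), new RowCounter(), toolArgs));
      }
    }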

http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
deleted file mode 100644
index 208849a..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/map

[22/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
deleted file mode 100644
index e18b3aa..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java
+++ /dev/null
@@ -1,297 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.IOException;
-import java.text.MessageFormat;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.HRegionInfo;
-import org.apache.hadoop.hbase.HRegionLocation;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.RegionLocator;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.Pair;
-import org.apache.hadoop.hbase.util.RegionSizeCalculator;
-import org.apache.hadoop.mapreduce.InputFormat;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.JobContext;
-import org.apache.hadoop.mapreduce.RecordReader;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-
-import java.util.Map;
-import java.util.HashMap;
-import java.util.Iterator;
-/**
- * A base for {@link MultiTableInputFormat}s. Receives a list of
- * {@link Scan} instances that define the input tables and
- * filters etc. Subclasses may use other TableRecordReader implementations.
- */
-@InterfaceAudience.Public
-public abstract class MultiTableInputFormatBase extends
-InputFormat<ImmutableBytesWritable, Result> {
-
-  private static final Log LOG = 
LogFactory.getLog(MultiTableInputFormatBase.class);
-
-  /** Holds the set of scans used to define the input. */
-  private List<Scan> scans;
-
-  /** The reader scanning the table, can be a custom one. */
-  private TableRecordReader tableRecordReader = null;
-
-  /**
-   * Builds a TableRecordReader. If no TableRecordReader was provided, uses the
-   * default.
-   *
-   * @param split The split to work with.
-   * @param context The current context.
-   * @return The newly created record reader.
-   * @throws IOException When creating the reader fails.
-   * @throws InterruptedException when record reader initialization fails
-   * @see org.apache.hadoop.mapreduce.InputFormat#createRecordReader(
-   *  org.apache.hadoop.mapreduce.InputSplit,
-   *  org.apache.hadoop.mapreduce.TaskAttemptContext)
-   */
-  @Override
-  public RecordReader<ImmutableBytesWritable, Result> createRecordReader(
-  InputSplit split, TaskAttemptContext context)
-  throws IOException, InterruptedException {
-TableSplit tSplit = (TableSplit) split;
-LOG.info(MessageFormat.format("Input split length: {0} bytes.", 
tSplit.getLength()));
-
-if (tSplit.getTable() == null) {
-  throw new IOException("Cannot create a record reader because of a"
-  + " previous error. Please look at the previous logs lines from"
-  + " the task's full log for more details.");
-}
-final Connection connection = 
ConnectionFactory.createConnection(context.getConfiguration());
-Table table = connection.getTable(tSplit.getTable());
-
-if (this.tableRecordReader == null) {
-  this.tableRecordReader = new TableRecordReader();
-}
-final TableRecordReader trr = this.tableRecordReader;
-
-try {
-  Scan sc = tSplit.getScan();
-  sc.setStartRow(tSplit.getStartRow());
-  sc.setStopRow(tSplit.getEndRow());
-  trr.setScan(sc);
-  trr.setTable(table);
-  return new RecordReader<ImmutableBytesWritable, Result>() {
-
-@
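
A hedged sketch of the multi-table setup this base class backs: one Scan per source table, each tagged with the table-name scan attribute, handed to the List-of-Scan overload of TableMapReduceUtil.initTableMapperJob. Table names and the identity mapper are placeholders.

    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.IdentityTableMapper;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class MultiTableScanSketch {
      public static Job buildJob() throws Exception {
        // Each Scan names its table via the scan attribute the multi-table formats expect.
        Scan scanA = new Scan();
        scanA.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes("tableA"));
        Scan scanB = new Scan();
        scanB.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes("tableB"));
        List<Scan> scans = Arrays.asList(scanA, scanB);

        Job job = Job.getInstance(HBaseConfiguration.create(), "multi-table-scan-sketch");
        TableMapReduceUtil.initTableMapperJob(scans, IdentityTableMapper.class,
            ImmutableBytesWritable.class, Result.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        return job;
      }
    }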

[33/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
new file mode 100644
index 000..c6a8761
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
@@ -0,0 +1,1496 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNotSame;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.Callable;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ArrayBackedTag;
+import org.apache.hadoop.hbase.CategoryBasedTimeout;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.CompatibilitySingletonFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HDFSBlocksDistribution;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HadoopShims;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.PerformanceEvaluation;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.Tag;
+import org.apache.hadoop.hbase.TagType;
+import org.apache.hadoop.hbase.TagUtil;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionLocator;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.TestHRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.ReflectionUtils;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStat

[46/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
new file mode 100644
index 000..9811a97
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
@@ -0,0 +1,313 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.RegionLocator;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.InputFormat;
+import org.apache.hadoop.mapred.InputSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.RecordReader;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * A Base for {@link TableInputFormat}s. Receives a {@link Table}, a
+ * byte[] of input columns and optionally a {@link Filter}.
+ * Subclasses may use other TableRecordReader implementations.
+ *
+ * Subclasses MUST ensure initializeTable(Connection, TableName) is called for 
an instance to
+ * function properly. Each of the entry points to this class used by the 
MapReduce framework,
+ * {@link #getRecordReader(InputSplit, JobConf, Reporter)} and {@link 
#getSplits(JobConf, int)},
+ * will call {@link #initialize(JobConf)} as a convenient centralized location 
to handle
+ * retrieving the necessary configuration information. If your subclass 
overrides either of these
+ * methods, either call the parent version or call initialize yourself.
+ *
+ * 
+ * An example of a subclass:
+ * 
+ *   class ExampleTIF extends TableInputFormatBase {
+ *
+ * {@literal @}Override
+ * protected void initialize(JobConf context) throws IOException {
+ *   // We are responsible for the lifecycle of this connection until we 
hand it over in
+ *   // initializeTable.
+ *   Connection connection =
+ *  ConnectionFactory.createConnection(HBaseConfiguration.create(job));
+ *   TableName tableName = TableName.valueOf("exampleTable");
+ *   // mandatory. once passed here, TableInputFormatBase will handle 
closing the connection.
+ *   initializeTable(connection, tableName);
+ *   byte[][] inputColumns = new byte [][] { Bytes.toBytes("columnA"),
+ * Bytes.toBytes("columnB") };
+ *   // mandatory
+ *   setInputColumns(inputColumns);
+ *   // optional, by default we'll get everything for the given columns.
+ *   Filter exampleFilter = new RowFilter(CompareOp.EQUAL, new 
RegexStringComparator("aa.*"));
+ *   setRowFilter(exampleFilter);
+ * }
+ *   }
+ * 
+ */
+
+@InterfaceAudience.Public
+public abstract class TableInputFormatBase
+implements InputFormat<ImmutableBytesWritable, Result> {
+  private static final Log LOG = LogFactory.getLog(TableInputFormatBase.class);
+  private byte [][] inputColumns;
+  private Table table;
+  private RegionLocator regionLocator;
+  private Connection connection;
+  private TableRecordReader tableRecordReader;
+  private Filter rowFilter;
+
+  private static final String NOT_INITIALIZED = "The input format instance has 
not been properly " +
+  "initialized. Ensure you call initializeTable either in your constructor 
or initialize " +
+  "method";
+  private static final String INITIALIZATION_ERROR = "Cannot create a record 
reader because of a" +
+" previous error. Please look at the previous logs lines from" +
+" the task's full log for more details.";
+
+  /**
+   * Builds a TableRe

[37/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
new file mode 100644
index 000..e80410f
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
@@ -0,0 +1, @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.snapshot;
+
+import java.io.BufferedInputStream;
+import java.io.FileNotFoundException;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.LinkedList;
+import java.util.List;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.Option;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileChecksum;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.io.FileLink;
+import org.apache.hadoop.hbase.io.HFileLink;
+import org.apache.hadoop.hbase.io.WALLink;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.mapreduce.security.TokenCache;
+import org.apache.hadoop.hbase.io.hadoopbackport.ThrottledInputStream;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.Tool;
+
+/**
+ * Export the specified snapshot to a given FileSystem.
+ *
+ * The .snapshot/name folder is copied to the destination cluster
+ * and then all the hfiles/wals are copied using a Map-Reduce Job in the 
.archive/ location.
+ * When everything is done, the second cluster can restore the snapshot.
+ */
+@InterfaceAudience.Public
+public class ExportSnapshot extends AbstractHBaseTool implements Tool {
+  public static final String NAME = "exportsnapshot";
+  /** Configuration prefix for overrides for the source filesystem */
+  public static final String CONF_SOURCE_PREFIX = NAME + ".from.";
+  /** Configuration prefix for overrides for the destination filesystem */
+  public static final String CONF_DEST_PREFIX = NAME + ".to.";
+
+  private static final Log LOG = LogFactory.getLog(ExportSnapshot.class);
+
+  private static final String MR_NUM_MAPS = "mapreduce.job.maps";
+  private static fin
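
For orientation, a hedged invocation sketch: the snapshot name, destination URI and mapper count below are placeholders, and the flag spellings follow the tool's documented usage rather than anything shown in this excerpt.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class ExportSnapshotSketch {
      public static void main(String[] args) throws Exception {
        String[] toolArgs = {
            "-snapshot", "my_snapshot",                       // snapshot to ship
            "-copy-to", "hdfs://backup-cluster:8020/hbase",   // destination root dir
            "-mappers", "16"                                  // parallel copy tasks
        };
        System.exit(ToolRunner.run(HBaseConfiguration.create(), new ExportSnapshot(), toolArgs));
      }
    }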

[04/50] [abbrv] hbase git commit: HBASE-18679 Add a null check around the result of getCounters() in ITBLL

2017-08-26 Thread busbey
HBASE-18679 Add a null check around the result of getCounters() in ITBLL


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/2773510f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/2773510f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/2773510f

Branch: refs/heads/HBASE-18467
Commit: 2773510f120730f926569fef30c3e7b766517e89
Parents: 439191e
Author: Josh Elser 
Authored: Thu Aug 24 17:52:13 2017 -0400
Committer: Josh Elser 
Committed: Fri Aug 25 18:40:02 2017 -0400

--
 .../hbase/test/IntegrationTestBigLinkedList.java   | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/2773510f/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
--
diff --git 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
index 2fdfab6..f05ef66 100644
--- 
a/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
+++ 
b/hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
@@ -820,6 +820,11 @@ public class IntegrationTestBigLinkedList extends 
IntegrationTestBase {
 public boolean verify() {
   try {
 Counters counters = job.getCounters();
+if (counters == null) {
+  LOG.info("Counters object was null, Generator verification cannot be 
performed."
+  + " This is commonly a result of insufficient YARN 
configuration.");
+  return false;
+}
 
 if (counters.findCounter(Counts.TERMINATING).getValue() > 0 ||
 counters.findCounter(Counts.UNDEFINED).getValue() > 0 ||
@@ -1315,7 +1320,8 @@ public class IntegrationTestBigLinkedList extends 
IntegrationTestBase {
   if (success) {
 Counters counters = job.getCounters();
 if (null == counters) {
-  LOG.warn("Counters were null, cannot verify Job completion");
+  LOG.warn("Counters were null, cannot verify Job completion."
+  + " This is commonly a result of insufficient YARN 
configuration.");
   // We don't have access to the counters to know if we have "bad" counts
   return 0;
 }
@@ -1337,6 +1343,11 @@ public class IntegrationTestBigLinkedList extends 
IntegrationTestBase {
   }
 
   Counters counters = job.getCounters();
+  if (counters == null) {
+LOG.info("Counters object was null, write verification cannot be 
performed."
+  + " This is commonly a result of insufficient YARN 
configuration.");
+return false;
+  }
 
   // Run through each check, even if we fail one early
   boolean success = verifyExpectedValues(expectedReferenced, counters);



[18/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
deleted file mode 100644
index 8bb266e..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
+++ /dev/null
@@ -1,700 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce.replication;
-
-import java.io.IOException;
-import java.util.Arrays;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.Abortable;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.client.TableSnapshotScanner;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
-import org.apache.hadoop.hbase.filter.PrefixFilter;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
-import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
-import org.apache.hadoop.hbase.mapreduce.TableMapper;
-import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat;
-import org.apache.hadoop.hbase.mapreduce.TableSplit;
-import org.apache.hadoop.hbase.replication.ReplicationException;
-import org.apache.hadoop.hbase.replication.ReplicationFactory;
-import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
-import org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl;
-import org.apache.hadoop.hbase.replication.ReplicationPeers;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.hbase.util.Pair;
-import org.apache.hadoop.hbase.util.Threads;
-import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.MRJobConfig;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-
-import 
org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
-
-/**
- * This map-only job compares the data from a local table with a remote one.
- * Every cell is compared and must have exactly the same keys (even timestamp)
- * as well as same value. It is possible to restrict the job by time range and
- * families. The peer id that's provided must match the one given when the
- * replication stream was setup.
- * 
- * Two counters are provided, Verifier.Counters.GOODROWS and BADROWS. The reason
- * why a row is different is shown in the map's log.
- */
-public class VerifyReplication extends Configured implements Tool {
-
-  private static final Log LOG =
-  LogFactory.getLog(VerifyReplication.class);
-
-  public final static String NAME = "verifyrep";
-  private final static String PEER_CONFIG_PREFIX = NAME + ".peer.";
-  long startTime = 0;
-  long endTime = Long.MAX_VALUE;
-  int batch = -1;
-  int versions = -1;
-  String tableName = null;
-  String families = null;
-  String delimiter = "";
-  String peerId = null;
-  String rowPrefixes = null;
-  int sleepMsBeforeReCompare = 0;
-  boolean verbose = false;
-  boolean includeDeletedCells = false;
-  //Source table snapshot 
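
The deleted VerifyReplication javadoc above describes a map-only job that compares every cell of a local table against the peer cluster's copy, keyed on the peer id used when replication was set up, with optional time-range and family restrictions and GOODROWS/BADROWS counters. A minimal driver sketch, assuming the tool's conventional option spelling (--starttime, --endtime, --families); the peer id and table name are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
  import org.apache.hadoop.util.ToolRunner;

  public class VerifyReplicationDriver {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Hypothetical arguments: restrict by time range and family, then peer id and table.
      String[] toolArgs = {
          "--starttime=1503792000000",
          "--endtime=1503878400000",
          "--families=cf1",
          "1",          // peer id given when the replication stream was set up
          "usertable"   // local table to compare against the peer's copy
      };
      // GOODROWS/BADROWS show up in the job counters; mismatch details go to the map task logs.
      System.exit(ToolRunner.run(conf, new VerifyReplication(), toolArgs));
    }
  }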

[45/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
new file mode 100644
index 000..9cccf8c
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
@@ -0,0 +1,386 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Tool used to copy a table to another one which can be on a different setup.
+ * It is also configurable with a start and end time as well as a specification
+ * of the region server implementation if different from the local cluster.
+ */
+@InterfaceAudience.Public
+public class CopyTable extends Configured implements Tool {
+  private static final Log LOG = LogFactory.getLog(CopyTable.class);
+
+  final static String NAME = "copytable";
+  long startTime = 0;
+  long endTime = HConstants.LATEST_TIMESTAMP;
+  int batch = Integer.MAX_VALUE;
+  int cacheRow = -1;
+  int versions = -1;
+  String tableName = null;
+  String startRow = null;
+  String stopRow = null;
+  String dstTableName = null;
+  String peerAddress = null;
+  String families = null;
+  boolean allCells = false;
+  static boolean shuffle = false;
+
+  boolean bulkload = false;
+  Path bulkloadDir = null;
+
+  private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name";
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public Job createSubmittableJob(String[] args)
+  throws IOException {
+if (!doCommandLine(args)) {
+  return null;
+}
+
+Job job = Job.getInstance(getConf(), getConf().get(JOB_NAME_CONF_KEY, NAME 
+ "_" + tableName));
+job.setJarByClass(CopyTable.class);
+Scan scan = new Scan();
+
+scan.setBatch(batch);
+scan.setCacheBlocks(false);
+
+if (cacheRow > 0) {
+  scan.setCaching(cacheRow);
+} else {
+  
scan.setCaching(getConf().getInt(HConstants.HBASE_CLIENT_SCANNER_CACHING, 100));
+}
+
+scan.setTimeRange(startTime, endTime);
+
+if (allCells) {
+  scan.setRaw(true);
+}
+if (shuffle) {
+  job.getConfiguration().set(TableInputFormat.SHUFFLE_MAPS, "true");
+}
+if (versions >= 0) {
+  scan.setMaxVersions(versions);
+}
+
+if (startRow != null) {
+  scan.setStartRow(Bytes.toBytesBinary(startRow));
+}
+
+if (stopRow != null) {
+  scan.setStopRow(Bytes.toBytesBinary(stopRow));
+}
+
+if(families != null) {
+  String[] fams = families.split(",");
+  Map<String, String> cfRenameMap = new HashMap<>();
+  for(String fam : fams) {
+String sourceCf;
+if(fam.contains(":")) {
+// fam looks like "sourceCfName:destCfName"
+String[] srcAndDest = fam.split(":", 2);
+sourceCf = srcAndDest[0];
+String destCf = srcAndDest[1];
+cfRenameMap.put(sourceCf, destCf);
+} else {
+// fam is just "sourceCf"
+sourceCf = fam;
+ 
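
createSubmittableJob above turns the parsed options into a single Scan (batch, caching, time range, raw cells, shuffle, max versions, start/stop row, family rename map) before the job is handed to MapReduce. A minimal driver sketch, assuming the conventional option spelling used by the tool; table names and timestamps are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.CopyTable;
  import org.apache.hadoop.util.ToolRunner;

  public class CopyTableDriver {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Hypothetical arguments mirroring the fields parsed by doCommandLine().
      String[] toolArgs = {
          "--starttime=1503792000000",   // startTime field
          "--endtime=1503878400000",     // endTime field
          "--families=srcCf:dstCf",      // rename srcCf to dstCf on the destination
          "--new.name=usertable_copy",   // dstTableName field
          "usertable"                    // tableName field (the source table)
      };
      System.exit(ToolRunner.run(conf, new CopyTable(), toolArgs));
    }
  }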

[02/50] [abbrv] hbase git commit: HBASE-18687 Add @since 2.0.0 to new classes; AMENDMENT

2017-08-26 Thread busbey
HBASE-18687 Add @since 2.0.0 to new classes; AMENDMENT


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6859d4e2
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6859d4e2
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6859d4e2

Branch: refs/heads/HBASE-18467
Commit: 6859d4e207b261a8afb09eed9c94c94a7d7425ad
Parents: e62fdd9
Author: Michael Stack 
Authored: Fri Aug 25 14:14:51 2017 -0700
Committer: Michael Stack 
Committed: Fri Aug 25 14:14:51 2017 -0700

--
 .../java/org/apache/hadoop/hbase/backup/BackupClientFactory.java  | 1 +
 .../java/org/apache/hadoop/hbase/filter/BinaryComparator.java | 1 +
 .../main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java  | 1 +
 .../apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java  | 1 +
 .../apache/hadoop/hbase/security/access/AccessControlUtil.java| 3 +++
 5 files changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/6859d4e2/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
--
diff --git 
a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
 
b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
index 21d73cc..6db39f8 100644
--- 
a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
+++ 
b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
@@ -25,6 +25,7 @@ import 
org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient;
 import org.apache.hadoop.hbase.backup.impl.TableBackupClient;
 import org.apache.hadoop.hbase.client.Connection;
 
+@InterfaceAudience.Private
 public class BackupClientFactory {
 
   public static TableBackupClient create (Connection conn, String backupId, 
BackupRequest request)

http://git-wip-us.apache.org/repos/asf/hbase/blob/6859d4e2/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
index 87b622c..8a4aa34 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
@@ -33,6 +33,7 @@ import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferE
 /**
  * A binary comparator which lexicographically compares against the specified
  * byte array using {@link 
org.apache.hadoop.hbase.util.Bytes#compareTo(byte[], byte[])}.
+ * @since 2.0.0
  */
 @InterfaceAudience.Public
 public class BinaryComparator extends 
org.apache.hadoop.hbase.filter.ByteArrayComparable {

http://git-wip-us.apache.org/repos/asf/hbase/blob/6859d4e2/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
index 7925505..12c829e 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
@@ -89,6 +89,7 @@ import org.apache.hadoop.security.token.TokenSelector;
  * outside the lock in {@link Call} and {@link HBaseRpcController} which means 
the implementations
  * of the callbacks are free to hold any lock.
  * 
+ * @since 2.0.0
  */
 @InterfaceAudience.Private
 public abstract class AbstractRpcClient implements 
RpcClient {

http://git-wip-us.apache.org/repos/asf/hbase/blob/6859d4e2/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
index cd2f4cd..de2c96e 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.security.token.TokenIdentifier;
 /**
  * A utility class that encapsulates SASL logic for RPC client. Copied from
  * org.apache.hadoop.security
+ * @since 2.0.0
  */
 @InterfaceAudience.Private
 public abstract class AbstractHBaseSaslRpcClient {

http://git-wip-us.apache.org/repos/asf/hbase/bl

[41/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
new file mode 100644
index 000..c72a0c3
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/SyncTable.java
@@ -0,0 +1,786 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Collections;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import org.apache.hadoop.hbase.shaded.com.google.common.base.Throwables;
+import org.apache.hadoop.hbase.shaded.com.google.common.collect.Iterators;
+
+public class SyncTable extends Configured implements Tool {
+
+  private static final Log LOG = LogFactory.getLog(SyncTable.class);
+
+  static final String SOURCE_HASH_DIR_CONF_KEY = "sync.table.source.hash.dir";
+  static final String SOURCE_TABLE_CONF_KEY = "sync.table.source.table.name";
+  static final String TARGET_TABLE_CONF_KEY = "sync.table.target.table.name";
+  static final String SOURCE_ZK_CLUSTER_CONF_KEY = 
"sync.table.source.zk.cluster";
+  static final String TARGET_ZK_CLUSTER_CONF_KEY = 
"sync.table.target.zk.cluster";
+  static final String DRY_RUN_CONF_KEY="sync.table.dry.run";
+
+  Path sourceHashDir;
+  String sourceTableName;
+  String targetTableName;
+
+  String sourceZkCluster;
+  String targetZkCluster;
+  boolean dryRun;
+
+  Counters counters;
+
+  public SyncTable(Configuration conf) {
+super(conf);
+  }
+
+  public Job createSubmittableJob(String[] args) throws IOException {
+FileSystem fs = sourceHashDir.getFileSystem(getConf());
+if (!fs.exists(sourceHashDir)) {
+  throw new IOException("Source hash dir not found: " + sourceHashDir);
+}
+
+HashTable.TableHash tableHash = HashTable.TableHash.read(getConf(), 
sourceHashDir);
+LOG.info("Read source hash manifest: " + tableHash);
+LOG.info("Read " + tableHash.partitions.size() + " partition keys");
+if (!tableHash.tableName.equals(sourceTableName)) {
+  LOG.warn("Table name mismatch - manifest indicates hash was taken from: "
+  + tableHash.tableName + " but job is reading from: " + sourceTableName);
+}
+if (tableHash.numHashFiles != tableHash.partitions.size() + 1) {
+  throw new RuntimeException("Hash data appears corrupt. The number of hash files created"
+  + " should be 1 more than the number of partition keys.  However, the manifest file "
+  + " says numHashFiles=" + tableHash.numHashFiles + " but the number of partition keys"
+  + " found in the partitions file is " + tableHash.parti
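
createSubmittableJob above first re-reads the manifest that HashTable wrote into sourceHashDir and checks that numHashFiles is exactly one more than the number of partition keys before any mutations are computed. A minimal driver sketch, assuming the positional argument order (hash dir, source table, target table) and a --dryrun flag matching the sync.table.dry.run key shown above; paths and table names are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.SyncTable;
  import org.apache.hadoop.util.ToolRunner;

  public class SyncTableDriver {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      String[] toolArgs = {
          "--dryrun=true",                      // report differences without writing them
          "hdfs://nn:8020/hashes/usertable",    // sourceHashDir produced by HashTable
          "usertable",                          // sourceTableName
          "usertable_replica"                   // targetTableName
      };
      System.exit(ToolRunner.run(conf, new SyncTable(conf), toolArgs));
    }
  }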

[12/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
deleted file mode 100644
index dc59817..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
+++ /dev/null
@@ -1,727 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Matchers.any;
-import static org.mockito.Mockito.doAnswer;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-import java.io.ByteArrayOutputStream;
-import java.io.File;
-import java.io.IOException;
-import java.io.PrintStream;
-import java.net.URL;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.NavigableMap;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HColumnDescriptor;
-import org.apache.hadoop.hbase.HRegionInfo;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.KeepDeletedCells;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterBase;
-import org.apache.hadoop.hbase.filter.PrefixFilter;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.mapreduce.Import.KeyValueImporter;
-import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
-import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
-import org.apache.hadoop.hbase.wal.WAL;
-import org.apache.hadoop.hbase.wal.WALKey;
-import org.apache.hadoop.hbase.testclassification.MediumTests;
-import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.LauncherSecurityManager;
-import org.apache.hadoop.mapreduce.Mapper.Context;
-import org.apache.hadoop.util.ToolRunner;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-import org.junit.rules.TestName;
-import org.mockito.invocation.InvocationOnMock;
-import org.mockito.stubbing.Answer;
-
-/**
- * Tests the table import and table export MR job functionality
- */
-@Category({VerySlowMapReduceTests.class, MediumTests.class})
-public class TestImportExport {
-  private static final Log LOG = LogFactory.getLog(TestImportExport.class);
-  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
-  private static final byte[] ROW1 = Bytes.toBytesBinary("\\x32row1");
-  private static final byte[] ROW2 = Bytes.toBytesBinary("\\x32row2");
-  private static final byte[] ROW3 = Bytes.toBytesBinary("\\x32row3");
-  private static final String FAMILYA_STRING = "a";
-  private static final String FAMILYB_STRING = "b";
-  private static final byte[] FAMILYA = Bytes.toBytes(FAMILYA_STRING);
-  private s

[05/50] [abbrv] hbase git commit: HBASE-16324 Remove LegacyScanQueryMatcher

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/8d33949b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
index 1653728..4082818 100644
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
+++ 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
@@ -19,8 +19,13 @@
 
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.hadoop.hbase.CellUtil.createCell;
+import static org.apache.hadoop.hbase.KeyValueTestUtil.create;
 import static 
org.apache.hadoop.hbase.regionserver.KeyValueScanFixture.scanFixture;
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -28,6 +33,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 import java.util.NavigableSet;
+import java.util.OptionalInt;
 import java.util.TreeSet;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -42,7 +48,6 @@ import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeepDeletedCells;
 import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.KeyValueTestUtil;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
@@ -51,7 +56,6 @@ import 
org.apache.hadoop.hbase.testclassification.RegionServerTests;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.EnvironmentEdge;
 import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper;
-import org.junit.Assert;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -66,11 +70,10 @@ public class TestStoreScanner {
   @Rule public final TestRule timeout = 
CategoryBasedTimeout.builder().withTimeout(this.getClass()).
   withLookingForStuckThread(true).build();
   private static final String CF_STR = "cf";
-  private static final byte [] CF = Bytes.toBytes(CF_STR);
+  private static final byte[] CF = Bytes.toBytes(CF_STR);
   static Configuration CONF = HBaseConfiguration.create();
   private ScanInfo scanInfo = new ScanInfo(CONF, CF, 0, Integer.MAX_VALUE, 
Long.MAX_VALUE,
   KeepDeletedCells.FALSE, HConstants.DEFAULT_BLOCKSIZE, 0, 
CellComparator.COMPARATOR, false);
-  private ScanType scanType = ScanType.USER_SCAN;
 
   /**
* From here on down, we have a bunch of defines and specific CELL_GRID of 
Cells. The
@@ -79,15 +82,15 @@ public class TestStoreScanner {
* {@link 
StoreScanner#optimize(org.apache.hadoop.hbase.regionserver.querymatcher.ScanQueryMatcher.MatchCode,
* Cell)} is not overly enthusiastic.
*/
-  private static final byte [] ZERO = new byte [] {'0'};
-  private static final byte [] ZERO_POINT_ZERO = new byte [] {'0', '.', '0'};
-  private static final byte [] ONE = new byte [] {'1'};
-  private static final byte [] TWO = new byte [] {'2'};
-  private static final byte [] TWO_POINT_TWO = new byte [] {'2', '.', '2'};
-  private static final byte [] THREE = new byte [] {'3'};
-  private static final byte [] FOUR = new byte [] {'4'};
-  private static final byte [] FIVE = new byte [] {'5'};
-  private static final byte [] VALUE = new byte [] {'v'};
+  private static final byte[] ZERO = new byte[] {'0'};
+  private static final byte[] ZERO_POINT_ZERO = new byte[] {'0', '.', '0'};
+  private static final byte[] ONE = new byte[] {'1'};
+  private static final byte[] TWO = new byte[] {'2'};
+  private static final byte[] TWO_POINT_TWO = new byte[] {'2', '.', '2'};
+  private static final byte[] THREE = new byte[] {'3'};
+  private static final byte[] FOUR = new byte[] {'4'};
+  private static final byte[] FIVE = new byte[] {'5'};
+  private static final byte[] VALUE = new byte[] {'v'};
   private static final int CELL_GRID_BLOCK2_BOUNDARY = 4;
   private static final int CELL_GRID_BLOCK3_BOUNDARY = 11;
   private static final int CELL_GRID_BLOCK4_BOUNDARY = 15;
@@ -100,32 +103,32 @@ public class TestStoreScanner {
* We will use this to test scan does the right thing as it
* we do Gets, StoreScanner#optimize, and what we do on (faked) block 
boundaries.
*/
-  private static final Cell [] CELL_GRID = new Cell [] {
-CellUtil.createCell(ONE, CF, ONE, 1L, KeyValue.Type.Put.getCode(), VALUE),
-CellUtil.createCell(ONE, CF, TWO, 1L, KeyValue.Type.Put.getCode(), VALUE),
-CellUtil.createCell(ONE, CF, THREE, 1L, KeyValue.Type.Put.getCode(), 
VALUE),
-CellUtil

[10/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
deleted file mode 100644
index 694a359..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
+++ /dev/null
@@ -1,264 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.NavigableMap;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
-import org.apache.hadoop.hbase.testclassification.MapReduceTests;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-/**
- * Test Map/Reduce job over HBase tables. The map/reduce process we're testing
- * on our tables is simple - take every row in the table, reverse the value of
- * a particular cell, and write it back to the table.
- */
-@Category({MapReduceTests.class, LargeTests.class})
-public class TestMultithreadedTableMapper {
-  private static final Log LOG = 
LogFactory.getLog(TestMultithreadedTableMapper.class);
-  private static final HBaseTestingUtility UTIL =
-  new HBaseTestingUtility();
-  static final TableName MULTI_REGION_TABLE_NAME = TableName.valueOf("mrtest");
-  static final byte[] INPUT_FAMILY = Bytes.toBytes("contents");
-  static final byte[] OUTPUT_FAMILY = Bytes.toBytes("text");
-  static final int NUMBER_OF_THREADS = 10;
-
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-// Up the handlers; this test needs more than usual.
-
UTIL.getConfiguration().setInt(HConstants.REGION_SERVER_HIGH_PRIORITY_HANDLER_COUNT,
 10);
-UTIL.startMiniCluster();
-Table table =
-UTIL.createMultiRegionTable(MULTI_REGION_TABLE_NAME, new byte[][] { 
INPUT_FAMILY,
-OUTPUT_FAMILY });
-UTIL.loadTable(table, INPUT_FAMILY, false);
-UTIL.waitUntilAllRegionsAssigned(MULTI_REGION_TABLE_NAME);
-  }
-
-  @AfterClass
-  public static void afterClass() throws Exception {
-UTIL.shutdownMiniCluster();
-  }
-
-  /**
-   * Pass the given key and processed record reduce
-   */
-  public static class ProcessContentsMapper
-  extends TableMapper<ImmutableBytesWritable, Put> {
-
-/**
- * Pass the key, and reversed value to reduce
- *
- * @param key
- * @param value
- * @param context
- * @throws IOException
- */
-@Override
-public void map(ImmutableBytesWritable key, Result value,
-Context context)
-throws IOException, InterruptedException {
-  if (value.size() != 1) {
-throw new IOException("There should only be one input column");
-  }
-  Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
-  cf = value.getMap();
-  if(!cf.containsKey(INPUT_FAMILY)) {
-throw new IOException("Wrong input columns. Missing: '" +
-Bytes.toString(INPUT_FAMILY) + "'.");
-  }
-  // Get the origina

[49/50] [abbrv] hbase git commit: HBASE-16722 Fixed broken link in CatalogJanitor doc

2017-08-26 Thread busbey
HBASE-16722 Fixed broken link in CatalogJanitor doc

Signed-off-by: Chia-Ping Tsai 


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/f53051b5
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/f53051b5
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/f53051b5

Branch: refs/heads/HBASE-18467
Commit: f53051b59082e27a775d20a6e058d46696be25cd
Parents: f386a9a
Author: Jan Hentschel 
Authored: Mon Jan 2 14:27:49 2017 +0100
Committer: Chia-Ping Tsai 
Committed: Sat Aug 26 20:24:20 2017 +0800

--
 src/main/asciidoc/_chapters/architecture.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/f53051b5/src/main/asciidoc/_chapters/architecture.adoc
--
diff --git a/src/main/asciidoc/_chapters/architecture.adoc 
b/src/main/asciidoc/_chapters/architecture.adoc
index 2ded813..6ef8375 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -580,7 +580,7 @@ See <> for more information on 
region assignment.
  CatalogJanitor
 
 Periodically checks and cleans up the `hbase:meta` table.
-See > for more information on the meta table.
+See <> for more information on the meta table.
 
 [[regionserver.arch]]
 == RegionServer



[42/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableSnapshotInputFormatImpl.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableSnapshotInputFormatImpl.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableSnapshotInputFormatImpl.java
new file mode 100644
index 000..4331c0f
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableSnapshotInputFormatImpl.java
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.classification.InterfaceStability;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper;
+import org.apache.hadoop.hbase.snapshot.SnapshotManifest;
+import org.apache.hadoop.hbase.util.ConfigurationUtil;
+import org.apache.hadoop.hbase.util.FSUtils;
+
+import java.io.IOException;
+import java.util.AbstractMap;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+
+/**
+ * Shared implementation of mapreduce code over multiple table snapshots.
+ * Utilized by both the mapreduce ({@link org.apache.hadoop.hbase.mapreduce
+ * .MultiTableSnapshotInputFormat}) and mapred
+ * ({@link org.apache.hadoop.hbase.mapred.MultiTableSnapshotInputFormat}) implementations.
+ */
+@InterfaceAudience.LimitedPrivate({ "HBase" })
+@InterfaceStability.Evolving
+public class MultiTableSnapshotInputFormatImpl {
+
+  private static final Log LOG = 
LogFactory.getLog(MultiTableSnapshotInputFormatImpl.class);
+
+  public static final String RESTORE_DIRS_KEY =
+  "hbase.MultiTableSnapshotInputFormat.restore.snapshotDirMapping";
+  public static final String SNAPSHOT_TO_SCANS_KEY =
+  "hbase.MultiTableSnapshotInputFormat.snapshotsToScans";
+
+  /**
+   * Configure conf to read from snapshotScans, with snapshots restored to a 
subdirectory of
+   * restoreDir.
+   * Sets: {@link #RESTORE_DIRS_KEY}, {@link #SNAPSHOT_TO_SCANS_KEY}
+   *
+   * @param conf
+   * @param snapshotScans
+   * @param restoreDir
+   * @throws IOException
+   */
+  public void setInput(Configuration conf, Map<String, Collection<Scan>> snapshotScans,
+  Path restoreDir) throws IOException {
+Path rootDir = FSUtils.getRootDir(conf);
+FileSystem fs = rootDir.getFileSystem(conf);
+
+setSnapshotToScans(conf, snapshotScans);
+Map<String, Path> restoreDirs =
+generateSnapshotToRestoreDirMapping(snapshotScans.keySet(), restoreDir);
+setSnapshotDirs(conf, restoreDirs);
+restoreSnapshots(conf, restoreDirs, fs);
+  }
+
+  /**
+   * Return the list of splits extracted from the scans/snapshots pushed to 
conf by
+   * {@link
+   * #setInput(org.apache.hadoop.conf.Configuration, java.util.Map, 
org.apache.hadoop.fs.Path)}
+   *
+   * @param conf Configuration to determine splits from
+   * @return Return the list of splits extracted from the scans/snapshots 
pushed to conf
+   * @throws IOException
+   */
+  public List<TableSnapshotInputFormatImpl.InputSplit> getSplits(Configuration conf)
+  throws IOException {
+Path rootDir = FSUtils.getRootDir(conf);
+FileSystem fs = rootDir.getFileSystem(conf);
+
+List<TableSnapshotInputFormatImpl.InputSplit> rtn = Lists.newArrayList();
+
+Map<String, Collection<Scan>> snapshotsToScans = getSnapshotsToScans(conf);
+Map<String, Path> snapshotsToRestoreDirs = getSnapshotDirs(conf);
+for (Map.Entry<String, Collection<Scan>> entry : snapshotsToScans.entrySet()) {
+  String snapshotName = entry.getKey();
+
+  Path restoreDir = snapshotsToRestoreDirs.get(snapshotName);
+
+  SnapshotManifest manifest =
+  TableSnapshotInputFormatImpl.getSnapshotManifest(conf, snapshotName,
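
setInput above serializes a map of snapshot names to scans into the configuration and restores each snapshot under a subdirectory of restoreDir, so getSplits can later walk the restored manifests. A minimal caller sketch; the snapshot names, column family, and restore path are hypothetical:

  import java.util.Arrays;
  import java.util.Collection;
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.mapreduce.MultiTableSnapshotInputFormatImpl;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MultiSnapshotSetup {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();

      // One or more scans per snapshot; names and family are hypothetical.
      Map<String, Collection<Scan>> snapshotScans = new HashMap<>();
      snapshotScans.put("usertable_snap1",
          Arrays.asList(new Scan().addFamily(Bytes.toBytes("cf"))));
      snapshotScans.put("usertable_snap2",
          Arrays.asList(new Scan().setStartRow(Bytes.toBytes("row-500"))));

      // Every snapshot is restored to a subdirectory of this path, per the javadoc above.
      Path restoreDir = new Path("/tmp/snapshot-restore");

      new MultiTableSnapshotInputFormatImpl().setInput(conf, snapshotScans, restoreDir);
      // The input format's getSplits(conf) can now compute splits from the restored snapshots.
    }
  }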

[32/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
new file mode 100644
index 000..91d2696
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
@@ -0,0 +1,726 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeepDeletedCells;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterBase;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.Import.KeyValueImporter;
+import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.wal.WAL;
+import org.apache.hadoop.hbase.wal.WALKey;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.LauncherSecurityManager;
+import org.apache.hadoop.mapreduce.Mapper.Context;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+/**
+ * Tests the table import and table export MR job functionality
+ */
+@Category({VerySlowMapReduceTests.class, MediumTests.class})
+public class TestImportExport {
+  private static final Log LOG = LogFactory.getLog(TestImportExport.class);
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final byte[] ROW1 = Bytes.toBytesBinary("\\x32row1");
+  private static final byte[] ROW2 = Bytes.toBytesBinary("\\x32row2");
+  private static final byte[] ROW3 = Bytes.toBytesBinary("\\x32row3");
+  private static final String FAMILYA_STRING = "a";
+  private static final String FAMILYB_STRING = "b";
+  private static final byte[] FAMILYA = Bytes.toBytes(FAMILYA_STRING);
+  private static final byte[] FAMIL

[28/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
new file mode 100644
index 000..a9da98b
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TsvImporterCustomTestMapperForOprAttr.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import 
org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException;
+import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.ParsedLine;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Just shows a simple example of how the attributes can be extracted and added
+ * to the puts
+ */
+public class TsvImporterCustomTestMapperForOprAttr extends TsvImporterMapper {
+  @Override
+  protected void populatePut(byte[] lineBytes, ParsedLine parsed, Put put, int 
i)
+  throws BadTsvLineException, IOException {
+KeyValue kv;
+kv = new KeyValue(lineBytes, parsed.getRowKeyOffset(), 
parsed.getRowKeyLength(),
+parser.getFamily(i), 0, parser.getFamily(i).length, 
parser.getQualifier(i), 0,
+parser.getQualifier(i).length, ts, KeyValue.Type.Put, lineBytes, 
parsed.getColumnOffset(i),
+parsed.getColumnLength(i));
+if (parsed.getIndividualAttributes() != null) {
+  String[] attributes = parsed.getIndividualAttributes();
+  for (String attr : attributes) {
+String[] split = attr.split(ImportTsv.DEFAULT_ATTRIBUTES_SEPERATOR);
+if (split == null || split.length <= 1) {
+  throw new BadTsvLineException("Invalid attributes separator specified" + attributes);
+} else {
+  if (split[0].length() <= 0 || split[1].length() <= 0) {
+throw new BadTsvLineException("Invalid attributes separator specified" + attributes);
+  }
+  put.setAttribute(split[0], Bytes.toBytes(split[1]));
+}
+  }
+}
+put.add(kv);
+  }
+}
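
TsvImporterCustomTestMapperForOprAttr above overrides populatePut to turn the per-line attributes parsed from the TSV input into Put attributes. A minimal sketch of plugging such a mapper into ImportTsv, assuming the importtsv.mapper.class and importtsv.columns configuration keys; the column spec, table name, and input path are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.ImportTsv;
  import org.apache.hadoop.util.ToolRunner;

  public class ImportTsvWithCustomMapper {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Assumed configuration keys; HBASE_ATTRIBUTES_KEY marks the column carrying attributes.
      conf.set("importtsv.mapper.class",
          "org.apache.hadoop.hbase.mapreduce.TsvImporterCustomTestMapperForOprAttr");
      conf.set("importtsv.columns", "HBASE_ROW_KEY,HBASE_ATTRIBUTES_KEY,cf:col1");
      String[] toolArgs = { "usertable", "/input/data.tsv" };
      System.exit(ToolRunner.run(conf, new ImportTsv(), toolArgs));
    }
  }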

http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
new file mode 100644
index 000..69c4c7c
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java
@@ -0,0 +1,1059 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.replication;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactor

[44/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
new file mode 100644
index 000..3c3060b
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
@@ -0,0 +1,140 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionLocator;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapred.TableOutputFormat;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Partitioner;
+
+/**
+ * This is used to partition the output keys into groups of keys.
+ * Keys are grouped according to the regions that currently exist
+ * so that each reducer fills a single region so load is distributed.
+ *
+ * This class is not suitable as partitioner creating hfiles
+ * for incremental bulk loads as region spread will likely change between time of
+ * hfile creation and load time. See {@link LoadIncrementalHFiles}
+ * and <a href="http://hbase.apache.org/book.html#arch.bulk.load">Bulk Load</a>.
+ *
+ * @param <KEY>  The type of the key.
+ * @param <VALUE>  The type of the value.
+ */
+@InterfaceAudience.Public
+public class HRegionPartitioner<KEY, VALUE>
+extends Partitioner<ImmutableBytesWritable, VALUE>
+implements Configurable {
+
+  private static final Log LOG = LogFactory.getLog(HRegionPartitioner.class);
+  private Configuration conf = null;
+  // Connection and locator are not cleaned up; they just die when partitioner 
is done.
+  private Connection connection;
+  private RegionLocator locator;
+  private byte[][] startKeys;
+
+  /**
+   * Gets the partition number for a given key (hence record) given the total
+   * number of partitions i.e. number of reduce-tasks for the job.
+   *
+   * Typically a hash function on all or a subset of the key.
+   *
+   * @param key  The key to be partitioned.
+   * @param value  The entry value.
+   * @param numPartitions  The total number of partitions.
+   * @return The partition number for the key.
+   * @see org.apache.hadoop.mapreduce.Partitioner#getPartition(
+   *   java.lang.Object, java.lang.Object, int)
+   */
+  @Override
+  public int getPartition(ImmutableBytesWritable key,
+  VALUE value, int numPartitions) {
+byte[] region = null;
+// Only one region return 0
+if (this.startKeys.length == 1){
+  return 0;
+}
+try {
+  // Not sure if this is cached after a split so we could have problems
+  // here if a region splits while mapping
+  region = 
this.locator.getRegionLocation(key.get()).getRegionInfo().getStartKey();
+} catch (IOException e) {
+  LOG.error(e);
+}
+for (int i = 0; i < this.startKeys.length; i++){
+  if (Bytes.compareTo(region, this.startKeys[i]) == 0 ){
+if (i >= numPartitions-1){
+  // cover the case where we have fewer reducers than regions.
+  return (Integer.toString(i).hashCode()
+  & Integer.MAX_VALUE) % numPartitions;
+}
+return i;
+  }
+}
+// if above fails to find start key that match we need to return something
+return 0;
+  }
+
+  /**
+   * Returns the current configuration.
+   *
+   * @return The current configuration.
+   * @see org.apache.hadoop.conf.Configurable#getConf()
+   */
+  @Override
+  public Configuration getConf() {
+return conf;
+  }
+
+  /**
+   * Sets the configuration. This is used
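
getPartition above maps each output key to the index of the region whose start key contains it, wrapping the index when there are fewer reducers than regions. A minimal sketch of wiring the partitioner into a reduce job through TableMapReduceUtil; the output table name is hypothetical and the reducer is left unset so the identity reducer is used:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.HRegionPartitioner;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.mapreduce.Job;

  public class PartitionedWriteJob {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      Job job = Job.getInstance(conf, "partitioned-write");
      job.setJarByClass(PartitionedWriteJob.class);

      // Passing HRegionPartitioner makes each reducer fill a single region of the table,
      // spreading write load the way the class javadoc above describes.
      TableMapReduceUtil.initTableReducerJob(
          "usertable_out",            // hypothetical output table
          null,                       // null reducer: the identity table reducer is used
          job,
          HRegionPartitioner.class);
    }
  }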

[17/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
deleted file mode 100644
index e80410f..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java
+++ /dev/null
@@ -1, +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hbase.snapshot;
-
-import java.io.BufferedInputStream;
-import java.io.FileNotFoundException;
-import java.io.DataInput;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.io.InputStream;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Comparator;
-import java.util.LinkedList;
-import java.util.List;
-
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.cli.Option;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FSDataInputStream;
-import org.apache.hadoop.fs.FSDataOutputStream;
-import org.apache.hadoop.fs.FileChecksum;
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.permission.FsPermission;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.HRegionInfo;
-import org.apache.hadoop.hbase.io.FileLink;
-import org.apache.hadoop.hbase.io.HFileLink;
-import org.apache.hadoop.hbase.io.WALLink;
-import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
-import org.apache.hadoop.hbase.mob.MobUtils;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotFileInfo;
-import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest;
-import org.apache.hadoop.hbase.util.AbstractHBaseTool;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.hbase.util.HFileArchiveUtil;
-import org.apache.hadoop.hbase.util.Pair;
-import org.apache.hadoop.io.BytesWritable;
-import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.io.Writable;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.JobContext;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.InputFormat;
-import org.apache.hadoop.mapreduce.InputSplit;
-import org.apache.hadoop.mapreduce.RecordReader;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
-import org.apache.hadoop.mapreduce.security.TokenCache;
-import org.apache.hadoop.hbase.io.hadoopbackport.ThrottledInputStream;
-import org.apache.hadoop.util.StringUtils;
-import org.apache.hadoop.util.Tool;
-
-/**
- * Export the specified snapshot to a given FileSystem.
- *
- * The .snapshot/name folder is copied to the destination cluster
- * and then all the hfiles/wals are copied using a Map-Reduce Job in the 
.archive/ location.
- * When everything is done, the second cluster can restore the snapshot.
- */
-@InterfaceAudience.Public
-public class ExportSnapshot extends AbstractHBaseTool implements Tool {
-  public static final String NAME = "exportsnapshot";
-  /** Configuration prefix for overrides for the source filesystem */
-  public static final String CONF_SOURCE_PREFIX = NAME + ".from.";
-  /** Configuration prefix for overrides for the destination filesystem */
-  public static final String CONF_DEST_PREFIX = NAME + ".to.";
-
-  private static final Log LOG = LogFactory.getLog(ExportSnapshot.class);
-
-  private static final String MR_NUM_MAPS = "mapreduce.job.maps";
-  private static final Strin
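
The deleted ExportSnapshot javadoc above describes copying the .snapshot/name folder and all referenced hfiles/WALs to another FileSystem with a MapReduce job so the second cluster can restore the snapshot. A minimal driver sketch, assuming the tool's conventional --snapshot/--copy-to/--mappers options; the snapshot name and destination URI are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
  import org.apache.hadoop.util.ToolRunner;

  public class ExportSnapshotDriver {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      String[] toolArgs = {
          "--snapshot", "usertable_snap1",              // snapshot to export
          "--copy-to", "hdfs://backup-nn:8020/hbase",   // destination hbase root dir
          "--mappers", "4"                              // number of copy tasks
      };
      System.exit(ToolRunner.run(conf, new ExportSnapshot(), toolArgs));
    }
  }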

[20/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
deleted file mode 100644
index ff458ff..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
+++ /dev/null
@@ -1,1027 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.File;
-import java.io.IOException;
-import java.net.URL;
-import java.net.URLDecoder;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Enumeration;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.zip.ZipEntry;
-import java.util.zip.ZipFile;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.MetaTableAccessor;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-import org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos;
-import org.apache.hadoop.hbase.security.User;
-import org.apache.hadoop.hbase.security.UserProvider;
-import org.apache.hadoop.hbase.security.token.TokenUtil;
-import org.apache.hadoop.hbase.util.Base64;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.zookeeper.ZKConfig;
-import org.apache.hadoop.io.Writable;
-import org.apache.hadoop.mapreduce.InputFormat;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.util.StringUtils;
-
-import com.codahale.metrics.MetricRegistry;
-
-/**
- * Utility for {@link TableMapper} and {@link TableReducer}
- */
-@SuppressWarnings({ "rawtypes", "unchecked" })
-@InterfaceAudience.Public
-public class TableMapReduceUtil {
-  private static final Log LOG = LogFactory.getLog(TableMapReduceUtil.class);
-
-  /**
-   * Use this before submitting a TableMap job. It will appropriately set up
-   * the job.
-   *
-   * @param table  The table name to read from.
-   * @param scan  The scan instance with the columns, time range etc.
-   * @param mapper  The mapper class to use.
-   * @param outputKeyClass  The class of the output key.
-   * @param outputValueClass  The class of the output value.
-   * @param job  The current job to adjust.  Make sure the passed job is
-   * carrying all necessary HBase configuration.
-   * @throws IOException When setting up the details fails.
-   */
-  public static void initTableMapperJob(String table, Scan scan,
-  Class<? extends TableMapper> mapper,
-  Class<?> outputKeyClass,
-  Class<?> outputValueClass, Job job)
-  throws IOException {
-initTableMapperJob(table, scan, mapper, outputKeyClass, outputValueClass,
-job, true);
-  }
-
-
-  /**
-   * Use this before submitting a TableMap job. It will appropriately set up
-   * the job.
-   *
-   * @param table  The table name to read from.
-   * @param scan  The scan instance with the columns, time range etc.
-   * @param mapper  The mapper class to use.
-   * @param outputKeyClass  The class of the output key.
-   * @param outputValueClass  The class of the output value.
-   * @param job  The current job to adjust.  Make sure the passed job is
-   * carrying all necessary HBase configuration.
-   * @throws IOException When setting up the details fails.
-   */
-  public static void initTableMapperJob(TableName table,
-  Scan 
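To make the setup the javadoc above describes concrete, here is a minimal, hedged sketch of a map-only job wired up through initTableMapperJob; the table name, column family and mapper are placeholders, not code from this commit.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowTouchJob {
  // Emits just the row key of every scanned row.
  static class TouchMapper extends TableMapper<ImmutableBytesWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(row, NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "row-touch");
    job.setJarByClass(RowTouchJob.class);
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));   // restrict to one family
    scan.setCaching(500);                  // typical scanner caching for MR
    scan.setCacheBlocks(false);            // don't pollute the block cache
    TableMapReduceUtil.initTableMapperJob("mytable", scan, TouchMapper.class,
        ImmutableBytesWritable.class, NullWritable.class, job);
    job.setOutputFormatClass(NullOutputFormat.class);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}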

[38/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
new file mode 100644
index 000..acf6ff8
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
@@ -0,0 +1,700 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce.replication;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableSnapshotScanner;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat;
+import org.apache.hadoop.hbase.mapreduce.TableMapper;
+import org.apache.hadoop.hbase.mapreduce.TableSplit;
+import org.apache.hadoop.hbase.replication.ReplicationException;
+import org.apache.hadoop.hbase.replication.ReplicationFactory;
+import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;
+import org.apache.hadoop.hbase.replication.ReplicationPeerZKImpl;
+import org.apache.hadoop.hbase.replication.ReplicationPeers;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import 
org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
+
+/**
+ * This map-only job compares the data from a local table with a remote one.
+ * Every cell is compared and must have exactly the same keys (even timestamp)
+ * as well as same value. It is possible to restrict the job by time range and
+ * families. The peer id that's provided must match the one given when the
+ * replication stream was setup.
+ * 
+ * Two counters are provided, Verifier.Counters.GOODROWS and BADROWS. The reason
+ * why a row is different is shown in the map's log.
+ */
+public class VerifyReplication extends Configured implements Tool {
+
+  private static final Log LOG =
+  LogFactory.getLog(VerifyReplication.class);
+
+  public final static String NAME = "verifyrep";
+  private final static String PEER_CONFIG_PREFIX = NAME + ".peer.";
+  long startTime = 0;
+  long endTime = Long.MAX_VALUE;
+  int batch = -1;
+  int versions = -1;
+  String tableName = null;
+  String families = null;
+  String delimiter = "";
+  String peerId = null;
+  String rowPrefixes = null;
+  int sleepMsBeforeReCompare = 0;
+  boolean verbose = false;
+  boolean includeDeletedCells = false;
+  //Source table s
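A small sketch of how this tool is typically launched from Java. The flag syntax and the positional peer id / table name arguments are assumptions about the tool's usual command line rather than something shown in this excerpt.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical invocation of the verifyrep tool; peer id "1" and "mytable" are placeholders.
public class VerifyReplicationDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Optional flags such as --families=cf narrow the comparison; the trailing
    // arguments are the replication peer id and the table name.
    int exit = ToolRunner.run(conf, new VerifyReplication(),
        new String[] { "--families=cf", "1", "mytable" });
    System.exit(exit);
  }
}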

[40/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
new file mode 100644
index 000..ff458ff
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
@@ -0,0 +1,1027 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URL;
+import java.net.URLDecoder;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Enumeration;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipFile;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.security.token.TokenUtil;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKConfig;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.util.StringUtils;
+
+import com.codahale.metrics.MetricRegistry;
+
+/**
+ * Utility for {@link TableMapper} and {@link TableReducer}
+ */
+@SuppressWarnings({ "rawtypes", "unchecked" })
+@InterfaceAudience.Public
+public class TableMapReduceUtil {
+  private static final Log LOG = LogFactory.getLog(TableMapReduceUtil.class);
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table  The table name to read from.
+   * @param scan  The scan instance with the columns, time range etc.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @throws IOException When setting up the details fails.
+   */
+  public static void initTableMapperJob(String table, Scan scan,
+  Class<? extends TableMapper> mapper,
+  Class<?> outputKeyClass,
+  Class<?> outputValueClass, Job job)
+  throws IOException {
+initTableMapperJob(table, scan, mapper, outputKeyClass, outputValueClass,
+job, true);
+  }
+
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table  The table name to read from.
+   * @param scan  The scan instance with the columns, time range etc.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @throws IOException When setting up the details fails.
+   */
+  public static void initTableMapperJob(TableName table,
+   
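The write side pairs with the read side shown above. Below is a hedged sketch of a map-only copy that reads one table with initTableMapperJob and writes Puts to another through initTableReducerJob; both table names and the cell handling are placeholders, not code from this commit.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;

public class CopyLikeJob {
  // Turns every scanned row into a Put carrying the same cells.
  static class ResultToPutMapper extends TableMapper<ImmutableBytesWritable, Put> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      Put put = new Put(row.copyBytes());
      for (Cell cell : value.rawCells()) {
        put.add(cell);                     // reuse the incoming cells as-is
      }
      context.write(row, put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "copy-like");
    job.setJarByClass(CopyLikeJob.class);
    Scan scan = new Scan();                // full-table scan; narrow it in real use
    TableMapReduceUtil.initTableMapperJob("source_table", scan, ResultToPutMapper.class,
        ImmutableBytesWritable.class, Put.class, job);
    // A null reducer just wires TableOutputFormat to the target table.
    TableMapReduceUtil.initTableReducerJob("target_table", null, job);
    job.setNumReduceTasks(0);              // map-only: mutations go straight to the table
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}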

[23/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java
deleted file mode 100644
index b5bb2ec..000
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java
+++ /dev/null
@@ -1,780 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.ByteArrayInputStream;
-import java.io.DataInput;
-import java.io.DataInputStream;
-import java.io.DataOutput;
-import java.io.IOException;
-import java.lang.reflect.InvocationTargetException;
-import java.lang.reflect.Method;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Locale;
-import java.util.Map;
-import java.util.TreeMap;
-import java.util.UUID;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellComparator;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.HBaseConfiguration;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.KeyValueUtil;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.ZooKeeperConnectionException;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Mutation;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RegionLocator;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.zookeeper.ZKClusterId;
-import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
-import org.apache.hadoop.io.RawComparator;
-import org.apache.hadoop.io.WritableComparable;
-import org.apache.hadoop.io.WritableComparator;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Partitioner;
-import org.apache.hadoop.mapreduce.Reducer;
-import org.apache.hadoop.mapreduce.TaskCounter;
-import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
-import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.ToolRunner;
-import org.apache.zookeeper.KeeperException;
-
-
-/**
- * Import data written by {@link Export}.
- */
-@InterfaceAudience.Public
-public class Import extends Configured implements Tool {
-  private static final Log LOG = LogFactory.getLog(Import.class);
-  final static String NAME = "import";
-  public final static String CF_RENAME_PROP = "HBASE_IMPORTER_RENAME_CFS";
-  public final static String BULK_OUTPUT_CONF_KEY = "import.bulk.output";
-  public final static String FILTER_CLASS_CONF_KEY = "import.filter.class";
-  public final static String FILTER_ARGS_CONF_KEY = "import.filter.args";
-  public final static String TABLE_NAME = "import.table.name";
-  public final static String WAL_DURABILITY = "import.wal.durability";
-  public final static String HAS_LARGE_RESULT= "import.bulk.hasLargeResult";
-
-  private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name";
-
-  public static class KeyValueWritableComparablePartitioner 
-  extends Partitioner<KeyValueWritableComparable, KeyValue> {
-private static KeyValueWritableComparable[] START_KEYS = nul
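A short, hedged sketch of driving the tool from Java; the table name and input directory are placeholders, and the positional argument order is an assumption about the tool's usual command line. The import.bulk.output key used below is the BULK_OUTPUT_CONF_KEY constant defined above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver: load the files previously written by Export back into a table.
public class ImportDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Optional: route output to HFiles for bulk load instead of live Puts.
    conf.set("import.bulk.output", "/tmp/import-hfiles");
    int exit = ToolRunner.run(conf, new Import(),
        new String[] { "mytable", "/backup/mytable-export" });
    System.exit(exit);
  }
}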

[01/50] [abbrv] hbase git commit: Revert "So far -- fix this message" Revert miscommit [Forced Update!]

2017-08-26 Thread busbey
Repository: hbase
Updated Branches:
  refs/heads/HBASE-18467 991349739 -> ea7baa560 (forced update)


Revert "So far -- fix this message"
Revert miscommit

This reverts commit 3bc64dac951a8bb40e8687dc2e60049ee75856f5.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e62fdd9d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e62fdd9d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e62fdd9d

Branch: refs/heads/HBASE-18467
Commit: e62fdd9db436c970f305ffb18c334dc420f9d75c
Parents: 20d272b
Author: Michael Stack 
Authored: Fri Aug 25 14:14:05 2017 -0700
Committer: Michael Stack 
Committed: Fri Aug 25 14:14:05 2017 -0700

--
 .../java/org/apache/hadoop/hbase/backup/BackupClientFactory.java  | 1 -
 .../java/org/apache/hadoop/hbase/filter/BinaryComparator.java | 1 -
 .../main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java  | 1 -
 .../apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java  | 1 -
 .../apache/hadoop/hbase/security/access/AccessControlUtil.java| 3 ---
 5 files changed, 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/e62fdd9d/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
--
diff --git 
a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
 
b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
index 6db39f8..21d73cc 100644
--- 
a/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
+++ 
b/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupClientFactory.java
@@ -25,7 +25,6 @@ import 
org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient;
 import org.apache.hadoop.hbase.backup.impl.TableBackupClient;
 import org.apache.hadoop.hbase.client.Connection;
 
-@InterfaceAudience.Private
 public class BackupClientFactory {
 
   public static TableBackupClient create (Connection conn, String backupId, 
BackupRequest request)

http://git-wip-us.apache.org/repos/asf/hbase/blob/e62fdd9d/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
index 8a4aa34..87b622c 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
@@ -33,7 +33,6 @@ import 
org.apache.hadoop.hbase.shaded.com.google.protobuf.InvalidProtocolBufferE
 /**
  * A binary comparator which lexicographically compares against the specified
  * byte array using {@link 
org.apache.hadoop.hbase.util.Bytes#compareTo(byte[], byte[])}.
- * @since 2.0.0
  */
 @InterfaceAudience.Public
 public class BinaryComparator extends 
org.apache.hadoop.hbase.filter.ByteArrayComparable {

http://git-wip-us.apache.org/repos/asf/hbase/blob/e62fdd9d/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
index 12c829e..7925505 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AbstractRpcClient.java
@@ -89,7 +89,6 @@ import org.apache.hadoop.security.token.TokenSelector;
  * outside the lock in {@link Call} and {@link HBaseRpcController} which means 
the implementations
  * of the callbacks are free to hold any lock.
  * 
- * @since 2.0.0
  */
 @InterfaceAudience.Private
 public abstract class AbstractRpcClient implements 
RpcClient {

http://git-wip-us.apache.org/repos/asf/hbase/blob/e62fdd9d/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
--
diff --git 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
index de2c96e..cd2f4cd 100644
--- 
a/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
+++ 
b/hbase-client/src/main/java/org/apache/hadoop/hbase/security/AbstractHBaseSaslRpcClient.java
@@ -42,7 +42,6 @@ import org.apache.hadoop.security.token.TokenIdentifier;
 /**
  * A utility class that encapsulates SASL logic for RPC client. Copied from
  * org.apache.hado

[14/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java
 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java
deleted file mode 100644
index ac2f20d..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java
+++ /dev/null
@@ -1,272 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapred;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Set;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileUtil;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
-import org.apache.hadoop.hbase.testclassification.MapReduceTests;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.mapred.JobClient;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.MapReduceBase;
-import org.apache.hadoop.mapred.OutputCollector;
-import org.apache.hadoop.mapred.Reporter;
-import org.apache.hadoop.mapred.RunningJob;
-import org.junit.AfterClass;
-import org.junit.Assert;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableMap;
-import org.apache.hadoop.hbase.shaded.com.google.common.collect.ImmutableSet;
-
-@Category({MapReduceTests.class, LargeTests.class})
-public class TestTableMapReduceUtil {
-
-  private static final Log LOG = LogFactory
-  .getLog(TestTableMapReduceUtil.class);
-
-  private static Table presidentsTable;
-  private static final String TABLE_NAME = "People";
-
-  private static final byte[] COLUMN_FAMILY = Bytes.toBytes("info");
-  private static final byte[] COLUMN_QUALIFIER = Bytes.toBytes("name");
-
-  private static ImmutableSet<String> presidentsRowKeys = ImmutableSet.of(
-  "president1", "president2", "president3");
-  private static Iterator<String> presidentNames = ImmutableSet.of(
-  "John F. Kennedy", "George W. Bush", "Barack Obama").iterator();
-
-  private static ImmutableSet<String> actorsRowKeys = ImmutableSet.of("actor1",
-  "actor2");
-  private static Iterator<String> actorNames = ImmutableSet.of(
-  "Jack Nicholson", "Martin Freeman").iterator();
-
-  private static String PRESIDENT_PATTERN = "president";
-  private static String ACTOR_PATTERN = "actor";
-  private static ImmutableMap<String, ImmutableSet<String>> relation = ImmutableMap
-  .of(PRESIDENT_PATTERN, presidentsRowKeys, ACTOR_PATTERN, actorsRowKeys);
-
-  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
-
-  @BeforeClass
-  public static void beforeClass() throws Exception {
-UTIL.startMiniCluster();
-presidentsTable = createAndFillTable(TableName.valueOf(TABLE_NAME));
-  }
-
-  @AfterClass
-  public static void afterClass() throws Exception {
-UTIL.shutdownMiniCluster();
-  }
-
-  @Before
-  public void before() throws IOException {
-LOG.info("before");
-UTIL.ensureSomeRegionServersAvailable(1);
-LOG.info("before done");
-  }
-
-  public static Table createAndFillTable(TableName tableName) throws 
IOException {
-Table table = UTIL.createTable(tableName, COLUMN_FAMILY);
-createPutCommand(table);
-return table;
-  }
-
-  private static void createPutCommand(Table table) throws IOException {
-for (String president : presidentsRowKeys) {
-  if (presidentNames.hasNext()) {
- 

[24/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
deleted file mode 100644
index 7fea254..000
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java
+++ /dev/null
@@ -1,902 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.mapreduce;
-
-import java.io.IOException;
-import java.io.UnsupportedEncodingException;
-import java.net.InetSocketAddress;
-import java.net.URLDecoder;
-import java.net.URLEncoder;
-import java.nio.charset.StandardCharsets;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.TreeMap;
-import java.util.TreeSet;
-import java.util.UUID;
-import java.util.function.Function;
-import java.util.stream.Collectors;
-
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.CellComparator;
-import org.apache.hadoop.hbase.CellUtil;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RegionLocator;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.fs.HFileSystem;
-import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.HRegionLocation;
-import org.apache.hadoop.hbase.HTableDescriptor;
-import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-import org.apache.hadoop.hbase.io.compress.Compression;
-import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
-import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.io.hfile.CacheConfig;
-import org.apache.hadoop.hbase.io.hfile.HFile;
-import org.apache.hadoop.hbase.io.hfile.HFileContext;
-import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
-import org.apache.hadoop.hbase.io.hfile.HFileWriterImpl;
-import org.apache.hadoop.hbase.KeyValue;
-import org.apache.hadoop.hbase.KeyValueUtil;
-import org.apache.hadoop.hbase.regionserver.BloomType;
-import org.apache.hadoop.hbase.regionserver.HStore;
-import org.apache.hadoop.hbase.regionserver.StoreFile;
-import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
-import org.apache.hadoop.hbase.util.FSUtils;
-import org.apache.hadoop.io.NullWritable;
-import org.apache.hadoop.io.SequenceFile;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.OutputFormat;
-import org.apache.hadoop.mapreduce.RecordWriter;
-import org.apache.hadoop.mapreduce.TaskAttemptContext;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
-import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
-import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;
-
-import 
org.apache.hadoop.hbase.shaded.com.google.common.annotations.VisibleForTesting;
-
-/**
- * Writes HFiles. Passed Cells must arrive in order.
- * Writes current time as the sequence id for the file. Sets the major 
compacted
- * attribute on created @{link {@link HFile}s. Calling write(null,null) will 
forcibly roll
- * all HFiles being written.
- * 
- * Using this class as part of a MapReduce job is best done
- 
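In practice the class is wired into a job through HFileOutputFormat2.configureIncrementalLoad. Below is a hedged sketch of a text-to-HFile bulk-write job; the table, family, qualifier and paths are placeholders, and the CSV parsing is purely illustrative rather than part of this commit.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HFileBulkWriteJob {
  // Parses "rowkey,value" lines and emits one Put per line.
  static class CsvToPutMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      String[] parts = line.toString().split(",", 2);
      byte[] row = Bytes.toBytes(parts[0]);
      Put put = new Put(row);
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(parts[1]));
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hfile-bulk-write");
    job.setJarByClass(HFileBulkWriteJob.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path("/tmp/csv-in"));
    job.setMapperClass(CsvToPutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    TableName name = TableName.valueOf("mytable");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(name);
         RegionLocator locator = conn.getRegionLocator(name)) {
      // Sets the output format, total-order partitioner and per-family settings so the
      // produced HFiles line up with the table's current region boundaries.
      HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
    }
    FileOutputFormat.setOutputPath(job, new Path("/tmp/hfile-out"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The files produced under /tmp/hfile-out would then be handed to the bulk load tooling, which is the usual reason for writing HFiles this way instead of issuing live Puts.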

[35/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
new file mode 100644
index 000..e669f14
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/ScanPerformanceEvaluation.java
@@ -0,0 +1,406 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.client.TableSnapshotScanner;
+import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapreduce.TableMapper;
+import org.apache.hadoop.hbase.util.AbstractHBaseTool;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Counters;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.ToolRunner;
+
+import org.apache.hadoop.hbase.shaded.com.google.common.base.Stopwatch;
+
+/**
+ * A simple performance evaluation tool for single client and MR scans
+ * and snapshot scans.
+ */
+@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS)
+public class ScanPerformanceEvaluation extends AbstractHBaseTool {
+
+  private static final String HBASE_COUNTER_GROUP_NAME = "HBase Counters";
+
+  private String type;
+  private String file;
+  private String tablename;
+  private String snapshotName;
+  private String restoreDir;
+  private String caching;
+
+  @Override
+  public void setConf(Configuration conf) {
+super.setConf(conf);
+Path rootDir;
+try {
+  rootDir = FSUtils.getRootDir(conf);
+  rootDir.getFileSystem(conf);
+} catch (IOException ex) {
+  throw new RuntimeException(ex);
+}
+  }
+
+  @Override
+  protected void addOptions() {
+this.addRequiredOptWithArg("t", "type", "the type of the test. One of the 
following: streaming|scan|snapshotscan|scanmapreduce|snapshotscanmapreduce");
+this.addOptWithArg("f", "file", "the filename to read from");
+this.addOptWithArg("tn", "table", "the tablename to read from");
+this.addOptWithArg("sn", "snapshot", "the snapshot name to read from");
+this.addOptWithArg("rs", "restoredir", "the directory to restore the 
snapshot");
+this.addOptWithArg("ch", "caching", "scanner caching value");
+  }
+
+  @Override
+  protected void processOptions(CommandLine cmd) {
+type = cmd.getOptionValue("type");
+file = cmd.getOptionValue("file");
+tablename = cmd.getOptionValue("table");
+snapshotName = cmd.getOptionValue("snapshot");
+restoreDir = cmd.getOptionValue("restoredir");
+caching = cmd.getOptionValue("caching");
+  }
+
+  protected void testHdfsStreaming(Path filename) throws IOException {
+byte[] buf = new byte[1024];
+FileSystem fs = filename.getFileSystem(getConf());
+
+// read the file from start to finish
+Stopwatch fileOpenTimer = Stopwatch.createUnstarted();
+Stopwatch streamTimer = Stopwatch.createUnstarted();
+
+fileOpenTimer.start();
+FSDataInputStream in = fs.open(filename);
+fileOpenTimer.stop();
+
+long totalBytes = 0;
+str
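Since the class extends AbstractHBaseTool, it can be launched through ToolRunner with the options registered in addOptions() above (note it sits under the module's test sources, as the path above shows). A hedged sketch; the snapshot name, restore directory and caching value are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ScanPerformanceEvaluation;
import org.apache.hadoop.util.ToolRunner;

// Run the evaluation in "snapshotscan" mode; all argument values are made up.
public class ScanPerfDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    int exit = ToolRunner.run(conf, new ScanPerformanceEvaluation(), new String[] {
        "--type", "snapshotscan",      // streaming|scan|snapshotscan|scanmapreduce|snapshotscanmapreduce
        "--snapshot", "my_snapshot",
        "--restoredir", "/tmp/restore",
        "--caching", "1000" });
    System.exit(exit);
  }
}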

[39/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
new file mode 100644
index 000..403051f
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
@@ -0,0 +1,410 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HDFSBlocksDistribution;
+import org.apache.hadoop.hbase.HDFSBlocksDistribution.HostAndWeight;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.ClientSideRegionScanner;
+import org.apache.hadoop.hbase.client.IsolationLevel;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MapReduceProtos.TableSnapshotRegionSplit;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotDescription;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.SnapshotProtos.SnapshotRegionManifest;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper;
+import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;
+import org.apache.hadoop.hbase.snapshot.SnapshotManifest;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.io.Writable;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+/**
+ * Hadoop MR API-agnostic implementation for mapreduce over table snapshots.
+ */
+@InterfaceAudience.Private
+public class TableSnapshotInputFormatImpl {
+  // TODO: Snapshots files are owned in fs by the hbase user. There is no
+  // easy way to delegate access.
+
+  public static final Log LOG = 
LogFactory.getLog(TableSnapshotInputFormatImpl.class);
+
+  private static final String SNAPSHOT_NAME_KEY = 
"hbase.TableSnapshotInputFormat.snapshot.name";
+  // key for specifying the root dir of the restored snapshot
+  protected static final String RESTORE_DIR_KEY = 
"hbase.TableSnapshotInputFormat.restore.dir";
+
+  /** See {@link #getBestLocations(Configuration, HDFSBlocksDistribution)} */
+  private static final String LOCALITY_CUTOFF_MULTIPLIER =
+"hbase.tablesnapshotinputformat.locality.cutoff.multiplier";
+  private static final float DEFAULT_LOCALITY_CUTOFF_MULTIPLIER = 0.8f;
+
+  /**
+   * Implementation class for InputSplit logic common between mapred and 
mapreduce.
+   */
+  public static class InputSplit implements Writable {
+
+private TableDescriptor htd;
+private HRegionInfo regionInfo;
+private String[] locations;
+private String scan;
+private String restoreDir;
+
+// constructor for mapreduce framework / Writable
+public InputSplit() {}
+
+public InputSplit(TableDescriptor htd, HRegionInfo regionInfo, List<String> locations,
+Scan scan, Path restoreDir) {
+  this.htd = htd;
+  this.regionInfo = regionInfo;
+  if (locations == null || locations.isEmpty()) {
+this.locations = new String[0];
+  } else {
+this.locations = locations.toArray(new String[locations.size()]);

[47/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
HBASE-18640 Move mapreduce out of hbase-server into separate module.

- Moves o.a.h.h.{mapred, mapreduce} out to the new hbase-mapreduce module, which depends
  on hbase-server because of classes like *Snapshot{Input,Output}Format.java, WALs,
  replication, etc.
- hbase-backup depends on it for WALPlayer and MR job stuff
- A bunch of tools needed to be pulled into hbase-mapreduce because of their
  dependencies on MR. These are: CompactionTool, LoadTestTool, PerformanceEvaluation,
  ExportSnapshot. This is a better place for them than hbase-server, but the ideal
  place would be a separate hbase-tools module.
- There were some tests in hbase-server which were digging into these tools for static
  util functions or confs. Moved these to a better, easily shared place; for example,
  security-related stuff went to HBaseKerberosUtils.
- Note that hbase-mapreduce has secondPartExecution tests. On my machine they took
  about 20 min, so maybe more on Apache Jenkins. That is roughly an equal reduction in
  the runtime of the hbase-server tests, which is a big win!

Change-Id: Ieeb7235014717ca83ee5cb13b2a27fddfa6838e8


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/664b6be0
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/664b6be0
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/664b6be0

Branch: refs/heads/HBASE-18467
Commit: 664b6be0ef65218328847ea501fa88cb877e6759
Parents: 8d33949
Author: Apekshit Sharma 
Authored: Sun Aug 20 14:34:16 2017 -0700
Committer: Apekshit Sharma 
Committed: Fri Aug 25 18:38:48 2017 -0700

--
 hbase-assembly/pom.xml  |4 +
 .../src/main/assembly/hadoop-two-compat.xml |1 +
 hbase-assembly/src/main/assembly/src.xml|1 +
 hbase-backup/pom.xml|   10 +
 hbase-examples/pom.xml  |4 +
 hbase-it/pom.xml|   16 +
 .../hadoop/hbase/IntegrationTestIngest.java |5 +-
 .../IntegrationTestIngestStripeCompactions.java |4 +-
 .../hbase/IntegrationTestIngestWithMOB.java |5 +-
 .../hbase/IntegrationTestRegionReplicaPerf.java |3 +-
 .../mapreduce/IntegrationTestImportTsv.java |1 -
 .../test/IntegrationTestLoadAndVerify.java  |2 +-
 hbase-mapreduce/pom.xml |  316 +++
 .../org/apache/hadoop/hbase/mapred/Driver.java  |   52 +
 .../hadoop/hbase/mapred/GroupingTableMap.java   |  157 ++
 .../hadoop/hbase/mapred/HRegionPartitioner.java |   95 +
 .../hadoop/hbase/mapred/IdentityTableMap.java   |   76 +
 .../hbase/mapred/IdentityTableReduce.java   |   61 +
 .../mapred/MultiTableSnapshotInputFormat.java   |  128 +
 .../apache/hadoop/hbase/mapred/RowCounter.java  |  121 +
 .../hadoop/hbase/mapred/TableInputFormat.java   |   90 +
 .../hbase/mapred/TableInputFormatBase.java  |  313 +++
 .../apache/hadoop/hbase/mapred/TableMap.java|   38 +
 .../hadoop/hbase/mapred/TableMapReduceUtil.java |  376 +++
 .../hadoop/hbase/mapred/TableOutputFormat.java  |  134 +
 .../hadoop/hbase/mapred/TableRecordReader.java  |  139 +
 .../hbase/mapred/TableRecordReaderImpl.java |  259 ++
 .../apache/hadoop/hbase/mapred/TableReduce.java |   38 +
 .../hbase/mapred/TableSnapshotInputFormat.java  |  166 ++
 .../apache/hadoop/hbase/mapred/TableSplit.java  |  154 +
 .../hadoop/hbase/mapred/package-info.java   |   26 +
 .../hadoop/hbase/mapreduce/CellCounter.java |  333 +++
 .../hadoop/hbase/mapreduce/CellCreator.java |  134 +
 .../hadoop/hbase/mapreduce/CopyTable.java   |  386 +++
 .../DefaultVisibilityExpressionResolver.java|  144 +
 .../apache/hadoop/hbase/mapreduce/Driver.java   |   64 +
 .../apache/hadoop/hbase/mapreduce/Export.java   |  197 ++
 .../hbase/mapreduce/GroupingTableMapper.java|  177 ++
 .../hbase/mapreduce/HFileInputFormat.java   |  174 ++
 .../hbase/mapreduce/HFileOutputFormat2.java |  902 ++
 .../hbase/mapreduce/HRegionPartitioner.java |  140 +
 .../hadoop/hbase/mapreduce/HashTable.java   |  747 +
 .../hbase/mapreduce/IdentityTableMapper.java|   67 +
 .../hbase/mapreduce/IdentityTableReducer.java   |   79 +
 .../apache/hadoop/hbase/mapreduce/Import.java   |  780 ++
 .../hadoop/hbase/mapreduce/ImportTsv.java   |  793 ++
 .../hadoop/hbase/mapreduce/JarFinder.java   |  186 ++
 .../hbase/mapreduce/KeyValueSerialization.java  |   88 +
 .../hbase/mapreduce/KeyValueSortReducer.java|   57 +
 .../mapreduce/MultiTableHFileOutputFormat.java  |  122 +
 .../hbase/mapreduce/MultiTableInputFormat.java  |  104 +
 .../mapreduce/MultiTableInputFormatBase.java|  296 ++
 .../hbase/mapreduce/MultiTableOutputFormat.java |  176 ++
 .../MultiTableSnapshotInputFormat.java  |  106 +
 .../MultiTableSnapshotInputFormatImpl.java  |  252 ++
 .../mapreduce/MultithreadedTableMapper.java |  301 ++
 .../hbase/mapreduce/MutationSerialization.java 

[31/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
--
diff --git 
a/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
new file mode 100644
index 000..7b6e684
--- /dev/null
+++ 
b/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
@@ -0,0 +1,571 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.Utils.OutputFileUtils.OutputFilesFilter;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.ExpectedException;
+
+@Category({VerySlowMapReduceTests.class, LargeTests.class})
+public class TestImportTsv implements Configurable {
+
+  private static final Log LOG = LogFactory.getLog(TestImportTsv.class);
+  protected static final String NAME = TestImportTsv.class.getSimpleName();
+  protected static HBaseTestingUtility util = new HBaseTestingUtility();
+
+  // Delete the tmp directory after running doMROnTableTest. Boolean. Default 
is true.
+  protected static final String DELETE_AFTER_LOAD_CONF = NAME + 
".deleteAfterLoad";
+
+  /**
+   * Force use of combiner in doMROnTableTest. Boolean. Default is true.
+   */
+  protected static final String FORCE_COMBINER_CONF = NAME + ".forceCombiner";
+
+  private final String FAMILY = "FAM";
+  private TableName tn;
+  private Map<String, String> args;
+
+  @Rule
+  public ExpectedException exception = ExpectedException.none();
+
+  public Configuration getConf() {
+return util.getConfiguration();
+  }
+
+  public void setConf(Configuration conf) {
+throw new IllegalArgumentException("setConf not supported");
+  }
+
+  @BeforeClass
+  public static void provisionCluster() throws Exception {
+util.startMiniCluster();
+  }
+
+  @AfterClass
+  public static void releaseCluster() throws Exception {
+util.shutdownMiniCluster();
+  }
+
+  @Before
+  public void setup() throws Exception {
+tn = TableName.valueOf("test-" + UUID.randomUUID());
+args = new HashMap<>();
+// Pre

[06/50] [abbrv] hbase git commit: HBASE-16324 Remove LegacyScanQueryMatcher

2017-08-26 Thread busbey
HBASE-16324 Remove LegacyScanQueryMatcher


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/8d33949b
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/8d33949b
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/8d33949b

Branch: refs/heads/HBASE-18467
Commit: 8d33949b8db072902783f63cd9aaa68cbd6b905f
Parents: 2773510
Author: zhangduo 
Authored: Fri Aug 25 17:02:03 2017 +0800
Committer: zhangduo 
Committed: Sat Aug 26 08:04:43 2017 +0800

--
 .../example/ZooKeeperScanPolicyObserver.java|  10 +-
 .../hbase/mob/DefaultMobStoreCompactor.java |   6 +-
 .../compactions/PartitionedMobCompactor.java|   8 +-
 .../hadoop/hbase/regionserver/HMobStore.java|   1 -
 .../MemStoreCompactorSegmentsIterator.java  |  26 +-
 .../regionserver/ReversedStoreScanner.java  |   8 +-
 .../hadoop/hbase/regionserver/StoreFlusher.java |  10 +-
 .../hadoop/hbase/regionserver/StoreScanner.java | 217 +++---
 .../regionserver/compactions/Compactor.java |  18 +-
 .../querymatcher/LegacyScanQueryMatcher.java| 384 ---
 ...estAvoidCellReferencesIntoShippedBlocks.java |  11 +-
 .../hadoop/hbase/client/TestFromClientSide.java |   4 +-
 .../TestRegionObserverScannerOpenHook.java  |  31 +-
 .../TestPartitionedMobCompactor.java|   6 +-
 .../regionserver/NoOpScanPolicyObserver.java|  24 +-
 .../regionserver/TestCompactingMemStore.java|  34 +-
 .../hbase/regionserver/TestDefaultMemStore.java |  66 +-
 .../regionserver/TestMobStoreCompaction.java|   5 +-
 .../regionserver/TestReversibleScanners.java|  22 +-
 .../hbase/regionserver/TestStoreScanner.java| 682 +--
 .../hbase/util/TestCoprocessorScanPolicy.java   |  24 +-
 21 files changed, 552 insertions(+), 1045 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hbase/blob/8d33949b/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
--
diff --git 
a/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
 
b/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
index 35f85f7..b489fe4 100644
--- 
a/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
+++ 
b/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
@@ -19,9 +19,9 @@
 package org.apache.hadoop.hbase.coprocessor.example;
 
 import java.io.IOException;
-import java.util.Collections;
 import java.util.List;
 import java.util.NavigableSet;
+import java.util.OptionalInt;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -194,9 +194,7 @@ public class ZooKeeperScanPolicyObserver implements 
RegionObserver {
   // take default action
   return null;
 }
-Scan scan = new Scan();
-scan.setMaxVersions(scanInfo.getMaxVersions());
-return new StoreScanner(store, scanInfo, scan, scanners,
+return new StoreScanner(store, scanInfo, OptionalInt.empty(), scanners,
 ScanType.COMPACT_RETAIN_DELETES, store.getSmallestReadPoint(), 
HConstants.OLDEST_TIMESTAMP);
   }
 
@@ -210,9 +208,7 @@ public class ZooKeeperScanPolicyObserver implements 
RegionObserver {
   // take default action
   return null;
 }
-Scan scan = new Scan();
-scan.setMaxVersions(scanInfo.getMaxVersions());
-return new StoreScanner(store, scanInfo, scan, scanners, scanType,
+return new StoreScanner(store, scanInfo, OptionalInt.empty(), scanners, 
scanType,
 store.getSmallestReadPoint(), earliestPutTs);
   }
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/8d33949b/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
--
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
index c475b17..89d2958 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
@@ -22,6 +22,7 @@ import java.io.InterruptedIOException;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
+import java.util.OptionalInt;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -32,7 +33,6 @@ import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValueUtil;
 import org.apache.hadoop.hbase.classification.In

[43/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
--
diff --git 
a/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
new file mode 100644
index 000..b64271e
--- /dev/null
+++ 
b/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
@@ -0,0 +1,793 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static java.lang.String.format;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.TableNotEnabledException;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionLocator;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.shaded.com.google.common.base.Preconditions;
+import org.apache.hadoop.hbase.shaded.com.google.common.base.Splitter;
+import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
+
+/**
+ * Tool to import data from a TSV file.
+ *
+ * This tool is rather simplistic - it doesn't do any quoting or
+ * escaping, but is useful for many data loads.
+ *
+ * @see ImportTsv#usage(String)
+ */
+@InterfaceAudience.Public
+public class ImportTsv extends Configured implements Tool {
+
+  protected static final Log LOG = LogFactory.getLog(ImportTsv.class);
+
+  final static String NAME = "importtsv";
+
+  public final static String MAPPER_CONF_KEY = "importtsv.mapper.class";
+  public final static String BULK_OUTPUT_CONF_KEY = "importtsv.bulk.output";
+  public final static String TIMESTAMP_CONF_KEY = "importtsv.timestamp";
+  public final static String JOB_NAME_CONF_KEY = "mapreduce.job.name";
+  // TODO: the rest of these configs are used exclusively by TsvImporterMapper.
+  // Move them out of the tool and let the mapper handle its own validation.
+  public final static String DRY_RUN_CONF_KEY = "importtsv.dry.run";
+  // If true, bad lines are logged to stderr. Default: false.
+  public final static String LOG_BAD_LINES_CONF_KEY = "importtsv.log.bad.lines";
+  public final static String SKIP_LINES_CONF_KEY = "importtsv.skip.bad.lines";
+  public final static String SKIP_EMPTY_COLUMNS = "importtsv.skip.empty.columns";
+  public final static String COLUMNS_CONF_KEY = "importtsv.columns";
+  public final static String SEPARATOR_CONF_KEY = "importtsv.separator";
+  public final static String ATTRIBU

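The configuration keys above are normally supplied to a ToolRunner-driven invocation of ImportTsv. A minimal sketch follows; the table name, column mapping, and input path are invented for illustration, not values taken from this commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.ImportTsv;
import org.apache.hadoop.util.ToolRunner;

public class ImportTsvDriver {
  public static void main(String[] cmdArgs) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // COLUMNS_CONF_KEY: first field becomes the row key, second goes to family "d", qualifier "value".
    conf.set("importtsv.columns", "HBASE_ROW_KEY,d:value");
    // SEPARATOR_CONF_KEY: tab is the default; set explicitly here for clarity.
    conf.set("importtsv.separator", "\t");
    // Hypothetical table name and HDFS input directory.
    int exit = ToolRunner.run(conf, new ImportTsv(), new String[] { "demoTable", "/tmp/demo-input" });
    System.exit(exit);
  }
}
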
[16/50] [abbrv] hbase git commit: HBASE-18640 Move mapreduce out of hbase-server into separate module.

2017-08-26 Thread busbey
http://git-wip-us.apache.org/repos/asf/hbase/blob/664b6be0/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
--
diff --git 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java 
b/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
deleted file mode 100644
index eebb0f3..000
--- 
a/hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
+++ /dev/null
@@ -1,2626 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase;
-
-import static 
org.codehaus.jackson.map.SerializationConfig.Feature.SORT_PROPERTIES_ALPHABETICALLY;
-
-import java.io.IOException;
-import java.io.PrintStream;
-import java.lang.reflect.Constructor;
-import java.math.BigDecimal;
-import java.math.MathContext;
-import java.text.DecimalFormat;
-import java.text.SimpleDateFormat;
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.Date;
-import java.util.LinkedList;
-import java.util.Locale;
-import java.util.Map;
-import java.util.Queue;
-import java.util.Random;
-import java.util.TreeMap;
-import java.util.NoSuchElementException;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
-
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.classification.InterfaceAudience;
-import org.apache.hadoop.hbase.client.Admin;
-import org.apache.hadoop.hbase.client.Append;
-import org.apache.hadoop.hbase.client.AsyncConnection;
-import org.apache.hadoop.hbase.client.AsyncTable;
-import org.apache.hadoop.hbase.client.BufferedMutator;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Consistency;
-import org.apache.hadoop.hbase.client.Delete;
-import org.apache.hadoop.hbase.client.Durability;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Increment;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.client.RawAsyncTable;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.RowMutations;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.client.Table;
-import org.apache.hadoop.hbase.filter.BinaryComparator;
-import org.apache.hadoop.hbase.filter.CompareFilter;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterAllFilter;
-import org.apache.hadoop.hbase.filter.FilterList;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
-import org.apache.hadoop.hbase.filter.WhileMatchFilter;
-import org.apache.hadoop.hbase.io.compress.Compression;
-import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
-import org.apache.hadoop.hbase.io.hfile.RandomDistribution;
-import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
-import org.apache.hadoop.hbase.regionserver.BloomType;
-import org.apache.hadoop.hbase.regionserver.CompactingMemStore;
-import org.apache.hadoop.hbase.trace.HBaseHTraceConfiguration;
-import org.apache.hadoop.hbase.trace.SpanReceiverHost;
-import org.apache.hadoop.hbase.util.*;
-import org.apache.hadoop.io.LongWritable;
-import org.apache.hadoop.io.Text;
-import org.apache.hadoop.mapreduce.Job;
-import org.apache.hadoop.mapreduce.Mapper;
-import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
-import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
-import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;
-import org.apache.hadoop.util.Tool;
-import org.apache.hadoop.util.Tool

hbase-site git commit: INFRA-10751 Empty commit

2017-08-26 Thread git-site-role
Repository: hbase-site
Updated Branches:
  refs/heads/asf-site ebf9a8b87 -> ba0dcf9be


INFRA-10751 Empty commit


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/ba0dcf9b
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/ba0dcf9b
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/ba0dcf9b

Branch: refs/heads/asf-site
Commit: ba0dcf9beec85bbef9194101ba044934f0049502
Parents: ebf9a8b
Author: jenkins 
Authored: Sat Aug 26 15:10:52 2017 +
Committer: jenkins 
Committed: Sat Aug 26 15:10:52 2017 +

--

--




[39/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
index d459974..9084eb0 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdmin.html
@@ -60,1071 +60,1072 @@
 052 * 

053 * This feature is still under development, so marked as IA.Private. Will change to public when 054 * done. Use it with caution. -055 */ -056@InterfaceAudience.Public -057public interface AsyncAdmin { -058 -059 /** -060 * @param tableName Table to check. -061 * @return True if table exists already. The return value will be wrapped by a -062 * {@link CompletableFuture}. -063 */ -064 CompletableFuture tableExists(TableName tableName); -065 -066 /** -067 * List all the userspace tables. -068 * @return - returns a list of TableDescriptors wrapped by a {@link CompletableFuture}. -069 * @see #listTables(Optional, boolean) -070 */ -071 default CompletableFuture> listTables() { -072return listTables(Optional.empty(), false); -073 } -074 -075 /** -076 * List all the tables matching the given pattern. -077 * @param pattern The compiled regular expression to match against -078 * @param includeSysTables False to match only against userspace tables -079 * @return - returns a list of TableDescriptors wrapped by a {@link CompletableFuture}. -080 */ -081 CompletableFuture> listTables(Optional pattern, -082 boolean includeSysTables); -083 -084 /** -085 * List all of the names of userspace tables. -086 * @return a list of table names wrapped by a {@link CompletableFuture}. -087 * @see #listTableNames(Optional, boolean) -088 */ -089 default CompletableFuture> listTableNames() { -090return listTableNames(Optional.empty(), false); -091 } -092 -093 /** -094 * List all of the names of userspace tables. -095 * @param pattern The regular expression to match against -096 * @param includeSysTables False to match only against userspace tables -097 * @return a list of table names wrapped by a {@link CompletableFuture}. -098 */ -099 CompletableFuture> listTableNames(Optional pattern, -100 boolean includeSysTables); -101 -102 /** -103 * Method for getting the tableDescriptor -104 * @param tableName as a {@link TableName} -105 * @return the read-only tableDescriptor wrapped by a {@link CompletableFuture}. -106 */ -107 CompletableFuture getTableDescriptor(TableName tableName); -108 -109 /** -110 * Creates a new table. -111 * @param desc table descriptor for table -112 */ -113 default CompletableFuture createTable(TableDescriptor desc) { -114return createTable(desc, Optional.empty()); -115 } -116 -117 /** -118 * Creates a new table with the specified number of regions. The start key specified will become -119 * the end key of the first region of the table, and the end key specified will become the start -120 * key of the last region of the table (the first region has a null start key and the last region -121 * has a null end key). BigInteger math will be used to divide the key range specified into enough -122 * segments to make the required number of total regions. -123 * @param desc table descriptor for table -124 * @param startKey beginning of key range -125 * @param endKey end of key range -126 * @param numRegions the total number of regions to create -127 */ -128 CompletableFuture createTable(TableDescriptor desc, byte[] startKey, byte[] endKey, -129 int numRegions); -130 -131 /** -132 * Creates a new table with an initial set of empty regions defined by the specified split keys. -133 * The total number of regions created will be the number of split keys plus one. -134 * Note : Avoid passing empty split key. 
-135 * @param desc table descriptor for table -136 * @param splitKeys array of split keys for the initial regions of the table -137 */ -138 CompletableFuture createTable(TableDescriptor desc, Optional splitKeys); -139 -140 /** -141 * Deletes a table. -142 * @param tableName name of table to delete -143 */ -144 CompletableFuture deleteTable(TableName tableName); -145 -146 /** -147 * Delete tables matching the passed in pattern and wait on completion. Warning: Use this method -148 * carefully, there is no prompting and the effect is immediate. Consider using -149 * {@link #listTableNames(Optional, boolean) } and -150 * {@link #deleteTable(org.apache.hadoop.hbase.TableName)} -151 * @param pattern The pattern to match table nam


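The AsyncAdmin javadoc excerpted above describes createTable with a start key, end key and region count, and tableExists returning a CompletableFuture. A rough sketch of chaining the two, assuming an AsyncAdmin instance and a TableDescriptor obtained elsewhere; the split keys and region count below are arbitrary.

import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Creates a pre-split table, then checks that it exists, without blocking the caller.
  static CompletableFuture<Boolean> createAndCheck(AsyncAdmin admin, TableDescriptor desc) {
    TableName name = desc.getTableName();
    return admin.createTable(desc, Bytes.toBytes("aaa"), Bytes.toBytes("zzz"), 10)
        .thenCompose(ignored -> admin.tableExists(name));
  }
}
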
[18/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.html
index 94c81a5..9f5c436 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.MetaTableRawScanResultConsumer.html
@@ -117,7 +117,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private static final class AsyncMetaTableAccessor.MetaTableRawScanResultConsumer
+private static final class AsyncMetaTableAccessor.MetaTableRawScanResultConsumer
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements RawScanResultConsumer
 
@@ -257,7 +257,7 @@ implements 
 
 currentRowCount
-private int currentRowCount
+private int currentRowCount
 
 
 
@@ -266,7 +266,7 @@ implements 
 
 rowUpperLimit
-private final int rowUpperLimit
+private final int rowUpperLimit
 
 
 
@@ -275,7 +275,7 @@ implements 
 
 visitor
-private final MetaTableAccessor.Visitor visitor
+private final MetaTableAccessor.Visitor visitor
 
 
 
@@ -284,7 +284,7 @@ implements 
 
 future
-private final http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureVoid> future
+private final http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureVoid> future
 
 
 
@@ -301,7 +301,7 @@ implements 
 
 MetaTableRawScanResultConsumer
-MetaTableRawScanResultConsumer(int rowUpperLimit,
+MetaTableRawScanResultConsumer(int rowUpperLimit,
MetaTableAccessor.Visitor visitor,
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureVoid> future)
 
@@ -320,7 +320,7 @@ implements 
 
 onError
-public void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
+public void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
 Description copied from 
interface: RawScanResultConsumer
 Indicate that we hit an unrecoverable error and the scan 
operation is terminated.
  
@@ -337,7 +337,7 @@ implements 
 
 onComplete
-public void onComplete()
+public void onComplete()
 Description copied from 
interface: RawScanResultConsumer
 Indicate that the scan operation is completed 
normally.
 
@@ -352,7 +352,7 @@ implements 
 
 onNext
-public void onNext(Result[] results,
+public void onNext(Result[] results,
RawScanResultConsumer.ScanController controller)
 Description copied from 
interface: RawScanResultConsumer
 Indicate that we have receive some data.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.html 
b/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.html
index 3ee00b4..4f1a39f 100644
--- a/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.html
+++ b/devapidocs/org/apache/hadoop/hbase/AsyncMetaTableAccessor.html
@@ -110,10 +110,14 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class AsyncMetaTableAccessor
+public class AsyncMetaTableAccessor
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 The asynchronous meta table accessor. Used to read/write 
region and assignment information store
  in hbase:meta.
+
+Since:
+2.0.0
+
 
 
 
@@ -390,7 +394,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LOG
-private static final org.apache.commons.logging.Log LOG
+private static final org.apache.commons.logging.Log LOG
 
 
 
@@ -399,7 +403,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/

[29/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.ReadType.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.ReadType.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.ReadType.html
index dffd28e..a3085fd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.ReadType.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.ReadType.html
@@ -149,1109 +149,1103 @@
 141  private long maxResultSize = -1;
 142  private boolean cacheBlocks = true;
 143  private boolean reversed = false;
-144  private Map> familyMap = new 
TreeMap<>(Bytes.BYTES_COMPARATOR);
-145  private Boolean asyncPrefetch = null;
-146
-147  /**
-148   * Parameter name for client scanner 
sync/async prefetch toggle.
-149   * When using async scanner, 
prefetching data from the server is done at the background.
-150   * The parameter currently won't have 
any effect in the case that the user has set
-151   * Scan#setSmall or Scan#setReversed
-152   */
-153  public static final String 
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
-154  
"hbase.client.scanner.async.prefetch";
-155
-156  /**
-157   * Default value of {@link 
#HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}.
-158   */
-159  public static final boolean 
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;
-160
-161  /**
-162   * Set it true for small scan to get 
better performance Small scan should use pread and big scan
-163   * can use seek + read seek + read is 
fast but can cause two problem (1) resource contention (2)
-164   * cause too much network io [89-fb] 
Using pread for non-compaction read request
-165   * 
https://issues.apache.org/jira/browse/HBASE-7266 On the other hand, if setting 
it true, we
-166   * would do 
openScanner,next,closeScanner in one RPC call. It means the better performance 
for
-167   * small scan. [HBASE-9488]. Generally, 
if the scan range is within one data block(64KB), it could
-168   * be considered as a small scan.
-169   */
-170  private boolean small = false;
-171
-172  /**
-173   * The mvcc read point to use when open 
a scanner. Remember to clear it after switching regions as
-174   * the mvcc is only valid within region 
scope.
-175   */
-176  private long mvccReadPoint = -1L;
-177
-178  /**
-179   * The number of rows we want for this 
scan. We will terminate the scan if the number of return
-180   * rows reaches this value.
-181   */
-182  private int limit = -1;
-183
-184  /**
-185   * Control whether to use pread at 
server side.
-186   */
-187  private ReadType readType = 
ReadType.DEFAULT;
-188
-189  private boolean needCursorResult = 
false;
+144  private TimeRange tr = new 
TimeRange();
+145  private Map> familyMap =
+146new TreeMap>(Bytes.BYTES_COMPARATOR);
+147  private Boolean asyncPrefetch = null;
+148
+149  /**
+150   * Parameter name for client scanner 
sync/async prefetch toggle.
+151   * When using async scanner, 
prefetching data from the server is done at the background.
+152   * The parameter currently won't have 
any effect in the case that the user has set
+153   * Scan#setSmall or Scan#setReversed
+154   */
+155  public static final String 
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
+156  
"hbase.client.scanner.async.prefetch";
+157
+158  /**
+159   * Default value of {@link 
#HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}.
+160   */
+161  public static final boolean 
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;
+162
+163  /**
+164   * Set it true for small scan to get 
better performance Small scan should use pread and big scan
+165   * can use seek + read seek + read is 
fast but can cause two problem (1) resource contention (2)
+166   * cause too much network io [89-fb] 
Using pread for non-compaction read request
+167   * 
https://issues.apache.org/jira/browse/HBASE-7266 On the other hand, if setting 
it true, we
+168   * would do 
openScanner,next,closeScanner in one RPC call. It means the better performance 
for
+169   * small scan. [HBASE-9488]. Generally, 
if the scan range is within one data block(64KB), it could
+170   * be considered as a small scan.
+171   */
+172  private boolean small = false;
+173
+174  /**
+175   * The mvcc read point to use when open 
a scanner. Remember to clear it after switching regions as
+176   * the mvcc is only valid within region 
scope.
+177   */
+178  private long mvccReadPoint = -1L;
+179
+180  /**
+181   * The number of rows we want for this 
scan. We will terminate the scan if the number of return
+182   * rows reaches this value.
+183   */
+184  private int limit = -1;
+185
+186  /**
+187   * Control whether to use pread at 
server side.
+188   */
+189  private ReadType readType = 
ReadType.DEFAULT;
 190
-191  /**
-192   * Create a Scan operation across all 
rows.
-193   */
-194  public Scan() {}
-195
-19

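The Scan fields excerpted above (limit, readType, async prefetch) control how a short scan is executed. A hedged sketch of configuring them; the row range is invented and the setter names are assumed from the 2.0 source shown above.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ShortScanSketch {
  static Scan buildShortScan() {
    Scan scan = new Scan();
    scan.withStartRow(Bytes.toBytes("row-000"));  // bounded range, likely inside one 64KB block
    scan.withStopRow(Bytes.toBytes("row-010"));
    scan.setLimit(10);                            // stop after 10 rows, per the limit field above
    scan.setReadType(Scan.ReadType.PREAD);        // prefer pread on the server side
    scan.setAsyncPrefetch(true);                  // the HBASE_CLIENT_SCANNER_ASYNC_PREFETCH toggle
    return scan;
  }
}
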
[40/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html 
b/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
index dab..4cd3d3b 100644
--- a/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
+++ b/apidocs/org/apache/hadoop/hbase/io/class-use/TimeRange.html
@@ -102,19 +102,6 @@
 
 Uses of TimeRange in org.apache.hadoop.hbase.client
 
-Fields in org.apache.hadoop.hbase.client
 declared as TimeRange 
-
-Modifier and Type
-Field and Description
-
-
-
-protected TimeRange
-Query.tr 
-
-
-
-
 Fields in org.apache.hadoop.hbase.client
 with type parameters of type TimeRange 
 
 Modifier and Type
@@ -136,7 +123,9 @@
 
 
 TimeRange
-Query.getTimeRange() 
+Get.getTimeRange()
+Method for retrieving the get's TimeRange
+
 
 
 TimeRange
@@ -144,6 +133,10 @@
 Gets the TimeRange used for this increment.
 
 
+
+TimeRange
+Scan.getTimeRange() 
+
 
 
 
@@ -159,48 +152,6 @@
 
 
 
-
-Methods in org.apache.hadoop.hbase.client
 with parameters of type TimeRange 
-
-Modifier and Type
-Method and Description
-
-
-
-Get
-Get.setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
-
-Query
-Query.setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
-
-Scan
-Scan.setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
-
-Get
-Get.setTimeRange(TimeRange tr)
-Get versions of columns only within the specified timestamp 
range,
-
-
-
-Query
-Query.setTimeRange(TimeRange tr)
-Sets the TimeRange to be used by this Query
-
-
-
-Scan
-Scan.setTimeRange(TimeRange tr)
-Set versions of columns only within the specified timestamp 
range,
-
-
-
-
 
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/ipc/NettyRpcClientConfigHelper.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/ipc/NettyRpcClientConfigHelper.html 
b/apidocs/org/apache/hadoop/hbase/ipc/NettyRpcClientConfigHelper.html
index dbcf1b4..39268e4 100644
--- a/apidocs/org/apache/hadoop/hbase/ipc/NettyRpcClientConfigHelper.html
+++ b/apidocs/org/apache/hadoop/hbase/ipc/NettyRpcClientConfigHelper.html
@@ -110,13 +110,17 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class NettyRpcClientConfigHelper
+public class NettyRpcClientConfigHelper
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 Helper class for passing config to 
NettyRpcClient.
  
  As hadoop Configuration can not pass an Object directly, we need to find a 
way to pass the
  EventLoopGroup to AsyncRpcClient if we want to use a single 
EventLoopGroup for
  the whole process.
+
+Since:
+2.0.0
+
 
 
 
@@ -213,7 +217,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 EVENT_LOOP_CONFIG
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String EVENT_LOOP_CONFIG
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String EVENT_LOOP_CONFIG
 
 See Also:
 Constant
 Field Values
@@ -234,7 +238,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 NettyRpcClientConfigHelper
-public NettyRpcClientConfigHelper()
+public NettyRpcClientConfigHelper()
 
 
 
@@ -251,7 +255,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 setEventLoopConfig
-public static void setEventLoopConfig(org.apache.hadoop.conf.Configuration conf,
+public static void setEventLoopConfig(org.apache.hadoop.conf.Configuration conf,
   
org.apache.hadoop.hbase.shaded.io.netty.channel.EventLoopGroup group,
   http://docs.oracle.com/javase/8/docs/api/java/lang/Class.html?is-external=true";
 title="class or interface in java.lang">Class channelClass)
 Set the EventLoopGroup and channel class for 
AsyncRpcClient.
@@ -263,7 +267,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 createEventLoopPerClient
-public static void createEventLoopPerClient(org.apache.hadoop.conf.Configuration conf)
+public static void createEventLoopPerClient(org.apache.hadoop.conf.Configuration conf)
 The AsyncRpcClient will create its own 
NioEventLoopGroup.
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/mapred/HRegionPartitioner.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/mapred/HRegionPartitioner.ht

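Returning to the NettyRpcClientConfigHelper excerpt above: setEventLoopConfig lets several async RPC clients share one event loop. The sketch below assumes the usual shaded Netty NIO classes; the package names are inferred from the shaded EventLoopGroup path in the signature, not copied from this commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ipc.NettyRpcClientConfigHelper;
import org.apache.hadoop.hbase.shaded.io.netty.channel.nio.NioEventLoopGroup;
import org.apache.hadoop.hbase.shaded.io.netty.channel.socket.nio.NioSocketChannel;

public class SharedEventLoopSketch {
  static Configuration configure() {
    Configuration conf = HBaseConfiguration.create();
    // One event loop group shared by every async RPC client created from this conf.
    NioEventLoopGroup group = new NioEventLoopGroup();
    NettyRpcClientConfigHelper.setEventLoopConfig(conf, group, NioSocketChannel.class);
    return conf;
  }
}
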
[51/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
Published site at .


Project: http://git-wip-us.apache.org/repos/asf/hbase-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase-site/commit/ebf9a8b8
Tree: http://git-wip-us.apache.org/repos/asf/hbase-site/tree/ebf9a8b8
Diff: http://git-wip-us.apache.org/repos/asf/hbase-site/diff/ebf9a8b8

Branch: refs/heads/asf-site
Commit: ebf9a8b87181f0a5e6b33147256024f471772795
Parents: e8ae197
Author: jenkins 
Authored: Sat Aug 26 15:10:07 2017 +
Committer: jenkins 
Committed: Sat Aug 26 15:10:07 2017 +

--
 acid-semantics.html | 4 +-
 apache_hbase_reference_guide.pdf| 19228 +
 apidocs/index-all.html  |35 +-
 .../apache/hadoop/hbase/client/AsyncAdmin.html  |   260 +-
 .../hadoop/hbase/client/AsyncAdminBuilder.html  |20 +-
 .../hadoop/hbase/client/AsyncConnection.html|34 +-
 .../apache/hadoop/hbase/client/AsyncTable.html  |14 +-
 .../hadoop/hbase/client/AsyncTableBase.html |72 +-
 .../hadoop/hbase/client/AsyncTableBuilder.html  |26 +-
 .../hbase/client/AsyncTableRegionLocator.html   |12 +-
 .../hbase/client/ColumnFamilyDescriptor.html|78 +-
 .../client/ColumnFamilyDescriptorBuilder.html   |   138 +-
 apidocs/org/apache/hadoop/hbase/client/Get.html |   189 +-
 .../org/apache/hadoop/hbase/client/Query.html   |   167 +-
 .../RawAsyncTable.CoprocessorCallable.html  | 4 +-
 .../RawAsyncTable.CoprocessorCallback.html  |10 +-
 .../hadoop/hbase/client/RawAsyncTable.html  |14 +-
 .../RawScanResultConsumer.ScanController.html   | 8 +-
 .../RawScanResultConsumer.ScanResumer.html  | 4 +-
 .../hbase/client/RawScanResultConsumer.html |16 +-
 .../hadoop/hbase/client/Scan.ReadType.html  | 8 +-
 .../org/apache/hadoop/hbase/client/Scan.html|   294 +-
 .../hbase/client/TableDescriptorBuilder.html|88 +-
 .../hbase/client/TableSnapshotScanner.html  | 9 +-
 .../hadoop/hbase/client/class-use/Get.html  |33 +-
 .../hadoop/hbase/client/class-use/Query.html|27 +-
 .../hadoop/hbase/client/class-use/Scan.html |61 +-
 .../hadoop/hbase/filter/BinaryComparator.html   |16 +-
 .../hadoop/hbase/io/class-use/TimeRange.html|63 +-
 .../hbase/ipc/NettyRpcClientConfigHelper.html   |14 +-
 .../hadoop/hbase/mapred/HRegionPartitioner.html | 8 +-
 .../hbase/mapred/TableInputFormatBase.html  | 6 +-
 .../hadoop/hbase/mapred/TableOutputFormat.html  | 2 +-
 .../hbase/mapreduce/HRegionPartitioner.html | 2 +-
 .../hbase/mapreduce/IdentityTableReducer.html   | 4 +-
 .../apache/hadoop/hbase/mapreduce/Import.html   | 6 +-
 .../hbase/mapreduce/KeyValueSortReducer.html| 2 +-
 .../hbase/mapreduce/MultiTableInputFormat.html  | 4 +-
 .../mapreduce/MultiTableInputFormatBase.html|16 +-
 .../hbase/mapreduce/TableInputFormatBase.html   |36 +-
 .../hbase/mapreduce/TableOutputFormat.html  | 2 +-
 .../mapreduce/TableSnapshotInputFormat.html |19 +-
 .../class-use/MultiTableInputFormatBase.html| 2 +-
 .../hadoop/hbase/mapreduce/package-summary.html | 2 +-
 .../apache/hadoop/hbase/client/AsyncAdmin.html  |  2131 +-
 .../hadoop/hbase/client/AsyncAdminBuilder.html  |   129 +-
 .../hadoop/hbase/client/AsyncConnection.html|   301 +-
 .../apache/hadoop/hbase/client/AsyncTable.html  |83 +-
 .../hadoop/hbase/client/AsyncTableBase.html |   781 +-
 .../hadoop/hbase/client/AsyncTableBuilder.html  |   161 +-
 .../hbase/client/AsyncTableRegionLocator.html   |57 +-
 .../hbase/client/ColumnFamilyDescriptor.html|   379 +-
 .../client/ColumnFamilyDescriptorBuilder.html   |  2625 +--
 .../org/apache/hadoop/hbase/client/Get.html |   995 +-
 .../org/apache/hadoop/hbase/client/Query.html   |   474 +-
 .../RawAsyncTable.CoprocessorCallable.html  |   405 +-
 .../RawAsyncTable.CoprocessorCallback.html  |   405 +-
 .../hadoop/hbase/client/RawAsyncTable.html  |   405 +-
 .../RawScanResultConsumer.ScanController.html   |   207 +-
 .../RawScanResultConsumer.ScanResumer.html  |   207 +-
 .../hbase/client/RawScanResultConsumer.html |   207 +-
 .../hadoop/hbase/client/Scan.ReadType.html  |  2196 +-
 .../org/apache/hadoop/hbase/client/Scan.html|  2196 +-
 .../hbase/client/TableDescriptorBuilder.html|  2845 +--
 .../hbase/client/TableSnapshotScanner.html  | 4 +-
 .../hadoop/hbase/filter/BinaryComparator.html   |   121 +-
 .../hbase/ipc/NettyRpcClientConfigHelper.html   |87 +-
 .../hadoop/hbase/mapred/HRegionPartitioner.html |   121 +-
 .../mapred/MultiTableSnapshotInputFormat.html   | 2 +-
 .../hbase/mapred/TableInputFormatBase.html  | 6 +-
 .../hadoop/hbase/mapred/TableOutputFormat.html  | 2 +-
 .../hadoop/hbase/mapreduce/CopyTable.html   |14 +-
 .../apache/hadoop/hbase/mapreduce/Export.html   | 8 +-
 .../hbase/mapr

[28/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
index dffd28e..a3085fd 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Scan.html
@@ -149,1109 +149,1103 @@
 141  private long maxResultSize = -1;
 142  private boolean cacheBlocks = true;
 143  private boolean reversed = false;
-144  private Map> familyMap = new 
TreeMap<>(Bytes.BYTES_COMPARATOR);
-145  private Boolean asyncPrefetch = null;
-146
-147  /**
-148   * Parameter name for client scanner 
sync/async prefetch toggle.
-149   * When using async scanner, 
prefetching data from the server is done at the background.
-150   * The parameter currently won't have 
any effect in the case that the user has set
-151   * Scan#setSmall or Scan#setReversed
-152   */
-153  public static final String 
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
-154  
"hbase.client.scanner.async.prefetch";
-155
-156  /**
-157   * Default value of {@link 
#HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}.
-158   */
-159  public static final boolean 
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;
-160
-161  /**
-162   * Set it true for small scan to get 
better performance Small scan should use pread and big scan
-163   * can use seek + read seek + read is 
fast but can cause two problem (1) resource contention (2)
-164   * cause too much network io [89-fb] 
Using pread for non-compaction read request
-165   * 
https://issues.apache.org/jira/browse/HBASE-7266 On the other hand, if setting 
it true, we
-166   * would do 
openScanner,next,closeScanner in one RPC call. It means the better performance 
for
-167   * small scan. [HBASE-9488]. Generally, 
if the scan range is within one data block(64KB), it could
-168   * be considered as a small scan.
-169   */
-170  private boolean small = false;
-171
-172  /**
-173   * The mvcc read point to use when open 
a scanner. Remember to clear it after switching regions as
-174   * the mvcc is only valid within region 
scope.
-175   */
-176  private long mvccReadPoint = -1L;
-177
-178  /**
-179   * The number of rows we want for this 
scan. We will terminate the scan if the number of return
-180   * rows reaches this value.
-181   */
-182  private int limit = -1;
-183
-184  /**
-185   * Control whether to use pread at 
server side.
-186   */
-187  private ReadType readType = 
ReadType.DEFAULT;
-188
-189  private boolean needCursorResult = 
false;
+144  private TimeRange tr = new 
TimeRange();
+145  private Map> familyMap =
+146new TreeMap>(Bytes.BYTES_COMPARATOR);
+147  private Boolean asyncPrefetch = null;
+148
+149  /**
+150   * Parameter name for client scanner 
sync/async prefetch toggle.
+151   * When using async scanner, 
prefetching data from the server is done at the background.
+152   * The parameter currently won't have 
any effect in the case that the user has set
+153   * Scan#setSmall or Scan#setReversed
+154   */
+155  public static final String 
HBASE_CLIENT_SCANNER_ASYNC_PREFETCH =
+156  
"hbase.client.scanner.async.prefetch";
+157
+158  /**
+159   * Default value of {@link 
#HBASE_CLIENT_SCANNER_ASYNC_PREFETCH}.
+160   */
+161  public static final boolean 
DEFAULT_HBASE_CLIENT_SCANNER_ASYNC_PREFETCH = false;
+162
+163  /**
+164   * Set it true for small scan to get 
better performance Small scan should use pread and big scan
+165   * can use seek + read seek + read is 
fast but can cause two problem (1) resource contention (2)
+166   * cause too much network io [89-fb] 
Using pread for non-compaction read request
+167   * 
https://issues.apache.org/jira/browse/HBASE-7266 On the other hand, if setting 
it true, we
+168   * would do 
openScanner,next,closeScanner in one RPC call. It means the better performance 
for
+169   * small scan. [HBASE-9488]. Generally, 
if the scan range is within one data block(64KB), it could
+170   * be considered as a small scan.
+171   */
+172  private boolean small = false;
+173
+174  /**
+175   * The mvcc read point to use when open 
a scanner. Remember to clear it after switching regions as
+176   * the mvcc is only valid within region 
scope.
+177   */
+178  private long mvccReadPoint = -1L;
+179
+180  /**
+181   * The number of rows we want for this 
scan. We will terminate the scan if the number of return
+182   * rows reaches this value.
+183   */
+184  private int limit = -1;
+185
+186  /**
+187   * Control whether to use pread at 
server side.
+188   */
+189  private ReadType readType = 
ReadType.DEFAULT;
 190
-191  /**
-192   * Create a Scan operation across all 
rows.
-193   */
-194  public Scan() {}
-195
-196  /**
-197   * @deprecated use {@code new 
S

[30/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/RawScanResultConsumer.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/RawScanResultConsumer.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/RawScanResultConsumer.html
index c804d26..739b628 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/RawScanResultConsumer.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/RawScanResultConsumer.html
@@ -39,109 +39,110 @@
 031 * HBase in background while you process 
the returned data, you need to move the processing work to
 032 * another thread to make the {@code 
onNext} call return immediately. And please do NOT do any time
 033 * consuming tasks in all methods below 
unless you know what you are doing.
-034 */
-035@InterfaceAudience.Public
-036public interface RawScanResultConsumer 
{
-037
-038  /**
-039   * Used to resume a scan.
-040   */
-041  @InterfaceAudience.Public
-042  interface ScanResumer {
-043
-044/**
-045 * Resume the scan. You are free to 
call it multiple time but only the first call will take
-046 * effect.
-047 */
-048void resume();
-049  }
-050
-051  /**
-052   * Used to suspend or stop a scan, or 
get a scan cursor if available.
-053   * 

-054 * Notice that, you should only call the {@link #suspend()} or {@link #terminate()} inside onNext -055 * or onHeartbeat method. A IllegalStateException will be thrown if you call them at other places. -056 *

-057 * You can only call one of the {@link #suspend()} and {@link #terminate()} methods(of course you -058 * are free to not call them both), and the methods are not reentrant. An IllegalStateException -059 * will be thrown if you have already called one of the methods. -060 */ -061 @InterfaceAudience.Public -062 interface ScanController { -063 -064/** -065 * Suspend the scan. -066 *

-067 * This means we will stop fetching data in background, i.e., will not call onNext any more -068 * before you resume the scan. -069 * @return A resumer used to resume the scan later. -070 */ -071ScanResumer suspend(); -072 -073/** -074 * Terminate the scan. -075 *

-076 * This is useful when you have got enough results and want to stop the scan in onNext method, -077 * or you want to stop the scan in onHeartbeat method because it has spent too many time. -078 */ -079void terminate(); -080 -081/** -082 * Get the scan cursor if available. -083 * @return The scan cursor. -084 */ -085Optional cursor(); -086 } -087 -088 /** -089 * Indicate that we have receive some data. -090 * @param results the data fetched from HBase service. -091 * @param controller used to suspend or terminate the scan. Notice that the {@code controller} -092 * instance is only valid within scope of onNext method. You can only call its method in -093 * onNext, do NOT store it and call it later outside onNext. -094 */ -095 void onNext(Result[] results, ScanController controller); -096 -097 /** -098 * Indicate that there is a heartbeat message but we have not cumulated enough cells to call -099 * {@link #onNext(Result[], ScanController)}. -100 *

-101 * Note that this method will always be called when RS returns something to us but we do not have -102 * enough cells to call {@link #onNext(Result[], ScanController)}. Sometimes it may not be a -103 * 'heartbeat' message for RS, for example, we have a large row with many cells and size limit is -104 * exceeded before sending all the cells for this row. For RS it does send some data to us and the -105 * time limit has not been reached, but we can not return the data to client so here we call this -106 * method to tell client we have already received something. -107 *

-108 * This method give you a chance to terminate a slow scan operation. -109 * @param controller used to suspend or terminate the scan. Notice that the {@code controller} -110 * instance is only valid within the scope of onHeartbeat method. You can only call its -111 * method in onHeartbeat, do NOT store it and call it later outside onHeartbeat. -112 */ -113 default void onHeartbeat(ScanController controller) { -114 } -115 -116 /** -117 * Indicate that we hit an unrecoverable error and the scan operation is terminated. -118 *

-119 * We will not call {@link #onComplete()} after calling {@link #onError(Throwable)}. -120 */ -121 void onError(Throwable error); -122 -123 /** -124 * Indicate that the scan operation is completed normally. -125 */ -126 void onComplete(); -127 -128 /** -129 * If {@code scan.isScanMetricsEnabled()} returns true, then this method will be called prior to -130 * all other methods in this interface to give you the {@


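The RawScanResultConsumer javadoc above spells out the callback contract: onNext, onHeartbeat, onError, onComplete, and the ScanController used to suspend or terminate a scan. A minimal sketch of a consumer that stops after roughly 100 rows; the threshold and the printouts are illustrative only.

import org.apache.hadoop.hbase.client.RawScanResultConsumer;
import org.apache.hadoop.hbase.client.Result;

public class CountingConsumer implements RawScanResultConsumer {
  private int rows;

  @Override
  public void onNext(Result[] results, ScanController controller) {
    rows += results.length;
    if (rows >= 100) {
      controller.terminate();  // enough rows; stop fetching, as described above
    }
  }

  @Override
  public void onHeartbeat(ScanController controller) {
    // The region server is still working but returned nothing usable yet; do nothing.
  }

  @Override
  public void onError(Throwable error) {
    error.printStackTrace();
  }

  @Override
  public void onComplete() {
    System.out.println("scan finished, rows=" + rows);
  }
}

Per the javadoc above, these callbacks run on the RPC framework's threads, so anything heavier than counting should be handed off to another executor.
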
[11/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html
index 1b54d76..d9bd132 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncTable.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncTable
+public interface AsyncTable
 extends AsyncTableBase
 The asynchronous table for normal users.
  
@@ -119,6 +119,10 @@ extends Since:
+2.0.0
+
 
 
 
@@ -191,7 +195,7 @@ extends 
 
 getScanner
-default ResultScanner getScanner(byte[] family)
+default ResultScanner getScanner(byte[] family)
 Gets a scanner on the current table for the given 
family.
 
 Parameters:
@@ -207,7 +211,7 @@ extends 
 
 getScanner
-default ResultScanner getScanner(byte[] family,
+default ResultScanner getScanner(byte[] family,
  byte[] qualifier)
 Gets a scanner on the current table for the given family 
and qualifier.
 
@@ -225,7 +229,7 @@ extends 
 
 getScanner
-ResultScanner getScanner(Scan scan)
+ResultScanner getScanner(Scan scan)
 Returns a scanner on the current table as specified by the 
Scan object.
 
 Parameters:
@@ -241,7 +245,7 @@ extends 
 
 scan
-void scan(Scan scan,
+void scan(Scan scan,
   ScanResultConsumer consumer)
 The scan API uses the observer pattern. All results that 
match the given scan object will be
  passed to the given consumer by calling ScanResultConsumer.onNext(Result).

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncTableBase.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncTableBase.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncTableBase.html
index b9c3905..7ebb699 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncTableBase.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncTableBase.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncTableBase
+public interface AsyncTableBase
 The base interface for asynchronous version of Table. 
Obtain an instance from a
  AsyncConnection.
  
@@ -118,6 +118,10 @@ public interface http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in 
java.util.concurrent">CompletableFuture.
+
+Since:
+2.0.0
+
 
 
 
@@ -389,7 +393,7 @@ public interface 
 
 getName
-TableName getName()
+TableName getName()
 Gets the fully qualified table name instance of this 
table.
 
 
@@ -399,7 +403,7 @@ public interface 
 
 getConfiguration
-org.apache.hadoop.conf.Configuration getConfiguration()
+org.apache.hadoop.conf.Configuration getConfiguration()
 Returns the Configuration object used by this 
instance.
  
  The reference returned is not a copy, so any change made to it will affect 
this instance.
@@ -411,7 +415,7 @@ public interface 
 
 getRpcTimeout
-long getRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
+long getRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Get timeout of each rpc request in this Table instance. It 
will be overridden by a more
  specific rpc timeout config such as readRpcTimeout or writeRpcTimeout.
 
@@ -427,7 +431,7 @@ public interface 
 
 getReadRpcTimeout
-long getReadRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
+long getReadRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Get timeout of each rpc read request in this Table 
instance.
 
 
@@ -437,7 +441,7 @@ public interface 
 
 getWriteRpcTimeout
-long getWriteRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
+long getWriteRpcTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Get timeout of each rpc write request in this Table 
instance.
 
 
@@ -447,7 +451,7 @@ public interface 
 
 getOperationTimeout
-long getOperationTimeout(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title

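The AsyncTable methods shown above offer both a blocking-per-batch getScanner and a push-style scan(Scan, ScanResultConsumer). A hedged sketch of the simpler scanner form; the family name is an assumption.

import java.io.IOException;
import org.apache.hadoop.hbase.client.AsyncTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScannerSketch {
  static long countRows(AsyncTable table) throws IOException {
    long rows = 0;
    // getScanner(Scan) hides the async machinery behind a plain ResultScanner.
    try (ResultScanner scanner = table.getScanner(new Scan().addFamily(Bytes.toBytes("d")))) {
      for (Result ignored : scanner) {
        rows++;
      }
    }
    return rows;
  }
}
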
[37/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBase.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBase.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBase.html
index 8a2ddbd..ecdbfdb 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBase.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBase.html
@@ -49,396 +49,397 @@
 041 * 

042 * Usually the implementation will not throw any exception directly. You need to get the exception 043 * from the returned {@link CompletableFuture}. -044 */ -045@InterfaceAudience.Public -046public interface AsyncTableBase { -047 -048 /** -049 * Gets the fully qualified table name instance of this table. -050 */ -051 TableName getName(); -052 -053 /** -054 * Returns the {@link org.apache.hadoop.conf.Configuration} object used by this instance. -055 *

-056 * The reference returned is not a copy, so any change made to it will affect this instance. -057 */ -058 Configuration getConfiguration(); -059 -060 /** -061 * Get timeout of each rpc request in this Table instance. It will be overridden by a more -062 * specific rpc timeout config such as readRpcTimeout or writeRpcTimeout. -063 * @see #getReadRpcTimeout(TimeUnit) -064 * @see #getWriteRpcTimeout(TimeUnit) -065 */ -066 long getRpcTimeout(TimeUnit unit); -067 -068 /** -069 * Get timeout of each rpc read request in this Table instance. -070 */ -071 long getReadRpcTimeout(TimeUnit unit); -072 -073 /** -074 * Get timeout of each rpc write request in this Table instance. -075 */ -076 long getWriteRpcTimeout(TimeUnit unit); -077 -078 /** -079 * Get timeout of each operation in Table instance. -080 */ -081 long getOperationTimeout(TimeUnit unit); -082 -083 /** -084 * Get the timeout of a single operation in a scan. It works like operation timeout for other -085 * operations. -086 */ -087 long getScanTimeout(TimeUnit unit); -088 -089 /** -090 * Test for the existence of columns in the table, as specified by the Get. -091 *

-092 * This will return true if the Get matches one or more keys, false if not. -093 *

-094 * This is a server-side call so it prevents any data from being transfered to the client. -095 * @return true if the specified Get matches one or more keys, false if not. The return value will -096 * be wrapped by a {@link CompletableFuture}. -097 */ -098 default CompletableFuture exists(Get get) { -099return get(toCheckExistenceOnly(get)).thenApply(r -> r.getExists()); -100 } -101 -102 /** -103 * Extracts certain cells from a given row. -104 * @param get The object that specifies what data to fetch and from which row. -105 * @return The data coming from the specified row, if it exists. If the row specified doesn't -106 * exist, the {@link Result} instance returned won't contain any -107 * {@link org.apache.hadoop.hbase.KeyValue}, as indicated by {@link Result#isEmpty()}. The -108 * return value will be wrapped by a {@link CompletableFuture}. -109 */ -110 CompletableFuture get(Get get); -111 -112 /** -113 * Puts some data to the table. -114 * @param put The data to put. -115 * @return A {@link CompletableFuture} that always returns null when complete normally. -116 */ -117 CompletableFuture put(Put put); -118 -119 /** -120 * Deletes the specified cells/row. -121 * @param delete The object that specifies what to delete. -122 * @return A {@link CompletableFuture} that always returns null when complete normally. -123 */ -124 CompletableFuture delete(Delete delete); -125 -126 /** -127 * Appends values to one or more columns within a single row. -128 *

-129 * This operation does not appear atomic to readers. Appends are done under a single row lock, so -130 * write operations to a row are synchronized, but readers do not take row locks so get and scan -131 * operations can see this operation partially completed. -132 * @param append object that specifies the columns and amounts to be used for the increment -133 * operations -134 * @return values of columns after the append operation (maybe null). The return value will be -135 * wrapped by a {@link CompletableFuture}. -136 */ -137 CompletableFuture append(Append append); -138 -139 /** -140 * Increments one or more columns within a single row. -141 *

-142 * This operation does not appear atomic to readers. Increments are done under a single row lock, -143 * so write operations to a row are synchronized, but readers do not take row locks so get and -144 * scan operations can see this operation partially completed. -145 * @param increment object that specif


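The AsyncTableBase excerpt above returns a CompletableFuture from every call, so a write followed by a read is expressed by chaining rather than blocking. A sketch with invented row, family and qualifier names.

import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.hbase.client.AsyncTableBase;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncPutGetSketch {
  static CompletableFuture<String> writeThenRead(AsyncTableBase table) {
    byte[] row = Bytes.toBytes("row-1");       // hypothetical row key
    byte[] family = Bytes.toBytes("d");        // hypothetical column family
    byte[] qualifier = Bytes.toBytes("q");
    Put put = new Put(row).addColumn(family, qualifier, Bytes.toBytes("v"));
    // put() completes with null; chain the get so the read starts only after the write finishes.
    return table.put(put)
        .thenCompose(ignored -> table.get(new Get(row)))
        .thenApply(result -> Bytes.toString(result.getValue(family, qualifier)));
  }
}
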
[32/51] [partial] hbase-site git commit: Published site at .

2017-08-26 Thread git-site-role
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
index da72c6e..ab96dc9 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
@@ -52,208 +52,209 @@
 044 * method. The {@link 
RawScanResultConsumer} exposes the implementation details of a 
scan(heartbeat)
 045 * so it is not suitable for a normal 
user. If it is still the only difference after we implement
 046 * most features of AsyncTable, we can 
think about merge these two interfaces.
-047 */
-048@InterfaceAudience.Public
-049public interface RawAsyncTable extends 
AsyncTableBase {
-050
-051  /**
-052   * The basic scan API uses the observer 
pattern. All results that match the given scan object will
-053   * be passed to the given {@code 
consumer} by calling {@code RawScanResultConsumer.onNext}.
-054   * {@code 
RawScanResultConsumer.onComplete} means the scan is finished, and
-055   * {@code 
RawScanResultConsumer.onError} means we hit an unrecoverable error and the scan 
is
-056   * terminated. {@code 
RawScanResultConsumer.onHeartbeat} means the RS is still working but we can
-057   * not get a valid result to call 
{@code RawScanResultConsumer.onNext}. This is usually because
-058   * the matched results are too sparse, 
for example, a filter which almost filters out everything
-059   * is specified.
-060   * 

-061   * Notice that, the methods of the given {@code consumer} will be called directly in the rpc
-062   * framework's callback thread, so typically you should not do any time consuming work inside
-063   * these methods, otherwise you will be likely to block at least one connection to RS(even more if
-064   * the rpc framework uses NIO).
-065   * @param scan A configured {@link Scan} object.
-066   * @param consumer the consumer used to receive results.
-067   */
-068  void scan(Scan scan, RawScanResultConsumer consumer);
-069
-070  /**
-071   * Delegate to a protobuf rpc call.
-072   *
-073   * Usually, it is just a simple lambda expression, like:
-074   *
-075   *

-076   * 
-077   * (stub, controller, rpcCallback) 
-> {
-078   *   XXXRequest request = ...; // 
prepare the request
-079   *   stub.xxx(controller, request, 
rpcCallback);
-080   * }
-081   * 
-082   * 
-083   *
-084   * And if you can prepare the {@code request} before calling the coprocessorService method, the
-085   * lambda expression will be:
-086   *
-087   *
-088   * 
-089   * (stub, controller, rpcCallback) 
-> stub.xxx(controller, request, rpcCallback)
-090   * 
-091   * 
-092   */
-093  @InterfaceAudience.Public
-094  @FunctionalInterface
-095  interface CoprocessorCallable {
-096
-097    /**
-098     * Represent the actual protobuf rpc call.
-099     * @param stub the asynchronous stub
-100     * @param controller the rpc controller, has already been prepared for you
-101     * @param rpcCallback the rpc callback, has already been prepared for you
-102     */
-103    void call(S stub, RpcController controller, RpcCallback rpcCallback);
-104  }
-105
-106  /**
-107   * Execute the given coprocessor call on the region which contains the given {@code row}.
-108   *
-109   * The {@code stubMaker} is just a delegation to the {@code newStub} call. Usually it is only a
-110   * one line lambda expression, like:
-111   *
-112   *

-113   * 
-114   * channel -> 
xxxService.newStub(channel)
-115   * 
-116   * 
-117   *
-118   * @param stubMaker a delegation to the actual {@code newStub} call.
-119   * @param callable a delegation to the actual protobuf rpc call. See the comment of
-120   *          {@link CoprocessorCallable} for more details.
-121   * @param row The row key used to identify the remote region location
-122   * @param the type of the asynchronous stub
-123   * @param the type of the return value
-124   * @return the return value of the protobuf rpc call, wrapped by a {@link CompletableFuture}.
-125   * @see CoprocessorCallable
-126   */
-127  CompletableFuture coprocessorService(Function stubMaker,
-128      CoprocessorCallable callable, byte[] row);
-129
-130  /**
-131   * The callback when we want to execute a coprocessor call on a range of regions.
-132   *
-133   * As the locating itself also takes some time, the implementation may want to send rpc calls on
-134   * the fly, which means we do not know how many regions we have when we get the return value of
-135   * the rpc calls, so we need an {@l
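The coprocessorService contract above boils down to two lambdas: one that wraps the generated newStub call and one that performs the actual protobuf rpc. A sketch against a hypothetical generated service (XxxService, XxxRequest and XxxResponse are placeholders, not real HBase classes):

    import java.util.concurrent.CompletableFuture;
    import org.apache.hadoop.hbase.client.RawAsyncTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CoprocessorCallSketch {
      // XxxService/XxxRequest/XxxResponse stand in for a user-generated protobuf service.
      static CompletableFuture<XxxResponse> callOnOneRegion(RawAsyncTable table) {
        XxxRequest request = XxxRequest.newBuilder().build();  // prepare the request up front
        return table.coprocessorService(
            channel -> XxxService.newStub(channel),             // stubMaker: just delegates to newStub
            (stub, controller, rpcCallback) ->
                stub.xxx(controller, request, rpcCallback),      // the protobuf rpc call itself
            Bytes.toBytes("row1"));                              // row key used to locate the region
      }
    }
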


[27/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
index e64f371..0a963a2 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
@@ -55,1433 +55,1436 @@
 047import 
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
 048import 
org.apache.hadoop.hbase.util.Bytes;
 049
-050@InterfaceAudience.Public
-051public class TableDescriptorBuilder {
-052  public static final Log LOG = 
LogFactory.getLog(TableDescriptorBuilder.class);
-053  @InterfaceAudience.Private
-054  public static final String SPLIT_POLICY 
= "SPLIT_POLICY";
-055  private static final Bytes 
SPLIT_POLICY_KEY = new Bytes(Bytes.toBytes(SPLIT_POLICY));
-056  /**
-057   * Used by HBase Shell interface to 
access this metadata
-058   * attribute which denotes the maximum 
size of the store file after which a
-059   * region split occurs.
-060   */
-061  @InterfaceAudience.Private
-062  public static final String MAX_FILESIZE 
= "MAX_FILESIZE";
-063  private static final Bytes 
MAX_FILESIZE_KEY
-064  = new 
Bytes(Bytes.toBytes(MAX_FILESIZE));
-065
-066  @InterfaceAudience.Private
-067  public static final String OWNER = 
"OWNER";
-068  @InterfaceAudience.Private
-069  public static final Bytes OWNER_KEY
-070  = new 
Bytes(Bytes.toBytes(OWNER));
-071
-072  /**
-073   * Used by rest interface to access 
this metadata attribute
-074   * which denotes if the table is Read 
Only.
-075   */
-076  @InterfaceAudience.Private
-077  public static final String READONLY = 
"READONLY";
-078  private static final Bytes 
READONLY_KEY
-079  = new 
Bytes(Bytes.toBytes(READONLY));
-080
-081  /**
-082   * Used by HBase Shell interface to 
access this metadata
-083   * attribute which denotes if the table 
is compaction enabled.
-084   */
-085  @InterfaceAudience.Private
-086  public static final String 
COMPACTION_ENABLED = "COMPACTION_ENABLED";
-087  private static final Bytes 
COMPACTION_ENABLED_KEY
-088  = new 
Bytes(Bytes.toBytes(COMPACTION_ENABLED));
-089
-090  /**
-091   * Used by HBase Shell interface to 
access this metadata
-092   * attribute which represents the 
maximum size of the memstore after which its
-093   * contents are flushed onto the 
disk.
-094   */
-095  @InterfaceAudience.Private
-096  public static final String 
MEMSTORE_FLUSHSIZE = "MEMSTORE_FLUSHSIZE";
-097  private static final Bytes 
MEMSTORE_FLUSHSIZE_KEY
-098  = new 
Bytes(Bytes.toBytes(MEMSTORE_FLUSHSIZE));
-099
-100  @InterfaceAudience.Private
-101  public static final String FLUSH_POLICY 
= "FLUSH_POLICY";
-102  private static final Bytes 
FLUSH_POLICY_KEY = new Bytes(Bytes.toBytes(FLUSH_POLICY));
-103  /**
-104   * Used by rest interface to access 
this metadata attribute
-105   * which denotes if it is a catalog 
table, either  hbase:meta .
-106   */
-107  @InterfaceAudience.Private
-108  public static final String IS_META = 
"IS_META";
-109  private static final Bytes 
IS_META_KEY
-110  = new 
Bytes(Bytes.toBytes(IS_META));
-111
-112  /**
-113   * {@link Durability} setting for the 
table.
-114   */
-115  @InterfaceAudience.Private
-116  public static final String DURABILITY = 
"DURABILITY";
-117  private static final Bytes 
DURABILITY_KEY
-118  = new 
Bytes(Bytes.toBytes("DURABILITY"));
-119
-120  /**
-121   * The number of region replicas for 
the table.
-122   */
-123  @InterfaceAudience.Private
-124  public static final String 
REGION_REPLICATION = "REGION_REPLICATION";
-125  private static final Bytes 
REGION_REPLICATION_KEY
-126  = new 
Bytes(Bytes.toBytes(REGION_REPLICATION));
-127
-128  /**
-129   * The flag to indicate whether or not 
the memstore should be
-130   * replicated for read-replicas 
(CONSISTENCY => TIMELINE).
-131   */
-132  @InterfaceAudience.Private
-133  public static final String 
REGION_MEMSTORE_REPLICATION = "REGION_MEMSTORE_REPLICATION";
-134  private static final Bytes 
REGION_MEMSTORE_REPLICATION_KEY
-135  = new 
Bytes(Bytes.toBytes(REGION_MEMSTORE_REPLICATION));
-136
-137  /**
-138   * Used by shell/rest interface to 
access this metadata
-139   * attribute which denotes if the table 
should be treated by region
-140   * normalizer.
-141   */
-142  @InterfaceAudience.Private
-143  public static final String 
NORMALIZATION_ENABLED = "NORMALIZATION_ENABLED";
-144  private static final Bytes 
NORMALIZATION_ENABLED_KEY
-145  = new 
Bytes(Bytes.toBytes(NORMALIZATION_ENABLED));
-146
-147  /**
-148   * Default durability for HTD is 
USE_DEFAULT, which defaults to HBase-global
-149   * default value
-150   */
-151 

[09/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.html
 
b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.html
index cb711e8..4f6913c 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.html
@@ -118,7 +118,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public static class ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor
+public static class ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements ColumnFamilyDescriptor, http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true";
 title="class or interface in java.lang">Comparable
 An ModifyableFamilyDescriptor contains information about a 
column family such as the
@@ -627,7 +627,7 @@ implements 
 
 name
-private final byte[] name
+private final byte[] name
 
 
 
@@ -636,7 +636,7 @@ implements 
 
 values
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map values
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map values
 
 
 
@@ -645,7 +645,7 @@ implements 
 
 configuration
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String> configuration
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String> configuration
 A map which holds the configuration specific to the column 
family. The
  keys of the map have the same names as config keys and override the
  defaults with cf-specific settings. Example usage may be for compactions,
@@ -667,7 +667,7 @@ implements 
 ModifyableColumnFamilyDescriptor
 @InterfaceAudience.Private
-public ModifyableColumnFamilyDescriptor(byte[] name)
+public ModifyableColumnFamilyDescriptor(byte[] name)
 Construct a column descriptor specifying only the family 
name The other
  attributes are defaulted.
 
@@ -685,7 +685,7 @@ public 
 ModifyableColumnFamilyDescriptor
 @InterfaceAudience.Private
-public ModifyableColumnFamilyDescriptor(ColumnFamilyDescriptor desc)
+public ModifyableColumnFamilyDescriptor(ColumnFamilyDescriptor desc)
 Constructor. Makes a deep copy of the supplied descriptor.
  TODO: make this private after the HCD is removed.
 
@@ -700,7 +700,7 @@ public 
 
 ModifyableColumnFamilyDescriptor
-private ModifyableColumnFamilyDescriptor(byte[] name,
+private ModifyableColumnFamilyDescriptor(byte[] name,
  http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map values,
  http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String> config)
 
@@ -719,7 +719,7 @@ public 
 
 getName
-public byte[] getName()
+public byte[] getName()
 
 Specified by:
 getName in
 interface ColumnFamilyDescriptor
@@ -734,7 +734,7 @@ public 
 
 getNameAsString
-public http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getNameAsString()
+public http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getNameAsString()
 
 Speci
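The ModifyableColumnFamilyDescriptor shown above is @InterfaceAudience.Private; client code is expected to go through the public builder instead. A rough sketch, assuming the builder's newBuilder(byte[]) and build() entry points (this hunk itself only shows the private constructors and the getName accessors):

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FamilySketch {
      static ColumnFamilyDescriptor infoFamily() {
        // newBuilder(byte[]) and build() are assumptions here; the diff above only shows the
        // ModifyableColumnFamilyDescriptor constructors they eventually call.
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info")).build();
      }

      static String nameOf(ColumnFamilyDescriptor family) {
        return family.getNameAsString();  // accessor shown in the hunk above
      }
    }
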

[49/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/index-all.html
--
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index 8ac6bda..68e45df 100644
--- a/apidocs/index-all.html
+++ b/apidocs/index-all.html
@@ -7677,11 +7677,15 @@
 
 getTagsOffset()
 - Method in interface org.apache.hadoop.hbase.Cell
  
+getTimeRange()
 - Method in class org.apache.hadoop.hbase.client.Get
+
+Method for retrieving the get's TimeRange
+
 getTimeRange()
 - Method in class org.apache.hadoop.hbase.client.Increment
 
 Gets the TimeRange used for this increment.
 
-getTimeRange()
 - Method in class org.apache.hadoop.hbase.client.Query
+getTimeRange()
 - Method in class org.apache.hadoop.hbase.client.Scan
  
 getTimestamp()
 - Method in interface org.apache.hadoop.hbase.Cell
  
@@ -11066,7 +11070,7 @@
  
 MultiTableInputFormat - Class in org.apache.hadoop.hbase.mapreduce
 
-Convert HBase tabular data from multiple scanners into a 
format that 
+Convert HBase tabular data from multiple scanners into a 
format that
  is consumable by Map/Reduce.
 
 MultiTableInputFormat()
 - Constructor for class org.apache.hadoop.hbase.mapreduce.MultiTableInputFormat
@@ -14434,19 +14438,13 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
  
 setColumnFamilyTimeRange(byte[],
 long, long) - Method in class org.apache.hadoop.hbase.client.Get
  
-setColumnFamilyTimeRange(byte[],
 TimeRange) - Method in class org.apache.hadoop.hbase.client.Get
- 
 setColumnFamilyTimeRange(byte[],
 long, long) - Method in class org.apache.hadoop.hbase.client.Query
 
 Get versions of columns only within the specified timestamp 
range,
  [minStamp, maxStamp) on a per CF bases.
 
-setColumnFamilyTimeRange(byte[],
 TimeRange) - Method in class org.apache.hadoop.hbase.client.Query
- 
 setColumnFamilyTimeRange(byte[],
 long, long) - Method in class org.apache.hadoop.hbase.client.Scan
  
-setColumnFamilyTimeRange(byte[],
 TimeRange) - Method in class org.apache.hadoop.hbase.client.Scan
- 
 setCompactionCompressionType(Compression.Algorithm)
 - Method in class org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
  
 setCompactionCompressionType(Compression.Algorithm)
 - Method in class org.apache.hadoop.hbase.HColumnDescriptor
@@ -15326,32 +15324,15 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 Get versions of columns only within the specified timestamp 
range,
  [minStamp, maxStamp).
 
-setTimeRange(TimeRange)
 - Method in class org.apache.hadoop.hbase.client.Get
-
-Get versions of columns only within the specified timestamp 
range,
-
 setTimeRange(long,
 long) - Method in class org.apache.hadoop.hbase.client.Increment
 
 Sets the TimeRange to be used on the Get for this 
increment.
 
-setTimeRange(TimeRange)
 - Method in class org.apache.hadoop.hbase.client.Query
-
-Sets the TimeRange to be used by this Query
-
-setTimeRange(long,
 long) - Method in class org.apache.hadoop.hbase.client.Query
-
-Sets the TimeRange to be used by this Query
- [minStamp, maxStamp).
-
 setTimeRange(long,
 long) - Method in class org.apache.hadoop.hbase.client.Scan
 
-Set versions of columns only within the specified timestamp 
range,
+Get versions of columns only within the specified timestamp 
range,
  [minStamp, maxStamp).
 
-setTimeRange(TimeRange)
 - Method in class org.apache.hadoop.hbase.client.Scan
-
-Set versions of columns only within the specified timestamp 
range,
-
 setTimestamp(Cell,
 long) - Static method in class org.apache.hadoop.hbase.CellUtil
 
 Sets the given timestamp to the cell.
@@ -17198,8 +17179,6 @@ Input/OutputFormats, a table indexing MapReduce job, 
and utility methods.
 
 Retrieve the Struct represented by 
this.
 
-tr - Variable in 
class org.apache.hadoop.hbase.client.Query
- 
 transformCell(Cell)
 - Method in class org.apache.hadoop.hbase.filter.Filter
 
 Give the filter a chance to transform the passed 
KeyValue.
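Taken together, the index entries above say that getTimeRange() now lives on Get and that the TimeRange-taking setters were dropped, leaving the (minStamp, maxStamp) overloads. A small sketch of the surviving Get calls (the timestamps are illustrative):

    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.io.TimeRange;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetTimeRangeSketch {
      static Get lastHourOnly(byte[] row, long now) throws java.io.IOException {
        Get get = new Get(row);
        get.setTimeRange(now - 3600_000L, now);                              // [minStamp, maxStamp)
        get.setColumnFamilyTimeRange(Bytes.toBytes("cf"), now - 60_000L, now); // per-CF override
        TimeRange tr = get.getTimeRange();                                   // the accessor moved onto Get
        assert tr.getMin() == now - 3600_000L;
        return get;
      }
    }
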



[43/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/Query.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Query.html 
b/apidocs/org/apache/hadoop/hbase/client/Query.html
index a1cfe33..075850e 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Query.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Query.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -128,7 +128,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public abstract class Query
+public abstract class Query
 extends OperationWithAttributes
 
 
@@ -168,10 +168,6 @@ extends protected int
 targetReplicaId 
 
-
-protected TimeRange
-tr 
-
 
 
 
@@ -256,25 +252,21 @@ extends 
-TimeRange
-getTimeRange() 
-
-
 Query
 setACL(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,org.apache.hadoop.hbase.security.access.Permission> perms) 
 
-
+
 Query
 setACL(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String user,
   
org.apache.hadoop.hbase.security.access.Permission perms) 
 
-
+
 Query
 setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations)
 Sets the authorizations to be used by this Query
 
 
-
+
 Query
 setColumnFamilyTimeRange(byte[] cf,
 long minStamp,
@@ -283,56 +275,37 @@ extends 
-Query
-setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
-
+
 Query
 setConsistency(Consistency consistency)
 Sets the consistency level for this operation
 
 
-
+
 Query
 setFilter(Filter filter)
 Apply the specified server-side filter when performing the 
Query.
 
 
-
+
 Query
 setIsolationLevel(IsolationLevel level)
 Set the isolation level for this query.
 
 
-
+
 Query
 setLoadColumnFamiliesOnDemand(boolean value)
 Set the value indicating whether loading CFs on demand 
should be allowed (cluster
  default is false).
 
 
-
+
 Query
 setReplicaId(int Id)
 Specify region replica id where Query will fetch data 
from.
 
 
-
-Query
-setTimeRange(long minStamp,
-long maxStamp)
-Sets the TimeRange to be used by this Query
- [minStamp, maxStamp).
-
-
-
-Query
-setTimeRange(TimeRange tr)
-Sets the TimeRange to be used by this Query
-
-
 
 
 
@@ -375,7 +348,7 @@ extends 
 
 filter
-protected Filter filter
+protected Filter filter
 
 
 
@@ -384,7 +357,7 @@ extends 
 
 targetReplicaId
-protected int targetReplicaId
+protected int targetReplicaId
 
 
 
@@ -393,7 +366,7 @@ extends 
 
 consistency
-protected Consistency consistency
+protected Consistency consistency
 
 
 
@@ -402,25 +375,16 @@ extends 
 
 colFamTimeRangeMap
-protected http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map colFamTimeRangeMap
+protected http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map colFamTimeRangeMap
 
 
 
 
 
-
-
-loadColumnFamiliesOnDemand
-protected http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true";
 title="class or interface in java.lang">Boolean loadColumnFamiliesOnDemand
-
-
-
-
-
 
 
-tr
-protected TimeRange tr
+loadColumnFamiliesOnDemand
+protected http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true";
 title="class or interface in java.lang">Boolean loadColumnFamiliesOnDemand
 
 
 
@@ -437,7 +401,7 @@ extends 
 
 Query
-public Query()
+public Query()
 
 
 
@@ -454,7 +418,7 @@ extends 
 
 getFilter
-public Filter getFilter()
+public Filter getFilter()
 
 Returns:
 Filter
@@ -467,7 +431,7 @@ extends 
 
 setFilter
-public Query setFilter(Filter filter)
+public Query setFilter(Filter filter)
 Apply the specified server-side filter when performing the 
Query. Only
  Filter.filterKeyValue(org.apache.hadoop.hbase.Cell)
 is called AFTER all tests for ttl,
  column match, deletes and column family's max versions have been run.
@@ -479,64 +443,13 @@ extends 
-
-
-
-
-getTimeRange
-public TimeRange getTimeRange()
-
-Returns:
-TimeRange
-
-
-
-
-
-
-
-
-setTimeRange
-public Query setTimeRange(Ti
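With Query.setTimeRange gone (the hunk above deletes it along with the protected tr field), the time-range state now lives on the concrete operations. On a Scan the surviving calls look roughly like this sketch:

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanTimeRangeSketch {
      static Scan recentRows(long now) throws java.io.IOException {
        Scan scan = new Scan();
        scan.setTimeRange(now - 86_400_000L, now);                               // [minStamp, maxStamp)
        scan.setColumnFamilyTimeRange(Bytes.toBytes("cf"), now - 3_600_000L, now); // per-CF narrowing
        return scan;
      }
    }
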

[03/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
index ec05fb6..05b51df 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallable.html
@@ -111,7 +111,7 @@ var activeTableTab = "activeTableTab";
 
 @InterfaceAudience.Public
  http://docs.oracle.com/javase/8/docs/api/java/lang/FunctionalInterface.html?is-external=true";
 title="class or interface in java.lang">@FunctionalInterface
-public static interface RawAsyncTable.CoprocessorCallable
+public static interface RawAsyncTable.CoprocessorCallable
 Delegate to a protobuf rpc call.
  
  Usually, it is just a simple lambda expression, like:
@@ -182,7 +182,7 @@ public static interface 
 
 call
-void call(S stub,
+void call(S stub,
   com.google.protobuf.RpcController controller,
   com.google.protobuf.RpcCallback rpcCallback)
 Represent the actual protobuf rpc call.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallback.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallback.html
 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallback.html
index 73d2f0d..9bc108c 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallback.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.CoprocessorCallback.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public static interface RawAsyncTable.CoprocessorCallback
+public static interface RawAsyncTable.CoprocessorCallback
 The callback when we want to execute a coprocessor call on 
a range of regions.
  
  As the locating itself also takes some time, the implementation may want to 
send rpc calls on
@@ -218,7 +218,7 @@ public static interface 
 
 onRegionComplete
-void onRegionComplete(HRegionInfo region,
+void onRegionComplete(HRegionInfo region,
   R resp)
 
 Parameters:
@@ -233,7 +233,7 @@ public static interface 
 
 onRegionError
-void onRegionError(HRegionInfo region,
+void onRegionError(HRegionInfo region,
http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
 
 Parameters:
@@ -248,7 +248,7 @@ public static interface 
 
 onComplete
-void onComplete()
+void onComplete()
 Indicate that all responses of the regions have been 
notified by calling
  onRegionComplete(HRegionInfo,
 Object) or
  onRegionError(HRegionInfo,
 Throwable).
@@ -260,7 +260,7 @@ public static interface 
 
 onError
-void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
+void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
 Indicate that we got an error which does not belong to any 
regions. Usually a locating error.
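For the multi-region variant, the four callback methods above are all a client has to supply; a bare-bones sketch that just counts regions (the coprocessorService overload that accepts this callback plus a row range is truncated out of this page):

    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.client.RawAsyncTable;

    public class CallbackSketch<R> implements RawAsyncTable.CoprocessorCallback<R> {
      private final AtomicInteger regionsDone = new AtomicInteger();

      @Override
      public void onRegionComplete(HRegionInfo region, R resp) {
        regionsDone.incrementAndGet();   // one region answered successfully
      }

      @Override
      public void onRegionError(HRegionInfo region, Throwable error) {
        // a single region failed; other regions may still report in
      }

      @Override
      public void onComplete() {
        // every region has reported via onRegionComplete or onRegionError
      }

      @Override
      public void onError(Throwable error) {
        // an error not tied to any region, usually a locating failure
      }
    }
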
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.html 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.html
index ec7f66d..82aa76a 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncTable.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface RawAsyncTable
+public interface RawAsyncTable
 extends AsyncTableBase
 A low level asynchronous table.
  
@@ -128,6 +128,10 @@ extends RawScanResultConsumer exposes 
the implementation details of a scan(heartbeat)
  so it is not suitable for a normal user. If it is still the only difference 
after we implement
  most features of AsyncTable, we can think about merge these two 
interfaces.
+
+Since:
+2.0.0
+
 
 
 
@@ -240,7 +244,7 @@ extends 
 
 scan
-void scan(Scan scan,
+void scan(Scan scan,
   RawScanResultConsumer consumer)
 The basic scan API uses the observer pattern. All results 
that match the given scan object will
  be passed to the given consumer by calling 
RawScanResultConsumer.onNext.
@@ -268,7 +272,7 @@ extends 
 
 coprocessorService
- http://docs.oracle.com/javase/8/docs/api/java/util/concurre

[25/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
index 695b018..1c344df 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.html
@@ -45,264 +45,263 @@
 037import 
org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 038import 
org.apache.hadoop.hbase.util.Bytes;
 039import 
org.apache.hadoop.hbase.util.Pair;
-040import 
org.apache.hadoop.hbase.util.RegionSizeCalculator;
-041import 
org.apache.hadoop.mapreduce.InputFormat;
-042import 
org.apache.hadoop.mapreduce.InputSplit;
-043import 
org.apache.hadoop.mapreduce.JobContext;
-044import 
org.apache.hadoop.mapreduce.RecordReader;
-045import 
org.apache.hadoop.mapreduce.TaskAttemptContext;
-046
-047import java.util.Map;
-048import java.util.HashMap;
-049import java.util.Iterator;
-050/**
-051 * A base for {@link 
MultiTableInputFormat}s. Receives a list of
-052 * {@link Scan} instances that define the 
input tables and
-053 * filters etc. Subclasses may use other 
TableRecordReader implementations.
-054 */
-055@InterfaceAudience.Public
-056public abstract class 
MultiTableInputFormatBase extends
-057
InputFormat {
-058
-059  private static final Log LOG = 
LogFactory.getLog(MultiTableInputFormatBase.class);
-060
-061  /** Holds the set of scans used to 
define the input. */
-062  private List scans;
-063
-064  /** The reader scanning the table, can 
be a custom one. */
-065  private TableRecordReader 
tableRecordReader = null;
-066
-067  /**
-068   * Builds a TableRecordReader. If no 
TableRecordReader was provided, uses the
-069   * default.
-070   *
-071   * @param split The split to work 
with.
-072   * @param context The current 
context.
-073   * @return The newly created record 
reader.
-074   * @throws IOException When creating 
the reader fails.
-075   * @throws InterruptedException when 
record reader initialization fails
-076   * @see 
org.apache.hadoop.mapreduce.InputFormat#createRecordReader(
-077   *  
org.apache.hadoop.mapreduce.InputSplit,
-078   *  
org.apache.hadoop.mapreduce.TaskAttemptContext)
-079   */
-080  @Override
-081  public 
RecordReader createRecordReader(
-082  InputSplit split, 
TaskAttemptContext context)
-083  throws IOException, 
InterruptedException {
-084TableSplit tSplit = (TableSplit) 
split;
-085LOG.info(MessageFormat.format("Input 
split length: {0} bytes.", tSplit.getLength()));
-086
-087if (tSplit.getTable() == null) {
-088  throw new IOException("Cannot 
create a record reader because of a"
-089  + " previous error. Please look 
at the previous logs lines from"
-090  + " the task's full log for 
more details.");
-091}
-092final Connection connection = 
ConnectionFactory.createConnection(context.getConfiguration());
-093Table table = 
connection.getTable(tSplit.getTable());
-094
-095if (this.tableRecordReader == null) 
{
-096  this.tableRecordReader = new 
TableRecordReader();
-097}
-098final TableRecordReader trr = 
this.tableRecordReader;
-099
-100try {
-101  Scan sc = tSplit.getScan();
-102  
sc.setStartRow(tSplit.getStartRow());
-103  
sc.setStopRow(tSplit.getEndRow());
-104  trr.setScan(sc);
-105  trr.setTable(table);
-106  return new 
RecordReader() {
-107
-108@Override
-109public void close() throws 
IOException {
-110  trr.close();
-111  connection.close();
-112}
-113
-114@Override
-115public ImmutableBytesWritable 
getCurrentKey() throws IOException, InterruptedException {
-116  return trr.getCurrentKey();
-117}
-118
-119@Override
-120public Result getCurrentValue() 
throws IOException, InterruptedException {
-121  return trr.getCurrentValue();
-122}
-123
-124@Override
-125public float getProgress() throws 
IOException, InterruptedException {
-126  return trr.getProgress();
-127}
-128
-129@Override
-130public void initialize(InputSplit 
inputsplit, TaskAttemptContext context)
-131throws IOException, 
InterruptedException {
-132  trr.initialize(inputsplit, 
context);
-133}
-134
-135@Override
-136public boolean nextKeyValue() 
throws IOException, InterruptedException {
-137  return trr.nextKeyValue();
-138}
-139  };
-140} catch (IOException ioe) {
-141  // If there is an exception make 
sure that all
-142  // resour
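The base class above turns each TableSplit back into a TableRecordReader at runtime; on the job-submission side the usual pattern is to hand TableMapReduceUtil a list of Scans, each tagged with the table it should read. A sketch (the method names reflect the long-standing mapreduce API and are assumed unchanged by this publish; MyMapper is a placeholder TableMapper):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    public class MultiTableJobSketch {
      static void configure(Job job) throws java.io.IOException {
        List<Scan> scans = new ArrayList<>();
        for (String table : new String[] { "table_a", "table_b" }) {
          Scan scan = new Scan();
          // each Scan carries the name of the table it targets
          scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(table));
          scans.add(scan);
        }
        // MyMapper is a placeholder TableMapper subclass supplied by the job author
        TableMapReduceUtil.initTableMapperJob(scans, MyMapper.class, Text.class, Text.class, job);
      }
    }
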

[36/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBuilder.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
index a6b0124..5524002 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
@@ -37,86 +37,87 @@
 029 * The implementation should have default 
configurations set before returning the builder to user.
 030 * So users are free to only set the 
configs they care about to create a new
 031 * AsyncTable/RawAsyncTable instance.
-032 */
-033@InterfaceAudience.Public
-034public interface AsyncTableBuilder {
-035
-036  /**
-037   * Set timeout for a whole operation 
such as get, put or delete. Notice that scan will not be
-038   * effected by this value, see 
scanTimeoutNs.
-039   * 

-040   * Operation timeout and max attempt times(or max retry times) are both limitations for retrying,
-041   * we will stop retrying when we reach any of the limitations.
-042   * @see #setMaxAttempts(int)
-043   * @see #setMaxRetries(int)
-044   * @see #setScanTimeout(long, TimeUnit)
-045   */
-046  AsyncTableBuilder setOperationTimeout(long timeout, TimeUnit unit);
-047
-048  /**
-049   * As now we have heartbeat support for scan, ideally a scan will never timeout unless the RS is
-050   * crash. The RS will always return something before the rpc timed out or scan timed out to tell
-051   * the client that it is still alive. The scan timeout is used as operation timeout for every
-052   * operation in a scan, such as openScanner or next.
-053   * @see #setScanTimeout(long, TimeUnit)
-054   */
-055  AsyncTableBuilder setScanTimeout(long timeout, TimeUnit unit);
-056
-057  /**
-058   * Set timeout for each rpc request.
-059   *
-060   * Notice that this will NOT change the rpc timeout for read(get, scan) request
-061   * and write request(put, delete).
-062   */
-063  AsyncTableBuilder setRpcTimeout(long timeout, TimeUnit unit);
-064
-065  /**
-066   * Set timeout for each read(get, scan) rpc request.
-067   */
-068  AsyncTableBuilder setReadRpcTimeout(long timeout, TimeUnit unit);
-069
-070  /**
-071   * Set timeout for each write(put, delete) rpc request.
-072   */
-073  AsyncTableBuilder setWriteRpcTimeout(long timeout, TimeUnit unit);
-074
-075  /**
-076   * Set the base pause time for retrying. We use an exponential policy to generate sleep time when
-077   * retrying.
-078   */
-079  AsyncTableBuilder setRetryPause(long pause, TimeUnit unit);
-080
-081  /**
-082   * Set the max retry times for an operation. Usually it is the max attempt times minus 1.
-083   *
-084   * Operation timeout and max attempt times(or max retry times) are both limitations for retrying,
-085   * we will stop retrying when we reach any of the limitations.
-086   * @see #setMaxAttempts(int)
-087   * @see #setOperationTimeout(long, TimeUnit)
-088   */
-089  default AsyncTableBuilder setMaxRetries(int maxRetries) {
-090    return setMaxAttempts(retries2Attempts(maxRetries));
-091  }
-092
-093  /**
-094   * Set the max attempt times for an operation. Usually it is the max retry times plus 1. Operation
-095   * timeout and max attempt times(or max retry times) are both limitations for retrying, we will
-096   * stop retrying when we reach any of the limitations.
-097   * @see #setMaxRetries(int)
-098   * @see #setOperationTimeout(long, TimeUnit)
-099   */
-100  AsyncTableBuilder setMaxAttempts(int maxAttempts);
-101
-102  /**
-103   * Set the number of retries that are allowed before we start to log.
-104   */
-105  AsyncTableBuilder setStartLogErrorsCnt(int startLogErrorsCnt);
-106
-107  /**
-108   * Create the {@link AsyncTable} or {@link RawAsyncTable} instance.
-109   */
-110  T build();
-111}
+032 * @since 2.0.0
+033 */
+034@InterfaceAudience.Public
+035public interface AsyncTableBuilder {
+036
+037  /**
+038   * Set timeout for a whole operation such as get, put or delete. Notice that scan will not be
+039   * effected by this value, see scanTimeoutNs.
+040   *
+041   * Operation timeout and max attempt times(or max retry times) are both limitations for retrying,
+042   * we will stop retrying when we reach any of the limitations.
+043   * @see #setMaxAttempts(int)
+044   * @see #setMaxRetries(int)
+045   * @see #setScanTimeout(long, TimeUnit)
+046   */
+047  AsyncTableBuilder setOperationTimeout(long timeout, TimeUnit unit);
+048
+049  /**
+050   * As now we have heartbeat support for scan, ideally a scan will never timeout unless the RS is
+051   * crash. The RS will always return something before the rpc timed out or scan timed out
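All of the setters quoted above return the builder itself, so configuring a table is a single fluent chain. A sketch that only assumes a builder instance is already in hand (how it is obtained from the connection is not part of this page):

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.AsyncTableBuilder;
    import org.apache.hadoop.hbase.client.RawAsyncTable;

    public class TableBuilderSketch {
      static RawAsyncTable tuned(AsyncTableBuilder<RawAsyncTable> builder) {
        return builder
            .setOperationTimeout(30, TimeUnit.SECONDS)  // whole get/put/delete, not scans
            .setScanTimeout(3, TimeUnit.MINUTES)        // per scan operation (openScanner/next)
            .setRpcTimeout(10, TimeUnit.SECONDS)        // every individual rpc
            .setRetryPause(100, TimeUnit.MILLISECONDS)  // base of the exponential backoff
            .setMaxRetries(10)                          // i.e. max attempts minus 1
            .setStartLogErrorsCnt(3)                    // stay quiet for the first few retries
            .build();
      }
    }
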


[19/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/index-all.html
--
diff --git a/devapidocs/index-all.html b/devapidocs/index-all.html
index 607c302..e481c3b 100644
--- a/devapidocs/index-all.html
+++ b/devapidocs/index-all.html
@@ -3045,6 +3045,11 @@
  
 append(Append)
 - Method in class org.apache.hadoop.hbase.rest.client.RemoteHTable
  
+append(CellSetModel)
 - Method in class org.apache.hadoop.hbase.rest.RowResource
+
+Validates the input request parameters, parses columns from 
CellSetModel,
+ and invokes Append on HTable.
+
 append(TAppend)
 - Method in class org.apache.hadoop.hbase.thrift.ThriftServerRunner.HBaseHandler
  
 append(ByteBuffer,
 TAppend) - Method in class org.apache.hadoop.hbase.thrift2.ThriftHBaseServiceHandler
@@ -5671,10 +5676,6 @@
  
 beforeShipped()
 - Method in class org.apache.hadoop.hbase.regionserver.querymatcher.ExplicitColumnTracker
  
-beforeShipped()
 - Method in class org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
- 
 beforeShipped()
 - Method in class org.apache.hadoop.hbase.regionserver.querymatcher.NewVersionBehaviorTracker
  
 beforeShipped()
 - Method in class org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher
@@ -9791,6 +9792,8 @@
  
 CHECK_AND_PUT_KEY
 - Static variable in interface org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
  
+CHECK_APPEND
 - Static variable in class org.apache.hadoop.hbase.rest.RowResource
+ 
 CHECK_AUTHS_FOR_MUTATION
 - Static variable in class org.apache.hadoop.hbase.security.visibility.VisibilityConstants
  
 CHECK_COVERING_PERM
 - Static variable in class org.apache.hadoop.hbase.security.access.AccessController
@@ -9799,6 +9802,8 @@
  
 CHECK_FAILED
 - Static variable in class org.apache.hadoop.hbase.backup.impl.BackupAdminImpl
  
+CHECK_INCREMENT
 - Static variable in class org.apache.hadoop.hbase.rest.RowResource
+ 
 CHECK_MUTATE_FAILED_COUNT
 - Static variable in interface org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
  
 CHECK_MUTATE_FAILED_COUNT_DESC
 - Static variable in interface org.apache.hadoop.hbase.regionserver.MetricsRegionServerSource
@@ -10503,11 +10508,6 @@
 
 Checks the parameters passed to a constructor.
 
-checkPartialDropDeleteRange(Cell)
 - Method in class org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
-Handle partial-drop-deletes.
-
 checkPathExist(String,
 Configuration) - Static method in class 
org.apache.hadoop.hbase.backup.util.BackupUtils
 
 Check whether the backup path exist
@@ -17993,10 +17993,6 @@
  
 create(ScanInfo,
 ScanType, long, long, long, long, byte[], byte[], 
RegionCoprocessorHost) - Static method in class 
org.apache.hadoop.hbase.regionserver.querymatcher.CompactionScanQueryMatcher
  
-create(Scan,
 ScanInfo, NavigableSet, ScanType, long, long, long, long, 
byte[], byte[], RegionCoprocessorHost) - Static method in class 
org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
- 
 create(Scan,
 ScanInfo, ColumnTracker, DeleteTracker, boolean, long, long) - 
Static method in class org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher
  
 create(Scan,
 ScanInfo, ColumnTracker, boolean, long, long) - Static method in 
class org.apache.hadoop.hbase.regionserver.querymatcher.RawScanQueryMatcher
@@ -23655,11 +23651,6 @@
 
 Keeps track of deletes
 
-deletes
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
-Keeps track of deletes
-
 deletes
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher
 
 Keeps track of deletes
@@ -25359,20 +25350,12 @@
 
 DropDeletesCompactionScanQueryMatcher(ScanInfo,
 DeleteTracker, ColumnTracker, long, long, long, long) - Constructor 
for class org.apache.hadoop.hbase.regionserver.querymatcher.DropDeletesCompactionScanQueryMatcher
  
-dropDeletesFromRow
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
- 
 dropDeletesFromRow
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher
  
 dropDeletesInOutput
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher
  
 DropDeletesInOutput()
 - Constructor for enum org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher.DropDeletesInOutput
  
-dropDeletesToRow
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.LegacyScanQueryMatcher
-
-Deprecated.
- 
 dropDeletesToRow
 - Variable in class org.apache.hadoop.hbase.regionserver.querymatcher.StripeCompactionScanQueryMatcher
  
 dropDependentColumn
 - Variable in class org.apache.hadoop.hbase.filter.DependentColumnFilter
@@ -25637,12 +25620,6 @@
 Oldest put in any of the involved store files Used

[01/51] [partial] hbase-site git commit: Published site at .

Repository: hbase-site
Updated Branches:
  refs/heads/asf-site e8ae19713 -> ebf9a8b87


http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.ModifyableTableDescriptor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.ModifyableTableDescriptor.html
 
b/devapidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.ModifyableTableDescriptor.html
index a1ce360..4a0419dd 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.ModifyableTableDescriptor.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.ModifyableTableDescriptor.html
@@ -118,7 +118,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public static class TableDescriptorBuilder.ModifyableTableDescriptor
+public static class TableDescriptorBuilder.ModifyableTableDescriptor
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements TableDescriptor, http://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html?is-external=true";
 title="class or interface in java.lang">Comparable
 TODO: make this private after removing the 
HTableDescriptor
@@ -672,7 +672,7 @@ implements 
 
 name
-private final TableName name
+private final TableName name
 
 
 
@@ -681,7 +681,7 @@ implements 
 
 values
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map values
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map values
 A map which holds the metadata information of the table. 
This metadata
  includes values like IS_META, SPLIT_POLICY, MAX_FILE_SIZE,
  READONLY, MEMSTORE_FLUSHSIZE etc...
@@ -693,7 +693,7 @@ implements 
 
 configuration
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String> configuration
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String> configuration
 A map which holds the configuration specific to the table. 
The keys of
  the map have the same names as config keys and override the defaults with
  table-specific settings. Example usage may be for compactions, etc.
@@ -705,7 +705,7 @@ implements 
 
 families
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map families
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map families
 Maps column family name to the respective 
FamilyDescriptors
 
 
@@ -724,7 +724,7 @@ implements 
 ModifyableTableDescriptor
 @InterfaceAudience.Private
-public ModifyableTableDescriptor(TableName name)
+public ModifyableTableDescriptor(TableName name)
 Construct a table descriptor specifying a TableName 
object
 
 Parameters:
@@ -739,7 +739,7 @@ public 
 
 ModifyableTableDescriptor
-private ModifyableTableDescriptor(TableDescriptor desc)
+private ModifyableTableDescriptor(TableDescriptor desc)
 
 
 
@@ -750,7 +750,7 @@ public @InterfaceAudience.Private
  http://docs.oracle.com/javase/8/docs/api/java/lang/Deprecated.html?is-external=true";
 title="class or interface in java.lang">@Deprecated
-public ModifyableTableDescriptor(TableName name,
+public ModifyableTableDescriptor(TableName name,
  TableDescriptor desc)
 Deprecated. 
 Construct a table descriptor by cloning the descriptor 
passed as a
@@ -771,7 +771,7 @@ public 
 
 ModifyableTableDescriptor
-private ModifyableTableDescriptor(TableName name,
+private ModifyableTableDescriptor(TableName name,
   http://docs.oracle.com/javase/8/docs/api/java/util/Collection.html?is-external=true";
 title="class or interface in java.util">Collection families,
   http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-exte

[15/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
index ec4f659..effd8bf 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
@@ -106,10 +106,14 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncAdminBuilder
+public interface AsyncAdminBuilder
 For creating AsyncAdmin. The implementation 
should have default configurations set before
  returning the builder to user. So users are free to only set the configs they 
care about to
  create a new AsyncAdmin instance.
+
+Since:
+2.0.0
+
 
 
 
@@ -194,7 +198,7 @@ public interface 
 
 setOperationTimeout
-AsyncAdminBuilder setOperationTimeout(long timeout,
+AsyncAdminBuilder setOperationTimeout(long timeout,
   http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for a whole admin operation. Operation timeout 
and max attempt times(or max retry
  times) are both limitations for retrying, we will stop retrying when we reach 
any of the
@@ -214,7 +218,7 @@ public interface 
 
 setRpcTimeout
-AsyncAdminBuilder setRpcTimeout(long timeout,
+AsyncAdminBuilder setRpcTimeout(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for each rpc request.
 
@@ -232,7 +236,7 @@ public interface 
 
 setRetryPause
-AsyncAdminBuilder setRetryPause(long timeout,
+AsyncAdminBuilder setRetryPause(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set the base pause time for retrying. We use an exponential 
policy to generate sleep time when
  retrying.
@@ -251,7 +255,7 @@ public interface 
 
 setMaxRetries
-default AsyncAdminBuilder setMaxRetries(int maxRetries)
+default AsyncAdminBuilder setMaxRetries(int maxRetries)
 Set the max retry times for an admin operation. Usually it 
is the max attempt times minus 1.
  Operation timeout and max attempt times(or max retry times) are both 
limitations for retrying,
  we will stop retrying when we reach any of the limitations.
@@ -269,7 +273,7 @@ public interface 
 
 setMaxAttempts
-AsyncAdminBuilder setMaxAttempts(int maxAttempts)
+AsyncAdminBuilder setMaxAttempts(int maxAttempts)
 Set the max attempt times for an admin operation. Usually 
it is the max retry times plus 1.
  Operation timeout and max attempt times(or max retry times) are both 
limitations for retrying,
  we will stop retrying when we reach any of the limitations.
@@ -287,7 +291,7 @@ public interface 
 
 setStartLogErrorsCnt
-AsyncAdminBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
+AsyncAdminBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
 Set the number of retries that are allowed before we start 
to log.
 
 Parameters:
@@ -303,7 +307,7 @@ public interface 
 
 build
-AsyncAdmin build()
+AsyncAdmin build()
 Create a AsyncAdmin 
instance.
 
 Returns:

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminRequestRetryingCaller.Callable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminRequestRetryingCaller.Callable.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminRequestRetryingCaller.Callable.html
index fc621a5..4291618 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminRequestRetryingCaller.Callable.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdminRequestRetryingCaller.Callable.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/FunctionalInterface.html?is-external=true";
 title="class or interface in java.lang">@FunctionalInterface
-public static interface AsyncAdminRequestRetryingCaller.Callable
+public static interface AsyncAdminRequestRetryingCaller.Callable
 
 
 
@@ -155,7 +155,7 @@ public static interface 
 
 call
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFuture call(HBaseRpcController controller,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">Completabl

[07/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/Get.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/Get.html 
b/devapidocs/org/apache/hadoop/hbase/client/Get.html
index 51e683f..1377eb8 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/Get.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/Get.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":42,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":42,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":42,"i35":42,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":42,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":42,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":42,"i35":42,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -207,13 +207,17 @@ implements private int
 storeOffset 
 
+
+private TimeRange
+tr 
+
 
 
 
 
 
 Fields inherited from class org.apache.hadoop.hbase.client.Query
-colFamTimeRangeMap,
 consistency,
 filter,
 loadColumnFamiliesOnDemand,
 targetReplicaId,
 tr
+colFamTimeRangeMap,
 consistency,
 filter,
 loadColumnFamiliesOnDemand,
 targetReplicaId
 
 
 
@@ -344,20 +348,26 @@ implements 
 
 
+TimeRange
+getTimeRange()
+Method for retrieving the get's TimeRange
+
+
+
 boolean
 hasFamilies()
 Method for checking if any families have been inserted into 
this Get
 
 
-
+
 int
 hashCode() 
 
-
+
 boolean
 isCheckExistenceOnly() 
 
-
+
 boolean
 isClosestRowBefore()
 Deprecated. 
@@ -365,57 +375,57 @@ implements 
 
 
-
+
 int
 numFamilies()
 Method for retrieving the number of families to get 
from
 
 
-
+
 Get
 readAllVersions()
 Get all available versions.
 
 
-
+
 Get
 readVersions(int versions)
 Get up to the specified number of versions of each 
column.
 
 
-
+
 Get
 setACL(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,Permission> perms) 
 
-
+
 Get
 setACL(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String user,
   Permission perms) 
 
-
+
 Get
 setAttribute(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name,
 byte[] value)
 Sets an attribute.
 
 
-
+
 Get
 setAuthorizations(Authorizations authorizations)
 Sets the authorizations to be used by this Query
 
 
-
+
 Get
 setCacheBlocks(boolean cacheBlocks)
 Set whether blocks should be cached for this Get.
 
 
-
+
 Get
 setCheckExistenceOnly(boolean checkExistenceOnly) 
 
-
+
 Get
 setClosestRowBefore(boolean closestRowBefore)
 Deprecated. 
@@ -423,7 +433,7 @@ implements 
 
 
-
+
 Get
 setColumnFamilyTimeRange(byte[] cf,
 long minStamp,
@@ -432,11 +442,6 @@ implements 
 
 
-
-Get
-setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
 
 Get
 setConsistency(Consistency consistency)
@@ -518,17 +523,11 @@ implements 
 
 Get
-setTimeRange(TimeRange tr)
-Get versions of columns only within the specified timestamp 
range,
-
-
-
-Get
 setTimeStamp(long timestamp)
 Get versions of columns with the specified timestamp.
 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object>
 toMap(int maxCols)
 Compile the details beyond the scope of getFingerprint 
(row, columns,
@@ -541,7 +540,7 @@ implements 
 
 Methods inherited from class org.apache.hadoop.hbase.client.Query
-doLoadColumnFamiliesOnDemand,
 getACL,
 getAuthorizations,
 getColumnFamilyTimeRange,
 getConsistency,
 getFilter,
 getIsolationLevel,
 getLoadColumnFamiliesOnDemandValue,
 get
 ReplicaId, getTimeRange
+doLoadColumnFamiliesOnDemand,
 getACL,
 getAuthorizations,
 getColumnFamilyTimeRange,
 getConsistency,
 getFilter,
 getIsolationLevel,
 getLoadColumnFamiliesOnDemandValue,
 get
 ReplicaId
 
 
 
@@ -632,13 +631,22 
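Beyond the TimeRange move, the method table above keeps the fluent read knobs on Get. A short sketch exercising the ones listed there (readVersions, setCacheBlocks, setCheckExistenceOnly):

    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetOptionsSketch {
      static Get lastThreeVersions(byte[] row) throws java.io.IOException {
        return new Get(row)
            .readVersions(3)           // up to three versions per column
            .setCacheBlocks(false)     // don't pollute the block cache for a one-off read
            .addFamily(Bytes.toBytes("cf"));
      }

      static Get presenceProbe(byte[] row) {
        return new Get(row).setCheckExistenceOnly(true);  // server answers exists/not-exists only
      }
    }
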

[41/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html 
b/apidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
index 70979e6..82b706a 100644
--- a/apidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
+++ b/apidocs/org/apache/hadoop/hbase/client/TableDescriptorBuilder.html
@@ -110,8 +110,12 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class TableDescriptorBuilder
+public class TableDescriptorBuilder
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
+
+Since:
+2.0.0
+
 
 
 
@@ -363,7 +367,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 LOG
-public static final org.apache.commons.logging.Log LOG
+public static final org.apache.commons.logging.Log LOG
 
 
 
@@ -372,7 +376,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_READONLY
-public static final boolean DEFAULT_READONLY
+public static final boolean DEFAULT_READONLY
 Constant that denotes whether the table is READONLY by 
default and is false
 
 See Also:
@@ -386,7 +390,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_COMPACTION_ENABLED
-public static final boolean DEFAULT_COMPACTION_ENABLED
+public static final boolean DEFAULT_COMPACTION_ENABLED
 Constant that denotes whether the table is compaction 
enabled by default
 
 See Also:
@@ -400,7 +404,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_NORMALIZATION_ENABLED
-public static final boolean DEFAULT_NORMALIZATION_ENABLED
+public static final boolean DEFAULT_NORMALIZATION_ENABLED
 Constant that denotes whether the table is normalized by 
default.
 
 See Also:
@@ -414,7 +418,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_MEMSTORE_FLUSH_SIZE
-public static final long DEFAULT_MEMSTORE_FLUSH_SIZE
+public static final long DEFAULT_MEMSTORE_FLUSH_SIZE
 Constant that denotes the maximum default size of the 
memstore after which
  the contents are flushed to the store files
 
@@ -429,7 +433,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_REGION_REPLICATION
-public static final int DEFAULT_REGION_REPLICATION
+public static final int DEFAULT_REGION_REPLICATION
 
 See Also:
 Constant
 Field Values
@@ -442,7 +446,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_REGION_MEMSTORE_REPLICATION
-public static final boolean DEFAULT_REGION_MEMSTORE_REPLICATION
+public static final boolean DEFAULT_REGION_MEMSTORE_REPLICATION
 
 See Also:
 Constant
 Field Values
@@ -455,7 +459,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 NAMESPACE_TABLEDESC
-public static final TableDescriptor NAMESPACE_TABLEDESC
+public static final TableDescriptor NAMESPACE_TABLEDESC
 Table descriptor for namespace table
 
 
@@ -473,7 +477,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 toByteArray
-public static byte[] toByteArray(TableDescriptor desc)
+public static byte[] toByteArray(TableDescriptor desc)
 
 Parameters:
 desc - The table descriptor to serialize
@@ -488,7 +492,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 parseFrom
-public static TableDescriptor parseFrom(byte[] pbBytes)
+public static TableDescriptor parseFrom(byte[] pbBytes)
  throws 
org.apache.hadoop.hbase.exceptions.DeserializationException
 The input should be created by toByteArray(org.apache.hadoop.hbase.client.TableDescriptor).
 
@@ -507,7 +511,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 newBuilder
-public static TableDescriptorBuilder newBuilder(TableName name)
+public static TableDescriptorBuilder newBuilder(TableName name)
 
 
 
@@ -516,7 +520,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 copy
-public static TableDescriptor copy(TableDescriptor desc)
+public static TableDescriptor copy(TableDescriptor desc)
 
 
 
@@ -525,7 +529,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 copy
-public static TableDescriptor copy(TableName name,
+public static TableDescriptor copy(TableName name,
TableDescriptor desc)
 
 
@@ -535,7 +539,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 newBuilder
-public static TableDescriptorBuilder newBuilder(TableDescriptor desc)
+public static TableDescriptorBuilder newBuilder(TableDescriptor desc)
 Copy all configuration, values, families, and name from the 
input.
 
 Parameters:
@@ -551,7 +555
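A minimal Java sketch of the TableDescriptorBuilder round trip documented in the hunks above (newBuilder, toByteArray, parseFrom, copy). The build() call and TableName.valueOf are assumptions about the surrounding 2.0.0 client API; the static helpers are the ones shown in the diff.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.exceptions.DeserializationException;

public class TableDescriptorRoundTrip {
  public static void main(String[] args) throws DeserializationException {
    // newBuilder(TableName) starts a descriptor for a hypothetical table;
    // build() (assumed, as in the 2.0.0 API) finalizes it.
    TableDescriptor desc =
        TableDescriptorBuilder.newBuilder(TableName.valueOf("example_table")).build();

    // Serialize with the static helper documented above, then parse it back.
    byte[] pb = TableDescriptorBuilder.toByteArray(desc);
    TableDescriptor parsed = TableDescriptorBuilder.parseFrom(pb);

    // copy(TableDescriptor) yields an equivalent descriptor; newBuilder(desc)
    // would instead copy configuration, values, families and name into a builder.
    TableDescriptor copied = TableDescriptorBuilder.copy(parsed);
    System.out.println(copied.getTableName());
  }
}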

[50/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apache_hbase_reference_guide.pdf
--
diff --git a/apache_hbase_reference_guide.pdf b/apache_hbase_reference_guide.pdf
index 029a7cf..53281f3 100644
--- a/apache_hbase_reference_guide.pdf
+++ b/apache_hbase_reference_guide.pdf
@@ -5,16 +5,16 @@
 /Author (Apache HBase Team)
 /Creator (Asciidoctor PDF 1.5.0.alpha.15, based on Prawn 2.2.2)
 /Producer (Apache HBase Team)
-/ModDate (D:20170825144607+00'00')
-/CreationDate (D:20170825144607+00'00')
+/ModDate (D:20170826144616+00'00')
+/CreationDate (D:20170826144616+00'00')
 >>
 endobj
 2 0 obj
 << /Type /Catalog
 /Pages 3 0 R
 /Names 26 0 R
-/Outlines 4314 0 R
-/PageLabels 4522 0 R
+/Outlines 4315 0 R
+/PageLabels 4523 0 R
 /PageMode /UseOutlines
 /OpenAction [7 0 R /FitH 842.89]
 /ViewerPreferences << /DisplayDocTitle true
@@ -24,7 +24,7 @@ endobj
 3 0 obj
 << /Type /Pages
 /Count 663
-/Kids [7 0 R 12 0 R 14 0 R 16 0 R ... (long list of PDF page object references; wrapped and truncated in the archive)]

[33/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
index 50a358b..9fde7a7 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Query.html
@@ -25,263 +25,227 @@
 017 */
 018package org.apache.hadoop.hbase.client;
 019
-020import java.io.IOException;
-021import java.util.Map;
-022
-023import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
-024import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
-025import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
-026import 
org.apache.hadoop.hbase.filter.Filter;
-027import 
org.apache.hadoop.hbase.io.TimeRange;
-028import 
org.apache.hadoop.hbase.security.access.AccessControlConstants;
-029import 
org.apache.hadoop.hbase.security.access.AccessControlUtil;
-030import 
org.apache.hadoop.hbase.security.access.Permission;
-031import 
org.apache.hadoop.hbase.security.visibility.Authorizations;
-032import 
org.apache.hadoop.hbase.security.visibility.VisibilityConstants;
-033import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
-034
-035import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.ArrayListMultimap;
-036import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.ListMultimap;
-037import 
org.apache.hadoop.hbase.util.Bytes;
-038
-039@InterfaceAudience.Public
-040public abstract class Query extends 
OperationWithAttributes {
-041  private static final String 
ISOLATION_LEVEL = "_isolationlevel_";
-042  protected Filter filter = null;
-043  protected int targetReplicaId = -1;
-044  protected Consistency consistency = 
Consistency.STRONG;
-045  protected Map 
colFamTimeRangeMap = Maps.newTreeMap(Bytes.BYTES_COMPARATOR);
-046  protected Boolean 
loadColumnFamiliesOnDemand = null;
-047  protected TimeRange tr = new 
TimeRange();
-048  /**
-049   * @return Filter
-050   */
-051  public Filter getFilter() {
-052return filter;
-053  }
-054
-055  /**
-056   * Apply the specified server-side 
filter when performing the Query. Only
-057   * {@link 
Filter#filterKeyValue(org.apache.hadoop.hbase.Cell)} is called AFTER all tests 
for ttl,
-058   * column match, deletes and column 
family's max versions have been run.
-059   * @param filter filter to run on the 
server
-060   * @return this for invocation 
chaining
-061   */
-062  public Query setFilter(Filter filter) 
{
-063this.filter = filter;
-064return this;
-065  }
-066
-067  /**
-068   * @return TimeRange
-069   */
-070  public TimeRange getTimeRange() {
-071return tr;
-072  }
-073
-074  /**
-075   * Sets the TimeRange to be used by 
this Query
-076   * @param tr TimeRange
-077   * @return Query
+020import java.util.Map;
+021
+022import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.Maps;
+023import 
org.apache.hadoop.hbase.classification.InterfaceAudience;
+024import 
org.apache.hadoop.hbase.exceptions.DeserializationException;
+025import 
org.apache.hadoop.hbase.filter.Filter;
+026import 
org.apache.hadoop.hbase.io.TimeRange;
+027import 
org.apache.hadoop.hbase.security.access.AccessControlConstants;
+028import 
org.apache.hadoop.hbase.security.access.AccessControlUtil;
+029import 
org.apache.hadoop.hbase.security.access.Permission;
+030import 
org.apache.hadoop.hbase.security.visibility.Authorizations;
+031import 
org.apache.hadoop.hbase.security.visibility.VisibilityConstants;
+032import 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+033
+034import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.ArrayListMultimap;
+035import 
org.apache.hadoop.hbase.shaded.com.google.common.collect.ListMultimap;
+036import 
org.apache.hadoop.hbase.util.Bytes;
+037
+038@InterfaceAudience.Public
+039public abstract class Query extends 
OperationWithAttributes {
+040  private static final String 
ISOLATION_LEVEL = "_isolationlevel_";
+041  protected Filter filter = null;
+042  protected int targetReplicaId = -1;
+043  protected Consistency consistency = 
Consistency.STRONG;
+044  protected Map 
colFamTimeRangeMap = Maps.newTreeMap(Bytes.BYTES_COMPARATOR);
+045  protected Boolean 
loadColumnFamiliesOnDemand = null;
+046  /**
+047   * @return Filter
+048   */
+049  public Filter getFilter() {
+050return filter;
+051  }
+052
+053  /**
+054   * Apply the specified server-side 
filter when performing the Query. Only
+055   * {@link 
Filter#filterKeyValue(org.apache.hadoop.hbase.Cell)} is called AFTER all tests 
for ttl,
+056   * column match, deletes and column 
family's max versions have been run.
+057   * @param filter filter to run on the 
server
+058   * @return this for invocation 
chaining
+059   */
+060  public Query setFilter(Filter filter) 

[08/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html 
b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
index 7e6327d..987bfa6 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
@@ -110,8 +110,12 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class ColumnFamilyDescriptorBuilder
+public class ColumnFamilyDescriptorBuilder
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
+
+Since:
+2.0.0
+
 
 
 
@@ -779,7 +783,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 COLUMN_DESCRIPTOR_VERSION
-private static final byte COLUMN_DESCRIPTOR_VERSION
+private static final byte COLUMN_DESCRIPTOR_VERSION
 
 See Also:
 Constant
 Field Values
@@ -793,7 +797,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 IN_MEMORY_COMPACTION
 @InterfaceAudience.Private
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String IN_MEMORY_COMPACTION
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String IN_MEMORY_COMPACTION
 
 See Also:
 Constant
 Field Values
@@ -806,7 +810,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 
 IN_MEMORY_COMPACTION_BYTES
-private static final Bytes IN_MEMORY_COMPACTION_BYTES
+private static final Bytes IN_MEMORY_COMPACTION_BYTES
 
 
 
@@ -816,7 +820,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 IN_MEMORY
 @InterfaceAudience.Private
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String IN_MEMORY
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String IN_MEMORY
 
 See Also:
 Constant
 Field Values
@@ -829,7 +833,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 
 IN_MEMORY_BYTES
-private static final Bytes IN_MEMORY_BYTES
+private static final Bytes IN_MEMORY_BYTES
 
 
 
@@ -839,7 +843,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 COMPRESSION
 @InterfaceAudience.Private
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String COMPRESSION
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String COMPRESSION
 
 See Also:
 Constant
 Field Values
@@ -852,7 +856,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 
 COMPRESSION_BYTES
-private static final Bytes COMPRESSION_BYTES
+private static final Bytes COMPRESSION_BYTES
 
 
 
@@ -862,7 +866,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 COMPRESSION_COMPACT
 @InterfaceAudience.Private
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String COMPRESSION_COMPACT
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String COMPRESSION_COMPACT
 
 See Also:
 Constant
 Field Values
@@ -875,7 +879,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 
 COMPRESSION_COMPACT_BYTES
-private static final Bytes COMPRESSION_COMPACT_BYTES
+private static final Bytes COMPRESSION_COMPACT_BYTES
 
 
 
@@ -885,7 +889,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 DATA_BLOCK_ENCODING
 @InterfaceAudience.Private
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String DATA_BLOCK_ENCODING
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String DATA_BLOCK_ENCODING
 
 See Also:
 Constant
 Field Values
@@ -898,7 +902,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 
 DATA_BLOCK_ENCODING_BYTES
-private static final Bytes DATA_BLOCK_ENCODING_BYTES
+private static final Bytes DATA_BLOCK_ENCODING_BYTES
 
 
 
@@ -908,7 +912,7 @@ public static final http://docs.oracle.com/javase/8/docs/api/java/
 
 BLOCKCACHE
 @InterfaceAudience.Private
-public static final http://docs

[45/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
--
diff --git 
a/apidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html 
b/apidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
index aea3777..8042714 100644
--- a/apidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
+++ b/apidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
@@ -110,8 +110,12 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public class ColumnFamilyDescriptorBuilder
+public class ColumnFamilyDescriptorBuilder
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
+
+Since:
+2.0.0
+
 
 
 
@@ -479,7 +483,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_MOB_THRESHOLD
-public static final long DEFAULT_MOB_THRESHOLD
+public static final long DEFAULT_MOB_THRESHOLD
 
 See Also:
 Constant
 Field Values
@@ -492,7 +496,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_MOB_COMPACT_PARTITION_POLICY
-public static final MobCompactPartitionPolicy 
DEFAULT_MOB_COMPACT_PARTITION_POLICY
+public static final MobCompactPartitionPolicy 
DEFAULT_MOB_COMPACT_PARTITION_POLICY
 
 
 
@@ -501,7 +505,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_DFS_REPLICATION
-public static final short DEFAULT_DFS_REPLICATION
+public static final short DEFAULT_DFS_REPLICATION
 
 See Also:
 Constant
 Field Values
@@ -514,7 +518,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 NEW_VERSION_BEHAVIOR
-public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String NEW_VERSION_BEHAVIOR
+public static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String NEW_VERSION_BEHAVIOR
 
 See Also:
 Constant
 Field Values
@@ -527,7 +531,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_NEW_VERSION_BEHAVIOR
-public static final boolean DEFAULT_NEW_VERSION_BEHAVIOR
+public static final boolean DEFAULT_NEW_VERSION_BEHAVIOR
 
 See Also:
 Constant
 Field Values
@@ -540,7 +544,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_COMPRESSION
-public static 
final org.apache.hadoop.hbase.io.compress.Compression.Algorithm DEFAULT_COMPRESSION
+public static 
final org.apache.hadoop.hbase.io.compress.Compression.Algorithm DEFAULT_COMPRESSION
 Default compression type.
 
 
@@ -550,7 +554,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_DATA_BLOCK_ENCODING
-public static final DataBlockEncoding DEFAULT_DATA_BLOCK_ENCODING
+public static final DataBlockEncoding DEFAULT_DATA_BLOCK_ENCODING
 Default data block encoding algorithm.
 
 
@@ -560,7 +564,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_MAX_VERSIONS
-public static final int DEFAULT_MAX_VERSIONS
+public static final int DEFAULT_MAX_VERSIONS
 Default number of versions of a record to keep.
 
 See Also:
@@ -574,7 +578,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_MIN_VERSIONS
-public static final int DEFAULT_MIN_VERSIONS
+public static final int DEFAULT_MIN_VERSIONS
 Default is not to keep a minimum of versions.
 
 See Also:
@@ -588,7 +592,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_IN_MEMORY
-public static final boolean DEFAULT_IN_MEMORY
+public static final boolean DEFAULT_IN_MEMORY
 Default setting for whether to try and serve this column 
family from memory
  or not.
 
@@ -603,7 +607,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_KEEP_DELETED
-public static final KeepDeletedCells DEFAULT_KEEP_DELETED
+public static final KeepDeletedCells DEFAULT_KEEP_DELETED
Default setting for preventing deleted cells from being collected 
immediately.
 
 
@@ -613,7 +617,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_BLOCKCACHE
-public static final boolean DEFAULT_BLOCKCACHE
+public static final boolean DEFAULT_BLOCKCACHE
 Default setting for whether to use a block cache or 
not.
 
 See Also:
@@ -627,7 +631,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?
 
 
 DEFAULT_CACHE_DATA_ON_WRITE
-public static final boolean DEFAULT_CACHE_DATA_ON_WRITE
+public static final boolean DEFAULT_CACHE_DATA_ON_WRITE
 Default setting for whether to cache data blocks on write 
if block caching
  is enabled.
 
@@ -642,7 +646,7 @@ extends http://docs.oracle.com/javase/8/docs/api/
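A small sketch of how the defaults listed above come into play when a column family descriptor is built. newBuilder(byte[]), setMaxVersions(int) and build() are assumed to be available on the builder as in the 2.0.0 API; only the default constants come from this page.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnFamilySketch {
  public static void main(String[] args) {
    // A family built without overrides keeps the documented defaults
    // (DEFAULT_MAX_VERSIONS, DEFAULT_BLOCKCACHE, DEFAULT_COMPRESSION, ...).
    ColumnFamilyDescriptor plain =
        ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("d")).build();

    // Overriding a single setting; setMaxVersions is assumed per the 2.0.0 API.
    ColumnFamilyDescriptor versioned = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("v"))
        .setMaxVersions(3)
        .build();

    System.out.println(plain.getMaxVersions() + " vs " + versioned.getMaxVersions());
  }
}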

[38/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
index 8fce4aa..f676477 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
@@ -35,70 +35,71 @@
 027 * For creating {@link AsyncAdmin}. The 
implementation should have default configurations set before
 028 * returning the builder to user. So 
users are free to only set the configs they care about to
 029 * create a new AsyncAdmin instance.
-030 */
-031@InterfaceAudience.Public
-032public interface AsyncAdminBuilder {
-033
-034  /**
-035   * Set timeout for a whole admin 
operation. Operation timeout and max attempt times(or max retry
-036   * times) are both limitations for 
retrying, we will stop retrying when we reach any of the
-037   * limitations.
-038   * @param timeout
-039   * @param unit
-040   * @return this for invocation 
chaining
-041   */
-042  AsyncAdminBuilder 
setOperationTimeout(long timeout, TimeUnit unit);
-043
-044  /**
-045   * Set timeout for each rpc request.
-046   * @param timeout
-047   * @param unit
-048   * @return this for invocation 
chaining
-049   */
-050  AsyncAdminBuilder setRpcTimeout(long 
timeout, TimeUnit unit);
-051
-052  /**
-053   * Set the base pause time for 
retrying. We use an exponential policy to generate sleep time when
-054   * retrying.
-055   * @param timeout
-056   * @param unit
-057   * @return this for invocation 
chaining
-058   */
-059  AsyncAdminBuilder setRetryPause(long 
timeout, TimeUnit unit);
-060
-061  /**
-062   * Set the max retry times for an admin 
operation. Usually it is the max attempt times minus 1.
-063   * Operation timeout and max attempt 
times(or max retry times) are both limitations for retrying,
-064   * we will stop retrying when we reach 
any of the limitations.
-065   * @param maxRetries
-066   * @return this for invocation 
chaining
-067   */
-068  default AsyncAdminBuilder 
setMaxRetries(int maxRetries) {
-069return 
setMaxAttempts(retries2Attempts(maxRetries));
-070  }
-071
-072  /**
-073   * Set the max attempt times for an 
admin operation. Usually it is the max retry times plus 1.
-074   * Operation timeout and max attempt 
times(or max retry times) are both limitations for retrying,
-075   * we will stop retrying when we reach 
any of the limitations.
-076   * @param maxAttempts
-077   * @return this for invocation 
chaining
-078   */
-079  AsyncAdminBuilder setMaxAttempts(int 
maxAttempts);
-080
-081  /**
-082   * Set the number of retries that are 
allowed before we start to log.
-083   * @param startLogErrorsCnt
-084   * @return this for invocation 
chaining
-085   */
-086  AsyncAdminBuilder 
setStartLogErrorsCnt(int startLogErrorsCnt);
-087
-088  /**
-089   * Create a {@link AsyncAdmin} 
instance.
-090   * @return a {@link AsyncAdmin} 
instance
-091   */
-092  AsyncAdmin build();
-093}
+030 * @since 2.0.0
+031 */
+032@InterfaceAudience.Public
+033public interface AsyncAdminBuilder {
+034
+035  /**
+036   * Set timeout for a whole admin 
operation. Operation timeout and max attempt times(or max retry
+037   * times) are both limitations for 
retrying, we will stop retrying when we reach any of the
+038   * limitations.
+039   * @param timeout
+040   * @param unit
+041   * @return this for invocation 
chaining
+042   */
+043  AsyncAdminBuilder 
setOperationTimeout(long timeout, TimeUnit unit);
+044
+045  /**
+046   * Set timeout for each rpc request.
+047   * @param timeout
+048   * @param unit
+049   * @return this for invocation 
chaining
+050   */
+051  AsyncAdminBuilder setRpcTimeout(long 
timeout, TimeUnit unit);
+052
+053  /**
+054   * Set the base pause time for 
retrying. We use an exponential policy to generate sleep time when
+055   * retrying.
+056   * @param timeout
+057   * @param unit
+058   * @return this for invocation 
chaining
+059   */
+060  AsyncAdminBuilder setRetryPause(long 
timeout, TimeUnit unit);
+061
+062  /**
+063   * Set the max retry times for an admin 
operation. Usually it is the max attempt times minus 1.
+064   * Operation timeout and max attempt 
times(or max retry times) are both limitations for retrying,
+065   * we will stop retrying when we reach 
any of the limitations.
+066   * @param maxRetries
+067   * @return this for invocation 
chaining
+068   */
+069  default AsyncAdminBuilder 
setMaxRetries(int maxRetries) {
+070return 
setMaxAttempts(retries2Attempts(maxRetries));
+071  }
+072
+073  /**
+074   * Set the max attempt times for an 
admin operation. Usually it is the max retry times plus 1.
+075   * Operation timeout and max attempt 
times(or max retry times) are both limitations for retrying,
+076  
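A minimal sketch of configuring an AsyncAdmin with the builder methods documented in the source above. The ConnectionFactory.createAsyncConnection and AsyncConnection.getAdminBuilder entry points are assumptions about the surrounding 2.0.0 client API; only the setters come from this interface.

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AsyncAdminBuilderSketch {
  public static void main(String[] args) throws Exception {
    try (AsyncConnection conn =
        ConnectionFactory.createAsyncConnection(HBaseConfiguration.create()).get()) {
      AsyncAdmin admin = conn.getAdminBuilder()
          .setOperationTimeout(30, TimeUnit.SECONDS)  // limit for the whole admin operation
          .setRpcTimeout(5, TimeUnit.SECONDS)         // limit for each rpc request
          .setRetryPause(100, TimeUnit.MILLISECONDS)  // base pause for exponential backoff
          .setMaxRetries(10)                          // i.e. max attempt times minus 1
          .build();
      admin.listTableNames().get().forEach(System.out::println);
    }
  }
}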

[35/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
index a08278c..04904c7 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.html
@@ -49,1335 +49,1338 @@
 041import 
org.apache.hadoop.hbase.util.PrettyPrinter;
 042import 
org.apache.hadoop.hbase.util.PrettyPrinter.Unit;
 043
-044@InterfaceAudience.Public
-045public class 
ColumnFamilyDescriptorBuilder {
-046  // For future backward compatibility
-047
-048  // Version  3 was when column names 
become byte arrays and when we picked up
-049  // Time-to-live feature.  Version 4 was 
when we moved to byte arrays, HBASE-82.
-050  // Version  5 was when bloom filter 
descriptors were removed.
-051  // Version  6 adds metadata as a map 
where keys and values are byte[].
-052  // Version  7 -- add new compression 
and hfile blocksize to HColumnDescriptor (HBASE-1217)
-053  // Version  8 -- reintroduction of 
bloom filters, changed from boolean to enum
-054  // Version  9 -- add data block 
encoding
-055  // Version 10 -- change metadata to 
standard type.
-056  // Version 11 -- add column family 
level configuration.
-057  private static final byte 
COLUMN_DESCRIPTOR_VERSION = (byte) 11;
-058
-059  @InterfaceAudience.Private
-060  public static final String 
IN_MEMORY_COMPACTION = "IN_MEMORY_COMPACTION";
-061  private static final Bytes 
IN_MEMORY_COMPACTION_BYTES = new Bytes(Bytes.toBytes(IN_MEMORY_COMPACTION));
-062
-063  @InterfaceAudience.Private
-064  public static final String IN_MEMORY = 
HConstants.IN_MEMORY;
-065  private static final Bytes 
IN_MEMORY_BYTES = new Bytes(Bytes.toBytes(IN_MEMORY));
-066
-067  // These constants are used as FileInfo 
keys
-068  @InterfaceAudience.Private
-069  public static final String COMPRESSION 
= "COMPRESSION";
-070  private static final Bytes 
COMPRESSION_BYTES = new Bytes(Bytes.toBytes(COMPRESSION));
+044/**
+045 * @since 2.0.0
+046 */
+047@InterfaceAudience.Public
+048public class 
ColumnFamilyDescriptorBuilder {
+049  // For future backward compatibility
+050
+051  // Version  3 was when column names 
become byte arrays and when we picked up
+052  // Time-to-live feature.  Version 4 was 
when we moved to byte arrays, HBASE-82.
+053  // Version  5 was when bloom filter 
descriptors were removed.
+054  // Version  6 adds metadata as a map 
where keys and values are byte[].
+055  // Version  7 -- add new compression 
and hfile blocksize to HColumnDescriptor (HBASE-1217)
+056  // Version  8 -- reintroduction of 
bloom filters, changed from boolean to enum
+057  // Version  9 -- add data block 
encoding
+058  // Version 10 -- change metadata to 
standard type.
+059  // Version 11 -- add column family 
level configuration.
+060  private static final byte 
COLUMN_DESCRIPTOR_VERSION = (byte) 11;
+061
+062  @InterfaceAudience.Private
+063  public static final String 
IN_MEMORY_COMPACTION = "IN_MEMORY_COMPACTION";
+064  private static final Bytes 
IN_MEMORY_COMPACTION_BYTES = new Bytes(Bytes.toBytes(IN_MEMORY_COMPACTION));
+065
+066  @InterfaceAudience.Private
+067  public static final String IN_MEMORY = 
HConstants.IN_MEMORY;
+068  private static final Bytes 
IN_MEMORY_BYTES = new Bytes(Bytes.toBytes(IN_MEMORY));
+069
+070  // These constants are used as FileInfo 
keys
 071  @InterfaceAudience.Private
-072  public static final String 
COMPRESSION_COMPACT = "COMPRESSION_COMPACT";
-073  private static final Bytes 
COMPRESSION_COMPACT_BYTES = new Bytes(Bytes.toBytes(COMPRESSION_COMPACT));
+072  public static final String COMPRESSION 
= "COMPRESSION";
+073  private static final Bytes 
COMPRESSION_BYTES = new Bytes(Bytes.toBytes(COMPRESSION));
 074  @InterfaceAudience.Private
-075  public static final String 
DATA_BLOCK_ENCODING = "DATA_BLOCK_ENCODING";
-076  private static final Bytes 
DATA_BLOCK_ENCODING_BYTES = new Bytes(Bytes.toBytes(DATA_BLOCK_ENCODING));
-077  /**
-078   * Key for the BLOCKCACHE attribute. A 
more exact name would be
-079   * CACHE_DATA_ON_READ because this flag 
sets whether or not we cache DATA
-080   * blocks. We always cache INDEX and 
BLOOM blocks; caching these blocks cannot
-081   * be disabled.
-082   */
-083  @InterfaceAudience.Private
-084  public static final String BLOCKCACHE = 
"BLOCKCACHE";
-085  private static final Bytes 
BLOCKCACHE_BYTES = new Bytes(Bytes.toBytes(BLOCKCACHE));
+075  public static final String 
COMPRESSION_COMPACT = "COMPRESSION_COMPACT";
+076  private static final Bytes 
COMPRESSION_COMPACT_BYTES = new Bytes(Bytes.toBytes(COMPRESSION_COMPACT));
+077  @InterfaceAudience.Private
+078  public static final 

[05/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
index 60e4aeb..d86b9d3 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer.html
@@ -126,7 +126,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private abstract class RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer
+private abstract class RawAsyncHBaseAdmin.NamespaceProcedureBiConsumer
 extends RawAsyncHBaseAdmin.ProcedureBiConsumer
 
 
@@ -248,7 +248,7 @@ extends 
 
 namespaceName
-protected final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String namespaceName
+protected final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String namespaceName
 
 
 
@@ -265,7 +265,7 @@ extends 
 
 NamespaceProcedureBiConsumer
-NamespaceProcedureBiConsumer(AsyncAdmin admin,
+NamespaceProcedureBiConsumer(AsyncAdmin admin,
  http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String namespaceName)
 
 
@@ -283,7 +283,7 @@ extends 
 
 getOperationType
-abstract http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getOperationType()
+abstract http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getOperationType()
 
 
 
@@ -292,7 +292,7 @@ extends 
 
 getDescription
-http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getDescription()
+http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String getDescription()
 
 
 
@@ -301,7 +301,7 @@ extends 
 
 onFinished
-void onFinished()
+void onFinished()
 
 Specified by:
 onFinished in
 class RawAsyncHBaseAdmin.ProcedureBiConsumer
@@ -314,7 +314,7 @@ extends 
 
 onError
-void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
+void onError(http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable error)
 
 Specified by:
 onError in
 class RawAsyncHBaseAdmin.ProcedureBiConsumer

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.ProcedureBiConsumer.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.ProcedureBiConsumer.html
 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.ProcedureBiConsumer.html
index 2e51fbe..b15ba87 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.ProcedureBiConsumer.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.ProcedureBiConsumer.html
@@ -121,7 +121,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-private abstract class RawAsyncHBaseAdmin.ProcedureBiConsumer
+private abstract class RawAsyncHBaseAdmin.ProcedureBiConsumer
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements http://docs.oracle.com/javase/8/docs/api/java/util/function/BiConsumer.html?is-external=true";
 title="class or interface in java.util.function">BiConsumerVoid,http://docs.oracle.com/javase/8/docs/api/java/lang/Throwable.html?is-external=true";
 title="class or interface in java.lang">Throwable>
 
@@ -226,7 +226,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/util/function/
 
 
 admin
-protected final AsyncAdmin admin
+protected final AsyncAdmin admin
 
 
 
@@ -243,7 +243,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/util/function/
 
 
 ProcedureBiConsumer
-ProcedureBiConsumer(AsyncAdmin admin)
+ProcedureBiConsumer(AsyncAdmin admin)
 
 
 
@@ -260,7 +260,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/util/function/
 
 
 onFinished
-abstract void onFinished()
+abstract void onFinished()
 
 
 
@@ -269,7 +269,7 @@ implements http://docs.oracle.com/javase/8/docs/api/java/util/function/
 
 
 onError
-a

[17/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Private.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Private.html
 
b/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Private.html
index efec396..71d1e4d 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Private.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceAudience.Private.html
@@ -1179,97 +1179,101 @@ service.
 
 
 
+class 
+BackupClientFactory 
+
+
 interface 
 BackupCopyJob
 Backup copy job is a part of a backup process.
 
 
-
+
 class 
 BackupDriver
 Command-line entry point for backup operation
 
 
-
+
 class 
 BackupInfo
 An object to encapsulate the information for each backup 
session
 
 
-
+
 interface 
 BackupMergeJob
 Backup merge operation job interface.
 
 
-
+
 class 
 BackupRequest
 POJO class for backup request
 
 
-
+
 interface 
 BackupRestoreConstants
 BackupRestoreConstants holds a bunch of HBase Backup and 
Restore constants
 
 
-
+
 class 
 BackupRestoreFactory
 Factory implementation for backup/restore related jobs
 
 
-
+
 class 
 BackupTableInfo
 Backup related information encapsulated for a table.
 
 
-
+
 class 
 BackupType 
 
-
+
 class 
 FailedArchiveException
 Exception indicating that some files in the requested set 
could not be archived.
 
 
-
+
 class 
 HBackupFileSystem
View to an on-disk Backup Image FileSystem. Provides the set 
of methods necessary to interact with
  the on-disk Backup Image data.
 
 
-
+
 class 
 HFileArchiver
 Utility class to handle the removal of HFiles (or the 
respective StoreFiles)
  for a HRegion from the FileSystem.
 
 
-
+
 (package private) class 
 LogUtils
 Utility class for disabling Zk and client logging
 
 
-
+
 class 
 RestoreDriver
 Command-line entry point for restore operation
 
 
-
+
 interface 
 RestoreJob
Restore operation job interface. Concrete implementation is 
provided by backup provider, see
  BackupRestoreFactory
 
 
-
+
 class 
 RestoreRequest
 POJO class for restore request
@@ -4635,6 +4639,12 @@ service.
 
 
 class 
+RegionSizeCalculator
+Computes size of each region for given table and given 
column families.
+
+
+
+class 
 TableSnapshotInputFormatImpl
 Hadoop MR API-agnostic implementation for mapreduce over 
table snapshots.
 
@@ -7826,66 +7836,60 @@ service.
 
 
 class 
-LegacyScanQueryMatcher
-Deprecated. 
-
-
-
-class 
 MajorCompactionScanQueryMatcher
 Query matcher for major compaction.
 
 
-
+
 class 
 MinorCompactionScanQueryMatcher
 Query matcher for minor compaction.
 
 
-
+
 class 
 NewVersionBehaviorTracker
 A tracker both implementing ColumnTracker and 
DeleteTracker, used for mvcc-sensitive scanning.
 
 
-
+
 class 
 NormalUserScanQueryMatcher
 Query matcher for normal user scan.
 
 
-
+
 class 
 RawScanQueryMatcher
 Query matcher for raw scan.
 
 
-
+
 class 
 ScanDeleteTracker
 This class is responsible for the tracking and enforcement 
of Deletes during the course of a Scan
  operation.
 
 
-
+
 class 
 ScanQueryMatcher
 A query matcher that is specifically designed for the scan 
case.
 
 
-
+
 class 
 ScanWildcardColumnTracker
 Keeps track of the columns for a scan if they are not 
explicitly specified
 
 
-
+
 class 
 StripeCompactionScanQueryMatcher
 Query matcher for stripe compaction if range drop deletes 
is used.
 
 
-
+
 class 
 UserScanQueryMatcher
 Query matcher for user scan.
@@ -10080,17 +10084,11 @@ service.
 
 
 class 
-RegionSizeCalculator
-Computes size of each region for given table and given 
column families.
-
-
-
-class 
 RegionSplitCalculator
 This is a generic region split calculator.
 
 
-
+
 class 
 RegionSplitter
 The RegionSplitter 
class provides several utilities to help in the
@@ -10098,99 +10096,99 @@ service.
  instead of having HBase handle that automatically.
 
 
-
+
 class 
 RetryCounter 
 
-
+
 class 
 RetryCounterFactory 
 
-
+
 class 
 RowBloomContext
 Handles ROW bloom related context.
 
 
-
+
 class 
 RowBloomHashKey 
 
-
+
 class 
 RowColBloomContext
 Handles ROWCOL bloom related context.
 
 
-
+
 class 
 RowColBloomHashKey
A hash key for ROWCOL bloom.
 
 
-
+
 class 
 ServerCommandLine
 Base class for command lines that start up various HBase 
daemons.
 
 
-
+
 class 
 Sleeper
 Sleeper for current thread.
 
 
-
+
 class 
 SoftObjectPool
 A SoftReference based shared object pool.
 
 
-
+
 class 
 StealJobQueue
 This queue allows a ThreadPoolExecutor to steal jobs from 
another ThreadPoolExecutor.
 
 
-
+
 class 
 Strings
 Utility for Strings.
 
 
-
+
 class 
 Threads
 Thread Utility
 
 
-
+
 class 
 Triple
 Utility class to manage a triple.
 
 
-
+
 class 
 UnsafeAccess 
 
-
+
 class 
 UnsafeAvailChecker 
 
-
+
 class 
 WeakObjectPool
 A WeakReference based

[21/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/checkstyle.rss
--
diff --git a/checkstyle.rss b/checkstyle.rss
index 18266d6..ce692ce 100644
--- a/checkstyle.rss
+++ b/checkstyle.rss
@@ -25,8 +25,8 @@ under the License.
 en-us
 ©2007 - 2017 The Apache Software Foundation
 
-  File: 2031,
- Errors: 12867,
+  File: 2029,
+ Errors: 12865,
  Warnings: 0,
  Infos: 0
   
@@ -1777,7 +1777,7 @@ under the License.
   0
 
 
-  1
+  0
 
   
   
@@ -2846,7 +2846,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mapreduce.PutSortReducer.java";>org/apache/hadoop/hbase/mapreduce/PutSortReducer.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.CompactionPipeline.java";>org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
 
 
   0
@@ -2855,12 +2855,12 @@ under the License.
   0
 
 
-  2
+  7
 
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.CompactionPipeline.java";>org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mapreduce.PutSortReducer.java";>org/apache/hadoop/hbase/mapreduce/PutSortReducer.java
 
 
   0
@@ -2869,7 +2869,7 @@ under the License.
   0
 
 
-  7
+  2
 
   
   
@@ -6299,7 +6299,7 @@ under the License.
   0
 
 
-  3
+  2
 
   
   
@@ -6831,7 +6831,7 @@ under the License.
   0
 
 
-  4
+  2
 
   
   
@@ -7046,20 +7046,6 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#test-classes.org.apache.hadoop.hbase.PerformanceEvaluation_Counter.properties";>test-classes/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
-
-
-  0
-
-
-  0
-
-
-  1
-
-  
-  
-
   http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseDistributedScan.java";>org/apache/hadoop/hbase/spark/example/hbasecontext/JavaHBaseDistributedScan.java
 
 
@@ -8068,7 +8054,7 @@ under the License.
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat.java";>org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.OffheapChunk.java";>org/apache/hadoop/hbase/regionserver/OffheapChunk.java
 
 
   0
@@ -8077,12 +8063,12 @@ under the License.
   0
 
 
-  3
+  0
 
   
   
 
-  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.regionserver.OffheapChunk.java";>org/apache/hadoop/hbase/regionserver/OffheapChunk.java
+  http://hbase.apache.org/checkstyle.html#org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat.java";>org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
 
 
   0
@@ -8091,7 +8077,7 @@ under the License.
   0
 
 
-  0
+  3
 
   
   
@@ -8273,7 +8259,7 @@ under the License.
   0
 
 
-  2
+  3
 
   
   
@@ -8866,20 +8852,6 @@ under the License.
   
   
 
-  http://hbase.ap

[10/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/BatchScanResultCache.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/BatchScanResultCache.html 
b/devapidocs/org/apache/hadoop/hbase/client/BatchScanResultCache.html
index 144a1f6..584fbb5 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/BatchScanResultCache.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/BatchScanResultCache.html
@@ -114,7 +114,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class BatchScanResultCache
+public class BatchScanResultCache
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements ScanResultCache
 A scan result cache for batched scan, i.e,
@@ -122,6 +122,10 @@ implements Since:
+2.0.0
+
 
 
 
@@ -262,7 +266,7 @@ implements 
 
 batch
-private final int batch
+private final int batch
 
 
 
@@ -271,7 +275,7 @@ implements 
 
 lastCell
-private Cell lastCell
+private Cell lastCell
 
 
 
@@ -280,7 +284,7 @@ implements 
 
 lastResultPartial
-private boolean lastResultPartial
+private boolean lastResultPartial
 
 
 
@@ -289,7 +293,7 @@ implements 
 
 partialResults
-private final http://docs.oracle.com/javase/8/docs/api/java/util/Deque.html?is-external=true";
 title="class or interface in java.util">Deque partialResults
+private final http://docs.oracle.com/javase/8/docs/api/java/util/Deque.html?is-external=true";
 title="class or interface in java.util">Deque partialResults
 
 
 
@@ -298,7 +302,7 @@ implements 
 
 numCellsOfPartialResults
-private int numCellsOfPartialResults
+private int numCellsOfPartialResults
 
 
 
@@ -307,7 +311,7 @@ implements 
 
 numberOfCompleteRows
-private int numberOfCompleteRows
+private int numberOfCompleteRows
 
 
 
@@ -324,7 +328,7 @@ implements 
 
 BatchScanResultCache
-public BatchScanResultCache(int batch)
+public BatchScanResultCache(int batch)
 
 
 
@@ -341,7 +345,7 @@ implements 
 
 recordLastResult
-private void recordLastResult(Result result)
+private void recordLastResult(Result result)
 
 
 
@@ -350,7 +354,7 @@ implements 
 
 createCompletedResult
-private Result createCompletedResult()
+private Result createCompletedResult()
   throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 
 Throws:
@@ -364,7 +368,7 @@ implements 
 
 regroupResults
-private Result regroupResults(Result result)
+private Result regroupResults(Result result)
 
 
 
@@ -373,7 +377,7 @@ implements 
 
 addAndGet
-public Result[] addAndGet(Result[] results,
+public Result[] addAndGet(Result[] results,
   boolean isHeartbeatMessage)
throws http://docs.oracle.com/javase/8/docs/api/java/io/IOException.html?is-external=true";
 title="class or interface in java.io">IOException
 Description copied from 
interface: ScanResultCache
@@ -397,7 +401,7 @@ implements 
 
 clear
-public void clear()
+public void clear()
 Description copied from 
interface: ScanResultCache
Clear the cached result if any. Called on a scan error, when we will start again from the start of a row.
@@ -413,7 +417,7 @@ implements 
 
 numberOfCompleteRows
-public int numberOfCompleteRows()
+public int numberOfCompleteRows()
 Description copied from 
interface: ScanResultCache
 Return the number of complete rows. Used to implement 
limited scan.
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.html 
b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.html
index 6f4644b..0cd8d28 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.html
@@ -106,13 +106,17 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface ColumnFamilyDescriptor
+public interface ColumnFamilyDescriptor
A ColumnFamilyDescriptor contains information about a 
column family such as the
  number of versions, compression settings, etc.
 
  It is used as input when creating a table or adding a column.
 
  To construct a new instance, use the ColumnFamilyDescriptorBuilder 
methods
+
+Since:
+2.0.0
+
 
 
 
@@ -329,7 +333,7 @@ public interface 
 COMPARATOR
 @InterfaceAudience.Private
-static final http://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html?is-external=true";
 title="class or interface in java.util">Comparator COMPARATOR
+static final http://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html?is-external=true";
 title="c

[44/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/Get.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Get.html 
b/apidocs/org/apache/hadoop/hbase/client/Get.html
index bfb42e9..f5ad665 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Get.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Get.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":42,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":42,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":42,"i35":42,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":42,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":42,"i27":10,"i28":10,"i29":10,"i30":10,"i31":10,"i32":10,"i33":10,"i34":42,"i35":42,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -170,7 +170,7 @@ implements 
 
 Fields inherited from class org.apache.hadoop.hbase.client.Query
-colFamTimeRangeMap, consistency, filter, loadColumnFamiliesOnDemand, targetReplicaId, tr
+colFamTimeRangeMap, consistency, filter, loadColumnFamiliesOnDemand, targetReplicaId
 
 
 
@@ -301,20 +301,26 @@ implements 
 
 
+TimeRange
+getTimeRange()
+Method for retrieving the get's TimeRange
+
+
+
 boolean
 hasFamilies()
 Method for checking if any families have been inserted into 
this Get
 
 
-
+
 int
 hashCode() 
 
-
+
 boolean
 isCheckExistenceOnly() 
 
-
+
 boolean
 isClosestRowBefore()
 Deprecated. 
@@ -322,57 +328,57 @@ implements 
 
 
-
+
 int
 numFamilies()
 Method for retrieving the number of families to get 
from
 
 
-
+
 Get
 readAllVersions()
 Get all available versions.
 
 
-
+
 Get
 readVersions(int versions)
 Get up to the specified number of versions of each 
column.
 
 
-
+
 Get
 setACL(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,org.apache.hadoop.hbase.security.access.Permission> perms) 
 
-
+
 Get
 setACL(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String user,
   
org.apache.hadoop.hbase.security.access.Permission perms) 
 
-
+
 Get
 setAttribute(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name,
 byte[] value)
 Sets an attribute.
 
 
-
+
 Get
 setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations)
 Sets the authorizations to be used by this Query
 
 
-
+
 Get
 setCacheBlocks(boolean cacheBlocks)
 Set whether blocks should be cached for this Get.
 
 
-
+
 Get
 setCheckExistenceOnly(boolean checkExistenceOnly) 
 
-
+
 Get
 setClosestRowBefore(boolean closestRowBefore)
 Deprecated. 
@@ -380,7 +386,7 @@ implements 
 
 
-
+
 Get
 setColumnFamilyTimeRange(byte[] cf,
 long minStamp,
@@ -389,11 +395,6 @@ implements 
 
 
-
-Get
-setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
 
 Get
 setConsistency(Consistency consistency)
@@ -475,17 +476,11 @@ implements 
 
 Get
-setTimeRange(TimeRange tr)
-Get versions of columns only within the specified timestamp 
range,
-
-
-
-Get
 setTimeStamp(long timestamp)
 Get versions of columns with the specified timestamp.
 
 
-
+
Map<String,Object>
 toMap(int maxCols)
 Compile the details beyond the scope of getFingerprint 
(row, columns,
@@ -498,7 +493,7 @@ implements 
 
 Methods inherited from class org.apache.hadoop.hbase.client.Query
-doLoadColumnFamiliesOnDemand, getACL, getAuthorizations, getColumnFamilyTimeRange, getConsistency, getFilter, getIsolationLevel, getLoadColumnFamiliesOnDemandValue, getReplicaId, getTimeRange
+doLoadColumnFamiliesOnDemand, getACL, getAuthorizations, getColumnFamilyTimeRange, getConsistency, getFilter, getIsolationLevel, getLoadColumnFamiliesOnDemandValue, getReplicaId
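Pulling together the Get methods shown in this diff (readVersions, the per-family time range setter, and the getTimeRange accessor), a minimal Java sketch; the Table handle is assumed to come from a Connection elsewhere and is not part of this page.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetSketch {
  // 'table' is assumed to be obtained from Connection#getTable elsewhere.
  static Result fetch(Table table) throws IOException {
    Get get = new Get(Bytes.toBytes("row-1"));
    get.addFamily(Bytes.toBytes("cf"));
    get.readVersions(3);                               // up to 3 versions of each column
    get.setColumnFamilyTimeRange(Bytes.toBytes("cf"),  // per-family time window
        0L, System.currentTimeMillis());
    System.out.println(get.getTimeRange());            // the get's overall TimeRange
    return table.get(get);
  }
}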

[13/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.Callable.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.Callable.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.Callable.html
index c0df6ac..239ee99 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.Callable.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.Callable.html
@@ -110,7 +110,7 @@ var activeTableTab = "activeTableTab";
 
 
 http://docs.oracle.com/javase/8/docs/api/java/lang/FunctionalInterface.html?is-external=true";
 title="class or interface in java.lang">@FunctionalInterface
-public static interface AsyncMasterRequestRpcRetryingCaller.Callable
+public static interface AsyncMasterRequestRpcRetryingCaller.Callable
 
 
 
@@ -155,7 +155,7 @@ public static interface 
 
 call
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFuture call(HBaseRpcController controller,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFuture call(HBaseRpcController controller,
   
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos.MasterService.Interface stub)
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.html
index 51a7fd4..8bb3d98 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncMasterRequestRpcRetryingCaller.html
@@ -115,9 +115,13 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class AsyncMasterRequestRpcRetryingCaller
+public class AsyncMasterRequestRpcRetryingCaller
 extends AsyncRpcRetryingCaller
 Retry caller for a request call to master.
+
+Since:
+2.0.0
+
 
 
 
@@ -248,7 +252,7 @@ extends 
 
 callable
-private final AsyncMasterRequestRpcRetryingCaller.Callable callable
+private final AsyncMasterRequestRpcRetryingCaller.Callable callable
 
 
 
@@ -265,7 +269,7 @@ extends 
 
 AsyncMasterRequestRpcRetryingCaller
-public AsyncMasterRequestRpcRetryingCaller(org.apache.hadoop.hbase.shaded.io.netty.util.HashedWheelTimer retryTimer,
+public AsyncMasterRequestRpcRetryingCaller(org.apache.hadoop.hbase.shaded.io.netty.util.HashedWheelTimer retryTimer,
AsyncConnectionImpl conn,
AsyncMasterRequestRpcRetryingCaller.Callable callable,
long pauseNs,
@@ -289,7 +293,7 @@ extends 
 
 doCall
-protected void doCall()
+protected void doCall()
 
 Specified by:
 doCall in
 class AsyncRpcRetryingCaller
@@ -302,7 +306,7 @@ extends 
 
 call
-public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFuture call()
+public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFuture call()
 
 Overrides:
 call in
 class AsyncRpcRetryingCaller

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncRequestFuture.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncRequestFuture.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncRequestFuture.html
index 193d228..a4ee68a 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncRequestFuture.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncRequestFuture.html
@@ -106,11 +106,15 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public interface AsyncRequestFuture
+public interface AsyncRequestFuture
 The context used to wait for results from one submit call.
  1) If AsyncProcess is set to track errors globally, and not per call (for 
HTable puts),
 then errors and failed operations in this object will reflect global 
errors.
  2) If submit call is made with needResults false, results will not be 
saved.
+
+Since:
+2.0.0
+
 
 
 
@@ -172,7 +176,7 @@ public interface 
 
 hasError
-boolean hasErro

[46/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/AsyncTableBuilder.html 
b/apidocs/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
index ea9fa85..a02bfd5 100644
--- a/apidocs/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
+++ b/apidocs/org/apache/hadoop/hbase/client/AsyncTableBuilder.html
@@ -102,12 +102,16 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncTableBuilder
+public interface AsyncTableBuilder
 For creating AsyncTable 
or RawAsyncTable.
  
  The implementation should have default configurations set before returning 
the builder to the user.
  So users are free to only set the configs they care about to create a new
  AsyncTable/RawAsyncTable instance.
+
+Since:
+2.0.0
+
 
 
 
@@ -214,7 +218,7 @@ public interface 
 
 setOperationTimeout
-AsyncTableBuilder setOperationTimeout(long timeout,
+AsyncTableBuilder setOperationTimeout(long timeout,
  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for a whole operation such as get, put or 
delete. Notice that scan will not be
 affected by this value; see scanTimeoutNs.
@@ -235,7 +239,7 @@ public interface 
 
 setScanTimeout
-AsyncTableBuilder setScanTimeout(long timeout,
+AsyncTableBuilder setScanTimeout(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
Now that we have heartbeat support for scan, ideally a scan 
will never time out unless the RS
 crashes. The RS will always return something before the rpc times out or the scan 
times out to tell
@@ -253,7 +257,7 @@ public interface 
 
 setRpcTimeout
-AsyncTableBuilder setRpcTimeout(long timeout,
+AsyncTableBuilder setRpcTimeout(long timeout,
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for each rpc request.
  
@@ -267,7 +271,7 @@ public interface 
 
 setReadRpcTimeout
-AsyncTableBuilder setReadRpcTimeout(long timeout,
+AsyncTableBuilder setReadRpcTimeout(long timeout,
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for each read(get, scan) rpc request.
 
@@ -278,7 +282,7 @@ public interface 
 
 setWriteRpcTimeout
-AsyncTableBuilder setWriteRpcTimeout(long timeout,
+AsyncTableBuilder setWriteRpcTimeout(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for each write(put, delete) rpc request.
 
@@ -289,7 +293,7 @@ public interface 
 
 setRetryPause
-AsyncTableBuilder setRetryPause(long pause,
+AsyncTableBuilder setRetryPause(long pause,
http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set the base pause time for retrying. We use an exponential 
policy to generate sleep time when
  retrying.
@@ -301,7 +305,7 @@ public interface 
 
 setMaxRetries
-default AsyncTableBuilder setMaxRetries(int maxRetries)
+default AsyncTableBuilder setMaxRetries(int maxRetries)
 Set the max retry times for an operation. Usually it is the 
max attempt times minus 1.
  
 Operation timeout and max attempt times (or max retry times) are both 
limitations for retrying,
@@ -319,7 +323,7 @@ public interface 
 
 setMaxAttempts
-AsyncTableBuilder setMaxAttempts(int maxAttempts)
+AsyncTableBuilder setMaxAttempts(int maxAttempts)
 Set the max attempt times for an operation. Usually it is 
the max retry times plus 1. Operation
 timeout and max attempt times (or max retry times) are both limitations for 
retrying; we will
 stop retrying when we reach any of the limitations.
@@ -336,7 +340,7 @@ public interface 
 
 setStartLogErrorsCnt
-AsyncTableBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
+AsyncTableBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
 Set the number of retries that are allowed before we start 
to log.
 
 
@@ -346,7 +350,7 @@ public interface 
 
 build
-T build()
+T build()
 Create the AsyncTable 
or RawAsyncTable instance.
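
For orientation, a minimal sketch of this builder in use, written only against the
setters documented above. How the AsyncTableBuilder is obtained (normally from an
AsyncConnection) is not shown in this excerpt, and all timeout values are arbitrary.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.AsyncTable;
    import org.apache.hadoop.hbase.client.AsyncTableBuilder;

    class AsyncTableBuilderSketch {
      // The builder is assumed to come from an AsyncConnection; the values are examples only.
      static AsyncTable configure(AsyncTableBuilder<AsyncTable> builder) {
        return builder
            .setOperationTimeout(30, TimeUnit.SECONDS)  // budget for a whole get/put/delete
            .setScanTimeout(2, TimeUnit.MINUTES)        // separate budget for scans
            .setRpcTimeout(10, TimeUnit.SECONDS)        // budget for each rpc request
            .setRetryPause(100, TimeUnit.MILLISECONDS)  // base pause for exponential backoff
            .setMaxAttempts(5)                          // stop retrying after five attempts
            .build();
      }
    }

Operation timeout and max attempts are both limits on retrying; whichever is reached
first ends the call.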
 
 

http://git-wip-us.apache.org/repos/asf/hbase-site/blob

[48/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html 
b/apidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
index 591b827..90aac08 100644
--- a/apidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
+++ b/apidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
@@ -102,11 +102,15 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncAdmin
+public interface AsyncAdmin
 The asynchronous administrative API for HBase.
  
  This feature is still under development, so marked as IA.Private. Will change 
to public when
  done. Use it with caution.
+
+Since:
+2.0.0
+
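
As a hedged illustration of the asynchronous style (the AsyncAdmin instance and the
table name below are assumptions for the sketch, not part of this diff), every call
returns a CompletableFuture that is consumed through callbacks:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.AsyncAdmin;

    class AsyncAdminSketch {
      static void inspect(AsyncAdmin admin) {
        admin.tableExists(TableName.valueOf("example_table"))   // hypothetical table name
            .thenAccept(exists -> System.out.println("exists? " + exists));
        admin.listTableNames()
            .thenAccept(names -> System.out.println("userspace tables: " + names));
      }
    }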
 
 
 
@@ -940,7 +944,7 @@ public interface 
 
 tableExists
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
 
 Parameters:
 tableName - Table to check.
@@ -956,7 +960,7 @@ public interface 
 
 listTables
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables()
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables()
 List all the userspace tables.
 
 Returns:
@@ -972,7 +976,7 @@ public interface 
 
 listTables
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,
 
boolean includeSysTables)
 List all the tables matching the given pattern.
 
@@ -990,7 +994,7 @@ public interface 
 
 listTableNames
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames()
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames()
 List all of the names of userspace tables.
 
 Returns:
@@ -1006,7 +1010,7 @@ public interface 
 
 listTableNames
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames(http://docs.oracle.com/javase/8/

[12/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.html
--
diff --git 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.html
 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.html
index f5a9061..06c58a6 100644
--- 
a/devapidocs/org/apache/hadoop/hbase/client/AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.html
+++ 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder.html
@@ -118,7 +118,7 @@ var activeTableTab = "activeTableTab";
 
 
 
-public class AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder
+public class AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder
 extends AsyncRpcRetryingCallerFactory.BuilderBase
 
 
@@ -320,7 +320,7 @@ extends 
 
 scannerId
-private http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true";
 title="class or interface in java.lang">Long scannerId
+private http://docs.oracle.com/javase/8/docs/api/java/lang/Long.html?is-external=true";
 title="class or interface in java.lang">Long scannerId
 
 
 
@@ -329,7 +329,7 @@ extends 
 
 scan
-private Scan scan
+private Scan scan
 
 
 
@@ -338,7 +338,7 @@ extends 
 
 scanMetrics
-private ScanMetrics scanMetrics
+private ScanMetrics scanMetrics
 
 
 
@@ -347,7 +347,7 @@ extends 
 
 resultCache
-private ScanResultCache resultCache
+private ScanResultCache resultCache
 
 
 
@@ -356,7 +356,7 @@ extends 
 
 consumer
-private RawScanResultConsumer 
consumer
+private RawScanResultConsumer 
consumer
 
 
 
@@ -365,7 +365,7 @@ extends 
 
 stub
-private org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface
 stub
+private org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface
 stub
 
 
 
@@ -374,7 +374,7 @@ extends 
 
 loc
-private HRegionLocation loc
+private HRegionLocation loc
 
 
 
@@ -383,7 +383,7 @@ extends 
 
 isRegionServerRemote
-private boolean isRegionServerRemote
+private boolean isRegionServerRemote
 
 
 
@@ -392,7 +392,7 @@ extends 
 
 scannerLeaseTimeoutPeriodNs
-private long scannerLeaseTimeoutPeriodNs
+private long scannerLeaseTimeoutPeriodNs
 
 
 
@@ -401,7 +401,7 @@ extends 
 
 scanTimeoutNs
-private long scanTimeoutNs
+private long scanTimeoutNs
 
 
 
@@ -410,7 +410,7 @@ extends 
 
 rpcTimeoutNs
-private long rpcTimeoutNs
+private long rpcTimeoutNs
 
 
 
@@ -427,7 +427,7 @@ extends 
 
 ScanSingleRegionCallerBuilder
-public ScanSingleRegionCallerBuilder()
+public ScanSingleRegionCallerBuilder()
 
 
 
@@ -444,7 +444,7 @@ extends 
 
 id
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder id(long scannerId)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder id(long scannerId)
 
 
 
@@ -453,7 +453,7 @@ extends 
 
 setScan
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder setScan(Scan scan)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder setScan(Scan scan)
 
 
 
@@ -462,7 +462,7 @@ extends 
 
 metrics
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder metrics(ScanMetrics scanMetrics)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder metrics(ScanMetrics scanMetrics)
 
 
 
@@ -471,7 +471,7 @@ extends 
 
 remote
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder remote(boolean isRegionServerRemote)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder remote(boolean isRegionServerRemote)
 
 
 
@@ -480,7 +480,7 @@ extends 
 
 resultCache
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder resultCache(ScanResultCache resultCache)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder resultCache(ScanResultCache resultCache)
 
 
 
@@ -489,7 +489,7 @@ extends 
 
 consumer
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder consumer(RawScanResultConsumer consumer)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder consumer(RawScanResultConsumer consumer)
 
 
 
@@ -498,7 +498,7 @@ extends 
 
 stub
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder stub(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder stub(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.Interface stub)
 
 
 
@@ -507,7 +507,7 @@ extends 
 
 location
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder location(HRegionLocation loc)
+public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder location(HRegionLocation loc)
 
 
 
@@ -516,7 +516,7 @@ extends 
 
 scannerLeaseTimeoutPeriod
-public AsyncRpcRetryingCallerFactory.ScanSingleRegionCallerBuilder sca

[42/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/Scan.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/Scan.html 
b/apidocs/org/apache/hadoop/hbase/client/Scan.html
index 56dcad0..dd4c38d 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Scan.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Scan.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":9,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":42,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":42,"i31":10,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":42,"i55":42,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":42,"i67":42,"i68":42,"i69":10,"i70":10,"i71":10,"i72":10,"i73":10,"i74":10,"i75":10,"i76":10};
+var methods = 
{"i0":10,"i1":10,"i2":9,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":42,"i18":10,"i19":10,"i20":10,"i21":10,"i22":10,"i23":10,"i24":10,"i25":10,"i26":10,"i27":10,"i28":10,"i29":10,"i30":10,"i31":42,"i32":10,"i33":10,"i34":10,"i35":10,"i36":10,"i37":10,"i38":10,"i39":10,"i40":10,"i41":10,"i42":10,"i43":10,"i44":10,"i45":10,"i46":10,"i47":10,"i48":10,"i49":10,"i50":10,"i51":10,"i52":10,"i53":10,"i54":42,"i55":42,"i56":10,"i57":10,"i58":10,"i59":10,"i60":10,"i61":10,"i62":10,"i63":10,"i64":10,"i65":10,"i66":42,"i67":42,"i68":42,"i69":10,"i70":10,"i71":10,"i72":10,"i73":10,"i74":10,"i75":10};
 var tabs = {65535:["t0","All Methods"],1:["t1","Static 
Methods"],2:["t2","Instance Methods"],8:["t4","Concrete 
Methods"],32:["t6","Deprecated Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -244,7 +244,7 @@ extends 
 
 Fields inherited from class org.apache.hadoop.hbase.client.Query
-colFamTimeRangeMap,
 consistency,
 filter,
 loadColumnFamiliesOnDemand,
 targetReplicaId,
 tr
+colFamTimeRangeMap,
 consistency,
 filter,
 loadColumnFamiliesOnDemand,
 targetReplicaId
 
 
 
@@ -422,48 +422,52 @@ extends getStopRow() 
 
 
+TimeRange
+getTimeRange() 
+
+
 boolean
 hasFamilies() 
 
-
+
 boolean
 hasFilter() 
 
-
+
 boolean
 includeStartRow() 
 
-
+
 boolean
 includeStopRow() 
 
-
+
 http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true";
 title="class or interface in java.lang">Boolean
 isAsyncPrefetch() 
 
-
+
 boolean
 isGetScan() 
 
-
+
 boolean
 isNeedCursorResult() 
 
-
+
 boolean
 isRaw() 
 
-
+
 boolean
 isReversed()
 Get whether this scan is a reversed one.
 
 
-
+
 boolean
 isScanMetricsEnabled() 
 
-
+
 boolean
 isSmall()
 Deprecated. 
@@ -471,74 +475,74 @@ extends 
 
 
-
+
 int
 numFamilies() 
 
-
+
 Scan
 readAllVersions()
 Get all available versions.
 
 
-
+
 Scan
 readVersions(int versions)
 Get up to the specified number of versions of each 
column.
 
 
-
+
 Scan
 setACL(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,org.apache.hadoop.hbase.security.access.Permission> perms) 
 
-
+
 Scan
 setACL(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String user,
   
org.apache.hadoop.hbase.security.access.Permission perms) 
 
-
+
 Scan
 setAllowPartialResults(boolean allowPartialResults)
 Set whether the caller wants to see partial results 
when the server returns
 less-than-expected cells.
 
 
-
+
 Scan
 setAsyncPrefetch(boolean asyncPrefetch) 
 
-
+
 Scan
 setAttribute(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String name,
 byte[] value)
 Sets an attribute.
 
 
-
+
 Scan
 setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations)
 Sets the authorizations to be used by this Query
 
 
-
+
 Scan
 setBatch(int batch)
 Set the maximum number of cells to return for each call to 
next().
 
 
-
+
 Scan
 setCacheBlocks(boolean cacheBlocks)
 Set whether blocks should be cached for this Scan.
 
 
-
+
 Scan
 setCaching(int caching)
 Set the number of rows for caching that will be passed to 
scanners.
 
 
-
+
 Scan
 setColumnFamilyTimeRange(byte[] cf,
 long minStamp,
@@ -547,11 +551,6 @@ extends 
 
 
-
-Scan
-setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
 
 Scan
 setConsistency(Consistency consistency)
@@ -718,49 +717,43 @@ extends Scan
 setTimeRange(lon

[06/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/Query.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/Query.html 
b/devapidocs/org/apache/hadoop/hbase/client/Query.html
index f27098f..a53fe3a 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/Query.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/Query.html
@@ -18,7 +18,7 @@
 catch(err) {
 }
 //-->
-var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10,"i18":10,"i19":10,"i20":10,"i21":10};
+var methods = 
{"i0":10,"i1":10,"i2":10,"i3":10,"i4":10,"i5":10,"i6":10,"i7":10,"i8":10,"i9":10,"i10":10,"i11":10,"i12":10,"i13":10,"i14":10,"i15":10,"i16":10,"i17":10};
 var tabs = {65535:["t0","All Methods"],2:["t2","Instance 
Methods"],8:["t4","Concrete Methods"]};
 var altColor = "altColor";
 var rowColor = "rowColor";
@@ -128,7 +128,7 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public abstract class Query
+public abstract class Query
 extends OperationWithAttributes
 
 
@@ -172,10 +172,6 @@ extends protected int
 targetReplicaId 
 
-
-protected TimeRange
-tr 
-
 
 
 
@@ -260,25 +256,21 @@ extends 
-TimeRange
-getTimeRange() 
-
-
 Query
 setACL(http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">MapString,Permission> perms) 
 
-
+
 Query
 setACL(http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String user,
   Permission perms) 
 
-
+
 Query
 setAuthorizations(Authorizations authorizations)
 Sets the authorizations to be used by this Query
 
 
-
+
 Query
 setColumnFamilyTimeRange(byte[] cf,
 long minStamp,
@@ -287,56 +279,37 @@ extends 
-Query
-setColumnFamilyTimeRange(byte[] cf,
-TimeRange tr) 
-
-
+
 Query
 setConsistency(Consistency consistency)
 Sets the consistency level for this operation
 
 
-
+
 Query
 setFilter(Filter filter)
 Apply the specified server-side filter when performing the 
Query.
 
 
-
+
 Query
 setIsolationLevel(IsolationLevel level)
 Set the isolation level for this query.
 
 
-
+
 Query
 setLoadColumnFamiliesOnDemand(boolean value)
 Set the value indicating whether loading CFs on demand 
should be allowed (cluster
  default is false).
 
 
-
+
 Query
 setReplicaId(int Id)
 Specify region replica id where Query will fetch data 
from.
 
 
-
-Query
-setTimeRange(long minStamp,
-long maxStamp)
-Sets the TimeRange to be used by this Query
- [minStamp, maxStamp).
-
-
-
-Query
-setTimeRange(TimeRange tr)
-Sets the TimeRange to be used by this Query
-
-
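
To make the remaining setters concrete, a short sketch that configures a Scan (one of
the Query subclasses); the column family, row prefix, and time range below are
arbitrary examples, not values taken from this diff.

    import org.apache.hadoop.hbase.client.Consistency;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    class QuerySetterSketch {
      static Scan configure() {
        Scan scan = new Scan();
        scan.setFilter(new PrefixFilter(Bytes.toBytes("row-")));       // server-side filter
        scan.setConsistency(Consistency.TIMELINE);                     // allow timeline reads
        scan.setColumnFamilyTimeRange(Bytes.toBytes("cf"), 0L, 1000L); // per-family time range
        scan.setLoadColumnFamiliesOnDemand(true);                      // load CFs on demand
        return scan;
      }
    }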
 
 
 
@@ -379,7 +352,7 @@ extends 
 
 ISOLATION_LEVEL
-private static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String ISOLATION_LEVEL
+private static final http://docs.oracle.com/javase/8/docs/api/java/lang/String.html?is-external=true";
 title="class or interface in java.lang">String ISOLATION_LEVEL
 
 See Also:
 Constant
 Field Values
@@ -392,7 +365,7 @@ extends 
 
 filter
-protected Filter filter
+protected Filter filter
 
 
 
@@ -401,7 +374,7 @@ extends 
 
 targetReplicaId
-protected int targetReplicaId
+protected int targetReplicaId
 
 
 
@@ -410,7 +383,7 @@ extends 
 
 consistency
-protected Consistency consistency
+protected Consistency consistency
 
 
 
@@ -419,25 +392,16 @@ extends 
 
 colFamTimeRangeMap
-protected http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map colFamTimeRangeMap
+protected http://docs.oracle.com/javase/8/docs/api/java/util/Map.html?is-external=true";
 title="class or interface in java.util">Map colFamTimeRangeMap
 
 
 
 
 
-
-
-loadColumnFamiliesOnDemand
-protected http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true";
 title="class or interface in java.lang">Boolean loadColumnFamiliesOnDemand
-
-
-
-
-
 
 
-tr
-protected TimeRange tr
+loadColumnFamiliesOnDemand
+protected http://docs.oracle.com/javase/8/docs/api/java/lang/Boolean.html?is-external=true";
 title="class or interface in java.lang">Boolean loadColumnFamiliesOnDemand
 
 
 
@@ -454,7 +418,7 @@ extends 
 
 Query
-public Query()
+public Query()
 
 
 
@@ -471,7 +435,7 @@ extends 
 
 getFilter
-public Filter getFilter()
+public Filter getFilter()
 
 Returns:
 Filter
@@ -484,7 +448,7 @@ extends 
 
 setFilter
-public Query setFilter(Filter filter)
+public Query setFilter(Filter filter)
 Apply the specified server-side filter when performing the 
Query. Only
  Filter.filterKeyVa

[23/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html
index c7ba868..852622e 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html
@@ -106,8 +106,8 @@
 098private BufferedMutator mutator;
 099
 100/**
-101 * @throws IOException 
-102 * 
+101 * @throws IOException
+102 *
 103 */
 104public TableRecordWriter() throws 
IOException {
 105  String tableName = 
conf.get(OUTPUT_TABLE);
@@ -155,7 +155,7 @@
 147
 148  /**
 149   * Creates a new record writer.
-150   * 
+150   *
 151   * Be aware that the baseline javadoc 
gives the impression that there is a single
 152   * {@link RecordWriter} per job but in 
HBase, it is more natural if we give you a new
 153   * RecordWriter per call of this 
method. You must close the returned RecordWriter when done.
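
A hedged sketch of the contract described above: the MapReduce framework normally
drives this, so the only point illustrated is that each getRecordWriter call hands
back a fresh writer that the caller must close; the method and parameter names below
are placeholders.

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Mutation;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
    import org.apache.hadoop.mapreduce.RecordWriter;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    class RecordWriterContractSketch {
      static void writeOnce(TableOutputFormat<ImmutableBytesWritable> format,
                            TaskAttemptContext context,
                            ImmutableBytesWritable key,
                            Mutation mutation) throws IOException, InterruptedException {
        RecordWriter<ImmutableBytesWritable, Mutation> writer = format.getRecordWriter(context);
        try {
          writer.write(key, mutation);   // one writer per getRecordWriter call
        } finally {
          writer.close(context);         // always close the writer when done
        }
      }
    }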

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
index f0a4a6d..e158b00 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.html
@@ -56,7 +56,7 @@
 048@InterfaceAudience.Public
 049public class TableRecordReaderImpl {
 050  public static final String 
LOG_PER_ROW_COUNT
-051= 
"hbase.mapreduce.log.scanner.rowcount";
+051  = 
"hbase.mapreduce.log.scanner.rowcount";
 052
 053  private static final Log LOG = 
LogFactory.getLog(TableRecordReaderImpl.class);
 054

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
index 5764d08..eeecb61 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormat.html
@@ -56,166 +56,165 @@
 048 * wals, etc) directly to provide maximum 
performance. The snapshot is not required to be
 049 * restored to the live cluster or 
cloned. This also allows to run the mapreduce job from an
 050 * online or offline hbase cluster. The 
snapshot files can be exported by using the
-051 * {@link 
org.apache.hadoop.hbase.snapshot.ExportSnapshot} tool, to a pure-hdfs cluster, 

-052 * and this InputFormat can be used to 
run the mapreduce job directly over the snapshot files. 
+051 * {@link 
org.apache.hadoop.hbase.snapshot.ExportSnapshot} tool, to a pure-hdfs 
cluster,
+052 * and this InputFormat can be used to 
run the mapreduce job directly over the snapshot files.
 053 * The snapshot should not be deleted 
while there are jobs reading from snapshot files.
 054 * 

055 * Usage is similar to TableInputFormat, and -056 * {@link TableMapReduceUtil#initTableSnapshotMapperJob(String, Scan, Class, Class, Class, Job, -057 * boolean, Path)} -058 * can be used to configure the job. -059 *

{@code
-060 * Job job = new Job(conf);
-061 * Scan scan = new Scan();
-062 * 
TableMapReduceUtil.initTableSnapshotMapperJob(snapshotName,
-063 *  scan, MyTableMapper.class, 
MyMapKeyOutput.class,
-064 *  MyMapOutputValueWritable.class, 
job, true);
-065 * }
-066 * 
-067 *

-068 * Internally, this input format restores the snapshot into the given tmp directory. Similar to -069 * {@link TableInputFormat} an InputSplit is created per region. The region is opened for reading -070 * from each RecordReader. An internal RegionScanner is used to execute the -071 * {@link org.apache.hadoop.hbase.CellScanner} obtained from the user. -072 *

-073 * HBase owns all the data and snapshot files on the filesystem. Only the 'hbase' user can read from -074 * snapshot files and data files. -075 * To read from snapshot files directly from the file system, the user who is running the MR job -076 * must have sufficient permissions to access snapshot and reference files. -077 * This means that to run mapreduce over snapshot files, the MR job has to be run as the HBase -078 * user or the user must have group or other privileges in the filesystem (See HBASE-8369). -079 * Note that, given o


[26/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
index de8a96f..e4c5f52 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/TableSnapshotScanner.html
@@ -51,7 +51,7 @@
 043 * 

044 * This also allows one to run the scan from an 045 * online or offline hbase cluster. The snapshot files can be exported by using the -046 * {@link org.apache.hadoop.hbase.snapshot.ExportSnapshot} tool, +046 * org.apache.hadoop.hbase.snapshot.ExportSnapshot tool, 047 * to a pure-hdfs cluster, and this scanner can be used to 048 * run the scan directly over the snapshot files. The snapshot should not be deleted while there 049 * are open scanners reading from snapshot files. @@ -68,7 +68,7 @@ 060 * snapshot files, the job has to be run as the HBase user or the user must have group or other 061 * priviledges in the filesystem (See HBASE-8369). Note that, given other users access to read from 062 * snapshot/data files will completely circumvent the access control enforced by HBase. -063 * @see org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat +063 * See org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat. 064 */ 065@InterfaceAudience.Public 066public class TableSnapshotScanner extends AbstractClientScanner { http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/filter/BinaryComparator.html -- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/filter/BinaryComparator.html b/apidocs/src-html/org/apache/hadoop/hbase/filter/BinaryComparator.html index 27b83d9..59c8423 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/filter/BinaryComparator.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/filter/BinaryComparator.html @@ -41,66 +41,67 @@ 033/** 034 * A binary comparator which lexicographically compares against the specified 035 * byte array using {@link org.apache.hadoop.hbase.util.Bytes#compareTo(byte[], byte[])}. -036 */ -037@InterfaceAudience.Public -038public class BinaryComparator extends org.apache.hadoop.hbase.filter.ByteArrayComparable { -039 /** -040 * Constructor -041 * @param value value -042 */ -043 public BinaryComparator(byte[] value) { -044super(value); -045 } -046 -047 @Override -048 public int compareTo(byte [] value, int offset, int length) { -049return Bytes.compareTo(this.value, 0, this.value.length, value, offset, length); -050 } -051 -052 @Override -053 public int compareTo(ByteBuffer value, int offset, int length) { -054return ByteBufferUtils.compareTo(this.value, 0, this.value.length, value, offset, length); -055 } -056 -057 /** -058 * @return The comparator serialized using pb -059 */ -060 public byte [] toByteArray() { -061 ComparatorProtos.BinaryComparator.Builder builder = -062 ComparatorProtos.BinaryComparator.newBuilder(); -063 builder.setComparable(ProtobufUtil.toByteArrayComparable(this.value)); -064return builder.build().toByteArray(); -065 } -066 -067 /** -068 * @param pbBytes A pb serialized {@link BinaryComparator} instance -069 * @return An instance of {@link BinaryComparator} made from bytes -070 * @throws DeserializationException -071 * @see #toByteArray -072 */ -073 public static BinaryComparator parseFrom(final byte [] pbBytes) -074 throws DeserializationException { -075ComparatorProtos.BinaryComparator proto; -076try { -077 proto = ComparatorProtos.BinaryComparator.parseFrom(pbBytes); -078} catch (InvalidProtocolBufferException e) { -079 throw new DeserializationException(e); -080} -081return new BinaryComparator(proto.getComparable().getValue().toByteArray()); -082 } -083 -084 /** -085 * @param other -086 * @return true if and only if the fields of the comparator that are serialized -087 * are equal to the corresponding fields in other. Used for testing. 
-088 */ -089 boolean areSerializedFieldsEqual(ByteArrayComparable other) { -090if (other == this) return true; -091if (!(other instanceof BinaryComparator)) return false; -092 -093return super.areSerializedFieldsEqual(other); -094 } -095} +036 * @since 2.0.0 +037 */ +038@InterfaceAudience.Public +039public class BinaryComparator extends org.apache.hadoop.hbase.filter.ByteArrayComparable { +040 /** +041 * Constructor +042 * @param value value +043 */ +044 public BinaryComparator(byte[] value) { +045super(value); +046 } +047 +048 @Override +049 public int compareTo(byte [] value, int offset, int


[47/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html 
b/apidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
index b18c147..1b09c7f 100644
--- a/apidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
+++ b/apidocs/org/apache/hadoop/hbase/client/AsyncAdminBuilder.html
@@ -102,10 +102,14 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncAdminBuilder
+public interface AsyncAdminBuilder
 For creating AsyncAdmin. The implementation 
should have default configurations set before
 returning the builder to the user. So users are free to only set the configs they 
care about to
  create a new AsyncAdmin instance.
+
+Since:
+2.0.0
+
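
A minimal sketch of this builder in use, assuming it was obtained from an
AsyncConnection (not shown in this excerpt); the values are illustrative only.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.AsyncAdmin;
    import org.apache.hadoop.hbase.client.AsyncAdminBuilder;

    class AsyncAdminBuilderSketch {
      static AsyncAdmin configure(AsyncAdminBuilder builder) {
        return builder
            .setOperationTimeout(60, TimeUnit.SECONDS)  // budget for a whole admin operation
            .setRpcTimeout(15, TimeUnit.SECONDS)        // budget for each rpc request
            .setRetryPause(200, TimeUnit.MILLISECONDS)  // base pause for exponential backoff
            .setMaxAttempts(4)                          // overall attempt limit
            .setStartLogErrorsCnt(2)                    // stay quiet for the first two retries
            .build();
      }
    }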
 
 
 
@@ -190,7 +194,7 @@ public interface 
 
 setOperationTimeout
-AsyncAdminBuilder setOperationTimeout(long timeout,
+AsyncAdminBuilder setOperationTimeout(long timeout,
   http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for a whole admin operation. Operation timeout 
and max attempt times (or max retry
 times) are both limitations for retrying; we will stop retrying when we reach 
any of the
@@ -210,7 +214,7 @@ public interface 
 
 setRpcTimeout
-AsyncAdminBuilder setRpcTimeout(long timeout,
+AsyncAdminBuilder setRpcTimeout(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set timeout for each rpc request.
 
@@ -228,7 +232,7 @@ public interface 
 
 setRetryPause
-AsyncAdminBuilder setRetryPause(long timeout,
+AsyncAdminBuilder setRetryPause(long timeout,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/TimeUnit.html?is-external=true";
 title="class or interface in 
java.util.concurrent">TimeUnit unit)
 Set the base pause time for retrying. We use an exponential 
policy to generate sleep time when
  retrying.
@@ -247,7 +251,7 @@ public interface 
 
 setMaxRetries
-default AsyncAdminBuilder setMaxRetries(int maxRetries)
+default AsyncAdminBuilder setMaxRetries(int maxRetries)
 Set the max retry times for an admin operation. Usually it 
is the max attempt times minus 1.
  Operation timeout and max attempt times(or max retry times) are both 
limitations for retrying,
  we will stop retrying when we reach any of the limitations.
@@ -265,7 +269,7 @@ public interface 
 
 setMaxAttempts
-AsyncAdminBuilder setMaxAttempts(int maxAttempts)
+AsyncAdminBuilder setMaxAttempts(int maxAttempts)
 Set the max attempt times for an admin operation. Usually 
it is the max retry times plus 1.
  Operation timeout and max attempt times(or max retry times) are both 
limitations for retrying,
  we will stop retrying when we reach any of the limitations.
@@ -283,7 +287,7 @@ public interface 
 
 setStartLogErrorsCnt
-AsyncAdminBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
+AsyncAdminBuilder setStartLogErrorsCnt(int startLogErrorsCnt)
 Set the number of retries that are allowed before we start 
to log.
 
 Parameters:
@@ -299,7 +303,7 @@ public interface 
 
 build
-AsyncAdmin build()
+AsyncAdmin build()
Create an AsyncAdmin 
instance.
 
 Returns:

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/org/apache/hadoop/hbase/client/AsyncConnection.html
--
diff --git a/apidocs/org/apache/hadoop/hbase/client/AsyncConnection.html 
b/apidocs/org/apache/hadoop/hbase/client/AsyncConnection.html
index 5e523f3..5fcc0c9 100644
--- a/apidocs/org/apache/hadoop/hbase/client/AsyncConnection.html
+++ b/apidocs/org/apache/hadoop/hbase/client/AsyncConnection.html
@@ -106,9 +106,13 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncConnection
+public interface AsyncConnection
 extends http://docs.oracle.com/javase/8/docs/api/java/io/Closeable.html?is-external=true";
 title="class or interface in java.io">Closeable
 The asynchronous version of Connection.
+
+Since:
+2.0.0
+
 
 
 
@@ -243,7 +247,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/io/Closeable.html
 
 
 getConfiguration
-org.apache.hadoop.conf.Configuration getConfiguration()
+org.apache.hadoop.conf.Configuration getConfiguration()
 Returns the Configuration object used by this 
instance.
  
  The reference returned is not a copy, so any change made to it will affect 
this instance.
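
A tiny sketch of what "not a copy" means in practice; the AsyncConnection is assumed
to exist already and the configuration key is purely hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.AsyncConnection;

    class LiveConfigurationSketch {
      static void tweak(AsyncConnection conn) {
        Configuration conf = conn.getConfiguration();
        // Visible to conn itself, because the returned Configuration is the live
        // object rather than a defensive copy.
        conf.set("example.illustrative.key", "value");
      }
    }
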
@@ -255,7 +259,7 @@ extends http://docs.oracle.com/javase/8/docs/api/java/io/Closeable.html
 
 
 getRegionLocator
-AsyncTableRegionLocator getRegionLocator(TableName tableName)
+AsyncTableRegionLocator getRegio

[16/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
index ff435ee..43348c4 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncAdmin.html
@@ -106,11 +106,15 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Public
-public interface AsyncAdmin
+public interface AsyncAdmin
 The asynchronous administrative API for HBase.
  
  This feature is still under development, so marked as IA.Private. Will change 
to public when
  done. Use it with caution.
+
+Since:
+2.0.0
+
 
 
 
@@ -944,7 +948,7 @@ public interface 
 
 tableExists
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
 
 Parameters:
 tableName - Table to check.
@@ -960,7 +964,7 @@ public interface 
 
 listTables
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables()
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables()
 List all the userspace tables.
 
 Returns:
@@ -976,7 +980,7 @@ public interface 
 
 listTables
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,
+http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,
 
boolean includeSysTables)
 List all the tables matching the given pattern.
 
@@ -994,7 +998,7 @@ public interface 
 
 listTableNames
-default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames()
+default http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames()
 List all of the names of userspace tables.
 
 Returns:
@@ -1010,7 +1014,7 @@ public interface 
 
 listTableNames
-http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTableNames(http://docs.oracl

[14/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
--
diff --git a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html 
b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
index d59138d..d615a4c 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.html
@@ -114,10 +114,14 @@ var activeTableTab = "activeTableTab";
 
 
 @InterfaceAudience.Private
-public class AsyncHBaseAdmin
+public class AsyncHBaseAdmin
 extends http://docs.oracle.com/javase/8/docs/api/java/lang/Object.html?is-external=true";
 title="class or interface in java.lang">Object
 implements AsyncAdmin
 The implementation of AsyncAdmin.
+
+Since:
+2.0.0
+
 
 
 
@@ -901,7 +905,7 @@ implements 
 
 LOG
-private static final org.apache.commons.logging.Log LOG
+private static final org.apache.commons.logging.Log LOG
 
 
 
@@ -910,7 +914,7 @@ implements 
 
 rawAdmin
-private final RawAsyncHBaseAdmin rawAdmin
+private final RawAsyncHBaseAdmin rawAdmin
 
 
 
@@ -919,7 +923,7 @@ implements 
 
 pool
-private final http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true";
 title="class or interface in java.util.concurrent">ExecutorService pool
+private final http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true";
 title="class or interface in java.util.concurrent">ExecutorService pool
 
 
 
@@ -936,7 +940,7 @@ implements 
 
 AsyncHBaseAdmin
-AsyncHBaseAdmin(RawAsyncHBaseAdmin rawAdmin,
+AsyncHBaseAdmin(RawAsyncHBaseAdmin rawAdmin,
 http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ExecutorService.html?is-external=true";
 title="class or interface in 
java.util.concurrent">ExecutorService pool)
 
 
@@ -954,7 +958,7 @@ implements 
 
 wrap
-private  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in 
java.util.concurrent">CompletableFuture wrap(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in 
java.util.concurrent">CompletableFuture future)
+private  http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in 
java.util.concurrent">CompletableFuture wrap(http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in 
java.util.concurrent">CompletableFuture future)
 
 
 
@@ -963,7 +967,7 @@ implements 
 
 tableExists
-public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
+public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureBoolean> tableExists(TableName tableName)
 
 Specified by:
 tableExists in
 interface AsyncAdmin
@@ -981,7 +985,7 @@ implements 
 
 listTables
-public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,
+public http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html?is-external=true";
 title="class or interface in java.util.concurrent">CompletableFutureList> listTables(http://docs.oracle.com/javase/8/docs/api/java/util/Optional.html?is-external=true";
 title="class or interface in java.util">OptionalPattern> pattern,

boolean includeSysTables)
 Description copied from 
interface: AsyncAdmin
 List all the t

[31/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.html
--
diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.html 
b/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.html
index da72c6e..ab96dc9 100644
--- a/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.html
+++ b/apidocs/src-html/org/apache/hadoop/hbase/client/RawAsyncTable.html
@@ -52,208 +52,209 @@
 044 * method. The {@link 
RawScanResultConsumer} exposes the implementation details of a 
scan(heartbeat)
 045 * so it is not suitable for a normal 
user. If it is still the only difference after we implement
 046 * most features of AsyncTable, we can 
think about merge these two interfaces.
-047 */
-048@InterfaceAudience.Public
-049public interface RawAsyncTable extends 
AsyncTableBase {
-050
-051  /**
-052   * The basic scan API uses the observer 
pattern. All results that match the given scan object will
-053   * be passed to the given {@code 
consumer} by calling {@code RawScanResultConsumer.onNext}.
-054   * {@code 
RawScanResultConsumer.onComplete} means the scan is finished, and
-055   * {@code 
RawScanResultConsumer.onError} means we hit an unrecoverable error and the scan 
is
-056   * terminated. {@code 
RawScanResultConsumer.onHeartbeat} means the RS is still working but we can
-057   * not get a valid result to call 
{@code RawScanResultConsumer.onNext}. This is usually because
-058   * the matched results are too sparse, 
for example, a filter which almost filters out everything
-059   * is specified.
-060   * 

-061 * Notice that, the methods of the given {@code consumer} will be called directly in the rpc -062 * framework's callback thread, so typically you should not do any time consuming work inside -063 * these methods, otherwise you will be likely to block at least one connection to RS(even more if -064 * the rpc framework uses NIO). -065 * @param scan A configured {@link Scan} object. -066 * @param consumer the consumer used to receive results. -067 */ -068 void scan(Scan scan, RawScanResultConsumer consumer); -069 -070 /** -071 * Delegate to a protobuf rpc call. -072 *

-073 * Usually, it is just a simple lambda expression, like: -074 * -075 *

-076   * 
-077   * (stub, controller, rpcCallback) 
-> {
-078   *   XXXRequest request = ...; // 
prepare the request
-079   *   stub.xxx(controller, request, 
rpcCallback);
-080   * }
-081   * 
-082   * 
-083 * -084 * And if you can prepare the {@code request} before calling the coprocessorService method, the -085 * lambda expression will be: -086 * -087 *
-088   * 
-089   * (stub, controller, rpcCallback) 
-> stub.xxx(controller, request, rpcCallback)
-090   * 
-091   * 
-092 */ -093 @InterfaceAudience.Public -094 @FunctionalInterface -095 interface CoprocessorCallable { -096 -097/** -098 * Represent the actual protobuf rpc call. -099 * @param stub the asynchronous stub -100 * @param controller the rpc controller, has already been prepared for you -101 * @param rpcCallback the rpc callback, has already been prepared for you -102 */ -103void call(S stub, RpcController controller, RpcCallback rpcCallback); -104 } -105 -106 /** -107 * Execute the given coprocessor call on the region which contains the given {@code row}. -108 *

-109 * The {@code stubMaker} is just a delegation to the {@code newStub} call. Usually it is only a -110 * one line lambda expression, like: -111 * -112 *

-113   * 
-114   * channel -> 
xxxService.newStub(channel)
-115   * 
-116   * 
-117 * -118 * @param stubMaker a delegation to the actual {@code newStub} call. -119 * @param callable a delegation to the actual protobuf rpc call. See the comment of -120 * {@link CoprocessorCallable} for more details. -121 * @param row The row key used to identify the remote region location -122 * @param the type of the asynchronous stub -123 * @param the type of the return value -124 * @return the return value of the protobuf rpc call, wrapped by a {@link CompletableFuture}. -125 * @see CoprocessorCallable -126 */ -127 CompletableFuture coprocessorService(Function stubMaker, -128 CoprocessorCallable callable, byte[] row); -129 -130 /** -131 * The callback when we want to execute a coprocessor call on a range of regions. -132 *

-133 * As the locating itself also takes some time, the implementation may want to send rpc calls on -134 * the fly, which means we do not know how many regions we have when we get the return value of -135 * the rpc calls, so we need an {@link #onComplete()} which is used to tell you that we have -136 * passed all the return values to you


[24/51] [partial] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/ebf9a8b8/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
--
diff --git 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
index 28784f5..a3ad745 100644
--- 
a/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
+++ 
b/apidocs/src-html/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html
@@ -53,612 +53,611 @@
 045import 
org.apache.hadoop.hbase.util.Addressing;
 046import 
org.apache.hadoop.hbase.util.Bytes;
 047import 
org.apache.hadoop.hbase.util.Pair;
-048import 
org.apache.hadoop.hbase.util.RegionSizeCalculator;
-049import 
org.apache.hadoop.hbase.util.Strings;
-050import 
org.apache.hadoop.mapreduce.InputFormat;
-051import 
org.apache.hadoop.mapreduce.InputSplit;
-052import 
org.apache.hadoop.mapreduce.JobContext;
-053import 
org.apache.hadoop.mapreduce.RecordReader;
-054import 
org.apache.hadoop.mapreduce.TaskAttemptContext;
-055import org.apache.hadoop.net.DNS;
-056import 
org.apache.hadoop.util.StringUtils;
-057
-058/**
-059 * A base for {@link TableInputFormat}s. 
Receives a {@link Connection}, a {@link TableName},
-060 * an {@link Scan} instance that defines 
the input columns etc. Subclasses may use
-061 * other TableRecordReader 
implementations.
-062 *
-063 * Subclasses MUST ensure 
initializeTable(Connection, TableName) is called for an instance to
-064 * function properly. Each of the entry 
points to this class used by the MapReduce framework,
-065 * {@link #createRecordReader(InputSplit, 
TaskAttemptContext)} and {@link #getSplits(JobContext)},
-066 * will call {@link 
#initialize(JobContext)} as a convenient centralized location to handle
-067 * retrieving the necessary configuration 
information. If your subclass overrides either of these
-068 * methods, either call the parent 
version or call initialize yourself.
-069 *
-070 * 

-071 * An example of a subclass: -072 *

-073 *   class ExampleTIF extends 
TableInputFormatBase {
-074 *
-075 * {@literal @}Override
-076 * protected void 
initialize(JobContext context) throws IOException {
-077 *   // We are responsible for the 
lifecycle of this connection until we hand it over in
-078 *   // initializeTable.
-079 *   Connection connection = 
ConnectionFactory.createConnection(HBaseConfiguration.create(
-080 *  
job.getConfiguration()));
-081 *   TableName tableName = 
TableName.valueOf("exampleTable");
-082 *   // mandatory. once passed here, 
TableInputFormatBase will handle closing the connection.
-083 *   initializeTable(connection, 
tableName);
-084 *   byte[][] inputColumns = new byte 
[][] { Bytes.toBytes("columnA"),
-085 * Bytes.toBytes("columnB") };
-086 *   // optional, by default we'll 
get everything for the table.
-087 *   Scan scan = new Scan();
-088 *   for (byte[] family : 
inputColumns) {
-089 * scan.addFamily(family);
-090 *   }
-091 *   Filter exampleFilter = new 
RowFilter(CompareOp.EQUAL, new RegexStringComparator("aa.*"));
-092 *   scan.setFilter(exampleFilter);
-093 *   setScan(scan);
-094 * }
-095 *   }
-096 * 
-097 */ -098@InterfaceAudience.Public -099public abstract class TableInputFormatBase -100extends InputFormat { -101 -102 /** Specify if we enable auto-balance for input in M/R jobs.*/ -103 public static final String MAPREDUCE_INPUT_AUTOBALANCE = "hbase.mapreduce.input.autobalance"; -104 /** Specify if ratio for data skew in M/R jobs, it goes well with the enabling hbase.mapreduce -105 * .input.autobalance property.*/ -106 public static final String INPUT_AUTOBALANCE_MAXSKEWRATIO = "hbase.mapreduce.input.autobalance" + -107 ".maxskewratio"; -108 /** Specify if the row key in table is text (ASCII between 32~126), -109 * default is true. False means the table is using binary row key*/ -110 public static final String TABLE_ROW_TEXTKEY = "hbase.table.row.textkey"; -111 -112 private static final Log LOG = LogFactory.getLog(TableInputFormatBase.class); -113 -114 private static final String NOT_INITIALIZED = "The input format instance has not been properly " + -115 "initialized. Ensure you call initializeTable either in your constructor or initialize " + -116 "method"; -117 private static final String INITIALIZATION_ERROR = "Cannot create a record reader because of a" + -118" previous error. Please look at the previous logs lines from" + -119" the task's full log for more details."; -120 -121 /** Holds the details for the internal scanner. -122 * -123 * @see Scan */ -124 private Scan scan = null; -125 /** The {@link Admin}. */ -126 private Admin admin; -127 /** The {@link Table} to scan. */ -128 private Table tabl
