http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc 
b/src/main/asciidoc/_chapters/developer.adoc
index 6a546fb..0ada9a6 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -46,7 +46,7 @@ As Apache HBase is an Apache Software Foundation project, see 
<<asf,asf>>
 === Mailing Lists
 
 Sign up for the dev-list and the user-list.
-See the link:http://hbase.apache.org/mail-lists.html[mailing lists] page.
+See the link:https://hbase.apache.org/mail-lists.html[mailing lists] page.
 Posing questions - and helping to answer other people's questions - is 
encouraged! There are varying levels of experience on both lists so patience 
and politeness are encouraged (and please stay on topic.)
 
 [[slack]]
@@ -64,7 +64,7 @@ FreeNode offers a web-based client, but most people prefer a 
native client, and
 
 === Jira
 
-Check for existing issues in 
link:https://issues.apache.org/jira/browse/HBASE[Jira].
+Check for existing issues in 
link:https://issues.apache.org/jira/projects/HBASE/issues[Jira].
 If it's either a new feature request, enhancement, or a bug, file a ticket.
 
 We track multiple types of work in JIRA:
@@ -173,8 +173,8 @@ GIT is our repository of record for all but the Apache 
HBase website.
 We used to be on SVN.
 We migrated.
 See link:https://issues.apache.org/jira/browse/INFRA-7768[Migrate Apache HBase 
SVN Repos to Git].
-See link:http://hbase.apache.org/source-repository.html[Source Code
-                Management] page for contributor and committer links or search 
for HBase on the link:http://git.apache.org/[Apache Git] page.
+See link:https://hbase.apache.org/source-repository.html[Source Code
+                Management] page for contributor and committer links or search 
for HBase on the link:https://git.apache.org/[Apache Git] page.
 
 == IDEs
 
@@ -479,8 +479,7 @@ mvn -DskipTests package assembly:single deploy
 
 If you see `Unable to find resource 'VM_global_library.vm'`, ignore it.
 It's not an error.
-It is link:http://jira.codehaus.org/browse/MSITE-286[officially
-                        ugly] though.
+It is link:https://issues.apache.org/jira/browse/MSITE-286[officially ugly] 
though.
 
 [[releasing]]
 == Releasing Apache HBase
@@ -540,35 +539,30 @@ For the build to sign them for you, you a properly 
configured _settings.xml_ in
 
 [[maven.release]]
 === Making a Release Candidate
-
-NOTE: These instructions are for building HBase 1.y.z
-
-.Point Releases
-If you are making a point release (for example to quickly address a critical 
incompatibility or security problem) off of a release branch instead of a 
development branch, the tagging instructions are slightly different.
-I'll prefix those special steps with _Point Release Only_.
+Only committers may make releases of hbase artifacts.
 
 .Before You Begin
-Before you make a release candidate, do a practice run by deploying a snapshot.
-Before you start, check to be sure recent builds have been passing for the 
branch from where you are going to take your release.
-You should also have tried recent branch tips out on a cluster under load, 
perhaps by running the `hbase-it` integration test suite for a few hours to 
'burn in' the near-candidate bits.
-
-.Point Release Only
+Make sure your environment is properly set up. Maven and Git are the main 
tooling
+used in the below. You'll need a properly configured _settings.xml_ file in 
your
+local _~/.m2_ maven repository with logins for apache repos (See 
<<maven.settings.xml>>).
+You will also need to have a published signing key. Browse the Hadoop
+link:http://wiki.apache.org/hadoop/HowToRelease[How To Release] wiki page on
+how to release. It is a model for most of the instructions below. It often has 
more
+detail on particular steps, for example, on adding your code signing key to the
+project KEYS file up in Apache or on how to update JIRA in preparation for 
release.
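+
+As a rough sketch of the key setup (the key id, keyserver, and KEYS location
+below are placeholders -- follow the Hadoop page above for the authoritative
+steps), you generate a key, publish it to a public keyserver, and append the
+public key to the project KEYS file:
+
+[source,bourne]
+----
+# Generate a signing key pair (interactive).
+$ gpg --gen-key
+# Publish the public key to a keyserver so others can verify your signatures.
+$ gpg --keyserver pgp.mit.edu --send-keys <YOUR_KEY_ID>
+# Export the public key in ASCII-armored form for appending to the KEYS file.
+$ gpg --armor --export <YOUR_KEY_ID> >> KEYS
+----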
+
+Before you make a release candidate, do a practice run by deploying a SNAPSHOT.
+Check to be sure recent builds have been passing for the branch from where you
+are going to take your release. You should also have tried recent branch tips
+out on a cluster under load, perhaps by running the `hbase-it` integration test
+suite for a few hours to 'burn in' the near-candidate bits.
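+
+One minimal sketch of such a practice run (assuming the poms still carry a
+`-SNAPSHOT` version, in which case the artifacts land in the Apache snapshots
+repository rather than a staging repository, and that <<maven.settings.xml>>
+is already in place):
+
+[source,bourne]
+----
+# Practice run: install locally, then deploy the -SNAPSHOT artifacts.
+$ mvn clean install -DskipTests
+$ mvn deploy -DskipTests -Papache-release
+----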
+
+
+.Specifying the Heap Space for Maven
 [NOTE]
 ====
-At this point you should tag the previous release branch (ex: 0.96.1) with the 
new point release tag (e.g.
-0.96.1.1 tag). Any commits with changes for the point release should go 
against the new tag.
-====
-
-The Hadoop link:http://wiki.apache.org/hadoop/HowToRelease[How To
-                    Release] wiki page is used as a model for most of the 
instructions below.
-                    Although it now stale, it may have more detail on 
particular sections, so
-                    it is worth review especially if you get stuck.
-
-.Specifying the Heap Space for Maven on OSX
-[NOTE]
-====
-On OSX, you may run into OutOfMemoryErrors building, particularly building the 
site and
-documentation. Up the heap and permgen space for Maven by setting the 
`MAVEN_OPTS` variable.
+You may run into OutOfMemoryErrors building, particularly building the site and
+documentation. Up the heap for Maven by setting the `MAVEN_OPTS` variable.
 You can prefix the variable to the Maven command, as in the following example:
 
 ----
@@ -579,10 +573,19 @@ You could also set this in an environment variable or 
alias in your shell.
 ====
 
 
-NOTE: The script _dev-support/make_rc.sh_ automates many of these steps.
-It does not do the modification of the _CHANGES.txt_                    for 
the release, the close of the staging repository in Apache Maven (human 
intervention is needed here), the checking of the produced artifacts to ensure 
they are 'good' -- e.g.
-extracting the produced tarballs, verifying that they look right, then 
starting HBase and checking that everything is running correctly, then the 
signing and pushing of the tarballs to 
link:http://people.apache.org[people.apache.org].
-The script handles everything else, and comes in handy.
+[NOTE]
+====
+The script _dev-support/make_rc.sh_ automates many of the below steps.
+It will check out a tag, clean the checkout, build src and bin tarballs,
+and deploy the built jars to repository.apache.org.
+It does NOT do the modification of the _CHANGES.txt_ for the release,
+the checking of the produced artifacts to ensure they are 'good' --
+e.g. extracting the produced tarballs, verifying that they
+look right, then starting HBase and checking that everything is running
+correctly -- or the signing and pushing of the tarballs to
+link:https://people.apache.org[people.apache.org].
+Take a look. Modify/improve as you see fit.
+====
 
 .Procedure: Release Procedure
 . Update the _CHANGES.txt_ file and the POM files.
@@ -593,63 +596,123 @@ Adjust the version in all the POM files appropriately.
 If you are making a release candidate, you must remove the `-SNAPSHOT` label 
from all versions
 in all pom.xml files.
If you are running this recipe to publish a snapshot, you must keep the 
`-SNAPSHOT` suffix on the hbase version.
-The link:http://mojo.codehaus.org/versions-maven-plugin/[Versions
-                            Maven Plugin] can be of use here.
+The link:http://www.mojohaus.org/versions-maven-plugin/[Versions Maven Plugin] 
can be of use here.
 To set a version in all the many poms of the hbase multi-module project, use a 
command like the following:
 +
 [source,bourne]
 ----
-
-$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set 
-DnewVersion=0.96.0
+$ mvn clean org.codehaus.mojo:versions-maven-plugin:2.5:set 
-DnewVersion=2.1.0-SNAPSHOT
 ----
 +
-Make sure all versions in poms are changed! Checkin the _CHANGES.txt_ and any 
version changes.
+Make sure all versions in poms are changed! Checkin the _CHANGES.txt_ and any 
maven version changes.
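++
+As a quick sanity check before committing (a hedged sketch; adjust the commit
+message to your release):
++
+[source,bourne]
+----
+# List any pom that still carries a -SNAPSHOT version (should be empty for an RC).
+$ find . -name pom.xml | xargs grep -l -- '-SNAPSHOT'
+# Commit the CHANGES.txt and pom version changes.
+$ git commit -a -m "Update CHANGES.txt and set version for the release candidate"
+----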
 
 . Update the documentation.
 +
 Update the documentation under _src/main/asciidoc_.
-This usually involves copying the latest from master and making 
version-particular
+This usually involves copying the latest from master branch and making 
version-particular
 adjustments to suit this release candidate version.
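++
+One hedged way to do the copy (paths assumed; adjust to your layout):
++
+[source,bourne]
+----
+# From the release branch, pull the latest book sources over from master,
+# then review and adjust anything version-specific before committing.
+$ git checkout master -- src/main/asciidoc
+$ git status
+----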
 
-. Build the source tarball.
+. Clean the checkout dir
 +
-Now, build the source tarball.
-This tarball is Hadoop-version-independent.
-It is just the pure source code and documentation without a particular hadoop 
taint, etc.
-Add the `-Prelease` profile when building.
-It checks files for licenses and will fail the build if unlicensed files are 
present.
+[source,bourne]
+----
+
+$ mvn clean
+$ git clean -f -x -d
+----
+
+
+. Run Apache-Rat
+Check licenses are good
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests assembly:single 
-Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease
+$ mvn apache-rat
 ----
 +
-Extract the tarball and make sure it looks good.
-A good test for the src tarball being 'complete' is to see if you can build 
new tarballs from this source bundle.
-If the source tarball is good, save it off to a _version directory_, a 
directory somewhere where you are collecting all of the tarballs you will 
publish as part of the release candidate.
-For example if you were building an hbase-0.96.0 release candidate, you might 
call the directory _hbase-0.96.0RC0_.
-Later you will publish this directory as our release candidate.
+If the above fails, check the rat log.
 
-. Build the binary tarball.
 +
-Next, build the binary tarball.
-Add the `-Prelease`                        profile when building.
-It checks files for licenses and will fail the build if unlicensed files are 
present.
-Do it in two steps.
+[source,bourne]
+----
+$ grep 'Rat check' patchprocess/mvn_apache_rat.log
+----
 +
-* First install into the local repository
+
+. Create a release tag.
+Presuming you have run basic tests, the rat check passes, and all is
+looking good, now is the time to tag the release candidate (you can
+always remove the tag if you need to redo it). To tag, do
+what follows substituting in the version appropriate to your build.
+All tags should be signed tags; i.e. pass the _-s_ option (See
+link:https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work[Signing Your Work]
+for how to set up your git environment for signing).
+
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests -Prelease
+$ git tag -s 2.0.0-alpha4-RC0 -m "Tagging the 2.0.0-alpha4 first Release Candidate (Candidates start at zero)"
+----
+
+Or, if you are making a release, tags should have a _rel/_ prefix to ensure
+they are preserved in the Apache repo as in:
+
+[source,bourne]
+----
+$ git tag -s rel/2.0.0-alpha4 -m "Tagging the 2.0.0-alpha4 Release"
 ----
 
-* Next, generate documentation and assemble the tarball.
+Push the (specific) tag (only) so others have access.
++
+[source,bourne]
+----
+
+$ git push origin 2.0.0-alpha4-RC0
+----
++
+For how to delete tags, see
+link:http://www.manikrathee.com/how-to-delete-a-tag-in-git.html[How to Delete a Tag].
+It covers deleting tags that have not yet been pushed to the remote Apache
+repo as well as deleting tags that have already been pushed to Apache.
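++
+For example (a hedged sketch; substitute your actual tag name):
++
+[source,bourne]
+----
+# Delete a local tag that has not been pushed yet.
+$ git tag -d 2.0.0-alpha4-RC0
+# Delete a tag that has already been pushed to the Apache remote.
+$ git push origin :refs/tags/2.0.0-alpha4-RC0
+----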
+
+
+. Build the source tarball.
++
+Now, build the source tarball. Let's presume we are building the source
+tarball for the tag _2.0.0-alpha4-RC0_ into _/tmp/hbase-2.0.0-alpha4-RC0/_
+(This step requires that the mvn and git clean steps described above have just 
been done).
 +
 [source,bourne]
 ----
+$ git archive --format=tar.gz 
--output="/tmp/hbase-2.0.0-alpha4-RC0/hbase-2.0.0-alpha4-src.tar.gz" 
--prefix="hbase-2.0.0-alpha4/" $git_tag
+----
+
+Above we generate the hbase-2.0.0-alpha4-src.tar.gz tarball into the
+_/tmp/hbase-2.0.0-alpha4-RC0_ build output directory (We don't want the _RC0_ 
in the name or prefix.
+These bits are currently a release candidate but if the VOTE passes, they will 
become the release so we do not taint
+the artifact names with _RCX_).
+
+. Build the binary tarball.
+Next, build the binary tarball. Add the `-Prelease` profile when building.
+It runs the license apache-rat check among other rules that help ensure
+all is wholesome. Do it in two steps.
+
+First install into the local repository
+
+[source,bourne]
+----
+
+$ mvn clean install -DskipTests -Prelease
+----
+
+Next, generate documentation and assemble the tarball. Be warned,
+this next step can take a good while, a couple of hours generating site
+documentation.
+
+[source,bourne]
+----
 
 $ mvn install -DskipTests site assembly:single -Prelease
 ----
@@ -659,26 +722,23 @@ Otherwise, the build complains that hbase modules are not 
in the maven repositor
 when you try to do it all in one step, especially on a fresh repository.
 It seems that you need the install goal in both steps.
 +
-Extract the generated tarball and check it out.
+Extract the generated tarball -- you'll find it under
+_hbase-assembly/target_ -- and check it out.
 Look at the documentation, see if it runs, etc.
-If good, copy the tarball to the above mentioned _version directory_.
+If good, copy the tarball beside the source tarball in the
+build output directory.
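++
+For example (a hedged sketch; the tarball name and the output directory below
+are assumptions based on the earlier steps):
++
+[source,bourne]
+----
+# Unpack the binary tarball somewhere temporary and give it a quick smoke test.
+$ tar xzf hbase-assembly/target/hbase-2.0.0-alpha4-bin.tar.gz -C /tmp
+$ /tmp/hbase-2.0.0-alpha4/bin/start-hbase.sh
+$ /tmp/hbase-2.0.0-alpha4/bin/stop-hbase.sh
+# If all looks good, put it beside the source tarball in the build output directory.
+$ cp hbase-assembly/target/hbase-2.0.0-alpha4-bin.tar.gz /tmp/hbase-2.0.0-alpha4-RC0/
+----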
 
-. Create a new tag.
-+
-.Point Release Only
-[NOTE]
-====
-The following step that creates a new tag can be skipped since you've already 
created the point release tag
-====
-+
-Tag the release at this point since it looks good.
-If you find an issue later, you can delete the tag and start over.
-Release needs to be tagged for the next step.
 
 . Deploy to the Maven Repository.
 +
-Next, deploy HBase to the Apache Maven repository, using the `apache-release` 
profile instead of the `release` profile when running the `mvn deploy` command.
-This profile invokes the Apache pom referenced by our pom files, and also 
signs your artifacts published to Maven, as long as the _settings.xml_ is 
configured correctly, as described in <<maven.settings.xml>>.
+Next, deploy HBase to the Apache Maven repository. Add the
+`apache-release` profile when running the `mvn deploy` command.
+This profile comes from the Apache parent pom referenced by our pom files.
+It does signing of your artifacts published to Maven, as long as the
+_settings.xml_ is configured correctly, as described in <<maven.settings.xml>>.
+This step depends on the local repository having been populated
+by the just-previous bin tarball build.
+
 +
 [source,bourne]
 ----
@@ -692,16 +752,24 @@ More work needs to be done on these maven artifacts to 
make them generally avail
 We do not release HBase tarball to the Apache Maven repository. To avoid 
deploying the tarball, do not
 include the `assembly:single` goal in your `mvn deploy` command. Check the 
deployed artifacts as described in the next section.
 
+.make_rc.sh
+[NOTE]
+====
+If you run the _dev-support/make_rc.sh_ script, this is as far as it takes you.
+To finish the release, take up the remaining steps manually from here on out.
+====
+
 . Make the Release Candidate available.
 +
 The artifacts are in the maven repository in the staging area in the 'open' 
state.
 While in this 'open' state you can check out what you've published to make 
sure all is good.
-To do this, log in to Apache's Nexus at 
link:http://repository.apache.org[repository.apache.org] using your Apache ID.
+To do this, log in to Apache's Nexus at 
link:https://repository.apache.org[repository.apache.org] using your Apache ID.
 Find your artifacts in the staging repository. Click on 'Staging Repositories' 
and look for a new one ending in "hbase" with a status of 'Open', select it.
 Use the tree view to expand the list of repository contents and inspect if the 
artifacts you expect are present. Check the POMs.
 As long as the staging repo is open you can re-upload if something is missing 
or built incorrectly.
 +
 If something is seriously wrong and you would like to back out the upload, you 
can use the 'Drop' button to drop and delete the staging repository.
+Sometimes the upload fails in the middle. This is another reason you might 
have to 'Drop' the upload from the staging repository.
 +
 If it checks out, close the repo using the 'Close' button. The repository must 
be closed before a public URL to it becomes available. It may take a few 
minutes for the repository to close. Once complete you'll see a public URL to 
the repository in the Nexus UI. You may also receive an email with the URL. 
Provide the URL to the temporary staging repository in the email that announces 
the release candidate.
 (Folks will need to add this repo URL to their local poms or to their local 
_settings.xml_ file to pull the published release candidate artifacts.)
@@ -716,39 +784,25 @@ Check it out and run its simple test to make sure maven 
artifacts are properly d
 Be sure to edit the pom to point to the proper staging repository.
 Make sure you are pulling from the repository when tests run and that you are 
not getting from your local repository, by either passing the `-U` flag or 
deleting your local repo content and check maven is pulling from remote out of 
the staging repository.
 ====
-+
-See link:http://www.apache.org/dev/publishing-maven-artifacts.html[Publishing 
Maven Artifacts] for some pointers on this maven staging process.
-+
-NOTE: We no longer publish using the maven release plugin.
-Instead we do +mvn deploy+.
-It seems to give us a backdoor to maven release publishing.
-If there is no _-SNAPSHOT_ on the version string, then we are 'deployed' to 
the apache maven repository staging directory from which we can publish URLs 
for candidates and later, if they pass, publish as release (if a _-SNAPSHOT_ on 
the version string, deploy will put the artifacts up into apache snapshot 
repos).
-+
+
+See link:https://www.apache.org/dev/publishing-maven-artifacts.html[Publishing 
Maven Artifacts] for some pointers on this maven staging process.
+
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately 
available.
 Making a SNAPSHOT release, this is what you want to happen.
 
-. If you used the _make_rc.sh_ script instead of doing
-  the above manually, do your sanity checks now.
-+
-At this stage, you have two tarballs in your 'version directory' and a set of 
artifacts in a staging area of the maven repository, in the 'closed' state.
-These are publicly accessible in a temporary staging repository whose URL you 
should have gotten in an email.
-The above mentioned script, _make_rc.sh_ does all of the above for you minus 
the check of the artifacts built, the closing of the staging repository up in 
maven, and the tagging of the release.
-If you run the script, do your checks at this stage verifying the src and bin 
tarballs and checking what is up in staging using hbase-downstreamer project.
-Tag before you start the build.
-You can always delete it if the build goes haywire.
-
-. Sign, fingerprint and then 'stage' your release candiate version directory 
via svnpubsub by committing your directory to 
link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' distribution 
directory] (See comments on 
link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please 
delete old releases from mirroring system] but in essence it is an svn checkout 
of https://dist.apache.org/repos/dist/dev/hbase -- releases are at 
https://dist.apache.org/repos/dist/release/hbase). In the _version directory_ 
run the following commands:
-+
+At this stage, you have two tarballs in your 'build output directory' and a 
set of artifacts in a staging area of the maven repository, in the 'closed' 
state.
+Next sign, fingerprint and then 'stage' your release candidate build output 
directory via svnpubsub by committing
+your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' 
distribution directory] (See comments on 
link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please 
delete old releases from mirroring system] but in essence it is an svn checkout 
of https://dist.apache.org/repos/dist/dev/hbase -- releases are at 
https://dist.apache.org/repos/dist/release/hbase). In the _build output directory_ 
run the following commands:
+
 [source,bourne]
 ----
 
-$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done
 $ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  
; done
 $ cd ..
-# Presuming our 'version directory' is named 0.96.0RC0, copy it to the svn 
checkout of the dist dev dir
+# Presuming our 'build output directory' is named 0.96.0RC0, copy it to the 
svn checkout of the dist dev dir
 # in this case named hbase.dist.dev.svn
 $ cd /Users/stack/checkouts/hbase.dist.dev.svn
 $ svn info
@@ -815,7 +869,7 @@ This plugin is run when you specify the +site+ goal as in 
when you run +mvn site
 See <<appendix_contributing_to_documentation,appendix contributing to 
documentation>> for more information on building the documentation.
 
 [[hbase.org]]
-== Updating link:http://hbase.apache.org[hbase.apache.org]
+== Updating link:https://hbase.apache.org[hbase.apache.org]
 
 [[hbase.org.site.contributing]]
 === Contributing to hbase.apache.org
@@ -823,7 +877,7 @@ See <<appendix_contributing_to_documentation,appendix 
contributing to documentat
 See <<appendix_contributing_to_documentation,appendix contributing to 
documentation>> for more information on contributing to the documentation or 
website.
 
 [[hbase.org.site.publishing]]
-=== Publishing link:http://hbase.apache.org[hbase.apache.org]
+=== Publishing link:https://hbase.apache.org[hbase.apache.org]
 
 See <<website_publish>> for instructions on publishing the website and 
documentation.
 
@@ -920,7 +974,7 @@ Also, keep in mind that if you are running tests in the 
`hbase-server` module yo
 === Unit Tests
 
 Apache HBase test cases are subdivided into four categories: small, medium, 
large, and
-integration with corresponding JUnit 
link:http://www.junit.org/node/581[categories]: `SmallTests`, `MediumTests`, 
`LargeTests`, `IntegrationTests`.
+integration with corresponding JUnit 
link:https://github.com/junit-team/junit4/wiki/Categories[categories]: 
`SmallTests`, `MediumTests`, `LargeTests`, `IntegrationTests`.
 JUnit categories are denoted using java annotations and look like this in your 
unit test code.
 
 [source,java]
@@ -1223,7 +1277,7 @@ $ mvn clean install test -Dtest=TestZooKeeper  
-PskipIntegrationTests
 ==== Running integration tests against mini cluster
 
 HBase 0.92 added a `verify` maven target.
-Invoking it, for example by doing `mvn verify`, will run all the phases up to 
and including the verify phase via the maven 
link:http://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
+Invoking it, for example by doing `mvn verify`, will run all the phases up to 
and including the verify phase via the maven 
link:https://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
                         plugin], running all the above mentioned HBase unit 
tests as well as tests that are in the HBase integration test group.
 After you have completed +mvn install -DskipTests+ You can run just the 
integration tests by invoking:
 
@@ -1278,7 +1332,7 @@ Currently there is no support for running integration 
tests against a distribute
 The tests interact with the distributed cluster by using the methods in the 
`DistributedHBaseCluster` (implementing `HBaseCluster`) class, which in turn 
uses a pluggable `ClusterManager`.
 Concrete implementations provide actual functionality for carrying out 
deployment-specific and environment-dependent tasks (SSH, etc). The default 
`ClusterManager` is `HBaseClusterManager`, which uses SSH to remotely execute 
start/stop/kill/signal commands, and assumes some posix commands (ps, etc). 
Also assumes the user running the test has enough "power" to start/stop servers 
on the remote machines.
 By default, it picks up `HBASE_SSH_OPTS`, `HBASE_HOME`, `HBASE_CONF_DIR` from 
the env, and uses `bin/hbase-daemon.sh` to carry out the actions.
-Currently tarball deployments, deployments which uses _hbase-daemons.sh_, and 
link:http://incubator.apache.org/ambari/[Apache Ambari]                    
deployments are supported.
+Currently tarball deployments, deployments which uses _hbase-daemons.sh_, and 
link:https://incubator.apache.org/ambari/[Apache Ambari]                    
deployments are supported.
 _/etc/init.d/_ scripts are not supported for now, but it can be easily added.
 For other deployment options, a ClusterManager can be implemented and plugged 
in.
 
@@ -1286,7 +1340,7 @@ For other deployment options, a ClusterManager can be 
implemented and plugged in
 ==== Destructive integration / system tests (ChaosMonkey)
 
 HBase 0.96 introduced a tool named `ChaosMonkey`, modeled after
-link:http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html[same-named
 tool by Netflix's Chaos Monkey tool].
link:https://netflix.github.io/chaosmonkey/[Netflix's same-named Chaos Monkey tool].
 ChaosMonkey simulates real-world
 faults in a running cluster by killing or disconnecting random servers, or 
injecting
 other failures into the environment. You can use ChaosMonkey as a stand-alone 
tool
@@ -1790,10 +1844,10 @@ The script checks the directory for sub-directory 
called _.git/_, before proceed
 === Submitting Patches
 
 If you are new to submitting patches to open source or new to submitting 
patches to Apache, start by
- reading the link:http://commons.apache.org/patches.html[On Contributing 
Patches] page from
- link:http://commons.apache.org/[Apache Commons Project].
+ reading the link:https://commons.apache.org/patches.html[On Contributing 
Patches] page from
+ link:https://commons.apache.org/[Apache Commons Project].
 It provides a nice overview that applies equally to the Apache HBase Project.
-link:http://accumulo.apache.org/git.html[Accumulo doc on how to contribute and 
develop] is also
+link:https://accumulo.apache.org/git.html[Accumulo doc on how to contribute 
and develop] is also
 good read to understand development workflow.
 
 [[submitting.patches.create]]
@@ -1887,11 +1941,11 @@ Significant new features should provide an integration 
test in addition to unit
 [[reviewboard]]
 ==== ReviewBoard
 
-Patches larger than one screen, or patches that will be tricky to review, 
should go through link:http://reviews.apache.org[ReviewBoard].
+Patches larger than one screen, or patches that will be tricky to review, 
should go through link:https://reviews.apache.org[ReviewBoard].
 
 .Procedure: Use ReviewBoard
 . Register for an account if you don't already have one.
-  It does not use the credentials from 
link:http://issues.apache.org[issues.apache.org].
+  It does not use the credentials from 
link:https://issues.apache.org[issues.apache.org].
   Log in.
 . Click [label]#New Review Request#.
 . Choose the `hbase-git` repository.
@@ -1917,8 +1971,8 @@ For more information on how to use ReviewBoard, see 
link:http://www.reviewboard.
 
 New committers are encouraged to first read Apache's generic committer 
documentation:
 
-* link:http://www.apache.org/dev/new-committers-guide.html[Apache New 
Committer Guide]
-* link:http://www.apache.org/dev/committers.html[Apache Committer FAQ]
+* link:https://www.apache.org/dev/new-committers-guide.html[Apache New 
Committer Guide]
+* link:https://www.apache.org/dev/committers.html[Apache Committer FAQ]
 
 ===== Review
 
@@ -1934,7 +1988,7 @@ Use the btn:[Submit Patch]                        button 
in JIRA, just like othe
 
 ===== Reject
 
-Patches which do not adhere to the guidelines in 
link:https://wiki.apache.org/hadoop/Hbase/HowToCommit/hadoop/Hbase/HowToContribute#[HowToContribute]
 and to the 
link:https://wiki.apache.org/hadoop/Hbase/HowToCommit/hadoop/CodeReviewChecklist#[code
 review checklist] should be rejected.
+Patches which do not adhere to the guidelines in 
link:https://hbase.apache.org/book.html#developer[HowToContribute] and to the 
link:https://wiki.apache.org/hadoop/CodeReviewChecklist[code review checklist] 
should be rejected.
 Committers should always be polite to contributors and try to instruct and 
encourage them to contribute better patches.
 If a committer wishes to improve an unacceptable patch, then it should first 
be rejected, and a new patch should be attached by the committer for review.
 
@@ -2116,6 +2170,77 @@ However any substantive discussion (as with any off-list 
project-related discuss
 
 Misspellings and/or bad grammar is preferable to the disruption a JIRA comment 
edit causes: See the discussion at 
link:http://search-hadoop.com/?q=%5BReopened%5D+%28HBASE-451%29+Remove+HTableDescriptor+from+HRegionInfo&fc_project=HBase[Re:(HBASE-451)
 Remove HTableDescriptor from HRegionInfo]
 
+[[thirdparty]]
+=== The hbase-thirdparty dependency and shading/relocation
+
+A new project was created for the release of hbase-2.0.0. It was called
+`hbase-thirdparty`. This project exists only to provide the main hbase
+project with relocated -- or shaded -- versions of popular thirdparty
+libraries such as guava, netty, and protobuf. The mainline HBase project
+relies on the relocated versions of these libraries gotten from 
hbase-thirdparty
+rather than on finding these classes in their usual locations. We do this so
+we can specify whatever version we wish. If we don't relocate, we must
+harmonize our version to match that which hadoop and/or spark uses.
+
+For developers, this means you need to be careful referring to classes from
+netty, guava, protobuf, gson, etc. (see the hbase-thirdparty pom.xml for what
+it provides). Devs must refer to the hbase-thirdparty provided classes. In
+practice, this is usually not an issue (though it can be a bit of a pain). You
+will have to hunt for the relocated version of your particular class. You'll
+find it by prepending the general relocation prefix of 
`org.apache.hadoop.hbase.shaded.`.
+For example if you are looking for `com.google.protobuf.Message`, the relocated
+version used by HBase internals can be found at
+`org.apache.hadoop.hbase.shaded.com.google.protobuf.Message`.
+
+For a few thirdparty libs, like protobuf (see the protobuf chapter in this book
+for the why), your IDE may give you both options -- the `com.google.protobuf.*`
+and the `org.apache.hadoop.hbase.shaded.com.google.protobuf.*` -- because both
+classes are on your CLASSPATH. Unless you are doing the particular juggling
+required in Coprocessor Endpoint development (again see above cited protobuf
+chapter), you'll want to use the shaded version, always.
+
+Of note, the relocation of netty is particular. The netty folks have put in
+place a facility to aid relocation; it seems like shading netty is a popular project.
+One case of this requires the setting of a peculiar system property on the JVM
+so that classes out in the bundled shared library (.so) can be found in their
+relocated location. Here is the property that needs to be set:
+
+`-Dorg.apache.hadoop.hbase.shaded.io.netty.packagePrefix=org.apache.hadoop.hbase.shaded.`
+
+(Note that the trailing '.' is required). Starting hbase normally or when 
running
+test suites, the setting of this property is done for you. If you are doing 
something
+out of the ordinary, starting hbase from your own context, you'll need to 
provide
+this property on platforms that favor the bundled .so. See release notes on 
HBASE-18271
+for more. The complaint you see is something like the following:
+`Cause: java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMasterorg.apache.hadoop.hbase.shaded.io.netty.channel.epoll.`
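+
+If you are launching HBase code from your own context, a hedged sketch (the
+classpath and main class here are placeholders) is to pass the property on the
+JVM command line yourself:
+
+[source,bourne]
+----
+# The trailing '.' on the property value is required.
+$ java -cp "$(${HBASE_HOME}/bin/hbase classpath):my-app.jar" \
+    -Dorg.apache.hadoop.hbase.shaded.io.netty.packagePrefix=org.apache.hadoop.hbase.shaded. \
+    com.example.MyEmbeddedHBaseMain
+----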
+
+If running unit tests and you run into the above message, add the system 
property
+to your surefire configuration by doing like the below:
+
+[source,xml]
+----
+  <plugin>
+    <artifactId>maven-surefire-plugin</artifactId>
+    <configuration>
+      <systemPropertyVariables>
+        
<org.apache.hadoop.hbase.shaded.io.netty.packagePrefix>org.apache.hadoop.hbase.shaded.</org.apache.hadoop.hbase.shaded.io.netty.packagePrefix>
+      </systemPropertyVariables>
+    </configuration>
+  </plugin>
+----
+
+Again the trailing period in the value above is intended.
+
+The `hbase-thirdparty` project has a groupId of `org.apache.hbase.thirdparty`.
+As of this writing, it provides three jars; one for netty with an artifactId of
+`hbase-thirdparty-netty`, one for protobuf at `hbase-thirdparty-protobuf` and then
+a jar for all else -- gson, guava -- at `hbase-thirdparty-miscellaneous`.
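+
+As a rough illustration (the jar path and version below are placeholders --
+check what your build actually pulls in), you can confirm where a relocated
+class lives by listing a thirdparty jar's contents:
+
+[source,bourne]
+----
+# Find the thirdparty jars on the hbase classpath.
+$ ${HBASE_HOME}/bin/hbase classpath | tr ':' '\n' | grep -i thirdparty
+# List the relocated guava classes inside the 'miscellaneous' jar.
+$ unzip -l /path/to/hbase-thirdparty-miscellaneous-1.0.0.jar | \
+    grep 'org/apache/hadoop/hbase/shaded/com/google/common/'
+----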
+
+The hbase-thirdparty artifacts are a product produced by the Apache HBase
+project under the aegis of the HBase Project Management Committee. Releases
+are done via the usual voting process on the hbase dev mailing list. If you find
+an issue in hbase-thirdparty, use the hbase JIRA and mailing lists to post notice.
+
 [[hbase.archetypes.development]]
 === Development of HBase-related Maven archetypes
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/external_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc 
b/src/main/asciidoc/_chapters/external_apis.adoc
index 2f85461..ffb6ee6 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -29,7 +29,7 @@
 
 This chapter will cover access to Apache HBase either through non-Java 
languages and
 through custom protocols. For information on using the native HBase APIs, 
refer to
-link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the
+link:https://hbase.apache.org/apidocs/index.html[User API Reference] and the
 <<hbase_apis,HBase APIs>> chapter.
 
 == REST
@@ -288,18 +288,17 @@ your filter to the file. For example, to return only rows 
for
 which keys start with <codeph>u123</codeph> and use a batch size
 of 100, the filter file would look like this:
 
-+++
-<pre>
-&lt;Scanner batch="100"&gt;
-  &lt;filter&gt;
+[source,xml]
+----
+<Scanner batch="100">
+  <filter>
     {
       "type": "PrefixFilter",
       "value": "u123"
     }
-  &lt;/filter&gt;
-&lt;/Scanner&gt;
-</pre>
-+++
+  </filter>
+</Scanner>
+----
 
 Pass the file to the `-d` argument of the `curl` request.
 |curl -vi -X PUT \
@@ -626,7 +625,9 @@ Documentation about Thrift has moved to <<thrift>>.
 == C/C++ Apache HBase Client
 
 FB's Chip Turner wrote a pure C/C++ client.
-link:https://github.com/facebook/native-cpp-hbase-client[Check it out].
+link:https://github.com/hinaria/native-cpp-hbase-client[Check it out].
+
+For a C++ client implementation, see link:https://issues.apache.org/jira/browse/HBASE-14850[HBASE-14850].
 
 [[jdo]]
 
@@ -640,8 +641,8 @@ represent persistent data.
 This code example has the following dependencies:
 
 . HBase 0.90.x or newer
-. commons-beanutils.jar (http://commons.apache.org/)
-. commons-pool-1.5.5.jar (http://commons.apache.org/)
+. commons-beanutils.jar (https://commons.apache.org/)
+. commons-pool-1.5.5.jar (https://commons.apache.org/)
 . transactional-tableindexed for HBase 0.90 
(https://github.com/hbase-trx/hbase-transactional-tableindexed)
 
 .Download `hbase-jdo`
@@ -801,7 +802,7 @@ with HBase.
 ----
 resolvers += "Apache HBase" at 
"https://repository.apache.org/content/repositories/releases";
 
-resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/";
+resolvers += "Thrift" at "https://people.apache.org/~rawson/repo/";
 
 libraryDependencies ++= Seq(
     "org.apache.hadoop" % "hadoop-core" % "0.20.2",

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/faq.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/faq.adoc 
b/src/main/asciidoc/_chapters/faq.adoc
index 9034d4b..0e498ac 100644
--- a/src/main/asciidoc/_chapters/faq.adoc
+++ b/src/main/asciidoc/_chapters/faq.adoc
@@ -33,10 +33,10 @@ When should I use HBase?::
   See <<arch.overview>> in the Architecture chapter.
 
 Are there other HBase FAQs?::
-  See the FAQ that is up on the wiki, 
link:http://wiki.apache.org/hadoop/Hbase/FAQ[HBase Wiki FAQ].
+  See the FAQ that is up on the wiki, 
link:https://wiki.apache.org/hadoop/Hbase/FAQ[HBase Wiki FAQ].
 
 Does HBase support SQL?::
-  Not really. SQL-ish support for HBase via link:http://hive.apache.org/[Hive] 
is in development, however Hive is based on MapReduce which is not generally 
suitable for low-latency requests. See the <<datamodel>> section for examples 
on the HBase client.
+  Not really. SQL-ish support for HBase via 
link:https://hive.apache.org/[Hive] is in development, however Hive is based on 
MapReduce which is not generally suitable for low-latency requests. See the 
<<datamodel>> section for examples on the HBase client.
 
 How can I find examples of NoSQL/HBase?::
   See the link to the BigTable paper in <<other.info>>, as well as the other 
papers.

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc 
b/src/main/asciidoc/_chapters/getting_started.adoc
index 0e50273..2fdb949 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -51,7 +51,7 @@ Apart from downloading HBase, this procedure should take less 
than 10 minutes.
 
 Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1.
 Ubuntu and some other distributions default to 127.0.1.1 and this will cause
-problems for you. See link:http://devving.com/?p=414[Why does HBase care about 
/etc/hosts?] for detail
+problems for you. See 
link:https://web-beta.archive.org/web/20140104070155/http://blog.devving.com/why-does-hbase-care-about-etchosts[Why
 does HBase care about /etc/hosts?] for detail
 
 The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, 
on Ubuntu. Use this as a template if you run into trouble.
 [listing]
@@ -70,7 +70,7 @@ See <<java,Java>> for information about supported JDK 
versions.
 === Get Started with HBase
 
 .Procedure: Download, Configure, and Start HBase in Standalone Mode
-. Choose a download site from this list of 
link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
+. Choose a download site from this list of 
link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
   Click on the suggested top link.
   This will take you to a mirror of _HBase Releases_.
   Click on the folder named _stable_ and then download the binary file that 
ends in _.tar.gz_ to your local filesystem.
@@ -307,7 +307,7 @@ You can skip the HDFS configuration to continue storing 
your data in the local f
 This procedure assumes that you have configured Hadoop and HDFS on your local 
system and/or a remote
 system, and that they are running and available. It also assumes you are using 
Hadoop 2.
 The guide on
-link:http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting
 up a Single Node Cluster]
+link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting
 up a Single Node Cluster]
 in the Hadoop documentation is a good starting point.
 ====
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc 
b/src/main/asciidoc/_chapters/hbase-default.adoc
index 6b11945..9b3cfb7 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -376,23 +376,6 @@ The WAL file writer implementation.
 .Default
 `org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter`
 
-
-[[hbase.master.distributed.log.replay]]
-*`hbase.master.distributed.log.replay`*::
-+
-.Description
-Enable 'distributed log replay' as default engine splitting
-    WAL files on server crash.  This default is new in hbase 1.0.  To fall
-    back to the old mode 'distributed log splitter', set the value to
-    'false'.  'Disributed log replay' improves MTTR because it does not
-    write intermediate files.  'DLR' required that 'hfile.format.version'
-    be set to version 3 or higher.
-
-+
-.Default
-`true`
-
-
 [[hbase.regionserver.global.memstore.size]]
 *`hbase.regionserver.global.memstore.size`*::
 +
@@ -461,11 +444,12 @@ The host name or IP address of the name server (DNS)
 
       A split policy determines when a region should be split. The various 
other split policies that
       are available currently are ConstantSizeRegionSplitPolicy, 
DisabledRegionSplitPolicy,
-      DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc.
+      DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy,
+      BusyRegionSplitPolicy, SteppingSplitPolicy etc.
 
 +
 .Default
-`org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy`
+`org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy`
 
 
 [[zookeeper.session.timeout]]
@@ -475,7 +459,7 @@ The host name or IP address of the name server (DNS)
 ZooKeeper session timeout in milliseconds. It is used in two different ways.
       First, this value is used in the ZK client that HBase uses to connect to 
the ensemble.
       It is also used by HBase when it starts a ZK server and it is passed as 
the 'maxSessionTimeout'. See
-      
http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
+      
https://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
       For example, if an HBase region server connects to a ZK ensemble that's 
also managed
       by HBase, then the
       session timeout will be the one specified by this configuration. But, a 
region server that connects
@@ -539,7 +523,7 @@ The host name or IP address of the name server (DNS)
 +
 .Description
 Port used by ZooKeeper peers to talk to each other.
-    See 
http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
+    See 
https://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
     for more information.
 +
 .Default
@@ -551,7 +535,7 @@ Port used by ZooKeeper peers to talk to each other.
 +
 .Description
 Port used by ZooKeeper for leader election.
-    See 
http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
+    See 
https://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
     for more information.
 +
 .Default
@@ -636,7 +620,7 @@ Property from ZooKeeper's config zoo.cfg.
 *`hbase.client.write.buffer`*::
 +
 .Description
-Default size of the HTable client write buffer in bytes.
+Default size of the BufferedMutator write buffer in bytes.
     A bigger buffer takes more memory -- on both the client and server
     side since server instantiates the passed write buffer to process
     it -- but a larger buffer size reduces the number of RPCs made.
@@ -694,7 +678,7 @@ The maximum number of concurrent tasks a single HTable 
instance will
     send to a single region server.
 +
 .Default
-`5`
+`2`
 
 
 [[hbase.client.max.perregion.tasks]]
@@ -1263,9 +1247,8 @@ A comma-separated list of sizes for buckets for the 
bucketcache
 +
 .Description
 The HFile format version to use for new files.
-      Version 3 adds support for tags in hfiles (See 
http://hbase.apache.org/book.html#hbase.tags).
-      Distributed Log Replay requires that tags are enabled. Also see the 
configuration
-      'hbase.replication.rpc.codec'.
+      Version 3 adds support for tags in hfiles (See 
https://hbase.apache.org/book.html#hbase.tags).
+      Also see the configuration 'hbase.replication.rpc.codec'.
 
 +
 .Default
@@ -1963,7 +1946,7 @@ If the DFSClient configuration
 
       Class used to execute the regions balancing when the period occurs.
       See the class comment for more on how it works
-      
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
+      
https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
       It replaces the DefaultLoadBalancer as the default (since renamed
       as the SimpleLoadBalancer).
 
@@ -2023,17 +2006,6 @@ A comma-separated list of
 .Default
 ``
 
-
-[[hbase.coordinated.state.manager.class]]
-*`hbase.coordinated.state.manager.class`*::
-+
-.Description
-Fully qualified name of class implementing coordinated state manager.
-+
-.Default
-`org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager`
-
-
 [[hbase.regionserver.storefile.refresh.period]]
 *`hbase.regionserver.storefile.refresh.period`*::
 +
@@ -2111,7 +2083,7 @@ Fully qualified name of class implementing coordinated 
state manager.
 
 +
 .Default
-`10`
+`16`
 
 
 [[hbase.replication.rpc.codec]]

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/hbase_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_apis.adoc 
b/src/main/asciidoc/_chapters/hbase_apis.adoc
index f27c9dc..e466db9 100644
--- a/src/main/asciidoc/_chapters/hbase_apis.adoc
+++ b/src/main/asciidoc/_chapters/hbase_apis.adoc
@@ -28,7 +28,7 @@
 :experimental:
 
 This chapter provides information about performing operations using HBase 
native APIs.
-This information is not exhaustive, and provides a quick reference in addition 
to the link:http://hbase.apache.org/apidocs/index.html[User API Reference].
+This information is not exhaustive, and provides a quick reference in addition 
to the link:https://hbase.apache.org/apidocs/index.html[User API Reference].
 The examples here are not comprehensive or complete, and should be used for 
purposes of illustration only.
 
 Apache HBase also works with multiple external APIs.

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc 
b/src/main/asciidoc/_chapters/mapreduce.adoc
index dfa843a..cc9dce4 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -27,10 +27,10 @@
 :icons: font
 :experimental:
 
-Apache MapReduce is a software framework used to analyze large amounts of 
data, and is the framework used most often with 
link:http://hadoop.apache.org/[Apache Hadoop].
+Apache MapReduce is a software framework used to analyze large amounts of 
data. It is provided by link:https://hadoop.apache.org/[Apache Hadoop].
 MapReduce itself is out of the scope of this document.
-A good place to get started with MapReduce is 
http://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html.
-MapReduce version 2 (MR2)is now part of 
link:http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/[YARN].
+A good place to get started with MapReduce is 
https://hadoop.apache.org/docs/r2.6.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html.
+MapReduce version 2 (MR2) is now part of 
link:https://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/[YARN].
 
 This chapter discusses specific configuration steps you need to take to use 
MapReduce on data within HBase.
 In addition, it discusses other interactions and issues between HBase and 
MapReduce
@@ -40,44 +40,88 @@ link:http://www.cascading.org/[alternative API] for 
MapReduce.
 .`mapred` and `mapreduce`
 [NOTE]
 ====
-There are two mapreduce packages in HBase as in MapReduce itself: 
_org.apache.hadoop.hbase.mapred_      and _org.apache.hadoop.hbase.mapreduce_.
-The former does old-style API and the latter the new style.
+There are two mapreduce packages in HBase as in MapReduce itself: 
_org.apache.hadoop.hbase.mapred_ and _org.apache.hadoop.hbase.mapreduce_.
+The former does the old-style API and the latter the new one.
 The latter has more facility though you can usually find an equivalent in the 
older package.
 Pick the package that goes with your MapReduce deploy.
-When in doubt or starting over, pick the _org.apache.hadoop.hbase.mapreduce_.
-In the notes below, we refer to o.a.h.h.mapreduce but replace with the 
o.a.h.h.mapred if that is what you are using.
+When in doubt or starting over, pick _org.apache.hadoop.hbase.mapreduce_.
+In the notes below, we refer to _o.a.h.h.mapreduce_ but replace with
+_o.a.h.h.mapred_ if that is what you are using.
 ====
 
 [[hbase.mapreduce.classpath]]
 == HBase, MapReduce, and the CLASSPATH
 
-By default, MapReduce jobs deployed to a MapReduce cluster do not have access 
to either the HBase configuration under `$HBASE_CONF_DIR` or the HBase classes.
+By default, MapReduce jobs deployed to a MapReduce cluster do not have access 
to
+either the HBase configuration under `$HBASE_CONF_DIR` or the HBase classes.
 
-To give the MapReduce jobs the access they need, you could add 
_hbase-site.xml_ to _$HADOOP_HOME/conf_ and add HBase jars to the 
_$HADOOP_HOME/lib_ directory.
-You would then need to copy these changes across your cluster. Or you can edit 
_$HADOOP_HOME/conf/hadoop-env.sh_ and add them to the `HADOOP_CLASSPATH` 
variable.
-However, this approach is not recommended because it will pollute your Hadoop 
install with HBase references.
-It also requires you to restart the Hadoop cluster before Hadoop can use the 
HBase data.
+To give the MapReduce jobs the access they need, you could add 
_hbase-site.xml_ to _$HADOOP_HOME/conf_ and add HBase jars to the 
_$HADOOP_HOME/lib_ directory.
+You would then need to copy these changes across your cluster. Or you could 
edit _$HADOOP_HOME/conf/hadoop-env.sh_ and add hbase dependencies to the 
`HADOOP_CLASSPATH` variable.
+Neither of these approaches is recommended because it will pollute your Hadoop 
install with HBase references.
+It also requires that you restart the Hadoop cluster before Hadoop can use the 
HBase data.
 
-The recommended approach is to let HBase add its dependency jars itself and 
use `HADOOP_CLASSPATH` or `-libjars`.
+The recommended approach is to let HBase add its dependency jars and use 
`HADOOP_CLASSPATH` or `-libjars`.
 
-Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration 
itself.
-The dependencies only need to be available on the local `CLASSPATH`.
-The following example runs the bundled HBase 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 MapReduce job against a table named `usertable`.
-If you have not set the environment variables expected in the command (the 
parts prefixed by a `$` sign and surrounded by curly braces), you can use the 
actual system paths instead.
-Be sure to use the correct version of the HBase JAR for your system.
-The backticks (``` symbols) cause the shell to execute the sub-commands, 
setting the output of `hbase classpath` (the command to dump HBase CLASSPATH) 
to `HADOOP_CLASSPATH`.
+Since HBase `0.90.x`, HBase adds its dependency JARs to the job configuration 
itself.
+The dependencies only need to be available on the local `CLASSPATH` and from 
here they'll be picked
+up and bundled into the fat job jar deployed to the MapReduce cluster. A basic 
trick just passes
+the full hbase classpath -- all hbase and dependent jars as well as 
configurations -- to the mapreduce
+job runner, letting the hbase utility pick out from the full classpath what it 
needs and adding those jars to the
+MapReduce job configuration (See the source at 
`TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job)` for how 
this is done).
+
+
+The following example runs the bundled HBase 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 MapReduce job against a table named `usertable`.
+It sets into `HADOOP_CLASSPATH` the jars hbase needs to run in a MapReduce 
context (including configuration files such as hbase-site.xml).
+Be sure to use the correct version of the HBase JAR for your system; replace 
the VERSION string in the below command line with the version of
+your local hbase install.  The backticks (``` symbols) cause the shell to 
execute the sub-commands, setting the output of `hbase classpath` into 
`HADOOP_CLASSPATH`.
 This example assumes you use a BASH-compatible shell.
 
 [source,bash]
 ----
-$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` 
${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-server-VERSION.jar 
rowcounter usertable
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
+  ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-VERSION.jar \
+  org.apache.hadoop.hbase.mapreduce.RowCounter usertable
+----
+
+The above command will launch a row counting mapreduce job against the hbase 
cluster that is pointed to by your local configuration on a cluster that the 
hadoop configs are pointing to.
+
+The main for the `hbase-mapreduce.jar` is a Driver that lists a few basic 
mapreduce tasks that ship with hbase.
+For example, presuming your install is hbase `2.0.0-SNAPSHOT`:
+
+[source,bash]
+----
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
+  ${HADOOP_HOME}/bin/hadoop jar 
${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar
+An example program must be given as the first argument.
+Valid program names are:
+  CellCounter: Count cells in HBase table.
+  WALPlayer: Replay WAL files.
+  completebulkload: Complete a bulk data load.
+  copytable: Export a table from local cluster to peer cluster.
+  export: Write table data to HDFS.
+  exportsnapshot: Export the specific snapshot to a given FileSystem.
+  import: Import data written by Export.
+  importtsv: Import data in TSV format.
+  rowcounter: Count rows in HBase table.
+  verifyrep: Compare the data from tables in two different clusters. WARNING: 
It doesn't work for incrementColumnValues'd cells since the timestamp is 
changed after being appended to the log.
+
+----
+
+You can use the above listed shortnames for mapreduce jobs as in the below 
re-run of the row counter job (again, presuming your install is hbase 
`2.0.0-SNAPSHOT`):
+
+[source,bash]
+----
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` \
+  ${HADOOP_HOME}/bin/hadoop jar 
${HBASE_HOME}/lib/hbase-mapreduce-2.0.0-SNAPSHOT.jar \
+  rowcounter usertable
 ----
 
-When the command runs, internally, the HBase JAR finds the dependencies it 
needs and adds them to the MapReduce job configuration.
-See the source at 
`TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job)` for how 
this is done.
+You might find the more selective `hbase mapredcp` tool output of interest; it 
lists the minimum set of jars needed
+to run a basic mapreduce job against an hbase install. It does not include 
configuration. You'll probably need to add
+the configuration if you want your MapReduce job to find the target cluster. You'll 
probably have to also add pointers to extra jars
+once you start to do anything of substance. Just specify the extras by passing 
the system property `-Dtmpjars` when
+you run `hbase mapredcp`. 
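+
+For example, a hedged sketch of wiring the `hbase mapredcp` output (plus the
+hbase configuration directory) into `HADOOP_CLASSPATH` for the same row counter
+job as above:
+
+[source,bash]
+----
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`:${HBASE_HOME}/conf \
+  ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-mapreduce-VERSION.jar \
+  rowcounter usertable
+----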
 
-The command `hbase mapredcp` can also help you dump the CLASSPATH entries 
required by MapReduce, which are the same jars 
`TableMapReduceUtil#addDependencyJars` would add.
-You can add them together with HBase conf directory to `HADOOP_CLASSPATH`.
 For jobs that do not package their dependencies or call 
`TableMapReduceUtil#addDependencyJars`, the following command structure is 
necessary:
 
 [source,bash]
@@ -215,10 +259,10 @@ $ ${HADOOP_HOME}/bin/hadoop jar 
${HBASE_HOME}/hbase-server-VERSION.jar rowcounte
 
 == HBase as a MapReduce Job Data Source and Data Sink
 
-HBase can be used as a data source, 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat],
 and data sink, 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]
 or 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.html[MultiTableOutputFormat],
 for MapReduce jobs.
-Writing MapReduce jobs that read or write HBase, it is advisable to subclass 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]
        and/or 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableReducer.html[TableReducer].
-See the do-nothing pass-through classes 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.html[IdentityTableMapper]
 and 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.html[IdentityTableReducer]
 for basic usage.
-For a more involved example, see 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 or review the `org.apache.hadoop.hbase.mapreduce.TestTableMapReduce` unit test.
+HBase can be used as a data source, 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat],
 and data sink, 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]
 or 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.html[MultiTableOutputFormat],
 for MapReduce jobs.
+When writing MapReduce jobs that read or write HBase, it is advisable to subclass 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]
        and/or 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableReducer.html[TableReducer].
+See the do-nothing pass-through classes 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.html[IdentityTableMapper]
 and 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.html[IdentityTableReducer]
 for basic usage.
+For a more involved example, see 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 or review the `org.apache.hadoop.hbase.mapreduce.TestTableMapReduce` unit test.
 
 If you run MapReduce jobs that use HBase as a source or sink, you need to 
specify the source and sink table and column names in your configuration.
 
@@ -231,7 +275,7 @@ On insert, HBase 'sorts' so there is no point 
double-sorting (and shuffling data
 If you do not need the Reduce, your map might emit counts of records processed 
for reporting at the end of the job, or set the number of Reduces to zero and 
use TableOutputFormat.
 If running the Reduce step makes sense in your case, you should typically use 
multiple reducers so that load is spread across the HBase cluster.
 
-A new HBase partitioner, the 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.html[HRegionPartitioner],
 can run as many reducers the number of existing regions.
+A new HBase partitioner, the 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.html[HRegionPartitioner],
 can run as many reducers as there are existing regions.
 The HRegionPartitioner is suitable when your table is large and your upload 
will not greatly alter the number of existing regions upon completion.
 Otherwise use the default partitioner.
 
@@ -242,7 +286,7 @@ For more on how this mechanism works, see 
<<arch.bulk.load>>.
 
 == RowCounter Example
 
-The included 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 MapReduce job uses `TableInputFormat` and does a count of all rows in the 
specified table.
+The included 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
 MapReduce job uses `TableInputFormat` and does a count of all rows in the 
specified table.
 To run it, use the following command:
 
 [source,bash]
@@ -262,13 +306,13 @@ If you have classpath errors, see 
<<hbase.mapreduce.classpath>>.
 [[splitter.default]]
 === The Default HBase MapReduce Splitter
 
-When 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat]
 is used to source an HBase table in a MapReduce job, its splitter will make a 
map task for each region of the table.
+When 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html[TableInputFormat]
 is used to source an HBase table in a MapReduce job, its splitter will make a 
map task for each region of the table.
 Thus, if there are 100 regions in the table, there will be 100 map-tasks for 
the job - regardless of how many column families are selected in the Scan.
 
 [[splitter.custom]]
 === Custom Splitters
 
-For those interested in implementing custom splitters, see the method 
`getSplits` in 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html[TableInputFormatBase].
+For those interested in implementing custom splitters, see the method 
`getSplits` in 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.html[TableInputFormatBase].
 That is where the logic for map-task assignment resides.
 
 [[mapreduce.example]]
@@ -308,7 +352,7 @@ if (!b) {
 }
 ----
 
-...and the mapper instance would extend 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]...
+...and the mapper instance would extend 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableMapper.html[TableMapper]...
 
 [source,java]
 ----
@@ -356,7 +400,7 @@ if (!b) {
 }
 ----
 
-An explanation is required of what `TableMapReduceUtil` is doing, especially 
with the reducer. 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]
 is being used as the outputFormat class, and several parameters are being set 
on the config (e.g., `TableOutputFormat.OUTPUT_TABLE`), as well as setting the 
reducer output key to `ImmutableBytesWritable` and reducer value to `Writable`.
+An explanation is required of what `TableMapReduceUtil` is doing, especially 
with the reducer. 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html[TableOutputFormat]
 is being used as the outputFormat class, and several parameters are being set 
on the config (e.g., `TableOutputFormat.OUTPUT_TABLE`), as well as setting the 
reducer output key to `ImmutableBytesWritable` and reducer value to `Writable`.
 These could be set by the programmer on the job and conf, but 
`TableMapReduceUtil` tries to make things easier.
 
 The following is the example mapper, which will create a `Put` matching the 
input `Result` and emit it.

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc 
b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 6181b13..d4478fa 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -332,7 +332,7 @@ See <<hfile_tool>>.
 === WAL Tools
 
 [[hlog_tool]]
-==== `FSHLog` tool
+==== FSHLog tool
 
 The main method on `FSHLog` offers manual split and dump facilities.
 Pass it WALs or the product of a split, the content of the _recovered.edits_.
@@ -353,9 +353,9 @@ Similarly you can force a split of a log file directory by 
doing:
 ----
 
 [[hlog_tool.prettyprint]]
-===== WAL Pretty Printer
+===== WALPrettyPrinter
 
-The WAL Pretty Printer is a tool with configurable options to print the 
contents of a WAL.
+The `WALPrettyPrinter` is a tool with configurable options to print the 
contents of a WAL.
 You can invoke it via the HBase cli with the 'wal' command.
 
 ----
@@ -365,7 +365,7 @@ You can invoke it via the HBase cli with the 'wal' command.
 .WAL Printing in older versions of HBase
 [NOTE]
 ====
-Prior to version 2.0, the WAL Pretty Printer was called the 
`HLogPrettyPrinter`, after an internal name for HBase's write ahead log.
+Prior to version 2.0, the `WALPrettyPrinter` was called the 
`HLogPrettyPrinter`, after an internal name for HBase's write ahead log.
 In those versions, you can print the contents of a WAL using the same 
configuration as above, but with the 'hlog' command.
 
 ----
@@ -444,12 +444,56 @@ See Jonathan Hsieh's 
link:https://blog.cloudera.com/blog/2012/06/online-hbase-ba
 === Export
 
 Export is a utility that will dump the contents of a table to HDFS in a sequence 
file.
-Invoke via:
+The Export can be run via a Coprocessor Endpoint or MapReduce. Invoke via:
 
+*mapreduce-based Export*
 ----
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> 
[<versions> [<starttime> [<endtime>]]]
 ----
 
+*endpoint-based Export*
+----
+$ bin/hbase org.apache.hadoop.hbase.coprocessor.Export <tablename> <outputdir> 
[<versions> [<starttime> [<endtime>]]]
+----
+
+*Comparison of Endpoint-based Export and MapReduce-based Export*
+|===
+||Endpoint-based Export|Mapreduce-based Export
+
+|HBase version requirement
+|2.0+
+|0.2.1+
+
+|Maven dependency
+|hbase-endpoint
+|hbase-mapreduce (2.0+), hbase-server (prior to 2.0)
+
+|Requirement before dump
+|load the `org.apache.hadoop.hbase.coprocessor.Export` endpoint on the target table
+|deploy the MapReduce framework
+
+|Read latency
+|low, directly read the data from region
+|normal, traditional RPC scan
+
+|Read Scalability
+|depends on the number of regions
+|depends on the number of mappers (see `TableInputFormatBase#getSplits`)
+
+|Timeout
+|operation timeout, configured by `hbase.client.operation.timeout`
+|scan timeout, configured by `hbase.client.scanner.timeout.period`
+
+|Permission requirement
+|READ, EXECUTE
+|READ
+
+|Fault tolerance
+|no
+|depends on MapReduce
+|===
+
+
 NOTE: To see usage instructions, run the command with no options. Available 
options include
 specifying column families and applying filters during the export.
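
For instance, a hedged sketch of the mapreduce-based Export restricted to one
column family; the table name, output directory, and family name are
placeholders, and the property name should be confirmed against the usage
output of your version:

----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D hbase.mapreduce.scan.column.family=f1 \
  mytable /export/mytable 1
----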
 
@@ -577,7 +621,7 @@ There are two ways to invoke this utility, with explicit 
classname and via the d
 
 .Explicit Classname
 ----
-$ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles 
<hdfs://storefileoutput> <tablename>
+$ bin/hbase org.apache.hadoop.hbase.tool.LoadIncrementalHFiles 
<hdfs://storefileoutput> <tablename>
 ----
 
 .Driver
@@ -620,7 +664,7 @@ To NOT run WALPlayer as a mapreduce job on your cluster, 
force it to run all in
 [[rowcounter]]
 === RowCounter and CellCounter
 
-link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
        is a mapreduce job to count all the rows of a table.
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]
        is a mapreduce job to count all the rows of a table.
 This is a good utility to use as a sanity check to ensure that HBase can read 
all the blocks of a table if there are any concerns of metadata inconsistency.
 It will run the mapreduce all in a single process but it will run faster if 
you have a MapReduce cluster in place for it to exploit. It is also possible to 
limit
 the time range of data to be scanned by using the `--starttime=[starttime]` 
and `--endtime=[endtime]` flags.
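
For example, a sketch of a time-bounded count; the table name and the
epoch-millisecond bounds are placeholders:

----
$ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter usertable \
  --starttime=1514764800000 --endtime=1514851200000
----
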
@@ -633,7 +677,7 @@ RowCounter only counts one version per cell.
 
 Note: caching for the input Scan is configured via 
`hbase.client.scanner.caching` in the job configuration.
 
-HBase ships another diagnostic mapreduce job called 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CellCounter.html[CellCounter].
+HBase ships another diagnostic mapreduce job called 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CellCounter.html[CellCounter].
 Like RowCounter, it is a diagnostic tool, but it gathers more fine-grained statistics about your table.
 The statistics gathered by CellCounter include:
 
@@ -666,7 +710,7 @@ See 
link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability
 === Offline Compaction Tool
 
 See the usage for the
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool].
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool].
 Run it like:
 
 [source, bash]
@@ -722,7 +766,7 @@ The LoadTestTool has received many updates in recent HBase 
releases, including s
 [[ops.regionmgt.majorcompact]]
 === Major Compaction
 
-Major compactions can be requested via the HBase shell or 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact%28java.lang.String%29[Admin.majorCompact].
+Major compactions can be requested via the HBase shell or 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-[Admin.majorCompact].
 
 Note: major compactions do NOT do region merges.
 See <<compaction,compaction>> for more information about compactions.
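
As an illustration, a major compaction of a single table can be requested from
the HBase shell (the table name is a placeholder):

----
hbase> major_compact 't1'
----
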
@@ -739,7 +783,7 @@ $ bin/hbase org.apache.hadoop.hbase.util.Merge <tablename> 
<region1> <region2>
 
 If you feel you have too many regions and want to consolidate them, Merge is 
the utility you need.
 Merge must be run when the cluster is down.
-See the 
link:http://ofps.oreilly.com/titles/9781449396107/performance.html[O'Reilly 
HBase
+See the 
link:https://web.archive.org/web/20111231002503/http://ofps.oreilly.com/titles/9781449396107/performance.html[O'Reilly
 HBase
           Book] for an example of usage.
 
 You will need to pass 3 parameters to this application.
@@ -868,7 +912,7 @@ But usually disks do the "John Wayne" -- i.e.
 take a while to go down spewing errors in _dmesg_ -- or for some reason, run 
much slower than their companions.
 In this case you want to decommission the disk.
 You have two options.
-You can 
link:http://wiki.apache.org/hadoop/FAQ#I_want_to_make_a_large_cluster_smaller_by_taking_out_a_bunch_of_nodes_simultaneously._How_can_this_be_done.3F[decommission
+You can 
link:https://wiki.apache.org/hadoop/FAQ#I_want_to_make_a_large_cluster_smaller_by_taking_out_a_bunch_of_nodes_simultaneously._How_can_this_be_done.3F[decommission
            the datanode] or, less disruptively (only the bad disk's data 
will be re-replicated), you can stop the datanode, unmount the bad volume (you 
can't unmount a volume while the datanode is using it), and then restart the 
datanode (presuming you have set dfs.datanode.failed.volumes.tolerated > 0). The 
regionserver will throw some errors in its logs as it recalibrates where to get 
its data from -- it will likely roll its WAL too -- but in general, apart from 
some latency spikes, it should keep on chugging.
 
 .Short Circuit Reads
@@ -1006,7 +1050,7 @@ In this case, or if you are in a OLAP environment and 
require having locality, t
 [[hbase_metrics]]
 == HBase Metrics
 
-HBase emits metrics which adhere to the 
link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[Hadoop
 metrics] API.
+HBase emits metrics which adhere to the 
link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html[Hadoop
 Metrics] API.
 Starting with HBase 0.95footnote:[The Metrics system was redone in
           HBase 0.96. See Migration
             to the New Metrics Hotness – Metrics2 by Elliot Clark for 
detail], HBase is configured to emit a default set of metrics with a default 
sampling period of every 10 seconds.
@@ -1021,7 +1065,7 @@ To configure metrics for a given region server, edit the 
_conf/hadoop-metrics2-h
 Restart the region server for the changes to take effect.
 
 To change the sampling rate for the default sink, edit the line beginning with 
`*.period`.
-To filter which metrics are emitted or to extend the metrics framework, see 
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
+To filter which metrics are emitted or to extend the metrics framework, see 
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
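
As a sketch, the relevant line in the metrics properties file referenced above
might look like the following after raising the period to 30 seconds (the
value is illustrative):

----
*.period=30
----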
 
 .HBase Metrics and Ganglia
 [NOTE]
@@ -1029,7 +1073,7 @@ To filter which metrics are emitted or to extend the 
metrics framework, see http
 By default, HBase emits a large number of metrics per region server.
 Ganglia may have difficulty processing all these metrics.
 Consider increasing the capacity of the Ganglia server or reducing the number 
of metrics emitted by HBase.
-See 
link:http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html#filtering[Metrics
 Filtering].
+See 
link:https://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html#filtering[Metrics
 Filtering].
 ====
 
 === Disabling Metrics
@@ -1287,7 +1331,7 @@ Have a look in the Web UI.
 == Cluster Replication
 
 NOTE: This information was previously available at
-link:http://hbase.apache.org#replication[Cluster Replication].
+link:https://hbase.apache.org/0.94/replication.html[Cluster Replication].
 
 HBase provides a cluster replication mechanism which allows you to keep one 
cluster's state synchronized with that of another cluster, using the 
write-ahead log (WAL) of the source cluster to propagate the changes.
 Some use cases for cluster replication include:
@@ -1323,9 +1367,11 @@ If a slave cluster does run out of room, or is 
inaccessible for other reasons, i
 .Consistency Across Replicated Clusters
 [WARNING]
 ====
-How your application builds on top of the HBase API matters when replication 
is in play. HBase's replication system provides at-least-once delivery of 
client edits for an enabled column family to each configured destination 
cluster. In the event of failure to reach a given destination, the replication 
system will retry sending edits in a way that might repeat a given message. 
Further more, there is not a guaranteed order of delivery for client edits. In 
the event of a RegionServer failing, recovery of the replication queue happens 
independent of recovery of the individual regions that server was previously 
handling. This means that it is possible for the not-yet-replicated edits to be 
serviced by a RegionServer that is currently slower to replicate than the one 
that handles edits from after the failure.
+How your application builds on top of the HBase API matters when replication 
is in play. HBase's replication system provides at-least-once delivery of 
client edits for an enabled column family to each configured destination 
cluster. In the event of failure to reach a given destination, the replication 
system will retry sending edits in a way that might repeat a given message. 
HBase provides two modes of replication: the original replication and serial 
replication. With the original mode, there is no guaranteed order of delivery 
for client edits. In the event of a RegionServer failing, recovery of the 
replication queue happens independently of recovery of the individual regions 
that server was previously handling. This means that it is possible for the 
not-yet-replicated edits to be serviced by a RegionServer that is currently 
slower to replicate than the one that handles edits from after the failure.
 
 The combination of these two properties (at-least-once delivery and the lack 
of message ordering) means that some destination clusters may end up in a 
different state if your application makes use of operations that are not 
idempotent, e.g. Increments.
+
+To solve this problem, HBase now supports serial replication, which sends edits 
to the destination cluster in the same order in which the client requests were 
made on the source cluster.
 ====
 
 .Terminology Changes
@@ -1351,6 +1397,7 @@ image::hbase_replication_diagram.jpg[]
 HBase replication borrows many concepts from the [firstterm]_statement-based 
replication_ design used by MySQL.
 Instead of SQL statements, entire WALEdits (consisting of multiple cell 
inserts coming from Put and Delete operations on the clients) are replicated in 
order to maintain atomicity.
 
+[[hbase.replication.management]]
 === Managing and Configuring Cluster Replication
 .Cluster Configuration Overview
 
@@ -1365,11 +1412,15 @@ Instead of SQL statements, entire WALEdits (consisting 
of multiple cell inserts
 LOG.info("Replicating "+clusterId + " -> " + peerClusterId);
 ----
 
+.Serial Replication Configuration
+See <<Serial Replication,Serial Replication>>
+
 .Cluster Management Commands
 add_peer <ID> <CLUSTER_KEY>::
   Adds a replication relationship between two clusters. +
   * ID -- a unique string, which must not contain a hyphen.
   * CLUSTER_KEY: composed using the following template, with appropriate 
place-holders: 
`hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent`
+  * STATE (optional): ENABLED or DISABLED; the default value is ENABLED
 list_peers:: list all replication relationships known by this cluster
 enable_peer <ID>::
   Enable a previously-disabled replication relationship
@@ -1385,6 +1436,40 @@ enable_table_replication <TABLE_NAME>::
 disable_table_replication <TABLE_NAME>::
   Disable the table replication switch for all its column families.
 
+=== Serial Replication
+
+Note: this feature is introduced in HBase 1.5
+
+.Function of serial replication
+
+Serial replication pushes logs to the destination cluster in the same order in 
which they reach the source cluster.
+
+.Why is serial replication needed?
+In HBase replication, mutations are pushed to the destination cluster by reading 
the WAL in each region server. There is a queue of WAL files, so they can be read 
in order of creation time. However, when a region move or RS failure occurs in the 
source cluster, the WAL entries that were not pushed before the region move or 
RS failure are pushed by the original RS (for a region move) or by another RS which 
takes over the remaining WAL of the dead RS (for an RS failure), while the new 
entries for the same region(s) are pushed by the RS which now serves the region(s). 
The two RSs push the WAL entries of the same region concurrently, without 
coordination.
+
+This treatment can lead to data inconsistency between the source and 
destination clusters:
+
+1. A put and then a delete are written to the source cluster.
+
+2. Due to a region move or RS failure, they are pushed by different 
replication-source threads to the peer cluster.
+
+3. If the delete is pushed to the peer cluster before the put, and a flush and 
major compaction occur in the peer cluster before the put arrives, the delete is 
collected and the put remains in the peer cluster, while in the source cluster 
the put is masked by the delete, hence data inconsistency between the source 
and destination clusters.
+
+
+.Serial replication configuration
+
+. Set REPLICATION_SCOPE=>2 on the column family which is to be replicated 
serially when creating tables (see the shell sketch after this list).
+
+ REPLICATION_SCOPE is a column family level attribute. Its value can be 0, 1 
or 2. Value 0 means replication is disabled, 1 means replication is enabled but 
log order is not guaranteed, and 2 means serial replication is enabled.
+
+. This feature relies on zk-less assignment and conflicts with distributed 
log replay, so users must set hbase.assignment.usezk=false and 
hbase.master.distributed.log.replay=false to use this feature. (Note that 
distributed log replay is deprecated and has already been removed in 2.0.)
+
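A shell sketch of step 1 above; the table and column family names are
placeholders:

----
hbase> create 't1', {NAME => 'f1', REPLICATION_SCOPE => 2}
----
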
+.Limitations in serial replication
+
+Currently, the logs on one RS are read and pushed to one peer in a single 
thread, so if one log entry has not been pushed, all logs after it are blocked. 
A WAL file may contain edits from different tables; if one of those tables (or 
one of its CFs) has REPLICATION_SCOPE set to 2 and is blocked, then all edits 
are blocked, even though the other tables do not need serial replication. If 
you want to prevent this, you need to split these tables/CFs into different 
peers.
+
+More details about serial replication can be found in 
link:https://issues.apache.org/jira/browse/HBASE-9465[HBASE-9465].
+
 === Verifying Replicated Data
 
 The `VerifyReplication` MapReduce job, which is included in HBase, performs a 
systematic comparison of replicated data between two different clusters. Run 
the VerifyReplication job on the master cluster, supplying it with the peer ID 
and table name to use for validation. You can limit the verification further by 
specifying a time range or specific families. The job's short name is 
`verifyrep`. To run the job, use a command like the following:
@@ -1414,7 +1499,7 @@ A single WAL edit goes through several steps in order to 
be replicated to a slav
 . The edit is tagged with the master's UUID and added to a buffer.
   When the buffer is filled, or the reader reaches the end of the file, the 
buffer is sent to a random region server on the slave cluster.
 . The region server reads the edits sequentially and separates them into 
buffers, one buffer per table.
-  After all edits are read, each buffer is flushed using 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table],
 HBase's normal client.
+  After all edits are read, each buffer is flushed using 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table],
 HBase's normal client.
   The master's UUID and the UUIDs of slaves which have already consumed the 
data are preserved in the edits as they are applied, in order to prevent 
replication loops.
 . In the master, the offset for the WAL that is currently being replicated is 
registered in ZooKeeper.
 
@@ -2049,7 +2134,7 @@ The act of copying these files creates new HDFS metadata, 
which is why a restore
 === Live Cluster Backup - Replication
 
 This approach assumes that there is a second cluster.
-See the HBase page on 
link:http://hbase.apache.org/book.html#replication[replication] for more 
information.
+See the HBase page on 
link:https://hbase.apache.org/book.html#_cluster_replication[replication] for 
more information.
 
 [[ops.backup.live.copytable]]
 === Live Cluster Backup - CopyTable
@@ -2258,7 +2343,7 @@ as in <<snapshots_s3>>.
 - You must be using HBase 1.2 or higher with Hadoop 2.7.1 or
   higher. No version of HBase supports Hadoop 2.7.0.
 - Your hosts must be configured to be aware of the Azure blob storage 
filesystem.
-  See http://hadoop.apache.org/docs/r2.7.1/hadoop-azure/index.html.
+  See https://hadoop.apache.org/docs/r2.7.1/hadoop-azure/index.html.
 
 After you meet the prerequisites, follow the instructions
 in <<snapshots_s3>>, replacing the protocol specifier with `wasb://` or 
`wasbs://`.
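
For illustration only, exporting a snapshot to Azure blob storage might look
like the following; the snapshot name, container, and storage account are
placeholders, and the option names should be confirmed against the tool's
usage output for your version:

----
$ bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot MySnapshot \
  -copy-to 'wasb://mycontainer@myaccount.blob.core.windows.net/hbase' \
  -mappers 16
----
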
@@ -2321,7 +2406,7 @@ See <<gcpause,gcpause>>, 
<<trouble.log.gc,trouble.log.gc>> and elsewhere (TODO:
 Generally, fewer regions make for a smoother running cluster (you can always 
manually split the big regions later (if necessary) to spread the data, or 
request load, over the cluster); 20-200 regions per RS is a reasonable range.
 The number of regions cannot be configured directly (unless you go for fully 
<<disable.splitting,disable.splitting>>); adjust the region size to achieve the 
target number of regions given the table size.
 
-When configuring regions for multiple tables, note that most region settings 
can be set on a per-table basis via 
link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor],
 as well as shell commands.
+When configuring regions for multiple tables, note that most region settings 
can be set on a per-table basis via 
link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html[HTableDescriptor],
 as well as shell commands.
 These settings will override the ones in `hbase-site.xml`.
 That is useful if your tables have different workloads/use cases.
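
For instance, a hedged shell sketch of bumping the region size for a single
table (the table name and size value are illustrative):

----
hbase> alter 'my_table', MAX_FILESIZE => '10737418240'
----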
 
@@ -2478,7 +2563,7 @@ void rename(Admin admin, String oldTableName, TableName 
newTableName) {
 RegionServer Grouping (A.K.A `rsgroup`) is an advanced feature for
 partitioning regionservers into distinctive groups for strict isolation. It
 should only be used by users who are sophisticated enough to understand the
-full implications and have a sufficient background in managing HBase clusters. 
+full implications and have a sufficient background in managing HBase clusters.
 It was developed by Yahoo! and they run it at scale on their large grid 
cluster.
 See 
link:http://www.slideshare.net/HBaseCon/keynote-apache-hbase-at-yahoo-scale[HBase
 at Yahoo! Scale].
 
@@ -2491,20 +2576,20 @@ rsgroup at a time. By default, all tables and 
regionservers belong to the
 APIs. A custom balancer implementation tracks assignments per rsgroup and makes
 sure to move regions to the relevant regionservers in that rsgroup. The rsgroup
 information is stored in a regular HBase table, and a zookeeper-based read-only
-cache is used at cluster bootstrap time. 
+cache is used at cluster bootstrap time.
 
-To enable, add the following to your hbase-site.xml and restart your Master: 
+To enable, add the following to your hbase-site.xml and restart your Master:
 
 [source,xml]
 ----
- <property> 
-   <name>hbase.coprocessor.master.classes</name> 
-   <value>org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint</value> 
- </property> 
- <property> 
-   <name>hbase.master.loadbalancer.class</name> 
-   <value>org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer</value> 
- </property> 
+ <property>
+   <name>hbase.coprocessor.master.classes</name>
+   <value>org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint</value>
+ </property>
+ <property>
+   <name>hbase.master.loadbalancer.class</name>
+   <value>org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer</value>
+ </property>
 ----
 
 Then use the shell _rsgroup_ commands to create and manipulate RegionServer
@@ -2514,7 +2599,7 @@ rsgroup commands available in the hbase shell type:
 [source, bash]
 ----
  hbase(main):008:0> help 'rsgroup'
- Took 0.5610 seconds 
+ Took 0.5610 seconds
 ----
 
 High level, you create a rsgroup that is other than the `default` group using
@@ -2531,8 +2616,8 @@ Here is example using a few of the rsgroup  commands. To 
add a group, do as foll
 
 [source, bash]
 ----
- hbase(main):008:0> add_rsgroup 'my_group' 
- Took 0.5610 seconds 
+ hbase(main):008:0> add_rsgroup 'my_group'
+ Took 0.5610 seconds
 ----
 
 
@@ -2556,11 +2641,11 @@ ERROR: 
org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registere
 ====
 
 Add a server (specified by hostname + port) to the just-made group using the
-_move_servers_rsgroup_ command as follows: 
+_move_servers_rsgroup_ command as follows:
 
 [source, bash]
 ----
- hbase(main):010:0> move_servers_rsgroup 'my_group',['k.att.net:51129'] 
+ hbase(main):010:0> move_servers_rsgroup 'my_group',['k.att.net:51129']
 ----
 
 .Hostname and Port vs ServerName

http://git-wip-us.apache.org/repos/asf/hbase/blob/2e9a55be/src/main/asciidoc/_chapters/other_info.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/other_info.adoc 
b/src/main/asciidoc/_chapters/other_info.adoc
index 8bcbe0f..f2dd1b8 100644
--- a/src/main/asciidoc/_chapters/other_info.adoc
+++ b/src/main/asciidoc/_chapters/other_info.adoc
@@ -32,16 +32,14 @@
 === HBase Videos
 
 .Introduction to HBase
-* 
link:http://www.cloudera.com/content/cloudera/en/resources/library/presentation/chicago_data_summit_apache_hbase_an_introduction_todd_lipcon.html[Introduction
 to HBase] by Todd Lipcon (Chicago Data Summit 2011).
-* 
link:http://www.cloudera.com/videos/intorduction-hbase-todd-lipcon[Introduction 
to HBase] by Todd Lipcon (2010).
-link:http://www.cloudera.com/videos/hadoop-world-2011-presentation-video-building-realtime-big-data-services-at-facebook-with-hadoop-and-hbase[Building
 Real Time Services at Facebook with HBase] by Jonathan Gray (Hadoop World 
2011).
-
-link:http://www.cloudera.com/videos/hw10_video_how_stumbleupon_built_and_advertising_platform_using_hbase_and_hadoop[HBase
 and Hadoop, Mixing Real-Time and Batch Processing at StumbleUpon] by JD Cryans 
(Hadoop World 2010).
+* link:https://vimeo.com/23400732[Introduction to HBase] by Todd Lipcon 
(Chicago Data Summit 2011).
+* link:https://vimeo.com/26804675[Building Real Time Services at Facebook with 
HBase] by Jonathan Gray (Berlin Buzzwords 2011).
+* 
link:http://www.cloudera.com/videos/hw10_video_how_stumbleupon_built_and_advertising_platform_using_hbase_and_hadoop[The
 Multiple Uses Of HBase] by Jean-Daniel Cryans (Berlin Buzzwords 2011).
 
 [[other.info.pres]]
 === HBase Presentations (Slides)
 
-link:http://www.cloudera.com/content/cloudera/en/resources/library/hadoopworld/hadoop-world-2011-presentation-video-advanced-hbase-schema-design.html[Advanced
 HBase Schema Design] by Lars George (Hadoop World 2011).
+link:https://www.slideshare.net/cloudera/hadoop-world-2011-advanced-hbase-schema-design-lars-george-cloudera[Advanced
 HBase Schema Design] by Lars George (Hadoop World 2011).
 
 
link:http://www.slideshare.net/cloudera/chicago-data-summit-apache-hbase-an-introduction[Introduction
 to HBase] by Todd Lipcon (Chicago Data Summit 2011).
 
@@ -61,9 +59,7 @@ 
link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Rela
 
 link:https://blog.cloudera.com/blog/category/hbase/[Cloudera's HBase Blog] has 
a lot of links to useful HBase information.
 
-* 
link:https://blog.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/[CAP
 Confusion] is a relevant entry for background information on distributed 
storage systems.
-
-link:http://wiki.apache.org/hadoop/HBase/HBasePresentations[HBase Wiki] has a 
page with a number of presentations.
+link:https://blog.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/[CAP
 Confusion] is a relevant entry for background information on distributed 
storage systems.
 
 link:http://refcardz.dzone.com/refcardz/hbase[HBase RefCard] from DZone.
 
