http://git-wip-us.apache.org/repos/asf/hbase/blob/7139c90e/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index afe24fe..1776c9e 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -32,36 +32,35 @@
 
 Always start with the master log (TODO: Which lines?). Normally it's just printing the same lines over and over again.
 If not, then there's an issue.
-Google or link:http://search-hadoop.com[search-hadoop.com] should return some hits for those exceptions you're seeing. 
+Google or link:http://search-hadoop.com[search-hadoop.com] should return some hits for those exceptions you're seeing.
 
 An error rarely comes alone in Apache HBase: usually when something gets screwed up, what follows may be hundreds of exceptions and stack traces coming from all over the place.
-The best way to approach this type of problem is to walk the log up to where it all began, for example one trick with RegionServers is that they will print some metrics when aborting so grepping for _Dump_ should get you around the start of the problem. 
+The best way to approach this type of problem is to walk the log up to where it all began. For example, one trick with RegionServers is that they will print some metrics when aborting, so grepping for _Dump_ should get you to around the start of the problem.
 
-RegionServer suicides are ``normal'', as this is what they do when something goes wrong.
-For example, if ulimit and max transfer threads (the two most important initial settings, see <<ulimit,ulimit>> and <<dfs.datanode.max.transfer.threads,dfs.datanode.max.transfer.threads>>) aren't changed, it will make it impossible at some point for DataNodes to create new threads that from the HBase point of view is seen as if HDFS was gone.
+RegionServer suicides are 'normal', as this is what they do when something goes wrong.
+For example, if ulimit and max transfer threads (the two most important initial settings, see <<ulimit>> and <<dfs.datanode.max.transfer.threads>>) aren't changed, it will make it impossible at some point for DataNodes to create new threads, which from the HBase point of view is seen as if HDFS was gone.
 Think about what would happen if your MySQL database was suddenly unable to access files on your local file system; well, it's the same with HBase and HDFS.
 Another very common reason to see RegionServers committing seppuku is when they enter prolonged garbage collection pauses that last longer than the default ZooKeeper session timeout.
-For more information on GC pauses, see the link:http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3
-        part blog post] by Todd Lipcon and <<gcpause,gcpause>> above. 
+For more information on GC pauses, see the link:http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3 part blog post] by Todd Lipcon and <<gcpause>> above.
 
 [[trouble.log]]
 == Logs
 
-The key process logs are as follows... (replace <user> with the user that started the service, and <hostname> for the machine name) 
+The key process logs are as follows... (replace <user> with the user that started the service, and <hostname> for the machine name)
 
-NameNode: _$HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log_    
+NameNode: _$HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log_
 
-DataNode: _$HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log_    
+DataNode: _$HADOOP_HOME/logs/hadoop-<user>-datanode-<hostname>.log_
 
-JobTracker: _$HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log_    
+JobTracker: _$HADOOP_HOME/logs/hadoop-<user>-jobtracker-<hostname>.log_
 
-TaskTracker: _$HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log_    
+TaskTracker: _$HADOOP_HOME/logs/hadoop-<user>-tasktracker-<hostname>.log_
 
-HMaster: _$HBASE_HOME/logs/hbase-<user>-master-<hostname>.log_    
+HMaster: _$HBASE_HOME/logs/hbase-<user>-master-<hostname>.log_
 
-RegionServer: _$HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log_    
+RegionServer: _$HBASE_HOME/logs/hbase-<user>-regionserver-<hostname>.log_
 
-ZooKeeper: _TODO_    
+ZooKeeper: _TODO_
 
 [[trouble.log.locations]]
 === Log Locations
@@ -75,14 +74,14 @@ Production deployments need to run on a cluster.
 The NameNode log is on the NameNode server.
 The HBase Master is typically run on the NameNode server, as well as ZooKeeper.
 
-For smaller clusters the JobTracker is typically run on the NameNode server as well.
+For smaller clusters the JobTracker/ResourceManager is typically run on the NameNode server as well.
 
 [[trouble.log.locations.datanode]]
 ==== DataNode
 
 Each DataNode server will have a DataNode log for HDFS, as well as a RegionServer log for HBase.
 
-Additionally, each DataNode server will also have a TaskTracker log for MapReduce task execution.
+Additionally, each DataNode server will also have a TaskTracker/NodeManager log for MapReduce task execution.
 
 [[trouble.log.levels]]
 === Log Levels
@@ -97,12 +96,12 @@ To enable RPC-level logging, browse to the RegionServer UI and click on _Log Lev
 Set the log level to `DEBUG` for the package `org.apache.hadoop.ipc` (That's right, for `hadoop.ipc`, NOT `hbase.ipc`). Then tail the RegionServer's log.
 Analyze.
 
-To disable, set the logging level back to `INFO` level. 
+To disable, set the logging level back to `INFO` level.
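+
+For reference, a minimal sketch of making the same change programmatically with the log4j 1.x API that HBase clients use (an illustration of the approach, not an official recipe):
+
+[source,java]
+----
+import org.apache.log4j.Level;
+import org.apache.log4j.Logger;
+
+// Enable RPC-level logging (note: hadoop.ipc, NOT hbase.ipc).
+Logger.getLogger("org.apache.hadoop.ipc").setLevel(Level.DEBUG);
+// ...analyze the RegionServer log, then turn it back down:
+Logger.getLogger("org.apache.hadoop.ipc").setLevel(Level.INFO);
+----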
 
 [[trouble.log.gc]]
 === JVM Garbage Collection Logs
 
-HBase is memory intensive, and using the default GC you can see long pauses in all threads including the _Juliet Pause_ aka "GC of Death". To help debug this or confirm this is happening GC logging can be turned on in the Java virtual machine. 
+HBase is memory intensive, and using the default GC you can see long pauses in all threads including the _Juliet Pause_ aka "GC of Death". To help debug this or confirm this is happening, GC logging can be turned on in the Java virtual machine.
 
 To enable, in _hbase-env.sh_, uncomment one of the below lines:
 
@@ -132,7 +131,7 @@ At this point you should see logs like so:
 ----
 
 In this section, the first line indicates a 0.0007360 second pause for the CMS to initially mark.
-This pauses the entire VM, all threads for that period of time. 
+This pauses the entire VM, all threads for that period of time.
 
 The third line indicates a "minor GC", which pauses the VM for 0.0101110 seconds - aka 10 milliseconds.
 It has reduced the "ParNew" from about 5.5m to 576k.
@@ -158,16 +157,16 @@ Later on in this cycle we see:
 ----
 
 The first line indicates that the CMS concurrent mark (finding garbage) has taken 2.4 seconds.
-But this is a _concurrent_ 2.4 seconds, Java has not been paused at any point in time. 
+But this is a _concurrent_ 2.4 seconds, Java has not been paused at any point in time.
 
-There are a few more minor GCs, then there is a pause at the 2nd last line: 
+There are a few more minor GCs, then there is a pause at the 2nd last line:
 [source]
 ----
 
 64901.616: [GC[YG occupancy: 645 K (5568 K)]64901.616: [Rescan (parallel) , 0.0020210 secs]64901.618: [weak refs processing, 0.0027950 secs] [1 CMS-remark: 2866753K(3055704K)] 2867399K(3061272K), 0.0049380 secs] [Times: user=0.00 sys=0.01, real=0.01 secs]
-----      
+----
 
-The pause here is 0.0049380 seconds (aka 4.9 milliseconds) to 'remark' the heap. 
+The pause here is 0.0049380 seconds (aka 4.9 milliseconds) to 'remark' the heap.
 
 At this point the sweep starts, and you can watch the heap size go down:
 
@@ -180,20 +179,20 @@ At this point the sweep starts, and you can watch the heap size go down:
 64904.953: [CMS-concurrent-sweep: 2.030/3.332 secs] [Times: user=9.57 sys=0.26, real=3.33 secs]
 ----
 
-At this point, the CMS sweep took 3.332 seconds, and heap went from about ~ 2.8 GB to 1.3 GB (approximate). 
+At this point, the CMS sweep took 3.332 seconds, and the heap went from about 2.8 GB to 1.3 GB.
 
 The key point here is to keep all these pauses low.
-CMS pauses are always low, but if your ParNew starts growing, you can see minor GC pauses approach 100ms, exceed 100ms and hit as high at 400ms. 
+CMS pauses are always low, but if your ParNew starts growing, you can see minor GC pauses approach 100ms, exceed 100ms, and hit as high as 400ms.
 
 This can be due to the size of the ParNew, which should be relatively small.
-If your ParNew is very large after running HBase for a while, in one example a ParNew was about 150MB, then you might have to constrain the size of ParNew (The larger it is, the longer the collections take but if its too small, objects are promoted to old gen too quickly). In the below we constrain new gen size to 64m. 
+If your ParNew is very large after running HBase for a while (in one example, a ParNew was about 150MB), then you might have to constrain the size of ParNew (the larger it is, the longer the collections take, but if it's too small, objects are promoted to the old gen too quickly). In the below we constrain new gen size to 64m.
 
-Add the below line in _hbase-env.sh_: 
+Add the below line in _hbase-env.sh_:
 [source,bourne]
 ----
 
 export SERVER_GC_OPTS="$SERVER_GC_OPTS -XX:NewSize=64m -XX:MaxNewSize=64m"
-----      
+----
 
 Similarly, to enable GC logging for client processes, uncomment one of the below lines in _hbase-env.sh_:
 
@@ -212,8 +211,7 @@ Similarly, to enable GC logging for client processes, uncomment one of the below
 # If <FILE-PATH> is not replaced, the log file(.gc) would be generated in the HBASE_LOG_DIR.
 ----
 
-For more information on GC pauses, see the link:http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3
-          part blog post] by Todd Lipcon and <<gcpause,gcpause>> above. 
+For more information on GC pauses, see the link:http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/[3 part blog post] by Todd Lipcon and <<gcpause>> above.
 
 [[trouble.resources]]
 == Resources
@@ -222,19 +220,18 @@ For more information on GC pauses, see the link:http://www.cloudera.com/blog/201
 === search-hadoop.com
 
 link:http://search-hadoop.com[search-hadoop.com] indexes all the mailing lists and is great for historical searches.
-Search here first when you have an issue as its more than likely someone has already had your problem. 
+Search here first when you have an issue, as it's more than likely someone has already had your problem.
 
 [[trouble.resources.lists]]
 === Mailing Lists
 
-Ask a question on the link:http://hbase.apache.org/mail-lists.html[Apache
-          HBase mailing lists].
+Ask a question on the link:http://hbase.apache.org/mail-lists.html[Apache HBase mailing lists].
 The 'dev' mailing list is aimed at the community of developers actually building Apache HBase and for features currently under development, and 'user' is generally used for questions on released versions of Apache HBase.
 Before going to the mailing list, make sure your question has not already been answered by searching the mailing list archives first.
-Use <<trouble.resources.searchhadoop,trouble.resources.searchhadoop>>.
+Use <<trouble.resources.searchhadoop>>.
 Take some time crafting your question.
 See link:http://www.mikeash.com/getting_answers.html[Getting Answers] for ideas on crafting good questions.
-A quality question that includes all context and exhibits evidence the author has tried to find answers in the manual and out on lists is more likely to get a prompt response. 
+A quality question that includes all context and exhibits evidence the author has tried to find answers in the manual and out on lists is more likely to get a prompt response.
 
 [[trouble.resources.irc]]
 === IRC
@@ -244,7 +241,7 @@ A quality question that includes all context and exhibits evidence the author ha
 [[trouble.resources.jira]]
 === JIRA
 
-link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful when looking for Hadoop/HBase-specific issues. 
+link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful when looking for Hadoop/HBase-specific issues.
 
 [[trouble.tools]]
 == Tools
@@ -256,54 +253,54 @@ link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful wh
 ==== Master Web Interface
 
 The Master starts a web-interface on port 16010 by default.
-(Up to and including 0.98 this was port 60010) 
+(Up to and including 0.98 this was port 60010)
 
-The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap). The Master web UI allows navigation to each RegionServer's web UI. 
+The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap). The Master web UI allows navigation to each RegionServer's web UI.
 
 [[trouble.tools.builtin.webregion]]
 ==== RegionServer Web Interface
 
 RegionServers start a web-interface on port 16030 by default.
-(Up to an including 0.98 this was port 60030) 
+(Up to and including 0.98 this was port 60030)
 
-The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.). 
+The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.).
 
-See <<hbase_metrics,hbase metrics>> for more information in metric definitions. 
+See <<hbase_metrics>> for more information on metric definitions.
 
 [[trouble.tools.builtin.zkcli]]
 ==== zkcli
 
 `zkcli` is a very useful tool for investigating ZooKeeper-related issues.
-To invoke: 
+To invoke:
 [source,bourne]
 ----
 ./hbase zkcli -server host:port <cmd> <args>
-----          
+----
 
 The commands (and arguments) are:
 
 [source]
 ----
-       connect host:port
-       get path [watch]
-       ls path [watch]
-       set path data [version]
-       delquota [-n|-b] path
-       quit
-       printwatches on|off
-       create [-s] [-e] path data acl
-       stat path [watch]
-       close
-       ls2 path [watch]
-       history
-       listquota path
-       setAcl path acl
-       getAcl path
-       sync path
-       redo cmdno
-       addauth scheme auth
-       delete path [version]
-       setquota -n|-b val path
+  connect host:port
+  get path [watch]
+  ls path [watch]
+  set path data [version]
+  delquota [-n|-b] path
+  quit
+  printwatches on|off
+  create [-s] [-e] path data acl
+  stat path [watch]
+  close
+  ls2 path [watch]
+  history
+  listquota path
+  setAcl path acl
+  getAcl path
+  sync path
+  redo cmdno
+  addauth scheme auth
+  delete path [version]
+  setquota -n|-b val path
 ----
 
 [[trouble.tools.external]]
@@ -313,13 +310,13 @@ The commands (and arguments) are:
 ==== tail
 
 `tail` is the command line tool that lets you look at the end of a file.
-Add the ``-f'' option and it will refresh when new data is available.
-It's useful when you are wondering what's happening, for example, when a cluster is taking a long time to shutdown or startup as you can just fire a new terminal and tail the master log (and maybe a few RegionServers). 
+Add the `-f` option and it will refresh when new data is available.
+It's useful when you are wondering what's happening, for example, when a cluster is taking a long time to shut down or start up, as you can just fire a new terminal and tail the master log (and maybe a few RegionServers).
 
 [[trouble.tools.top]]
 ==== top
 
-`top` is probably one of the most important tool when first trying to see what's running on a machine and how the resources are consumed.
+`top` is probably one of the most important tools when first trying to see what's running on a machine and how the resources are consumed.
 Here's an example from a production system:
 
 [source]
@@ -338,15 +335,15 @@ Swap: 16008732k total,    14348k used, 15994384k free, 11106908k cached
 ----
 
 Here we can see that the system load average during the last five minutes is 3.75, which very roughly means that on average 3.75 threads were waiting for CPU time during these 5 minutes.
-In general, the ``perfect'' utilization equals to the number of cores, under that number the machine is under utilized and over that the machine is over utilized.
-This is an important concept, see this article to understand it more: link:http://www.linuxjournal.com/article/9001. 
+In general, _perfect_ utilization equals the number of cores; under that number the machine is underutilized, and over it the machine is overutilized.
+This is an important concept; see this article to understand it more: http://www.linuxjournal.com/article/9001.
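+
+As a rough programmatic illustration of that rule of thumb, a hedged sketch using the standard JMX beans (nothing HBase-specific):
+
+[source,java]
+----
+import java.lang.management.ManagementFactory;
+import java.lang.management.OperatingSystemMXBean;
+
+OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
+double load = os.getSystemLoadAverage();  // load average over the last minute, or -1 if unavailable
+int cores = os.getAvailableProcessors();
+// load well below `cores` suggests under-utilization; well above suggests over-utilization
+----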
 
 Apart from load, we can see that the system is using almost all its available RAM, but most of it is used for the OS cache (which is good). The swap only has a few KBs in it and this is wanted; high numbers would indicate swapping activity, which is the nemesis of performance of Java systems.
-Another way to detect swapping is when the load average goes through the roof (although this could also be caused by things like a dying disk, among others). 
+Another way to detect swapping is when the load average goes through the roof (although this could also be caused by things like a dying disk, among others).
 
 The list of processes isn't super useful by default; all we know is that 3 java processes are using about 111% of the CPUs.
-To know which is which, simply type ``c'' and each line will be expanded.
-Typing ``1'' will give you the detail of how each CPU is used instead of the average for all of them like shown here. 
+To know which is which, simply type `c` and each line will be expanded.
+Typing `1` will give you the detail of how each CPU is used instead of the average for all of them as shown here.
 
 [[trouble.tools.jps]]
 ==== jps
@@ -366,7 +363,7 @@ hadoop@sv4borg12:~$ jps
 18776 jmx
 ----
 
-In order, we see a: 
+In order, we see a:
 
 * Hadoop TaskTracker, manages the local Childs
 * HBase RegionServer, serves regions
@@ -391,7 +388,7 @@ hadoop   17789  155 35.2 9067824 8604364 ?     S<l  Mar04 9855:48 /usr/java/j
 
 `jstack` is one of the most important tools when trying to figure out what a java process is doing apart from looking at the logs.
 It has to be used in conjunction with jps in order to give it a process id.
-It shows a list of threads, each one has a name, and they appear in the order that they were created (so the top ones are the most recent threads). Here are a few example: 
+It shows a list of threads, each one has a name, and they appear in the order that they were created (so the top ones are the most recent threads). Here are a few examples:
 
 The main thread of a RegionServer waiting for something to do from the master:
 
@@ -452,12 +449,12 @@ A handler thread that's waiting for stuff to do (like put, delete, scan, etc):
 ----
 "IPC Server handler 16 on 60020" daemon prio=10 tid=0x00007f16b011d800 
nid=0x4a5e waiting on condition [0x00007f16afefd000..0x00007f16afefd9f0]
    java.lang.Thread.State: WAITING (parking)
-               at sun.misc.Unsafe.park(Native Method)
-               - parking to wait for  <0x00007f16cd3f8dd8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
-               at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
-               at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
-               at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
-               at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1013)
+          at sun.misc.Unsafe.park(Native Method)
+          - parking to wait for  <0x00007f16cd3f8dd8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
+          at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
+          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
+          at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
+          at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1013)
 ----
 
 And one that's busy doing an increment of a counter (it's in the phase where it's trying to create a scanner in order to read the last value):
@@ -466,21 +463,21 @@ And one that's busy doing an increment of a counter (it's in the phase where it'
 ----
 "IPC Server handler 66 on 60020" daemon prio=10 tid=0x00007f16b006e800 
nid=0x4a90 runnable [0x00007f16acb77000..0x00007f16acb77cf0]
    java.lang.Thread.State: RUNNABLE
-               at org.apache.hadoop.hbase.regionserver.KeyValueHeap.<init>(KeyValueHeap.java:56)
-               at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:79)
-               at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1202)
-               at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2209)
-               at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1063)
-               at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1055)
-               at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1039)
-               at org.apache.hadoop.hbase.regionserver.HRegion.getLastIncrement(HRegion.java:2875)
-               at org.apache.hadoop.hbase.regionserver.HRegion.incrementColumnValue(HRegion.java:2978)
-               at org.apache.hadoop.hbase.regionserver.HRegionServer.incrementColumnValue(HRegionServer.java:2433)
-               at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
-               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
-               at java.lang.reflect.Method.invoke(Method.java:597)
-               at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:560)
-               at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1027)
+          at org.apache.hadoop.hbase.regionserver.KeyValueHeap.<init>(KeyValueHeap.java:56)
+          at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:79)
+          at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1202)
+          at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2209)
+          at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1063)
+          at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1055)
+          at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1039)
+          at org.apache.hadoop.hbase.regionserver.HRegion.getLastIncrement(HRegion.java:2875)
+          at org.apache.hadoop.hbase.regionserver.HRegion.incrementColumnValue(HRegion.java:2978)
+          at org.apache.hadoop.hbase.regionserver.HRegionServer.incrementColumnValue(HRegionServer.java:2433)
+          at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
+          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
+          at java.lang.reflect.Method.invoke(Method.java:597)
+          at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:560)
+          at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1027)
 ----
 
 A thread that receives data from HDFS:
@@ -489,26 +486,26 @@ A thread that receives data from HDFS:
 ----
 "IPC Client (47) connection to sv4borg9/10.4.24.40:9000 from hadoop" daemon 
prio=10 tid=0x00007f16a02d0000 nid=0x4fa3 runnable 
[0x00007f16b517d000..0x00007f16b517dbf0]
    java.lang.Thread.State: RUNNABLE
-               at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
-               at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
-               at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
-               at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
-               - locked <0x00007f17d5b68c00> (a sun.nio.ch.Util$1)
-               - locked <0x00007f17d5b68be8> (a 
java.util.Collections$UnmodifiableSet)
-               - locked <0x00007f1877959b50> (a sun.nio.ch.EPollSelectorImpl)
-               at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
-               at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
-               at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
-               at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
-               at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
-               at java.io.FilterInputStream.read(FilterInputStream.java:116)
-               at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:304)
-               at 
java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
-               at 
java.io.BufferedInputStream.read(BufferedInputStream.java:237)
-               - locked <0x00007f1808539178> (a java.io.BufferedInputStream)
-               at java.io.DataInputStream.readInt(DataInputStream.java:370)
-               at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:569)
-               at org.apache.hadoop.ipc.Client$Connection.run(Client.java:477)
+          at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
+          at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
+          at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
+          at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
+          - locked <0x00007f17d5b68c00> (a sun.nio.ch.Util$1)
+          - locked <0x00007f17d5b68be8> (a 
java.util.Collections$UnmodifiableSet)
+          - locked <0x00007f1877959b50> (a sun.nio.ch.EPollSelectorImpl)
+          at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
+          at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
+          at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
+          at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
+          at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
+          at java.io.FilterInputStream.read(FilterInputStream.java:116)
+          at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:304)
+          at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
+          at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
+          - locked <0x00007f1808539178> (a java.io.BufferedInputStream)
+          at java.io.DataInputStream.readInt(DataInputStream.java:370)
+          at 
org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:569)
+          at org.apache.hadoop.ipc.Client$Connection.run(Client.java:477)
 ----
 
 And here is a master trying to recover a lease after a RegionServer died:
@@ -518,84 +515,82 @@ And here is a master trying to recover a lease after a RegionServer died:
 "LeaseChecker" daemon prio=10 tid=0x00000000407ef800 nid=0x76cd waiting on 
condition [0x00007f6d0eae2000..0x00007f6d0eae2a70]
 --
    java.lang.Thread.State: WAITING (on object monitor)
-               at java.lang.Object.wait(Native Method)
-               at java.lang.Object.wait(Object.java:485)
-               at org.apache.hadoop.ipc.Client.call(Client.java:726)
-               - locked <0x00007f6d1cd28f80> (a 
org.apache.hadoop.ipc.Client$Call)
-               at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
-               at $Proxy1.recoverBlock(Unknown Source)
-               at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2636)
-               at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2832)
-               at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:529)
-               at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:186)
-               at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:530)
-               at 
org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:619)
-               at 
org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1322)
-               at 
org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1210)
-               at 
org.apache.hadoop.hbase.master.HMaster.splitLogAfterStartup(HMaster.java:648)
-               at 
org.apache.hadoop.hbase.master.HMaster.joinCluster(HMaster.java:572)
-               at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:503)
+          at java.lang.Object.wait(Native Method)
+          at java.lang.Object.wait(Object.java:485)
+          at org.apache.hadoop.ipc.Client.call(Client.java:726)
+          - locked <0x00007f6d1cd28f80> (a org.apache.hadoop.ipc.Client$Call)
+          at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
+          at $Proxy1.recoverBlock(Unknown Source)
+          at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2636)
+          at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2832)
+          at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:529)
+          at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:186)
+          at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:530)
+          at 
org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:619)
+          at 
org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1322)
+          at 
org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1210)
+          at 
org.apache.hadoop.hbase.master.HMaster.splitLogAfterStartup(HMaster.java:648)
+          at 
org.apache.hadoop.hbase.master.HMaster.joinCluster(HMaster.java:572)
+          at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:503)
 ----
 
 [[trouble.tools.opentsdb]]
 ==== OpenTSDB
 
 link:http://opentsdb.net[OpenTSDB] is an excellent alternative to Ganglia as it uses Apache HBase to store all the time series and doesn't have to downsample.
-Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise. 
+Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.
 
-Here's an example of a cluster that's suffering from hundreds of compactions launched almost all around the same time, which severely affects the IO performance: (TODO: insert graph plotting compactionQueueSize) 
+Here's an example of a cluster that's suffering from hundreds of compactions launched almost all around the same time, which severely affects the IO performance: (TODO: insert graph plotting compactionQueueSize)
 
 It's a good practice to build dashboards with all the important graphs per machine and per cluster so that debugging issues can be done with a single quick look.
 For example, at StumbleUpon there's one dashboard per cluster with the most important metrics from both the OS and Apache HBase.
-You can then go down at the machine level and get even more detailed metrics. 
+You can then go down at the machine level and get even more detailed metrics.
 
 [[trouble.tools.clustersshtop]]
 ==== clusterssh+top
 
 clusterssh+top, it's like a poor man's monitoring system and it can be quite useful when you have only a few machines as it's very easy to set up.
 Starting clusterssh will give you one terminal per machine and another terminal in which whatever you type will be retyped in every window.
-This means that you can type ``top'' once and it will start it for all of your machines at the same time giving you full view of the current state of your cluster.
-You can also tail all the logs at the same time, edit files, etc. 
+This means that you can type `top` once and it will start it for all of your machines at the same time, giving you a full view of the current state of your cluster.
+You can also tail all the logs at the same time, edit files, etc.
 
 [[trouble.client]]
 == Client
 
-For more information on the HBase client, see <<client,client>>. 
+For more information on the HBase client, see <<client,client>>.
 
 [[trouble.client.scantimeout]]
 === ScannerTimeoutException or UnknownScannerException
 
 This is thrown if the time between RPC calls from the client to RegionServer exceeds the scan timeout.
 For example, if `Scan.setCaching` is set to 500, then there will be an RPC call to fetch the next batch of rows every 500 `.next()` calls on the ResultScanner because data is being transferred in blocks of 500 rows to the client.
-Reducing the setCaching value may be an option, but setting this value too low makes for inefficient processing on numbers of rows. 
+Reducing the setCaching value may be an option, but setting this value too low makes for inefficient processing on numbers of rows.
 
-See <<perf.hbase.client.caching,perf.hbase.client.caching>>. 
+See <<perf.hbase.client.caching>>.
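+
+A minimal client-side sketch of tuning scanner caching (assuming the usual `org.apache.hadoop.hbase.client` imports and an open `Connection`; the table name and the value 100 are illustrative):
+
+[source,java]
+----
+Table table = connection.getTable(TableName.valueOf("myTable"));
+Scan scan = new Scan();
+scan.setCaching(100); // fetch 100 rows per RPC so each batch stays within the scan timeout
+try (ResultScanner scanner = table.getScanner(scan)) {
+  for (Result result : scanner) {
+    // process each row; a new RPC is issued once the 100 cached rows are consumed
+  }
+}
+----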
 
 === Performance Differences in Thrift and Java APIs
 
-Poor performance, or even `ScannerTimeoutExceptions`, can occur if `Scan.setCaching` is too high, as discussed in <<trouble.client.scantimeout,trouble.client.scantimeout>>.
+Poor performance, or even `ScannerTimeoutExceptions`, can occur if `Scan.setCaching` is too high, as discussed in <<trouble.client.scantimeout>>.
 If the Thrift client uses the wrong caching settings for a given workload, performance can suffer compared to the Java API.
-To set caching for a given scan in the Thrift client, use the `scannerGetList(scannerId,
-          numRows)` method, where `numRows` is an integer representing the number of rows to cache.
+To set caching for a given scan in the Thrift client, use the `scannerGetList(scannerId, numRows)` method, where `numRows` is an integer representing the number of rows to cache.
 In one case, it was found that reducing the cache for Thrift scans from 1000 to 100 increased performance to near parity with the Java API given the same queries.
 
-See also Jesse Andersen's link:http://blog.cloudera.com/blog/2014/04/how-to-use-the-hbase-thrift-interface-part-3-using-scans/[blog post]  about using Scans with Thrift.
+See also Jesse Andersen's link:http://blog.cloudera.com/blog/2014/04/how-to-use-the-hbase-thrift-interface-part-3-using-scans/[blog post] about using Scans with Thrift.
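+
+A rough sketch of that pattern against the Thrift-generated Java client (method names follow the thrift1 `Hbase.Client` interface and may differ slightly by version; the table name, column list, and the value 100 are illustrative):
+
+[source,java]
+----
+// Assumes a connected Thrift transport and a generated Hbase.Client named `client`.
+int scannerId = client.scannerOpen(
+    ByteBuffer.wrap(Bytes.toBytes("myTable")),
+    ByteBuffer.wrap(new byte[0]),            // empty start row: begin at the first row
+    columns,                                 // List<ByteBuffer> of column names to return
+    new HashMap<ByteBuffer, ByteBuffer>());  // no extra scan attributes
+List<TRowResult> batch;
+while (!(batch = client.scannerGetList(scannerId, 100)).isEmpty()) {
+  // process up to 100 rows per round trip; tune this number to the workload
+}
+client.scannerClose(scannerId);
+----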
 
 [[trouble.client.lease.exception]]
-=== `LeaseException` when calling`Scanner.next`
+=== `LeaseException` when calling `Scanner.next`
 
-In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual <<trouble.client.scantimeout,trouble.client.scantimeout>>.
-Usually the source of the exception is `org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)`        (line number may vary). It tends to happen in the context of a slow/freezing RegionServer#next call.
+In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual <<trouble.client.scantimeout>>.
+Usually the source of the exception is `org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)` (line number may vary). It tends to happen in the context of a slow/freezing `RegionServer#next` call.
 It can be prevented by having `hbase.rpc.timeout` > `hbase.regionserver.lease.period`.
-Harsh J investigated the issue as part of the mailing list thread link:http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase,
-          mail # user - Lease does not exist exceptions]      
+Harsh J investigated the issue as part of the mailing list thread link:http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase, mail # user - Lease does not exist exceptions]
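+
+A hedged sketch of that prevention advice expressed with the configuration API (the millisecond values are illustrative, and `hbase.regionserver.lease.period` takes effect on the servers; it is set here only to make the relationship concrete):
+
+[source,java]
+----
+Configuration conf = HBaseConfiguration.create();
+// Keep the client RPC timeout strictly greater than the scanner lease period.
+conf.setInt("hbase.rpc.timeout", 120000);               // 120 seconds
+conf.setInt("hbase.regionserver.lease.period", 60000);  // 60 seconds (server-side setting)
+----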
 
 [[trouble.client.scarylogs]]
-=== Shell or client application throws lots of scary exceptions during normaloperation
+=== Shell or client application throws lots of scary exceptions during normal operation
 
-Since 0.20.0 the default log level for `org.apache.hadoop.hbase.*`is DEBUG. 
+Since 0.20.0 the default log level for `org.apache.hadoop.hbase.*` is DEBUG.
 
-On your clients, edit _$HBASE_HOME/conf/log4j.properties_ and change this: `log4j.logger.org.apache.hadoop.hbase=DEBUG` to this: `log4j.logger.org.apache.hadoop.hbase=INFO`, or even `log4j.logger.org.apache.hadoop.hbase=WARN`. 
+On your clients, edit _$HBASE_HOME/conf/log4j.properties_ and change this: `log4j.logger.org.apache.hadoop.hbase=DEBUG` to this: `log4j.logger.org.apache.hadoop.hbase=INFO`, or even `log4j.logger.org.apache.hadoop.hbase=WARN`.
 
 [[trouble.client.longpauseswithcompression]]
 === Long Client Pauses With Compression
@@ -604,20 +599,19 @@ This is a fairly frequent question on the Apache HBase dist-list.
 The scenario is that a client is typically inserting a lot of data into a relatively un-optimized HBase cluster.
 Compression can exacerbate the pauses, although it is not the source of the problem.
 
-See <<precreate.regions,precreate.regions>> on the pattern for pre-creating regions and confirm that the table isn't starting with a single region.
+See <<precreate.regions>> on the pattern for pre-creating regions and confirm that the table isn't starting with a single region.
 
-See <<perf.configurations,perf.configurations>> for cluster configuration, particularly `hbase.hstore.blockingStoreFiles`, `hbase.hregion.memstore.block.multiplier`, `MAX_FILESIZE` (region size), and `MEMSTORE_FLUSHSIZE.`      
+See <<perf.configurations>> for cluster configuration, particularly `hbase.hstore.blockingStoreFiles`, `hbase.hregion.memstore.block.multiplier`, `MAX_FILESIZE` (region size), and `MEMSTORE_FLUSHSIZE`.
 
 A slightly longer explanation of why pauses can happen is as follows: Puts are sometimes blocked on the MemStores which are blocked by the flusher thread which is blocked because there are too many files to compact because the compactor is given too many small files to compact and has to compact the same data repeatedly.
 This situation can occur even with minor compactions.
 Compounding this situation, Apache HBase doesn't compress data in memory.
 Thus, the 64MB that lives in the MemStore could become a 6MB file after compression - which results in a smaller StoreFile.
-The upside is that more data is packed into the same region, but performance is achieved by being able to write larger files - which is why HBase waits until the flushize before writing a new StoreFile.
+The upside is that more data is packed into the same region, but performance is achieved by being able to write larger files - which is why HBase waits until the flushsize before writing a new StoreFile.
 And smaller StoreFiles become targets for compaction.
-Without compression the files are much bigger and don't need as much compaction, however this is at the expense of I/O. 
+Without compression the files are much bigger and don't need as much compaction; however, this is at the expense of I/O.
 
-For additional information, see this thread on link:http://search-hadoop.com/m/WUnLM6ojHm1/Long+client+pauses+with+compression&subj=Long+client+pauses+with+compression[Long
-          client pauses with compression]. 
+For additional information, see this thread on link:http://search-hadoop.com/m/WUnLM6ojHm1/Long+client+pauses+with+compression&subj=Long+client+pauses+with+compression[Long client pauses with compression].
 
 [[trouble.client.security.rpc.krb]]
 === Secure Client Connect ([Caused by GSSException: No valid credentials provided...])
@@ -631,11 +625,12 @@ Secure Client Connect ([Caused by GSSException: No valid credentials provided
 
 This issue is caused by bugs in the MIT Kerberos replay_cache component, link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=1201[#1201] and link:http://krbdev.mit.edu/rt/Ticket/Display.html?id=5924[#5924].
 These bugs caused the old version of krb5-server to erroneously block subsequent requests sent from a Principal.
-This caused krb5-server to block the connections sent from one Client (one HTable instance with multi-threading connection instances for each regionserver); Messages, such as `Request is a replay (34)`, are logged in the client log You can ignore the messages, because HTable will retry 5 * 10 (50) times for each failed connection by default.
-HTable will throw IOException if any connection to the regionserver fails after the retries, so that the user client code for HTable instance can handle it further. 
+This caused krb5-server to block the connections sent from one Client (one HTable instance with multi-threading connection instances for each RegionServer); messages such as `Request is a replay (34)` are logged in the client log. You can ignore the messages, because HTable will retry 5 * 10 (50) times for each failed connection by default.
+HTable will throw IOException if any connection to the RegionServer fails after the retries, so that the user client code for the HTable instance can handle it further.
+NOTE: `HTable` is deprecated in HBase 1.0, in favor of `Table`.
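+
+A minimal sketch of the client-side handling described above, shown with the `Table` API that replaces `HTable` (names are illustrative):
+
+[source,java]
+----
+try (Table table = connection.getTable(TableName.valueOf("myTable"))) {
+  table.put(put); // individual attempts may log "Request is a replay (34)" and be retried
+} catch (IOException e) {
+  // thrown only after the configured retries are exhausted; handle or surface it here
+}
+----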
 
 Alternatively, update krb5-server to a version which solves these issues, such as krb5-server-1.10.3.
-See JIRA link:https://issues.apache.org/jira/browse/HBASE-10379[HBASE-10379] for more details. 
+See JIRA link:https://issues.apache.org/jira/browse/HBASE-10379[HBASE-10379] for more details.
 
 [[trouble.client.zookeeper]]
 === ZooKeeper Client Connection Errors
@@ -663,51 +658,46 @@ Errors like this...
  server localhost/127.0.0.1:2181
 ----
 
-... are either due to ZooKeeper being down, or unreachable due to network issues. 
+...are either due to ZooKeeper being down, or unreachable due to network issues.
 
-The utility <<trouble.tools.builtin.zkcli,trouble.tools.builtin.zkcli>> may help investigate ZooKeeper issues. 
+The utility <<trouble.tools.builtin.zkcli>> may help investigate ZooKeeper issues.
 
 [[trouble.client.oome.directmemory.leak]]
-=== Client running out of memory though heap size seems to be stable (but theoff-heap/direct heap keeps growing)
+=== Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)
 
-You are likely running into the issue that is described and worked through in the mail thread link:http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&subj=Re+Suspected+memory+leak[HBase,
-          mail # user - Suspected memory leak] and continued over in link:http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&subj=Re+FeedbackRe+Suspected+memory+leak[HBase,
-          mail # dev - FeedbackRe: Suspected memory leak].
+You are likely running into the issue that is described and worked through in the mail thread link:http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&subj=Re+Suspected+memory+leak[HBase, mail # user - Suspected memory leak] and continued over in link:http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&subj=Re+FeedbackRe+Suspected+memory+leak[HBase, mail # dev - FeedbackRe: Suspected memory leak].
 A workaround is passing your client-side JVM a reasonable value for `-XX:MaxDirectMemorySize`.
-By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize setting (if `-Xmx` is set). Try seting it to something smaller (for example, one user had success setting it to `1g` when they had a client-side heap of `12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit hefty.
-You want to make this setting client-side only especially if you are running the new experiemental server-side off-heap cache since this feature depends on being able to use big direct buffers (You may have to keep separate client-side and server-side config dirs). 
+By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize setting (if `-Xmx` is set). Try setting it to something smaller (for example, one user had success setting it to `1g` when they had a client-side heap of `12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit hefty.
+You want to make this setting client-side only, especially if you are running the new experimental server-side off-heap cache, since this feature depends on being able to use big direct buffers (You may have to keep separate client-side and server-side config dirs).
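+
+If you need to confirm that it is the direct heap that keeps growing, a hedged sketch of watching it from inside the client JVM (standard `java.lang.management` API, not an HBase facility):
+
+[source,java]
+----
+import java.lang.management.BufferPoolMXBean;
+import java.lang.management.ManagementFactory;
+
+for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
+  if ("direct".equals(pool.getName())) {
+    System.out.println("direct buffers: " + pool.getMemoryUsed() + " bytes used");
+  }
+}
+----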
 
 [[trouble.client.slowdown.admin]]
 === Client Slowdown When Calling Admin Methods (flush, compact, etc.)
 
 This is a client issue fixed by link:https://issues.apache.org/jira/browse/HBASE-5073[HBASE-5073] in 0.90.6.
-There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional invocation of the admin API. 
+There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional invocation of the admin API.
 
 [[trouble.client.security.rpc]]
 === Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])
 
-There can be several causes that produce this symptom. 
+There can be several causes that produce this symptom.
 
 First, check that you have a valid Kerberos ticket.
 One is required in order to set up communication with a secure Apache HBase cluster.
-Examine the ticket currently in the credential cache, if any, by running the klist command line utility.
-If no ticket is listed, you must obtain a ticket by running the kinit command with either a keytab specified, or by interactively entering a password for the desired principal. 
+Examine the ticket currently in the credential cache, if any, by running the `klist` command line utility.
+If no ticket is listed, you must obtain a ticket by running the `kinit` command with either a keytab specified, or by interactively entering a password for the desired principal.
 
-Then, consult the link:http://docs.oracle.com/javase/1.5.0/docs/guide/security/jgss/tutorials/Troubleshooting.html[Java
-          Security Guide troubleshooting section].
-The most common problem addressed there is resolved by setting javax.security.auth.useSubjectCredsOnly system property value to false. 
+Then, consult the link:http://docs.oracle.com/javase/1.5.0/docs/guide/security/jgss/tutorials/Troubleshooting.html[Java Security Guide troubleshooting section].
+The most common problem addressed there is resolved by setting the `javax.security.auth.useSubjectCredsOnly` system property to `false`.
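+
+A one-line sketch of that workaround, set early in the client's `main` (equivalent to passing `-Djavax.security.auth.useSubjectCredsOnly=false` on the command line):
+
+[source,java]
+----
+System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
+----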
 
 Because of a change in the format in which MIT Kerberos writes its credentials cache, there is a bug in the Oracle JDK 6 Update 26 and earlier that causes Java to be unable to read the Kerberos credentials cache created by versions of MIT Kerberos 1.8.1 or higher.
-If you have this problematic combination of components in your environment, to work around this problem, first log in with kinit and then immediately refresh the credential cache with kinit -R.
-The refresh will rewrite the credential cache without the problematic formatting. 
+If you have this problematic combination of components in your environment, to work around this problem, first log in with `kinit` and then immediately refresh the credential cache with `kinit -R`.
+The refresh will rewrite the credential cache without the problematic formatting.
 
-Finally, depending on your Kerberos configuration, you may need to install the link:http://docs.oracle.com/javase/1.4.2/docs/guide/security/jce/JCERefGuide.html[Java
-          Cryptography Extension], or JCE.
-Insure the JCE jars are on the classpath on both server and client systems. 
+Finally, depending on your Kerberos configuration, you may need to install the link:http://docs.oracle.com/javase/1.4.2/docs/guide/security/jce/JCERefGuide.html[Java Cryptography Extension], or JCE.
+Ensure the JCE jars are on the classpath on both server and client systems.
 
-You may also need to download the link:http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html[unlimited
-          strength JCE policy files].
-Uncompress and extract the downloaded file, and install the policy jars into <java-home>/lib/security. 
+You may also need to download the link:http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html[unlimited strength JCE policy files].
+Uncompress and extract the downloaded file, and install the policy jars into _<java-home>/lib/security_.
 
 [[trouble.mapreduce]]
 == MapReduce
@@ -717,9 +707,8 @@ Uncompress and extract the downloaded file, and install the policy jars into <ja
 
 The following stacktrace happened using `ImportTsv`, but things like this can happen on any job with a mis-configuration.
 
-[source]
+[source,text]
 ----
-
     WARN mapred.LocalJobRunner: job_local_0001
 java.lang.IllegalArgumentException: Can't read partitions file
        at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
@@ -738,15 +727,14 @@ Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
        at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:296)
 ----
 
-.. see the critical portion of the stack? It's...
+...see the critical portion of the stack? It's...
 
 [source]
 ----
-
 at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
 ----
 
-LocalJobRunner means the job is running locally, not on the cluster. 
+LocalJobRunner means the job is running locally, not on the cluster.
 
 To solve this problem, you should run your MR job with your `HADOOP_CLASSPATH` set to include the HBase dependencies.
 The "hbase classpath" utility can be used to do this easily.
@@ -754,54 +742,51 @@ For example (substitute VERSION with your HBase version):
 
 [source,bourne]
 ----
-
-          HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
+HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
 ----
 
-See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath[
-          http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath]       for more information on HBase MapReduce jobs and classpaths. 
+See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath for more information on HBase MapReduce jobs and classpaths.
 
 [[trouble.hbasezerocopybytestring]]
 === Launching a job, you get java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString or class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
 
 See link:https://issues.apache.org/jira/browse/HBASE-10304[HBASE-10304 Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString] and link:https://issues.apache.org/jira/browse/HBASE-11118[HBASE-11118 non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"].
 The issue can also show up when trying to run spark jobs.
-See link:https://issues.apache.org/jira/browse/HBASE-10877[HBASE-10877 HBase non-retriable exception list should be expanded]. 
+See link:https://issues.apache.org/jira/browse/HBASE-10877[HBASE-10877 HBase non-retriable exception list should be expanded].
 
 [[trouble.namenode]]
 == NameNode
 
-For more information on the NameNode, see <<arch.hdfs,arch.hdfs>>. 
+For more information on the NameNode, see <<arch.hdfs>>.
 
 [[trouble.namenode.disk]]
 === HDFS Utilization of Tables and Regions
 
 To determine how much space HBase is using on HDFS use the `hadoop` shell commands from the NameNode.
-For example... 
+For example...
 
 
 [source,bourne]
 ----
 hadoop fs -dus /hbase/
----- 
-...returns the summarized disk utilization for all HBase objects. 
+----
+...returns the summarized disk utilization for all HBase objects.
 
 
 [source,bourne]
 ----
 hadoop fs -dus /hbase/myTable
----- 
-...returns the summarized disk utilization for the HBase table 'myTable'. 
+----
+...returns the summarized disk utilization for the HBase table 'myTable'.
 
 
 [source,bourne]
 ----
 hadoop fs -du /hbase/myTable
----- 
-...returns a list of the regions under the HBase table 'myTable' and their disk utilization. 
+----
+...returns a list of the regions under the HBase table 'myTable' and their disk utilization.
 
-For more information on HDFS shell commands, see the link:http://hadoop.apache.org/common/docs/current/file_system_shell.html[HDFS
-          FileSystem Shell documentation]. 
+For more information on HDFS shell commands, see the link:http://hadoop.apache.org/common/docs/current/file_system_shell.html[HDFS FileSystem Shell documentation].
 
 [[trouble.namenode.hbase.objects]]
 === Browsing HDFS for HBase Objects
@@ -809,36 +794,35 @@ For more information on HDFS shell commands, see the link:http://hadoop.apache.o
 Sometimes it will be necessary to explore the HBase objects that exist on HDFS.
 These objects could include the WALs (Write Ahead Logs), tables, regions, StoreFiles, etc.
 The easiest way to do this is with the NameNode web application that runs on port 50070.
-The NameNode web application will provide links to the all the DataNodes in the cluster so that they can be browsed seamlessly. 
+The NameNode web application will provide links to all the DataNodes in the cluster so that they can be browsed seamlessly.
 
-The HDFS directory structure of HBase tables in the cluster is... 
+The HDFS directory structure of HBase tables in the cluster is...
 [source]
 ----
 
 /hbase
-     /<Table>             (Tables in the cluster)
-          /<Region>           (Regions for the table)
-               /<ColumnFamily>      (ColumnFamilies for the Region for the table)
-                    /<StoreFile>        (StoreFiles for the ColumnFamily for the Regions for the table)
-----      
+    /<Table>                    (Tables in the cluster)
+        /<Region>               (Regions for the table)
+            /<ColumnFamily>     (ColumnFamilies for the Region for the table)
+                /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
+----
 
-The HDFS directory structure of HBase WAL is.. 
+The HDFS directory structure of HBase WAL is...
 [source]
 ----
 
 /hbase
-     /.logs
-          /<RegionServer>    (RegionServers)
-               /<WAL>           (WAL files for the RegionServer)
-----      
+    /.logs
+        /<RegionServer>    (RegionServers)
+            /<WAL>         (WAL files for the RegionServer)
+----
 
-See the link:http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html[HDFS User
-          Guide] for other non-shell diagnostic utilities like `fsck`. 
+See the link:http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html[HDFS User Guide] for other non-shell diagnostic utilities like `fsck`.
 
 [[trouble.namenode.0size.hlogs]]
 ==== Zero size WALs with data in them
 
-Problem: when getting a listing of all the files in a region server's .logs directory, one file has a size of 0 but it contains data.
+Problem: when getting a listing of all the files in a RegionServer's _.logs_ directory, one file has a size of 0 but it contains data.
 
 Answer: It's an HDFS quirk.
 A file that's currently being written to will appear to have a size of 0 but once it's closed it will show its true size.
@@ -848,7 +832,7 @@ A file that's currently being written to will appear to have a size of 0 but onc
 
 A common use case for querying HDFS for HBase objects is researching the degree of uncompaction of a table.
 If there are a large number of StoreFiles for each ColumnFamily, it could indicate the need for a major compaction.
-Additionally, after a major compaction if the resulting StoreFile is "small" 
it could indicate the need for a reduction of ColumnFamilies for the table. 
+Additionally, if the resulting StoreFile after a major compaction is "small", it could indicate the need for a reduction of ColumnFamilies for the table.
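+
+For example, to get a feel for the StoreFile counts, a listing like the following can be piped through `wc -l` (a sketch; 'myTable' and 'myCF' are hypothetical names):
+
+[source,bourne]
+----
+# count the StoreFiles for ColumnFamily 'myCF' across all regions of 'myTable'
+hadoop fs -ls /hbase/myTable/*/myCF | wc -l
+----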
 
 [[trouble.network]]
 == Network
@@ -856,25 +840,25 @@ Additionally, after a major compaction if the resulting 
StoreFile is "small" it
 [[trouble.network.spikes]]
 === Network Spikes
 
-If you are seeing periodic network spikes you might want to check the 
`compactionQueues` to see if major compactions are happening. 
+If you are seeing periodic network spikes you might want to check the 
`compactionQueues` to see if major compactions are happening.
 
-See <<managed.compactions,managed.compactions>> for more information on 
managing compactions. 
+See <<managed.compactions>> for more information on managing compactions.
 
 [[trouble.network.loopback]]
 === Loopback IP
 
 HBase expects the loopback IP Address to be 127.0.0.1.
-See the Getting Started section on <<loopback.ip,loopback.ip>>. 
+See the Getting Started section on <<loopback.ip>>.
 
 [[trouble.network.ints]]
 === Network Interfaces
 
-Are all the network interfaces functioning correctly? Are you sure? See the 
Troubleshooting Case Study in <<trouble.casestudy,trouble.casestudy>>. 
+Are all the network interfaces functioning correctly? Are you sure? See the 
Troubleshooting Case Study in <<trouble.casestudy>>.
 
 [[trouble.rs]]
 == RegionServer
 
-For more information on the RegionServers, see 
<<regionserver.arch,regionserver.arch>>. 
+For more information on the RegionServers, see <<regionserver.arch>>.
 
 [[trouble.rs.startup]]
 === Startup Errors
@@ -882,9 +866,9 @@ For more information on the RegionServers, see 
<<regionserver.arch,regionserver.
 [[trouble.rs.startup.master_no_region]]
 ==== Master Starts, But RegionServers Do Not
 
-The Master believes the RegionServers have the IP of 127.0.0.1 - which is 
localhost and resolves to the master's own localhost. 
+The Master believes the RegionServers have the IP address 127.0.0.1, which is localhost and resolves to the Master's own localhost.
 
-The RegionServers are erroneously informing the Master that their IP addresses 
are 127.0.0.1. 
+The RegionServers are erroneously informing the Master that their IP addresses 
are 127.0.0.1.
 
 Modify _/etc/hosts_ on the region servers, from...
 
@@ -923,7 +907,7 @@ java.lang.UnsatisfiedLinkError: no gplcompression in 
java.library.path
 ----
 
 \... then there is a path issue with the compression libraries.
-See the Configuration section on link:[LZO compression configuration]. 
+See the Configuration section on LZO compression configuration.
 
 [[trouble.rs.runtime]]
 === Runtime Errors
@@ -933,7 +917,7 @@ See the Configuration section on link:[LZO compression 
configuration].
 
 Are you running an old JVM (< 1.6.0_u21)? When you look at a thread dump, does it look like threads are BLOCKED but no thread holds the lock they are all blocked on? See link:https://issues.apache.org/jira/browse/HBASE-3622[HBASE 3622 Deadlock in HBaseServer (JVM bug?)].
-Adding `-XX:+UseMembar` to the HBase `HBASE_OPTS` in _conf/hbase-env.sh_ may 
fix it. 
+Adding `-XX:+UseMembar` to the HBase `HBASE_OPTS` in _conf/hbase-env.sh_ may 
fix it.
 
 [[trouble.rs.runtime.filehandles]]
 ==== java.io.IOException...(Too many open files)
@@ -949,20 +933,20 @@ Disk-related IOException in BlockReceiver constructor. 
Cause is java.io.IOExcept
         at java.io.File.createNewFile(File.java:883)
 ----
 
-\... see the Getting Started section on link:[ulimit and nproc configuration]. 
+\... see the Getting Started section on ulimit and nproc configuration.
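+
+As a quick sanity check, confirm the limit in effect for the user that runs HBase (a sketch; assumes you are logged in as that user):
+
+[source,bourne]
+----
+# the open-file limit for the current user; 1024 is almost always too low for HBase
+ulimit -n
+----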
 
 [[trouble.rs.runtime.xceivers]]
 ==== xceiverCount 258 exceeds the limit of concurrent xcievers 256
 
-This typically shows up in the DataNode logs. 
+This typically shows up in the DataNode logs.
 
-See the Getting Started section on link:[xceivers configuration]. 
+See the Getting Started section on xceivers configuration.
 
 [[trouble.rs.runtime.oom_nt]]
 ==== System instability, and the presence of "java.lang.OutOfMemoryError: unable to create new native thread" exceptions in HDFS DataNode logs or those of any system daemon
 
-See the Getting Started section on link:[ulimit and nproc configuration].
-The default on recent Linux distributions is 1024 - which is far too low for 
HBase. 
+See the Getting Started section on ulimit and nproc configuration.
+The default on recent Linux distributions is 1024 - which is far too low for 
HBase.
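+
+A common remedy (a sketch; the `hadoop` user name and the exact limits are assumptions to adjust for your installation) is to raise both limits in _/etc/security/limits.conf_:
+
+----
+# /etc/security/limits.conf entries for the user that runs HBase/HDFS
+hadoop  -  nofile  32768
+hadoop  -  nproc   32000
+----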
 
 [[trouble.rs.runtime.gc]]
 ==== DFS instability and/or RegionServer lease timeouts
@@ -977,19 +961,19 @@ If you see warning messages like this...
 2009-02-24 10:01:36,472 WARN 
org.apache.hadoop.hbase.regionserver.HRegionServer: unable to report to master 
for xxx milliseconds - retrying
 ----
 
-\... or see full GC compactions then you may be experiencing full GC's. 
+\... or see full GC compactions, then you may be experiencing full GCs.
 
 [[trouble.rs.runtime.nolivenodes]]
 ==== "No live nodes contain current block" and/or YouAreDeadException
 
-These errors can happen either when running out of OS file handles or in 
periods of severe network problems where the nodes are unreachable. 
+These errors can happen either when running out of OS file handles or in 
periods of severe network problems where the nodes are unreachable.
 
-See the Getting Started section on link:[ulimit and nproc configuration] and 
check your network. 
+See the Getting Started section on ulimit and nproc configuration and check 
your network.
 
 [[trouble.rs.runtime.zkexpired]]
 ==== ZooKeeper SessionExpired events
 
-Master or RegionServers shutting down with messages like those in the logs: 
+Master or RegionServers shutting down with messages like those in the logs:
 
 [source]
 ----
@@ -1011,7 +995,7 @@ ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
ZooKeeper session expi
 ----
 
 The JVM is doing a long-running garbage collection which pauses all threads (aka "stop the world"). Since the RegionServer's local ZooKeeper client cannot send heartbeats, the session times out.
-By design, we shut down any node that isn't able to contact the ZooKeeper 
ensemble after getting a timeout so that it stops serving data that may already 
be assigned elsewhere. 
+By design, we shut down any node that isn't able to contact the ZooKeeper 
ensemble after getting a timeout so that it stops serving data that may already 
be assigned elsewhere.
 
 * Make sure you give plenty of RAM (in _hbase-env.sh_); the default of 1GB won't be able to sustain long running imports.
 * Make sure you don't swap; the JVM never behaves well under swapping.
@@ -1019,33 +1003,32 @@ By design, we shut down any node that isn't able to 
contact the ZooKeeper ensemb
   For example, if you are running a MapReduce job using 6 CPU-intensive tasks 
on a machine with 4 cores, you are probably starving the RegionServer enough to 
create longer garbage collection pauses.
 * Increase the ZooKeeper session timeout
 
-If you wish to increase the session timeout, add the following to your 
_hbase-site.xml_ to increase the timeout from the default of 60 seconds to 120 
seconds. 
+If you wish to increase the session timeout, add the following to your 
_hbase-site.xml_ to increase the timeout from the default of 60 seconds to 120 
seconds.
 
 [source,xml]
 ----
-
 <property>
-    <name>zookeeper.session.timeout</name>
-    <value>1200000</value>
+  <name>zookeeper.session.timeout</name>
+  <value>120000</value>
 </property>
 <property>
-    <name>hbase.zookeeper.property.tickTime</name>
-    <value>6000</value>
+  <name>hbase.zookeeper.property.tickTime</name>
+  <value>6000</value>
 </property>
 ----
 
-Be aware that setting a higher timeout means that the regions served by a 
failed RegionServer will take at least that amount of time to be transfered to 
another RegionServer.
-For a production system serving live requests, we would instead recommend 
setting it lower than 1 minute and over-provision your cluster in order the 
lower the memory load on each machines (hence having less garbage to collect 
per machine). 
+Be aware that setting a higher timeout means that the regions served by a 
failed RegionServer will take at least that amount of time to be transferred to 
another RegionServer.
+For a production system serving live requests, we would instead recommend setting it lower than 1 minute and over-provisioning your cluster so that the memory load on each machine is lower (hence having less garbage to collect per machine).
 
-If this is happening during an upload which only happens once (like initially 
loading all your data into HBase), consider bulk loading. 
+If this is happening during an upload which only happens once (like initially 
loading all your data into HBase), consider bulk loading.
 
-See <<trouble.zookeeper.general,trouble.zookeeper.general>> for other general 
information about ZooKeeper troubleshooting. 
+See <<trouble.zookeeper.general>> for other general information about 
ZooKeeper troubleshooting.
 
 [[trouble.rs.runtime.notservingregion]]
 ==== NotServingRegionException
 
 This exception is "normal" when found in the RegionServer logs at DEBUG level.
-This exception is returned back to the client and then the client goes back to 
hbase:meta to find the new location of the moved region.
+This exception is returned back to the client and then the client goes back to 
`hbase:meta` to find the new location of the moved region.
 
 However, if the NotServingRegionException is logged ERROR, then the client ran out of retries and something is probably wrong.
 
@@ -1054,22 +1037,21 @@ However, if the NotServingRegionException is logged 
ERROR, then the client ran o
 
 Fix your DNS.
 In versions of Apache HBase before 0.92.x, reverse DNS needs to give the same answer as the forward lookup.
-See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431
-           RegionServer is not using the name given it by the master; double 
entry in master listing of servers] for gorey details. 
+See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 RegionServer is not using the name given it by the master; double entry in master listing of servers] for gory details.
 
 [[brand.new.compressor]]
 ==== Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor' messages
 
 We are not using the native versions of compression libraries.
 See link:https://issues.apache.org/jira/browse/HBASE-1900[HBASE-1900 Put back 
native support when hadoop 0.21 is released].
-Copy the native libs from hadoop under hbase lib dir or symlink them into 
place and the message should go away. 
+Copy the native libs from Hadoop under the HBase lib dir, or symlink them into place, and the message should go away.
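+
+For example (a sketch; the paths are assumptions that depend on where Hadoop and HBase are installed):
+
+[source,bourne]
+----
+# make Hadoop's native compression libraries visible to HBase
+ln -s $HADOOP_HOME/lib/native $HBASE_HOME/lib/native
+----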
 
 [[trouble.rs.runtime.client_went_away]]
 ==== Server handler X on 60020 caught: java.nio.channels.ClosedChannelException
 
 If you see this type of message it means that the RegionServer was trying to read/send data from/to a client that had already gone away.
 Typical causes for this are if the client was killed (you see a storm of 
messages like this when a MapReduce job is killed or fails) or if the client 
receives a SocketTimeoutException.
-It's harmless, but you should consider digging in a bit more if you aren't 
doing something to trigger them. 
+It's harmless, but you should consider digging in a bit more if you aren't 
doing something to trigger them.
 
 === Snapshot Errors Due to Reverse DNS
 
@@ -1079,7 +1061,7 @@ If you see errors like the following on your 
RegionServers, check your reverse D
 
 ----
 
-2013-05-01 00:04:56,356 DEBUG org.apache.hadoop.hbase.procedure.Subprocedure: 
Subprocedure 'backup1' 
+2013-05-01 00:04:56,356 DEBUG org.apache.hadoop.hbase.procedure.Subprocedure: 
Subprocedure 'backup1'
 coordinator notified of 'acquire', waiting on 'reached' or 'abort' from 
coordinator.
 ----
 
@@ -1088,7 +1070,7 @@ You can see a hostname mismatch by looking for the 
following type of message in
 
 ----
 
-2013-05-01 00:03:00,614 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us hostname 
+2013-05-01 00:03:00,614 INFO 
org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us hostname
 to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal
 ----
 
@@ -1100,20 +1082,20 @@ to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal
 [[trouble.master]]
 == Master
 
-For more information on the Master, see <<master,master>>. 
+For more information on the Master, see <<master>>.
 
 [[trouble.master.startup]]
 === Startup Errors
 
 [[trouble.master.startup.migration]]
-==== Master says that you need to run the hbase migrations script
+==== Master says that you need to run the HBase migrations script
 
-Upon running that, the hbase migrations script says no files in root directory.
+Upon running that, the HBase migrations script says no files in root directory.
 
-HBase expects the root directory to either not exist, or to have already been 
initialized by hbase running a previous time.
+HBase expects the root directory to either not exist, or to have already been 
initialized by HBase running a previous time.
 If you create a new directory for HBase using Hadoop DFS, this error will 
occur.
 Make sure the HBase root directory does not currently exist or has been 
initialized by a previous run of HBase.
-Sure fire solution is to just use Hadoop dfs to delete the HBase root and let 
HBase create and initialize the directory itself. 
+A sure-fire solution is to use Hadoop dfs to delete the HBase root and let HBase create and initialize the directory itself.
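+
+For example (a sketch; assumes the default _/hbase_ root directory, and note that this deletes all HBase data):
+
+[source,bourne]
+----
+# remove the HBase root so HBase can recreate and initialize it on next startup
+hadoop fs -rmr /hbase
+----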
 
 [[trouble.master.startup.zk.buffer]]
 ==== Packet len6080218 is out of range!
@@ -1138,21 +1120,21 @@ A ZooKeeper server wasn't able to start, throws that 
error.
 xyz is the name of your server.
 
 This is a name lookup problem.
-HBase tries to start a ZooKeeper server on some machine but that machine isn't 
able to find itself in the `hbase.zookeeper.quorum` configuration. 
+HBase tries to start a ZooKeeper server on some machine but that machine isn't 
able to find itself in the `hbase.zookeeper.quorum` configuration.
 
 Use the hostname presented in the error message instead of the value you used.
-If you have a DNS server, you can set `hbase.zookeeper.dns.interface` and 
`hbase.zookeeper.dns.nameserver` in _hbase-site.xml_ to make sure it resolves 
to the correct FQDN. 
+If you have a DNS server, you can set `hbase.zookeeper.dns.interface` and 
`hbase.zookeeper.dns.nameserver` in _hbase-site.xml_ to make sure it resolves 
to the correct FQDN.
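+
+To see what a quorum member actually resolves to, something like the following helps (a sketch; replace `xyz` with the hostname from the error message):
+
+[source,bourne]
+----
+# forward lookup of the name in the error, and of this machine's own hostname
+host xyz
+host $(hostname)
+----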
 
 [[trouble.zookeeper.general]]
 === ZooKeeper, The Cluster Canary
 
-ZooKeeper is the cluster's "canary in the mineshaft". It'll be the first to 
notice issues if any so making sure its happy is the short-cut to a humming 
cluster. 
+ZooKeeper is the cluster's "canary in the mineshaft". It'll be the first to notice issues if any, so making sure it's happy is the short-cut to a humming cluster.
 
 See the link:http://wiki.apache.org/hadoop/ZooKeeper/Troubleshooting[ZooKeeper 
Operating Environment Troubleshooting] page.
 It has suggestions and tools for checking disk and networking performance; i.e.
-the operating environment your ZooKeeper and HBase are running in. 
+the operating environment your ZooKeeper and HBase are running in.
 
-Additionally, the utility 
<<trouble.tools.builtin.zkcli,trouble.tools.builtin.zkcli>> may help 
investigate ZooKeeper issues. 
+Additionally, the utility <<trouble.tools.builtin.zkcli>> may help investigate 
ZooKeeper issues.
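+
+A quick way to ask ZooKeeper how it is feeling is with the `ruok` and `stat` four-letter-word commands (a sketch; assumes `nc` is installed, ZooKeeper is on the default port 2181, and `zkhost` is a hypothetical quorum member):
+
+[source,bourne]
+----
+# "imok" means the server is up; stat reports latency and connection counts
+echo ruok | nc zkhost 2181
+echo stat | nc zkhost 2181
+----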
 
 [[trouble.ec2]]
 == Amazon EC2
@@ -1161,7 +1143,7 @@ Additionally, the utility 
<<trouble.tools.builtin.zkcli,trouble.tools.builtin.zk
 === ZooKeeper does not seem to work on Amazon EC2
 
 HBase does not start when deployed as Amazon EC2 instances.
-Exceptions like the below appear in the Master and/or RegionServer logs: 
+Exceptions like the below appear in the Master and/or RegionServer logs:
 
 [source]
 ----
@@ -1174,18 +1156,18 @@ Exceptions like the below appear in the Master and/or 
RegionServer logs:
 ----
 
 Security group policy is blocking the ZooKeeper port on a public address.
-Use the internal EC2 host names when configuring the ZooKeeper quorum peer 
list. 
+Use the internal EC2 host names when configuring the ZooKeeper quorum peer 
list.
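+
+To confirm the security group is at fault, test the port from another instance using the internal name (a sketch; the host name is taken from the log example above and the default ZooKeeper port is assumed):
+
+[source,bourne]
+----
+# should connect when the security group allows the ZooKeeper port internally
+nc -vz ip-10-55-88-99.ec2.internal 2181
+----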
 
 [[trouble.ec2.instability]]
 === Instability on Amazon EC2
 
 Questions on HBase and Amazon EC2 come up frequently on the HBase dist-list.
-Search for old threads using link:http://search-hadoop.com/[Search Hadoop]     
        
+Search for old threads using link:http://search-hadoop.com/[Search Hadoop].
 
 [[trouble.ec2.connection]]
 === Remote Java Connection into EC2 Cluster Not Working
 
-See Andrew's answer here, up on the user list: 
link:http://search-hadoop.com/m/sPdqNFAwyg2[Remote Java client connection into 
EC2 instance]. 
+See Andrew's answer here, up on the user list: 
link:http://search-hadoop.com/m/sPdqNFAwyg2[Remote Java client connection into 
EC2 instance].
 
 [[trouble.versions]]
 == HBase and Hadoop version issues
@@ -1213,7 +1195,7 @@ sv4r6s38:       at 
org.apache.hadoop.security.UserGroupInformation.ensureInitial
 ----
 
 you need to copy the _commons-configuration-X.jar_ you find in your Hadoop's _lib_ directory under _hbase/lib_.
-That should fix the above complaint. 
+That should fix the above complaint.
 
 [[trouble.wrong.version]]
 === ...cannot communicate with client version...
@@ -1221,7 +1203,7 @@ That should fix the above complaint.
 If you see something like the following in your logs [computeroutput]+... 
2012-09-24
           10:20:52,168 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled 
exception. Starting
           shutdown. org.apache.hadoop.ipc.RemoteException: Server IPC version 
7 cannot communicate
-          with client version 4 ...+ ...are you trying to talk to an Hadoop 
2.0.x from an HBase that has an Hadoop 1.0.x client? Use the HBase built 
against Hadoop 2.0 or rebuild your HBase passing the +-Dhadoop.profile=2.0+ 
attribute to Maven (See <<maven.build.hadoop,maven.build.hadoop>> for more). 
+          with client version 4 ...+ ...are you trying to talk to a Hadoop 2.0.x cluster from an HBase that has a Hadoop 1.0.x client? Use the HBase built against Hadoop 2.0 or rebuild your HBase passing the +-Dhadoop.profile=2.0+ attribute to Maven (See <<maven.build.hadoop>> for more).
 
 == IPC Configuration Conflicts with Hadoop
 
@@ -1280,48 +1262,49 @@ These changes were backported to HBase 0.98.x and apply 
to all newer versions.
 | ipc.client.kill.max
 | hbase.ipc.client.kill.max
 
-| ipc.server.scan.vtime.weight 
-| hbase.ipc.server.scan.vtime.weight 
+| ipc.server.scan.vtime.weight
+| hbase.ipc.server.scan.vtime.weight
 |===
 
 == HBase and HDFS
 
 General configuration guidance for Apache HDFS is out of the scope of this 
guide.
-Refer to the documentation available at link:http://hadoop.apache.org/ for 
extensive information about configuring HDFS.
-This section deals with HDFS in terms of HBase. 
+Refer to the documentation available at http://hadoop.apache.org/ for 
extensive information about configuring HDFS.
+This section deals with HDFS in terms of HBase.
 
 In most cases, HBase stores its data in Apache HDFS.
 This includes the HFiles containing the data, as well as the write-ahead logs 
(WALs) which store data before it is written to the HFiles and protect against 
RegionServer crashes.
 HDFS provides reliability and protection to data in HBase because it is 
distributed.
 To operate with the most efficiency, HBase needs data to be available locally.
-Therefore, it is a good practice to run an HDFS datanode on each RegionServer.
+Therefore, it is a good practice to run an HDFS DataNode on each RegionServer.
+
+.Important Information and Guidelines for HBase and HDFS
 
-.Important Information and Guidelines for HBase and HDFSHBase is a client of 
HDFS.::
+HBase is a client of HDFS.::
   HBase is an HDFS client, using the HDFS `DFSClient` class, and references to 
this class appear in HBase logs with other HDFS client log messages.
 
 Configuration is necessary in multiple places.::
   Some HDFS configurations relating to HBase need to be done at the HDFS 
(server) side.
-  Others must be done within HBase (at the client side). Other settings need 
to be set at both the server and client side. 
+  Others must be done within HBase (at the client side), and still others must be set on both the server and client sides.
 
 Write errors which affect HBase may be logged in the HDFS logs rather than 
HBase logs.::
-  When writing, HDFS pipelines communications from one datanode to another.
-  HBase communicates to both the HDFS namenode and datanode, using the HDFS 
client classes.
-  Communication problems between datanodes are logged in the HDFS logs, not 
the HBase logs.
+  When writing, HDFS pipelines communications from one DataNode to another.
+  HBase communicates to both the HDFS NameNode and DataNode, using the HDFS 
client classes.
+  Communication problems between DataNodes are logged in the HDFS logs, not 
the HBase logs.
 
 HBase communicates with HDFS using two different ports.::
-  HBase communicates with datanodes using the `ipc.Client` interface and the 
`DataNode` class.
+  HBase communicates with DataNodes using the `ipc.Client` interface and the 
`DataNode` class.
   References to these will appear in HBase logs.
   Each of these communication channels uses a different port (50010 and 50020 by default). The ports are configured in the HDFS configuration, via the `dfs.datanode.address` and `dfs.datanode.ipc.address` parameters.
 
 Errors may be logged in HBase, HDFS, or both.::
   When troubleshooting HDFS issues in HBase, check logs in both places for 
errors.
 
-HDFS takes a while to mark a node as dead. You can configure HDFS to avoid 
using stale
-          datanodes.::
+HDFS takes a while to mark a node as dead. You can configure HDFS to avoid 
using stale DataNodes.::
   By default, HDFS does not mark a node as dead until it is unreachable for 
630 seconds.
-  In Hadoop 1.1 and Hadoop 2.x, this can be alleviated by enabling checks for 
stale datanodes, though this check is disabled by default.
+  In Hadoop 1.1 and Hadoop 2.x, this can be alleviated by enabling checks for 
stale DataNodes, though this check is disabled by default.
   You can enable the check for reads and writes separately, via the `dfs.namenode.avoid.read.stale.datanode` and `dfs.namenode.avoid.write.stale.datanode` settings.
-  A stale datanode is one that has not been reachable for 
`dfs.namenode.stale.datanode.interval`            (default is 30 seconds). 
Stale datanodes are avoided, and marked as the last possible target for a read 
or write operation.
+  A stale DataNode is one that has not been reachable for `dfs.namenode.stale.datanode.interval` (default is 30 seconds). Stale DataNodes are avoided, and marked as the last possible target for a read or write operation.
   For configuration details, see the HDFS documentation.
 
 Settings for HDFS retries and timeouts are important to HBase.::
@@ -1332,9 +1315,9 @@ Settings for HDFS retries and timeouts are important to 
HBase.::
   Check the Hadoop documentation for the most current values and 
recommendations.
 
 .Connection Timeouts
-Connection timeouts occur between the client (HBASE) and the HDFS datanode.
+Connection timeouts occur between the client (HBase) and the HDFS DataNode.
 They may occur when establishing a connection, attempting to read, or 
attempting to write.
-The two settings below are used in combination, and affect connections between 
the DFSClient and the datanode, the ipc.cClient and the datanode, and 
communication between two datanodes. 
+The two settings below are used in combination, and affect connections between the DFSClient and the DataNode, the `ipc.Client` and the DataNode, and communication between two DataNodes.
 
 `dfs.client.socket-timeout` (default: 60000)::
   The amount of time before a client connection times out when establishing a 
connection or reading.
@@ -1351,7 +1334,7 @@ The following types of errors are often seen in the logs.
             continue java.net.SocketTimeoutException: 60000 millis timeout 
while waiting for channel
             to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending
             remote=/region-server-1:50010]`::
-  All datanodes for a block are dead, and recovery is not possible.
+  All DataNodes for a block are dead, and recovery is not possible.
   Here is the sequence of events that leads to this error:
 
 `INFO org.apache.hadoop.HDFS.DFSClient: Exception in createBlockOutputStream
@@ -1360,7 +1343,7 @@ The following types of errors are often seen in the logs.
             xxx:50010]`::
   This type of error indicates a write issue.
   In this case, the master wants to split the log.
-  It does not have a local datanode so it tries to connect to a remote 
datanode, but the datanode is dead.
+  It does not have a local DataNode so it tries to connect to a remote DataNode, but the DataNode is dead.
 
 [[trouble.tests]]
 == Running unit or integration tests
@@ -1397,12 +1380,12 @@ at 
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster
 ----
 
 \... then try issuing the command +umask 022+ before launching tests.
-This is a workaround for 
link:https://issues.apache.org/jira/browse/HDFS-2556[HDFS-2556]      
+This is a workaround for link:https://issues.apache.org/jira/browse/HDFS-2556[HDFS-2556].
 
 [[trouble.casestudy]]
 == Case Studies
 
-For Performance and Troubleshooting Case Studies, see 
<<casestudies,casestudies>>. 
+For Performance and Troubleshooting Case Studies, see <<casestudies>>.
 
 [[trouble.crypto]]
 == Cryptographic Features
@@ -1415,30 +1398,30 @@ This problem manifests as exceptions ultimately caused 
by:
 [source]
 ----
 Caused by: sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ARGUMENTS_BAD
-       at sun.security.pkcs11.wrapper.PKCS11.C_DecryptUpdate(Native Method)
-       at sun.security.pkcs11.P11Cipher.implDoFinal(P11Cipher.java:795)
+  at sun.security.pkcs11.wrapper.PKCS11.C_DecryptUpdate(Native Method)
+  at sun.security.pkcs11.P11Cipher.implDoFinal(P11Cipher.java:795)
 ----
 
 This problem appears to affect some versions of OpenJDK 7 shipped by some 
Linux vendors.
 NSS is configured as the default provider.
-If the host has an x86_64 architecture, depending on if the vendor packages 
contain the defect, the NSS provider will not function correctly. 
+If the host has an x86_64 architecture, depending on whether the vendor packages contain the defect, the NSS provider will not function correctly.
 
 To work around this problem, find the JRE home directory and edit the file 
_lib/security/java.security_.
-Edit the file to comment out the line: 
+Edit the file to comment out the line:
 
 [source]
 ----
 security.provider.1=sun.security.pkcs11.SunPKCS11 
${java.home}/lib/security/nss.cfg
 ----
 
-Then renumber the remaining providers accordingly. 
+Then renumber the remaining providers accordingly.
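+
+After the edit, the top of the provider list should look something like this (a sketch; the exact provider list varies by JDK build):
+
+[source]
+----
+#security.provider.1=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg
+security.provider.1=sun.security.provider.Sun
+security.provider.2=sun.security.rsa.SunRsaSign
+----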
 
 == Operating System Specific Issues
 
 === Page Allocation Failure
 
 NOTE: This issue is known to affect CentOS 6.2 and possibly CentOS 6.5.
-It may also affect some versions of Red Hat Enterprise Linux, according to 
link:https://bugzilla.redhat.com/show_bug.cgi?id=770545.
+It may also affect some versions of Red Hat Enterprise Linux, according to 
https://bugzilla.redhat.com/show_bug.cgi?id=770545.
 
 Some users have reported seeing the following error:
 
@@ -1447,7 +1430,7 @@ kernel: java: page allocation failure. order:4, mode:0x20
 ----
 
 Raising the value of `min_free_kbytes` was reported to fix this problem.
-This parameter is set to a percentage of the amount of RAM on your system, and 
is described in more detail at 
link:http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-proc-sys-vm.html.
 
+This parameter is set to a percentage of the amount of RAM on your system, and 
is described in more detail at 
http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s3-proc-sys-vm.html.
 
 To find the current value on your system, run the following command:
 
@@ -1460,7 +1443,7 @@ Try doubling, then quadrupling the value.
 Note that setting the value too low or too high could have detrimental effects 
on your system.
 Consult your operating system vendor for specific recommendations.
 
-Use the following command to modify the value of `min_free_kbytes`, 
substituting [replaceable]_<value>_ with your intended value:
+Use the following command to modify the value of `min_free_kbytes`, 
substituting _<value>_ with your intended value:
 
 ----
 [user@host]# echo <value> > /proc/sys/vm/min_free_kbytes
@@ -1470,7 +1453,7 @@ Use the following command to modify the value of 
`min_free_kbytes`, substituting
 
 === NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet
 
-If you see this in your logs: 
+If you see this in your logs:
 [source]
 ----
 Caused by: java.lang.NoSuchMethodError: 
java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
@@ -1485,4 +1468,4 @@ Caused by: java.lang.NoSuchMethodError: 
java.util.concurrent.ConcurrentHashMap.k
 then check if you compiled with jdk8 and tried to run it on jdk7.
 If so, this won't work.
 Run on jdk8 or recompile with jdk7.
-See link:https://issues.apache.org/jira/browse/HBASE-10607[HBASE-10607 [JDK8] 
NoSuchMethodError involving ConcurrentHashMap.keySet if running on JRE 7]. 
+See link:https://issues.apache.org/jira/browse/HBASE-10607[HBASE-10607 JDK8 
NoSuchMethodError involving ConcurrentHashMap.keySet if running on JRE 7].

http://git-wip-us.apache.org/repos/asf/hbase/blob/7139c90e/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc 
b/src/main/asciidoc/_chapters/unit_testing.adoc
index 1ffedf1..3f70001 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -42,7 +42,7 @@ This example will add unit tests to the following example 
class:
 
 public class MyHBaseDAO {
 
-    public static void insertRecord(HTableInterface table, HBaseTestObj obj)
+    public static void insertRecord(Table table, HBaseTestObj obj)
     throws Exception {
         Put put = createPut(obj);
         table.put(put);
@@ -129,17 +129,19 @@ Next, add a `@RunWith` annotation to your test class, to 
direct it to use Mockit
 
 @RunWith(MockitoJUnitRunner.class)
 public class TestMyHBaseDAO{
-  @Mock 
-  private HTableInterface table;
   @Mock
-  private HTablePool hTablePool;
+  private Connection connection;
+  @Mock
+  private Table table;
   @Captor
   private ArgumentCaptor<Put> putCaptor;
 
   @Test
   public void testInsertRecord() throws Exception {
     //return mock table when getTable is called
-    when(hTablePool.getTable("tablename")).thenReturn(table);
+    when(connection.getTable(TableName.valueOf("tablename"))).thenReturn(table);
     //create test object and make a call to the DAO that needs testing
     HBaseTestObj obj = new HBaseTestObj();
     obj.setRowKey("ROWKEY-1");
@@ -162,7 +164,7 @@ This code populates `HBaseTestObj` with ``ROWKEY-1'', 
``DATA-1'', ``DATA-2'' as
 It then inserts the record into the mocked table.
 The Put that the DAO would have inserted is captured, and values are tested to 
verify that they are what you expected them to be.
 
-The key here is to manage htable pool and htable instance creation outside the 
DAO.
+The key here is to manage Connection and Table instance creation outside the 
DAO.
 This allows you to mock them cleanly and test Puts as shown above.
 Similarly, you can now expand into other operations such as Get, Scan, or 
Delete.
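+
+In production code the caller owns the `Connection` and hands the `Table` to the DAO, along these lines (a sketch; the table name is hypothetical and `obj` is the test object from the example above):
+
+[source,java]
+----
+// the caller creates the Connection and Table and passes them in;
+// this is exactly the seam the test replaces with mocks
+Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
+try (Table table = connection.getTable(TableName.valueOf("tablename"))) {
+    MyHBaseDAO.insertRecord(table, obj);
+}
+----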
 
