Repository: hbase
Updated Branches:
  refs/heads/HBASE-11533 c0ddb224b -> abaea39ed


http://git-wip-us.apache.org/repos/asf/hbase/blob/abaea39e/src/main/asciidoc/hbase-default.xml
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/hbase-default.xml b/src/main/asciidoc/hbase-default.xml
deleted file mode 100644
index 630bc8e..0000000
--- a/src/main/asciidoc/hbase-default.xml
+++ /dev/null
@@ -1,1393 +0,0 @@
-
-
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-[[_hbase_default_configurations]]
-== HBase Default Configuration
-
-The documentation below is generated using the default hbase configuration file, _hbase-default.xml_, as source.
-
-
-
-[[_hbase.tmp.dir]]
-.hbase.tmp.dir
-Description:: Temporary directory on the local filesystem.
-    Change this setting to point to a location more permanent
-    than '/tmp', the usual resolution of 'java.io.tmpdir', as the
-    '/tmp' directory is cleared on machine restart.
-Default:: ${java.io.tmpdir}/hbase-${user.name} +
-  
-[[_hbase.rootdir]]
-.hbase.rootdir
-Description:: The directory shared by region servers and into
-    which HBase persists.  The URL should be 'fully-qualified'
-    to include the filesystem scheme.  For example, to specify the
-    HDFS directory '/hbase' where the HDFS instance's namenode is
-    running at namenode.example.org on port 9000, set this value to:
-    hdfs://namenode.example.org:9000/hbase.  By default, we write
-    to whatever ${hbase.tmp.dir} is set to -- usually /tmp --
-    so change this configuration or else all data will be lost on
-    machine restart. See the example below.
-Default:: ${hbase.tmp.dir}/hbase +
-  
-[[_hbase.cluster.distributed]]
-.hbase.cluster.distributed
-Description:: The mode the cluster will be in. Possible values are
-      false for standalone mode and true for distributed mode.  If
-      false, startup will run all HBase and ZooKeeper daemons together
-      in the one JVM.
-Default:: false +
-  
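-The following hbase-site.xml snippet is a minimal sketch of overriding these two
-properties for a fully-distributed deployment; the namenode host name mirrors the
-example above and should be replaced with your own:
-
-[source,xml]
-----
-<configuration>
-  <!-- Persist data to a durable HDFS location instead of ${hbase.tmp.dir} -->
-  <property>
-    <name>hbase.rootdir</name>
-    <value>hdfs://namenode.example.org:9000/hbase</value>
-  </property>
-  <!-- Run HBase and ZooKeeper daemons as separate processes -->
-  <property>
-    <name>hbase.cluster.distributed</name>
-    <value>true</value>
-  </property>
-</configuration>
-----
-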
-[[_hbase.zookeeper.quorum]]
-.hbase.zookeeper.quorum
-Description:: Comma separated list of servers in the ZooKeeper ensemble
-    (This config. should have been named hbase.zookeeper.ensemble).
-    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
-    By default this is set to localhost for local and pseudo-distributed modes
-    of operation. For a fully-distributed setup, this should be set to a full
-    list of ZooKeeper ensemble servers. If HBASE_MANAGES_ZK is set in hbase-env.sh,
-    this is the list of servers on which HBase will start/stop ZooKeeper as
-    part of cluster start/stop.  Client-side, we will take this list of
-    ensemble members and put it together with the hbase.zookeeper.clientPort
-    config. and pass it into the ZooKeeper constructor as the connectString
-    parameter. See the example below.
-Default:: localhost +
-  
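-An illustrative hbase-site.xml fragment pointing clients and servers at a
-three-node ensemble; the host names are placeholders:
-
-[source,xml]
-----
-<property>
-  <name>hbase.zookeeper.quorum</name>
-  <value>host1.mydomain.com,host2.mydomain.com,host3.mydomain.com</value>
-</property>
-<property>
-  <name>hbase.zookeeper.property.clientPort</name>
-  <value>2181</value>
-</property>
-----
-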
-[[_hbase.local.dir]]
-.hbase.local.dir
-Description:: Directory on the local filesystem to be used
-    as a local storage.
-Default:: ${hbase.tmp.dir}/local/ +
-  
-[[_hbase.master.info.port]]
-.hbase.master.info.port
-Description:: The port for the HBase Master web UI.
-    Set to -1 if you do not want a UI instance run.
-Default:: 16010 +
-  
-[[_hbase.master.info.bindAddress]]
-.hbase.master.info.bindAddress
-Description:: The bind address for the HBase Master web UI
-    
-Default:: 0.0.0.0 +
-  
-[[_hbase.master.logcleaner.plugins]]
-.hbase.master.logcleaner.plugins
-Description:: A comma-separated list of BaseLogCleanerDelegate invoked by
-    the LogsCleaner service. These WAL cleaners are called in order,
-    so put the cleaner that prunes the most files in front. To
-    implement your own BaseLogCleanerDelegate, just put it in HBase's classpath
-    and add the fully qualified class name here. Always add the above
-    default log cleaners in the list.
-Default:: org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner +
-  
-[[_hbase.master.logcleaner.ttl]]
-.hbase.master.logcleaner.ttl
-Description:: Maximum time a WAL can stay in the .oldlogdir directory,
-    after which it will be cleaned by a Master thread.
-Default:: 600000 +
-  
-[[_hbase.master.hfilecleaner.plugins]]
-.hbase.master.hfilecleaner.plugins
-Description:: A comma-separated list of BaseHFileCleanerDelegate invoked by
-    the HFileCleaner service. These HFile cleaners are called in order,
-    so put the cleaner that prunes the most files in front. To
-    implement your own BaseHFileCleanerDelegate, just put it in HBase's classpath
-    and add the fully qualified class name here. Always add the above
-    default log cleaners in the list as they will be overwritten in
-    hbase-site.xml.
-Default:: org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner +
-  
-[[_hbase.master.catalog.timeout]]
-.hbase.master.catalog.timeout
-Description:: Timeout value for the Catalog Janitor from the master to
-    META.
-Default:: 600000 +
-  
-[[_hbase.master.infoserver.redirect]]
-.hbase.master.infoserver.redirect
-Description:: Whether or not the Master listens to the Master web
-      UI port (hbase.master.info.port) and redirects requests to the web
-      UI server shared by the Master and RegionServer.
-Default:: true +
-  
-[[_hbase.regionserver.port]]
-.hbase.regionserver.port
-Description:: The port the HBase RegionServer binds to.
-Default:: 16020 +
-  
-[[_hbase.regionserver.info.port]]
-.hbase.regionserver.info.port
-Description:: The port for the HBase RegionServer web UI.
-    Set to -1 if you do not want the RegionServer UI to run.
-Default:: 16030 +
-  
-[[_hbase.regionserver.info.bindAddress]]
-.hbase.regionserver.info.bindAddress
-Description:: The address for the HBase RegionServer web UI
-Default:: 0.0.0.0 +
-  
-[[_hbase.regionserver.info.port.auto]]
-.hbase.regionserver.info.port.auto
-Description:: Whether or not the Master or RegionServer
-    UI should search for a port to bind to. Enables automatic port
-    search if hbase.regionserver.info.port is already in use.
-    Useful for testing, turned off by default.
-Default:: false +
-  
-[[_hbase.regionserver.handler.count]]
-.hbase.regionserver.handler.count
-Description:: Count of RPC Listener instances spun up on RegionServers.
-    Same property is used by the Master for count of master handlers.
-Default:: 30 +
-  
-[[_hbase.ipc.server.callqueue.handler.factor]]
-.hbase.ipc.server.callqueue.handler.factor
-Description:: Factor to determine the number of call queues.
-      A value of 0 means a single queue shared between all the handlers.
-      A value of 1 means that each handler has its own queue.
-Default:: 0.1 +
-  
-[[_hbase.ipc.server.callqueue.read.ratio]]
-.hbase.ipc.server.callqueue.read.ratio
-Description:: Split the call queues into read and write queues.
-      The specified interval (which should be between 0.0 and 1.0)
-      will be multiplied by the number of call queues.
-      A value of 0 indicates that the call queues are not split, meaning that both read and write
-      requests will be pushed to the same set of queues.
-      A value lower than 0.5 means that there will be fewer read queues than write queues.
-      A value of 0.5 means there will be the same number of read and write queues.
-      A value greater than 0.5 means that there will be more read queues than write queues.
-      A value of 1.0 means that all the queues except one are used to dispatch read requests.
-
-      Example: Given a total of 10 call queues,
-      a read.ratio of 0 means that: the 10 queues will contain both read and write requests.
-      a read.ratio of 0.3 means that: 3 queues will contain only read requests
-      and 7 queues will contain only write requests.
-      a read.ratio of 0.5 means that: 5 queues will contain only read requests
-      and 5 queues will contain only write requests.
-      a read.ratio of 0.8 means that: 8 queues will contain only read requests
-      and 2 queues will contain only write requests.
-      a read.ratio of 1 means that: 9 queues will contain only read requests
-      and 1 queue will contain only write requests.
-    
-Default:: 0 +
-  
-[[_hbase.ipc.server.callqueue.scan.ratio]]
-.hbase.ipc.server.callqueue.scan.ratio
-Description:: Given the number of read call queues, calculated from the total number
-      of call queues multiplied by the callqueue.read.ratio, the scan.ratio property
-      will split the read call queues into small-read and long-read queues.
-      A value lower than 0.5 means that there will be fewer long-read queues than short-read queues.
-      A value of 0.5 means that there will be the same number of short-read and long-read queues.
-      A value greater than 0.5 means that there will be more long-read queues than short-read queues.
-      A value of 0 or 1 indicates that the same set of queues is used for gets and scans.
-
-      Example: Given a total of 8 read call queues,
-      a scan.ratio of 0 or 1 means that: 8 queues will contain both long and short read requests.
-      a scan.ratio of 0.3 means that: 2 queues will contain only long-read requests
-      and 6 queues will contain only short-read requests.
-      a scan.ratio of 0.5 means that: 4 queues will contain only long-read requests
-      and 4 queues will contain only short-read requests.
-      a scan.ratio of 0.8 means that: 6 queues will contain only long-read requests
-      and 2 queues will contain only short-read requests.
-      A configuration sketch for these ratios follows below.
-    
-Default:: 0 +
-  
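-The sketch below shows one way the call queue settings could be combined in
-hbase-site.xml; the values are illustrative only, not tuning advice:
-
-[source,xml]
-----
-<!-- With the default 30 handlers, a factor of 0.1 yields 3 call queues -->
-<property>
-  <name>hbase.ipc.server.callqueue.handler.factor</name>
-  <value>0.1</value>
-</property>
-<!-- Split the call queues evenly between reads and writes -->
-<property>
-  <name>hbase.ipc.server.callqueue.read.ratio</name>
-  <value>0.5</value>
-</property>
-<!-- Split the read queues evenly between gets and scans -->
-<property>
-  <name>hbase.ipc.server.callqueue.scan.ratio</name>
-  <value>0.5</value>
-</property>
-----
-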
-[[_hbase.regionserver.msginterval]]
-.hbase.regionserver.msginterval
-Description:: Interval between messages from the RegionServer to Master
-    in milliseconds.
-Default:: 3000 +
-  
-[[_hbase.regionserver.regionSplitLimit]]
-.hbase.regionserver.regionSplitLimit
-Description:: Limit for the number of regions after which no more region
-    splitting should take place. This is not a hard limit for the number of
-    regions but acts as a guideline for the regionserver to stop splitting after
-    a certain limit. Default is MAX_INT; i.e. do not block splitting.
-Default:: 2147483647 +
-  
-[[_hbase.regionserver.logroll.period]]
-.hbase.regionserver.logroll.period
-Description:: Period at which we will roll the commit log regardless
-    of how many edits it has.
-Default:: 3600000 +
-  
-[[_hbase.regionserver.logroll.errors.tolerated]]
-.hbase.regionserver.logroll.errors.tolerated
-Description:: The number of consecutive WAL close errors we will allow
-    before triggering a server abort.  A setting of 0 will cause the
-    region server to abort if closing the current WAL writer fails during
-    log rolling.  Even a small value (2 or 3) will allow a region server
-    to ride over transient HDFS errors.
-Default:: 2 +
-  
-[[_hbase.regionserver.hlog.reader.impl]]
-.hbase.regionserver.hlog.reader.impl
-Description:: The WAL file reader implementation.
-Default:: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader +
-  
-[[_hbase.regionserver.hlog.writer.impl]]
-.hbase.regionserver.hlog.writer.impl
-Description:: The WAL file writer implementation.
-Default:: org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter +
-  
-[[_hbase.master.distributed.log.replay]]
-.hbase.master.distributed.log.replay
-Description:: Enable 'distributed log replay' as the default engine for splitting
-    WAL files on server crash.  This default is new in HBase 1.0.  To fall
-    back to the old mode, 'distributed log splitting', set the value to
-    'false'.  'Distributed log replay' improves MTTR because it does not
-    write intermediate files.  'DLR' requires that 'hfile.format.version'
-    be set to version 3 or higher.
-    
-Default:: true +
-  
-[[_hbase.regionserver.global.memstore.size]]
-.hbase.regionserver.global.memstore.size
-Description:: Maximum size of all memstores in a region server before new
-      updates are blocked and flushes are forced. Defaults to 40% of heap.
-      Updates are blocked and flushes are forced until the size of all memstores
-      in a region server hits hbase.regionserver.global.memstore.size.lower.limit.
-Default:: 0.4 +
-  
-[[_hbase.regionserver.global.memstore.size.lower.limit]]
-.hbase.regionserver.global.memstore.size.lower.limit
-Description:: Maximum size of all memstores in a region server before flushes are forced.
-      Defaults to 95% of hbase.regionserver.global.memstore.size.
-      Setting this to 100% causes the minimum possible flushing to occur when updates are
-      blocked due to memstore limiting. See the example below.
-Default:: 0.95 +
-  
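-A minimal sketch of adjusting these two limits in hbase-site.xml so that forced
-flushes start a little earlier than the defaults; the numbers are illustrative:
-
-[source,xml]
-----
-<!-- Block updates once all memstores reach 40% of heap -->
-<property>
-  <name>hbase.regionserver.global.memstore.size</name>
-  <value>0.4</value>
-</property>
-<!-- Start forced flushes at 90% of that upper limit instead of 95% -->
-<property>
-  <name>hbase.regionserver.global.memstore.size.lower.limit</name>
-  <value>0.90</value>
-</property>
-----
-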
-[[_hbase.regionserver.optionalcacheflushinterval]]
-.hbase.regionserver.optionalcacheflushinterval
-Description:: 
-    Maximum amount of time an edit lives in memory before being automatically flushed.
-    Default 1 hour. Set it to 0 to disable automatic flushing.
-Default:: 3600000 +
-  
-[[_hbase.regionserver.catalog.timeout]]
-.hbase.regionserver.catalog.timeout
-Description:: Timeout value for the Catalog Janitor from the regionserver to META.
-Default:: 600000 +
-  
-[[_hbase.regionserver.dns.interface]]
-.hbase.regionserver.dns.interface
-Description:: The name of the Network Interface from which a region server
-      should report its IP address.
-Default:: default +
-  
-[[_hbase.regionserver.dns.nameserver]]
-.hbase.regionserver.dns.nameserver
-Description:: The host name or IP address of the name server (DNS)
-      which a region server should use to determine the host name used by the
-      master for communication and display purposes.
-Default:: default +
-  
-[[_hbase.regionserver.region.split.policy]]
-.hbase.regionserver.region.split.policy
-Description:: 
-      A split policy determines when a region should be split. The other split policies
-      currently available are ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy,
-      DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy, etc.
-    
-Default:: org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy +
-  
-[[_zookeeper.session.timeout]]
-.zookeeper.session.timeout
-Description:: ZooKeeper session timeout in milliseconds. It is used in two different ways.
-      First, this value is used in the ZK client that HBase uses to connect to the ensemble.
-      It is also used by HBase when it starts a ZK server and it is passed as the 'maxSessionTimeout'. See
-      http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
-      For example, if an HBase region server connects to a ZK ensemble that's also managed by HBase, then the
-      session timeout will be the one specified by this configuration. But a region server that connects
-      to an ensemble managed with a different configuration will be subject to that ensemble's maxSessionTimeout. So,
-      even though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than this and
-      it will take precedence. The current default that ZK ships with is 40 seconds, which is lower than HBase's.
-    
-Default:: 90000 +
-  
-[[_zookeeper.znode.parent]]
-.zookeeper.znode.parent
-Description:: Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
-      files that are configured with a relative path will go under this node.
-      By default, all of HBase's ZooKeeper file paths are configured with a
-      relative path, so they will all go under this directory unless changed.
-Default:: /hbase +
-  
-[[_zookeeper.znode.rootserver]]
-.zookeeper.znode.rootserver
-Description:: Path to ZNode holding root region location. This is written by
-      the master and read by clients and region servers. If a relative path is
-      given, the parent folder will be ${zookeeper.znode.parent}. By default,
-      this means the root location is stored at /hbase/root-region-server.
-Default:: root-region-server +
-  
-[[_zookeeper.znode.acl.parent]]
-.zookeeper.znode.acl.parent
-Description:: Root ZNode for access control lists.
-Default:: acl +
-  
-[[_hbase.zookeeper.dns.interface]]
-.hbase.zookeeper.dns.interface
-Description:: The name of the Network Interface from which a ZooKeeper server
-      should report its IP address.
-Default:: default +
-  
-[[_hbase.zookeeper.dns.nameserver]]
-.hbase.zookeeper.dns.nameserver
-Description:: The host name or IP address of the name server (DNS)
-      which a ZooKeeper server should use to determine the host name used by the
-      master for communication and display purposes.
-Default:: default +
-  
-[[_hbase.zookeeper.peerport]]
-.hbase.zookeeper.peerport
-Description:: Port used by ZooKeeper peers to talk to each other.
-    See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
-    for more information.
-Default:: 2888 +
-  
-[[_hbase.zookeeper.leaderport]]
-.hbase.zookeeper.leaderport
-Description:: Port used by ZooKeeper for leader election.
-    See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
-    for more information.
-Default:: 3888 +
-  
-[[_hbase.zookeeper.useMulti]]
-.hbase.zookeeper.useMulti
-Description:: Instructs HBase to make use of ZooKeeper's multi-update functionality.
-    This allows certain ZooKeeper operations to complete more quickly and prevents some issues
-    with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).
-    IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+
-    and will not be downgraded.  ZooKeeper versions before 3.4 do not support multi-update and
-    will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
-Default:: true +
-  
-[[_hbase.config.read.zookeeper.config]]
-.hbase.config.read.zookeeper.config
-Description:: 
-        Set to true to allow HBaseConfiguration to read the
-        zoo.cfg file for ZooKeeper properties. Switching this to true
-        is not recommended, since the functionality of reading ZK
-        properties from a zoo.cfg file has been deprecated.
-Default:: false +
-  
-[[_hbase.zookeeper.property.initLimit]]
-.hbase.zookeeper.property.initLimit
-Description:: Property from ZooKeeper's config zoo.cfg.
-    The number of ticks that the initial synchronization phase can take.
-Default:: 10 +
-  
-[[_hbase.zookeeper.property.syncLimit]]
-.hbase.zookeeper.property.syncLimit
-Description:: Property from ZooKeeper's config zoo.cfg.
-    The number of ticks that can pass between sending a request and getting an
-    acknowledgment.
-Default:: 5 +
-  
-[[_hbase.zookeeper.property.dataDir]]
-.hbase.zookeeper.property.dataDir
-Description:: Property from ZooKeeper's config zoo.cfg.
-    The directory where the snapshot is stored.
-Default:: ${hbase.tmp.dir}/zookeeper +
-  
-[[_hbase.zookeeper.property.clientPort]]
-.hbase.zookeeper.property.clientPort
-Description:: Property from ZooKeeper's config zoo.cfg.
-    The port at which the clients will connect.
-Default:: 2181 +
-  
-[[_hbase.zookeeper.property.maxClientCnxns]]
-.hbase.zookeeper.property.maxClientCnxns
-Description:: Property from ZooKeeper's config zoo.cfg.
-    Limit on number of concurrent connections (at the socket level) that a
-    single client, identified by IP address, may make to a single member of
-    the ZooKeeper ensemble. Set high to avoid zk connection issues running
-    standalone and pseudo-distributed.
-Default:: 300 +
-  
-[[_hbase.client.write.buffer]]
-.hbase.client.write.buffer
-Description:: Default size of the HTable client write buffer in bytes.
-    A bigger buffer takes more memory -- on both the client and server
-    side since server instantiates the passed write buffer to process
-    it -- but a larger buffer size reduces the number of RPCs made.
-    For an estimate of server-side memory used, evaluate
-    hbase.client.write.buffer * hbase.regionserver.handler.count; with the defaults
-    this is roughly 2 MB * 30 = 60 MB per region server.
-Default:: 2097152 +
-  
-[[_hbase.client.pause]]
-.hbase.client.pause
-Description:: General client pause value.  Used mostly as the value to wait
-    before retrying a failed get, region lookup, etc.
-    See hbase.client.retries.number for description of how we backoff from
-    this initial pause amount and how this pause works w/ retries.
-Default:: 100 +
-  
-[[_hbase.client.retries.number]]
-.hbase.client.retries.number
-Description:: Maximum retries.  Used as maximum for all retryable
-    operations such as the getting of a cell's value, starting a row update,
-    etc.  Retry interval is a rough function based on hbase.client.pause.  At
-    first we retry at this interval but then with backoff, we pretty quickly reach
-    retrying every ten seconds.  See HConstants#RETRY_BACKOFF for how the backoff
-    ramps up.  Change this setting and hbase.client.pause to suit your workload;
-    see the example below.
-Default:: 35 +
-  
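-An illustrative client-side hbase-site.xml override trading a longer initial
-pause for fewer retries; whether this suits you depends entirely on your workload:
-
-[source,xml]
-----
-<!-- Wait 200 ms before the first retry (backoff grows from here) -->
-<property>
-  <name>hbase.client.pause</name>
-  <value>200</value>
-</property>
-<!-- Give up after 20 retryable attempts instead of 35 -->
-<property>
-  <name>hbase.client.retries.number</name>
-  <value>20</value>
-</property>
-----
-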
-[[_hbase.client.max.total.tasks]]
-.hbase.client.max.total.tasks
-Description:: The maximum number of concurrent tasks a single HTable instance will
-    send to the cluster.
-Default:: 100 +
-  
-[[_hbase.client.max.perserver.tasks]]
-.hbase.client.max.perserver.tasks
-Description:: The maximum number of concurrent tasks a single HTable instance will
-    send to a single region server.
-Default:: 5 +
-  
-[[_hbase.client.max.perregion.tasks]]
-.hbase.client.max.perregion.tasks
-Description:: The maximum number of concurrent connections the client will
-    maintain to a single Region. That is, if there are already
-    hbase.client.max.perregion.tasks writes in progress for this region, new puts
-    won't be sent to this region until some writes finish.
-Default:: 1 +
-  
-[[_hbase.client.scanner.caching]]
-.hbase.client.scanner.caching
-Description:: Number of rows that will be fetched when calling next
-    on a scanner if it is not served from (local, client) memory. Higher
-    caching values will enable faster scanners but will eat up more memory
-    and some calls of next may take longer and longer times when the cache is empty.
-    Do not set this value such that the time between invocations is greater
-    than the scanner timeout; i.e. hbase.client.scanner.timeout.period
-Default:: 100 +
-  
-[[_hbase.client.keyvalue.maxsize]]
-.hbase.client.keyvalue.maxsize
-Description:: Specifies the combined maximum allowed size of a KeyValue
-    instance. This is to set an upper boundary for a single entry saved in a
-    storage file. Since such entries cannot be split, this helps avoid a region
-    becoming unsplittable because its data is too large. It seems wise
-    to set this to a fraction of the maximum region size. Setting it to zero
-    or less disables the check.
-Default:: 10485760 +
-  
-[[_hbase.client.scanner.timeout.period]]
-.hbase.client.scanner.timeout.period
-Description:: Client scanner lease period in milliseconds.
-Default:: 60000 +
-  
-[[_hbase.client.localityCheck.threadPoolSize]]
-.hbase.client.localityCheck.threadPoolSize
-Description:: 
-Default:: 2 +
-  
-[[_hbase.bulkload.retries.number]]
-.hbase.bulkload.retries.number
-Description:: Maximum retries.  This is the maximum number of times atomic
-    bulk loads are attempted in the face of splitting operations.
-    0 means never give up.
-Default:: 10 +
-  
-[[_hbase.balancer.period]]
-.hbase.balancer.period
-Description:: Period at which the region balancer runs in the Master.
-Default:: 300000 +
-  
-[[_hbase.regions.slop]]
-.hbase.regions.slop
-Description:: Rebalance if any regionserver has average + (average * slop) regions.
-Default:: 0.2 +
-  
-[[_hbase.server.thread.wakefrequency]]
-.hbase.server.thread.wakefrequency
-Description:: Time to sleep in between searches for work (in milliseconds).
-    Used as sleep interval by service threads such as log roller.
-Default:: 10000 +
-  
-[[_hbase.server.versionfile.writeattempts]]
-.hbase.server.versionfile.writeattempts
-Description:: 
-    How many times to retry attempting to write a version file
-    before just aborting. Each attempt is separated by the
-    hbase.server.thread.wakefrequency milliseconds.
-Default:: 3 +
-  
-[[_hbase.hregion.memstore.flush.size]]
-.hbase.hregion.memstore.flush.size
-Description:: 
-    Memstore will be flushed to disk if size of the memstore
-    exceeds this number of bytes.  Value is checked by a thread that runs
-    every hbase.server.thread.wakefrequency.
-Default:: 134217728 +
-  
-[[_hbase.hregion.percolumnfamilyflush.size.lower.bound]]
-.hbase.hregion.percolumnfamilyflush.size.lower.bound
-Description:: 
-    If FlushLargeStoresPolicy is used, then every time that we hit the
-    total memstore limit, we find out all the column families whose memstores
-    exceed this value, and only flush them, while retaining the others whose
-    memstores are lower than this limit. If none of the families have their
-    memstore size more than this, all the memstores will be flushed
-    (just as usual). This value should be less than half of the total memstore
-    threshold (hbase.hregion.memstore.flush.size).
-    
-Default:: 16777216 +
-  
-[[_hbase.hregion.preclose.flush.size]]
-.hbase.hregion.preclose.flush.size
-Description:: 
-      If the memstores in a region are this size or larger when we go
-      to close, run a "pre-flush" to clear out memstores before we put up
-      the region closed flag and take the region offline.  On close,
-      a flush is run under the close flag to empty memory.  During
-      this time the region is offline and we are not taking on any writes.
-      If the memstore content is large, this flush could take a long time to
-      complete.  The preflush is meant to clean out the bulk of the memstore
-      before putting up the close flag and taking the region offline so the
-      flush that runs under the close flag has little to do.
-Default:: 5242880 +
-  
-[[_hbase.hregion.memstore.block.multiplier]]
-.hbase.hregion.memstore.block.multiplier
-Description:: 
-    Block updates if memstore has hbase.hregion.memstore.block.multiplier
-    times hbase.hregion.memstore.flush.size bytes.  Useful for preventing
-    runaway memstore during spikes in update traffic.  Without an
-    upper-bound, memstore fills such that when it flushes the
-    resultant flush files take a long time to compact or split, or
-    worse, we OOME.
-Default:: 4 +
-  
-[[_hbase.hregion.memstore.mslab.enabled]]
-.hbase.hregion.memstore.mslab.enabled
-Description:: 
-      Enables the MemStore-Local Allocation Buffer,
-      a feature which works to prevent heap fragmentation under
-      heavy write loads. This can reduce the frequency of stop-the-world
-      GC pauses on large heaps.
-Default:: true +
-  
-[[_hbase.hregion.max.filesize]]
-.hbase.hregion.max.filesize
-Description:: 
-    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
-    value, the region is split in two.
-Default:: 10737418240 +
-  
-[[_hbase.hregion.majorcompaction]]
-.hbase.hregion.majorcompaction
-Description:: Time between major compactions, expressed in milliseconds. Set to 0 to disable
-      time-based automatic major compactions. User-requested and size-based major compactions will
-      still run. This value is multiplied by hbase.hregion.majorcompaction.jitter to cause
-      compaction to start at a somewhat-random time during a given window of time. The default value
-      is 7 days, expressed in milliseconds. If major compactions are causing disruption in your
-      environment, you can configure them to run at off-peak times for your deployment, or disable
-      time-based major compactions by setting this parameter to 0, and run major compactions in a
-      cron job or by another external mechanism. See the example below.
-Default:: 604800000 +
-  
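-If periodic major compactions disrupt your workload, a sketch like the following
-disables the time-based trigger so compactions can be driven externally (for
-example from cron, as suggested above):
-
-[source,xml]
-----
-<!-- 0 disables time-based automatic major compactions -->
-<property>
-  <name>hbase.hregion.majorcompaction</name>
-  <value>0</value>
-</property>
-----
-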
-[[_hbase.hregion.majorcompaction.jitter]]
-.hbase.hregion.majorcompaction.jitter
-Description:: A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur
-      a given amount of time either side of hbase.hregion.majorcompaction. The smaller the number,
-      the closer the compactions will happen to the hbase.hregion.majorcompaction
-      interval.
-Default:: 0.50 +
-  
-[[_hbase.hstore.compactionThreshold]]
-.hbase.hstore.compactionThreshold
-Description::  If more than this number of StoreFiles exist in any one Store
-      (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all
-      StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does
-      occur, it takes longer to complete.
-Default:: 3 +
-  
-[[_hbase.hstore.flusher.count]]
-.hbase.hstore.flusher.count
-Description::  The number of flush threads. With fewer threads, the MemStore flushes will be
-      queued. With more threads, the flushes will be executed in parallel, increasing the load on
-      HDFS, and potentially causing more compactions.
-Default:: 2 +
-  
-[[_hbase.hstore.blockingStoreFiles]]
-.hbase.hstore.blockingStoreFiles
-Description::  If more than this number of StoreFiles exist in any one Store (one StoreFile
-     is written per flush of MemStore), updates are blocked for this region until a compaction is
-      completed, or until hbase.hstore.blockingWaitTime has been exceeded.
-Default:: 10 +
-  
-[[_hbase.hstore.blockingWaitTime]]
-.hbase.hstore.blockingWaitTime
-Description::  The time for which a region will block updates after reaching the StoreFile limit
-    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop
-    blocking updates even if a compaction has not been completed.
-Default:: 90000 +
-  
-[[_hbase.hstore.compaction.min]]
-.hbase.hstore.compaction.min
-Description:: The minimum number of StoreFiles which must be eligible for compaction before
-      compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with
-      too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction
-      each time you have two StoreFiles in a Store, and this is probably not appropriate. If you
-      set this value too high, all the other values will need to be adjusted accordingly. For most
-      cases, the default value is appropriate. In previous versions of HBase, the parameter
-      hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.
-Default:: 3 +
-  
-[[_hbase.hstore.compaction.max]]
-.hbase.hstore.compaction.max
-Description:: The maximum number of StoreFiles which will be selected for a single minor
-      compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
-      hbase.hstore.compaction.max controls the length of time it takes a single compaction to
-      complete. Setting it larger means that more StoreFiles are included in a compaction. For most
-      cases, the default value is appropriate.
-Default:: 10 +
-  
-[[_hbase.hstore.compaction.min.size]]
-.hbase.hstore.compaction.min.size
-Description:: A StoreFile smaller than this size will always be eligible for minor compaction.
-      HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if
-      they are eligible. Because this limit represents the "automatic include" limit for all
-      StoreFiles smaller than this value, this value may need to be reduced in write-heavy
-      environments where many StoreFiles in the 1-2 MB range are being flushed, because every
-      StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the
-      minimum size and require further compaction. If this parameter is lowered, the ratio check is
-      triggered more quickly. This addressed some issues seen in earlier versions of HBase but
-      changing this parameter is no longer necessary in most situations. Default: 128 MB expressed
-      in bytes.
-Default:: 134217728 +
-  
-[[_hbase.hstore.compaction.max.size]]
-.hbase.hstore.compaction.max.size
-Description:: A StoreFile larger than this size will be excluded from compaction. The effect of
-      raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get
-      compacted often. If you feel that compaction is happening too often without much benefit, you
-      can try raising this value. Default: the value of LONG.MAX_VALUE, expressed in bytes.
-Default:: 9223372036854775807 +
-  
-[[_hbase.hstore.compaction.ratio]]
-.hbase.hstore.compaction.ratio
-Description:: For minor compaction, this ratio is used to determine whether a given StoreFile
-      which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its
-      effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio
-      is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single
-      giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the
-      BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and
-      1.4 is recommended. When tuning this value, you are balancing write costs with read costs.
-      Raising the value (to something like 1.4) will have more write costs, because you will
-      compact larger StoreFiles. However, during reads, HBase will need to seek through fewer
-      StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of
-      Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the
-      background cost of writes, and use Bloom filters to control the number of StoreFiles touched
-      during reads. For most cases, the default value is appropriate.
-Default:: 1.2F +
-  
-[[_hbase.hstore.compaction.ratio.offpeak]]
-.hbase.hstore.compaction.ratio.offpeak
-Description:: Allows you to set a different (by default, more aggressive) ratio for determining
-      whether larger StoreFiles are included in compactions during off-peak hours. Works in the
-      same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
-      hbase.offpeak.end.hour are also enabled.
-Default:: 5.0F +
-  
-[[_hbase.hstore.time.to.purge.deletes]]
-.hbase.hstore.time.to.purge.deletes
-Description:: The amount of time to delay purging of delete markers with future timestamps. If
-      unset, or set to 0, all delete markers, including those with future timestamps, are purged
-      during the next major compaction. Otherwise, a delete marker is kept until the major compaction
-      which occurs after the marker's timestamp plus the value of this setting, in milliseconds.
-    
-Default:: 0 +
-  
-[[_hbase.offpeak.start.hour]]
-.hbase.offpeak.start.hour
-Description:: The start of off-peak hours, expressed as an integer between 0 and 23, inclusive.
-      Set to -1 to disable off-peak.
-Default:: -1 +
-  
-[[_hbase.offpeak.end.hour]]
-.hbase.offpeak.end.hour
-Description:: The end of off-peak hours, expressed as an integer between 0 and 23, inclusive.
-      Set to -1 to disable off-peak. See the example below.
-Default:: -1 +
-  
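-An illustrative off-peak window from 22:00 to 06:00 with a more aggressive
-compaction ratio during those hours; the exact hours and ratio are assumptions
-to adapt to your own quiet period:
-
-[source,xml]
-----
-<property>
-  <name>hbase.offpeak.start.hour</name>
-  <value>22</value>
-</property>
-<property>
-  <name>hbase.offpeak.end.hour</name>
-  <value>6</value>
-</property>
-<!-- Allow larger StoreFiles into compactions during the window -->
-<property>
-  <name>hbase.hstore.compaction.ratio.offpeak</name>
-  <value>5.0</value>
-</property>
-----
-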
-[[_hbase.regionserver.thread.compaction.throttle]]
-.hbase.regionserver.thread.compaction.throttle
-Description:: There are two different thread pools for compactions, one for large compactions and
-      the other for small compactions. This helps to keep compaction of lean tables (such as
-        hbase:meta) fast. If a compaction is larger than this threshold, it
-      goes into the large compaction pool. In most cases, the default value is appropriate. Default:
-      2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size (which defaults to 128MB).
-      The value field assumes that the value of hbase.hregion.memstore.flush.size is unchanged from
-      the default.
-Default:: 2684354560 +
-  
-[[_hbase.hstore.compaction.kv.max]]
-.hbase.hstore.compaction.kv.max
-Description:: The maximum number of KeyValues to read and then write in a batch when flushing or
-      compacting. Set this lower if you have big KeyValues and problems with Out Of Memory
-      Exceptions. Set this higher if you have wide, small rows.
-Default:: 10 +
-  
-[[_hbase.storescanner.parallel.seek.enable]]
-.hbase.storescanner.parallel.seek.enable
-Description:: 
-      Enables StoreFileScanner parallel-seeking in StoreScanner,
-      a feature which can reduce response latency under special conditions.
-Default:: false +
-  
-[[_hbase.storescanner.parallel.seek.threads]]
-.hbase.storescanner.parallel.seek.threads
-Description:: 
-      The default thread pool size if parallel-seeking feature enabled.
-Default:: 10 +
-  
-[[_hfile.block.cache.size]]
-.hfile.block.cache.size
-Description:: Percentage of maximum heap (-Xmx setting) to allocate to the block cache
-        used by a StoreFile. Default of 0.4 means allocate 40%.
-        Set to 0 to disable, but this is not recommended; you need at least
-        enough cache to hold the storefile indices.
-Default:: 0.4 +
-  
-[[_hfile.block.index.cacheonwrite]]
-.hfile.block.index.cacheonwrite
-Description:: Allows non-root multi-level index blocks to be put into the block
-          cache at the time the index is being written.
-Default:: false +
-  
-[[_hfile.index.block.max.size]]
-.hfile.index.block.max.size
-Description:: When the size of a leaf-level, intermediate-level, or root-level
-          index block in a multi-level block index grows to this size, the
-          block is written out and a new block is started.
-Default:: 131072 +
-  
-[[_hbase.bucketcache.ioengine]]
-.hbase.bucketcache.ioengine
-Description:: Where to store the contents of the bucketcache. One of: onheap, 
-      offheap, or file. If a file, set it to file:PATH_TO_FILE. See
-      https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html
-      for more information; an example follows below.
-    
-Default::  +
-  
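-A sketch of enabling an off-heap bucketcache alongside the on-heap LRU cache;
-sizing of the off-heap area itself is outside the scope of this snippet:
-
-[source,xml]
-----
-<!-- Keep data blocks off-heap; indices and blooms stay in the LRU cache -->
-<property>
-  <name>hbase.bucketcache.ioengine</name>
-  <value>offheap</value>
-</property>
-<property>
-  <name>hbase.bucketcache.combinedcache.enabled</name>
-  <value>true</value>
-</property>
-----
-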
-[[_hbase.bucketcache.combinedcache.enabled]]
-.hbase.bucketcache.combinedcache.enabled
-Description:: Whether or not the bucketcache is used in league with the LRU
-      on-heap block cache. In this mode, indices and blooms are kept in the LRU
-      blockcache and the data blocks are kept in the bucketcache.
-Default:: true +
-  
-[[_hbase.bucketcache.size]]
-.hbase.bucketcache.size
-Description:: The size of the buckets for the bucketcache if you only use a single size.
-      Defaults to the default blocksize, which is 64 * 1024.
-Default:: 65536 +
-  
-[[_hbase.bucketcache.sizes]]
-.hbase.bucketcache.sizes
-Description:: A comma-separated list of sizes for buckets for the bucketcache
-      if you use multiple sizes. Should be a list of block sizes in order from smallest
-      to largest. The sizes you use will depend on your data access patterns.
-Default::  +
-  
-[[_hfile.format.version]]
-.hfile.format.version
-Description:: The HFile format version to use for new files.
-      Version 3 adds support for tags in hfiles (See http://hbase.apache.org/book.html#hbase.tags).
-      Distributed Log Replay requires that tags are enabled. Also see the configuration
-      'hbase.replication.rpc.codec'.
-      
-Default:: 3 +
-  
-[[_hfile.block.bloom.cacheonwrite]]
-.hfile.block.bloom.cacheonwrite
-Description:: Enables cache-on-write for inline blocks of a compound Bloom filter.
-Default:: false +
-  
-[[_io.storefile.bloom.block.size]]
-.io.storefile.bloom.block.size
-Description:: The size in bytes of a single block ("chunk") of a compound Bloom
-          filter. This size is approximate, because Bloom blocks can only be
-          inserted at data block boundaries, and the number of keys per data
-          block varies.
-Default:: 131072 +
-  
-[[_hbase.rs.cacheblocksonwrite]]
-.hbase.rs.cacheblocksonwrite
-Description:: Whether an HFile block should be added to the block cache when the
-          block is finished.
-Default:: false +
-  
-[[_hbase.rpc.timeout]]
-.hbase.rpc.timeout
-Description:: This is for the RPC layer to define how long HBase client applications
-        take for a remote call to time out. It uses pings to check connections
-        but will eventually throw a TimeoutException.
-Default:: 60000 +
-  
-[[_hbase.rpc.shortoperation.timeout]]
-.hbase.rpc.shortoperation.timeout
-Description:: This is another version of "hbase.rpc.timeout". For RPC operations
-        within the cluster, we rely on this configuration to set a short timeout
-        for short operations. For example, a short rpc timeout for a region server
-        trying to report to the active master can speed up the master failover process.
-Default:: 10000 +
-  
-[[_hbase.ipc.client.tcpnodelay]]
-.hbase.ipc.client.tcpnodelay
-Description:: Set no delay on rpc socket connections.  See
-    http://docs.oracle.com/javase/1.5.0/docs/api/java/net/Socket.html#getTcpNoDelay()
-Default:: true +
-  
-[[_hbase.master.keytab.file]]
-.hbase.master.keytab.file
-Description:: Full path to the kerberos keytab file to use for logging in
-    the configured HMaster server principal.
-Default::  +
-  
-[[_hbase.master.kerberos.principal]]
-.hbase.master.kerberos.principal
-Description:: Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
-    that should be used to run the HMaster process.  The principal name should
-    be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the hostname
-    portion, it will be replaced with the actual hostname of the running
-    instance.
-Default::  +
-  
-[[_hbase.regionserver.keytab.file]]
-.hbase.regionserver.keytab.file
-Description:: Full path to the kerberos keytab file to use for logging in
-    the configured HRegionServer server principal.
-Default::  +
-  
-[[_hbase.regionserver.kerberos.principal]]
-.hbase.regionserver.kerberos.principal
-Description:: Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
-    that should be used to run the HRegionServer process.  The principal name
-    should be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the
-    hostname portion, it will be replaced with the actual hostname of the
-    running instance.  An entry for this principal must exist in the file
-    specified in hbase.regionserver.keytab.file
-Default::  +
-  
-[[_hadoop.policy.file]]
-.hadoop.policy.file
-Description:: The policy configuration file used by RPC servers to make
-      authorization decisions on client requests.  Only used when HBase
-      security is enabled.
-Default:: hbase-policy.xml +
-  
-[[_hbase.superuser]]
-.hbase.superuser
-Description:: List of users or groups (comma-separated), who are allowed
-    full privileges, regardless of stored ACLs, across the cluster.
-    Only used when HBase security is enabled.
-Default::  +
-  
-[[_hbase.auth.key.update.interval]]
-.hbase.auth.key.update.interval
-Description:: The update interval for master key for authentication tokens
-    in servers in milliseconds.  Only used when HBase security is enabled.
-Default:: 86400000 +
-  
-[[_hbase.auth.token.max.lifetime]]
-.hbase.auth.token.max.lifetime
-Description:: The maximum lifetime in milliseconds after which an
-    authentication token expires.  Only used when HBase security is enabled.
-Default:: 604800000 +
-  
-[[_hbase.ipc.client.fallback-to-simple-auth-allowed]]
-.hbase.ipc.client.fallback-to-simple-auth-allowed
-Description:: When a client is configured to attempt a secure connection, but attempts to
-      connect to an insecure server, that server may instruct the client to
-      switch to SASL SIMPLE (unsecure) authentication. This setting controls
-      whether or not the client will accept this instruction from the server.
-      When false (the default), the client will not allow the fallback to SIMPLE
-      authentication, and will abort the connection.
-Default:: false +
-  
-[[_hbase.display.keys]]
-.hbase.display.keys
-Description:: When this is set to true the webUI and such will display all start/end keys
-                 as part of the table details, region names, etc. When this is set to false,
-                 the keys are hidden.
-Default:: true +
-  
-[[_hbase.coprocessor.region.classes]]
-.hbase.coprocessor.region.classes
-Description:: A comma-separated list of Coprocessors that are loaded by
-    default on all tables. For any override coprocessor method, these classes
-    will be called in order. After implementing your own Coprocessor, just put
-    it in HBase's classpath and add the fully qualified class name here.
-    A coprocessor can also be loaded on demand by setting HTableDescriptor.
-Default::  +
-  
-[[_hbase.rest.port]]
-.hbase.rest.port
-Description:: The port for the HBase REST server.
-Default:: 8080 +
-  
-[[_hbase.rest.readonly]]
-.hbase.rest.readonly
-Description:: Defines the mode the REST server will be started in. Possible values are:
-    false: All HTTP methods are permitted - GET/PUT/POST/DELETE.
-    true: Only the GET method is permitted.
-Default:: false +
-  
-[[_hbase.rest.threads.max]]
-.hbase.rest.threads.max
-Description:: The maximum number of threads of the REST server thread pool.
-        Threads in the pool are reused to process REST requests. This
-        controls the maximum number of requests processed concurrently.
-        It may help to control the memory used by the REST server to
-        avoid OOM issues. If the thread pool is full, incoming requests
-        will be queued up and wait for some free threads.
-Default:: 100 +
-  
-[[_hbase.rest.threads.min]]
-.hbase.rest.threads.min
-Description:: The minimum number of threads of the REST server thread pool.
-        The thread pool always has at least this number of threads so
-        the REST server is ready to serve incoming requests.
-Default:: 2 +
-  
-[[_hbase.rest.support.proxyuser]]
-.hbase.rest.support.proxyuser
-Description:: Enables running the REST server to support proxy-user mode.
-Default:: false +
-  
-[[_hbase.defaults.for.version.skip]]
-.hbase.defaults.for.version.skip
-Description:: Set to true to skip the 'hbase.defaults.for.version' check.
-    Setting this to true can be useful in contexts other than
-    the other side of a maven generation; i.e. running in an
-    IDE.  You'll want to set this boolean to true to avoid
-    seeing the RuntimeException complaint: "hbase-default.xml file
-    seems to be for an old version of HBase (\${hbase.version}), this
-    version is X.X.X-SNAPSHOT"
-Default:: false +
-  
-[[_hbase.coprocessor.master.classes]]
-.hbase.coprocessor.master.classes
-Description:: A comma-separated list of
-    org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are
-    loaded by default on the active HMaster process. For any implemented
-    coprocessor methods, the listed classes will be called in order. After
-    implementing your own MasterObserver, just put it in HBase's classpath
-    and add the fully qualified class name here.
-Default::  +
-  
-[[_hbase.coprocessor.abortonerror]]
-.hbase.coprocessor.abortonerror
-Description:: Set to true to cause the hosting server (master or regionserver)
-      to abort if a coprocessor fails to load, fails to initialize, or throws an
-      unexpected Throwable object. Setting this to false will allow the server to
-      continue execution but the system wide state of the coprocessor in question
-      will become inconsistent as it will be properly executing in only a subset
-      of servers, so this is most useful for debugging only.
-Default:: true +
-  
-[[_hbase.online.schema.update.enable]]
-.hbase.online.schema.update.enable
-Description:: Set true to enable online schema changes.
-Default:: true +
-  
-[[_hbase.table.lock.enable]]
-.hbase.table.lock.enable
-Description:: Set to true to enable locking the table in zookeeper for schema change operations.
-    Table locking from the master prevents concurrent schema modifications from corrupting the
-    table state.
-Default:: true +
-  
-[[_hbase.table.max.rowsize]]
-.hbase.table.max.rowsize
-Description:: 
-      Maximum size of a single row in bytes (default is 1 GB) for Get'ting
-      or Scan'ning without the in-row scan flag set. If the row size exceeds this limit,
-      RowTooBigException is thrown to the client.
-    
-Default:: 1073741824 +
-  
-[[_hbase.thrift.minWorkerThreads]]
-.hbase.thrift.minWorkerThreads
-Description:: The "core size" of the thread pool. New threads are created on 
every
-    connection until this many threads are created.
-Default:: 16 +
-  
-[[_hbase.thrift.maxWorkerThreads]]
-.hbase.thrift.maxWorkerThreads
-Description:: The maximum size of the thread pool. When the pending request queue
-    overflows, new threads are created until their number reaches this number.
-    After that, the server starts dropping connections.
-Default:: 1000 +
-  
-[[_hbase.thrift.maxQueuedRequests]]
-.hbase.thrift.maxQueuedRequests
-Description:: The maximum number of pending Thrift connections waiting in the queue. If
-     there are no idle threads in the pool, the server queues requests. Only
-     when the queue overflows, new threads are added, up to
-     hbase.thrift.maxQueuedRequests threads.
-Default:: 1000 +
-  
-[[_hbase.thrift.htablepool.size.max]]
-.hbase.thrift.htablepool.size.max
-Description:: The upper bound for the table pool used in the Thrift gateway server.
-      Since this is per table name, we assume a single table and so with 1000 default
-      worker threads max this is set to a matching number. For other workloads this number
-      can be adjusted as needed.
-    
-Default:: 1000 +
-  
-[[_hbase.regionserver.thrift.framed]]
-.hbase.regionserver.thrift.framed
-Description:: Use Thrift TFramedTransport on the server side.
-      This is the recommended transport for thrift servers and requires a similar setting
-      on the client side. Changing this to false will select the default transport,
-      vulnerable to DoS when malformed requests are issued due to THRIFT-601.
-    
-Default:: false +
-  
-[[_hbase.regionserver.thrift.framed.max_frame_size_in_mb]]
-.hbase.regionserver.thrift.framed.max_frame_size_in_mb
-Description:: Default frame size when using framed transport
-Default:: 2 +
-  
-[[_hbase.regionserver.thrift.compact]]
-.hbase.regionserver.thrift.compact
-Description:: Use Thrift TCompactProtocol binary serialization protocol.
-Default:: false +
-  
-[[_hbase.data.umask.enable]]
-.hbase.data.umask.enable
-Description:: If true, enables assigning file permissions to the files
-      written by the regionserver.
-Default:: false +
-  
-[[_hbase.data.umask]]
-.hbase.data.umask
-Description:: File permissions that should be used to write data
-      files when hbase.data.umask.enable is true
-Default:: 000 +
-  
-[[_hbase.metrics.showTableName]]
-.hbase.metrics.showTableName
-Description:: Whether to include the prefix "tbl.tablename" in per-column family metrics.
-       If true, for each metric M, per-cf metrics will be reported for tbl.T.cf.CF.M, if false,
-       per-cf metrics will be aggregated by column-family across tables, and reported for cf.CF.M.
-       In both cases, the aggregated metric M across tables and cfs will be reported.
-Default:: true +
-  
-[[_hbase.metrics.exposeOperationTimes]]
-.hbase.metrics.exposeOperationTimes
-Description:: Whether to report metrics about time taken performing an
-      operation on the region server.  Get, Put, Delete, Increment, and Append can all
-      have their times exposed through Hadoop metrics per CF and per region.
-Default:: true +
-  
-[[_hbase.snapshot.enabled]]
-.hbase.snapshot.enabled
-Description:: Set to true to allow snapshots to be taken / restored / cloned.
-Default:: true +
-  
-[[_hbase.snapshot.restore.take.failsafe.snapshot]]
-.hbase.snapshot.restore.take.failsafe.snapshot
-Description:: Set to true to take a snapshot before the restore operation.
-      The snapshot taken will be used in case of failure, to restore the previous state.
-      At the end of the restore operation this snapshot will be deleted.
-Default:: true +
-  
-[[_hbase.snapshot.restore.failsafe.name]]
-.hbase.snapshot.restore.failsafe.name
-Description:: Name of the failsafe snapshot taken by the restore operation.
-      You can use the {snapshot.name}, {table.name} and {restore.timestamp} variables
-      to create a name based on what you are restoring.
-Default:: hbase-failsafe-{snapshot.name}-{restore.timestamp} +
-  
-[[_hbase.server.compactchecker.interval.multiplier]]
-.hbase.server.compactchecker.interval.multiplier
-Description:: The number that determines how often we scan to see if compaction is necessary.
-        Normally, compactions are done after some events (such as memstore flush), but if
-        a region didn't receive a lot of writes for some time, or due to different compaction
-        policies, it may be necessary to check it periodically. The interval between checks is
-        hbase.server.compactchecker.interval.multiplier multiplied by
-        hbase.server.thread.wakefrequency.
-Default:: 1000 +
-  
-[[_hbase.lease.recovery.timeout]]
-.hbase.lease.recovery.timeout
-Description:: How long we wait on dfs lease recovery in total before giving up.
-Default:: 900000 +
-  
-[[_hbase.lease.recovery.dfs.timeout]]
-.hbase.lease.recovery.dfs.timeout
-Description:: How long between dfs recover lease invocations. Should be larger than the sum of
-        the time it takes for the namenode to issue a block recovery command as part of
-        datanode; dfs.heartbeat.interval and the time it takes for the primary
-        datanode, performing block recovery to timeout on a dead datanode; usually
-        dfs.client.socket-timeout. See the end of HBASE-8389 for more.
-Default:: 64000 +
-  
-[[_hbase.column.max.version]]
-.hbase.column.max.version
-Description:: New column family descriptors will use this value as the default number of versions
-      to keep.
-Default:: 1 +
-  
-[[_hbase.dfs.client.read.shortcircuit.buffer.size]]
-.hbase.dfs.client.read.shortcircuit.buffer.size
-Description:: If the DFSClient configuration
-    dfs.client.read.shortcircuit.buffer.size is unset, we will
-    use what is configured here as the short circuit read default
-    direct byte buffer size. DFSClient native default is 1MB; HBase
-    keeps its HDFS files open so number of file blocks * 1MB soon
-    starts to add up and threaten OOME because of a shortage of
-    direct memory.  So, we set it down from the default.  Make
-    it > the default hbase block size set in the HColumnDescriptor
-    which is usually 64k.
-    
-Default:: 131072 +
-  
-[[_hbase.regionserver.checksum.verify]]
-.hbase.regionserver.checksum.verify
-Description:: 
-        If set to true (the default), HBase verifies the checksums for hfile
-        blocks. HBase writes checksums inline with the data when it writes out
-        hfiles. HDFS (as of this writing) writes checksums to a separate file
-        from the data file, necessitating extra seeks.  Setting this flag saves
-        some on i/o.  Checksum verification by HDFS will be internally disabled
-        on hfile streams when this flag is set.  If the hbase-checksum verification
-        fails, we will switch back to using HDFS checksums (so do not disable HDFS
-        checksums!  And besides, this feature applies to hfiles only, not to WALs).
-        If this parameter is set to false, then hbase will not verify any checksums;
-        instead it will depend on checksum verification being done in the HDFS client.
-    
-Default:: true +
-  
-[[_hbase.hstore.bytes.per.checksum]]
-.hbase.hstore.bytes.per.checksum
-Description:: 
-        Number of bytes in a newly created checksum chunk for HBase-level
-        checksums in hfile blocks.
-    
-Default:: 16384 +
-  
-[[_hbase.hstore.checksum.algorithm]]
-.hbase.hstore.checksum.algorithm
-Description:: 
-      Name of an algorithm that is used to compute checksums. Possible values
-      are NULL, CRC32, CRC32C.
-    
-Default:: CRC32 +
-  
-[[_hbase.status.published]]
-.hbase.status.published
-Description:: 
-      This setting activates the publication by the master of the status of the region server.
-      When a region server dies and its recovery starts, the master will push this information
-      to the client application, to let them cut the connection immediately instead of waiting
-      for a timeout.
-    
-Default:: false +
-  
-[[_hbase.status.publisher.class]]
-.hbase.status.publisher.class
-Description:: 
-      Implementation of the status publication with a multicast message.
-    
-Default:: org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher +
-  
-[[_hbase.status.listener.class]]
-.hbase.status.listener.class
-Description:: 
-      Implementation of the status listener with a multicast message.
-    
-Default:: org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener +
-  
-[[_hbase.status.multicast.address.ip]]
-.hbase.status.multicast.address.ip
-Description:: 
-      Multicast address to use for the status publication by multicast.
-    
-Default:: 226.1.1.3 +
-  
-[[_hbase.status.multicast.address.port]]
-.hbase.status.multicast.address.port
-Description:: 
-      Multicast port to use for the status publication by multicast.
-    
-Default:: 16100 +
-  
-[[_hbase.dynamic.jars.dir]]
-.hbase.dynamic.jars.dir
-Description:: 
-      The directory from which the custom filter/co-processor jars can be loaded
-      dynamically by the region server without the need to restart. However,
-      an already loaded filter/co-processor class would not be un-loaded. See
-      HBASE-1936 for more details.
-    
-Default:: ${hbase.rootdir}/lib +
-  
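-For illustration only: a sketch assuming you want a dedicated directory for dynamically loaded jars; the HDFS path below is a made-up example.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: load custom filter/co-processor jars from a
-       dedicated directory instead of ${hbase.rootdir}/lib. -->
-  <name>hbase.dynamic.jars.dir</name>
-  <value>hdfs://namenode.example.org:9000/hbase/dynamic-lib</value>
-</property>
-----
-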
-[[_hbase.security.authentication]]
-.hbase.security.authentication
-Description:: 
-      Controls whether or not secure authentication is enabled for HBase.
-      Possible values are 'simple' (no authentication), and 'kerberos'.
-    
-Default:: simple +
-  
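-For illustration only: a minimal sketch of switching to Kerberos; a real secure setup also needs principal/keytab and related settings that are not shown here.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: require Kerberos authentication
-       instead of the default 'simple' (no authentication). -->
-  <name>hbase.security.authentication</name>
-  <value>kerberos</value>
-</property>
-----
-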
-[[_hbase.rest.filter.classes]]
-.hbase.rest.filter.classes
-Description:: 
-      Servlet filters for REST service.
-    
-Default:: org.apache.hadoop.hbase.rest.filter.GzipFilter +
-  
-[[_hbase.master.loadbalancer.class]]
-.hbase.master.loadbalancer.class
-Description:: 
-      Class used to execute the regions balancing when the period occurs.
-      See the class comment for more on how it works
-      http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
-      It replaces the DefaultLoadBalancer as the default (since renamed
-      as the SimpleLoadBalancer).
-    
-Default:: org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer +
-  
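-For illustration only: a sketch assuming you want to switch back to the SimpleLoadBalancer.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: use the older SimpleLoadBalancer rather than
-       the default StochasticLoadBalancer. -->
-  <name>hbase.master.loadbalancer.class</name>
-  <value>org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer</value>
-</property>
-----
-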
-[[_hbase.security.exec.permission.checks]]
-.hbase.security.exec.permission.checks
-Description:: 
-      If this setting is enabled and ACL based access control is active (the
-      AccessController coprocessor is installed either as a system coprocessor
-      or on a table as a table coprocessor) then you must grant all relevant
-      users EXEC privilege if they require the ability to execute coprocessor
-      endpoint calls. EXEC privilege, like any other permission, can be
-      granted globally to a user, or to a user on a per table or per namespace
-      basis. For more information on coprocessor endpoints, see the coprocessor
-      section of the HBase online manual. For more information on granting or
-      revoking permissions using the AccessController, see the security
-      section of the HBase online manual.
-    
-Default:: false +
-  
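-For illustration only: a sketch of enabling EXEC permission checks; the shell grant in the comment is an example, with 'user1' and 'mytable' being made-up names.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: require EXEC privilege for coprocessor endpoint
-       calls. Users then need a grant such as
-       grant 'user1', 'X', 'mytable' from the HBase shell. -->
-  <name>hbase.security.exec.permission.checks</name>
-  <value>true</value>
-</property>
-----
-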
-[[_hbase.procedure.regionserver.classes]]
-.hbase.procedure.regionserver.classes
-Description:: A comma-separated list of 
-    org.apache.hadoop.hbase.procedure.RegionServerProcedureManager procedure managers that are
-    loaded by default on the active HRegionServer process. The lifecycle methods (init/start/stop)
-    will be called by the active HRegionServer process to perform the specific globally barriered
-    procedure. After implementing your own RegionServerProcedureManager, just put it in
-    HBase's classpath and add the fully qualified class name here.
-    
-Default::  +
-  
-[[_hbase.procedure.master.classes]]
-.hbase.procedure.master.classes
-Description:: A comma-separated list of
-    org.apache.hadoop.hbase.procedure.MasterProcedureManager procedure managers that are
-    loaded by default on the active HMaster process. A procedure is identified by its signature and
-    users can use the signature and an instant name to trigger an execution of a globally barriered
-    procedure. After implementing your own MasterProcedureManager, just put it in HBase's classpath
-    and add the fully qualified class name here.
-Default::  +
-  
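-For illustration only: a sketch of registering a custom procedure manager; com.example.MyMasterProcedureManager is a made-up class name standing in for your own MasterProcedureManager implementation on HBase's classpath.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical example: load a user-written MasterProcedureManager. -->
-  <name>hbase.procedure.master.classes</name>
-  <value>com.example.MyMasterProcedureManager</value>
-</property>
-----
-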
-[[_hbase.coordinated.state.manager.class]]
-.hbase.coordinated.state.manager.class
-Description:: Fully qualified name of class implementing coordinated state manager.
-Default:: org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager +
-  
-[[_hbase.regionserver.storefile.refresh.period]]
-.hbase.regionserver.storefile.refresh.period
-Description:: 
-      The period (in milliseconds) for refreshing the store files for the secondary regions. 0
-      means this feature is disabled. Secondary regions see new files (from flushes and
-      compactions) from primary once the secondary region refreshes the list of files in the
-      region (there is no notification mechanism). But too frequent refreshes might cause
-      extra Namenode pressure. If the files cannot be refreshed for longer than HFile TTL
-      (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring HFile TTL to a larger
-      value is also recommended with this setting.
-    
-Default:: 0 +
-  
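-For illustration only: a sketch assuming a 30 second refresh period; both values below are assumptions chosen only to show the refresh period staying well under the HFile TTL.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: refresh secondary-region store files every 30s. -->
-  <name>hbase.regionserver.storefile.refresh.period</name>
-  <value>30000</value>
-</property>
-<property>
-  <!-- Keep the HFile TTL much larger than the refresh period (1 hour here). -->
-  <name>hbase.master.hfilecleaner.ttl</name>
-  <value>3600000</value>
-</property>
-----
-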
-[[_hbase.region.replica.replication.enabled]]
-.hbase.region.replica.replication.enabled
-Description:: 
-      Whether asynchronous WAL replication to the secondary region replicas is enabled or not.
-      If this is enabled, a replication peer named "region_replica_replication" will be created
-      which will tail the logs and replicate the mutations to region replicas for tables that
-      have region replication > 1. If this is enabled once, disabling this replication also
-      requires disabling the replication peer using shell or ReplicationAdmin java class.
-      Replication to secondary region replicas works over standard inter-cluster replication.
-      So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication"
-      to true for this feature to work.
-    
-Default:: false +
-  
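-For illustration only: a minimal sketch of enabling the feature; disabling it later also means disabling the "region_replica_replication" peer, e.g. via the shell's disable_peer command.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: enable async WAL replication to secondary
-       region replicas (hbase.replication must not be turned off). -->
-  <name>hbase.region.replica.replication.enabled</name>
-  <value>true</value>
-</property>
-----
-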
-[[_hbase.http.filter.initializers]]
-.hbase.http.filter.initializers
-Description:: 
-      A comma separated list of class names. Each class in the list must extend
-      org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will
-      be initialized. Then, the Filter will be applied to all user facing jsp 
-      and servlet web pages. 
-      The ordering of the list defines the ordering of the filters.
-      The default StaticUserWebFilter adds a user principal as defined by the
-      hbase.http.staticuser.user property.
-    
-Default:: org.apache.hadoop.hbase.http.lib.StaticUserWebFilter +
-  
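-For illustration only: a sketch listing two filter initializers to show that the filters run in list order; com.example.AuditFilterInitializer is a made-up class name.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical example: the default static-user filter runs first,
-       the made-up audit filter second. -->
-  <name>hbase.http.filter.initializers</name>
-  <value>org.apache.hadoop.hbase.http.lib.StaticUserWebFilter,com.example.AuditFilterInitializer</value>
-</property>
-----
-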
-[[_hbase.security.visibility.mutations.checkauths]]
-.hbase.security.visibility.mutations.checkauths
-Description:: 
-      If this property is enabled, HBase will check whether the labels in the visibility expression
-      are associated with the user issuing the mutation.
-    
-Default:: false +
-  
-[[_hbase.http.max.threads]]
-.hbase.http.max.threads
-Description:: 
-      The maximum number of threads that the HTTP Server will create in its 
-      ThreadPool.
-    
-Default:: 10 +
-  
-[[_hbase.replication.rpc.codec]]
-.hbase.replication.rpc.codec
-Description:: 
-               The codec that is to be used when replication is enabled so that
-               the tags are also replicated. This is used along with HFileV3 which
-               supports tags in them.  If tags are not used or if the hfile version used
-               is HFileV2 then KeyValueCodec can be used as the replication codec. Note that
-               using KeyValueCodecWithTags for replication when there are no tags causes no harm.
-       
-Default:: org.apache.hadoop.hbase.codec.KeyValueCodecWithTags +
-  
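-For illustration only: a sketch assuming tags (and HFileV3) are not in use, in which case the description above allows the tag-less codec.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: replicate without tag support. -->
-  <name>hbase.replication.rpc.codec</name>
-  <value>org.apache.hadoop.hbase.codec.KeyValueCodec</value>
-</property>
-----
-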
-[[_hbase.http.staticuser.user]]
-.hbase.http.staticuser.user
-Description:: 
-      The user name to filter as, on static web filters
-      while rendering content. An example use is the HDFS
-      web UI (user to be used for browsing files).
-    
-Default:: dr.stack +
-  
-[[_hbase.regionserver.handler.abort.on.error.percent]]
-.hbase.regionserver.handler.abort.on.error.percent
-Description:: The percent of region server RPC handler threads that must fail before the region server aborts.
-    -1 Disable aborting; 0 Abort if even a single handler has died;
-    0.x Abort only when this percent of handlers have died;
-    1 Abort only when all of the handlers have died.
-Default:: 0.5 +
-  
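-For illustration only: a sketch assuming you never want handler failures to abort the region server.
-
-[source,xml]
-----
-<property>
-  <!-- Hypothetical override: -1 disables aborting on handler death. -->
-  <name>hbase.regionserver.handler.abort.on.error.percent</name>
-  <value>-1</value>
-</property>
-----
-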
\ No newline at end of file
