[ 
https://issues.apache.org/jira/browse/HBASE-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5699:
-------------------------
    Release Note: 
HBase's write-ahead-log (WAL) can now be configured to use multiple HDFS 
pipelines in parallel to provide better write throughput for clusters by using 
additional disks. By default, HBase will still use only a single HDFS-based 
WAL. 

To run with multiple WALs, set the hbase-site.xml property 
"hbase.wal.provider" to the value "multiwal". To return to having HBase 
determine which WAL implementation to use, either remove the property 
altogether or set it to "defaultProvider".

Altering the WAL provider used by a particular RegionServer requires restarting 
that instance.  RegionServers using the original WAL implementation and those 
using the "multiwal" implementation can each handle recovery of either set of 
WALs, so a zero-downtime configuration update is possible through a rolling 
restart.

This issue introduces the following configurations:

hbase.wal.regiongrouping.numgroups is the number of provider instances that 
'multiwal' should create. Default is two.

hbase.wal.regiongrouping.strategy is the strategy used to decide which 
provider instance a given edit should go to. Default is 'identity', which is 
currently the only built-in option.

hbase.wal.regiongrouping.delegate is the type of provider that multiwal 
creates for each group. Default is the default provider from WALFactory.
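Putting these together, a multiwal configuration in hbase-site.xml could look 
like the following sketch (the numgroups value shown is an illustrative 
assumption, not a recommendation):

```xml
<property>
  <name>hbase.wal.provider</name>
  <value>multiwal</value>
</property>
<property>
  <!-- number of delegate WAL provider instances; illustrative value -->
  <name>hbase.wal.regiongrouping.numgroups</name>
  <value>4</value>
</property>
<property>
  <!-- 'identity' is currently the only built-in grouping strategy -->
  <name>hbase.wal.regiongrouping.strategy</name>
  <value>identity</value>
</property>
```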



> Run with > 1 WAL in HRegionServer
> ---------------------------------
>
>                 Key: HBASE-5699
>                 URL: https://issues.apache.org/jira/browse/HBASE-5699
>             Project: HBase
>          Issue Type: Improvement
>          Components: Performance, wal
>            Reporter: binlijin
>            Assignee: Sean Busbey
>            Priority: Critical
>             Fix For: 1.0.0, 1.1.0
>
>         Attachments: HBASE-5699.3.patch.txt, HBASE-5699.4.patch.txt, 
> HBASE-5699_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff, 
> HBASE-5699_disabled_and_regular_#workers_vs_MiB_per_s_1x1col_512Bval_wal_count_1,2,4.tiff,
>  HBASE-5699_write_iops_multiwal-1_1_to_200_threads.tiff, 
> HBASE-5699_write_iops_multiwal-2_10,50,120,190,260,330,400_threads.tiff, 
> HBASE-5699_write_iops_multiwal-4_10,50,120,190,260,330,400_threads.tiff, 
> HBASE-5699_write_iops_multiwal-6_10,50,120,190,260,330,400_threads.tiff, 
> HBASE-5699_write_iops_upstream_1_to_200_threads.tiff, PerfHbase.txt, 
> hbase-5699_multiwal_400-threads_stats_sync_heavy.txt, 
> hbase-5699_total_throughput_sync_heavy.txt, 
> results-hbase5699-upstream.txt.bz2, results-hbase5699-wals-1.txt.bz2, 
> results-updated-hbase5699-wals-2.txt.bz2, 
> results-updated-hbase5699-wals-4.txt.bz2, 
> results-updated-hbase5699-wals-6.txt.bz2
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
