[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingyun Tian updated HBASE-19358:
---------------------------------
    Description: 
The way we split logs now is illustrated in the following figure:
!https://issues.apache.org/jira/secure/attachment/12902234/split-logic-old.jpg!
The problem is that the OutputSink writes the recovered edits during log 
splitting, which means it creates one WriterAndPath for each region and retains 
it until the end. If the cluster is small and the number of regions per region 
server is large, it creates too many HDFS streams at the same time, and the 
split is prone to failure since each datanode needs to handle too many streams.

Thus I came up with a new way to split logs.
!http://example.com/image.png!
We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, 
we pick the largest EntryBuffer and write it to a file (closing the writer when 
finished). Then, after we have read all entries into memory, we start a 
writeAndCloseThreadPool, which runs a fixed number of threads to write all 
buffers to files. Thus it will not create more HDFS streams than the 
hbase.regionserver.hlog.splitlog.writer.threads we set.
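The caching-and-flush logic above can be sketched as follows. This is a minimal illustration, not the actual patch: the class and field names (EntryBuffer, maxHeapUsage, flush) are simplified stand-ins for the real HBase code, and heap size is approximated by edit length.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the bounded-buffer split strategy described above.
public class BoundedSplitSketch {
    static class EntryBuffer {
        final String region;
        final List<String> edits = new ArrayList<>();
        long heapSize = 0;
        EntryBuffer(String region) { this.region = region; }
        void append(String edit) { edits.add(edit); heapSize += edit.length(); }
    }

    final Map<String, EntryBuffer> buffers = new HashMap<>();
    final long maxHeapUsage;   // stand-in for the MaxHeapUsage threshold
    long totalHeap = 0;
    int flushes = 0;           // early flushes forced by memory pressure

    BoundedSplitSketch(long maxHeapUsage) { this.maxHeapUsage = maxHeapUsage; }

    // Cache one recovered edit; while over the cap, flush the largest buffer.
    void cacheEdit(String region, String edit) {
        EntryBuffer buf = buffers.computeIfAbsent(region, EntryBuffer::new);
        buf.append(edit);
        totalHeap += edit.length();
        while (totalHeap > maxHeapUsage && !buffers.isEmpty()) {
            EntryBuffer largest = buffers.values().stream()
                .max(Comparator.comparingLong(b -> b.heapSize)).get();
            flush(largest);                 // open, write, close one stream
            totalHeap -= largest.heapSize;
            buffers.remove(largest.region);
            flushes++;
        }
    }

    void flush(EntryBuffer buf) {
        // In the real patch this writes recovered edits to HDFS and closes
        // the writer immediately, so at most one extra stream is open here.
    }

    // After all entries are read, write the remaining buffers with a
    // fixed-size pool, so concurrent streams never exceed writerThreads.
    void writeAndClose(int writerThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
        for (EntryBuffer buf : buffers.values()) {
            pool.submit(() -> flush(buf));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The key point the sketch demonstrates is that a writer is only ever open briefly inside flush(), so the number of simultaneous HDFS streams is bounded by the pool size rather than by the region count.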
The biggest benefit is that we can control the number of streams we create 
during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters 
* hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was 
*_hbase.regionserver.wal.max.splitters * the number of regions the hlog 
contains_*.
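To make the bound concrete, here is a small worked example. The region count is illustrative (not from the issue); the two config names are the real ones quoted above.

```java
// Comparing the stream-count bound before and after the change.
public class StreamBound {
    // new bound: splitters * writer threads (both are config values)
    static int newBound(int maxSplitters, int writerThreads) {
        return maxSplitters * writerThreads;
    }

    // old bound: splitters * regions per WAL (grows with cluster layout)
    static int oldBound(int maxSplitters, int regionsPerWal) {
        return maxSplitters * regionsPerWal;
    }

    public static void main(String[] args) {
        int maxSplitters = 3;   // hbase.regionserver.wal.max.splitters
        int writerThreads = 3;  // hbase.regionserver.hlog.splitlog.writer.threads
        int regionsPerWal = 1000; // illustrative: regions whose edits one WAL holds
        System.out.println("new bound = " + newBound(maxSplitters, writerThreads));
        System.out.println("old bound = " + oldBound(maxSplitters, regionsPerWal));
    }
}
```

With these illustrative numbers the new scheme caps concurrent streams at 9, while the old scheme could open up to 3000.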



> Improve the stability of splitting log when do fail over
> --------------------------------------------------------
>
>                 Key: HBASE-19358
>                 URL: https://issues.apache.org/jira/browse/HBASE-19358
>             Project: HBase
>          Issue Type: Improvement
>          Components: MTTR
>    Affects Versions: 0.98.24
>            Reporter: Jingyun Tian
>            Assignee: Jingyun Tian
>         Attachments: HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, 
> split-logic-old.jpg, split-table.png, split_test_result.png
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
