[ https://issues.apache.org/jira/browse/HBASE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13687957#comment-13687957 ]

chunhui shen commented on HBASE-8755:
-------------------------------------

Attracted by the crazy improvement, I tried a quick performance test; the results 
are not quite what I initially expected.

Test Data:
1. 0.94 version with this patch
2. One client putting data to one regionserver (autoflush=true)
3. 300 RPC handlers on the regionserver
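
For reference, a put client of this kind can be sketched roughly as below; it uses 
the 0.94 HTable API, and the table/family names and row-key scheme are placeholders, 
not the actual test code.

{code}
// Rough multi-threaded put client along the lines of the test setup above.
// Table/family names and the row-key scheme are placeholders, not the actual test code.
import java.util.concurrent.*;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutBench {
  public static void main(String[] args) throws Exception {
    final int threads = Integer.parseInt(args[0]);        // e.g. 5, 50, 200
    final int rowsPerThread = Integer.parseInt(args[1]);
    final Configuration conf = HBaseConfiguration.create();
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    long start = System.currentTimeMillis();
    for (int t = 0; t < threads; t++) {
      final int id = t;
      pool.submit(new Callable<Void>() {
        public Void call() throws Exception {
          HTable table = new HTable(conf, "test_table");  // one HTable per thread
          table.setAutoFlush(true);                       // autoflush=true as in the test
          byte[] family = Bytes.toBytes("f");
          for (int i = 0; i < rowsPerThread; i++) {
            Put put = new Put(Bytes.toBytes("row-" + id + "-" + i));
            put.add(family, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
            table.put(put);                               // one RPC per put with autoflush on
          }
          table.close();
          return null;
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    long seconds = Math.max(1, (System.currentTimeMillis() - start) / 1000);
    long totalRows = (long) threads * rowsPerThread;
    System.out.println("Write Threads: " + threads + " Write Rows: " + totalRows
        + " Consume Time: " + seconds + "s Avg TPS: " + (totalRows / seconds));
  }
}
{code}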

*a. client using 5 concurrent threads*

Without patch:

Write Threads: 5 Write Rows: 200000 Consume Time: 42s
*Avg TPS: 4651*

With patch:

Write Threads: 5 Write Rows: 200000 Consume Time: 43s
*Avg TPS: 4545*


*b. client using 50 concurrent threads*

Without patch:

Write Threads: 50 Write Rows: 2000000 Consume Time: 110s
*Avg TPS: 18018*

With patch:

Write Threads: 50 Write Rows: 2000000 Consume Time: 118s
*Avg TPS: 16806*

*c. client using 200 concurrent threads*

Without patch:

Write Threads: 200 Write Rows: 2000000 Consume Time: 80s
*Avg TPS: 24691*

With patch:

Write Threads: 200 Write Rows: 2000000 Consume Time: 64s
*Avg TPS: 30769*


{format}
a> 5 YCSB clients, each with 80 concurrent write threads (auto-flush = true)
b> each YCSB client writes 5,000,000 rows
c> all 20 regions of the target table are moved to a single RS
{format}

Per the above test description, that means 400 concurrent threads writing data to 
one RS.

I personally think this patch will help when the regionserver is under very high 
pressure; under general pressure, it degrades throughput a little.

This was just a quick test, so maybe something is wrong on my side.
More tests of the improvement scenario would be better.

                
> A new write thread model for HLog to improve the overall HBase write 
> throughput
> -------------------------------------------------------------------------------
>
>                 Key: HBASE-8755
>                 URL: https://issues.apache.org/jira/browse/HBASE-8755
>             Project: HBase
>          Issue Type: Improvement
>          Components: wal
>            Reporter: Feng Honghua
>         Attachments: HBASE-8755-0.94-V0.patch, HBASE-8755-0.94-V1.patch, 
> HBASE-8755-trunk-V0.patch
>
>
> In the current write model, each write handler thread (executing put()) 
> individually goes through a full 'append (HLog local buffer) => HLog writer 
> append (write to HDFS) => HLog writer sync (sync HDFS)' cycle for each write, 
> which incurs heavy contention on updateLock and flushLock.
> The only existing optimization, checking whether the current syncTillHere > txid 
> in the hope that another thread has already written/synced its txid to HDFS so 
> the write/sync can be omitted, actually helps much less than expected.
> Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi 
> proposed a new write thread model for writing HDFS sequence files, and the 
> prototype implementation shows a 4X throughput improvement (from 17000 to 
> 70000+). 
> I applied this new write thread model to HLog, and the performance test in our 
> test cluster shows about a 3X throughput improvement (from 12150 to 31520 for 1 
> RS, and from 22000 to 70000 for 5 RS); the 1-RS write throughput (1K row size) 
> even beats that of BigTable (the Percolator paper published in 2011 says 
> Bigtable's write throughput then was 31002). I can provide the detailed 
> performance test results if anyone is interested.
> The change for the new write thread model is as below:
>  1> All put handler threads append their edits to HLog's local pending buffer; 
> (it notifies the AsyncWriter thread that there are new edits in the local buffer)
>  2> All put handler threads wait in the HLog.syncer() function for the underlying 
> threads to finish the sync that covers their txid;
>  3> A single AsyncWriter thread is responsible for retrieving all the buffered 
> edits from HLog's local pending buffer and writing them to HDFS 
> (hlog.writer.append); (it notifies the AsyncFlusher thread that there are new 
> writes to HDFS that need a sync)
>  4> A single AsyncFlusher thread is responsible for issuing a sync to HDFS 
> to persist the writes by AsyncWriter; (it notifies the AsyncNotifier thread 
> that the sync watermark has increased)
>  5> A single AsyncNotifier thread is responsible for notifying all pending 
> put handler threads that are waiting in the HLog.syncer() function;
>  6> No LogSyncer thread any more (since there are always 
> AsyncWriter/AsyncFlusher threads doing the same job it did)
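
For reference, the pipeline described above is a staged hand-off between the put 
handler threads and the three async threads; a simplified sketch (the class, field, 
and lock names are illustrative, not the actual patch code) could look like:

{code}
// Simplified, illustrative sketch of the described pipeline (not the actual patch code):
// put handlers buffer an edit and wait; AsyncWriter appends, AsyncFlusher syncs,
// AsyncNotifier publishes the sync watermark and wakes the waiting handlers.
import java.util.ArrayList;
import java.util.List;

public class PipelinedWalSketch {
  private final List<byte[]> pendingEdits = new ArrayList<byte[]>(); // HLog's local buffer
  private final Object writeLock = new Object();  // AsyncWriter -> AsyncFlusher hand-off
  private final Object syncLock = new Object();   // AsyncFlusher -> AsyncNotifier hand-off
  private long lastAssignedTxid = 0;  // txid handed out to put handlers
  private long writtenTxid = 0;       // highest txid appended by AsyncWriter
  private long flushedTxid = 0;       // highest txid persisted by AsyncFlusher
  private long syncedTillHere = 0;    // watermark published by AsyncNotifier

  // Steps 1> and 2>: buffer the edit, then wait until the sync watermark covers its txid.
  public void appendAndWaitForSync(byte[] edit) throws InterruptedException {
    long txid;
    synchronized (pendingEdits) {
      pendingEdits.add(edit);
      txid = ++lastAssignedTxid;
      pendingEdits.notify();                  // new edits for AsyncWriter
    }
    synchronized (this) {
      while (syncedTillHere < txid) wait();   // woken by AsyncNotifier
    }
  }

  // Step 3>: AsyncWriter drains the pending buffer and appends the edits.
  private final Thread asyncWriter = new Thread("AsyncWriter") {
    public void run() {
      try {
        while (true) {
          List<byte[]> batch;
          long batchTxid;
          synchronized (pendingEdits) {
            while (pendingEdits.isEmpty()) pendingEdits.wait();
            batch = new ArrayList<byte[]>(pendingEdits);
            pendingEdits.clear();
            batchTxid = lastAssignedTxid;
          }
          for (byte[] edit : batch) { /* hlog.writer.append(edit) */ }
          synchronized (writeLock) {
            writtenTxid = batchTxid;
            writeLock.notify();               // new writes for AsyncFlusher
          }
        }
      } catch (InterruptedException ignored) { }
    }
  };

  // Step 4>: AsyncFlusher issues one sync covering everything written so far.
  private final Thread asyncFlusher = new Thread("AsyncFlusher") {
    public void run() {
      try {
        while (true) {
          long toFlush;
          synchronized (writeLock) {
            while (writtenTxid <= flushedTxid) writeLock.wait();
            toFlush = writtenTxid;
          }
          /* hlog.writer.sync() */
          synchronized (syncLock) {
            flushedTxid = toFlush;
            syncLock.notify();                // sync watermark advanced
          }
        }
      } catch (InterruptedException ignored) { }
    }
  };

  // Step 5>: AsyncNotifier publishes the watermark and wakes all waiting handlers.
  private final Thread asyncNotifier = new Thread("AsyncNotifier") {
    public void run() {
      try {
        while (true) {
          long watermark;
          synchronized (syncLock) {
            while (flushedTxid <= syncedTillHere) syncLock.wait();
            watermark = flushedTxid;
          }
          synchronized (PipelinedWalSketch.this) {
            syncedTillHere = watermark;
            PipelinedWalSketch.this.notifyAll();
          }
        }
      } catch (InterruptedException ignored) { }
    }
  };

  public void start() {
    asyncWriter.start();
    asyncFlusher.start();
    asyncNotifier.start();
  }
}
{code}

The point of the staging is batching: each AsyncFlusher sync covers every edit 
appended while the previous sync was in flight, which matters most when many 
handlers write concurrently and much less at low concurrency.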
