[ 
https://issues.apache.org/jira/browse/ACCUMULO-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Newton reassigned ACCUMULO-1950:
-------------------------------------

    Assignee: Eric Newton

> Reduce the number of calls to hsync 
> ------------------------------------
>
>                 Key: ACCUMULO-1950
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1950
>             Project: Accumulo
>          Issue Type: Improvement
>            Reporter: Keith Turner
>            Assignee: Eric Newton
>             Fix For: 1.7.0
>
>
> As mutations are written to a tablet server they are buffered, and once this 
> buffer exceeds a certain size the data is written to the walog and then 
> inserted into an in-memory sorted map.   These walog buffers are per client, 
> and their maximum size is determined by tserver.mutation.queue.max.  
> Accumulo 1.5 and 1.6 call hsync() on Hadoop 2, which ensures data is flushed 
> to disk.   This introduces a fixed delay when flushing walog buffers.  The 
> smaller tserver.mutation.queue.max is, the more frequently the walog buffers 
> are flushed.   With many clients writing to a tserver, this is not much of a 
> concern because all of their walog buffers are flushed using group commit.  
> This results in high throughput because large batches of data are written 
> before hsync is called.  However, if only a few clients are writing to a 
> tserver, there will be many more calls to hsync.  It would be nice if the 
> number of calls to hsync were a function of the amount of data written, 
> regardless of the number of concurrent clients.  Currently, as the number of 
> concurrent clients goes down, the number of calls to hsync goes up.
> In 1.5 and 1.6 this can be mitigated by increasing 
> tserver.mutation.queue.max; however, this setting is multiplied by the number 
> of concurrent writers.  So increasing it can improve the performance of a 
> single writer, but it increases the chance that many concurrent writers will 
> exhaust memory.
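To make the group-commit behavior described above concrete, here is a minimal
sketch of the pattern, not Accumulo's actual walog code: writers append their
buffers to a shared log, and a single hsync() covers every buffer appended
before it, so concurrent writers that flush around the same time share one
sync. The GroupCommitLog class, SyncableLog interface, and flushBuffer method
are hypothetical names used only for illustration.

import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

public class GroupCommitLog {

  // Hypothetical stand-in for the walog output stream (e.g. an hsync-capable
  // stream on Hadoop 2).
  public interface SyncableLog {
    void append(byte[] data) throws IOException;
    void hsync() throws IOException;   // forces appended data to disk
  }

  private final SyncableLog log;
  private final ReentrantLock appendLock = new ReentrantLock();
  private final AtomicLong appended = new AtomicLong(); // bytes appended so far
  private long synced = 0;                              // bytes covered by the last hsync

  public GroupCommitLog(SyncableLog log) {
    this.log = log;
  }

  // Called when one client's mutation buffer exceeds its limit
  // (tserver.mutation.queue.max in the description above).
  public void flushBuffer(byte[] buffer) throws IOException {
    long myOffset;
    appendLock.lock();
    try {
      log.append(buffer);
      myOffset = appended.addAndGet(buffer.length);
    } finally {
      appendLock.unlock();
    }
    waitForSync(myOffset);
  }

  // Group commit: only one thread at a time calls hsync(); a writer whose data
  // was already covered by a concurrent writer's hsync returns without syncing.
  private synchronized void waitForSync(long offset) throws IOException {
    if (synced >= offset) {
      return;                      // an earlier hsync already covered our data
    }
    long target = appended.get();  // everything appended before this point
    log.hsync();                   // one sync covers all of it
    synced = Math.max(synced, target);
  }
}

With this pattern, several writers flushing at roughly the same time share a
single hsync between them, while a lone writer still pays one hsync per flush
of its tserver.mutation.queue.max-sized buffer, which is the imbalance this
ticket describes.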



--
This message was sent by Atlassian JIRA
(v6.2#6252)
