[ https://issues.apache.org/jira/browse/ACCUMULO-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200688#comment-14200688 ]

Adam Fuchs commented on ACCUMULO-3303:
--------------------------------------

[~ecn] Thanks for the insight. Are you referring to code within HDFS, or is 
there Accumulo code that does that fragmentation? Maybe this is a more general 
performance bug with writing to a large HDFS file in the way we do? Do we have 
any DfsLogger perf tests that could be used to help isolate this?
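Absent an existing test, a standalone sketch along these lines could isolate the 
HDFS side: append fixed-size buffers to one growing file, hflush after each write 
(matching tserver.wal.sync.method=hflush), and report per-GB throughput. This is 
just a sketch, not existing Accumulo code; the scratch path, buffer size, and 
total size are arbitrary choices:

{code}
import java.util.Random;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Rough WAL-style write benchmark (sketch, not an Accumulo test).
// Writes a single large HDFS file, hflushing after each buffer, and
// prints throughput for each 1GB segment so a slowdown at some file
// size would show up as a drop in the later segments.
public class WalWriteBench {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path("/tmp/wal-bench");  // hypothetical scratch path
    byte[] buf = new byte[1 << 20];          // 1MB per hflush'd write (arbitrary)
    new Random(42).nextBytes(buf);
    long written = 0;
    long mark = System.nanoTime();
    try (FSDataOutputStream out = fs.create(path, true)) {
      while (written < 8L << 30) {           // 8GB total, covering the sizes tested here
        out.write(buf);
        out.hflush();                        // same sync method as the tserver config
        written += buf.length;
        if (written % (1L << 30) == 0) {     // report each 1GB segment
          long now = System.nanoTime();
          System.out.printf("GB %d: %.1f MB/s%n", written >> 30,
              1024.0 / ((now - mark) / 1e9));
          mark = now;
        }
      }
    }
    fs.delete(path, false);
  }
}
{code}

If throughput falls off once the file grows past some threshold, that would point 
at HDFS rather than Accumulo's walog metadata management.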

It sounds like one probable solution to this ticket would be to warn people who 
are trying to use a >2GB/1.1 WAL not to do so (and document appropriately).
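In the meantime, capping the walog size is a one-line config change, e.g. from 
the shell (1G here is just an arbitrary value under that limit):

{code}
root@instance> config -s tserver.walog.max.size=1G
{code}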

> funky performance with large WAL
> --------------------------------
>
>                 Key: ACCUMULO-3303
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-3303
>             Project: Accumulo
>          Issue Type: Bug
>          Components: logger, tserver
>    Affects Versions: 1.6.1
>            Reporter: Adam Fuchs
>         Attachments: 1GB_WAL.png, 2GB_WAL.png, 4GB_WAL.png, 512MB_WAL.png, 
> 8GB_WAL.png, WAL_disabled.png
>
>
> The tserver seems to get into a funky state when writing to a large 
> write-ahead log. I ran some continuous ingest tests varying 
> tserver.walog.max.size in {512M, 1G, 2G, 4G, 8G} and got some results that I 
> have yet to understand. I was expecting to see the effects of walog metadata 
> management as described in ACCUMULO-2889, but I also found an additional 
> behavior of ingest slowing down for long periods when using a large walog 
> size.
>
> The cluster configuration was as follows:
> {code}
> Accumulo version: 1.6.2-SNAPSHOT (current head of origin/1.6)
> Nodes: 4
> Masters: 1
> Slaves: 3
> Cores per node: 24
> Drives per node: 8x1TB data + 2 raided system
> Memory per node: 64GB
> tserver.memory.maps.max=2G
> table.file.compress.type=snappy (for ci table only)
> tserver.mutation.queue.max=16M
> tserver.wal.sync.method=hflush
> Native maps enabled
> {code}


