On Sat, Nov 14, 2015 at 10:11:31PM +0800, CHENG Yuk-Pong, Daniel  wrote:
> Hi List,
> 
> 
> I have read the Gotcha[1] page:
> 
>    Files with a lot of random writes can become heavily fragmented
> (10000+ extents), causing thrashing on HDDs and excessive multi-second
> spikes of CPU load on systems with an SSD or **large amounts of RAM**.
> 
> Why could a large amount of memory worsen the problem?

   Because the kernel will hang on to lots of changes in RAM for
longer. With less memory, there's more pressure to write out dirty
pages to disk, so the changes get written out in smaller pieces more
often. With more memory, the changes being written out get "lumpier".
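
   For example, the writeback thresholds (vm.dirty_background_ratio and
vm.dirty_ratio) are percentages of RAM by default, 10% and 20% on most
kernels if I remember right, so a 96 GB box can accumulate something
like 9-10 GB of dirty data before background writeback even starts, and
getting on for 19 GB before writers are throttled -- and all of that
then has to be pushed out to disk.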

> If **too much** memory is a problem, is it possible to limit the
> memory btrfs uses?

   There are some VM knobs you can twiddle, I believe, but I haven't
really played with them myself -- I'm sure there are more knowledgeable
people around here who can suggest suitable things to play with.

   Hugo.

> Background info:
> 
> I am running a heavy-write database server with 96GB of RAM. In the
> worst case it causes multiple minutes of high CPU load. Systemd keeps
> killing and restarting services, and the old jobs don't die because
> they are stuck in uninterruptible wait... etc.
> 
> Tried with nodatacow, but it seems to only affect new files. It is
> not a subvolume option either...
> 
> 
> Regards,
> Daniel
> 
> 
> [1] https://btrfs.wiki.kernel.org/index.php/Gotchas#Fragmentation

-- 
Hugo Mills             | Anyone who says their system is completely secure
hugo@... carfax.org.uk | understands neither systems nor security.
http://carfax.org.uk/  |
PGP: E2AB1DE4          |                                        Bruce Schneier
