Nicholas D Steeves posted on Thu, 28 Jul 2016 13:53:31 -0400 as excerpted:

> Additionally, I've read that -o autodefrag doesn't yet work well for
> large databases.  Would a supplementary targeted defrag policy be useful
> here?  For example: a general cron/systemd.trigger default of "-t 32M",
> and then another job for /var/lib/mysql/ with a policy of "-f -t 1G"? 
> Or did your findings also show that large databases did not benefit from
> larger target extent defrags?

That the autodefrag mount option didn't work well with large rewrite-
pattern files like VM images and databases was indeed the previous 
advice, but that changed at some point.  I'm not sure whether autodefrag 
always worked this way and people simply weren't sure before, or whether 
the behavior actually changed, but in any case, these days it doesn't 
rewrite the entire file, only a (relatively) larger block of it than the 
individual 4 KiB block that would otherwise be rewritten.  (I'm not sure 
what size; perhaps the same 256 KiB that's the kernel default for manual 
defrag?)

As such, it scales better than it would if the full gig-size (or 
whatever) file were being rewritten, although there will still be some 
fragmentation.

And for the same reason, it's actually not as bad with snapshots as it 
might otherwise have been, because it only cows (de-reflinks) a bit more 
of the file than the write would have cowed in any case, so it doesn't 
duplicate the entire file, as some originally feared.

Though the only way to be sure would be to try it.
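For reference, the sort of supplementary targeted policy the quoted 
question describes might look something like the following as a cron 
script.  The paths and target sizes are just the question's own examples, 
not tested recommendations, and -r/-t/-f are the recurse, target-extent-
size, and flush flags of btrfs filesystem defragment:

  # sketch of e.g. /etc/cron.weekly/btrfs-defrag -- untested policy
  # general pass: target extents up to 32 MiB
  btrfs filesystem defragment -r -t 32M /home
  # database pass: aim for much larger extents, flushing per file
  btrfs filesystem defragment -r -f -t 1G /var/lib/mysql

Keep in mind that manual defrag on snapshotted files breaks the reflinks 
and can balloon space usage, so a targeted policy like this needs to be 
weighed against your snapshotting.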


Meanwhile, it's worth noting that autodefrag works best if on from the 
beginning, so fragmentation doesn't get ahead of it.  Here, I ensure 
autodefrag is on from the first time I mount it, while the filesystem is 
still empty.  That way, fragmentation should never get out of hand, 
fragmenting free space so badly that large free extents to defrag into
/can't/ be found, as may be the case if autodefrag isn't turned on until 
later and manual defrag hasn't been done regularly either.  There have 
been a few reports of people waiting to turn it on until the filesystem 
is highly fragmented, and then having several days of low performance as 
defrag tries to catch up.  If it's consistently on from the beginning, 
that shouldn't happen.
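
Concretely, having it on from the first mount just means putting it in 
the mount options before any data lands on the filesystem, for example 
via an /etc/fstab line like this (device, mountpoint and the other 
options are placeholders, obviously):

  /dev/sdb1  /data  btrfs  defaults,autodefrag,noatime  0 0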

Of course that may mean backing up and recreating the filesystem fresh in 
order to have autodefrag on from the beginning, if you're looking at 
trying it on existing filesystems that are likely already highly 
fragmented.
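
As a sketch of that cycle (device names and paths are hypothetical, and 
naturally you'd verify the backup before the mkfs step):

  # back up the existing filesystem somewhere safe first
  cp -a /mnt/olddata/. /mnt/backup/
  # recreate the filesystem fresh
  umount /mnt/olddata
  mkfs.btrfs -f /dev/sdX1
  # mount with autodefrag from the very first mount, then restore
  mount -o autodefrag /dev/sdX1 /mnt/olddata
  cp -a /mnt/backup/. /mnt/olddata/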

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

