Zygo Blaxell posted on Mon, 13 Apr 2015 00:04:36 -0400 as excerpted:

> A database ends up maxing out at about a factor of two space usage
> because it tends to write short uniform-sized bursts of pages randomly,
> so we get a pattern a bit like bricks in a wall:
> 
>         0 MB  AA BB CC DD EE FF GG HH II JJ KK   1 MB   half the extents
>         0 MB   LL MM NN OO PP QQ RR SS TT UU V   1 MB   the other half
>         0 MB  ALLBMMCNNDOOEPPFQQGRRHSSITTJUUKV   1 MB   what the file
>         looks like
> 
> Fixing this is non-trivial (it may require an incompatible disk format
> change).  Until this is fixed, the most space-efficient approach seems
> to be to force compression (so the maximum extent is 128K instead of
> 1GB) and never defragment database files ever.

... Or set the database file NOCOW at creation, and don't snapshot it, so 
overwrites are always in-place.  (Btrfs compression and checksumming are 
turned off with NOCOW, but as we've seen, compression isn't all that 
effective on random-rewrite-pattern files anyway, and databases generally 
have their own data-integrity handling, so neither is a huge loss, while 
the in-place rewrite makes for better performance and a more predictable 
steady state.)
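A minimal sketch of the NOCOW setup, assuming a btrfs filesystem: the `C`
attribute is set on the directory before any database files exist, so new
files inside it inherit No_COW.  Note that `chattr +C` only helps if
applied while a file is still empty; it has no effect on data already
written.  The temporary directory here stands in for the real database
data directory, which is an assumption of the sketch.

```shell
#!/bin/sh
# Sketch: mark a database directory NOCOW on btrfs so files created in it
# inherit the No_COW attribute.  In practice DB_DIR would be the database's
# actual data directory; a temporary directory is used here for illustration.
DB_DIR=$(mktemp -d)

# chattr +C fails on filesystems without copy-on-write semantics
# (ext4, tmpfs, ...), so guard it to keep the sketch portable.
if chattr +C "$DB_DIR" 2>/dev/null; then
    lsattr -d "$DB_DIR"   # the 'C' (No_COW) flag should now be listed
else
    echo "No_COW not supported here (needs btrfs)"
fi
```

Remember that this must happen before the database writes anything, and
that snapshotting a NOCOW file re-introduces copy-on-write for subsequent
overwrites, which is why the advice above pairs NOCOW with not
snapshotting the file.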

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
