On Tue, Jun 16, 2015 at 09:06:56AM +0200, Ingvar Bogdahn wrote:
Hi again,
Benchmarking over time seems like a good idea, but what if I see that a
particular database does indeed degrade in performance? How can I then
selectively improve performance for that file, since disabling CoW only
works for new, empty files?
Is it correct that bundling small random
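One commonly cited workaround for an existing file is to copy its contents into a fresh no-CoW file and swap it into place, precisely because chattr +C only takes effect on new, empty files. A minimal sketch, assuming the database is stopped first (the helper name is made up):

```shell
# Hypothetical helper: rebuild an existing file as a no-CoW file.
# chattr +C only sticks on new, empty files, so the data has to be
# copied into a fresh file and swapped into place. Whatever writes
# to the file must be stopped before calling this.
nocow_rebuild() {
    f="$1"
    tmp="$f.nocow.$$"
    touch "$tmp"                        # new, empty file...
    chattr +C "$tmp" 2>/dev/null ||     # ...so +C can take effect
        echo "warning: chattr +C failed (filesystem may not support it)" >&2
    cat "$f" > "$tmp"                   # plain byte copy, no reflink
    mv "$tmp" "$f"                      # replace on the same filesystem
}
```

Before doing this, it may be worth checking whether CoW fragmentation is actually the problem, e.g. with `filefrag -v /path/to/db.file`, which lists the file's extents.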
On Mon, Jun 15, 2015 at 11:34:35AM +0200, Ingvar Bogdahn wrote:
Hello there,
I'm planning to use btrfs for a medium-sized webserver. It is commonly
recommended to set nodatacow for database files to avoid performance
degradation. However, apparently nodatacow disables some of my main
motivations for using btrfs: checksumming and (probably) incremental
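For what it's worth, the attribute can also be set per directory rather than via the nodatacow mount option, so only the database files lose CoW (and with it checksumming) while the rest of the filesystem keeps both. A rough sketch, with a temporary directory standing in for a real database directory such as /var/lib/mysql:

```shell
# Sketch: on btrfs, files created inside a directory that carries the
# 'C' attribute inherit No_COW, so only new database files lose CoW.
# A temp directory stands in for the real database directory here.
dbdir="$(mktemp -d)/mysql"
mkdir -p "$dbdir"
if chattr +C "$dbdir" 2>/dev/null; then
    lsattr -d "$dbdir"   # the 'C' flag shows up in the attribute field
else
    echo "chattr +C not supported on this filesystem" >&2
fi
```

Note this only affects files created after the attribute is set; existing files still need the copy-into-a-new-file treatment.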