>> Using hundreds or thousands of snapshots is probably fine
>> mostly.

As I mentioned previously (with a link to the relevant email
describing the details), the real issue is reflinks/backrefs,
and subvolumes and snapshots usually involve a lot of them.

> We find that typically apt is very slow on a machine with 50
> or so snapshots and raid10. Slow as in probably 10x slower as
> doing the same update on a machine with 'single' and no
> snapshots.

That seems to indicate using snapshots of the '/' volume to
provide a "rollback" facility, as SUSE does. Since '/' usually
holds many small files, and installing upgraded packages
rewrites only a small part of them, that usually creates a lot
of reflinks/backrefs.
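
For instance, to see how many snapshots are involved (just a
sketch; '/' is only an example mountpoint):

  # List only the snapshot subvolumes under '/':
  btrfs subvolume list -s /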

But it is unusual that you find the system slowing down
significantly in ordinary operations, because what is slow in
situations with many reflinks/backrefs per extent is not
access, but operations like 'balance' or snapshot 'delete'.

Guessing wildly, what you describe seems more like the effect
of low locality (i.e. high fragmentation), which is often the
result of the 'ssd' mount option; that option should always be
explicitly disabled (even for volumes on flash SSD storage). I
would suggest some use of 'filefrag' to analyze the situation,
and perhaps 'defrag' and 'balance' to repair it.
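
A minimal sketch of what I mean (the paths are just examples
of files that 'apt' rewrites often):

  # Count extents per file; a high count suggests low
  # locality:
  filefrag /var/lib/dpkg/status /var/cache/apt/pkgcache.bin

  # Explicitly disable the 'ssd' heuristics; this can also
  # be made permanent via 'nossd' in /etc/fstab:
  mount -o remount,nossd /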

Another possibility is having compression enabled combined with
many in-place updates to some files, which can also result in
low locality (high fragmentation).
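
To check whether that applies, something like this (output
details vary by kernel and 'btrfs-progs' version; '/var/log' is
only an example path):

  # Look for 'compress' or 'compress-force' among the
  # mount options:
  grep btrfs /proc/mounts

  # Per-file or per-directory compression property, if any:
  btrfs property get /var/log compression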

As usual with Btrfs, there are corner cases to avoid: 'defrag'
should be done before 'balance', and with compression switched
off (IIRC); see the two wiki excerpts below and the sketch
after them:

https://wiki.archlinux.org/index.php/Btrfs#Defragmentation

  Defragmenting a file which has a COW copy (either a snapshot
  copy or one made with cp --reflink or bcp) plus using the -c
  switch with a compression algorithm may result in two
  unrelated files effectively increasing the disk usage.

https://wiki.debian.org/Btrfs

  Mounting with -o autodefrag will duplicate reflinked or
  snapshotted files when you run a balance. Also, whenever a
  portion of the fs is defragmented with "btrfs filesystem
  defragment" those files will lose their reflinks and the data
  will be "duplicated" with n-copies. The effect of this is that
  volumes that make heavy use of reflinks or snapshots will run
  out of space.

  Additionally, if you have a lot of snapshots or reflinked files,
  please use "-f" to flush data for each file before going to the
  next file.
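
Putting those caveats together, a defensive sequence might look
like this (a sketch only; the path and the balance filter are
illustrative, and defragmenting will still break reflinks as
described above):

  # Defragment first, flushing each file, and without
  # forcing compression (no '-c'):
  btrfs filesystem defragment -r -f /home

  # Only then rebalance, limiting work to mostly-empty
  # data chunks:
  btrfs balance start -dusage=50 /home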

I prefer dump-and-reload.
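
That is, something like the following (only a sketch; the
device names are made up, it is best done from a rescue system,
and of course it discards all snapshots):

  # Copy everything to a spare filesystem, staying on one
  # filesystem and preserving hard links, ACLs and xattrs:
  rsync -aHAXx / /mnt/spare/

  # Recreate the volume and reload the data:
  mkfs.btrfs -f /dev/sdX1
  mount /dev/sdX1 /mnt/new
  rsync -aHAX /mnt/spare/ /mnt/new/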