B.H. Hello. I have a setup with 4 RAID10 arrays, 4 drives each (using md). Device usage is as follows:
# btrfs device usage /storage/bkp1
/dev/md1, ID: 1
   Device size:        10.92TiB
   Device slack:          0.00B
   Data,single:        10.19TiB
   Metadata,RAID1:    199.00GiB
   System,RAID1:        8.00MiB
   Unallocated:       542.79GiB

/dev/md2, ID: 2
   Device size:        10.92TiB
   Device slack:          0.00B
   Data,single:        10.21TiB
   Metadata,RAID1:    181.00GiB
   System,RAID1:        8.00MiB
   Unallocated:       541.80GiB

/dev/md3, ID: 3
   Device size:        10.92TiB
   Device slack:          0.00B
   Data,single:        10.41TiB
   Metadata,RAID1:     65.00GiB
   Unallocated:       457.81GiB

/dev/md4, ID: 4
   Device size:        10.92TiB
   Device slack:          0.00B
   Data,single:         9.89TiB
   Metadata,RAID1:     89.00GiB
   Unallocated:       959.81GiB

Mount options: compress=zlib,commit=60,noatime

This setup is used to store regular backups from 2 different sites (each on a different subvolume, with regular snapshots). The backup is done using rsync, since the source storage uses XFS, not btrfs. This setup has been working excellently for about 7 months. Currently, it holds about 100 snapshots in total.

Recently, I've started to face problems with "transaction aborted" messages and the volume going read-only. This happens unexpectedly, after several hours of rsync running. As a precursor, the kernel throws several warnings about tasks being blocked for more than 120 seconds, which is probably connected to the long time required to commit a transaction. After a transaction abort, I reboot the server and restart the backup, and it seems to continue fine until the next crash.

Scrub didn't find any errors on the volume. I'm unable to run btrfs check, as it consumes all of the RAM and crashes.

Any suggestions on what's going wrong and how to fix this?

# uname -a
Linux yemot-4u 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:07:12 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# btrfs --version
btrfs-progs v4.7

Thanks in advance!

--
Moshiach NOW! Moshiach is coming very soon, prepare yourself!
Long live our Master, our Teacher and our Rebbe, the King Moshiach, forever and ever!
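For context, the backup cycle described above (rsync from the XFS source into a live subvolume, then freezing the result as a read-only snapshot) can be sketched roughly as below. The paths, host name, subvolume layout, and snapshot naming scheme are my own illustrative assumptions, not taken from the actual setup; the script defaults to dry-run mode so it only prints the commands it would execute.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an rsync-then-snapshot backup cycle on btrfs.
# SRC, DST, SNAPDIR and the naming scheme are assumptions for illustration.
set -euo pipefail

SRC="backuphost:/data/"            # XFS source (assumed host/path)
DST="/storage/bkp1/site1"          # btrfs subvolume receiving the backup
SNAPDIR="/storage/bkp1/snapshots"  # where read-only snapshots are kept

# Defaults to dry-run (print commands only); set DRY_RUN=0 to really run.
DRY_RUN="${DRY_RUN:-1}"

run() {
    # Print the command in dry-run mode; otherwise execute it.
    if [ "$DRY_RUN" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

backup_cycle() {
    # Mirror the source into the live subvolume, preserving hard links,
    # ACLs and xattrs; --delete keeps the mirror exact.
    run rsync -aHAX --delete "$SRC" "$DST/"

    # Freeze the result as a dated, read-only snapshot.
    run btrfs subvolume snapshot -r "$DST" "$SNAPDIR/site1-$(date +%F)"
}

backup_cycle
```

Because each snapshot is read-only and cheap (copy-on-write), this keeps the history of roughly 100 snapshots mentioned above without duplicating unchanged data.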