I have a system with less than 50% of its disk space used.  It just started 
rejecting writes due to lack of disk space.  I ran "btrfs balance" and then it 
started working correctly again.  It seems that a btrfs filesystem, if left 
alone, will eventually become fragmented enough that it rejects writes (I've 
had similar issues on other systems running BTRFS with other kernel versions).

Is this a known issue?

Is there any good way of recognising when it's likely to happen?  Is there 
anything I can do, other than rewriting a medium-sized file, to determine 
when it has happened?

# uname -a 
Linux trex 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 
GNU/Linux
# df -h / 
Filesystem      Size  Used Avail Use% Mounted on 
/dev/sdc        239G  113G  126G  48% /
# btrfs fi df / 
Data, RAID1: total=117.00GiB, used=111.81GiB 
System, RAID1: total=32.00MiB, used=48.00KiB 
Metadata, RAID1: total=1.00GiB, used=516.00MiB 
GlobalReserve, single: total=246.59MiB, used=0.00B
# btrfs dev usa / 
/dev/sdc, ID: 1 
  Device size:           238.47GiB 
  Device slack:              0.00B 
  Data,RAID1:            117.00GiB 
  Metadata,RAID1:          1.00GiB 
  System,RAID1:           32.00MiB 
  Unallocated:           120.44GiB 

/dev/sdd, ID: 2 
  Device size:           238.47GiB 
  Device slack:              0.00B 
  Data,RAID1:            117.00GiB 
  Metadata,RAID1:          1.00GiB 
  System,RAID1:           32.00MiB 
  Unallocated:           120.44GiB

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
