On Thu, Feb 14, 2013 at 11:05:39AM -0700, Chris Murphy wrote:
> 
> On Feb 14, 2013, at 1:59 AM, Hugo Mills <h...@carfax.org.uk> wrote:
> >> 
> >> Data, RAID1: total=2.66TB, used=2.66TB
> > 
> > This is the amount of actual useful data (i.e. what you see with du
> > or ls -l). Double this (because it's RAID-1) to get the number of
> > bytes of raw storage used.
> 
> Right, the decoder ring. Effectively no outsiders will understand
> this. It contradicts the behavior of conventional df with btrfs
> volumes. And it becomes untenable with per-subvolume profiles.
   Correct, but *all* other single-value (or small-number-of-values)
displays of space usage fail in similar ways. We've(*) had this
discussion on this mailing list many times before. All "simple"
displays of disk usage will cause someone to misinterpret something at
some point, and get cross.

(*) For non-"you" values of "we".

   If you want a display of "raw bytes used/free", then someone will
complain that they had 20GB free, wrote a 10GB file, and it's all
gone. If you want a display of "usable data used/free", then we can't
predict the "free" part. There is no single set of values that will
make this simple.

> >> Total devices 2 FS bytes used 1.64TB
> >> devid 1 size 2.73TB used 1.64TB path /dev/sdi1
> >> devid 2 size 2.73TB used 2.67TB path /dev/sde1
> > 
> > This is the amount of raw disk space allocated. The total of "used"
> > here should add up to twice the "total" values above (for
> > Data+Metadata+System).
> 
> I'm mostly complaining about the first line. If 2.67TB of writes to
> sde1 are successful enough to be stated as "used" on that device,
> then FS bytes used should be at least 2.67TB.

   The values shown above are for bytes *allocated* -- i.e. the
"total" values shown in btrfs fi df. You haven't added in the
metadata, which I'm willing to bet is another 100 GiB or so of
allocated space, bringing you up to the 2.67 TiB.

   (There's another problem with this display, which is that it's
actually showing TiB, not TB. There have been patches for this, but I
don't know if any are current.)

> >> So I can't tell if it's ~1.64TB copied or 2.6TB.

   2.66 TiB. The 1.64 TiB figure is clearly wrong, given all the
other values. Hence my conclusion below.

> > Looks like /dev/sdi1 isn't actually being written to -- it should
> > be the same allocation as /dev/sde1.
> Yeah he's getting a lot of these, and I don't know what it is:
> 
> > Feb 14 08:32:30 nerv kernel: [180511.760850] lost page write due to I/O
> > error on /dev/sdd1
> 
> It's not tied to btrfs or libata so I don't think it's the drive
> itself reporting the write error. I think maybe the kernel has become
> confused as a result of the original ICRC ABRT, and the subsequent
> change from sdd to sdi.

   That would be my conclusion, too. But with the newly-appeared
/dev/sdi1, btrfs fi show picks it up as belonging to the FS (because
it's got the same UUID), but it's not been picked up by the kernel, so
the kernel's not trying to write to it, and it's therefore massively
out of date.

   I think the solution, if it's certain that the drive is now
behaving sensibly again, is one of:

 * unmount, btrfs dev scan, remount, scrub
or
 * btrfs dev delete missing, add /dev/sdi1 to the FS, and balance

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
       --- I must be musical: I've got *loads* of CDs ---
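   Spelled out as commands, those two options look roughly like the
sketch below. The mount point /mnt is an assumption (substitute the
real one), and these commands modify the filesystem, so don't run
them until you're sure the drive is healthy:

```shell
# Option 1: unmount, re-scan so the kernel picks up /dev/sdi1 again,
# remount, then scrub to rewrite the out-of-date blocks on sdi1:
umount /mnt
btrfs device scan
mount /dev/sde1 /mnt
btrfs scrub start /mnt

# Option 2: drop the stale device record and re-add the disk, then
# balance to re-replicate the data across both devices:
btrfs device delete missing /mnt
btrfs device add /dev/sdi1 /mnt
btrfs balance start /mnt
```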