On Wednesday 07 October 2009 05:17:54 Chris Mason wrote:
Thanks, I'll try to reproduce. Which raid level did you use for data?
If not raid1, could you try with raid1? ;)
I'm not sure, since the utils won't tell. I mkfs'ed and mounted one of the 3.5GB
files with no special options, and
On Wed, Oct 07, 2009 at 03:51:46PM +0200, Diego Calleja wrote:
On Wednesday 07 October 2009 21:45:29 Chris Mason wrote:
I'm afraid this is good old enospc. Balancing still needs some work to
be completely safe.
I've tried using less data and raid1, but I can't reproduce it.
By the way, I think it'd be useful if debug-tree told you which policy
the fs is applying to each chunk. Something like this:
item 4 key (FIRST_CHUNK_TREE CHUNK_ITEM 8379826176) itemoff 3495 itemsize 112
chunk length 319881216 owner 2 type 17 (data on RAID1)
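For what it's worth, the type field printed there is just the block group flags bitmask, so the name could be derived with one bit test per flag. A minimal sketch of the decoding (flag values match the BTRFS_BLOCK_GROUP_* constants in ctree.h circa 2.6.31; decode_chunk_type is a made-up helper name, not anything in btrfs-progs):

```shell
#!/usr/bin/env bash
# Decode the "type" bitmask that btrfs-debug-tree prints for a chunk.
# Values correspond to BTRFS_BLOCK_GROUP_* in ctree.h (2.6.31 era).
decode_chunk_type() {
  local t=$1 names=()
  (( t & 0x01 )) && names+=(DATA)
  (( t & 0x02 )) && names+=(SYSTEM)
  (( t & 0x04 )) && names+=(METADATA)
  (( t & 0x08 )) && names+=(RAID0)
  (( t & 0x10 )) && names+=(RAID1)
  (( t & 0x20 )) && names+=(DUP)
  (( t & 0x40 )) && names+=(RAID10)
  local IFS='|'
  echo "${names[*]}"
}

decode_chunk_type 17   # type 17 = 0x11 -> DATA|RAID1
```

So the "type 17" in the item above decodes to data on RAID1, as shown.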
I was playing with btrfs with 2 files of 3.5 GB (using loop), and I completely
zeroed one of the files. As expected, I got checksum failures; I then ran
btrfs-vol -b just to see what happened, and I got this (using -rc3):
[25765.340492] btrfs csum failed ino 260 off 122880 csum 2566472073 private
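For anyone trying to reproduce, the setup described above can be sketched roughly like this. This is a guess at the exact steps: file names, sizes, mount point and the data copied in are all illustrative, mkfs is run with no special options as described, it needs root, and it destroys the backing files.

```shell
# Rough, illustrative reproduction sketch -- paths and sizes are guesses.
# Destructive; needs root. Uses the 2.6.31-era btrfs-progs, where
# rebalancing was "btrfs-vol -b".

dd if=/dev/zero of=/tmp/img0 bs=1M count=3584    # two 3.5 GB files
dd if=/dev/zero of=/tmp/img1 bs=1M count=3584
losetup /dev/loop0 /tmp/img0
losetup /dev/loop1 /tmp/img1

mkfs.btrfs /dev/loop0 /dev/loop1                 # no special options
mount /dev/loop0 /mnt
cp -a /usr/src/linux /mnt/                       # put some data on it
umount /mnt

# Completely zero one of the backing files, then mount again:
dd if=/dev/zero of=/tmp/img1 bs=1M count=3584 conv=notrunc
mount /dev/loop0 /mnt

btrfs-vol -b /mnt    # rebalance; csum failures should appear in dmesg
```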
On Tue, Oct 06, 2009 at 08:48:32PM +0200, Diego Calleja wrote:
Thanks, I'll try to reproduce. Which raid level did you use for data?
If not raid1, could you try with raid1? ;)