Hi all,
I just noticed a mismatch between statfs.f_bfree and statfs.f_bavail, i.e.
(squeeze)fslab2:~# ./statfs /data/fhgfs/storage1/
/data/fhgfs/storage1/: avail: 3162112 free: 801586610176
(with
uint64_t avail = statbuf.f_bavail * statbuf.f_bsize;
uint64_t free = statbuf.f_bfree * statbuf.f_bsize;)
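For reference, a minimal sketch of such a test program (an assumption reconstructed from the two lines above and the output format, not the original ./statfs tool):

/* Sketch of a statfs test tool, assuming the avail/free computation above. */
#include <stdio.h>
#include <stdint.h>
#include <sys/vfs.h>

int main(int argc, char **argv)
{
        struct statfs statbuf;

        if (argc < 2 || statfs(argv[1], &statbuf)) {
                perror("statfs");
                return 1;
        }

        /* f_bavail = blocks available to unprivileged users,
         * f_bfree  = total free blocks; normally the two are close. */
        uint64_t avail = (uint64_t) statbuf.f_bavail * statbuf.f_bsize;
        uint64_t free  = (uint64_t) statbuf.f_bfree  * statbuf.f_bsize;

        printf("%s: avail: %llu free: %llu\n", argv[1],
               (unsigned long long) avail, (unsigned long long) free);
        return 0;
}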
Hello Chris,
On 05/23/2013 10:33 PM, Chris Mason wrote:
But I was using 8 drives. I'll try with 12.
My benchmarks were on flash, so the rmw I was seeing may not have had as
big an impact.
I just further played with it and simply introduced a requeue in
raid56_rmw_stripe() if the rbio is
On 05/23/2013 09:37 PM, Chris Mason wrote:
> Quoting Bernd Schubert (2013-05-23 15:33:24)
>> Btw, any chance to generally use chunksize/chunklen instead of stripe,
>> as the md layer does? IMHO it is less confusing to use
>> n-datadisks * chunksize = stripesize.
>
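To make the terminology concrete (the numbers below are only illustrative, not taken from the test system): with 10 data disks in a 12-drive RAID-6 and a 64 KiB chunk per disk, the full stripe is 10 * 64 KiB = 640 KiB, and any write smaller than that forces a read-modify-write of the parity.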
On 05/23/2013 03:34 PM, Chris Mason wrote:
> Quoting Bernd Schubert (2013-05-23 09:22:41)
>> On 05/23/2013 03:11 PM, Chris Mason wrote:
>>> Quoting Bernd Schubert (2013-05-23 08:55:47)
>>>> Hi all,
>>>>
>>>> we got a new test system here
On 05/23/2013 03:41 PM, Bob Marley wrote:
> On 23/05/2013 15:22, Bernd Schubert wrote:
>>
>> Yeah, I know, and I'm using iostat already. md raid6 does not do rmw,
>> but it does not fill the device queue; afaik it flushes the underlying
>> devices quickly as it does no
On 05/23/2013 03:11 PM, Chris Mason wrote:
Quoting Bernd Schubert (2013-05-23 08:55:47)
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than any
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than either of these two, if
it wouldn't read all the data during the writes. Is this a known issue? This
On 03/27/2013 10:18 AM, Hugo Mills wrote:
On Wed, Mar 27, 2013 at 12:28:23AM +0100, Clemens Eisserer wrote:
I am using a btrfs loopback mounted file with lzo-compression on
Linux-3.7.9, and I ran into "No space left on device" messages,
although df reports only 55% of space is used:
# touch tes
On 01/16/2013 12:32 AM, Tom Kusmierz wrote:
p.s. bizarre that when I "fill" the ext4 partition with test data everything
checks out OK (crc over all files), but with Chris' tool it gets
corrupted - for both the crappy Adaptec PCIe controller and for the
motherboard's built-in one. Also since courses of histor
Simply mounting and umounting the device will now *always* crash the kernel.
Logs of a 3.8-git debug kernel are below.
I am not at all familiar with the btrfs code, but can't we simply
abort the transaction and return -EIO instead of BUG_ON()?
All those BUG_ON()s look scary... Having a failed fil
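Roughly the pattern being suggested (a hedged sketch only, not actual fs/btrfs code; btrfs_abort_transaction() does exist, but its exact signature differs between kernel versions, and the helper below is made up):

/* Sketch: fail the transaction and go read-only instead of BUG_ON(). */
ret = do_risky_tree_operation(trans, root);     /* hypothetical helper */
if (ret) {
        /* marks the fs aborted/read-only and logs the error */
        btrfs_abort_transaction(trans, root, ret);
        return ret;     /* propagate -EIO (or the real errno) to the caller */
}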
On 01/15/2013 02:35 PM, Bernd Schubert wrote:
Hrmm, that bug then seems to cause another bug. After the file system
went into RO, I simply umounted and mounted again and a few seconds
after that my entire system failed. Relevant logs are attached.
Further log attachment:
btrfsck /dev/vg_fuj2
On 08/19/2011 09:36 PM, Josef Bacik wrote:
On 08/19/2011 12:45 PM, Bernd Schubert wrote:
Just for performance tests I run:
./bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
and this causes an endless number of stack traces. Those seem to
come from:
use_block_rsv()
ret
I think we should either remove it or replace it with WARN_ON_ONCE()
Remove WARN_ON(1) in a common code path
From: Bernd Schubert
Something like bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
will trigger lots of those WARN_ON(1), so let's remove it.
Signed-off-by: Bernd Schubert
---
fs/btrfs
Just for performance tests I run:
./bonnie++ -d /mnt/btrfs -s0 -n 1:256:256:1 -r 0
and this causes an endless number of stack traces. Those seem to
come from:
use_block_rsv()
        ret = block_rsv_use_bytes(block_rsv, blocksize);
        if (!ret)
                return block_rsv;
i
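For illustration, a hedged sketch of that failure path with the two alternatives from above (abbreviated; the real context in use_block_rsv() may differ):

        ret = block_rsv_use_bytes(block_rsv, blocksize);
        if (!ret)
                return block_rsv;

        /* Reservation failed: either drop the warning entirely, or warn
         * only once instead of flooding the log under bonnie++. */
        WARN_ON_ONCE(1);        /* was: WARN_ON(1) */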