[BUG]
There are reports of btrfs hanging when running btrfs/124 with the
default mount options and btrfs/125 with the nospace_cache or
space_cache=v2 mount options, with the following backtrace.
Call Trace:
__schedule+0x2d4/0xae0
schedule+0x3d/0x90
btrfs_start_ordered_extent+0x160/0x200 [btrfs]
?
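A minimal reproduction sketch, assuming an fstests (xfstests) checkout
with TEST_DEV and SCRATCH_DEV_POOL already configured in local.config;
the device setup and the MOUNT_OPTIONS override are assumptions, not
part of the report:
  $ cd fstests
  $ ./check btrfs/124                                  # default mount options
  $ MOUNT_OPTIONS="-o nospace_cache" ./check btrfs/125
  $ MOUNT_OPTIONS="-o space_cache=v2" ./check btrfs/125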
At 02/28/2017 02:51 AM, Andrei Borzenkov wrote:
This is a VM under QEMU/KVM running openSUSE Tumbleweed. I boot it
infrequently for a short time to test something. Last time it installed
quite a lot of updates including the kernel (I think 4.9.11 was the last
version); I do not remember whether I
At 02/28/2017 12:14 AM, Filipe Manana wrote:
On Fri, Feb 24, 2017 at 2:06 AM, Qu Wenruo wrote:
If btrfs/125 is run with the nospace_cache or space_cache=v2 mount option,
btrfs will block with the following backtrace:
Happens with btrfs/124 without any mount options too.
[ ... ]
> I have a 6-device test setup at home and I tried various setups
> and I think I got rather better than that.
* 'raid1' profile:
soft# btrfs fi df /mnt/sdb5
Data, RAID1:
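For context, a two-device 'raid1' profile filesystem like the one being
inspected can be created and queried roughly as follows (device names
are hypothetical):
  # mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb5 /dev/sdc5
  # mount /dev/sdb5 /mnt/sdb5
  # btrfs fi df /mnt/sdb5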
>>> On Mon, 27 Feb 2017 22:11:29 +, p...@btrfs.list.sabi.co.uk (Peter
>>> Grandi) said:
> [ ... ]
>> I have a 6-device test setup at home and I tried various setups
>> and I think I got rather better than that.
[ ... ]
> That's a range of 700-1300 4KiB random mixed-rw IOPS,
Rerun with 1M
On 2017-02-27 14:15, John Marrett wrote:
Liubo correctly identified direct IO as a solution for my test
performance issues; with it in use I achieved 908 read and 305 write
IOPS, not quite as fast as ZFS but more than adequate for my needs. I
then applied Peter's recommendation of switching to raid10
Hi,
On further testing I found that most of the core functionality works
fine. One function I could not get to work with this x86_64 kernel and
i386 userland is
btrfs subvolume functionality works; btrfsck works; scrub works fine.
BUT
---
$ sudo btrfs send aa >
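For reference, a complete send/receive round trip normally looks like
the sketch below (subvolume and path names are hypothetical); btrfs send
operates on a read-only snapshot and writes a stream to stdout:
  $ sudo btrfs subvolume snapshot -r /mnt/vol /mnt/vol/aa
  $ sudo btrfs send /mnt/vol/aa > /tmp/aa.stream
  $ sudo btrfs receive /mnt/backup < /tmp/aa.stream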
Liubo correctly identified direct IO as a solution for my test
performance issues; with it in use I achieved 908 read and 305 write
IOPS, not quite as fast as ZFS but more than adequate for my needs. I
then applied Peter's recommendation of switching to raid10 and tripled
performance again, up to 3000
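A sketch of the direct-I/O variant of such a test with fio, bypassing
the page cache; the file name, size, and queue depth are illustrative
assumptions, not the poster's actual job:
  $ fio --name=randrw --filename=/mnt/btrfs/testfile --size=4G \
        --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 \
        --direct=1 --runtime=60 --time_based --group_reporting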
This is VM under QEMU/KVM running openSUSE Tumbleweed. I boot it
infrequently for short time to test something. Last time it installed
quite a lot of updates including kernel (I think 4.9.11 was the last
version); I do not remember whether I rebooted it after that. Today I
booted it to check
On Mon, Feb 27, 2017 at 4:14 PM, Filipe Manana wrote:
Also, I forgot to mention before: looking at the subject, the term
"deadlock" is not correct, as it's a hang.
A deadlock is when a task is trying to acquire a resource (typically a
lock) that is already held by some other
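A hypothetical userspace sketch of the distinction using flock(1): a
deadlock needs a cycle of holders and waiters, while a hang is simply a
wait that never completes:
  # deadlock: each shell holds one lock and then blocks on the other's
  $ ( exec 9>/tmp/A 8>/tmp/B; flock 9; sleep 1; flock 8 ) &
  $ ( exec 9>/tmp/B 8>/tmp/A; flock 9; sleep 1; flock 8 ) &
  # hang: blocking on an event that never arrives (no writer on the fifo)
  $ mkfifo /tmp/p && read line < /tmp/p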
On Sun, Feb 26, 2017 at 07:18:42PM -0500, Dave Jones wrote:
> Hitting this fairly frequently.. I'm not sure if this is the same bug I've
> been hitting occasionally since 4.9. The assertion looks new to me at least.
>
It was recently introduced by my commit and used to catch data loss at
On Mon, Feb 27, 2017 at 07:53:48AM -0800, Liu Bo wrote:
> On Sun, Feb 26, 2017 at 07:18:42PM -0500, Dave Jones wrote:
> > Hitting this fairly frequently.. I'm not sure if this is the same bug I've
> > been hitting occasionally since 4.9. The assertion looks new to me at
> > least.
[ ... ]
> a ten disk raid1 using 7.2k 3 TB SAS drives
Those are really low-IOPS-per-TB devices, but a good choice for
SAS, as they will have SCT/ERC.
> and used aio to test IOPS rates. I was surprised to measure
> 215 read and 72 write IOPS on the clean new filesystem.
For that you really want
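SCT/ERC matters here because a drive that supports it can be told to
give up on an unreadable sector quickly instead of stalling the whole
array; a sketch with smartmontools (device name hypothetical):
  $ sudo smartctl -l scterc /dev/sda        # query current ERC timers
  $ sudo smartctl -l scterc,70,70 /dev/sda  # cap recovery at 7.0 seconds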
On Fri, Feb 24, 2017 at 2:06 AM, Qu Wenruo wrote:
> If btrfs/125 is run with the nospace_cache or space_cache=v2 mount option,
> btrfs will block with the following backtrace:
Happens with btrfs/124 without any mount options too.
>
> Call Trace:
> __schedule+0x2d4/0xae0
>
On Mon, Feb 27, 2017 at 08:20:49AM -0500, John Marrett wrote:
> In preparation for a system and storage upgrade I performed some btrfs
> performance tests. I created a ten disk raid1 using 7.2k 3 TB SAS
> drives and used aio to test IOPS rates. I was surprised to measure 215
> read and 72 write
Hi,
can anybody please comment on that one? Josef? Chris? I still need those
patches to be able to let btrfs run for more than 24 hours without
ENOSPC issues.
Greets,
Stefan
Am 27.02.2017 um 08:22 schrieb Qu Wenruo:
>
>
> At 02/25/2017 04:23 PM, Stefan Priebe - Profihost AG wrote:
>> Dear Qu,
In preparation for a system and storage upgrade I performed some btrfs
performance tests. I created a ten disk raid1 using 7.2k 3 TB SAS
drives and used aio to test IOPS rates. I was surprised to measure 215
read and 72 write IOPS on the clean new filesystem. Sequential writes
ran as expected at
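The kind of aio-based random-I/O job described might look like the
sketch below (all parameters are assumptions); note that fio's libaio
engine defaults to buffered I/O (--direct=0), which by itself can
depress measured IOPS, as the direct-I/O follow-up elsewhere in the
thread suggests:
  $ fio --name=iops-test --filename=/mnt/test/file --size=4G \
        --rw=randrw --rwmixread=75 --bs=4k --ioengine=libaio \
        --iodepth=16 --runtime=60 --time_based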
On Mon, Feb 27, 2017 at 11:40:31AM +0800, Qu Wenruo wrote:
>
>
> At 02/24/2017 10:32 AM, Lakshmipathi.G wrote:
> >Hi.
> >
> >I tried to create a list of corruption test scenarios for the scrubbing
> >process with RAID5.
> >Here's the list:
>
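One scenario of the kind such a list would contain can be sketched as
follows, on a three-device RAID5 filesystem built from loop devices (the
offsets and sizes are arbitrary assumptions):
  # mkfs.btrfs -f -d raid5 -m raid5 /dev/loop0 /dev/loop1 /dev/loop2
  # mount /dev/loop0 /mnt; dd if=/dev/urandom of=/mnt/f bs=1M count=64; umount /mnt
  # overwrite part of one device's data stripe, then scrub in the foreground
  # dd if=/dev/zero of=/dev/loop1 bs=4K count=16 seek=16384 conv=notrunc
  # mount /dev/loop0 /mnt; btrfs scrub start -B /mnt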