On Thu, Sep 19, 2013 at 08:37:07PM -0700, Darrick J. Wong wrote:
> When btrfs creates a bioset, we must also allocate the integrity data pool.
> Otherwise btrfs will crash when it tries to submit a bio to a checksumming
> disk:
>
> BUG: unable to handle kernel NULL pointer dereference at
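For context, a minimal sketch of the kind of fix being described, written against the block-layer API of that era (pool size, function name and error handling here are illustrative, not the exact patch): the bioset's integrity mempool has to be created alongside the bioset itself, otherwise bio_integrity_alloc() has nothing to allocate from when a bio is submitted to a DIF/DIX-capable disk.

/* Sketch only -- names and pool sizes are illustrative. */
#include <linux/bio.h>
#include <linux/init.h>

static struct bio_set *btrfs_bioset;

static int __init example_bioset_init(void)
{
	btrfs_bioset = bioset_create(BIO_POOL_SIZE, 0);
	if (!btrfs_bioset)
		return -ENOMEM;

	/*
	 * Without this, bio_integrity_alloc() trips over a NULL mempool
	 * when the bio goes to a checksumming disk.
	 */
	if (bioset_integrity_create(btrfs_bioset, BIO_POOL_SIZE)) {
		bioset_free(btrfs_bioset);
		btrfs_bioset = NULL;
		return -ENOMEM;
	}
	return 0;
}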
OK, that's clear.
Nice space simulator btw :-) you should add a link somewhere in the btrfs wiki...
Thanks
> Date: Thu, 26 Sep 2013 14:46:05 +0100
> From: h...@carfax.org.uk
> To: miaous...@hotmail.com
> CC: linux-btrfs@vger.kernel.org
> Subject: Re: [raidX vs
On Thu, Sep 26, 2013 at 02:55:38PM +, miaou sami wrote:
> OK, that's clear.
> Nice space simulator btw :-) you should add a link somewhere in the btrfs wiki...
There is one, linked from the first line of the relevant section in
the FAQ.
Hugo.
> Thanks
>
* Josef Bacik wrote:
> Btrfs needs a simple way to know if it needs to let go of its read lock on a
> rwsem. Introduce rwsem_is_contended to check whether there are any waiters on
> this rwsem currently. This is just a heuristic; it is meant to be light and
> not 100% accurate and cal
Thank you, it is quite clear now.
I guess that on multiple devices, raid0 vs single would be a matter of performance
vs ease of low-level hardware data recovery.
The wiki
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices says:
"When you have drives with differing sizes and
On Thu, Sep 26, 2013 at 01:40:57PM +, miaou sami wrote:
> Thank you, it is quite clear now.
>
>
> I guess that on multiple devices, raid0 vs single would be a matter of
> performance vs ease of low-level hardware data recovery.
>
>
> The wiki
> https://btrfs.wiki.kernel.org/index.php/Using_Bt
Btrfs needs a simple way to know if it needs to let go of its read lock on a
rwsem. Introduce rwsem_is_contended to check whether there are any waiters on
this rwsem currently. This is just a heuristic; it is meant to be light and not
100% accurate and called by somebody already holding on to
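A minimal sketch of what such a helper could look like (it mirrors the description above; the actual patch may differ in detail): it only peeks at the rwsem's wait list without taking wait_lock, so the answer can be stale -- cheap, but only a heuristic, aimed at a reader that already holds the lock and wants to know whether it should back off.

#include <linux/list.h>
#include <linux/rwsem.h>

/*
 * Heuristic: report whether anyone is queued behind the current
 * holder.  No locking, so the result may already be out of date by
 * the time the caller acts on it.
 */
static inline int rwsem_is_contended(struct rw_semaphore *sem)
{
	return !list_empty(&sem->wait_list);
}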
We can starve out the transaction commit with a bunch of caching threads all
running at the same time. This is because we will only drop the
extent_commit_sem if we need_resched(), which isn't likely to happen since we
will be reading a lot from the disk and so have already schedule()'ed plenty. Alex
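The shape of the fix this implies, sketched from the description (field and function names follow btrfs of that era, but treat the fragment as illustrative rather than the exact commit): inside the caching loop, drop extent_commit_sem not only when need_resched() fires but also when a waiter is queued on the rwsem, so the transaction commit can take the write lock.

	/* Inside the caching thread's walk over the extent tree: */
	if (need_resched() ||
	    rwsem_is_contended(&fs_info->extent_commit_sem)) {
		caching_ctl->progress = last;
		btrfs_release_path(path);
		up_read(&fs_info->extent_commit_sem);
		mutex_unlock(&caching_ctl->mutex);
		cond_resched();
		mutex_lock(&caching_ctl->mutex);
		down_read(&fs_info->extent_commit_sem);
		goto again;
	}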
* Josef Bacik wrote:
> On Fri, Sep 20, 2013 at 07:12:47AM +0200, Ingo Molnar wrote:
> >
> > * Josef Bacik wrote:
> >
> > > We can starve out the transaction commit with a bunch of caching threads
> > > all running at the same time. This is because we will only drop the
> > > extent_commit_
On Fri, Sep 20, 2013 at 07:12:47AM +0200, Ingo Molnar wrote:
>
> * Josef Bacik wrote:
>
> > We can starve out the transaction commit with a bunch of caching threads
> > all running at the same time. This is because we will only drop the
> > extent_commit_sem if we need_resched(), which isn't
On Thu, Sep 26, 2013 at 12:22:49PM +, miaou sami wrote:
> Hi btrfs guys,
>
> could someone explain to me the differences in mkfs.btrfs:
>
> - between -d raid0 and -d single
In RAID0, data is striped across all the devices, so the first 64k
of a file will go on device 1, the next 64k will
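To make the placement concrete, here is a toy userspace calculation of the round-robin layout described above (a 64KiB stripe size is assumed; real btrfs chunk geometry has more moving parts):

#include <stdio.h>

#define STRIPE_SIZE (64 * 1024ULL)

/* With N devices, stripe k of the data lands on device k % N. */
static int raid0_device_for(unsigned long long offset, int num_devices)
{
	return (int)((offset / STRIPE_SIZE) % num_devices);
}

int main(void)
{
	unsigned long long off;

	/* Two-device raid0: consecutive 64KiB chunks alternate 0,1,0,1. */
	for (off = 0; off < 4 * STRIPE_SIZE; off += STRIPE_SIZE)
		printf("offset %8llu -> device %d\n",
		       off, raid0_device_for(off, 2));
	return 0;
}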
Hi btrfs guys,
could someone explain to me the differences in mkfs.btrfs:
- between -d raid0 and -d single
- between -m raid1 and -m dup
- between -m raid0 and -m single
My understanding is that raidX should be used in the case of multiple devices and
single/dup should be used in the case of a single devi
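For reference, how those profiles are selected at mkfs time; the option names come straight from the question above, and the device paths are placeholders:

# Single device: unstriped data, two metadata copies (a common choice)
mkfs.btrfs -d single -m dup /dev/sdX

# Two devices: striped data, mirrored metadata
mkfs.btrfs -d raid0 -m raid1 /dev/sdX /dev/sdY

# Two devices, but data allocated chunk-by-chunk instead of striped
mkfs.btrfs -d single -m raid1 /dev/sdX /dev/sdY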
On Wed, 25 Sep 2013 10:11:25 -0400, Josef Bacik wrote:
> On Wed, Sep 25, 2013 at 09:47:44PM +0800, Miao Xie wrote:
>> When doing space balance and subvolume destroy at the same time, we met
>> the following oops:
>>
>> kernel BUG at fs/btrfs/relocation.c:2247!
>> RIP: 0010: [] prepare_to_merge+0x15