When the end of an extent exceeds the end of the specified range,
the extent will be accidentally truncated.
Signed-off-by: Li Zefan
---
fs/btrfs/free-space-cache.c |    9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
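For illustration, a minimal userspace sketch of the clamping this fix calls for; the struct and helper here are hypothetical stand-ins, not the btrfs code. When an extent runs past the end of the requested range, only the overlapping part may be discarded, and the extent itself keeps its real length:

#include <stdint.h>

/* Hypothetical model of a free-space extent. */
struct extent {
        uint64_t start;
        uint64_t bytes;
};

/* Return how many bytes of the extent may be discarded: the overlap
 * with [range_start, range_end), never the whole extent when it
 * spills past the end of the range. */
static uint64_t trim_len(const struct extent *e, uint64_t range_start,
                         uint64_t range_end)
{
        uint64_t start = e->start > range_start ? e->start : range_start;
        uint64_t end = e->start + e->bytes;

        if (end > range_end)
                end = range_end;  /* clamp to the range, not the extent */
        return end > start ? end - start : 0;
}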
We're taking a free space extent out of the free space cache, trimming
it and then putting it back into the cache.
However, for an extent that is smaller than the specified minimum length,
it's taken out but won't be put back, which causes a space leak.
Signed-off-by: Li Zefan
---
Unfortunately I
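A sketch of the leak-free flow under assumed helper names (cache_remove/cache_add/discard_range are stand-ins, not the free-space-cache API): the point is that an extent below minlen goes straight back into the cache instead of being dropped.

#include <stdint.h>

/* Hypothetical cache API, declared only for the sketch. */
struct cache;
int cache_remove(struct cache *c, uint64_t start, uint64_t bytes);
int cache_add(struct cache *c, uint64_t start, uint64_t bytes);
int discard_range(uint64_t start, uint64_t bytes);

static int trim_one_extent(struct cache *c, uint64_t start,
                           uint64_t bytes, uint64_t minlen)
{
        int ret = cache_remove(c, start, bytes);

        if (ret)
                return ret;
        if (bytes < minlen)                        /* too small to trim... */
                return cache_add(c, start, bytes); /* ...put it back: no leak */
        ret = discard_range(start, bytes);
        if (ret)
                return ret;
        return cache_add(c, start, bytes);         /* trimmed space is free again */
}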
On Mon, Jun 20, 2011 at 04:15:37PM -0400, Christoph Hellwig wrote:
> i_alloc_sem is a rather special rw_semaphore. It's the last one that may
> be released by a non-owner, and its write side is always mirrored by
> real exclusion. Its intended use is to wait for all pending direct I/O
> requests to finish before starting a truncate.
Hello,
a backtrace from a machine running a Btrfs RAID0 array with two disks is
attached. I've never seen this bug before. Please let me know if you need any
further info/experiments.
Andrej
[ 2040.038677] ------------[ cut here ]------------
[ 2040.038696] kernel BUG at fs/btrfs/extent-tree.c
* Initialize ret in btrfs_csum_file_block
* Do not abort when xattr is not supported in the source directory
* Remove size limitation of 256M
* Alloc data chunk in a smaller size (8M) to make btrfs image smaller
* Let user specify the btrfs image name
Depends on the patch below from the Samsung guys:
http:
(2011/06/21 9:40), Chris Mason wrote:
> Excerpts from David Sterba's message of 2011-06-20 20:24:35 -0400:
>> On Mon, Jun 20, 2011 at 08:41:39AM +0900, Tsutomu Itoh wrote:
>>> (2011/06/19 13:34), Tsutomu Itoh wrote:
> I've fixed this up by moving the delayed metadata run down into the
> snapshot creation code, please take a look.
Excerpts from David Sterba's message of 2011-06-20 20:24:35 -0400:
> On Mon, Jun 20, 2011 at 08:41:39AM +0900, Tsutomu Itoh wrote:
> > (2011/06/19 13:34), Tsutomu Itoh wrote:
> > >> I've fixed this up by moving the delayed metadata run down into the
> > >> snapshot creation code, please take a look
On Mon, Jun 20, 2011 at 08:41:39AM +0900, Tsutomu Itoh wrote:
> (2011/06/19 13:34), Tsutomu Itoh wrote:
> >> I've fixed this up by moving the delayed metadata run down into the
> >> snapshot creation code, please take a look. If nobody objects I'll have
> >> this in the pull I send to Linus this w
On 06/20/2011 05:51 PM, Henning Rohlfs wrote:
Hello,
I've migrated my system to btrfs (raid1) a few months ago. Since then
the performance has been pretty bad, but recently it's gotten
unbearable: a simple sync called while the system is idle can take 20 up
to 60 seconds. Creating or deleting files often has several seconds latency, too.
On Mon, Jun 20, 2011 at 06:12:10PM +0800, Miao Xie wrote:
> From 457f39393b2e3d475fbba029b90b6a4e17b94d43 Mon Sep 17 00:00:00 2001
> From: Miao Xie
> Date: Mon, 20 Jun 2011 17:21:51 +0800
> Subject: [PATCH] btrfs: fix inconsonant inode information
>
> When iputting the inode, we may leave the de
On Wed, Jun 15, 2011 at 07:41:37AM -0400, Chris Mason wrote:
> There is definitely a window where two procs can be inside
> create_snapshot() at the same time in the same transaction.
I trust you on that. I was trying to follow the callgraph from
btrfs_start_transaction called from create_snapshot
Hi Miao,
Miao Xie wrote:
Hi, Jim
Could you test the attached patch for me?
I have done some quick tests, and it worked well. But I'm not sure if it can fix
the bug you reported or not, so I need your help!
So far I haven't been able to reproduce with your patch
applied. I'd like to test for a
On Mon, Jun 20, 2011 at 02:29:24PM -0700, Joel Becker wrote:
> Oh god you're making the world scary. Are you guaranteeing that
> all allocation changes are locked out by the time we get into
> file_aio_write() and file_aio_read()? This is not obvious to me.
I have no idea how ocfs2's inter
On Mon, Jun 20, 2011 at 02:32:03PM -0700, Joel Becker wrote:
> Are we guaranteed that all allocation changes are locked out by
> i_dio_count>0? I don't think we are. The ocfs2 code very strongly
> assumes the state of a file's allocation when it holds i_alloc_sem. I
> feel like we lose tha
Hello,
I've migrated my system to btrfs (raid1) a few months ago. Since then
the performance has been pretty bad, but recently it's gotten
unbearable: a simple sync called while the system is idle can take 20 up
to 60 seconds. Creating or deleting files often has several seconds
latency, too.
On Mon, Jun 20, 2011 at 04:15:39PM -0400, Christoph Hellwig wrote:
> Maintain i_dio_count for all filesystems, not just those using DIO_LOCKING.
> This allows these filesystems to also protect truncate against direct I/O requests
> by using common code. Right now the only non-DIO_LOCKING filesystem that
On Mon, Jun 20, 2011 at 04:15:33PM -0400, Christoph Hellwig wrote:
> This series removes it in favour of a simpler counter scheme, thus getting
> rid of the rw_semaphore non-owner APIs as requested by Thomas, while at the
> same time shrinking the size of struct inode by 160 bytes on 64-bit systems.
Add a new rw_semaphore to protect bmap against truncate. Previously
i_alloc_sem was abused for this, but it's going away in this series.
Signed-off-by: Christoph Hellwig
Index: linux-2.6/fs/fat/inode.c
===================================================================
--- linux-2.6.orig/fs/fat/inode.c
Add a new rw_semaphore to protect page_mkwrite against truncate. Previously
i_alloc_sem was abused for this, but it's going away in this series.
Signed-off-by: Christoph Hellwig
Index: linux-2.6/fs/ext4/inode.c
===================================================================
--- linux-2.6.orig/fs/ext4/inode.c
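Both this patch and the fat/bmap one above use the same shared/exclusive pattern. A userspace model with a pthreads rwlock, where lookup() and free_blocks_past() are hypothetical stand-ins for the filesystem internals:

#include <pthread.h>

static pthread_rwlock_t map_lock = PTHREAD_RWLOCK_INITIALIZER;

extern long lookup(long block);              /* stand-in for the bmap lookup */
extern void free_blocks_past(long new_size); /* stand-in for truncate */

long bmap_model(long block)
{
        long phys;

        pthread_rwlock_rdlock(&map_lock);    /* shared: readers can overlap */
        phys = lookup(block);
        pthread_rwlock_unlock(&map_lock);
        return phys;
}

void truncate_model(long new_size)
{
        pthread_rwlock_wrlock(&map_lock);    /* exclusive vs. every reader */
        free_blocks_past(new_size);
        pthread_rwlock_unlock(&map_lock);
}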
Maintain i_dio_count for all filesystems, not just those using DIO_LOCKING.
This allows these filesystems to also protect truncate against direct I/O requests
by using common code. Right now the only non-DIO_LOCKING filesystem that
appears to do so is XFS, which uses an opencoded variant of the i_dio_count scheme.
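A sketch of the split this describes; the flag name mirrors the kernel's DIO_LOCKING, while the helpers are illustrative stand-ins. The count is taken on every path, the inode lock only on locking filesystems:

#define DIO_LOCKING 0x01       /* mirrors the kernel flag; rest is a model */

extern void take_inode_lock(void);
extern void drop_inode_lock(void);
extern void dio_count_inc(void);
extern void dio_count_dec(void);
extern int submit_dio(void);

int do_direct_io(unsigned int flags)
{
        int ret;

        if (flags & DIO_LOCKING)
                take_inode_lock();    /* only DIO_LOCKING filesystems */
        dio_count_inc();              /* but every filesystem counts */
        ret = submit_dio();
        dio_count_dec();
        if (flags & DIO_LOCKING)
                drop_inode_lock();
        return ret;
}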
Let filesystems handle waiting for direct I/O requests themselves instead
of doing it beforehand. This means filesystem-specific locks that prevent
new dio references from appearing can be held. This is important to allow
generalizing i_dio_count to non-DIO_LOCKING filesystems.
Signed-off-by: Christoph Hellwig
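Why the ordering matters, as a sketch with all names hypothetical: with the filesystem's own lock held across the wait, no new dio reference can appear between the drain and the truncate.

extern void fs_lock(void);              /* filesystem-specific lock */
extern void fs_unlock(void);
extern void dio_wait_all(void);         /* drain in-flight direct I/O */
extern void do_truncate_blocks(long new_size);

void fs_setsize(long new_size)
{
        fs_lock();                      /* blocks new dio submitters */
        dio_wait_all();                 /* in-flight requests finish */
        do_truncate_blocks(new_size);   /* safe: no dio can be running */
        fs_unlock();
}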
i_alloc_sem is a rather special rw_semaphore. It's the last one that may
be released by a non-owner, and its write side is always mirrored by
real exclusion. Its intended use is to wait for all pending direct I/O
requests to finish before starting a truncate.
Replace it with a hand-grown construct:
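A minimal userspace model of such a construct (names are illustrative, and the kernel sleeps on a waitqueue where this model spins): direct I/O holds a per-inode count, and truncate waits for it to drain. A plain counter has no lock owner, which is exactly what makes the non-owner release pattern unnecessary.

#include <stdatomic.h>

struct inode_model {
        atomic_int dio_count;      /* in-flight direct I/O requests */
};

static void dio_begin(struct inode_model *i)
{
        atomic_fetch_add(&i->dio_count, 1);
}

static void dio_end(struct inode_model *i)  /* may run in another thread */
{
        atomic_fetch_sub(&i->dio_count, 1);
}

static void dio_wait(struct inode_model *i)
{
        /* truncate blocks here until every pending request finishes */
        while (atomic_load(&i->dio_count) > 0)
                ;
}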
Reject zero-sized reads as soon as we know our I/O length, and don't
bother with locks or allocations that might have to be cleaned up
otherwise.
Signed-off-by: Christoph Hellwig
Index: linux-2.6/fs/direct-io.c
===================================================================
--- linux-2.6.orig/fs/direct-io.c
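The shape of the bailout, sketched in userspace; pread() stands in for the real submission path:

#include <sys/types.h>
#include <unistd.h>

ssize_t direct_read_model(int fd, void *buf, size_t len, off_t off)
{
        if (len == 0)
                return 0;  /* nothing to do: skip locks and allocations */
        /* ... only now take locks, pin pages, build and submit bios ... */
        return pread(fd, buf, len, off);
}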
i_alloc_sem has always been a bit of an odd "lock". It's the only remaining
rw_semaphore that can be released by a different thread than the one that
locked it, and its use case in the core direct I/O code is more like a
counter given that the writers already have external serialization.
This series removes it in favour of a simpler counter scheme, thus getting
rid of the rw_semaphore non-owner APIs as requested by Thomas, while at the
same time shrinking the size of struct inode by 160 bytes on 64-bit systems.
Wait for all direct I/O requests to finish before performing a truncate.
Signed-off-by: Christoph Hellwig
Index: linux-2.6/fs/btrfs/inode.c
===================================================================
--- linux-2.6.orig/fs/btrfs/inode.c 2011-06-11 12:58:46.615017504 +0200
+++ linux-2.6/fs/btrfs/inode.c
Now that the last user is gone these can be removed.
Signed-off-by: Christoph Hellwig
Index: linux-2.6/include/linux/rwsem.h
===================================================================
--- linux-2.6.orig/include/linux/rwsem.h 2011-06-20 14:58:15.449148809 +0200
+++ linux-2.6/include/linux/rwsem.h
First, we can sometimes free the state we're merging, which means anybody who
calls merge_state() may have the state it passed in freed. This is
problematic because we could end up caching the state, which makes caching
useless as the state will no longer be part of the tree. So instead of free
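A sketch of the hazard; the types and the single-neighbor merge are simplified stand-ins for the extent_state code. If merging frees the node the caller passed in, any cached pointer to it becomes a use-after-free, so the survivor of the merge is what must be cached:

#include <stdlib.h>

struct state {
        unsigned long start, end;
        struct state *prev;       /* left neighbor in the tree */
};

static struct state *merge_state(struct state *s)
{
        struct state *left = s->prev;

        if (left && left->end + 1 == s->start) {
                left->end = s->end;  /* absorb s into its neighbor */
                free(s);             /* s is gone: caching it would be
                                      * a use-after-free */
                return left;         /* callers must cache the survivor */
        }
        return s;
}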
When doing DIO tracing I noticed we were doing a ton of allocations, a lot of
the time for extent_states. Some of the time we don't even use the prealloc'ed
extent_state; it just gets freed up. So instead create a per-cpu cache like
the radix tree stuff. So we will check to see if our per-cpu
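A userspace model of the per-cpu idea using a per-thread one-slot cache; the kernel version would use real per-cpu data, and extent_state here is just a stand-in struct. Like the radix tree preload it imitates, the win is that a preallocated node that turns out to be unneeded is kept for the next caller instead of bouncing through the allocator.

#include <stdlib.h>

struct extent_state {
        unsigned long start, end;
};

static _Thread_local struct extent_state *spare;

static struct extent_state *alloc_extent_state(void)
{
        struct extent_state *s = spare;

        if (s) {                   /* fast path: no allocator call */
                spare = NULL;
                return s;
        }
        return malloc(sizeof(*s));
}

static void free_extent_state(struct extent_state *s)
{
        if (!spare)                /* refill the one-slot cache first */
                spare = s;
        else
                free(s);
}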
On Mon, Jun 20, 2011 at 10:22 AM, Hugo Mills wrote:
> On Mon, Jun 20, 2011 at 07:17:22PM +0400, Proskurin Kirill wrote:
>> On 06/20/2011 06:34 PM, Helmut Hullen wrote:
>> >You wrote on 20.06.11:
>> >>What we have:
>> >>SL6 - kernel 2.6.32-131.2.1.el6.x86_64
>> >>mdadm RAID5 with 8 HDD - 27T partition.
On Mon, Jun 20, 2011 at 07:17:22PM +0400, Proskurin Kirill wrote:
> On 06/20/2011 06:34 PM, Helmut Hullen wrote:
> >You wrote on 20.06.11:
> >>What we have:
> >>SL6 - kernel 2.6.32-131.2.1.el6.x86_64
> >>mdadm RAID5 with 8 HDD - 27T partition.
> >
> >You should take a newer kernel. On my system I
On 06/20/2011 06:34 PM, Helmut Hullen wrote:
Hello, Proskurin,
You wrote on 20.06.11:
I'm new to btrfs and do some testing now.
What we have:
SL6 - kernel 2.6.32-131.2.1.el6.x86_64
mdadm RAID5 with 8 HDD - 27T partition.
You should take a newer kernel. On my system I needed 2.6.38.5 and
Hello, Proskurin,
You wrote on 20.06.11:
> I'm new to btrfs and do some testing now.
> What we have:
> SL6 - kernel 2.6.32-131.2.1.el6.x86_64
> mdadm RAID5 with 8 HDD - 27T partition.
You should take a newer kernel. On my system I needed 2.6.38.5 and newer
for btrfs; older kernels lead to u
Hi,
On Mon, Jun 20, 2011 at 03:29:45PM +0400, Proskurin Kirill wrote:
> What we have:
> SL6 - kernel 2.6.32-131.2.1.el6.x86_64
> mdadm RAID5 with 8 HDD - 27T partition.
btw .32 is very old
> Mount options are "noatime,noacl,compress-force"
> I use scribe daemon to copy log files from 200 hosts to
Hello all.
I'm new to btrfs and do some testing now.
What we have:
SL6 - kernel 2.6.32-131.2.1.el6.x86_64
mdadm RAID5 with 8 HDD - 27T partition.
Mount options are "noatime,noacl,compress-force"
I use scribe daemon to copy log files from 200 hosts to that partition
for stress testing.
But I f
Hi, Jim
Could you test the attached patch for me?
I have done some quick tests, and it worked well. But I'm not sure if it can fix
the bug you reported or not, so I need your help!
Thanks
Miao
On Fri, 17 Jun 2011 10:10:31 -0600, Jim Schutt wrote:
> Hi,
>
> I've hit this delayed-inode BUG several times