Previous patches make the allocator return -ENOSPC if there is
no unreserved free metadata space. This patch updates the tree log
code and various other places to propagate/handle the ENOSPC error.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
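The propagation pattern described above can be sketched in plain C; reserve_metadata_space and log_one_extent are hypothetical stand-ins for illustration, not the actual btrfs functions:

```c
#include <errno.h>

/* Hypothetical stand-in for the allocator: fail with -ENOSPC when no
 * unreserved free metadata space remains. */
static int reserve_metadata_space(long free_bytes, long needed)
{
	if (needed > free_bytes)
		return -ENOSPC;
	return 0;
}

/* Callers propagate the error up instead of ignoring it, in the same
 * spirit as the tree log updates in this patch. */
static int log_one_extent(long free_bytes)
{
	int ret = reserve_metadata_space(free_bytes, 4096);
	if (ret)
		return ret;	/* pass -ENOSPC to our caller */
	/* ... do the actual logging ... */
	return 0;
}
```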
diff -urp 10/fs/btrfs/disk-io.c 11/fs/btrfs/disk-io.c
reserve metadata space for handling orphan inodes
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
diff -urp 9/fs/btrfs/btrfs_inode.h 10/fs/btrfs/btrfs_inode.h
--- 9/fs/btrfs/btrfs_inode.h 2010-04-18 10:26:38.326701000 +0800
+++ 10/fs/btrfs/btrfs_inode.h 2010-04-18 10:50:26.564697845 +0800
Introduce a metadata reservation context for delayed allocation and
update various related functions.
This patch also introduces the EXTENT_FIRST_DELALLOC control bit for
set/clear_extent_bit. It tells set/clear_bit_hook whether they
are processing the first extent_state with the EXTENT_DELALLOC bit
set.
Shrinking delayed allocation space in a synchronous manner is more
controllable than flushing it all in an async thread.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
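The control-bit idea can be illustrated with a small sketch; the bit values and helper names below are invented for illustration, not btrfs's definitions:

```c
/* Illustrative bit layout: EXTENT_FIRST_DELALLOC is a control bit the
 * caller passes to set/clear_extent_bit; it is consulted by the hooks
 * and stripped before any state is stored. */
#define EXTENT_DELALLOC        (1UL << 0)
#define EXTENT_FIRST_DELALLOC  (1UL << 1)

/* Hook-side check: is this the first extent_state with DELALLOC set? */
static int is_first_delalloc(unsigned long bits)
{
	return (bits & EXTENT_FIRST_DELALLOC) != 0;
}

/* The control bit never reaches the stored extent state. */
static unsigned long stored_bits(unsigned long bits)
{
	return bits & ~EXTENT_FIRST_DELALLOC;
}
```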
diff -urp 3/fs/btrfs/ctree.h 4/fs/btrfs/ctree.h
--- 3/fs/btrfs/ctree.h 2010-04-18 08:13:15.457699211 +0800
+++
The size of reserved space is stored in space_info. If block groups
of different raid types are linked to separate space_info, changing
allocation profile will corrupt reserved space accounting.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
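A minimal sketch of the accounting argument, with invented field and function names: if reserved bytes lived in per-raid-type structures, relinking block groups on a profile change would strand the reservation, so they need a single shared home.

```c
/* One shared space_info for a whole class of block groups (e.g. all
 * metadata, regardless of raid type), so bytes_reserved survives an
 * allocation profile change. */
struct space_info {
	long bytes_reserved;
};

static struct space_info meta_space;

static void reserve_bytes(long n) { meta_space.bytes_reserved += n; }
static void release_bytes(long n) { meta_space.bytes_reserved -= n; }
```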
diff -urp 1/fs/btrfs/ctree.h 2/fs/btrfs/ctree.h
---
We already have fs_info->chunk_mutex to avoid concurrent
chunk creation.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
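The serialization argument can be sketched with POSIX threads as a stand-in for the kernel mutex (names hypothetical):

```c
#include <pthread.h>

/* Single mutex serializing chunk creation, analogous to taking
 * fs_info->chunk_mutex around the allocation path. */
static pthread_mutex_t chunk_mutex = PTHREAD_MUTEX_INITIALIZER;
static int chunks_created;

static int create_chunk(void)
{
	pthread_mutex_lock(&chunk_mutex);
	/* Only one thread runs this section at a time, so two CPUs
	 * cannot create a chunk for the same space_info concurrently. */
	chunks_created++;
	pthread_mutex_unlock(&chunk_mutex);
	return 0;
}
```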
diff -urp 2/fs/btrfs/ctree.h 3/fs/btrfs/ctree.h
--- 2/fs/btrfs/ctree.h 2010-04-18 08:12:22.086699485 +0800
+++ 3/fs/btrfs/ctree.h 2010-04-18 08:13:15.457699211 +0800
@@
Introducing contexts for metadata reservation has two major
advantages. First, it makes metadata reservation more traceable.
Second, freed space can be reclaimed and re-added to the context
after the transaction commits.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
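The second advantage can be sketched in miniature; the struct and field names below are invented, not btrfs's:

```c
/* Per-caller metadata reservation context.  Tracking reservations per
 * context makes them traceable; space freed during a transaction is
 * parked in the context and re-added when the transaction commits. */
struct meta_reserve_ctx {
	long reserved;	/* bytes currently reserved */
	long freed;	/* bytes freed, reclaimable after commit */
};

static struct meta_reserve_ctx demo_ctx = { 100, 0 };

static void ctx_free_bytes(long n)
{
	demo_ctx.reserved -= n;
	demo_ctx.freed += n;
}

/* At transaction commit, freed bytes flow back into the reservation. */
static void ctx_commit(void)
{
	demo_ctx.reserved += demo_ctx.freed;
	demo_ctx.freed = 0;
}
```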
diff -urp
All code in init_btrfs_i can be moved into btrfs_alloc_inode().
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
diff -urp 4/fs/btrfs/inode.c 5/fs/btrfs/inode.c
--- 4/fs/btrfs/inode.c 2010-04-18 08:13:48.183698829 +0800
+++ 5/fs/btrfs/inode.c 2010-04-18 10:59:07.534719917 +0800
@@ -3595,40
On Mon, Apr 19, 2010 at 06:45:44PM +0800, Yan, Zheng wrote:
We already have fs_info->chunk_mutex to avoid concurrent
chunk creation.
Signed-off-by: Yan Zheng <zheng@oracle.com>
---
diff -urp 2/fs/btrfs/ctree.h 3/fs/btrfs/ctree.h
--- 2/fs/btrfs/ctree.h 2010-04-18
On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik jo...@redhat.com wrote:
The purpose of maybe_allocate_chunk was that there is no way to know if some
other CPU is currently trying to allocate a chunk for the given space info.
We could have two CPUs come into do_chunk_alloc at relatively the
On Mon, Apr 19, 2010 at 10:46:12PM +0800, Yan, Zheng wrote:
On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik jo...@redhat.com wrote:
The purpose of maybe_allocate_chunk was that there is no way to know if some
other CPU is currently trying to allocate a chunk for the given space info.
We
On Mon, Apr 19, 2010 at 10:48 PM, Josef Bacik jo...@redhat.com wrote:
On Mon, Apr 19, 2010 at 10:46:12PM +0800, Yan, Zheng wrote:
On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik jo...@redhat.com wrote:
The purpose of maybe_allocate_chunk was that there is no way to know if
some
other CPU is
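The race Josef describes, two CPUs entering do_chunk_alloc for the same space info at once, can be sketched with a C11 atomic flag; this is illustrative only, not the btrfs implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Flag marking an in-flight chunk allocation for a space_info. */
static atomic_flag chunk_allocating = ATOMIC_FLAG_INIT;

/* True only for the first caller; a second CPU arriving at the same
 * time sees the flag already set and backs off. */
static bool try_start_chunk_alloc(void)
{
	return !atomic_flag_test_and_set(&chunk_allocating);
}

static void finish_chunk_alloc(void)
{
	atomic_flag_clear(&chunk_allocating);
}
```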
Hi all,
It seems like the memory barrier is not required in cache_block_group. I
am looking at kernel 2.6.34-rc2.
static int cache_block_group(struct btrfs_block_group_cache *cache)
{
	smp_mb();
	if (cache->cached != BTRFS_CACHE_NO)
		return 0;
	/* ... */
}
This function is called from btrfs_alloc_logged_file_extent
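Whether the smp_mb() is needed depends on what it pairs with on the writer side; a full barrier on the reader alone does not create ordering by itself. The general pairing pattern can be sketched with C11 atomics (illustrative only, not the kernel code):

```c
#include <stdatomic.h>

/* Reader/writer barrier pairing in miniature.  The writer publishes
 * data and then sets a flag with release ordering; the reader checks
 * the flag with acquire ordering, so once it sees the flag it also
 * sees the data. */
static atomic_int cached = 0;	/* 0 plays the role of BTRFS_CACHE_NO */
static int data;

static void writer(void)
{
	data = 42;
	atomic_store_explicit(&cached, 1, memory_order_release);
}

static int reader(void)
{
	if (atomic_load_explicit(&cached, memory_order_acquire) == 0)
		return -1;	/* not cached yet */
	return data;	/* guaranteed to observe 42 */
}
```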
-----Original Message-----
From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Sebastian 'gonX' Jensen
Sent: Tuesday, April 13, 2010 2:22 AM
To: linux-btrfs
Subject: Regarding full drives
I'm by no means a Linux guru... but btrfs-show shows me