> -Original Message-
> From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-
> ow...@vger.kernel.org] On Behalf Of Sebastian 'gonX' Jensen
> Sent: Tuesday, April 13, 2010 2:22 AM
> To: linux-btrfs
> Subject: Regarding full drives
>
> I'm by no means a Linux guru... but btrfs-show sh
Backstory:
I had a FakeRAID setup with two SATA drives. The storage controller on my
motherboard seems to be messed up, so I popped in a PCI SATA controller and
connected the two SATA drives to the controller.
Problem:
I already had two identical (thanks to the FakeRAID) btrfs partitions on two
Hi all,
It seems like the memory barrier is not required in cache_block_group(). I
am looking at kernel 2.6.34-rc2.
cache_block_group(struct btrfs_block_group_cache *cache)
{
	smp_mb();
	if (cache->cached != BTRFS_CACHE_NO)
		return 0;
}
This function is called from btrfs_alloc_logged_file_extent
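For background, a reader-side smp_mb() is only meaningful when it pairs with a
matching barrier (or a release store) on the side that updates cache->cached.
A minimal user-space analogue of that pairing, using C11 atomics and
hypothetical names rather than the btrfs code:

/*
 * Illustrative sketch only: a user-space analogue of reader/writer barrier
 * pairing. The names cache_state and cache_data are made up and are not
 * btrfs structures.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int cache_state;  /* 0 ~ BTRFS_CACHE_NO, 1 ~ "finished" */
static int cache_data;          /* data published by the caching side */

static void caching_side(void)
{
	cache_data = 42;                        /* fill in the cached data */
	atomic_store_explicit(&cache_state, 1,  /* publish it; pairs with the  */
			      memory_order_release); /* acquire load in the reader */
}

static void reading_side(void)
{
	if (atomic_load_explicit(&cache_state, memory_order_acquire) == 1)
		printf("cached data: %d\n", cache_data); /* ordered after the load */
}

int main(void)
{
	caching_side();
	reading_side();
	return 0;
}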
On Mon, Apr 19, 2010 at 10:48 PM, Josef Bacik wrote:
> On Mon, Apr 19, 2010 at 10:46:12PM +0800, Yan, Zheng wrote:
>> On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik wrote:
>> > The purpose of maybe_allocate_chunk was that there is no way to know if
>> > some
>> > other CPU is currently trying to a
On Mon, Apr 19, 2010 at 10:46:12PM +0800, Yan, Zheng wrote:
> On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik wrote:
> > The purpose of maybe_allocate_chunk was that there is no way to know if some
> > other CPU is currently trying to allocate a chunk for the given space info.
> > We
> > could have
On Mon, Apr 19, 2010 at 9:57 PM, Josef Bacik wrote:
> The purpose of maybe_allocate_chunk was that there is no way to know if some
> other CPU is currently trying to allocate a chunk for the given space info.
> We
> could have two CPUs come into do_chunk_alloc at relatively the same time and
>
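The race described here is the usual check-then-allocate pattern: two CPUs can
both decide a new chunk is needed and both try to create one. A small
user-space sketch of that pattern, and of how a mutex plus a re-check under the
lock serializes it (pthreads, invented names, not the btrfs code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Invented stand-ins for the per-space-info state; not btrfs structures. */
static pthread_mutex_t chunk_mutex = PTHREAD_MUTEX_INITIALIZER;
static _Atomic int space_full = 1;      /* pretend the space is exhausted */
static int chunks_allocated;            /* how many chunks were created */

static void do_chunk_alloc_sketch(void)
{
	if (!space_full)                /* cheap unlocked check */
		return;

	pthread_mutex_lock(&chunk_mutex);
	if (space_full) {               /* re-check under the lock */
		chunks_allocated++;     /* "create" exactly one new chunk */
		space_full = 0;
	}
	pthread_mutex_unlock(&chunk_mutex);
}

static void *worker(void *arg)
{
	(void)arg;
	do_chunk_alloc_sketch();
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Both threads entered the allocation path, but only one chunk exists. */
	printf("chunks allocated: %d\n", chunks_allocated);
	return 0;
}

Whether the existing fs_info->chunk_mutex already provides this serialization,
or whether the separate maybe_allocate_chunk step is still needed, is what the
replies above are debating.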
On Mon, Apr 19, 2010 at 06:45:44PM +0800, Yan, Zheng wrote:
> We already have fs_info->chunk_mutex to avoid concurrent
> chunk creation.
>
> Signed-off-by: Yan Zheng
>
> ---
> diff -urp 2/fs/btrfs/ctree.h 3/fs/btrfs/ctree.h
> --- 2/fs/btrfs/ctree.h	2010-04-18 08:12:22.086699485 +0800
> ++
Besides simplifying the code, this change makes sure all metadata
reservations for normal metadata operations are released after
committing the transaction.
Signed-off-by: Yan Zheng
---
diff -urp 5/fs/btrfs/ctree.h 6/fs/btrfs/ctree.h
--- 5/fs/btrfs/ctree.h 2010-04-19 16:45:19.528217522 +0800
+++ 6/fs/b
Reserve metadata space for extent tree, checksum tree and root tree
Signed-off-by: Yan Zheng
---
diff -urp 8/fs/btrfs/ctree.h 9/fs/btrfs/ctree.h
--- 8/fs/btrfs/ctree.h 2010-04-18 10:26:38.327697818 +0800
+++ 9/fs/btrfs/ctree.h 2010-04-18 10:30:01.883697869 +0800
@@ -682,21 +682,15 @@ struct bt
All code in init_btrfs_i can be moved into btrfs_alloc_inode()
Signed-off-by: Yan Zheng
---
diff -urp 4/fs/btrfs/inode.c 5/fs/btrfs/inode.c
--- 4/fs/btrfs/inode.c 2010-04-18 08:13:48.183698829 +0800
+++ 5/fs/btrfs/inode.c 2010-04-18 10:59:07.534719917 +0800
@@ -3595,40 +3595,10 @@ again:
Introducing contexts for metadata reservation has two major
advantages. First, it makes metadata reservation more traceable.
Second, freed space can be reclaimed and re-added to the context
itself after the transaction is committed.
Signed-off-by: Yan Zheng
---
diff -urp 5/fs/btrfs/ctree.c 6/fs/btrfs/ctree.c
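Roughly, a reservation context ties reserved bytes to the operation that
reserved them, so the reservation is traceable, and space freed by that
operation can be handed back to the same context and reused before the
transaction commits. A hypothetical plain-C illustration of that shape
(invented names, not the actual structures from this series):

#include <stdio.h>

/* Hypothetical reservation context; field and function names are invented
 * for illustration and are not the btrfs API. */
struct reserve_ctx {
	const char *owner;      /* who holds the reservation (traceability) */
	long long reserved;     /* bytes reserved but not yet used */
	long long freed;        /* bytes freed back, reusable by this context */
};

static long long global_free = 1 << 20; /* pretend pool of free metadata space */

static int ctx_reserve(struct reserve_ctx *ctx, long long bytes)
{
	/* Reuse space this context already got back before dipping into the pool. */
	long long from_freed = bytes < ctx->freed ? bytes : ctx->freed;

	ctx->freed -= from_freed;
	bytes -= from_freed;

	if (bytes > global_free)
		return -1;                      /* would be -ENOSPC in the kernel */
	global_free -= bytes;
	ctx->reserved += from_freed + bytes;
	return 0;
}

static void ctx_free(struct reserve_ctx *ctx, long long bytes)
{
	ctx->reserved -= bytes;
	ctx->freed += bytes;            /* reclaimable by the same context */
}

static void ctx_release(struct reserve_ctx *ctx)
{
	global_free += ctx->reserved + ctx->freed; /* return everything at commit */
	ctx->reserved = ctx->freed = 0;
}

int main(void)
{
	struct reserve_ctx ctx = { .owner = "delalloc", .reserved = 0, .freed = 0 };

	ctx_reserve(&ctx, 4096);
	ctx_free(&ctx, 4096);
	ctx_reserve(&ctx, 4096);        /* satisfied from ctx.freed, no pool usage */
	ctx_release(&ctx);
	printf("pool back to %lld bytes\n", global_free);
	return 0;
}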
We already have fs_info->chunk_mutex to avoid concurrent
chunk creation.
Signed-off-by: Yan Zheng
---
diff -urp 2/fs/btrfs/ctree.h 3/fs/btrfs/ctree.h
--- 2/fs/btrfs/ctree.h 2010-04-18 08:12:22.086699485 +0800
+++ 3/fs/btrfs/ctree.h 2010-04-18 08:13:15.457699211 +0800
@@ -700,9 +700,7 @@ struct
The size of reserved space is stored in space_info. If block groups
of different RAID types are linked to separate space_info structures,
changing the allocation profile will corrupt the reserved space accounting.
Signed-off-by: Yan Zheng
---
diff -urp 1/fs/btrfs/ctree.h 2/fs/btrfs/ctree.h
--- 1/fs/btrfs/ctree.h
Shrinking delayed-allocation space in a synchronous manner is more
controllable than flushing all delayed-allocation space in an async
thread.
Signed-off-by: Yan Zheng
---
diff -urp 3/fs/btrfs/ctree.h 4/fs/btrfs/ctree.h
--- 3/fs/btrfs/ctree.h 2010-04-18 08:13:15.457699211 +0800
+++ 4/fs/btrfs/ctree.h
Pre-allocate space for data relocation. This can detect the ENOSPC
condition caused by fragmentation of free space.
Signed-off-by: Yan Zheng
---
diff -urp 11/fs/btrfs/ctree.h 12/fs/btrfs/ctree.h
--- 11/fs/btrfs/ctree.h 2010-04-18 10:50:26.565702000 +0800
+++ 12/fs/btrfs/ctree.h 2010-04-18 10:55:07.7
Introduce metadata reservation context for delayed allocation and
update various related functions.
This patch also introduces the EXTENT_FIRST_DELALLOC control bit for
set/clear_extent_bit. It tells set/clear_bit_hook whether they are
processing the first extent_state with the EXTENT_DELALLOC bit
set. Th
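As described above, EXTENT_FIRST_DELALLOC is a control bit that tells the
set/clear hooks whether this is the first extent_state of a delalloc range,
so per-range accounting happens only once even if the range is split across
several states. A tiny sketch of one way such a control bit can work (invented
names and behaviour, not the btrfs implementation):

#include <stdio.h>

/* Invented flag values and behaviour, purely illustrative. */
#define SK_DELALLOC        (1u << 0)   /* state bit that is actually stored */
#define SK_FIRST_DELALLOC  (1u << 1)   /* control bit: first state of the range */

static long outstanding_ranges;        /* what the hook is accounting */

static void set_bit_hook_sketch(unsigned int *state_bits, unsigned int bits)
{
	/* Account the range once, only for the first extent_state in it. */
	if ((bits & SK_DELALLOC) && (bits & SK_FIRST_DELALLOC))
		outstanding_ranges++;

	/* The control bit is consumed here; only real state bits are stored. */
	*state_bits |= bits & ~SK_FIRST_DELALLOC;
}

int main(void)
{
	unsigned int first = 0, second = 0;

	set_bit_hook_sketch(&first, SK_DELALLOC | SK_FIRST_DELALLOC);
	set_bit_hook_sketch(&second, SK_DELALLOC);      /* same range, split state */
	printf("ranges accounted: %ld\n", outstanding_ranges);  /* prints 1 */
	return 0;
}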
Reserve metadata space for handling orphan inodes
Signed-off-by: Yan Zheng
---
diff -urp 9/fs/btrfs/btrfs_inode.h 10/fs/btrfs/btrfs_inode.h
--- 9/fs/btrfs/btrfs_inode.h	2010-04-18 10:26:38.326701000 +0800
+++ 10/fs/btrfs/btrfs_inode.h 2010-04-18 10:50:26.564697845 +0800
@@ -151,6 +151,7 @@
Previous patches make the allocator return -ENOSPC if there is
no unreserved free metadata space. This patch updates the tree log
code and various other places to propagate/handle the ENOSPC error.
Signed-off-by: Yan Zheng
---
diff -urp 10/fs/btrfs/disk-io.c 11/fs/btrfs/disk-io.c
--- 10/fs/btrfs/disk-io