On Tue, 23 Jul 2013 13:22:18 +0800, Wang Shilong wrote:
+	if (btrfs_test_opt(root, RECOVERY))
+		seq_puts(seq, ",auto_recovery");
"recovery" without the "auto_".
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
On 07/23/2013 03:43 PM, Stefan Behrens wrote:
On Tue, 23 Jul 2013 13:22:18 +0800, Wang Shilong wrote:
+	if (btrfs_test_opt(root, RECOVERY))
+		seq_puts(seq, ",auto_recovery");
"recovery" without the "auto_"
Thanks, I will update the patch ^_^
Wang,
Hi,
I know that raid10 requires at least 4 drives, and I understand how it
works with an even number of equally-sized drives, e.g. 4 of them
(btrfs will stripe over the N/2 mirror pairs).
But I'm curious if it works well with mixed-sized drives when you have
4 drives. In my case I have:
Data, RAID10:
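For mixed-size drives, one way to reason about it is to simulate the allocator: each raid10 chunk-group needs four distinct devices, and btrfs tends to pick the devices with the most free space. The following is a hypothetical user-space sketch (not btrfs code; the unit-chunk greedy loop and function name are illustrative assumptions) that estimates usable raid10 capacity from a list of device sizes:

```c
/* Hypothetical sketch: estimate usable RAID10 capacity with mixed-size
 * drives. Assumption: each chunk-group consumes one unit from each of
 * the four devices with the most free space, yielding two usable units
 * (two mirror pairs, striped). This mirrors, loosely, the "pick the
 * emptiest devices" behaviour of the btrfs allocator. */
static unsigned long long raid10_usable(unsigned long long dev[], int n)
{
    unsigned long long usable = 0;

    for (;;) {
        /* sort device free space in descending order (tiny insertion
         * sort, fine for a sketch) */
        for (int i = 1; i < n; i++)
            for (int j = i; j > 0 && dev[j] > dev[j - 1]; j--) {
                unsigned long long t = dev[j];
                dev[j] = dev[j - 1];
                dev[j - 1] = t;
            }
        /* raid10 needs four devices that still have space */
        if (n < 4 || dev[3] == 0)
            break;
        for (int i = 0; i < 4; i++)
            dev[i] -= 1;   /* one chunk from each of the top four */
        usable += 2;       /* four raw chunks -> two usable */
    }
    return usable;
}
```

With four equal drives this degenerates to the usual "half the raw space"; with unequal drives the smallest device becomes the limit once it runs dry.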
May I ask why the decision to implement snapshotting through
subvolumes? I've been very curious about why the design wasn't to
simply allow snapshotting of any directory or file.
Greetings
Encountered this bug on linux-3.11-rc2
https://bugzilla.kernel.org/show_bug.cgi?id=60608
Best Regards
Hello btrfs people,
I am using btrfs to span across two SSDs at the moment. One is a 256GB
and the other is a 128GB. So as of now, I have the data in single form
and the metadata in a RAID1. I have heard that btrfs can adjust to some
degree for devices in a RAID array that vary in sizes due to
On Tue, Jul 23, 2013 at 07:32:36AM -0700, Curtis Shimamoto wrote:
I am using btrfs to span across two SSDs at the moment. One is a 256GB
and the other is a 128GB. So as of now, I have the data in single form
and the metadata in a RAID1. I have heard that btrfs can adjust to some
degree for
Hello,
For over a year now, I've been experimenting with stacked filesystems as a way
to save on resources. A basic OS layer is shared among Containers, each of
which stacks a layer with modifications on top of it. This approach means that
Containers share buffer cache and loaded
On Tue, Jul 23, 2013 at 9:47 AM, Rick van Rein r...@vanrein.org wrote:
Hello,
For over a year now, I've been experimenting with stacked filesystems as a
way to save on resources. A basic OS layer is shared among Containers, each
of which stacks a layer with modifications on top of it.
On Tue, Jul 23, 2013 at 10:32 AM, Curtis Shimamoto
sugar.and.scru...@gmail.com wrote:
[...]
Additionally, though not quite as much of a concern to me, the machine in
which these drives live is an Ivy Bridge Laptop, so there are actually
only two available SATA3 ports. The odd drive out at
Hello,
For over a year now, I've been experimenting with stacked filesystems
as a way to save on resources. A basic OS layer is shared among
Containers, each of which stacks a layer with modifications on top of
it. This approach means that Containers share buffer cache and
loaded
Now... since the snapshot's FS tree is a direct duplicate of the
original FS tree (actually, it's the same tree, but they look like
different things to the outside world), they share everything --
including things like inode numbers. This is OK within a subvolume,
because we have the
On Tue, Jul 23, 2013 at 07:47:41PM +0200, Gabriel de Perthuis wrote:
Now... since the snapshot's FS tree is a direct duplicate of the
original FS tree (actually, it's the same tree, but they look like
different things to the outside world), they share everything --
including things like
On Tue, 23 Jul 2013 21:30:13 CEST, Hugo Mills wrote:
On Tue, Jul 23, 2013 at 07:47:41PM +0200, Gabriel de Perthuis wrote:
Now... since the snapshot's FS tree is a direct duplicate of the
original FS tree (actually, it's the same tree, but they look like
different things to the outside
Why not just create a new dev_id on the destination snapshot of any
directory? That way the snapshot can share inodes with its source.
On Tue, Jul 23, 2013 at 2:30 PM, Hugo Mills h...@carfax.org.uk wrote:
On Tue, Jul 23, 2013 at 07:47:41PM +0200, Gabriel de Perthuis wrote:
Now... since the
I was hitting the BUG_ON() at the end of merge_reloc_roots() because we were
aborting the transaction at some point previously and then getting an error when
we tried to drop the reloc root. I fixed btrfs_drop_snapshot to re-add us to
the dead roots list if we failed, but this isn't the right
Hi Cwilu and Gabriel,
I wasn't aware that work was already being done. I actually imagined having to
defend what I brought up :-)
What you sent looks interesting and useful, especially the support in
userspace. I will investigate these tools!
Till then -- thanks!
-Rick
On Jul 23, 2013, at 1:43 PM, Jerome Haltom was...@cogito.cx wrote:
Why not just create the new dev_id on the destination snapshot of any
directory?
Right now, snapshots of subvolumes do not contain the contents of contained
subvolumes. Hmmm, that sounds horrid.
Subvolume A
File 1
Yeah. I was merely curious about the architecture limits that drove
the design this way, to begin with. Mostly because it seems odd. It
seems like the most obvious and most natural thing from the user's
perspective to do would just be able to reflink directories. Like
every decent source control
On Tue, Jul 23, 2013 at 06:39:57PM -0500, Jerome Haltom wrote:
Yeah. I was merely curious about the architecture limits that drove
the design this way, to begin with. Mostly because it seems odd. It
seems like the most obvious and most natural thing from the user's
perspective to do would just
On Jul 23, 2013, at 7:27 PM, Josef Bacik jba...@fusionio.com wrote:
Subvolumes are described as directories simply to make it easier to
understand.
Directories do not change the hierarchy within the file system itself; they are
simply items in the btree like anything else, they are not
simply items in the btree like anything else, they are not
I just noticed that the following command succeeds:
mount dev mnt -o thread_pool=-1
This is ridiculous; only a positive thread_pool value makes sense. This
patch adds sanity checks for it, and also catches the error of
ENOMEM if allocating memory fails.
Signed-off-by: Wang Shilong
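The validation the patch describes can be illustrated with a small user-space sketch (this is not the kernel code; `parse_thread_pool` is a hypothetical name, and the kernel would use its own option-matching helpers rather than strtol):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Hypothetical user-space sketch of the sanity check the patch adds:
 * thread_pool must parse as a well-formed, strictly positive integer. */
static int parse_thread_pool(const char *s, int *out)
{
    char *end;
    long v;

    errno = 0;
    v = strtol(s, &end, 10);
    if (errno || end == s || *end != '\0')
        return -EINVAL;          /* not a well-formed number */
    if (v <= 0 || v > INT_MAX)
        return -EINVAL;          /* only positive values make sense */
    *out = (int)v;
    return 0;
}
```

With a check like this, `thread_pool=-1` and `thread_pool=0` are rejected at mount-option parsing time instead of being silently accepted.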
Although an int is enough for subvolid most of the time, we should
ensure safety in theory.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Reviewed-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/super.c | 16 +++-
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git
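The point of the patch — subvolid should be handled as a full 64-bit value rather than an int — can be sketched in user space like this (hypothetical names; the kernel's actual parsing uses its own match/kstrto helpers):

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical user-space sketch: parse subvolid as a full 64-bit
 * unsigned value, so IDs above INT_MAX are not truncated. */
static int parse_subvolid(const char *s, unsigned long long *out)
{
    char *end;
    unsigned long long v;

    errno = 0;
    v = strtoull(s, &end, 10);
    if (errno || end == s || *end != '\0')
        return -1;               /* reject malformed input */
    *out = v;
    return 0;
}
```

A subvolume ID larger than 2^31 - 1 would silently wrap if parsed into an int; a 64-bit parse avoids that corner case.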
Some options are missing in btrfs_show_options(), this patch
adds them.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Reviewed-by: Miao Xie mi...@cn.fujitsu.com
---
V1->V2: s/auto_recovery/recovery (Thanks to Stefan)
---
fs/btrfs/super.c | 14 ++
1 file changed, 14
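The shape of a show-options handler — appending a ",name" string for each option bit that is set — can be mimicked in user space. This is only an illustrative sketch (the flag names, buffer-based output, and `show_options` signature here are assumptions; the kernel version writes to a seq_file with seq_puts):

```c
#include <string.h>

/* Illustrative option bits, not the kernel's actual values */
#define OPT_RECOVERY (1UL << 0)
#define OPT_DISCARD  (1UL << 1)

/* Hypothetical user-space sketch of a btrfs_show_options-style handler:
 * for each option that is set, append ",name" to the output. */
static void show_options(unsigned long opts, char *buf, size_t n)
{
    buf[0] = '\0';
    if (opts & OPT_RECOVERY)
        strncat(buf, ",recovery", n - strlen(buf) - 1);
    if (opts & OPT_DISCARD)
        strncat(buf, ",discard", n - strlen(buf) - 1);
}
```

An option missing from the handler simply never shows up in /proc/mounts, which is exactly the bug class this patch fixes by adding the missing entries.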
Hello,
Since the 3.7.x kernel series I have noticed a kernel BUG in btrfs.
I use:
Linux demo 3.9.11-dlj #1 SMP Tue Jul 23 04:45:02 CEST 2013 x86_64 AMD
FX(tm)-8150 Eight-Core Processor AuthenticAMD GNU/Linux.
I notice the BUG below about once per day.
After the BUG, the system stops running some processes, and can't