On Tue, Mar 18, 2014 at 09:02:07AM +, Duncan wrote:
First just a note that you hijacked Mr Manana's patch thread. Replying
(...)
I did, I use mutt, I know about In-Reply-To, I was tired, I screwed up,
sorry, and there was no undo :)
Since you don't have to worry about the data I'd suggest
The help string of btrfs dev scan is inconsistent with the man page,
which lacks the fact that -d|--all-device conflicts with a device argument.
This patch fixes the description.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
cmds-device.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git
Fix memleak in get_raid56_used().
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
cmds-fi-disk_usage.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/cmds-fi-disk_usage.c b/cmds-fi-disk_usage.c
index a3b06be..2bd591d 100644
--- a/cmds-fi-disk_usage.c
+++ b/cmds-fi-disk_usage.c
@@
The man page of btrfs has some minor problems, such as:
1. Duplicate entry for filesystem df
2. Inconsistent parameters
3. Unpaired parens
4. Missing options
5. Wrong parameters
This patch fixes these minor bugs.
Signed-off-by: Qu Wenruo quwen...@cn.fujitsu.com
---
man/btrfs.8.in | 184
On Mar 19, 2014, at 12:09 AM, Marc MERLIN m...@merlins.org wrote:
7) you can remove a drive from an array, add files, and then if you plug
the drive back in, it apparently gets automatically sucked back into the array.
There is no rebuild that happens; you now have an inconsistent array where
one
On Wed, Mar 12, 2014 at 07:50:28PM +0530, Chandan Rajendra wrote:
bio_vec-{bv_offset, bv_len} cannot be relied upon by the end bio functions
to track the file offset range operated on by the bio. Hence this patch adds
two new members to 'struct btrfs_io_bio' to track the file offset range.
This kind of crash happens to me very often when I delete a large
number (200+) of snapshots at once.
There is very high IO for a while, and after that, the system
freezes at intervals. I had to reboot the system to get it responsive
again.
Versions used:
kernel-3.13.6-200.fc20.x86_64
On Wed, Mar 19, 2014 at 08:52:37AM +0100, Juan Orti Alcaine wrote:
This kind of crash happens to me very often when I delete a large
number (200+) of snapshots at once.
There is very high IO for a while, and after that, the system
freezes at intervals. I had to reboot the system to get it
Thank you very much Ben :)
I went through the links you sent and got the complete details for
submitting the kernel component.
Also, my change has a patch in btrfs-tools. It would be nice if you could
share the process for submitting that patch as well.
Regards,
Ajesh
On Tue, Mar 18, 2014 at 7:17 PM,
For an incremental send, fix the process of determining whether the directory
inode we're currently processing needs to have its move/rename operation
delayed.
We were ignoring the fact that if the inode's new immediate ancestor has a
higher inode number than ours but wasn't renamed/moved, we
On Wed, Mar 19, 2014 at 12:32:55AM -0600, Chris Murphy wrote:
On Mar 19, 2014, at 12:09 AM, Marc MERLIN m...@merlins.org wrote:
7) you can remove a drive from an array, add files, and then if you plug
the drive back in, it apparently gets automatically sucked back into the array.
There is no
My server died last night during a btrfs send/receive to a btrfs raid5 array.
Here are the logs. Is this anything known or with a possible workaround?
Thanks,
Marc
btrfs-rmw-2: page allocation failure: order:1, mode:0x8020
CPU: 1 PID: 12499 Comm: btrfs-rmw-2 Not tainted
These should be put in front of 'struct bio bio';
otherwise, it might lead to errors, according to bioset_create()'s comments:
--
Note that the bio must be embedded at the END of that structure always,
or things will
On Mar 19, 2014, at 9:40 AM, Marc MERLIN m...@merlins.org wrote:
After adding a drive, I couldn't quite tell if it was striping over 11
drives or 10, but it felt that, at least at times, it was striping over 11
drives with write failures on the missing drive.
I can't prove it, but I'm
I added an optimization for large files where we would stop searching for
backrefs once we had looked at the number of references we currently had for
this extent. This works great most of the time, but for snapshots that point to
this extent and have changes in the original root, this assumption
On 03/19/2014 11:45 AM, Marc MERLIN wrote:
My server died last night during a btrfs send/receive to a btrfs raid5 array.
Here are the logs. Is this anything known or with a possible workaround?
Thanks,
Marc
btrfs-rmw-2: page allocation failure: order:1, mode:0x8020
This is an order 1 atomic
On Tue, Mar 18, 2014 at 01:48:00PM +0630, chandan wrote:
The earlier patchset posted by Chandra Seethraman was to get 4k
blocksize to work with ppc64's 64k PAGE_SIZE.
Are we talking about metadata block sizes or data block sizes?
The root node of tree root tree has 1957 bytes being written by
On Tue, Mar 18, 2014 at 06:55:13PM +0800, Liu Bo wrote:
On Mon, Mar 17, 2014 at 03:41:31PM +0100, David Sterba wrote:
There are enough EINVALs that verify correctness of the input
parameters, and it's not always clear which one fails. The EOPNOTSUPP
error code is close to the true reason of
On Wed, Mar 19, 2014 at 01:35:14PM -0400, Josef Bacik wrote:
I added an optimization for large files where we would stop searching for
backrefs once we had looked at the number of references we currently had for
this extent. This works great most of the time, but for snapshots that point
to
On Wed, Mar 19, 2014 at 12:20:08PM -0400, Chris Mason wrote:
On 03/19/2014 11:45 AM, Marc MERLIN wrote:
My server died last night during a btrfs send/receive to a btrfs raid5
array.
Here are the logs. Is this anything known or with a possible workaround?
Thanks,
Marc
btrfs-rmw-2: page
On Wed, Mar 19, 2014 at 10:53:33AM -0600, Chris Murphy wrote:
Yes, although it's limited: you apparently only lose new data that was added
after you went into degraded mode, and only if you add another drive where
you write more data.
In real life this shouldn't be too common, even if it is
On 3/19/14, 6:37 PM, Marc MERLIN m...@merlins.org wrote:
On Wed, Mar 19, 2014 at 12:20:08PM -0400, Chris Mason wrote:
On 03/19/2014 11:45 AM, Marc MERLIN wrote:
My server died last night during a btrfs send/receive to a btrfs raid5
array.
Here are the logs. Is this anything known or with a
On Thu, Mar 20, 2014 at 12:13:36AM +, Chris Mason wrote:
Should I double it?
For now, I have the copy running again, and it's been going for 8 hours
without failure on the old kernel, but of course that doesn't mean my 2TB
copy will complete without hitting the bug again.
Sorry, I
On 3/19/14, 8:20 PM, Marc MERLIN m...@merlins.org wrote:
On Thu, Mar 20, 2014 at 12:13:36AM +, Chris Mason wrote:
Should I double it?
For now, I have the copy running again, and it's been going for 8 hours
without failure on the old kernel, but of course that doesn't mean my 2TB
copy
On Tue, Mar 18, 2014 at 05:56:06PM +, Filipe David Borba Manana wrote:
No need to search in the send tree for the generation number of the inode,
we already have it in the recorded_ref structure passed to us.
Reviewed-by: Liu Bo bo.li@oracle.com
-liubo
Signed-off-by: Filipe David
David Sterba dste...@suse.cz writes:
On Tue, Mar 18, 2014 at 01:48:00PM +0630, chandan wrote:
The earlier patchset posted by Chandra Seethraman was to get 4k
blocksize to work with ppc64's 64k PAGE_SIZE.
Are we talking about metadata block sizes or data block sizes?
The root node of tree