Re: btrfs-rmw-2: page allocation failure: order:1, mode:0x8020

2014-03-20 Thread Duncan
Chris Mason posted on Thu, 20 Mar 2014 01:01:35 +0000 as excerpted: Sorry, I misspoke, you should bump

Re: How to handle a RAID5 array with a failing drive? - raid5 mostly works, just no rebuilds

2014-03-20 Thread Duncan
Marc MERLIN posted on Wed, 19 Mar 2014 08:40:31 -0700 as excerpted: That's the thing though. If the bad device hadn't been forcibly removed, and apparently the only way to do this was to unmount, make the device node disappear, and remount in degraded mode, it looked to me like btrfs was

Re: How to handle a RAID5 array with a failing drive? - raid5 mostly works, just no rebuilds

2014-03-20 Thread Tobias Holst
I think after the balance it was a fine, non-degraded RAID again... As far as I remember. Tobby 2014-03-20 1:46 GMT+01:00 Marc MERLIN m...@merlins.org: On Thu, Mar 20, 2014 at 01:44:20AM +0100, Tobias Holst wrote: I tried the RAID6 implementation of btrfs and it looks like I had the same

[PATCH] Btrfs-progs: btrfs: remove dead code in handle_options

2014-03-20 Thread Rakesh Pandit
Just cleanup: remove useless return type, while loop and dead code. Signed-off-by: Rakesh Pandit rak...@tuxera.com --- btrfs.c | 33 +++-- 1 file changed, 11 insertions(+), 22 deletions(-) diff --git a/btrfs.c b/btrfs.c index 16458ef..25257b6 100644 --- a/btrfs.c +++

Re: btrfs-rmw-2: page allocation failure: order:1, mode:0x8020

2014-03-20 Thread Chris Mason
On 03/20/2014 02:19 AM, Duncan wrote: Chris Mason posted on Thu, 20 Mar 2014 01:01:35 +0000 as excerpted: Sorry, I misspoke, you

btrfs scrub process prevents system suspend

2014-03-20 Thread Jakub Klinkovský
Today I accidentally discovered that it is not possible to suspend (hibernate) the system while the btrfs scrub process is running (see the attached log). Could this be considered a bug, or did I miss something? Some more info: $ uname -a Linux asusntb 3.13.6-1-ARCH #1 SMP PREEMPT Fri Mar 7

Re: btrfs scrub process prevents system suspend

2014-03-20 Thread Josef Bacik
On 03/20/2014 11:21 AM, Jakub Klinkovský wrote: Today I accidentally discovered that it is not possible to suspend (hibernate) the system while the btrfs scrub process is running (see the attached log). Could this be considered a bug, or did I miss something? Some more info: $ uname -a Linux

Re: btrfs scrub process prevents system suspend

2014-03-20 Thread George Eleftheriou
Hi, I think this issue came up recently. You can read more about it here: http://comments.gmane.org/gmane.comp.file-systems.btrfs/33106

Re: btrfs scrub process prevents system suspend

2014-03-20 Thread Jakub Klinkovský
On 20.03.14 at 16:31, George Eleftheriou wrote: Hi, I think this issue came up recently. You can read more about it here: http://comments.gmane.org/gmane.comp.file-systems.btrfs/33106 Thank you for the link and apologies for the noise.

Re: Please help me to contribute to btrfs project

2014-03-20 Thread David Sterba
On Wed, Mar 19, 2014 at 02:47:50PM +0530, Ajesh js wrote: I did go through the links you sent and got the complete details for submitting the kernel component. Also my change has a patch in btrfs-tools. It would be nice if you could share the process for submitting that patch as well. The patch

Re: Please advise on repair action

2014-03-20 Thread Adam Khan
Does anyone know if my issue is btrfs related or if it is more likely hardware related: http://www.spinics.net/lists/linux-btrfs/msg30999.html My kernel is from Debian Jessie: 3.13-1-amd64 #1 SMP Debian 3.13.5-1 (2014-03-04) x86_64 GNU/Linux Thanks for any insight On 19/03/14 01:57 AM, Adam

Re: [PATCH] Btrfs: all super blocks of the replaced disk must be scratched

2014-03-20 Thread David Sterba
On Mon, Mar 17, 2014 at 07:58:06PM +0800, Anand Jain wrote: In a normal scenario when a sysadmin replaces a disk, the expected behaviour is that btrfs releases the disk completely. However, the test case below gives the wrong impression that the replaced disk is still in use. $ btrfs rep start /dev/sde

Re: btrfs scrub process prevents system suspend

2014-03-20 Thread Marc MERLIN
On Thu, Mar 20, 2014 at 11:30:33AM -0400, Josef Bacik wrote: Yeah there's a way to make suspend run commands while it goes down, you'll want to make it do btrfs scrub cancel on your btrfs fses. If you search the archives you'll see we've covered this recently and the guy posted the script
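The suggestion above — have the suspend path run `btrfs scrub cancel` on each btrfs filesystem — can be sketched as a systemd system-sleep hook. This is a hypothetical fragment for illustration (the path, filename, and use of `findmnt` are my assumptions, not taken from the thread or the script it mentions):

```shell
#!/bin/sh
# Hypothetical hook, e.g. /usr/lib/systemd/system-sleep/btrfs-scrub.sh
# (path is an assumption): systemd calls system-sleep scripts with
# "pre" before suspending and "post" after resuming.
case "$1" in
    pre)
        # Cancel any running scrub on every mounted btrfs filesystem.
        # "btrfs scrub cancel" exits non-zero when no scrub is running,
        # so errors are ignored.
        findmnt -t btrfs -n -o TARGET | while read -r mnt; do
            btrfs scrub cancel "$mnt" 2>/dev/null || true
        done
        ;;
esac
```

A matching `post)` branch could restart the scrub after resume, though whether that is desirable depends on the workload.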

Re: btrfs-progs tagged as v3.12

2014-03-20 Thread WorMzy Tykashi
On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote: Hi everyone, I've tagged the current btrfs-progs repo as v3.12. The new idea is that instead of making the poor distros pull from git, I'll be creating tagged releases at roughly the same pace as Linus cuts kernels.

Re: btrfs-progs tagged as v3.12

2014-03-20 Thread Chris Mason
On 03/20/2014 07:36 PM, WorMzy Tykashi wrote: On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote: Hi everyone, I've tagged the current btrfs-progs repo as v3.12. The new idea is that instead of making the poor distros pull from git, I'll be creating tagged releases at

Re: btrfs-progs tagged as v3.12

2014-03-20 Thread WorMzy Tykashi
On 20 March 2014 23:55, Chris Mason c...@fb.com wrote: On 03/20/2014 07:36 PM, WorMzy Tykashi wrote: On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote: Hi everyone, I've tagged the current btrfs-progs repo as v3.12. The new idea is that instead of making the poor

[PATCH] Btrfs-progs: btrfs-image: don't call pthread_join on IDs not present

2014-03-20 Thread Rakesh Pandit
If pthread_create fails in mdrestore_init, then the number of threads created could be less than the num-of-threads option. Hence pass the number of successful pthread_create calls to mdrestore_destroy, so that we don't call pthread_join on IDs not present when pthread_create fails. metadump_init already had

[PATCH v2] Btrfs-progs: btrfs-image: don't call pthread_join on IDs not present

2014-03-20 Thread Rakesh Pandit
If pthread_create fails in mdrestore_init, then the number of threads created could be less than the num-of-threads option. Hence pass the number of successful pthread_create calls to mdrestore_destroy, so that we don't call pthread_join on IDs not present when pthread_create fails. metadump_init already had

Re: [PATCH] Btrfs: fix deadlock with nested trans handles

2014-03-20 Thread Rich Freeman
On Sat, Mar 15, 2014 at 7:51 AM, Duncan 1i5t5.dun...@cox.net wrote: 1) Does running the snapper cleanup command from that cron job manually trigger the problem as well? As you can imagine I'm not too keen to trigger this often. But yes, I just gave it a shot on my SSD and cleaning a few days

Especially broken btrfs

2014-03-20 Thread sepero...@gmx.com
Hello all. I submit bugs to different foss projects regularly, but I don't really have a bug report this time. I have a broken filesystem to report. And I have no idea how to reproduce it. I am including a link to the filesystem itself, because it appears to be unrepairable and unrestorable.

Re: [PATCH] Btrfs: fix deadlock with nested trans handles

2014-03-20 Thread Duncan
Rich Freeman posted on Thu, 20 Mar 2014 22:13:51 -0400 as excerpted: However, I am removing my snapshots one at a time, at a rate of one every 5-30 minutes, and while that is creating surprisingly high disk loads on my ssd and hard drives, I don't get any panics. I figured that having only one

Re: Understanding btrfs and backups = automatic snapshot script

2014-03-20 Thread Marc MERLIN
On Sun, Mar 16, 2014 at 10:42:24PM -0700, Marc MERLIN wrote: On Thu, Mar 06, 2014 at 09:33:24PM +, Duncan wrote: However, best snapshot management practice does progressive snapshot thinning, so you never have more than a few hundred snapshots to manage at once. Think of it this way.