Chris Mason posted on Thu, 20 Mar 2014 01:01:35 + as excerpted:
Sorry, I misspoke, you should bump
Marc MERLIN posted on Wed, 19 Mar 2014 08:40:31 -0700 as excerpted:
That's the thing though. If the bad device hadn't been forcibly removed,
and apparently the only way to do this was to unmount, make the device
node disappear, and remount in degraded mode, it looked to me like btrfs was
I think after the balance it was a fine, non-degraded RAID again... As
far as I remember.
Tobby
2014-03-20 1:46 GMT+01:00 Marc MERLIN m...@merlins.org:
On Thu, Mar 20, 2014 at 01:44:20AM +0100, Tobias Holst wrote:
I tried the RAID6 implementation of btrfs and it looks like I had the same
Just cleanup: remove useless return type, while loop and dead code.
Signed-off-by: Rakesh Pandit rak...@tuxera.com
---
btrfs.c | 33 +++--
1 file changed, 11 insertions(+), 22 deletions(-)
diff --git a/btrfs.c b/btrfs.c
index 16458ef..25257b6 100644
--- a/btrfs.c
+++
On 03/20/2014 02:19 AM, Duncan wrote:
Chris Mason posted on Thu, 20 Mar 2014 01:01:35 + as excerpted:
Sorry, I misspoke, you
Today I accidentally discovered that it is not possible to suspend (hibernate)
the system while the btrfs scrub process is running (see the attached log).
Could this be considered a bug, or did I miss something?
Some more info:
$ uname -a
Linux asusntb 3.13.6-1-ARCH #1 SMP PREEMPT Fri Mar 7
On 03/20/2014 11:21 AM, Jakub Klinkovský wrote:
Today I accidentally discovered that it is not possible to suspend (hibernate)
the system while the btrfs scrub process is running (see the attached log).
Could this be considered a bug, or did I miss something?
Some more info:
$ uname -a
Linux
Hi,
I think this issue came up recently. You can read more about it here:
http://comments.gmane.org/gmane.comp.file-systems.btrfs/33106
On 20.03.14 at 16:31, George Eleftheriou wrote:
Hi,
I think this issue came up recently. You can read more about it here:
http://comments.gmane.org/gmane.comp.file-systems.btrfs/33106
Thank you for the link and apologies for the noise.
On Wed, Mar 19, 2014 at 02:47:50PM +0530, Ajesh js wrote:
I did go through the links you sent and got the complete details for
sending the kernel component.
Also, my change has a patch in btrfs-tools. It would be nice if you could
share the process for submitting that patch as well.
The patch
Does anyone know if my issue is btrfs related or if it is more likely hardware
related:
http://www.spinics.net/lists/linux-btrfs/msg30999.html
My kernel is from Debian Jessie:
3.13-1-amd64 #1 SMP Debian 3.13.5-1 (2014-03-04) x86_64 GNU/Linux
Thanks for any insight
On 19/03/14 01:57 AM, Adam
On Mon, Mar 17, 2014 at 07:58:06PM +0800, Anand Jain wrote:
In a normal scenario, when a sysadmin replaces a disk, the
expected behavior is that btrfs releases the disk completely.
However, the test case below gives the wrong impression that the
replaced disk is still in use.
$ btrfs rep start /dev/sde
On Thu, Mar 20, 2014 at 11:30:33AM -0400, Josef Bacik wrote:
Yeah there's a way to make suspend run commands while it goes down,
you'll want to make it do btrfs scrub cancel on your btrfs fses. If you
search the archives you'll see we've covered this recently and the guy
posted the script
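The hook Josef describes could look roughly like the sketch below. This is an
assumption about how one might wire it up, not the script from the archives:
it uses systemd's system-sleep interface (scripts dropped into
/usr/lib/systemd/system-sleep/ get called with "pre" before suspend) and
findmnt to enumerate mounted btrfs filesystems.

```shell
#!/bin/sh
# Hypothetical /usr/lib/systemd/system-sleep/btrfs-scrub-cancel hook.
# Before suspending, cancel any running scrub on every mounted btrfs
# filesystem so the scrub threads don't block the freezer.
if [ "$1" = "pre" ]; then
    # findmnt -n: no header; -t btrfs: only btrfs mounts; -o TARGET: mountpoints
    findmnt -n -t btrfs -o TARGET | while read -r mnt; do
        # scrub cancel fails harmlessly if no scrub is running there
        btrfs scrub cancel "$mnt" 2>/dev/null || true
    done
fi
```

A cancelled scrub can be resumed after wakeup with btrfs scrub resume, so
cancelling before suspend loses little.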
On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote:
Hi everyone,
I've tagged the current btrfs-progs repo as v3.12. The new idea is that
instead of making the poor distros pull from git, I'll be creating
tagged releases at roughly the same pace as Linus cuts kernels.
On 03/20/2014 07:36 PM, WorMzy Tykashi wrote:
On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote:
Hi everyone,
I've tagged the current btrfs-progs repo as v3.12. The new idea is that
instead of making the poor distros pull from git, I'll be creating
tagged releases at
On 20 March 2014 23:55, Chris Mason c...@fb.com wrote:
On 03/20/2014 07:36 PM, WorMzy Tykashi wrote:
On 25 November 2013 21:45, Chris Mason chris.ma...@fusionio.com wrote:
Hi everyone,
I've tagged the current btrfs-progs repo as v3.12. The new idea is that
instead of making the poor
If pthread_create fails in mdrestore_init, the number of threads
created could be less than the num_threads option. Hence, pass the number
of successful pthread_create calls to mdrestore_destroy, so that we don't
call pthread_join on IDs that were never created when pthread_create fails.
metadump_init already had
On Sat, Mar 15, 2014 at 7:51 AM, Duncan 1i5t5.dun...@cox.net wrote:
1) Does running the snapper cleanup command from that cron job manually
trigger the problem as well?
As you can imagine I'm not too keen to trigger this often. But yes, I
just gave it a shot on my SSD and cleaning a few days
Hello all. I submit bugs to different foss projects regularly, but I don't
really have a bug report this time. I have a broken filesystem to report. And I
have no idea how to reproduce it.
I am including a link to the filesystem itself, because it appears to be
unrepairable and unrestorable.
Rich Freeman posted on Thu, 20 Mar 2014 22:13:51 -0400 as excerpted:
However, I am deleting my snapshots one at a time at a rate of one every 5-30
minutes, and while that is creating surprisingly high disk loads on my
SSD and hard drives, I don't get any panics. I figured that having only
one
On Sun, Mar 16, 2014 at 10:42:24PM -0700, Marc MERLIN wrote:
On Thu, Mar 06, 2014 at 09:33:24PM +, Duncan wrote:
However, best snapshot management practice does progressive snapshot
thinning, so you never have more than a few hundred snapshots to manage
at once. Think of it this way.
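One way to implement the progressive thinning Duncan describes is sketched
below. The snapshot naming scheme (snap.<epoch-seconds>, taken hourly) and
the retention windows are assumptions for illustration, not anyone's actual
setup; the function only prints the delete commands it would run:

```shell
# Hypothetical progressive-thinning sketch for hourly read-only btrfs
# snapshots named snap.<epoch-seconds> in a directory. Keeps everything
# under a day old, roughly one per day for a week, one per week beyond.
thin_snapshots() {
    dir=$1
    now=$2
    for snap in "$dir"/snap.*; do
        [ -e "$snap" ] || continue
        ts=${snap##*.}
        age=$(( now - ts ))
        keep=no
        if [ "$age" -lt 86400 ]; then
            keep=yes        # less than a day old: keep all
        elif [ "$age" -lt 604800 ] && [ $(( ts % 86400 )) -lt 3600 ]; then
            keep=yes        # up to a week old: keep the first hour of each day
        elif [ $(( ts % 604800 )) -lt 3600 ]; then
            keep=yes        # older: keep the first hour of each week
        fi
        if [ "$keep" = no ]; then
            # print rather than execute, so a dry run is the default
            echo btrfs subvolume delete "$snap"
        fi
    done
}
```

Run as, e.g., thin_snapshots /mnt/snaps "$(date +%s)"; piping the output to
sh would perform the deletions. The point of the schedule is exactly what
Duncan says: the snapshot count stays bounded at a few hundred instead of
growing without limit.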