Current error messages are like the following:
Error: unable to create FS with metadata profile 32 (have 2 devices)
Error: unable to create FS with metadata profile 256 (have 2 devices)
Obviously it is hard for users to interpret profile XX into its proper
meaning, such as raidN. So use recognizable
On Thu, 15 May 2014, Christoph Hellwig wrote:
Date: Thu, 15 May 2014 22:40:09 -0700
From: Christoph Hellwig h...@infradead.org
To: Dave Chinner da...@fromorbit.com
Cc: fste...@vger.kernel.org, linux-fsde...@vger.kernel.org,
linux-e...@vger.kernel.org, linux-btrfs@vger.kernel.org,
On Fri, 16 May 2014, Christoph Hellwig wrote:
Date: Fri, 16 May 2014 01:53:20 -0700
From: Christoph Hellwig h...@infradead.org
To: Lukáš Czerner lczer...@redhat.com
Cc: Christoph Hellwig h...@infradead.org, Dave Chinner
da...@fromorbit.com,
fste...@vger.kernel.org,
On Fri, May 16, 2014 at 10:48 AM, Lukáš Czerner lczer...@redhat.com wrote:
On Thu, 15 May 2014, Christoph Hellwig wrote:
Date: Thu, 15 May 2014 22:40:09 -0700
From: Christoph Hellwig h...@infradead.org
To: Dave Chinner da...@fromorbit.com
Cc: fste...@vger.kernel.org,
Hello all,
I ran a modified version of the file system fuzzer
(https://github.com/sughodke/fsfuzzer) for one of the projects I am working on,
and at one point I got a possible crash.
I got the following trace on a device with a 32-bit kernel 3.14. I have
searched the Bugzilla for this issue,
On Fri, May 16, 2014 at 12:58:46PM +0800, Wang Shilong wrote:
Hi Anand,
On 05/16/2014 12:32 PM, Anand Jain wrote:
David,
As mentioned, this patch will back out the earlier patch
50275bacab0f62b91453fbfa29e75c2bb77bf9b6
I am confused about what I am missing. Any comments?
You are
(I'm not subscribed to linux-kernel, please copy me on the answers)
Hi there,
# uname -a
Linux zafu 3.14.4-1-ARCH #1 SMP PREEMPT Tue May 13 16:41:39 CEST 2014 x86_64
GNU/Linux
I've seen the same thing happen several times in the last couple of months (so
with said kernel version + at least 3.13)
Signed-off-by: David Sterba dste...@suse.cz
---
Sorry for the stupid typo in subject, I'm resending so there's a correct patch
in patchwork.
fs/btrfs/inode-map.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
index
btrfs-devlist: Dumps kernel fs_devices for debugging purposes
(This is not for integration, use it until we have a
replacement tool)
Anand Jain (1):
btrfs: introduce BTRFS_IOC_GET_DEVS
fs/btrfs/super.c | 86 ++
fs/btrfs/volumes.c | 142
On Fri, May 16, 2014 at 10:02:07AM -0400, Calvin Walton wrote:
Instead of renaming the test suite, why not just backronym it to mean
something different? The letter x is used to mean cross in many
contexts, so xfstests could easily mean cross-filesystem tests - a
name that fits perfectly!
BTRFS_IOC_FS_INFO returns num_devices, which does not include seed disks,
while BTRFS_IOC_DEV_INFO does fetch seed disks when probed. So this case
hits the btrfs-progs bug:
get_fs_info()
::
BUG_ON(ndevs >= fi_args->num_devices);
which is very easy to hit by using
On Fri, 2014-05-16 at 10:55 +0200, Lukáš Czerner wrote:
On Fri, 16 May 2014, Christoph Hellwig wrote:
On Fri, May 16, 2014 at 10:48:42AM +0200, Lukáš Czerner wrote:
As requested I've created a new mailing list for xfstests
development and discussion. Reflecting the fact that the test
Wang,
After a much of investigation on this, I think I found a
better approach to fix this. Can you kindly comment on
patch
[PATCH] btrfs: ioctl BTRFS_IOC_FS_INFO and BTRFS_IOC_DEV_INFO
mismatched with slots
Thanks,
Anand
On 14/05/14 16:30, Anand Jain wrote:
Hello Wang,
sure
Wang,
There seems to be a problem - after we delete the seed
disk, the total_devices didn't decrement back to 1.
reproducer as in the below test case. (I used btrfs-devlist
(posted) to check fs_devices).
# mkfs.btrfs -f /dev/sdb
# btrfstune -S 1 /dev/sdb
# mount /dev/sdb /mnt
#
On Mon, May 05, 2014 at 06:07:51PM +0100, Hugo Mills wrote:
If precisely one of those bitflips puts the broken key back into order
relative to its two neighbours, we probably have a fix for the bitflip,
and so we write it back to the FS.
This sounds safe enough to me. I'll add the patch to
On Fri, May 16, 2014 at 04:22:36PM +0200, David Sterba wrote:
On Mon, May 05, 2014 at 06:07:51PM +0100, Hugo Mills wrote:
If precisely one of those bitflips puts the broken key back into order
relative to its two neighbours, we probably have a fix for the bitflip,
and so we write it back to
2014-05-16 22:14 GMT+08:00 Anand Jain anand.j...@oracle.com:
Wang,
There seems to be a problem - after we delete the seed
disk, the total_devices didn't decrement back to 1.
reproducer as in the below test case. (I used btrfs-devlist
(posted) to check fs_devices).
There should be other
2014-05-16 22:44 GMT+08:00 Anand Jain anand.j...@oracle.com:
Wang,
when we unmount and mount (instead of remount) and follow
with a device delete of the seed, it ends up with a null pointer deref at
btrfs_shrink_dev. That's because the btrfs_root is not set for the
seed disk, as we mounted the writable
On Wed, May 07, 2014 at 01:07:17PM -0700, Mark Fasheh wrote:
+struct ref {
+ u64 bytenr;
+ u64 num_bytes;
+ u64 parent;
+ u64 root;
+
+ struct rb_node bytenr_node;
+};
A way too
Tested-by: Anand Jain anand.j...@oracle.com
On 11/05/14 23:14, Liu Bo wrote:
Same as normal devices, seed devices should be initialized with
fs_info->dev_root as well, otherwise we'll get a NULL pointer crash.
Cc: Chris Murphy li...@colorremedies.com
Reported-by: Chris Murphy
On 16/05/14 22:40, Shilong Wang wrote:
2014-05-16 22:06 GMT+08:00 Anand Jain anand.j...@oracle.com:
BTRFS_IOC_FS_INFO return num_devices which does not include seed disks,
BTRFS_IOC_DEV_INFO fetches seed disk when probed. So in this case hits
the btrfs-progs bug:
get_fs_info()
On Wed, May 07, 2014 at 01:07:14PM -0700, Mark Fasheh wrote:
The first two patches set up for qgroups:
- The change in patch #1 is optional. It corrects the print of qgroup bytes
to be %llu as they are unsigned values. This means, however, that corrupted
groups will no longer show a negative
2014-05-16 23:13 GMT+08:00 Anand Jain anand.j...@oracle.com:
On 16/05/14 22:40, Shilong Wang wrote:
2014-05-16 22:06 GMT+08:00 Anand Jain anand.j...@oracle.com:
BTRFS_IOC_FS_INFO return num_devices which does not include seed disks,
BTRFS_IOC_DEV_INFO fetches seed disk when probed. So in
While doing rsyncs of large archives from one RAID-1 btrfs filesystem
to another RAID-1 btrfs filesystem:
btrfs filesystem 1: sda + sdb (RAID-1), being copied to:
btrfs filesystem 2: sdc + sdd (RAID-1)
Server has 32 GB RAM
I can observe the following:
From time to time, rsync freezes, while
On Thu, May 08, 2014 at 01:48:43PM +0300, Konstantinos Skarlatos wrote:
On 8/5/2014 4:26 AM, Wang Shilong wrote:
This patch adds an option '--check-data-csum' to verify data csums.
fsck won't check data csums unless users specify this option explicitly.
Can this option be added to btrfs restore
On Fri, May 16, 2014 at 09:23:37AM +0800, Gui Hecheng wrote:
Add sys chunk array and backup roots info if the new option '-f'
is specified.
This may be useful for debugging sys_chunk related issues.
Nice, thanks.
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the
On Thu, May 15, 2014 at 09:29:08AM +0800, Gui Hecheng wrote:
Add '-h' option for help for super-recover,
update the manpage at the same time.
We don't have a short option for help; a few patches have already been
rejected for trying to change that.
On Fri, May 16, 2014 at 06:37:27PM +0200, David Sterba wrote:
On Thu, May 15, 2014 at 09:29:08AM +0800, Gui Hecheng wrote:
Add '-h' option for help for super-recover,
update the manpage at the same time.
We don't have the short option for help, a few patches have been already
rejected to
On Thu, May 15, 2014 at 12:53:52PM -0500, Eric Sandeen wrote:
So, in my testing, I found that re-mkfsing a device with the same UUID led
to weird distant segfaults in other bits of code. Probably due to the
uuid cache? /handwave - I didn't dig into it, because ...
Ok, something that needs
The change titled:
Btrfs: fix broken free space cache after the system crashed
can increment a block group cache object twice in find_free_extent() but
only decrement it once, resulting in a memory leak.
This is easy to reproduce by having kmemleak enabled and the following
steps:
On Fri, 16 May 2014 14:06:24 -0400
Calvin Walton calvin.wal...@kepstin.ca wrote:
No comment on the performance issue, other than to say that I've seen
similar on RAID-10 before, I think.
Also, what happens when the system crashes, and one drive has
several hundred megabytes data more than
On 05/16/2014 04:41 PM, Tomasz Chmielewski wrote:
On Fri, 16 May 2014 14:06:24 -0400
Calvin Walton calvin.wal...@kepstin.ca wrote:
No comment on the performance issue, other than to say that I've seen
similar on RAID-10 before, I think.
Also, what happens when the system crashes, and one
On Fri, May 16, 2014 at 01:53:20AM -0700, Christoph Hellwig wrote:
On Fri, May 16, 2014 at 10:48:42AM +0200, Lukáš Czerner wrote:
As requested I've created a new mailing list for xfstests
development and discussion. Reflecting the fact that the test
harness is not really XFS specific