Re: [PATCH 02/10] btrfs-progs: save error number correctly in check_chunks_and_extents

2015-10-20 Thread Eryu Guan
On Mon, Oct 19, 2015 at 03:41:04PM +0200, David Sterba wrote: > On Mon, Oct 19, 2015 at 07:37:52PM +0800, Eryu Guan wrote: > > Coverity reports assigning value from "err" to "ret", but that stored > > value is overwritten by check_extent_refs() before it can be used. > > If you fix a coverity
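
A minimal userspace sketch of the error-saving pattern at issue; the function names are placeholders, not the fsck code:

#include <errno.h>
#include <stdio.h>

/* Once a failure has been recorded in ret, a later call must not
 * blindly overwrite it, or the first error is silently lost. */
static int check_a(void) { return -EIO; }  /* pretend this fails */
static int check_b(void) { return 0; }     /* and this succeeds */

static int check_all(void)
{
        int ret = check_a();
        int err = check_b();

        if (!ret)
                ret = err;  /* keep the earlier error instead of clobbering it */
        return ret;
}

int main(void)
{
        printf("check_all() = %d\n", check_all());
        return 0;
}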

RE: [PATCH v6 0/4] VFS: In-kernel copy system call

2015-10-20 Thread Zhao Lei
Hi, Anna Schumaker This patchset compiles OK for the x86 and x86_64 targets, but fails on arm when compiling the btrfs dir, with the following error message: :1304:2: warning: #warning syscall copy_file_range not implemented [-Wcpp] Reproduce: merge commands: cd /mnt/big1/linux git fetch -q --all
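
The warning above appears to come from the kernel's generated syscall-table check; a minimal standalone sketch of the mechanism (not the generated checker itself):

/* If the architecture's unistd headers do not define the syscall
 * number, the preprocessor emits a #warning during the build. */
#ifndef __NR_copy_file_range
#warning syscall copy_file_range not implemented
#endif

int main(void) { return 0; }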

[RFC PATCH] Btrfs-progs: remove compressed option from 'qgroup limit'

2015-10-20 Thread Liu Bo
The current design of btrfs quota doesn't support "quota limit after compression" after commit e2d1f92399af ("btrfs: qgroup: do a reservation in a higher level."), so remove it to make things clear. Signed-off-by: Liu Bo --- Documentation/btrfs-qgroup.asciidoc | 4

[RESEND PATCH 1/2] btrfs: check-integrity: Fix returned errno codes

2015-10-20 Thread Luis de Bethencourt
check-integrity is using -1 instead of the -ENOMEM defined macro to specify that a buffer allocation failed. Since the error number is propagated, the caller will get a -EPERM which is the wrong error condition. Also, the smatch tool complains with the following warnings:
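
A minimal userspace sketch of the point (not the check-integrity code itself): a bare -1 travels up the call chain as -EPERM, while an allocation failure should surface as -ENOMEM.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int alloc_buffer(void **out, size_t len)
{
        void *buf = malloc(len);

        if (!buf)
                return -ENOMEM;  /* returning -1 here would read as -EPERM */
        *out = buf;
        return 0;
}

int main(void)
{
        void *buf = NULL;
        int ret = alloc_buffer(&buf, 4096);

        printf("alloc_buffer() = %d\n", ret);
        free(buf);
        return 0;
}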

[RESEND PATCH 0/2] btrfs: Fix returned errno codes

2015-10-20 Thread Luis de Bethencourt
Hello, This is a resend of this patch series. It was posted on September 24 [0]. These two patches fix instances where -1 is used to indicate a buffer allocation failure, instead of using -ENOMEM. Patch 1/2 is already reviewed by David Sterba. Best regards, Luis [0]:

[RESEND PATCH 2/2] btrfs: reada: Fix returned errno code

2015-10-20 Thread Luis de Bethencourt
reada is using -1 instead of the -ENOMEM defined macro to specify that a buffer allocation failed. Since the error number is propagated, the caller will get a -EPERM which is the wrong error condition. Also, update the caller to return the exact value from reada_add_block. Smatch tool warning:

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Austin S Hemmelgarn
On 2015-10-20 09:15, Russell Coker wrote: On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote: https://www.gnu.org/software/ddrescue/ At this stage I would use ddrescue or something similar to copy data from the failing disk to a fresh disk, then do a BTRFS scrub to regenerate the

[PATCH] xfstests: btrfs/012: add a regression test for deleting ext2_saved

2015-10-20 Thread Liu Bo
Btrfs has now changed to delete subvolumes/snapshots asynchronously, which means that after umount, even though we've already deleted 'ext2_saved', rollback can still be completed, which it should not. So this adds a regression test for this. Signed-off-by: Liu Bo --- tests/btrfs/012 |

Re: N-Way (traditional) RAID-1 development status

2015-10-20 Thread Austin S Hemmelgarn
On 2015-10-19 23:13, james harvey wrote: Wanted to see if there's active development on N-Way (traditional) RAID-1. By this, I mean that RAID-1 across "n" disks traditionally means "n" copies of data, but btrfs currently implements RAID-1 as "2" copies of data. So, unlike traditional RAID-1,

How to remove missing device on RAID1?

2015-10-20 Thread Kyle Manna
Hi all, I have a collection of three (was 4) 1-2TB devices with data and metadata in a RAID1 mirror. Last night I was struck by the Click of Death on an old Samsung drive. I removed the device from the system, rebooted and mounted the volume with `-o degraded` and the file system seems fine and

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Duncan
Austin S Hemmelgarn posted on Tue, 20 Oct 2015 09:59:17 -0400 as excerpted: >>> It is worth clarifying also that: >>> a. While BTRFS will not return bad data in this case, it also won't >>> automatically repair the corruption. >> >> Really? If so I think that's a bug in BTRFS. When mounted rw

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Duncan
james harvey posted on Tue, 20 Oct 2015 00:16:15 -0400 as excerpted: > Background - > > My fileserver had a "bad event" last week. Shut it down normally to add > a new hard drive, and it would no longer post. Tried about 50 times, > doing the typical everything non-essential unplugged,

[PATCH 0/3 v3] Balance filters: stripes, enhanced limit and usage

2015-10-20 Thread David Sterba
A few more enhancements; I'd like to see all changes to the balance filters merged into one major release. Thanks. Changelog (v3): * I've noticed that we can enhance the 'usage' filter the same way, so do it to be consistent with the rest * the flags for all the new filters have been renamed

[PATCH 2/3] btrfs: add balance filter for stripes

2015-10-20 Thread David Sterba
From: Gabríel Arthúr Pétursson Balance block groups which have the given number of stripes, defined by a range min..max. This is useful to selectively rebalance only chunks that do not span enough devices, applies to RAID0/10/5/6. Signed-off-by: Gabríel Arthúr Pétursson
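
An illustrative predicate for such a range check, with hypothetical names rather than the kernel's (a sketch, not the patch itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A block group is selected for balance only if its stripe count
 * falls inside the user-supplied min..max range. */
static bool stripes_match(uint32_t num_stripes, uint32_t min, uint32_t max)
{
        return num_stripes >= min && num_stripes <= max;
}

int main(void)
{
        /* e.g. rebalance only chunks striped over at most 3 devices */
        printf("%d\n", stripes_match(2, 1, 3));  /* 1: selected */
        printf("%d\n", stripes_match(6, 1, 3));  /* 0: skipped  */
        return 0;
}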

[PATCH 1/3] btrfs: extend balance filter limit to take minimum and maximum

2015-10-20 Thread David Sterba
The 'limit' filter is underdesigned; it should have been a range for [min,max], with some relaxed semantics when one of the bounds is missing. Besides that, using a full u64 for a single value is a waste of bytes. Let's fix both by extending the use of the u64 bytes for the [min,max] range. This
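
A sketch of one plausible way the existing 64-bit field could carry the [min,max] pair without growing the structure; the actual btrfs_balance_args layout may differ:

#include <stdint.h>
#include <stdio.h>

struct limit_filter {
        union {
                uint64_t limit;              /* old single-value semantics */
                struct {
                        uint32_t limit_min;  /* new range semantics */
                        uint32_t limit_max;
                };
        };
};

int main(void)
{
        struct limit_filter f = { 0 };

        f.limit_min = 8;
        f.limit_max = 32;
        printf("min=%u max=%u\n", f.limit_min, f.limit_max);
        return 0;
}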

[PATCH 3/3] btrfs: extend balance filter usage to take minimum and maximum

2015-10-20 Thread David Sterba
Similar to the 'limit' filter, we can enhance the 'usage' filter to accept a range. The change is backward compatible; the range is applied only in connection with the BTRFS_BALANCE_ARGS_USAGE_RANGE flag. We don't have a use case yet; the current syntax has been sufficient. The enhancement should

Re: [RFC PATCH] btrfs/ioctl.c: Prefer inode with lowest offset as source for clone

2015-10-20 Thread Timofey Titovets
2015-10-20 17:56 GMT+03:00, Filipe Manana : > On Tue, Oct 20, 2015 at 2:29 PM, Timofey Titovets > wrote: >> For performance reason, leave data at the start of disk, is preferable >> while deduping > > Have you made any performance tests to verify that?

Re: How to remove missing device on RAID1?

2015-10-20 Thread Philip Seeger
Hi Kyle, On 10/20/2015 07:24 PM, Kyle Manna wrote: I removed the device from the system, rebooted and mounted the volume with `-o degraded` and the file system seems fine and usable. I'm waiting on a replacement drive, but want to remove the old drive and re-balance in the meantime. This

Re: How to remove missing device on RAID1?

2015-10-20 Thread Duncan
Kyle Manna posted on Tue, 20 Oct 2015 10:24:48 -0700 as excerpted: > Hi all, > > I have a collection of three (was 4) 1-2TB devices with data and > metadata in a RAID1 mirror. Last night I was struck by the Click of > Death on an old Samsung drive. > > I removed the device from the system,

Re: How to remove missing device on RAID1?

2015-10-20 Thread Goffredo Baroncelli
On 2015-10-20 19:24, Kyle Manna wrote: > Hi all, [...] > How do I remove the missing device? I tried the `btrfs device delete > missing /mnt` but was greeted with "ERROR: missing is not a block > device". A quick look at that btrfs-progs git repo shows that > `stat("missing")` is called, which
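
A rough userspace illustration of the behaviour described (not the btrfs-progs code): if the argument is unconditionally stat()ed as a device path, the literal keyword "missing" can never match and would have to be special-cased first.

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int resolve_device_arg(const char *arg)
{
        struct stat st;

        if (strcmp(arg, "missing") == 0)
                return 0;  /* keyword, to be handled by the kernel ioctl */
        if (stat(arg, &st) != 0 || !S_ISBLK(st.st_mode)) {
                fprintf(stderr, "ERROR: %s is not a block device\n", arg);
                return -1;
        }
        return 0;
}

int main(int argc, char **argv)
{
        return argc > 1 ? resolve_device_arg(argv[1]) : 0;
}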

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Austin S Hemmelgarn
On 2015-10-20 14:54, Duncan wrote: But tho I'm a user not a dev and thus haven't actually checked the source code itself, my belief here is with Russ and disagrees with Austin, as based on what I've read both on the wiki and seen here previously, btrfs runtime (that is, not during scrub)

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Austin S Hemmelgarn
On 2015-10-20 15:20, Duncan wrote: Austin S Hemmelgarn posted on Tue, 20 Oct 2015 09:59:17 -0400 as excerpted: It is worth clarifying also that: a. While BTRFS will not return bad data in this case, it also won't automatically repair the corruption. Really? If so I think that's a bug in

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Tim Walberg
On 10/20/2015 15:59 -0400, Austin S Hemmelgarn wrote: >> . >> With a 32-bit checksum and a 4k block (the math is easier with >> smaller numbers), that's 4128 bits, which means that a random >> single bit error will have an approximately 0.24% chance of >> occurring

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Duncan
Austin S Hemmelgarn posted on Tue, 20 Oct 2015 15:48:07 -0400 as excerpted: > FWIW, my assessment is based on some testing I did a while back (kernel > 3.14 IIRC) using a VM. The (significantly summarized of course) > procedure I used was: > 1. Create a basic minimalistic Linux system in a VM

Re: How to remove missing device on RAID1?

2015-10-20 Thread Kyle Manna
Thanks for the follow-up, Duncan, that makes sense. I assumed I was doing something wrong. I downloaded the devel branch of btrfs-progs and got it running before I saw the need for a kernel patch and decided to wait. For anyone following this later, I needed to use the following to get the

Re: How to remove missing device on RAID1?

2015-10-20 Thread Henk Slager
copy-paste error corrected On Wed, Oct 21, 2015 at 12:40 AM, Henk Slager wrote: > I had a similar issue some time ago, around the time kernel 4.1.6 was > just there. > In case you don't want to wait for new disk or decide to just run the > filesystem with 1 disk less or maybe

[PATCH] btrfs: fix possible leak in btrfs_ioctl_balance()

2015-10-20 Thread Christian Engelmayer
Commit 8eb934591f8b ("btrfs: check unsupported filters in balance arguments") adds a jump to exit label out_bargs in case the argument check fails. At this point in addition to the bargs memory, the memory for struct btrfs_balance_control has already been allocated. Ownership of bctl is passed to
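
A simplified userspace analogue of the leak and the kind of fix it calls for (labels and structs here are illustrative, not the kernel code):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Once the second allocation has succeeded, every error exit must
 * release it, so the failed argument check needs a label that frees
 * 'bctl' before falling through to the one that frees 'bargs'. */
struct bargs_s { unsigned int flags; };
struct bctl_s  { int placeholder; };

static int do_balance(unsigned int bad_flags)
{
        struct bargs_s *bargs = malloc(sizeof(*bargs));
        struct bctl_s *bctl;
        int ret = 0;

        if (!bargs)
                return -ENOMEM;

        bctl = malloc(sizeof(*bctl));
        if (!bctl) {
                ret = -ENOMEM;
                goto out_bargs;
        }

        if (bad_flags) {
                ret = -EINVAL;
                goto out_bctl;  /* jumping straight to out_bargs would leak bctl */
        }

out_bctl:
        free(bctl);
out_bargs:
        free(bargs);
        return ret;
}

int main(void)
{
        printf("do_balance(1) = %d\n", do_balance(1));
        return 0;
}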

Re: How to remove missing device on RAID1?

2015-10-20 Thread Henk Slager
I had a similar issue some time ago, around the time kernel 4.1.6 had just come out. In case you don't want to wait for the new disk, or decide to just run the filesystem with 1 disk less, or maybe later on replace 1 of the still-healthy disks with a double/bigger-sized one and use current/older

Re: [RFC PATCH] btrfs/ioctl.c: Prefer inode with lowest offset as source for clone

2015-10-20 Thread Filipe Manana
On Tue, Oct 20, 2015 at 2:29 PM, Timofey Titovets wrote: > For performance reason, leave data at the start of disk, is preferable > while deduping Have you made any performance tests to verify that? > It's might sense for the reasons: > 1. Spinning rust - start of the disk

Re: Btrfs/RAID5 became unmountable after SATA cable fault

2015-10-20 Thread Duncan
Janos Toth F. posted on Mon, 19 Oct 2015 10:39:06 +0200 as excerpted: > I was in the middle of replacing the drives of my NAS one-by-one (I > wished to move to bigger and faster storage at the end), so I used one > more SATA drive + SATA cable than usual. Unfortunately, the extra cable > turned

[RFC PATCH V2] btrfs/ioctl.c: extent_same - Use as src the inode which is closest to disk beginning

2015-10-20 Thread Timofey Titovets
It's just a proof of concept, and I hope to see feedback/ideas/review about it. --- During deduplication, Btrfs produces extent and file fragmentation, but this can be optimized by computing which inode's data is placed closest to the beginning of the hdd. This allows: 1. Performance boost on hdd

[RFC PATCH V2] btrfs/ioctl.c: extent_same - Use as src the inode which is closest to disk beginning

2015-10-20 Thread Timofey Titovets
It's just a proof of concept, and I hope to see feedback and ideas about it. --- During deduplication, Btrfs produces extent and file fragmentation, but this can be optimized by computing which inode's data is placed closest to the beginning of the hdd. This allows us to reach: 1. Performance boost on hdd (beginning

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Austin S Hemmelgarn
On 2015-10-20 00:45, Russell Coker wrote: On Tue, 20 Oct 2015 03:16:15 PM james harvey wrote: sda appears to be going bad, with my low threshold of "going bad", and will be replaced ASAP. It just developed 16 reallocated sectors, and has 40 current pending sectors. I'm currently running a

Re: Expected behavior of bad sectors on one drive in a RAID1

2015-10-20 Thread Russell Coker
On Wed, 21 Oct 2015 12:00:59 AM Austin S Hemmelgarn wrote: > > https://www.gnu.org/software/ddrescue/ > > > > At this stage I would use ddrescue or something similar to copy data from > > the failing disk to a fresh disk, then do a BTRFS scrub to regenerate > > the missing data. > > > > I

[RFC PATCH] btrfs/ioctl.c: Prefer inode with lowest offset as source for clone

2015-10-20 Thread Timofey Titovets
For performance reasons, leaving data at the start of the disk is preferable while deduping. It makes sense for these reasons: 1. Spinning rust - the start of the disk is much faster 2. Btrfs can deallocate empty data chunks from the end of the fs - i.e. it compacts the fs Signed-off-by: Timofey Titovets
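
A hypothetical helper sketching the selection rule (names and signature are illustrative, not the patch's):

#include <stdint.h>
#include <stdio.h>

/* Given the starting byte offsets of two identical extents, keep as
 * the clone source the one nearer the beginning of the device, so the
 * surviving copy sits in the faster region of spinning media and space
 * is freed toward the end of the filesystem. */
static int preferred_source(uint64_t offset_a, uint64_t offset_b)
{
        return offset_a <= offset_b ? 0 : 1;  /* index of the extent to keep */
}

int main(void)
{
        printf("prefer extent %d\n", preferred_source(1 << 20, 512ULL << 30));
        return 0;
}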