Thanks for the review first.
At 03/28/2017 12:38 AM, David Sterba wrote:
On Fri, Mar 24, 2017 at 10:00:23AM +0800, Qu Wenruo wrote:
Unlike mirror-based profiles, RAID5/6 recovery needs to read out the
whole full stripe.
And if we don't do proper protection, it can easily cause race conditions.
In
Duncan posted on Tue, 28 Mar 2017 02:41:52 + as excerpted:
> Which in usual terms means making the perms root-only, with the binary
> set to some controlled-access group and set-SUID-root (or appropriate
> security attributes, I'm drawing a blank on the word I want ATM), and
> then letting the
On 27.03.2017 22:32, Chris Murphy wrote:
> How about if qgroups are enabled, then non-root users are prevented from
> creating new subvolumes?
>
> Or is there a way for a new nested subvolume to be included in its
> parent's quota, rather than the new subvolume having a whole new quota
> limit?
>
The
Chris Murphy posted on Mon, 27 Mar 2017 15:11:34 -0600 as excerpted:
>> What are actual use cases for creating subvolumes by 'normal' users?
>>
>> Does someone have an example?
>>
>> Why is it possible at all, by default?
>
> I have a single git subvolume in my user directory, inside of which are
At 03/27/2017 08:01 PM, Austin S. Hemmelgarn wrote:
On 2017-03-27 07:02, Moritz Sichert wrote:
On 27.03.2017 at 05:46, Qu Wenruo wrote:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
On 27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM, Moritz Sichert wrote:
Hi,
I tried to conf
On Mon, Mar 27, 2017 at 2:06 PM, Hans van Kranenburg wrote:
> On 03/27/2017 09:53 PM, Roman Mamedov wrote:
>> On Mon, 27 Mar 2017 13:32:47 -0600
>> Chris Murphy wrote:
>>
>>> How about if qgroups are enabled, then non-root users are prevented from
>>> creating new subvolumes?
>>
>> That sounds like
From: Sergei Trofimovich
The easiest way to reproduce the error is to try to build
btrfs-progs with
$ make LDFLAGS=-Wl,--no-undefined
btrfs-list.o: In function `lookup_ino_path':
btrfs-list.c:(.text+0x7d2): undefined reference to `__error'
Noticed by Denis Descheneaux when snapper t
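For readers unfamiliar with the linker option shown above: by default the linker lets a shared object carry unresolved symbols, and -Wl,--no-undefined turns them into hard link-time errors, which is how the stray `__error' reference surfaces. A minimal, self-contained illustration of that failure mode (not the btrfs-progs code; missing_helper and do_lookup are made-up names):

/* mylib.c -- illustration only, not btrfs-list.c.
 * A symbol that is declared but defined nowhere links silently into a
 * shared object by default; -Wl,--no-undefined makes it a hard error. */
extern void missing_helper(void);       /* hypothetical, defined nowhere */

void do_lookup(void)
{
        missing_helper();               /* unresolved reference */
}

Building this with "gcc -shared -fPIC mylib.c -o libmylib.so" succeeds, while adding -Wl,--no-undefined fails with "undefined reference to `missing_helper'", the same class of error reported for `__error' above.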
On 03/27/2017 09:53 PM, Roman Mamedov wrote:
> On Mon, 27 Mar 2017 13:32:47 -0600
> Chris Murphy wrote:
>
>> How about if qgroups are enabled, then non-root users are prevented from
>> creating new subvolumes?
>
>> That sounds like: if you turn your headlights on in a car, then the in-vehicle air
>> con
On Mon, 27 Mar 2017 13:32:47 -0600
Chris Murphy wrote:
> How about if qgroups are enabled, then non-root users are prevented from
> creating new subvolumes?
That sounds like: if you turn your headlights on in a car, then the in-vehicle air
conditioner randomly stops working. :)
Two things only vaguel
How about if qgroups are enabled, then non-root users are prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky problem.
Chris Murphy
On 03/27/2017 12:36 PM, David Sterba wrote:
> On Mon, Mar 27, 2017 at 12:29:57PM -0500, Goldwyn Rodrigues wrote:
>> From: Goldwyn Rodrigues
>>
>> We are facing the same problem with EDQUOT that was experienced with
>> ENOSPC. Not sure if we require a full ticketing system such as the one for ENOSPC, but
>>
On Thu, Mar 16, 2017 at 09:59:38AM +0800, Qu Wenruo wrote:
>
>
> At 03/15/2017 10:38 PM, David Sterba wrote:
> > On Mon, Mar 13, 2017 at 02:32:04PM -0600, ednadol...@gmail.com wrote:
> >> From: Edmund Nadolski
> >>
> >> Define the SEQ_NONE macro to replace (u64)-1 in places where said
> >> value
On Wed, Mar 15, 2017 at 08:35:06AM +0800, Qu Wenruo wrote:
> At 03/14/2017 07:55 PM, David Sterba wrote:
> > On Tue, Mar 14, 2017 at 12:55:06PM +0100, David Sterba wrote:
> >> On Tue, Mar 14, 2017 at 04:05:02PM +0800, Qu Wenruo wrote:
> >>> v2:
> >>> Abstract the original code to read out data in
On Thu, Mar 16, 2017 at 11:18:30AM +0800, Qu Wenruo wrote:
> In the latest Linux API headers, __bitwise is already defined in
> /usr/include/linux/types.h.
>
> So kerncompat.h will re-define __bitwise and cause a gcc warning.
>
> Fix it by checking whether __bitwise is already defined.
>
> Signed-off-by: Q
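The fix described is the usual guard-before-define pattern. A minimal sketch of that approach (only an illustration, not the exact kerncompat.h hunk; the macro body shown is an assumption):

/* Sketch of the guard described above -- not the actual kerncompat.h
 * change.  Only provide a local __bitwise when linux/types.h has not
 * already defined one; the body here is an assumption for illustration. */
#include <linux/types.h>

#ifndef __bitwise
# ifdef __CHECKER__
#  define __bitwise __attribute__((bitwise))
# else
#  define __bitwise
# endif
#endif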
On Mon, Mar 27, 2017 at 12:29:57PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> We are facing the same problem with EDQUOT that was experienced with
> ENOSPC. Not sure if we require a full ticketing system such as the one for ENOSPC, but
> here is a quick fix, which may be too big a hammer.
On Tue, Mar 21, 2017 at 09:00:55AM +0800, Qu Wenruo wrote:
> In check_convert_image(), for the normal HOLE case, if the file extents are
> smaller than the image size, we set ret to -EINVAL and print an error message.
>
> But we forget to return.
>
> This patch adds the missing return to fix it.
>
> Signed-of
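The bug pattern is easy to show in isolation. The sketch below is hypothetical (the real check_convert_image() body is not quoted in this thread); it only demonstrates the set-ret-print-but-fall-through mistake and the added return:

#include <errno.h>
#include <stdio.h>

/* Hypothetical sketch of the bug pattern -- not the real
 * check_convert_image().  Setting ret and printing an error is not
 * enough; without the return the function keeps going as if the
 * check had passed. */
static int check_image_extents(unsigned long long covered,
                               unsigned long long image_size)
{
        int ret = 0;

        if (covered < image_size) {
                fprintf(stderr,
                        "file extents cover %llu bytes, image size is %llu\n",
                        covered, image_size);
                ret = -EINVAL;
                return ret;     /* the missing return the patch adds */
        }

        /* ... further checks would follow here ... */
        return ret;
}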
From: Goldwyn Rodrigues
We are facing the same problem with EDQUOT that was experienced with
ENOSPC. Not sure if we require a full ticketing system such as the one for ENOSPC, but
here is a quick fix, which may be too big a hammer.
Quotas are reserved during the start of an operation, incrementing
qg->rese
On Fri, Mar 17, 2017 at 10:06:33AM +0800, Qu Wenruo wrote:
> In check_extent_data_item(), after checking the extent item of one data
> extent, we search the inlined data backref, then EXTENT_DATA_REF_KEY.
>
> But we didn't search SHARED_DATA_REF, so if the backref is a
> SHARED_DATA_REF, we will raise
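For context, a data extent's backref can be recorded in more than one form, so a checker has to accept either before reporting a missing backref. The schematic below only illustrates that idea: lookup_backref() is a hypothetical stand-in for the real extent-tree search, while the key-type names and values match the on-disk format.

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* On-disk backref key types (values as defined in the btrfs format). */
#define BTRFS_EXTENT_DATA_REF_KEY       178
#define BTRFS_SHARED_DATA_REF_KEY       184

/* Hypothetical stand-in for the real extent-tree search. */
extern bool lookup_backref(u64 extent_start, int key_type, u64 ref_value);

/* A data extent may be referenced by an EXTENT_DATA_REF item (keyed by
 * the owning root/inode) or, when the referencing tree block uses the
 * full-backref form, by a SHARED_DATA_REF item keyed by that block's
 * bytenr -- so both forms have to be checked. */
static bool data_backref_found(u64 extent_start, u64 root_or_inode,
                               u64 parent_bytenr)
{
        if (lookup_backref(extent_start, BTRFS_EXTENT_DATA_REF_KEY,
                           root_or_inode))
                return true;
        /* the previously missed case */
        return lookup_backref(extent_start, BTRFS_SHARED_DATA_REF_KEY,
                              parent_bytenr);
}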
On Sat, Mar 25, 2017 at 09:48:28AM +0300, Denis Kirjanov wrote:
> On 3/25/17, Jeff Mahoney wrote:
> > On 3/24/17 5:02 AM, Denis Kirjanov wrote:
> >> Hi guys,
> >>
> >> Looks like the current code does a GFP_KERNEL allocation inside
> >> __link_block_group.
> >> The function invokes kobject_add and
On Fri, Mar 24, 2017 at 04:48:00PM -0700, Liu Bo wrote:
> Otherwise, we may later skip this page when repairing bad copy from
> good copy.
Can you please enhance the changelog? At least some more pointers about what
and where it could go wrong if the value is not set properly, to get the
rough idea. Tha
On Fri, Mar 24, 2017 at 12:13:42PM -0700, Liu Bo wrote:
> In the raid56 scenario, after trying parity recovery, we didn't set
> mirror_num for the btrfs_bio with the failed mirror_num, hence
> end_bio_extent_readpage() will report a random mirror_num in the dmesg
> log.
>
> Cc: David Sterba
> Signed-off-by: Liu B
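The symptom described (a random mirror_num in the dmesg log) is the classic field-left-unset-on-one-path bug. The toy program below is not the btrfs_bio or end_bio_extent_readpage() code; it only shows how an unset field in a completion record turns into a garbage value in the log, and that the fix is to assign it in the recovery path as well.

#include <stdio.h>
#include <stdlib.h>

/* Toy illustration only -- not the kernel code.  A completion record
 * whose mirror field is filled in on the normal path but forgotten on
 * the recovery path ends up logging whatever was in that memory. */
struct completion_rec {
        int mirror_num;         /* which copy the data came from */
        int failed_mirror;      /* copy that originally failed */
};

static void complete_read(struct completion_rec *rec, int recovered)
{
        if (recovered)
                /* the fix: report the mirror that failed and was
                 * repaired instead of leaving mirror_num unset */
                rec->mirror_num = rec->failed_mirror;
        printf("read completed, mirror %d\n", rec->mirror_num);
}

int main(void)
{
        struct completion_rec *rec = malloc(sizeof(*rec));

        if (!rec)
                return 1;
        rec->failed_mirror = 1;
        complete_read(rec, 1);  /* without the assignment above this
                                 * would print an indeterminate value */
        free(rec);
        return 0;
}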
On Fri, Mar 24, 2017 at 12:13:35PM -0700, Liu Bo wrote:
> Now that scrub can fix data errors with the help of parity for raid56
> profile, repair during read is able to as well.
>
> Although the mirror num in raid56 senario has different meanings, i.e.
(typo: scenario)
> 0 or 1: read data direct
On Fri, Mar 24, 2017 at 10:00:24AM +0800, Qu Wenruo wrote:
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -1065,6 +1065,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
> unsigned int have_csum;
> struct scrub_block *sblocks_for_recheck; /* hold
On Fri, Mar 24, 2017 at 10:00:23AM +0800, Qu Wenruo wrote:
> Unlike mirror-based profiles, RAID5/6 recovery needs to read out the
> whole full stripe.
>
> And if we don't do proper protection, it can easily cause race conditions.
>
> Introduce 2 new functions: lock_full_stripe() and unlock_full_strip
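The patch's own lock_full_stripe()/unlock_full_stripe() implementation is not quoted here, so the sketch below is only a generic, userspace-flavoured illustration of the idea under discussion: every reader or repairer touching the same full stripe serializes on one lock keyed by the full-stripe start, so two racing recoveries cannot rebuild from a half-updated stripe. The hash-bucket scheme and the names are assumptions, not the kernel data structure.

#include <pthread.h>
#include <stdint.h>

/* Generic "one lock per full stripe" illustration -- not the kernel's
 * lock_full_stripe()/unlock_full_stripe().  The logical address is
 * rounded down to its full-stripe start, which is then hashed into a
 * small table of mutexes so all I/O on the same full stripe serializes. */
#define FULL_STRIPE_LOCK_BUCKETS        64

/* GNU C range initializer, matching the kernel-adjacent style here. */
static pthread_mutex_t full_stripe_locks[FULL_STRIPE_LOCK_BUCKETS] = {
        [0 ... FULL_STRIPE_LOCK_BUCKETS - 1] = PTHREAD_MUTEX_INITIALIZER,
};

/* Round a logical offset down to the start of its full stripe;
 * full_stripe_len would be data-stripes * stripe_len in a real setup. */
static uint64_t full_stripe_start(uint64_t logical, uint64_t raid_start,
                                  uint64_t full_stripe_len)
{
        return raid_start +
               (logical - raid_start) / full_stripe_len * full_stripe_len;
}

static pthread_mutex_t *lock_full_stripe_sketch(uint64_t fs_start)
{
        pthread_mutex_t *lock =
                &full_stripe_locks[(fs_start >> 16) % FULL_STRIPE_LOCK_BUCKETS];

        pthread_mutex_lock(lock);
        return lock;
}

static void unlock_full_stripe_sketch(pthread_mutex_t *lock)
{
        pthread_mutex_unlock(lock);
}

A recovery path would compute the full-stripe start with full_stripe_start(), hold the returned lock for the whole read-rebuild-write cycle, and drop it afterwards.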
On Fri, Mar 24, 2017 at 03:04:50PM -0700, Liu Bo wrote:
> Commit 20a7db8ab3f2 ("btrfs: add dummy callback for readpage_io_failed
> and drop checks") made a cleanup around readpage_io_failed_hook, and
> it was supposed to keep the original semantics, but it also
> unexpectedly disabled repair during
Karoly Pados posted on Mon, 27 Mar 2017 14:12:58 + as excerpted:
>> But, even there, a close read says missing tells btrfs to delete the
>> first device described by the filesystem metadata that wasn't present
>> when the filesystem was mounted. And since your case does a remount,
>> not a ful
On Fri, Mar 17, 2017 at 11:51:20PM +0300, Dan Carpenter wrote:
> This isn't super serious because you need CAP_ADMIN to run this code.
>
> I added this integer overflow check last year but apparently I am
> rubbish at writing integer overflow checks... There are two issues.
> First, access_ok() w
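Without quoting the actual patch, the general shape of a robust check before a user-controlled multiplication is worth spelling out. The sketch below is generic and the names are hypothetical; it is not the ioctl code being discussed.

#include <stdint.h>
#include <stdlib.h>

/* Generic sketch of an overflow-safe size computation -- not the ioctl
 * code referenced above.  Check the multiplication *before* doing it;
 * otherwise the product can wrap and slip past a later bounds check. */
static void *alloc_array(uint64_t count, size_t elem_size)
{
        if (elem_size == 0 || count > SIZE_MAX / elem_size)
                return NULL;    /* would overflow size_t */

        return malloc((size_t)count * elem_size);
}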
On Mon, 27 Mar 2017 16:49:47 +0200
Christian Theune wrote:
> Also: the idea of migrating on btrfs also has its downside - the performance
> of “mkdir” and “fsync” is abysmal at the moment. I’m waiting for the current
> shrinking job to finish but this is likely limited to the “find free space”
Hi,
> On Mar 27, 2017, at 4:48 PM, Roman Mamedov wrote:
>
> On Mon, 27 Mar 2017 15:20:37 +0200
> Christian Theune wrote:
>
>> (Background info: we’re migrating large volumes from btrfs to xfs and can
>> only do this step by step: copying some data, shrinking the btrfs volume,
>> extending the
On Mon, 27 Mar 2017 15:20:37 +0200
Christian Theune wrote:
> (Background info: we’re migrating large volumes from btrfs to xfs and can
> only do this step by step: copying some data, shrinking the btrfs volume,
> extending the xfs volume, rinse repeat. If someone should have any
> suggestions to
Hi,
> On Mar 27, 2017, at 4:17 PM, Austin S. Hemmelgarn wrote:
>
> One other thing that I just thought of:
> For a backup system, assuming some reasonable thinning system is used for the
> backups, I would personally migrate things slowly over time by putting new
> backups on the new filesy
March 24, 2017 6:49 AM, "Duncan" <1i5t5.dun...@cox.net> wrote:
> Karoly Pados posted on Thu, 23 Mar 2017 14:07:31 + as excerpted:
>
> [ Kernel 4.9.13, progs 4.9.1:
>
> 1) Mkfs.btrfs a two-device raid1 data/metadata btrfs and mount it.
>
> Don't put any data on it.
>
> 2) Remove a device ph
On 2017-03-27 09:54, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:50 PM, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this infinitely? I.e. does the s
On 2017-03-27 09:50, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
Something I’d like to verify: does having traffic on the volume have
the potential to delay this infinitely? I.e. does the system write
to any segments that we’re trying to free so it m
Hi,
> On Mar 27, 2017, at 3:50 PM, Christian Theune wrote:
>
> Hi,
>
>> On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
>>>
Something I’d like to verify: does having traffic on the volume have
the potential to delay this infinitely? I.e. does the system write
to an
Hi,
> On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn wrote:
>>
>>> Something I’d like to verify: does having traffic on the volume have
>>> the potential to delay this infinitely? I.e. does the system write
>>> to any segments that we’re trying to free so it may have to work on
>>> the sam
On 2017-03-27 09:24, Hugo Mills wrote:
On Mon, Mar 27, 2017 at 03:20:37PM +0200, Christian Theune wrote:
Hi,
On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote:
On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
takes about a minute to move 1 GiB of data. At that rate, it would
On Fri, Mar 17, 2017 at 08:35:01AM +0800, Qu Wenruo wrote:
>
>
> At 03/17/2017 12:33 AM, David Sterba wrote:
> > On Thu, Mar 16, 2017 at 11:18:31AM +0800, Qu Wenruo wrote:
> >> Since the btrfs_reserved_ranges array is just used to store btrfs reserved
> >> ranges, no one will, nor should, modify them a
On Mon, Mar 27, 2017 at 03:20:37PM +0200, Christian Theune wrote:
> Hi,
>
> > On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote:
> >
> > On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
> > takes about a minute to move 1 GiB of data. At that rate, it would
> > take 1000 minutes (
Hi,
> On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote:
>
> On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it
> takes about a minute to move 1 GiB of data. At that rate, it would
> take 1000 minutes (or about 16 hours) to move 1 TiB of data.
>
> However, there are cases where
On 27/03/17 13:00, J. Hart wrote:
> That is a very interesting idea. I'll try some experiments with this.
You might want to look into two tools which I have found useful for
similar backups:
1) rsnapshot -- this uses rsync for backing up multiple systems and has
been stable for quite a long time
On Mon, Mar 27, 2017 at 01:17:26PM +0200, Christian Theune wrote:
> Hi,
>
> I’m currently shrinking a device and it seems that the performance of shrink
> is abysmal. I intended to shrink a ~22TiB filesystem down to 20TiB. This is
> still using LVM underneath so that I can’t just remove a device
> On Mar 27, 2017, at 1:51 PM, Christian Theune wrote:
>
> Hi,
>
> (I hope I’m not double posting. My mail client was misconfigured and I think
> I only managed to send the mail correctly this time.)
Turns out I did double post. Mea culpa.
--
Christian Theune · c...@flyingcircus.io · +49 345
On Sun, Mar 26, 2017 at 09:58:21AM -0400, Chris wrote:
> "make clean-all" leaves generated files under the "kernel-shared" directory.
>
> This causes problems when trying to build under the same tree when the
> absolute path changes.
>
> For example if you mount the source tree in one location,
>
On Mon, Mar 27, 2017 at 08:52:09AM +0800, Qu Wenruo wrote:
> Reported-by: Chris
> Signed-off-by: Qu Wenruo
Applied, thanks.
On 2017-03-27 07:02, Moritz Sichert wrote:
On 27.03.2017 at 05:46, Qu Wenruo wrote:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
On 27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM, Moritz Sichert wrote:
Hi,
I tried to configure qgroups on a btrfs filesystem but was really
surp
That is a very interesting idea. I'll try some experiments with this.
Many thanks for the assistance :-)
J. Hart
On 03/27/2017 01:57 AM, Marat Khalili wrote:
Just a consideration, since I've faced a similar but not exactly the same
problem: use rsync, but create snapshots on the target machine. B
On 2017-03-25 23:00, J. Hart wrote:
I have a Btrfs filesystem on a backup server. This filesystem has a
directory to hold backups for filesystems from remote machines. In this
directory is a subdirectory for each machine. Under each machine
subdirectory is one directory for each filesystem (ex
Hi,
(I hope I’m not double posting. My mail client was misconfigured and I think I
only managed to send the mail correctly this time.)
I’m currently shrinking a device and it seems that the performance of shrink is
abysmal. I intended to shrink a ~22TiB filesystem down to 20TiB. This is still
On Sun, Mar 26, 2017 at 04:37:05PM -0400, Mike Gilbert wrote:
> A Gentoo user reports that snapper broke after upgrading to btrfs-progs-4.10.
>
> snapper: symbol lookup error: /usr/lib64/libbtrfs.so.0: undefined
> symbol: __error
>
> https://bugs.gentoo.org/show_bug.cgi?id=613890
>
> It seems th
Hi,
I’m currently shrinking a device and it seems that the performance of shrink is
abysmal. I intended to shrink a ~22TiB filesystem down to 20TiB. This is still
using LVM underneath so that I can’t just remove a device from the filesystem
but have to use the resize command.
Label: 'backy' u
On 27.03.2017 at 05:46, Qu Wenruo wrote:
>
>
> At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
>> On 27.03.2017 03:39, Qu Wenruo wrote:
>>>
>>>
>>> At 03/26/2017 06:03 AM, Moritz Sichert wrote:
Hi,
I tried to configure qgroups on a btrfs filesystem but was really
surprised that