Vinayak Hegde posted on Thu, 01 Mar 2018 14:56:46 +0530 as excerpted:
> This will happen over and over again until we have completely
> overwritten the original extent, at which point your space usage will go
> back down to ~302g. We split big extents with cow, so unless you've got
> lots of space
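A minimal sketch of the partial overwrite that triggers this behavior, assuming a hypothetical /mnt/btrfs/bigfile backed by one large extent:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	int fd = open("/mnt/btrfs/bigfile", O_WRONLY);

	if (fd < 0)
		return 1;
	memset(buf, 0xaa, sizeof(buf));
	/* CoW: this allocates a fresh 4K extent and only splits the
	 * references to the old extent; the old extent stays fully
	 * allocated until every block of it has been rewritten. */
	pwrite(fd, buf, sizeof(buf), 1024 * 1024);
	fsync(fd);
	close(fd);
	return 0;
}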
On 3/2/18 1:59 PM, Nikolay Borisov wrote:
>
>
> On 2.03.2018 20:46, je...@suse.com wrote:
>> From: Jeff Mahoney
>> @@ -135,8 +141,9 @@ static int cmd_quota_rescan(int argc, char **argv)
>> }
>> }
>>
>> -if (ioctlnum != BTRFS_IOC_QUOTA_RESCAN &&
According to tlv_put()'s prototype, data and attrlen need to be
exchanged in the macro, but it seems all callers are already aware of
this misordering and are therefore not affected.
Signed-off-by: Liu Bo
---
fs/btrfs/send.c | 4 ++--
1 file changed, 2 insertions(+), 2
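A minimal illustration of the misorder, simplified from the description above (this is not the actual send.c code, only the shape of the problem):

struct send_ctx;

/* The macro's parameter names suggest (attrlen, data), but it expands
 * positionally into a function taking (data, len).  Callers that pass
 * the data pointer in "attrlen"'s position and the length in "data"'s
 * position generate correct code -- only the names mislead. */
static int tlv_put(struct send_ctx *sctx, unsigned short attr,
		   const void *data, int len);

#define TLV_PUT(sctx, attrtype, attrlen, data) \
	tlv_put(sctx, attrtype, attrlen, data)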
Rebuild on a missing device is the same as recover: once it's done, the
rbio holds data that is consistent with the on-disk data, so it can be
cached to avoid further reads.
Signed-off-by: Liu Bo
---
fs/btrfs/raid56.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff
Variable "success" is only checked when !sctx->is_dev_replace.
Signed-off-by: Liu Bo
---
fs/btrfs/scrub.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index e3203a1..1b5ce2f 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@
In the last step of scrub_handle_error_block, we try to combine good
copies from all possible mirrors. This works fine for raid1 and raid10,
but not for raid56, which does parity rebuild instead.
If the parity rebuild doesn't come back with correct data that matches
its checksum, in case of replace we'd
async_missing_raid56() is identical to async_read_rebuild().
Signed-off-by: Liu Bo
---
fs/btrfs/raid56.c | 18 +-
1 file changed, 1 insertion(+), 17 deletions(-)
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index bb8a3c5..efb42dc 100644
---
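In sketch form, the cleanup is plain deduplication (the queueing helper below is an assumed stand-in, not taken from raid56.c):

struct btrfs_raid_bio;

/* assumed stand-in for the real work-queueing machinery */
void queue_rebuild_work(struct btrfs_raid_bio *rbio);

static void async_read_rebuild(struct btrfs_raid_bio *rbio)
{
	queue_rebuild_work(rbio);
}

/* async_missing_raid56() had a byte-for-byte identical body; the patch
 * deletes it and points its callers at async_read_rebuild(). */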
In the case of raid56, writes and rebuilds always use BTRFS_STRIPE_LEN
(64K) as the unit; however, scrub_extent() sets blocksize as the unit,
so the rebuild process may be triggered on every block of a single
stripe.
A typical example would be that when we're replacing a disappeared disk,
all reads on the disks
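A sketch of the unit mismatch; BTRFS_STRIPE_LEN is the real 64K constant, while the helper itself is hypothetical:

typedef unsigned int u32;	/* kernel-style alias for the sketch */

#define BTRFS_STRIPE_LEN	(64 * 1024)

/* Hypothetical helper: raid56 scrub/replace should advance in whole
 * 64K stripes so each stripe is rebuilt once, not once per block. */
static u32 scrub_unit(u32 blocksize, int is_raid56)
{
	return is_raid56 ? BTRFS_STRIPE_LEN : blocksize;
}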
On Fri, Mar 2, 2018 at 7:00 PM, Liu Bo wrote:
> On Fri, Mar 02, 2018 at 11:29:33AM -0700, Liu Bo wrote:
>> On Wed, Feb 28, 2018 at 03:56:10PM +0000, fdman...@kernel.org wrote:
>> > From: Filipe Manana
>> >
>> > If we have a file with 2 (or more) hard
On Fri, Mar 2, 2018 at 6:29 PM, Liu Bo wrote:
> On Wed, Feb 28, 2018 at 03:56:10PM +0000, fdman...@kernel.org wrote:
>> From: Filipe Manana
>>
>> If we have a file with 2 (or more) hard links in the same directory,
>> remove one of the hard links, create
On Fri, Mar 02, 2018 at 11:29:33AM -0700, Liu Bo wrote:
> On Wed, Feb 28, 2018 at 03:56:10PM +0000, fdman...@kernel.org wrote:
> > From: Filipe Manana
> >
> > If we have a file with 2 (or more) hard links in the same directory,
> > remove one of the hard links, create a new
On Wed, Feb 28, 2018 at 03:56:10PM +0000, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> If we have a file with 2 (or more) hard links in the same directory,
> remove one of the hard links, create a new file (or link an existing file)
> in the same directory with the
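The scenario sketched as syscalls (paths hypothetical, error handling omitted); a power failure right after the final fsync is what log replay has to survive:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd1 = open("/mnt/dir/foo", O_CREAT | O_WRONLY, 0644);
	int fd2;

	link("/mnt/dir/foo", "/mnt/dir/bar");	/* second hard link */
	sync();					/* commit the transaction */
	unlink("/mnt/dir/bar");			/* drop one hard link */
	/* reuse the removed name for a brand new file... */
	fd2 = open("/mnt/dir/bar", O_CREAT | O_WRONLY, 0644);
	fsync(fd2);				/* ...and fsync it */
	close(fd1);
	close(fd2);
	return 0;
}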
On 2.03.2018 20:46, je...@suse.com wrote:
> From: Jeff Mahoney
>
> This patch adds a new -W option to wait for a rescan without starting a
> new operation. This is useful for things like xfstests where we want
> to do a "btrfs quota enable" and not continue until the
From: Jeff Mahoney
It's unlikely we're going to modify a pathname argument, so codify that
and use const.
Signed-off-by: Jeff Mahoney
---
chunk-recover.c | 4 ++--
cmds-device.c | 2 +-
cmds-fi-usage.c | 6 +++---
cmds-rescue.c | 4 ++--
send-utils.c | 4
From: Jeff Mahoney
The only mechanism we have in the progs for searching qgroups is to load
all of them and filter the results. This works for qgroup show, but it's
pretty wasteful for adding quota information to 'btrfs subvolume show'.
This patch splits out setting up the search
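A hedged sketch of what a targeted lookup boils down to from userspace; the ioctl and key constants are real, the helper is illustrative:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>
#include <linux/btrfs_tree.h>

/* Pin the tree-search key range to one qgroup instead of walking and
 * filtering every qgroup item.  Qgroup info items live in the quota
 * tree under the key (0, BTRFS_QGROUP_INFO_KEY, qgroupid). */
static int search_one_qgroup(int fd, __u64 qgroupid,
			     struct btrfs_ioctl_search_args *args)
{
	struct btrfs_ioctl_search_key *sk = &args->key;

	memset(args, 0, sizeof(*args));
	sk->tree_id = BTRFS_QUOTA_TREE_OBJECTID;
	sk->min_type = sk->max_type = BTRFS_QGROUP_INFO_KEY;
	sk->min_offset = sk->max_offset = qgroupid;
	sk->max_objectid = 0;		/* qgroup items use objectid 0 */
	sk->max_transid = (__u64)-1;
	sk->nr_items = 1;

	return ioctl(fd, BTRFS_IOC_TREE_SEARCH, args);
}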
From: Jeff Mahoney
One of the common requests I receive is for 'df'-like facilities for
subvolume usage. Really, the request is for monitoring tools to be
able to understand when subvolumes may be approaching their quota in the
same manner traditional file systems approach ENOSPC.
From: Jeff Mahoney
We use structures to pass the info and limit items from the kernel, but
store the individual values separately in btrfs_qgroup. We already
have a btrfs_qgroup_limit structure that's used for setting the limit.
This patch introduces a btrfs_qgroup_info
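A sketch of what such a structure could look like, with field names assumed by analogy with the on-disk qgroup info item (the patch's actual layout may differ):

typedef unsigned long long u64;	/* kernel-style alias for the sketch */

struct btrfs_qgroup_info {
	u64 generation;
	u64 referenced;
	u64 referenced_compressed;
	u64 exclusive;
	u64 exclusive_compressed;
};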
From: Jeff Mahoney
This patch reports on the first-level qgroup, if any, associated with
a particular subvolume. It displays the usage and limit, subject
to the usual unit parameters.
Signed-off-by: Jeff Mahoney
---
cmds-subvolume.c | 46
From: Jeff Mahoney
The btrfs qgroup show command currently only exports qgroup IDs,
forcing the user to resolve which subvolume each corresponds to.
This patch adds pathname resolution to qgroup show so that when
the -P option is used, the last column contains the pathname of
From: Jeff Mahoney
In print_single_qgroup_table we check the loop index against
BTRFS_QGROUP_CHILD, but what we really mean is "last column." Since
we have an enum value to indicate the last value, use that instead
of assuming that BTRFS_QGROUP_CHILD is always last.
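In sketch form (the enumerator list is abridged; names assumed to follow the existing BTRFS_QGROUP_* convention):

enum btrfs_qgroup_column_enum {
	BTRFS_QGROUP_QGROUPID,
	BTRFS_QGROUP_RFER,
	BTRFS_QGROUP_EXCL,
	/* ... */
	BTRFS_QGROUP_CHILD,
	BTRFS_QGROUP_ALL,	/* always one past the last column */
};

/* loop bound: i < BTRFS_QGROUP_ALL, never i < BTRFS_QGROUP_CHILD */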
From: Jeff Mahoney
This patch adds a new -W option to wait for a rescan without starting a
new operation. This is useful for things like xfstests where we want
to do a "btrfs quota enable" and not continue until the subsequent
rescan has finished.
In addition to documenting
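From userspace, the wait-only mode boils down to issuing just the wait ioctl; BTRFS_IOC_QUOTA_RESCAN_WAIT is the existing ioctl, the wrapper is a minimal sketch:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>

/* Wait for an in-progress rescan without starting a new one; the
 * ioctl returns once no rescan is running. */
static int quota_rescan_wait(const char *path)
{
	int ret, fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return -1;
	}
	ret = ioctl(fd, BTRFS_IOC_QUOTA_RESCAN_WAIT);
	if (ret < 0)
		perror("BTRFS_IOC_QUOTA_RESCAN_WAIT");
	close(fd);
	return ret;
}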
From: Jeff Mahoney
Hi all -
The following series addresses some usability issues with the qgroups UI.
1) Adds a -W option so we can wait for a rescan to complete without starting one.
2) Adds qgroup information to 'btrfs subvolume show'
3) Adds a -P option to show pathnames for
On 3/2/18 1:39 PM, je...@suse.com wrote:
> From: Jeff Mahoney
>
> Hi all -
>
> The following series addresses some usability issues with the qgroups UI.
>
> 1) Adds a -W option so we can wait for a rescan to complete without starting one.
> 2) Adds qgroup information to 'btrfs
On Wed, Feb 28, 2018 at 03:55:40PM +0000, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> If in the same transaction we rename a special file (fifo, character/block
> device or symbolic link), create a hard link for it that has its old name,
> and then sync the log, we will end
On Wed, Feb 21, 2018 at 03:31:40PM -0800, Howard McLauchlan wrote:
> Btrfs has two mount options for SSD optimizations: ssd and ssd_spread.
> Presently there is an option to disable all SSD optimizations, but there
> isn't an option to disable just ssd_spread.
>
> This patch adds a mount option
On Thu, Mar 01, 2018 at 09:40:41PM +0200, Nikolay Borisov wrote:
>
>
> On 1.03.2018 21:04, Alex Adriaanse wrote:
> > On Feb 16, 2018, at 1:44 PM, Austin S. Hemmelgarn
> > wrote:
...
>
> > [496003.641729] BTRFS: error (device xvdc) in __btrfs_free_extent:7076:
> >
Thanks
My point was to understand whether this action was taken by BTRFS or
autonomously by SCSI.
From your words it seems clear to me that this should go at
KERNEL_DEBUG level instead of KERNEL_NOTICE.
Bye
2018-03-02 16:18 GMT+01:00 David Sterba :
> On Fri, Mar 02, 2018 at 12:37:49PM
On Fri, Mar 02, 2018 at 12:37:49PM +0100, Menion wrote:
> Is it really not a problem? I mean, for some reason BTRFS is
> continuously reading the HDD capacity in an array, and that does not
> seem to be really correct
The message comes from SCSI:
On 2018-03-02 19:00, Filipe Manana wrote:
> On Fri, Mar 2, 2018 at 10:54 AM, Qu Wenruo wrote:
>>
>>
>> On 2018-03-02 18:46, Filipe Manana wrote:
>>> On Fri, Mar 2, 2018 at 5:22 AM, Qu Wenruo wrote:
>>>> Normally when specifying max_inline, we should
Is it really not a problem? I mean, for some reason BTRFS is
continuously reading the HDD capacity in an array, and that does not
seem to be really correct
Bye
2018-02-26 11:07 GMT+01:00 Menion :
> Hi all
> I have recently started to operate an array of 5x8TB HDD (WD RED) in RAID5
>
On Fri, Mar 2, 2018 at 10:54 AM, Qu Wenruo wrote:
>
>
> On 2018-03-02 18:46, Filipe Manana wrote:
>> On Fri, Mar 2, 2018 at 5:22 AM, Qu Wenruo wrote:
>>> Normally when specifying max_inline, we should limit it by the
>>> uncompressed extent size, as
On 2018-03-02 16:37, Nikolay Borisov wrote:
>
>
> On 2.03.2018 10:34, Qu Wenruo wrote:
>>
>>
>> On 2018-03-02 16:21, Misono, Tomohiro wrote:
>>> On 2018/03/02 14:22, Qu Wenruo wrote:
>>>> Btrfs shows the max_inline option in the kernel message, but for
>>>> max_inline=4096, btrfs won't really
On 2018-03-02 18:46, Filipe Manana wrote:
> On Fri, Mar 2, 2018 at 5:22 AM, Qu Wenruo wrote:
>> Normally when specifying max_inline, we should limit it by the
>> uncompressed extent size, as it's the only thing the user can control.
>
> Why does it matter that users can
On Fri, Mar 2, 2018 at 5:22 AM, Qu Wenruo wrote:
> Normally when specifying max_inline, we should limit it by the
> uncompressed extent size, as it's the only thing the user can control.
Why does it matter that users can control it? Will they write less (or
more) data to files
On 2.03.2018 10:34, Qu Wenruo wrote:
>
>
> On 2018-03-02 16:21, Misono, Tomohiro wrote:
>> On 2018/03/02 14:22, Qu Wenruo wrote:
>>> Btrfs shows the max_inline option in the kernel message, but for
>>> max_inline=4096, btrfs won't really inline 4096 bytes of data if
>>> it's not compressed.
>>
On 2018-03-02 16:21, Misono, Tomohiro wrote:
> On 2018/03/02 14:22, Qu Wenruo wrote:
>> Btrfs shows the max_inline option in the kernel message, but for
>> max_inline=4096, btrfs won't really inline 4096 bytes of data if
>> it's not compressed.
>
> Hello,
> I have a question.
>
> man mount(8)
On 2.03.2018 10:21, Misono, Tomohiro wrote:
> On 2018/03/02 14:22, Qu Wenruo wrote:
>> Btrfs shows the max_inline option in the kernel message, but for
>> max_inline=4096, btrfs won't really inline 4096 bytes of data if
>> it's not compressed.
>
> Hello,
> I have a question.
>
> man mount(8)
On 2018/03/02 14:22, Qu Wenruo wrote:
> Btrfs shows the max_inline option in the kernel message, but for
> max_inline=4096, btrfs won't really inline 4096 bytes of data if
> it's not compressed.
Hello,
I have a question.
man mount(8) says:
max_inline=bytes
Specify the maximum
On 2.03.2018 07:22, Qu Wenruo wrote:
> This patchset intends to reduce confusion about "max_inline" mount
> option.
>
> The max_inline mount option has the following problems:
>
> 1) Different behavior for plain and compressed data extents
>    For plain data extents, it's limiting the extent
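A simplified sketch of the two limits being discussed, reduced from (not copied from) the kernel's inline-extent check:

typedef unsigned long long u64;
typedef unsigned int u32;

/* Uncompressed data can only be inlined when it is smaller than the
 * sector size, so on a 4K-sectorsize filesystem max_inline=4096
 * effectively caps uncompressed inline data at 4095 bytes, while
 * compressed extents are bounded by max_inline directly. */
static int can_inline(u64 data_len, u64 max_inline, u32 sectorsize,
		      int compressed)
{
	if (data_len > max_inline)
		return 0;
	if (!compressed && data_len >= sectorsize)
		return 0;
	return 1;
}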