On 2017-11-02 12:28, ST wrote:
> On Thu, 2017-11-02 at 19:16 +0300, Marat Khalili wrote:
>> Could somebody among the developers please elaborate on this issue -
>> is checking quota always going to be done by root? If so, btrfs might
>> be a no-go for our use case...
> Not a developer, but sysadmin here:
On Wed, Nov 01, 2017 at 06:01:23PM -0600, Liu Bo wrote:
> This is to reproduce a raid6 reconstruction bug after two drives
> go offline and come back online via hotplug.
>
> Signed-off-by: James Alandt
> Signed-off-by: Liu Bo
I don't have 5 deletable pool
> >> There's one other caveat though, only root can use the qgroup ioctls,
> >> which means that only root can check quotas.
> >
> > Only root can check quotas?! That is really strange. How are users
> > supposed to know they are about to run out of space?... Is this by
> > design, and will
Duncan,
Thanks for the reply (apologies, I am not subscribed to the list).
I compiled 4.14.0-rc7 and tried the balance again with increasing dusage
values, but it errored out again at dusage=60 with ENOSPC. I will try
downgrading to 4.9.x and trying again.
# uname -a
Linux nas 4.14.0-rc7+ #1 SMP
On 11/02/2017 04:26 PM, Martin Raiber wrote:
> On 02.11.2017 16:10 Hans van Kranenburg wrote:
>> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>>> help to have multiple btrfs-cleaner threads? The block layer underneath
>>>
On 02.11.2017 16:10 Hans van Kranenburg wrote:
> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>> help to have multiple btrfs-cleaner threads? The block layer underneath
>> would have higher throughput with more simultaneous
On Thu, 2017-11-02 at 19:16 +0300, Marat Khalili wrote:
> Could somebody among the developers please elaborate on this issue -
> is checking quota always going to be done by root? If so, btrfs might
> be a no-go for our use case...

Not a developer, but sysadmin here: what prevents you from
On 02.11.2017 20:13, Austin S. Hemmelgarn wrote:
>>
>> 2. I want to limit access to sftp, so there will be no custom commands
>> to execute...
> A custom version of the 'quota' command would be easy to add in there.
> In fact, this is really the only option right now, since setting up sudo
> (or
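A minimal sketch of such a wrapper, assuming one subvolume per user under
a fixed path and btrfs-progs at /usr/bin/btrfs (the wrapper name, path and
layout are made up; a real one would also need hardening, e.g. clearing
the environment):

/* btrfs-myquota.c - hypothetical setuid-root helper so an unprivileged
 * sftp user can read the qgroup numbers for their own subvolume. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define USER_SUBVOL "/srv/sftp"    /* assumed: the user's subvolume */

int main(void)
{
    /* euid is root via the setuid bit; the qgroup ioctls behind
     * `btrfs qgroup show` require that. Use an absolute path so a
     * user-controlled PATH cannot hijack the exec. */
    execl("/usr/bin/btrfs", "btrfs", "qgroup", "show", USER_SUBVOL,
          (char *)NULL);
    perror("execl");               /* reached only if exec failed */
    return EXIT_FAILURE;
}

Installed with chown root and chmod 4755, it execs a fixed command on a
compile-time path, so users cannot point it at arbitrary subvolumes.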
On Thu, Nov 2, 2017 at 7:17 AM, Austin S. Hemmelgarn wrote:
>> And the worst performing machine was the one with the most RAM and a
>> fast NVMe drive and top of the line hardware.
>
> Somewhat nonsensically, I'll bet that NVMe is a contributing factor in this
> particular
On 2017-11-02 14:09, Dave wrote:
> On Thu, Nov 2, 2017 at 7:17 AM, Austin S. Hemmelgarn wrote:
> And the worst performing machine was the one with the most RAM and a
> fast NVMe drive and top of the line hardware.
Somewhat nonsensically, I'll bet that NVMe is a contributing
On Thu, Nov 02, 2017 at 11:49:52PM +0800, Eryu Guan wrote:
> On Wed, Nov 01, 2017 at 06:01:23PM -0600, Liu Bo wrote:
> > This is to reproduce a raid6 reconstruction bug after two drives
> > go offline and come back online via hotplug.
> >
> > Signed-off-by: James Alandt
> >
On 2017-11-02 13:50, Lakshmipathi.G wrote:
>>
>> I'll try to reproduce it with FS_CHECK_INTEGRITY enabled.
>>
>
> Okay thanks for the details. Here's the kernel config file which hits
> this issue:
> https://github.com/Lakshmipathi/btrfsqa/blob/master/setup/config/kernel.config#L3906
We feed back IO progress when it falls below 2/3 of the limit obtained
from btrfs_async_submit_limit(); this creates a wait for the writing
process and lets the async submission make progress.
In general the device/transport queue depth is 256, and
btrfs_async_submit_limit() returns 256 times per
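To make the feedback scheme concrete, here is a userspace sketch of the
pattern being described: submitters block once the in-flight count hits
the limit and are only woken after completions bring it back under 2/3 of
it. The names, the pthread plumbing and the fixed limit are illustrative,
not the actual btrfs code:

#include <pthread.h>

#define ASYNC_LIMIT 256                  /* assumed queue depth */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  below_thresh = PTHREAD_COND_INITIALIZER;
static int inflight;

void submit_one(void)
{
    pthread_mutex_lock(&lock);
    while (inflight >= ASYNC_LIMIT)      /* queue full: wait */
        pthread_cond_wait(&below_thresh, &lock);
    inflight++;
    pthread_mutex_unlock(&lock);
    /* ... hand the request to the async submission thread ... */
}

void complete_one(void)                  /* called from the completion path */
{
    pthread_mutex_lock(&lock);
    inflight--;
    /* Wake waiters only once we are under 2/3 of the limit, so the
     * writer makes a burst of progress instead of one request at a
     * time. */
    if (inflight < ASYNC_LIMIT * 2 / 3)
        pthread_cond_broadcast(&below_thresh);
    pthread_mutex_unlock(&lock);
}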
This patch maintains consistency of mode in device get and put. It is
just a cleanup patch; no actual problem was noticed, so there are no
functional changes.
Signed-off-by: Anand Jain
---
v2: commit update
v3: commit update
fs/btrfs/volumes.c | 16
1
On Tue, Oct 31, 2017 at 07:53:47AM +0800, Qu Wenruo wrote:
>
>
> On 2017-10-31 01:14, Liu Bo wrote:
> > First and foremost, here are the problems we have right now,
> >
> > a) %thread_pool is configurable via mount option, however for those
> > 'workers' who really need concurrency, there
On Tue, 31 Oct 2017 20:37:27 -0400, Dave wrote:
> > Also, you can declare the '.firefox/default/' directory to be
> > NOCOW, and that "just works".
>
> The cache is in a separate location from the profiles, as I'm sure you
> know. The reason I suggested a separate
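For reference, NOCOW can also be set programmatically rather than with
chattr +C. A minimal sketch using the standard FS_IOC_SETFLAGS ioctl with
FS_NOCOW_FL (the path is an example; the flag only affects files created
in the directory afterwards, existing files keep their COW data):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_NOCOW_FL */

int main(void)
{
    const char *dir = "/home/user/.cache";    /* example path */
    long flags;
    int fd = open(dir, O_RDONLY | O_DIRECTORY);

    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("GETFLAGS"); return 1; }
    flags |= FS_NOCOW_FL;                      /* disable copy-on-write */
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("SETFLAGS"); return 1; }
    close(fd);
    return 0;
}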
On Wed, 1 Nov 2017 02:51:58 -0400, Dave wrote:
> >
> >> To reconcile those conflicting goals, the only idea I have come up
> >> with so far is to use btrfs send-receive to perform incremental
> >> backups
> >
> > As already said by Romain Mamedov, rsync is viable
Dave wrote:
> Has this been discussed here? Has anything changed since it was written?
I have (more or less) been following the mailing list since this feature
was suggested. I have been drooling over it since, but not much has
happened.
Parity-based redundancy (RAID5/6/triple parity and
Adding the __init macro gives the kernel a hint that this function is
only used during the initialization phase, so its memory resources can
be freed up afterwards.
Signed-off-by: Liu Bo
---
fs/btrfs/compression.h | 2 +-
fs/btrfs/ctree.h | 6 +++---
fs/btrfs/delayed-ref.c | 2 +-
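For anyone unfamiliar with the mechanics: __init places the function in a
section the kernel discards once initialization is done. A toy module
illustrating the pattern (not the actual functions touched by this patch):

#include <linux/init.h>
#include <linux/module.h>

static int __init demo_setup_caches(void)   /* text discarded after init */
{
    pr_info("caches set up\n");
    return 0;
}

static int __init demo_init(void)
{
    /* calling one __init function from another is fine: both run, and
     * are freed, during the initialization phase */
    return demo_setup_caches();
}

static void __exit demo_exit(void)
{
    pr_info("bye\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");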
[BUG]
If we run btrfs with CONFIG_BTRFS_FS_RUN_SANITY_TESTS=y, it will
instantly cause kernel panic like:
--
...
assertion failed: 0, file: fs/btrfs/disk-io.c, line: 3853
...
Call Trace:
btrfs_mark_buffer_dirty+0x187/0x1f0 [btrfs]
setup_items_for_insert+0x385/0x650 [btrfs]
On 25.10.2017 17:59, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> This implements support for the zero range operation of fallocate. For now
> at least it's as simple as possible while reusing most of the existing
> fallocate and hole punching infrastructure.
>
>
> >>>
> >>> Ok. I'll use more standard approaches. Which of the following
> >>> commands will work with BTRFS:
> >>>
> >>> https://debian-handbook.info/browse/stable/sect.quotas.html
> >> None, qgroups are the only option right now with BTRFS, and it's pretty
> >> likely to stay that way since the
On 30.10.2017 19:14, Liu Bo wrote:
> First and foremost, here are the problems we have right now,
>
> a) %thread_pool is configurable via mount option, however for those
> 'workers' who really need concurrency, there are at most
> "min(num_cpus+2, 8)" threads running to process work,
I think it is just a matter of a lack of resources.
The very few paid developers working on btrfs probably do not have
parity raid as a priority.
(And honestly, parity raid is probably much better implemented below
the filesystem in any case, i.e. in say the md driver or the array
itself.)
On 31.10.2017 19:44, David Sterba wrote:
> We take the fs_devices::device_list_mutex mutex in write_all_supers
> which will prevent any add/del changes to the device list. Therefore we
> don't need to use the RCU variant list_for_each_entry_rcu in any of the
> called functions.
>
>
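A userspace analogy of the argument, with illustrative types: if every
writer to the list must hold the same mutex, a reader that also takes
that mutex can use plain traversal; the RCU read-side variants only
matter for lock-free readers.

#include <pthread.h>
#include <stdio.h>

struct device { struct device *next; const char *name; };

static pthread_mutex_t device_list_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct device *device_list;   /* add/del only under the mutex */

void write_all_supers_analogy(void)
{
    struct device *d;

    pthread_mutex_lock(&device_list_mutex);
    /* No add/del can happen while we hold the mutex, so a plain
     * traversal is safe here - no RCU needed. */
    for (d = device_list; d; d = d->next)
        printf("writing super on %s\n", d->name);
    pthread_mutex_unlock(&device_list_mutex);
}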
Okay, thanks. I'll disable the above-mentioned entry in the kernel config
and run the tests.
I will get back with updated results.
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org
On 2.11.2017 09:04, Qu Wenruo wrote:
> [BUG]
> If we run btrfs with CONFIG_BTRFS_FS_RUN_SANITY_TESTS=y, it will
> instantly cause kernel panic like:
>
> --
> ...
> assertion failed: 0, file: fs/btrfs/disk-io.c, line: 3853
> ...
> Call Trace:
> btrfs_mark_buffer_dirty+0x187/0x1f0 [btrfs]
>
On 31.10.2017 19:44, David Sterba wrote:
> This fixes potential bio leaks in several error paths. Unfortunately
> the device structure freeing is open-coded in many places and I missed
> them when introducing the flush_bio.
>
> Most of the time, devices get freed through call_rcu(...,
On Thu, Nov 2, 2017 at 7:07 AM, Austin S. Hemmelgarn wrote:
> On 2017-11-01 21:39, Dave wrote:
>> I'm going to make this change now. What would be a good way to
>> implement this so that the change applies to the $HOME/.cache of each
>> user?
>>
>> The simple way would be to
On Thu, Nov 2, 2017 at 5:16 PM, Kai Krakow wrote:
>
> You may want to try btrfs autodefrag mount option and see if it
> improves things (tho, the effect may take days or weeks to apply if you
> didn't enable it right from the creation of the filesystem).
>
> Also,
After disabling FS_CHECK_INTEGRITY and FS_RUN_SANITY_TESTS in the
config, the issue is now resolved. Thanks.
cast: https://asciinema.org/a/Dy6eHhhWPEIxotVbVUBWrhFeF
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org
On Thu, Nov 2, 2017 at 12:55 PM, Lakshmipathi.G
On 1.11.2017 14:14, Qu Wenruo wrote:
> For the following types, we have items with variable length:
> (With BTRFS_ prefix and _KEY suffix snipped)
>
> DIR_ITEM
> DIR_INDEX
> XATTR_ITEM
> INODE_REF
> INODE_EXTREF
> ROOT_REF
> ROOT_BACKREF
>
> They all use @name_len to indicate their name
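To illustrate the kind of check being discussed, here is a standalone
sketch of validating a name_len field against the containing item's size
before the name bytes are read. The struct is an illustrative stand-in,
not the real on-disk format:

#include <stdint.h>
#include <string.h>

struct name_item_hdr {               /* illustrative stand-in */
    uint16_t name_len;
    /* name bytes follow immediately after the header */
};

/* Returns 1 if the name fits inside the item, 0 if it is corrupted. */
static int name_len_is_valid(const void *item, uint32_t item_size)
{
    struct name_item_hdr hdr;

    if (item_size < sizeof(hdr))
        return 0;                    /* item too small for a header */
    memcpy(&hdr, item, sizeof(hdr)); /* avoid unaligned access */
    /* the name must end within the item boundary */
    return (uint32_t)hdr.name_len <= item_size - sizeof(hdr);
}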
Well, it looks like things have stabilized, for the moment at least.
$ btrfs scrub start --offline --progress /dev/disk/by-id/XX3
Doing offline scrub [o] [681/681]
Scrub result:
Tree bytes scrubbed: 5234425856
Tree extents scrubbed: 638968
Data bytes scrubbed: 4353724284928
Data extents
On 2017-11-02 20:12, Nikolay Borisov wrote:
>
>
> On 1.11.2017 14:14, Qu Wenruo wrote:
>> For the following types, we have items with variable length:
>> (With BTRFS_ prefix and _KEY suffix snipped)
>>
>> DIR_ITEM
>> DIR_INDEX
>> XATTR_ITEM
>> INODE_REF
>> INODE_EXTREF
>> ROOT_REF
>>
On 2.11.2017 08:11, Anand Jain wrote:
> This patch maintains consistency of mode in device get and put. It is
> just a cleanup patch; no actual problem was noticed, so there are no
> functional changes.
>
> Signed-off-by: Anand Jain
> ---
Reviewed-by: Nikolay Borisov
On 1.11.2017 14:14, Qu Wenruo wrote:
> For the following types, we have items with variable length:
> (With BTRFS_ prefix and _KEY suffix snipped)
>
> DIR_ITEM
> DIR_INDEX
> XATTR_ITEM
> INODE_REF
> INODE_EXTREF
> ROOT_REF
> ROOT_BACKREF
>
> They all use @name_len to indicate their name
On 2017-11-02 20:06, Nikolay Borisov wrote:
>
>
> On 1.11.2017 14:14, Qu Wenruo wrote:
>> For the following types, we have items with variable length:
>> (With BTRFS_ prefix and _KEY suffix snipped)
>>
>> DIR_ITEM
>> DIR_INDEX
>> XATTR_ITEM
>> INODE_REF
>> INODE_EXTREF
>> ROOT_REF
>>
On 2.11.2017 14:37, Qu Wenruo wrote:
>
>
> On 2017-11-02 20:06, Nikolay Borisov wrote:
>>
>>
>> On 1.11.2017 14:14, Qu Wenruo wrote:
>>> For the following types, we have items with variable length:
>>> (With BTRFS_ prefix and _KEY suffix snipped)
>>>
>>> DIR_ITEM
>>> DIR_INDEX
>>>
On 2.11.2017 14:48, Qu Wenruo wrote:
>
>
> On 2017-11-02 20:12, Nikolay Borisov wrote:
>>
>>
>> On 1.11.2017 14:14, Qu Wenruo wrote:
>>> For the following types, we have items with variable length:
>>> (With BTRFS_ prefix and _KEY suffix snipped)
>>>
>>> DIR_ITEM
>>> DIR_INDEX
>>>
On 2017-11-02 05:09, ST wrote:
> Ok. I'll use more standard approaches. Which of the following commands
> will work with BTRFS:
> https://debian-handbook.info/browse/stable/sect.quotas.html
None, qgroups are the only option right now with BTRFS, and it's pretty
likely to stay that way since the
On 2017-11-02 03:29, ronnie sahlberg wrote:
> I think it is just a matter of a lack of resources.
> The very few paid developers working on btrfs probably do not have
> parity raid as a priority.
> (And honestly, parity raid is probably much better implemented below
> the filesystem in any case, i.e.
Looks good.
Reviewed-by: Anand Jain
Thanks, Anand
On 11/01/2017 01:44 AM, David Sterba wrote:
> Signed-off-by: David Sterba
> ---
> fs/btrfs/volumes.c | 39 +++
> 1 file changed, 11 insertions(+), 28 deletions(-)
On 31.10.2017 19:44, David Sterba wrote:
> Overview of the main locks protecting various device-related structures.
>
> Signed-off-by: David Sterba
> ---
> fs/btrfs/volumes.c | 66 ++
> 1 file changed, 66 insertions(+)
>
On 11/01/2017 01:44 AM, David Sterba wrote:
> This fixes potential bio leaks in several error paths. Unfortunately
> the device structure freeing is open-coded in many places and I missed
> them when introducing the flush_bio.
> Most of the time, devices get freed through call_rcu(..., free_device),
On 2017-11-01 21:39, Dave wrote:
> On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn wrote:
> The cache is in a separate location from the profiles, as I'm sure you
> know. The reason I suggested a separate BTRFS subvolume for
> $HOME/.cache is that this will prevent the
On 2017-11-01 20:09, Dave wrote:
> On Wed, Nov 1, 2017 at 1:48 PM, Peter Grandi wrote:
> When defragmenting individual files on a BTRFS filesystem with
> COW, I assume reflinks between that file and all snapshots are
> broken. So if there are 30 snapshots on that volume,
On 2017-11-02 20:56, Nikolay Borisov wrote:
>
>
> On 2.11.2017 14:48, Qu Wenruo wrote:
>>
>>
>> On 2017-11-02 20:12, Nikolay Borisov wrote:
>>>
>>>
>>> On 1.11.2017 14:14, Qu Wenruo wrote:
>>>> For the following types, we have items with variable length:
>>>> (With BTRFS_ prefix and _KEY
Hi,
snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
Regards,
Martin Raiber
On 2017-11-02 11:02, Martin Raiber wrote:
> Hi,
> snapshot cleanup is a little slow in my case (50TB volume). Would it
> help to have multiple btrfs-cleaner threads? The block layer underneath
> would have higher throughput with more simultaneous read/write requests.
I think a bigger impact would be
Hi Martin,
On 11/02/2017 04:02 PM, Martin Raiber wrote:
>
> snapshot cleanup is a little slow in my case (50TB volume). Would it
> help to have multiple btrfs-cleaner threads? The block layer underneath
> would have higher throughput with more simultaneous read/write requests.
Just curious:
*
On 02/11/17 04:39, Dave wrote:
> I'm going to make this change now. What would be a good way to
> implement this so that the change applies to the $HOME/.cache of each
> user?
I'd make each user's .cache a symlink (that should work, but if it
doesn't, then a bind mount) to a per-user directory in some
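If you go the bind-mount route, the mount(2) call itself is simple. A
minimal sketch with example paths; it needs root and assumes the per-user
directory lives outside the snapshotted subvolume:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* assumed layout: /var/cache/users/<name> is not snapshotted */
    if (mount("/var/cache/users/alice", "/home/alice/.cache",
              NULL, MS_BIND, NULL) < 0) {
        perror("mount");
        return 1;
    }
    return 0;
}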
On Thu, Nov 2, 2017 at 4:46 PM, Kai Krakow wrote:
> On Wed, 1 Nov 2017 02:51:58 -0400, Dave wrote:
>
>> >
>> >> To reconcile those conflicting goals, the only idea I have come up
>> >> with so far is to use btrfs send-receive to perform
BTW, it would be better to see if this patch also solves your problem.
https://patchwork.kernel.org/patch/10038011/
Thanks,
Qu
On 2017-11-03 12:00, Lakshmipathi.G wrote:
> After disabling FS_CHECK_INTEGRITY and FS_RUN_SANITY_TESTS in the
> config, the issue is now resolved. Thanks.
>
> cast: https://asciinema.org/a/Dy6eHhhWPEIxotVbVUBWrhFeF
> BTW, it would be better to see if this patch also solves your problem.
>
> https://patchwork.kernel.org/patch/10038011/
Okay sure, let me apply this patch and check the results.
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org