On 2/3/2021 7:17 PM, fdman...@kernel.org wrote:
From: Filipe Manana
During the nocow writeback path, we currently iterate the rbtree of block
groups twice: once for checking if the target block group is RO with the
call to btrfs_extent_readonly(), and once again for getting a nocow
reference o
For compressed reads, we always submit the page read using the page size.
This doesn't work well with subpage, where one page can contain several
sectors.
Such a submission will read a range beyond what we want and cause problems.
Thankfully, to make it subpage compatible, we only need to change how
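
As a rough illustration of why page-sized submission over-reads on subpage setups, here is a minimal, self-contained C sketch (not the btrfs submission code; the 64K page and 4K sectorsize are assumed values):

#include <stdio.h>

/* Assumed values for this sketch: 64K pages (e.g. arm64/ppc64 configs)
 * and a 4K filesystem sectorsize; neither comes from the quoted mail. */
#define PAGE_SIZE_BYTES 65536u
#define SECTORSIZE      4096u

int main(void)
{
    unsigned int offset = 8192; /* hypothetical start of the wanted range */
    unsigned int len = 4096;    /* caller asked for a single sector */

    /* Page-based submission rounds to page boundaries and reads 64K. */
    unsigned int pstart = offset & ~(PAGE_SIZE_BYTES - 1);
    printf("page-based submit:   [%u, %u) -> %u bytes\n",
           pstart, pstart + PAGE_SIZE_BYTES, PAGE_SIZE_BYTES);

    /* Sector-based submission stays within the requested range. */
    unsigned int sstart = offset & ~(SECTORSIZE - 1);
    printf("sector-based submit: [%u, %u) -> %u bytes\n",
           sstart, sstart + len, len);
    return 0;
}

With 4K pages the two calculations coincide, which is why the problem only shows up on subpage configurations.
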
Currently check_compressed_csum() completely relies on sectorsize ==
PAGE_SIZE to do checksum verification for compressed extents.
To make it subpage compatible, this patch will:
- Do extra calculation for the csum range
Since we have multiple sectors inside a page, we need to only hash
the ra
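
A minimal userspace sketch of that per-sector idea (check_one_page_csums() and csum_of() are made-up stand-ins, not the btrfs helpers): hash each sectorsize-sized slice of the page against its own expected checksum instead of hashing the whole page.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder checksum for the sketch; btrfs actually uses crc32c,
 * xxhash, sha256 or blake2b. */
static uint32_t csum_of(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;

    while (len--)
        sum = sum * 31 + *buf++;
    return sum;
}

/* Verify one checksum per sector inside the page instead of assuming
 * sectorsize == PAGE_SIZE; expected[] holds one csum per sector. */
static bool check_one_page_csums(const uint8_t *page, size_t page_size,
                                 size_t sectorsize, const uint32_t *expected)
{
    size_t nr_sectors = page_size / sectorsize;

    for (size_t i = 0; i < nr_sectors; i++) {
        /* Hash only this sector's slice of the page. */
        if (csum_of(page + i * sectorsize, sectorsize) != expected[i])
            return false;
    }
    return true;
}

int main(void)
{
    uint8_t page[8192] = { 1, 2, 3 };   /* pretend 8K "page", 4K sectors */
    uint32_t expected[2] = {
        csum_of(page, 4096),
        csum_of(page + 4096, 4096),
    };

    return check_one_page_csums(page, sizeof(page), 4096, expected) ? 0 : 1;
}
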
During the long subpage development, I forgot to properly re-check the
compression code after just one successful compression read early in
development.
It turns out that, with the current RO support, compressed read needs
more modification to make it work.
Thankfully, the patchset is small, and
On 2021-02-03 21:23, Andrew Luke Nesbit wrote:
On 03/02/2021 19:04, cedric.dew...@eclipso.eu wrote:
I am looking for a way to make a raid 1 of two SSDs, and to be able
to detect corrupted blocks, much like btrfs does. I recall being
told about a month ago to use a specific piece of sof
This patch removes unneeded return variables, returning '0' directly
instead.
It fixes the following warning detected by coccinelle:
./fs/btrfs/extent_map.c:299:5-8: Unneeded variable: "ret". Return "0" on
line 331
./fs/btrfs/disk-io.c:4402:5-8: Unneeded variable: "ret". Return "0" on
line 4410
Reported-b
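
For reference, the shape of the cleanup this class of coccinelle warning asks for looks like this (illustrative functions, not the btrfs ones):

/* Before: a local that is initialized once and never reassigned. */
int demo_before(void)
{
    int ret = 0;

    /* ... nothing on any path assigns ret again ... */
    return ret; /* coccinelle: Unneeded variable: "ret". Return "0" */
}

/* After: return the constant directly. */
int demo_after(void)
{
    /* ... */
    return 0;
}
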
On 4.02.21 at 5:17, Wang Yugui wrote:
> Hi,
>
> I tried to run btrfs misc-next (5.11-rc6 + 81 patches) based on linux LTS
> 5.10.12 with the same other kernel components and the same kernel config.
>
> Better dbench (sync open) results on both Throughput and max_latency.
>
If I understand corr
On Sun, Jan 31, 2021 at 9:50 PM Su Yue wrote:
> On Mon 01 Feb 2021 at 10:35, Qu Wenruo
> wrote:
> > On 2021/1/29 2:39 PM, Erik Jensen wrote:
> >> On Mon, Jan 25, 2021 at 8:54 PM Erik Jensen
> >> wrote:
> >>> On Wed, Jan 20, 2021 at 1:08 AM Erik Jensen
> >>> wrote:
> On Wed, Jan 20, 2021 at
On 2021/2/2 9:37 PM, Anand Jain wrote:
It is much simpler to reproduce. I am using two systems with different
pagesizes to test the subpage readonly support.
On a host with pagesize = 4k.
truncate -s 3g 3g.img
mkfs.btrfs ./3g.img
mount -o loop,compress=zstd ./3g.img /btrfs
xfs_io
Hi,
I tried to run btrfs misc-next (5.11-rc6 + 81 patches) based on linux LTS
5.10.12 with the same other kernel components and the same kernel config.
Better dbench (sync open) results on both Throughput and max_latency.
Operation      Count    AvgLat    MaxLat
On Sat, Jan 30, 2021 at 1:59 AM Patrick Bihlmayer wrote:
>
> Hello together,
>
> today I had an issue with my cache drive on my Unraid Server.
> I used a 500GB SSD as the cache drive.
>
> Unfortunately I added another cache drive (wanted a separate drive for my VMs
> and accidentally added it into the c
On 2021/2/4 5:54 AM, jos...@mailmag.net wrote:
Good Evening.
I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been having
problems mounting on boot without timing out. This causes the system to drop to
emergency mode. I am then able to mount the array in emergency mode and all
On 03/02/2021 21:54, jos...@mailmag.net wrote:
> Good Evening.
>
> I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been having
> problems mounting on boot without timing out. This causes the system to drop
> to emergency mode. I am then able to mount the array in emergency mode an
Hi,
I've been looking a bit into the btrfs space cache and came to the following
conclusions. Please correct me if I'm wrong:
1. The space cache mount option only modifies how the space cache is persisted
and not the in-memory structures (hence why I have a 2.3 GiB
btrfs_free_space_bitmap slab with
Good Evening.
I have a large BTRFS array, (14 Drives, ~100 TB RAW) which has been having
problems mounting on boot without timing out. This causes the system to drop to
emergency mode. I am then able to mount the array in emergency mode and all
data appears fine, but upon reboot it fails again.
On 03/02/2021 19:04, cedric.dew...@eclipso.eu wrote:
I am looking for a way to make a raid 1 of two SSDs, and to be able to detect
corrupted blocks, much like btrfs does. I recall being told about a month
ago to use a specific piece of software for that, but I forgot to make a note
of it
--- Original message ---
From: " "
Date: 03.02.2021 20:04:32
To: ", linux-btrfs"
Subject: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does,
what's that called?
Hi All,
I am looking for a way to make a raid 1 of two SSDs, and to be able to detect
corrupted
Hi All,
I am looking for a way to make a raid 1 of two SSDs, and to be able to detect
corrupted blocks, much like btrfs does. I recall being told about a month
ago to use a specific piece of software for that, but I forgot to make a note
of it, and I can't find it anymore.
What's that c
On Tue, 2021-02-02 at 11:56 +, Filipe Manana wrote:
> On Sun, Jan 31, 2021 at 3:52 PM Roman Anasal | BDSU
> wrote:
> > On Mon, Jan 25, 2021 at 20:51 + Filipe Manana wrote:
> > > On Mon, Jan 25, 2021 at 7:51 PM Roman Anasal <
> > > roman.ana...@bdsu.de>
> > > wrote:
> > > > Second example:
On Fri, Jan 29, 2021 at 11:32:06AM -0500, Josef Bacik wrote:
> On 1/22/21 3:46 PM, Omar Sandoval wrote:
> > From: Omar Sandoval
> >
> > This series adds an API for reading compressed data on a filesystem
> > without decompressing it as well as support for writing compressed data
> > directly to t
Hi,
Here is the dbench (sync open) test result of misc-next (5.11-rc6 + 81 patches):
1. MaxLat dropped from the 1900ms level to the 1000ms level;
that is a good improvement.
2. It is OK that NTCreateX/Rename/Unlink/WriteX have the same level of
MaxLat, because all of them will write something
On 03/02/2021 09:57, Dan Carpenter wrote:
> Hello Naohiro Aota,
>
> The patch 0d57e73ac5ae: "btrfs: mark block groups to copy for
> device-replace" from Jan 26, 2021, leads to the following static
> checker warning:
>
> fs/btrfs/dev-replace.c:505 mark_block_group_to_copy()
> error: do
On Thu, Jan 28, 2021 at 07:25:08PM +0800, Qu Wenruo wrote:
> In read_extent_buffer_pages(), if we failed to lock the page atomically,
> we just exit with return value 0.
>
> This is pretty counter-intuitive, as normally if we can't lock what we
> need, we would return something like -EAGAIN.
>
>
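
A userspace analogue of the calling convention being discussed (a pthread mutex standing in for the page lock; read_pages_nowait() is a made-up name): a non-blocking attempt that cannot take the lock returns -EAGAIN, so the caller can distinguish "nothing was done" from "done successfully".

#include <errno.h>
#include <pthread.h>

/* Sketch only: the kernel code locks pages, not a pthread mutex. */
static int read_pages_nowait(pthread_mutex_t *page_lock)
{
    if (pthread_mutex_trylock(page_lock) != 0)
        return -EAGAIN; /* could not lock atomically; let the caller retry */

    /* ... read the extent buffer pages while the lock is held ... */

    pthread_mutex_unlock(page_lock);
    return 0;
}

int main(void)
{
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    return read_pages_nowait(&lock) == 0 ? 0 : 1;
}
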
On Tue, Jan 26, 2021 at 04:33:44PM +0800, Qu Wenruo wrote:
> Qu Wenruo (18):
> btrfs: merge PAGE_CLEAR_DIRTY and PAGE_SET_WRITEBACK to
> PAGE_START_WRITEBACK
> btrfs: set UNMAPPED bit early in btrfs_clone_extent_buffer() for
> subpage support
> btrfs: introduce the skeleton of btrfs_s
On Tue, Feb 02, 2021 at 10:28:36AM +0800, Qu Wenruo wrote:
>
> Signed-off-by: Qu Wenruo
> Signed-off-by: David Sterba
> ---
> Changelog:
> v5.1:
> - Modify the error paths before calling begin_page_read()
> The error path needs to unlock the page manually.
>
> To David,
>
> The modification
From: Filipe Manana
When we activate a swap file, at btrfs_swap_activate(), we acquire the
exclusive operation lock to prevent the physical location of the swap
file extents from being changed by operations such as balance and device
replace/resize/remove. We also call can_nocow_extent() there, which,
am
From: Filipe Manana
When creating a snapshot, we check if the current number of swap files in
the root is non-zero, and if it is, we error out and warn that we cannot
create the snapshot because there are active swap files.
However, this is racy, because when a task started activation of a swap
From: Filipe Manana
During the nocow writeback path, we currently iterate the rbtree of block
groups twice: once for checking if the target block group is RO with the
call to btrfs_extent_readonly(), and once again for getting a nocow
reference on the block group with a call to btrfs_inc_nocow_w
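
The idea reads roughly like the following self-contained sketch (hypothetical types and helpers, not the btrfs implementation): a single block-group lookup covers both the read-only check and taking the nocow reference.

#include <stdbool.h>
#include <stdio.h>

struct block_group {
    bool ro;            /* block group is read-only (e.g. under scrub) */
    int  nocow_writers; /* outstanding NOCOW writers */
};

/* Stand-in for the rbtree search; the real code looks up by logical address. */
static struct block_group *bg_lookup(struct block_group *bg)
{
    return bg;
}

static bool try_nocow(struct block_group *candidate)
{
    struct block_group *bg = bg_lookup(candidate);  /* one lookup... */

    if (!bg || bg->ro)
        return false;       /* ...covers the read-only check... */
    bg->nocow_writers++;    /* ...and taking the nocow reference */
    return true;
}

int main(void)
{
    struct block_group rw = { .ro = false };
    struct block_group ro = { .ro = true };

    printf("rw group: %s\n", try_nocow(&rw) ? "NOCOW write" : "fall back to COW");
    printf("ro group: %s\n", try_nocow(&ro) ? "NOCOW write" : "fall back to COW");
    return 0;
}
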
From: Filipe Manana
After the two previous patches:
btrfs: avoid checking for RO block group twice during nocow writeback
btrfs: fix race between writes to swap files and scrub
it is no longer used, so just remove it.
Signed-off-by: Filipe Manana
---
fs/btrfs/ctree.h | 1 -
fs/btr
From: Filipe Manana
The following patchset fixes 2 bugs with the swapfile support, where we can
end up falling back to COW when writing to an active swapfile. As a bonus,
it makes the NOCOW write path, for both buffered and direct IO, more efficient
by avoiding repeated work when checkin
On 3.02.21 at 12:51, Filipe Manana wrote:
> On Tue, Feb 2, 2021 at 2:15 PM Wang Yugui wrote:
>>
>> Hi, Filipe Manana
>>
>> There are some dbench (sync mode) results on the same hardware,
>> but with different linux kernels
>>
>> 4.14.200
>> Operation      Count    AvgLat    MaxLat
>> -
On Tue, Feb 2, 2021 at 2:15 PM Wang Yugui wrote:
>
> Hi, Filipe Manana
>
> There are some dbench (sync mode) results on the same hardware,
> but with different linux kernels
>
> 4.14.200
> Operation      Count    AvgLat    MaxLat
>
> WriteX         225281
Hello Naohiro Aota,
The patch 0d57e73ac5ae: "btrfs: mark block groups to copy for
device-replace" from Jan 26, 2021, leads to the following static
checker warning:
fs/btrfs/dev-replace.c:505 mark_block_group_to_copy()
error: double unlocked '&fs_info->trans_lock' (orig line 486)
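
For context, the class of bug this checker message points at looks like the following userspace sketch (a pthread mutex standing in for fs_info->trans_lock; not the actual dev-replace code); the fix is to make every path drop the lock exactly once:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int walk_groups(int bail_out_early)
{
    int ret = 0;

    pthread_mutex_lock(&lock);
    if (bail_out_early) {
        ret = -1;
        pthread_mutex_unlock(&lock);
        /* A buggy variant would fall through to the unlock below and
         * drop the lock a second time; that is what the checker flags. */
        return ret;
    }
    /* ... work on the list while holding the lock ... */
    pthread_mutex_unlock(&lock);
    return ret;
}

int main(void)
{
    walk_groups(1); /* exercise the early-exit path */
    return walk_groups(0) == 0 ? 0 : 1;
}
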