On 2021/02/03 15:58, Anand Jain wrote:
>
>
> On 2/3/2021 2:10 PM, Damien Le Moal wrote:
>> On 2021/02/03 14:22, Anand Jain wrote:
>>> On 1/26/2021 10:24 AM, Naohiro Aota wrote:
Conventional zones do not have a write pointer, so we cannot use it to
determine the allocation offset if a block group contains a conventional
zone.
On 2/3/2021 2:10 PM, Damien Le Moal wrote:
On 2021/02/03 14:22, Anand Jain wrote:
On 1/26/2021 10:24 AM, Naohiro Aota wrote:
Conventional zones do not have a write pointer, so we cannot use it to
determine the allocation offset if a block group contains a conventional
zone.
But instead, we can consider the end of the last allocated extent in the
block group as an allocation offset.
On 2021/02/03 14:22, Anand Jain wrote:
> On 1/26/2021 10:24 AM, Naohiro Aota wrote:
>> Conventional zones do not have a write pointer, so we cannot use it to
>> determine the allocation offset if a block group contains a conventional
>> zone.
>>
>> But instead, we can consider the end of the last allocated extent in the
>> block group as an allocation offset.
On 1/26/2021 10:24 AM, Naohiro Aota wrote:
Conventional zones do not have a write pointer, so we cannot use it to
determine the allocation offset if a block group contains a conventional
zone.
But instead, we can consider the end of the last allocated extent in the
block group as an allocation offset.
02.02.2021 10:53, Hugo Mills wrote:
> On Mon, Feb 01, 2021 at 11:51:06PM +0100, Christoph Anton Mitterer wrote:
>> On Mon, 2021-02-01 at 10:46 +, Hugo Mills wrote:
>>> It'll fail *obviously*. I'm not sure how graceful it is. :)
>>
>> Okay, that doesn't sound like it was very trustworthy... :-
02.02.2021 00:53, Chris Murphy wrote:
> It needs testing but I think -c option can work for this case, because
> the parent on both source and destination are identical, even if the
> new destination (the old source) has an unexpected received subvolume
> uuid.
>
Incremental send requires a base snapshot
On Tue, Feb 02, 2021 at 04:50:26PM +, Johannes Thumshirn wrote:
> On 02/02/2021 15:58, David Sterba wrote:
> >>  static int check_async_write(struct btrfs_fs_info *fs_info,
> >> 			      struct btrfs_inode *bi)
> >>  {
> >> +	if (btrfs_is_zoned(fs_info))
> >> +		return 0;
5 disk raid1 created with Linux 3.18 once upon a time. Most disks have
been replaced through the years and I was about to replace yet another
one with a couple of bad blocks.
Running Linux 5.10.0-2-amd64 #1 SMP Debian 5.10.9-1 (2021-01-20)
x86_64 GNU/Linux. Same problem with Debian 5.9.15-1 (2020-
On 02/02/2021 15:58, David Sterba wrote:
>>  static int check_async_write(struct btrfs_fs_info *fs_info,
>> 			      struct btrfs_inode *bi)
>>  {
>> +	if (btrfs_is_zoned(fs_info))
>> +		return 0;
> This check needs to be after the other ones as zoned is a static per-fs
On Tue, Jan 26, 2021 at 11:25:07AM +0900, Naohiro Aota wrote:
> If more than one IO is issued for one file extent, these IO can be written
> to separate regions on a device. Since we cannot map one file extent to
> such a separate area, we need to follow the "one IO == one ordered extent"
> rule.
>
On Tue, Jan 26, 2021 at 11:25:10AM +0900, Naohiro Aota wrote:
> In ZONED, btrfs uses per-FS zoned_meta_io_lock to serialize the metadata
> write IOs.
>
> Even with this serialization, write bios sent from btree_write_cache_pages
> can be reordered by async checksum workers as these workers are pe
Hi, Filipe Manana
There are some dbench (sync mode) results on the same hardware,
but with different linux kernels:
4.14.200
Operation      Count    AvgLat    MaxLat
WriteX        225281     5.163    82.143
Flush          32161     2.250    62.669
Throughput
It is much simpler to reproduce. I am using two systems with different
pagesizes to test the subpage readonly support.
On a host with pagesize = 4k.
truncate -s 3g 3g.img
mkfs.btrfs ./3g.img
mount -o loop,compress=zstd ./3g.img /btrfs
xfs_io -f -c "pwrite -S 0xab 0 128k" /btrfs/foo
u
Hi, Filipe Manana
> On Tue, Feb 2, 2021 at 5:42 AM Wang Yugui wrote:
> >
> > Hi, Filipe Manana
> >
> > The dbench result with these patches is very good. Thanks a lot.
> >
> > This is the dbench (synchronous mode) result, and then a question.
> >
> > command: dbench -s -t 60 -D /btrfs/ 32
> > mou
On Sun, Jan 31, 2021 at 3:52 PM Roman Anasal | BDSU wrote:
>
> On Mon, Jan 25, 2021 at 20:51 + Filipe Manana wrote:
> > On Mon, Jan 25, 2021 at 7:51 PM Roman Anasal wrote:
> > > Second example:
> > > # case 2: same ino at different path
> > > btrfs subvolume create subvol1
> > > btr
On 2/2/2021 6:23 PM, Qu Wenruo wrote:
On 2021/2/2 5:21 PM, Anand Jain wrote:
Qu,
fstests ran fine on an aarch64 kvm with this patch set.
Do you mean subpage patchset?
With 4K sector size?
No way it can run fine...
No. fstests ran with sectorsize == pagesize == 64k. These aren't
s
On Tue, Feb 2, 2021 at 5:42 AM Wang Yugui wrote:
>
> Hi, Filipe Manana
>
> The dbench result with these patches is very good. Thanks a lot.
>
> This is the dbench (synchronous mode) result, and then a question.
>
> command: dbench -s -t 60 -D /btrfs/ 32
> mount option:ssd,space_cache=v2
> kernel:5
On 2021/2/2 5:21 PM, Anand Jain wrote:
Qu,
fstests ran fine on an aarch64 kvm with this patch set.
Do you mean subpage patchset?
With 4K sector size?
No way it can run fine...
Long enough fsstress can crash the kernel with btrfs_csum_one_bio()
unable to locate the corresponding ordered extent
Qu,
fstests ran fine on an aarch64 kvm with this patch set.
Further, I was running a few hand tests as below, and it fails
with - Unable to handle kernel paging.
Test case looks something like..
On x86_64 create btrfs on a file 11g
copy /usr into /test-mnt stops at enospc
set compression
19 matches