waiter_lock doesn't need any special lock handling. Also make it static.
Signed-off-by: Benjamin Marzinski
---
multipathd/waiter.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/multipathd/waiter.c b/multipathd/waiter.c
index 2a221465..52793698 100644
--- a/mul
When there are a huge number of paths (> 1), the amount of time that
the checkerloop can hold the vecs lock while checking the paths
can get large enough that it starves other vecs lock users. If
path checking takes long enough, it's possible that uxlsnr threads will
never run. To deal
The uxlsnr clients never block waiting on the vecs->lock. Instead they
register to get woken up and call trylock() when the lock is dropped.
Add code to track when they are registered to get woken up. The
checkerloop now checks if there are waiting uxlsnr clients as well.
Signed-off-by: Benjamin M
Checking if tv_sec is 0 is a holdover from before we had
get_monotonic_time(), when we used to zero out tv_sec on failure.
Also, use normalize_timespec() to simplify setting the sleep time.
Signed-off-by: Benjamin Marzinski
---
multipathd/main.c | 62 -
check_timeout() is called whenever it's time to handle a client, and if
it detects a timeout, it will switch to the CLT_SEND state. However, it
may already be in the CLT_SEND state, and may have already sent the
length, and possibly some data. Restarting the CLT_SEND state will cause
it to restart
If there are a very large number of paths that all get checked at the
same time, it is possible for the path checkers to starve out other
vecs->lock users, like uevent processing. To avoid this, if the path
checkers are taking a long time, checkerloop now occasionally unlocks
and allows other vecs-
Use the uatomic operations to track how many threads are waiting in
lock() for mutex_locks. This will be used by a later patch.
Signed-off-by: Benjamin Marzinski
---
libmultipath/lock.h | 16
multipathd/main.c | 2 +-
2 files changed, 17 insertions(+), 1 deletion(-)
diff --g
On 2022-07-28 15:29, Bart Van Assche wrote:
>> But I am fine with going back to bdev_is_zone_start if you and Damien
>> feel strongly otherwise.
> The "zone start LBA" terminology occurs in ZBC-1, ZBC-2 and ZNS but
> "zone aligned" not. I prefer "zone start" because it is clear,
> unambiguous and
>> if (endio) {
>> int r = endio(ti, bio, &error);
>> switch (r) {
>> @@ -1155,6 +1151,10 @@ static void clone_endio(struct bio *bio)
>> }
>> }
>>
>> +if (static_branch_unlikely(&zoned_enabled) &&
>> +unlikely(bdev_is_zoned(bio->bi_bdev
On 2022-07-28 14:15, David Sterba wrote:
> On Wed, Jul 27, 2022 at 06:22:41PM +0200, Pankaj Raghav wrote:
>> --- a/drivers/md/dm-zoned-target.c
>> +++ b/drivers/md/dm-zoned-target.c
>> @@ -792,6 +792,10 @@ static int dmz_fixup_devices(struct dm_target *ti)
>> return -EI
On 2022-07-27 23:59, Chaitanya Kulkarni wrote:
>> Sequential Write:
>>
>> [truncated benchmark table: results by IO depth, 8 and 16]
On 2022-07-28 05:09, Damien Le Moal wrote:
>> }
>
> This change should go into patch 3. Otherwise, adding patch 3 only will
> break the nvme target zone code.
>
Ok.
>>
>> static unsigned long get_nr_zones_from_buf(struct nvmet_req *req, u32
>> bufsize)
>> diff --git a/include/linux/blkdev.h