Since we introduced three lists (async, defer, link), there can be
many sqe allocations. A natural idea is to use a kmem_cache to satisfy
these allocations, just as is done for io_kiocb.
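A minimal sketch of the idea, assuming a dedicated cache set up once and used on the submission path (the names below are illustrative, not the patch's actual identifiers):

#include <linux/slab.h>

static struct kmem_cache *sqe_copy_cache;

/* One-time setup: a dedicated, hwcache-aligned slab for copied sqes. */
static int sqe_cache_init(void)
{
	sqe_copy_cache = kmem_cache_create("sqe_copy",
					   sizeof(struct io_uring_sqe), 0,
					   SLAB_HWCACHE_ALIGN, NULL);
	return sqe_copy_cache ? 0 : -ENOMEM;
}

/* Submission path: replaces kmalloc(sizeof(*sqe), GFP_KERNEL)... */
static struct io_uring_sqe *sqe_copy_alloc(void)
{
	return kmem_cache_alloc(sqe_copy_cache, GFP_KERNEL);
}

/* ...and the matching kfree(). */
static void sqe_copy_free(struct io_uring_sqe *sqe)
{
	kmem_cache_free(sqe_copy_cache, sqe);
}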
Signed-off-by: Zhengyuan Liu
---
fs/io_uring.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff -
sq->cached_sq_head and cq->cached_cq_tail are both unsigned int.
If cached_sq_head overflows before cached_cq_tail, then we may
miss a barrier req. As cached_cq_tail always moves following
cached_sq_head, the not-equal (!=) comparison should be enough.
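An illustrative sketch of why the inequality is the safe test with free-running unsigned counters (not the exact io_uring code):

/* seq and cq_tail are free-running unsigned counters. Once seq wraps
 * past UINT_MAX while cq_tail has not, "seq > cq_tail" evaluates false
 * even though the request is still sequenced after the tail. Since the
 * tail only ever catches up to the sequence and never passes it,
 * deferring until the two are equal stays correct across wraparound. */
static bool sequence_defer(unsigned int seq, unsigned int cq_tail)
{
	return seq != cq_tail;	/* was: seq > cq_tail */
}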
Signed-off-by: Zhengyuan Liu
---
fs/io_uring.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
We queue a work item for each req on the defer and link lists without
increasing async->cnt, so we shouldn't decrease it when exiting
from the workqueue, nor should we process such a req in the async list.
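The invariant, sketched with hypothetical names (the real accounting lives in io_uring's async list handling): only paths that incremented the counter may decrement it.

struct async_list_acct { atomic_t cnt; };

/* Reqs queued through the async list bump cnt... */
static void queue_async_req(struct async_list_acct *acct,
			    struct work_struct *work)
{
	atomic_inc(&acct->cnt);		/* paired with the dec on exit */
	schedule_work(work);
}

/* ...but defer/link reqs are queued as plain work items, no increment. */
static void queue_deferred_req(struct work_struct *work)
{
	schedule_work(work);
}

/* So on workqueue exit, decrement only for reqs that came via the list. */
static void async_work_done(struct async_list_acct *acct, bool from_async_list)
{
	if (from_async_list)
		atomic_dec(&acct->cnt);
}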
Signed-off-by: Zhengyuan Liu
---
fs/io_uring.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
Since the inclusion of blk-mq, the elevator= kernel argument has no
longer been considered, making it impossible to specify an elevator
at boot time the way it could be before.
This is fixed by checking the chosen_elevator global variable, which is
populated once the elevator= kernel argument is passed.
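For reference, the boot-time plumbing this relies on looks roughly as follows (chosen_elevator and elevator_setup() live in block/elevator.c; the exact spot where blk-mq should consult the variable is what the patch addresses, and default_elevator() below is only a hypothetical sketch of it):

static char chosen_elevator[ELV_NAME_MAX];

/* Captures "elevator=<name>" from the kernel command line. */
static int __init elevator_setup(char *str)
{
	strncpy(chosen_elevator, str, sizeof(chosen_elevator) - 1);
	return 1;
}
__setup("elevator=", elevator_setup);

/* Sketch: when picking a default scheduler for a new blk-mq queue,
 * prefer the boot-time choice if one was given. */
static struct elevator_type *default_elevator(struct request_queue *q)
{
	if (chosen_elevator[0])
		return elevator_get(q, chosen_elevator, false);
	return elevator_get(q, "mq-deadline", false);
}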
Oh, I forgot to mention there is a git branch of these patches available
here:
https://github.com/Eideticom/blktests nvme_fixes
Logan
Using modinfo fails if the given module is built-in. Instead,
just check for the parameter's existence in sysfs.
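The actual change is a one-line shell test in common/rc; as a conceptual C sketch, the check amounts to probing the standard sysfs path, which exists for loaded and built-in modules alike (module_has_param() is a hypothetical name):

#include <stdio.h>
#include <unistd.h>

static int module_has_param(const char *mod, const char *param)
{
	char path[256];

	/* Present for both loadable and built-in modules, unlike modinfo. */
	snprintf(path, sizeof(path), "/sys/module/%s/parameters/%s",
		 mod, param);
	return access(path, F_OK) == 0;
}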
Signed-off-by: Logan Gunthorpe
---
common/rc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/common/rc b/common/rc
index 49050c71dabf..d48f73c5bf3d 100644
--- a/co
Move all the lines that modprobe nvmet and nvme-loop
into _setup_nvmet() and _cleanup_nvmet() helper functions
and call _cleanup_nvmet() using _register_test_cleanup()
to ensure it's always called after the test terminates.
This will allow us to improve the cleanup of these tests and
not leave the s
This ensures any test that fails or is interrupted will clean up
its subsystems, preventing the system from being left in an
inconsistent state that would fail subsequent tests.
Signed-off-by: Logan Gunthorpe
---
tests/nvme/rc | 43 +++
1 file chang
Tests 003 and 004 do not call nvme disconnect. In most cases the
connection is cleaned up by removing the modules, but it should be made explicit.
Signed-off-by: Logan Gunthorpe
---
tests/nvme/003 | 1 +
tests/nvme/003.out | 1 +
tests/nvme/004 | 1 +
tests/nvme/004.out | 1 +
4 files changed, 4 insertions(+)
Now that the other discovery tests ignore the generation counter value,
create a new test to specifically check that it increments when
subsystems are added or removed from ports and when allow_any_host
is set/unset.
Signed-off-by: Logan Gunthorpe
---
tests/nvme/030 | 76
On test systems with existing nvme drives or built-in modules it may not
be possible to remove nvme-core in order to re-probe it with
multipath=1.
Instead, skip the test if the multipath parameter is not already set
ahead of time.
Note: the multipath parameter of nvme-core is set by default if
CONFIG_NVME_MULTIPATH is enabled.
From: Michael Moese
Several NVMe tests (002, 016, 017) used a pipe to a sed call to filter
the output. This call has been moved into a new filter function in
nvme/rc, and the calls to sed are replaced by this function.
Additionally, the test nvme/016 failed for me due to the Generation
counter being greater
In order to ensure tests properly clean themselves up, even if
they are subject to interruption, add the ability to call a
test-specified function at cleanup time.
Any test can call _register_test_cleanup with the first argument
as a function to call after the test ends or is interrupted
(similar
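The real mechanism is a shell function invoked from the harness's exit path; purely as an illustration of the register-a-cleanup-handler pattern, a rough C analogue (calling exit() from a signal handler is not strictly async-signal-safe, but it conveys the idea):

#include <signal.h>
#include <stdlib.h>

static void (*test_cleanup)(void);

static void run_cleanup(void)
{
	if (test_cleanup)
		test_cleanup();
}

static void on_signal(int sig)
{
	(void)sig;
	exit(1);	/* atexit() handlers, i.e. the cleanup, still run */
}

static void register_test_cleanup(void (*fn)(void))
{
	test_cleanup = fn;
	atexit(run_cleanup);		/* normal termination */
	signal(SIGINT, on_signal);	/* interruption also triggers cleanup */
	signal(SIGTERM, on_signal);
}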
Flushing the char device now results in the warning:
nvme nvme1: using deprecated NVME_IOCTL_IO_CMD ioctl on the char
device!
Instead, call the flush on the namespace.
Signed-off-by: Logan Gunthorpe
---
tests/nvme/015 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
Hi,
This patchset cleans up a number of issues and pain points
I've had with getting the nvme blktests to pass and run cleanly.
The first three patches are meant to fix the Generation Counter
issue that's been discussed before but hasn't been fixed in months.
I primarily use a slightly fixed up p
It is no longer important for correct test functionality to
remove the modules between tests. Therefore, we ignore errors
if the modules are not removed (i.e. if they are built-in).
With this patch, it is now safe to run the tests with the nvmet
modules built-in. This will be more convenient for dev
nvme-cli at some point started printing the error message:
NVMe status: CAP_EXCEEDED: The execution of the command has caused the
capacity of the namespace to be exceeded (0x6081)
This was not accounted for by test 018 and caused it to fail.
This test does not need to test the error mes
Hi Sagi,
Another question, from what I understand from the code, the client
always rdma_writes data on writes (with imm) from a remote pool of
server buffers dedicated to it. Essentially all writes are immediate (no
rdma reads ever). How is that different than using send wrs to a set of
pre-p
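For context, a hedged sketch of how such a write is posted with user-space libibverbs (qp, mr, and the server-advertised remote_addr/rkey are assumed to be set up already; the kernel verbs API differs in detail). The nuance behind the question: a write-with-imm still consumes a pre-posted receive WR on the server for the immediate, but the payload itself lands via RDMA without the server CPU copying it, unlike a send into a pre-posted buffer.

#include <arpa/inet.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Post one RDMA WRITE with immediate into a server-advertised buffer.
 * The immediate (here, the pool slot index) shows up in the server's
 * receive completion, telling it which buffer was just written. */
static int post_write_imm(struct ibv_qp *qp, struct ibv_mr *mr,
			  void *buf, uint32_t len,
			  uint64_t remote_addr, uint32_t rkey,
			  uint32_t slot_index)
{
	struct ibv_sge sge = {
		.addr	= (uintptr_t)buf,
		.length	= len,
		.lkey	= mr->lkey,
	};
	struct ibv_send_wr wr = {
		.opcode		= IBV_WR_RDMA_WRITE_WITH_IMM,
		.sg_list	= &sge,
		.num_sge	= 1,
		.send_flags	= IBV_SEND_SIGNALED,
		.imm_data	= htonl(slot_index),
	};
	struct ibv_send_wr *bad_wr;

	wr.wr.rdma.remote_addr = remote_addr;
	wr.wr.rdma.rkey = rkey;
	return ibv_post_send(qp, &wr, &bad_wr);
}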
Hi Satya,
On Wed, Jul 10, 2019 at 03:56:08PM -0700, Satya Tangirala wrote:
> Introduce fscrypt_set_bio_crypt_ctx for filesystems to call to set up
> encryption contexts in bios, and fscrypt_evict_crypt_key to evict
> the encryption context associated with an inode.
>
> Inline encryption is contro
On 7/11/19 6:59 PM, Martin K. Petersen wrote:
> Hi Chaitanya,
>
>> +static inline sector_t bdev_nr_sects(struct block_device *bdev)
>> +{
>> +	return part_nr_sects_read(bdev->bd_part);
>> +}
> Can bdev end up being NULL in any of the call sites?
>
> Otherwise no objections.
>
Thanks for mentioni
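If any call site can in fact pass NULL, the defensive variant is trivial (hypothetical; whether it is needed is exactly the reviewer's question):

static inline sector_t bdev_nr_sects(struct block_device *bdev)
{
	return bdev ? part_nr_sects_read(bdev->bd_part) : 0;
}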
On 7/12/19 8:25 AM, Tejun Heo wrote:
> Hello, Konstantin.
>
> On Thu, Jul 11, 2019 at 01:19:47PM +0300, Konstantin Khlebnikov wrote:
>> +CONTROL GROUP - BLOCK IO CONTROLLER (BLKIO)
>> +L: cgro...@vger.kernel.org
>> +F: Documentation/cgroup-v1/blkio-controller.rst
>> +F: block/blk-cgroup.c
>> +F
Hello, Konstantin.
On Thu, Jul 11, 2019 at 01:19:47PM +0300, Konstantin Khlebnikov wrote:
> +CONTROL GROUP - BLOCK IO CONTROLLER (BLKIO)
> +L: cgro...@vger.kernel.org
> +F: Documentation/cgroup-v1/blkio-controller.rst
> +F: block/blk-cgroup.c
> +F: include/linux/blk-cgroup.h
> +F: block/
On Fri, Jul 12, 2019 at 2:22 AM Sagi Grimberg wrote:
>
>
> >> My main issues which were raised before are:
> >> - IMO there isn't any justification to this ibtrs layering separation
> >> given that the only user of this is your ibnbd. Unless you are
> >> trying to submit another consumer,
On 19-07-12 10:47:22, Ming Lei wrote:
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e5ef40c603ca..028c5d78e409 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2205,6 +2205,64 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
> return -ENOMEM;
On Fri, Jun 21, 2019 at 12:07 PM Minwoo Im wrote:
>
> We can request task management IOCTL command (MPI2_FUNCTION_SCSI_TASK_MGMT)
> to /dev/mpt3ctl. If the given task_type is either abort task or query
> task, it may need a field named "Initiator Port Transfer Tag to Manage"
> in the IU.
>
> Curre
On 07/11, Josef Bacik wrote:
>
> On Thu, Jul 11, 2019 at 03:40:06PM +0200, Oleg Nesterov wrote:
> > rq_qos_wait() inside the main loop does
> >
> > if (!has_sleeper && acquire_inflight_cb(rqw, private_data)) {
> > finish_wait(&rqw->wait, &data.wq);
> >
> >
Hi Sagi,
> >> Another question, from what I understand from the code, the client
> >> always rdma_writes data on writes (with imm) from a remote pool of
> >> server buffers dedicated to it. Essentially all writes are immediate (no
> >> rdma reads ever). How is that different than using send wrs to