On 1/17/19 11:23 AM, Omar Sandoval wrote:
> On Thu, Jan 10, 2019 at 06:37:13PM +0900, Shin'ichiro Kawasaki wrote:
>> Fio zbd zone mode is necessary for zoned block devices. Introduce the
>> helper function _have_fio_zbd_zonemode() to check that the installed
>> fio version supports the option --zonemode=zbd.
On 1/17/19 11:18 AM, Omar Sandoval wrote:
> On Thu, Jan 10, 2019 at 06:37:12PM +0900, Shin'ichiro Kawasaki wrote:
>> set_scheduler() function defined in common/multipath-over-rdma is useful
>> to set up a specific IO scheduler not only for multipath tests but also
>> for zoned block device tests. Move this function to common/rc to allow its use.
Hi Omar,
On 1/17/19 11:16 AM, Omar Sandoval wrote:
> On Thu, Jan 10, 2019 at 06:37:09PM +0900, Shin'ichiro Kawasaki wrote:
>> The current blktests infrastructure and test cases do not support zoned block
>> devices and no specific test cases exist to test these block devices' special
>> features (zone report and reset, sequential write constraint).
From: Omar Sandoval
Sent: Wednesday, January 16, 2019 6:16 PM
To: Shinichiro Kawasaki
Cc: linux-block@vger.kernel.org; Omar Sandoval; Masato Suzuki; Jens Axboe;
Matias Bjorling; Hannes Reinecke; Mike Snitzer; Martin K . Petersen; Chaitanya
Kulkarni
Subject: Re: [PATCH blktests v2 00/16] Im
Use scsi_debug's dif/dix to cover the block layer's integrity functionality,
so it can serve as a block integrity regression test.
Signed-off-by: Ming Lei
---
tests/block/028 | 42 ++
tests/block/028.out | 9 +
2 files changed, 51 insertions(+)
On 1/16/19 5:40 PM, Omar Sandoval wrote:
On Tue, Jan 15, 2019 at 08:40:41AM -0800, Bart Van Assche wrote:
On Tue, 2019-01-01 at 19:13 -0800, Bart Van Assche wrote:
On 12/4/18 9:47 AM, Josef Bacik wrote:
In order to test io.latency and other cgroup related things we need some
supporting helpers
On Thu, Jan 10, 2019 at 06:37:13PM +0900, Shin'ichiro Kawasaki wrote:
> Fio zbd zone mode is necessary for zoned block devices. Introduce the
> helper function _have_fio_zbd_zonemode() to check that the installed
> fio version supports the option --zonemode=zbd.
Testing version numbers is fragile.
On Thu, Jan 10, 2019 at 06:37:12PM +0900, Shin'ichiro Kawasaki wrote:
> set_scheduler() function defined in common/multipath-over-rdma is useful
> to set up a specific IO scheduler not only for multipath tests but also
> for zoned block device tests. Move this function to common/rc to allow
> its u
On Thu, Jan 10, 2019 at 06:37:09PM +0900, Shin'ichiro Kawasaki wrote:
> The current blktests infrastructure and test cases do not support zoned block
> devices and no specific test cases exist to test these block devices' special
> features (zone report and reset, sequential write constraint). This p
Hi Roger,
On 2019/1/16 10:52 PM, Roger Pau Monné wrote:
> On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote:
>> There is no need to wake up xen_blkif_schedule(), as kthread_stop() already
>> wakes up the kernel thread.
>>
>> Signed-off-by: Dongli Zhang
>> ---
>> drivers/block/
On Tue, Jan 15, 2019 at 08:40:41AM -0800, Bart Van Assche wrote:
> On Tue, 2019-01-01 at 19:13 -0800, Bart Van Assche wrote:
> > On 12/4/18 9:47 AM, Josef Bacik wrote:
> > > In order to test io.latency and other cgroup related things we need some
> > > supporting helpers to setup and tear down cgro
On 1/16/19 6:31 PM, Jeff Moyer wrote:
> Jens Axboe writes:
>
>> On 1/16/19 5:50 PM, Jeff Moyer wrote:
>>> Hi, Jens,
>>>
>>> It looks to me like calling io_uring_register more than once (for either
>>> IORING_REGISTER_BUFFERS or IORING_REGISTER_FILES) will leak the
>>> references taken in previous
Jens Axboe writes:
> On 1/16/19 5:50 PM, Jeff Moyer wrote:
>> Hi, Jens,
>>
>> It looks to me like calling io_uring_register more than once (for either
>> IORING_REGISTER_BUFFERS or IORING_REGISTER_FILES) will leak the
>> references taken in previous calls.
>
> Oops, thanks for that. Let's make i
On 1/16/19 5:50 PM, Jeff Moyer wrote:
> Hi, Jens,
>
> It looks to me like calling io_uring_register more than once (for either
> IORING_REGISTER_BUFFERS or IORING_REGISTER_FILES) will leak the
> references taken in previous calls.
Oops, thanks for that. Let's make it -EBUSY though, everything end
On Wed, 2019-01-16 at 19:54 -0500, Douglas Gilbert wrote:
> On 2019-01-16 6:56 p.m., Bart Van Assche wrote:
> > On Wed, 2019-01-16 at 10:57 -0500, Douglas Gilbert wrote:
> > > The block layer assumes scsi_request:sense is always a valid
> > > pointer. This is set up once in scsi_mq_init_request() a
On 2019-01-16 6:56 p.m., Bart Van Assche wrote:
On Wed, 2019-01-16 at 10:57 -0500, Douglas Gilbert wrote:
The block layer assumes scsi_request:sense is always a valid
pointer. This is set up once in scsi_mq_init_request() and the
containing scsi_cmnd object is used often, being re-initialized
by
Hi, Jens,
It looks to me like calling io_uring_register more than once (for either
IORING_REGISTER_BUFFERS or IORING_REGISTER_FILES) will leak the
references taken in previous calls.
Signed-off-by: Jeff Moyer
---
If this makes sense to you, feel free to just fold this into your
patches w/o any
On 2019/1/17 12:32 AM, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 08, 2019 at 04:24:32PM +0800, Dongli Zhang wrote:
>> oops. Please ignore this v5 patch.
>>
>> I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in
>> the code.
>>
>> I will switch to the WARN() solution and re
On Wed, 2019-01-16 at 10:57 -0500, Douglas Gilbert wrote:
> The block layer assumes scsi_request:sense is always a valid
> pointer. This is set up once in scsi_mq_init_request() and the
> containing scsi_cmnd object is used often, being re-initialized
> by scsi_init_command(). That works unless som
On 1/16/19 4:09 PM, Dave Chinner wrote:
> On Wed, Jan 16, 2019 at 03:21:21PM -0700, Jens Axboe wrote:
>> On 1/16/19 3:09 PM, Dave Chinner wrote:
>>> On Wed, Jan 16, 2019 at 02:20:53PM -0700, Jens Axboe wrote:
On 1/16/19 1:53 PM, Dave Chinner wrote:
I'd be fine with that restriction, espec
On Wed, Jan 16, 2019 at 03:21:21PM -0700, Jens Axboe wrote:
> On 1/16/19 3:09 PM, Dave Chinner wrote:
> > On Wed, Jan 16, 2019 at 02:20:53PM -0700, Jens Axboe wrote:
> >> On 1/16/19 1:53 PM, Dave Chinner wrote:
> >> I'd be fine with that restriction, especially since it can get relaxed
> >> down th
Split the header generation from the (normal) memcpy part if a
bytestring is copied into the command buffer. This allows in-place
generation of the bytestring content. For example, copy_from_user may be
used without an intermediate buffer.
Signed-off-by: Jonas Rabenstein
---
block/sed-opal.c | 2
Every step starts with resetting the cmd buffer as well as the comid and
constructs the appropriate OPAL_CALL command. Consequently, those
actions may be combined into one generic function. One should take care
that the opening and closing tokens for the argument list are already
emitted by cmd_star
Every step ends by calling cmd_finalize (via finalize_and_send)
yet every step adds the token OPAL_ENDLIST on its own. Moving
this into cmd_finalize decreases code duplication.
Co-authored-by: Jonas Rabenstein
Signed-off-by: David Kozub
Signed-off-by: Jonas Rabenstein
---
block/sed-opal.c | 25
This should make no change in functionality.
The formatting changes were triggered by checkpatch.pl.
Signed-off-by: David Kozub
---
block/sed-opal.c | 19 +++
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/block/sed-opal.c b/block/sed-opal.c
index e0de4dd448b3..c8
Originally, each of the opal functions that call next includes
opal_discovery0 in the array of steps. This is superfluous and
can always be done inside next.
Signed-off-by: David Kozub
---
block/sed-opal.c | 88 +++-
1 file changed, 42 insertions(+), 46
response_get_{string,u64} include error handling for the resp argument
being NULL, but response_get_token does not handle this.
Make all three of response_get_{string,u64,token} handle a NULL resp in
the same way.
Co-authored-by: Jonas Rabenstein
Signed-off-by: David Kozub
Signed-off-by: Jonas Rabenst
Add the function address (and, if available, its symbol) to the message when
a step function fails.
Signed-off-by: Jonas Rabenstein
---
block/sed-opal.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/sed-opal.c b/block/sed-opal.c
index 1332547e5a99..4225f23b2165 100644
---
Instead of having multiple places defining the same argument list to get
a specific column of a sed-opal table, provide a generic version and
call it from those functions.
Signed-off-by: Jonas Rabenstein
---
block/opal_proto.h | 2 +
block/sed-opal.c | 132 +--
Although the values of OPAL_UID_LENGTH and OPAL_METHOD_LENGTH are the same,
it is odd to use OPAL_UID_LENGTH in the definition of the methods.
Signed-off-by: Jonas Rabenstein
---
block/sed-opal.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/sed-opal.c b/block/sed-opal
response_get_token was already in place, but its functionality was
duplicated within response_get_{u64,bytestring} with the same error
handling. Unify the handling by reusing response_get_token within the
other functions.
Co-authored-by: Jonas Rabenstein
Signed-off-by: David Kozub
Signed-o
This patch series extends OPAL support: it adds an IOCTL for setting the shadow
MBR done flag, which can be useful for unlocking an OPAL disk on boot, and an
IOCTL for writing to the shadow MBR. Also included are some minor fixes and
improvements.
This series is based on the original work done by
Enable users to mark the shadow mbr as done without completely
deactivating the shadow mbr feature. This may be useful on reboots,
when the power to the disk is not disconnected in between and the shadow
mbr stores the required boot files. Of course, this also saves the
(few) commands required to e
The steps argument is only read by the next function, so it can
be passed directly as an argument rather than via opal_dev.
Normally, steps is an array on the stack, so the pointer stops
being valid when the function that set opal_dev.steps returns.
If opal_dev.steps was not set to NULL before
Allow modification of the shadow mbr. If the shadow mbr is not marked as
done, this data will be presented read only as the device content. Only
after marking the shadow mbr as done and unlocking a locking range the
actual content is accessible.
Co-authored-by: David Kozub
Signed-off-by: Jonas Ra
Check whether the shadow mbr fits in the space provided on the
target. While a proper firmware should handle this case and return an
error, we may prevent problems or even damage with crappy firmware.
Signed-off-by: Jonas Rabenstein
---
block/opal_proto.h | 16
block/sed-opal
All add_token_* functions have a common set of conditions that have to
be checked. Use a common function for those checks in order to avoid
different behaviour as well as code duplication.
Co-authored-by: David Kozub
Signed-off-by: Jonas Rabenstein
Signed-off-by: David Kozub
---
block/sed-opal
On 1/16/19 3:09 PM, Dave Chinner wrote:
> On Wed, Jan 16, 2019 at 02:20:53PM -0700, Jens Axboe wrote:
>> On 1/16/19 1:53 PM, Dave Chinner wrote:
>>> On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
If we have fixed user buffers, we can map them into the kernel when we
setup the
On Wed, Jan 16, 2019 at 02:20:53PM -0700, Jens Axboe wrote:
> On 1/16/19 1:53 PM, Dave Chinner wrote:
> > On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
> >> If we have fixed user buffers, we can map them into the kernel when we
> >> setup the io_context. That avoids the need to do get
On 1/16/19 2:20 PM, Jens Axboe wrote:
> On 1/16/19 1:53 PM, Dave Chinner wrote:
>> On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
>>> If we have fixed user buffers, we can map them into the kernel when we
>>> setup the io_context. That avoids the need to do get_user_pages() for
>>> eac
On 1/16/19 1:53 PM, Dave Chinner wrote:
> On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
>> If we have fixed user buffers, we can map them into the kernel when we
>> setup the io_context. That avoids the need to do get_user_pages() for
>> each and every IO.
> .
>> +
On Wed, Jan 16, 2019 at 10:50:00AM -0700, Jens Axboe wrote:
> If we have fixed user buffers, we can map them into the kernel when we
> setup the io_context. That avoids the need to do get_user_pages() for
> each and every IO.
.
> + return -ENOMEM;
> + } while (atomic_lon
Similarly to how we use the state->ios_left to know how many references
to get to a file, we can use it to allocate the io_kiocb's we need in
bulk.
Signed-off-by: Jens Axboe
---
fs/io_uring.c | 66 ++-
1 file changed, 50 insertions(+), 16 deletions
Add hint on whether a read was served out of the page cache, or if it
hit media. This is useful for buffered async IO, O_DIRECT reads would
never have this set (for obvious reasons).
If the read hit page cache, cqe->flags will have IOCQE_FLAG_CACHEHIT
set.
Signed-off-by: Jens Axboe
---
fs/io_ur
We normally have to fget/fput for each IO we do on a file. Even with
the batching we do, this atomic inc/dec cost adds up.
This adds IORING_REGISTER_FILES, and IORING_UNREGISTER_FILES opcodes
for the io_uring_register(2) system call. The arguments passed in must
be an array of __s32 holding file d
For an ITER_BVEC, we can just iterate the iov and add the pages
to the bio directly. This requires that the caller doesn't release
the pages on IO completion; we add a BIO_HOLD_PAGES flag for that.
The current two callers of bio_iov_iter_get_pages() are updated to
check if they need to release pa
This enables an application to do IO, without ever entering the kernel.
By using the SQ ring to fill in new sqes and watching for completions
on the CQ ring, we can submit and reap IOs without doing a single system
call. The kernel side thread will poll for new submissions, and in case
of HIPRI/pol
If we have fixed user buffers, we can map them into the kernel when we
setup the io_context. That avoids the need to do get_user_pages() for
each and every IO.
To utilize this feature, the application must call io_uring_register()
after having setup an io_uring context, passing in
IORING_REGISTER_
For the upcoming async polled IO, we can't sleep allocating requests.
If we do, then we introduce a deadlock where the submitter already
has async polled IO in-flight, but can't wait for them to complete
since polled requests must be actively found and reaped.
Utilize the helper in the blockdev DIRE
From: Christoph Hellwig
Store the request queue the last bio was submitted to in the iocb
private data in addition to the cookie so that we find the right block
device. Also refactor the common direct I/O bio submission code into a
nice little helper.
Signed-off-by: Christoph Hellwig
Modified
From: Christoph Hellwig
This new method is used to explicitly poll for I/O completion for an
iocb. It must be called for any iocb submitted asynchronously (that
is with a non-null ki_complete) which has the IOCB_HIPRI flag set.
The method is assisted by a new ki_cookie field in struct iocb to
Add a separate io_submit_state structure, to cache some of the things
we need for IO submission.
One such example is file reference batching: we get as
many references as the number of sqes we are submitting, and drop
unused ones if we end up switching files. The assumption here i
Here's v5 of the io_uring interface. Mostly feels like putting some
finishing touches on top of v4, though we do have a few user interface
tweaks because of that.
Arnd was kind enough to review the code with an eye towards 32-bit
compatibility, and that resulted in a few changes. See changelog bel
Add support for a polled io_uring context. When a read or write is
submitted to a polled context, the application must poll for completions
on the CQ ring through io_uring_enter(2). Polled IO may not generate
IRQ completions, hence they need to be actively found by the application
itself.
To use p
Some use cases repeatedly get and put references to the same file, but
the only exposed interface does these one at a time. As each of
these entail an atomic inc or dec on a shared structure, that cost can
add up.
Add fget_many(), which works just like fget(), except it takes an
argument fo
From: Christoph Hellwig
Just call blk_poll on the iocb cookie, we can derive the block device
from the inode trivially.
Reviewed-by: Johannes Thumshirn
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
---
fs/block_dev.c | 10 ++
1 file changed, 10 insertions(+)
diff --git
From: Christoph Hellwig
Add a new fsync opcode, which either syncs a range if one is passed,
or the whole file if the offset and length fields are both cleared
to zero. A flag is provided to use fdatasync semantics, that is only
force out metadata which is required to retrieve the file data, but
The submission queue (SQ) and completion queue (CQ) rings are shared
between the application and the kernel. This eliminates the need to
copy data back and forth to submit and complete IO.
IO submissions use the io_uring_sqe data structure, and completions
are generated in the form of io_uring_cqe
On 1/16/19 9:37 AM, Christoph Hellwig wrote:
> The following changes since commit 8218a55b6b911d396565da4ed5ca8b18bf0d38fb:
>
> sbitmap: Protect swap_lock from hardirq (2019-01-14 21:30:32 -0700)
>
> are available in the Git repository at:
>
> git://git.infradead.org/nvme.git nvme-5.0
>
> f
The following changes since commit 8218a55b6b911d396565da4ed5ca8b18bf0d38fb:
sbitmap: Protect swap_lock from hardirq (2019-01-14 21:30:32 -0700)
are available in the Git repository at:
git://git.infradead.org/nvme.git nvme-5.0
for you to fetch changes up to eda14f8977df052dce3a9c54a6bf8d8f7
On Tue, Jan 08, 2019 at 04:24:32PM +0800, Dongli Zhang wrote:
> oops. Please ignore this v5 patch.
>
> I just realized Linus suggested in an old email not to use BUG()/BUG_ON() in the
> code.
>
> I will switch to the WARN() solution and resend again.
OK. Did I miss it?
The block layer assumes scsi_request:sense is always a valid
pointer. This is set up once in scsi_mq_init_request() and the
containing scsi_cmnd object is used often, being re-initialized
by scsi_init_command(). That works unless some code re-purposes
part of the scsi_cmnd object for something else
On 1/16/19 8:41 AM, Arnd Bergmann wrote:
> On Wed, Jan 16, 2019 at 4:32 PM Jens Axboe wrote:
>>
>> On 1/16/19 8:14 AM, Jens Axboe wrote:
>>> On 1/16/19 3:53 AM, Arnd Bergmann wrote:
On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
> diff --git a/include/linux/syscalls.h b/include/l
On Wed, Jan 16, 2019 at 4:32 PM Jens Axboe wrote:
>
> On 1/16/19 8:14 AM, Jens Axboe wrote:
> > On 1/16/19 3:53 AM, Arnd Bergmann wrote:
> >> On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
> >>
> >>> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> >>> index 542757a4c898..e36
On 1/16/19 8:14 AM, Jens Axboe wrote:
> On 1/16/19 3:53 AM, Arnd Bergmann wrote:
>> On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
>>
>>> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
>>> index 542757a4c898..e36c264d74e8 100644
>>> --- a/include/linux/syscalls.h
>>> +++ b/inc
On 1/16/19 8:16 AM, Arnd Bergmann wrote:
> On Wed, Jan 16, 2019 at 4:12 PM Jens Axboe wrote:
>> On 1/16/19 3:41 AM, Arnd Bergmann wrote:
>>> On Tue, Jan 15, 2019 at 3:55 AM Jens Axboe wrote:
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl
b/arch/x86/entry/syscalls/syscall_32.tbl
>>
On Wed, Jan 16, 2019 at 4:12 PM Jens Axboe wrote:
> On 1/16/19 3:41 AM, Arnd Bergmann wrote:
> > On Tue, Jan 15, 2019 at 3:55 AM Jens Axboe wrote:
> >> diff --git a/arch/x86/entry/syscalls/syscall_32.tbl
> >> b/arch/x86/entry/syscalls/syscall_32.tbl
> >> index 3cf7b533b3d1..194e79c0032e 100644
>
On 1/16/19 3:45 AM, Arnd Bergmann wrote:
> On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
>
>> @@ -132,4 +139,12 @@ struct io_uring_register_buffers {
>> __u32 nr_iovecs;
>> };
>>
>> +struct io_uring_register_files {
>> + union {
>> + __s32 *fds;
>> +
On 1/16/19 3:53 AM, Arnd Bergmann wrote:
> On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
>
>> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
>> index 542757a4c898..e36c264d74e8 100644
>> --- a/include/linux/syscalls.h
>> +++ b/include/linux/syscalls.h
>> @@ -314,6 +314,8 @@
On 1/16/19 3:41 AM, Arnd Bergmann wrote:
> On Tue, Jan 15, 2019 at 3:55 AM Jens Axboe wrote:
>>
>> diff --git a/arch/x86/entry/syscalls/syscall_32.tbl
>> b/arch/x86/entry/syscalls/syscall_32.tbl
>> index 3cf7b533b3d1..194e79c0032e 100644
>> --- a/arch/x86/entry/syscalls/syscall_32.tbl
>> +++ b/ar
On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote:
> There is no need to wake up xen_blkif_schedule(), as kthread_stop() already
> wakes up the kernel thread.
>
> Signed-off-by: Dongli Zhang
> ---
> drivers/block/xen-blkback/xenbus.c | 4 +---
> 1 file changed, 1 insertion(+)
On 16/01/2019 02:54, Martin K. Petersen wrote:
Hi John,
Hi Martin,
So in this case I think that accessor functions are actually better
because they allow us to print a big fat warning when you twiddle
something you shouldn't post-initialization. So that's something I think
we could--and sho
On Wed, Jan 16, 2019 at 09:47:41PM +0800, Dongli Zhang wrote:
> There is no need to wake up xen_blkif_schedule(), as kthread_stop() already
> wakes up the kernel thread.
>
> Signed-off-by: Dongli Zhang
Reviewed-by: Roger Pau Monné
kthread_stop waits for the thread to exit, so it must
On 1/16/19 4:08 AM, Ming Lei wrote:
> We need to pass bio->bi_opf after bio integrity preparation, otherwise
> the flag of REQ_INTEGRITY may not be set on the allocated request, then
> breaks block integrity.
Thanks, applied.
--
Jens Axboe
On Tue, Jan 15, 2019 at 02:20:19PM +0100, Christoph Hellwig wrote:
> On Tue, Jan 15, 2019 at 09:37:42AM +0100, Joerg Roedel wrote:
> > On Mon, Jan 14, 2019 at 01:20:45PM -0500, Michael S. Tsirkin wrote:
> > > Which would be fine especially if we can manage not to introduce a bunch
> > > of indirect
On Wed, Jan 16, 2019 at 09:05:40AM -0500, Michael S. Tsirkin wrote:
> On Tue, Jan 15, 2019 at 02:22:57PM +0100, Joerg Roedel wrote:
> > + max_size = dma_max_mapping_size(&vdev->dev);
> > +
>
>
> Should this be limited to ACCESS_PLATFORM?
>
> I see no reason to limit this without as guest can
>
On Tue, Jan 15, 2019 at 02:22:57PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Segments can't be larger than the maximum DMA mapping size
> supported on the platform. Take that into account when
> setting the maximum segment size for a block device.
>
> Signed-off-by: Joerg Roedel
> ---
There is no need to wake up xen_blkif_schedule(), as kthread_stop() already
wakes up the kernel thread.
Signed-off-by: Dongli Zhang
---
drivers/block/xen-blkback/xenbus.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/drivers/block/xen-blkback/xenbus.c
b/drivers
We need to pass bio->bi_opf after bio integrity preparation, otherwise
the flag of REQ_INTEGRITY may not be set on the allocated request, then
breaks block integrity.
Fixes: f9afca4d367b ("blk-mq: pass in request/bio flags to queue mapping")
Cc: Hannes Reinecke
Cc: Keith Busch
Signed-off-by: Ming
On Wed, Jan 16, 2019 at 11:41 AM Arnd Bergmann wrote:
> > +/*
> > + * IO submission data structure (Submission Queue Entry)
> > + */
> > +struct io_uring_sqe {
> > + __u8opcode; /* type of operation for this sqe */
> > + __u8flags; /* as of now unused */
> > +
On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index 542757a4c898..e36c264d74e8 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -314,6 +314,8 @@ asmlinkage long sys_io_uring_setup(u32 entries,
>
On Tue, Jan 15, 2019 at 3:56 AM Jens Axboe wrote:
> @@ -132,4 +139,12 @@ struct io_uring_register_buffers {
> __u32 nr_iovecs;
> };
>
> +struct io_uring_register_files {
> + union {
> + __s32 *fds;
> + __u64 pad;
> + };
> + __u32 nr_fds;
> +}
On Tue, Jan 15, 2019 at 3:55 AM Jens Axboe wrote:
>
> diff --git a/arch/x86/entry/syscalls/syscall_32.tbl
> b/arch/x86/entry/syscalls/syscall_32.tbl
> index 3cf7b533b3d1..194e79c0032e 100644
> --- a/arch/x86/entry/syscalls/syscall_32.tbl
> +++ b/arch/x86/entry/syscalls/syscall_32.tbl
> @@ -398,3
On Wed, 16 Jan 2019 at 09:52, Krzysztof Kozlowski wrote:
>
> Hi,
>
> On today's next-20190116 I see a bug during boot:
> [ 6.843308] kernel BUG at ../block/bio.c:1833!
> [ 6.847723] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
> ...
> [ 7.543824] [] (bio_spl
Hi,
On today's next-20190116 I see a bug during boot:
[ 6.843308] kernel BUG at ../block/bio.c:1833!
[ 6.847723] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
...
[ 7.543824] [] (bio_split) from [<>] ( (null))
[ 7.549881] Code: 13833b01 11c630bc e1a6 e8bd8070 (e7f0