From: Eric Biggers
bio_crypt_set_ctx() assumes its gfp_mask argument always includes
__GFP_DIRECT_RECLAIM, so that the mempool_alloc() will always succeed.
For now this assumption is still fine, since no callers violate it.
Making bio_crypt_set_ctx() able to fail would add unneeded complexity.
This series makes allocation of encryption contexts either able to fail,
or explicitly require __GFP_DIRECT_RECLAIM (via WARN_ON_ONCE).
This applies to linux-block/for-next.
Changed since v1
(https://lkml.kernel.org/r/20200902051511.79821-1-ebigg...@kernel.org):
- Added patches 2 and 3.
From: Eric Biggers
bio_crypt_clone() assumes its gfp_mask argument always includes
__GFP_DIRECT_RECLAIM, so that the mempool_alloc() will always succeed.
However, bio_crypt_clone() might be called with GFP_ATOMIC via
setup_clone() in drivers/md/dm-rq.c, or with GFP_NOWAIT via
kcryptd_io_read()
From: Eric Biggers
blk_crypto_rq_bio_prep() assumes its gfp_mask argument always includes
__GFP_DIRECT_RECLAIM, so that the mempool_alloc() will always succeed.
However, blk_crypto_rq_bio_prep() might be called with GFP_ATOMIC via
setup_clone() in drivers/md/dm-rq.c.
This case isn't currently
On Tue, Sep 15 2020 at 9:48pm -0400,
Ming Lei wrote:
> On Tue, Sep 15, 2020 at 09:28:14PM -0400, Mike Snitzer wrote:
> > On Tue, Sep 15 2020 at 9:08pm -0400,
> > Ming Lei wrote:
> >
> > > On Tue, Sep 15, 2020 at 01:23:57PM -0400, Mike Snitzer wrote:
> > > > blk_queue_split() has become
On Tue, Sep 15, 2020 at 09:28:14PM -0400, Mike Snitzer wrote:
> On Tue, Sep 15 2020 at 9:08pm -0400,
> Ming Lei wrote:
>
> > On Tue, Sep 15, 2020 at 01:23:57PM -0400, Mike Snitzer wrote:
> > > blk_queue_split() has become compulsory from .submit_bio -- regardless
> > > of whether it is
On Tue, Sep 15 2020 at 9:08pm -0400,
Ming Lei wrote:
> On Tue, Sep 15, 2020 at 01:23:57PM -0400, Mike Snitzer wrote:
> > blk_queue_split() has become compulsory from .submit_bio -- regardless
> > of whether it is recursing. Update DM core to always call
> > blk_queue_split().
> >
> >
On Tue, Sep 15, 2020 at 01:23:57PM -0400, Mike Snitzer wrote:
> blk_queue_split() has become compulsory from .submit_bio -- regardless
> of whether it is recursing. Update DM core to always call
> blk_queue_split().
>
> dm_queue_split() is removed because __split_and_process_bio() handles
>
Add fallback code to get the uid for dasd devices from sysfs. Copied
from dasdinfo
Signed-off-by: Benjamin Marzinski
---
libmultipath/defaults.h | 1 +
libmultipath/discovery.c | 37 ++++++++++++++++++++++++++++++++++++-
2 files changed, 37 insertions(+), 1 deletion(-)
diff --git
This library allows other programs to check if a path should be claimed
by multipath. It exports an init and an exit function, a pointer to a
struct config that stores the configuration handled by those init and
exit functions, and two more functions.
mpath_get_mode() gets the configured
Setting this option to yes will force multipath to get the uid by using
the fallback sysfs methods, instead of getting it from udev. This will
cause devices that can't get their uid from the standard locations to
not get a uid. It will also disable uevent merging.
It will not stop uevents from
The main part of this patchset is the first patch, which adds a
new library interface to check whether devices are valid paths. This
was designed for use in the Storage Instantiation Daemon (SID).
https://github.com/sid-project
Hopefully, I've removed all the controversial bits from the last
On Thu, Sep 10, 2020 at 09:56:11PM +0200, mwi...@suse.com wrote:
> From: Martin Wilck
>
> setup_map() is called both for new maps (e.g. from coalesce_paths())
> and existing maps (e.g. from reload_map(), resize_map()). In the former
> case, the map will be removed from global data structures, so
Like 'io_opt', blk_stack_limits() should stack 'chunk_sectors' using
lcm_not_zero() rather than min_not_zero() -- otherwise the final
'chunk_sectors' could result in sub-optimal alignment of IO to
component devices in the IO stack.
Also, if 'chunk_sectors' isn't a multiple of
blk_queue_split() has become compulsory from .submit_bio -- regardless
of whether it is recursing. Update DM core to always call
blk_queue_split().
dm_queue_split() is removed because __split_and_process_bio() handles
splitting as needed.
Signed-off-by: Mike Snitzer
---
drivers/md/dm.c | 45
If a target sets ti->max_io_len it must be used when stacking the
DM device's queue_limits to establish a 'chunk_sectors' that is
compatible with the IO stack.
By using lcm_not_zero() care is taken to avoid blindly overriding the
chunk_sectors limit stacked up by blk_stack_limits().
Signed-off-by: Mike
It is possible for a block device to use a non-power-of-2 chunk
size, which results in a full-stripe size that is also not a
power-of-2.
Update blk_queue_chunk_sectors() and blk_max_size_offset() to
accommodate drivers that need a non power-of-2 chunk_sectors.
Signed-off-by: Mike Snitzer
Hi,
This v2 drops a patch from v1 and fixes the chunk_sectors check added to
blk_stack_limits to convert chunk_sectors to bytes before comparing with
physical_block_size.
Jens, please feel free to pick up patches 1 and 2.
DM patches 3 and 4 are provided just to give context for how DM will be
On Mon, Sep 14 2020 at 9:33pm -0400,
Mike Snitzer wrote:
> On Thu, Sep 10 2020 at 3:29pm -0400,
> Vijayendra Suman wrote:
>
> > Hello Mike,
> >
> > I checked with upstream, performance measurement is similar and
> > shows performance improvement when
> >
On Tue, Sep 15, 2020 at 01:16:19PM +0200, Martin Wilck wrote:
> On Sat, 2020-08-22 at 00:42 +0200, mwi...@suse.com wrote:
> > From: Martin Wilck
> >
> > Hi Christophe, hi Ben,
> >
> > embarrassingly, it turns out that my unit test code for the bitfield
> > code was broken in various ways, which
Replace the two negative flags that are always used together with a
single positive flag that indicates the writeback capability instead
of two related non-capabilities. Also remove the pointless wrappers
to just check the flag.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
Replace BDI_CAP_NO_ACCT_WB with a positive BDI_CAP_WRITEBACK_ACCT to
make the checks more obvious. Also remove the pointless
bdi_cap_account_writeback wrapper that just obfuscates the check.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
fs/fuse/inode.c | 3
BDI_CAP_STABLE_WRITES is one of the few bits of information in the
backing_dev_info shared between the block drivers and the writeback code.
To help untangling the dependency replace it with a queue flag and a
superblock flag derived from it. This also helps with the case of e.g.
a file
There is no point in trying to call bdev_read_page if SWP_SYNCHRONOUS_IO
is not set, as the device won't support it.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
mm/page_io.c | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git
BDI_CAP_SYNCHRONOUS_IO is only checked in the swap code, and used to
decide if ->rw_page can be used on a block device. Just check for
the method instead. The only complication is that zram needs a second
set of block_device_operations as it can switch between modes that
actually support
Just checking SB_I_CGROUPWB for cgroup writeback support is enough.
Either the file system allocates its own bdi (e.g. btrfs), in which case
it is known to support cgroup writeback, or the bdi comes from the block
layer, which always supports cgroup writeback.
Signed-off-by: Christoph Hellwig
Drivers shouldn't really mess with the readahead size, as that is a VM
concept. Instead set it based on the optimal I/O size by lifting the
algorithm from the md driver when registering the disk. Also set
bdi->io_pages there as well by applying the same scheme based on
max_sectors.
Hi Jens,
this series contains a bunch of different BDI cleanups. The biggest item
is to isolate block drivers from the BDI in preparation of changing the
lifetime of the block device BDI in a follow up series.
Changes since v4:
- add back a prematurely removed assignment in dm-table.c
-
The raid5 and raid10 drivers currently update the read-ahead size,
but not the optimal I/O size on reshape. To prepare for deriving the
read-ahead size from the optimal I/O size make sure it is updated
as well.
Signed-off-by: Christoph Hellwig
Acked-by: Song Liu
Reviewed-by: Johannes Thumshirn
Set up a readahead size by default, as very few users have a good
reason to change it.
Signed-off-by: Christoph Hellwig
Acked-by: David Sterba [btrfs]
Acked-by: Richard Weinberger [ubifs, mtd]
---
block/blk-core.c | 2 --
drivers/mtd/mtdcore.c | 2 ++
fs/9p/vfs_super.c | 6 --
Ever since the switch to blk-mq, a lower device not used for VM
writeback will not be marked congested, so the check will never
trigger.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
drivers/block/drbd/drbd_nl.c | 6 --
1 file changed, 6 deletions(-)
diff --git
This case isn't ever used.
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
drivers/block/drbd/drbd_req.c | 4
include/linux/drbd.h | 1 -
2 files changed, 5 deletions(-)
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index
The last user of SB_I_MULTIROOT disappeared with commit f2aedb713c28
("NFS: Add fs_context support.")
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
fs/namei.c | 4 ++--
include/linux/fs.h | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git
On Sat, 2020-08-22 at 00:42 +0200, mwi...@suse.com wrote:
> From: Martin Wilck
>
> Hi Christophe, hi Ben,
>
> embarrassingly, it turns out that my unit test code for the bitfield
> code was broken in various ways, which at the same time shows that
> I didn't test this as broadly as I should have
On Tue, Sep 15, 2020 at 04:21:54AM +, Damien Le Moal wrote:
> On 2020/09/15 10:10, Damien Le Moal wrote:
> > On 2020/09/15 0:04, Mike Snitzer wrote:
> >> On Sun, Sep 13 2020 at 8:46pm -0400,
> >> Damien Le Moal wrote:
> >>
> >>> On 2020/09/12 6:53, Mike Snitzer wrote:
>
On Thu, Sep 10, 2020 at 01:15:41PM -0400, Mike Snitzer wrote:
> > I'll move it to blk_register_queue, which should work just fine.
>
> That'll work for initial DM table load as part of DM device creation
> (dm_setup_md_queue). But it won't account for DM table reloads that
> might change