Switch all public blk-crypto interfaces to use struct block_device
arguments to specify the device they operate on instead of the
request_queue, which is a block layer implementation detail.
Signed-off-by: Christoph Hellwig
---
Documentation/block/inline-encryption.rst | 24
Hi all,
this series switches the blk-crypto interfaces to take block_device
arguments instead of request_queues, and with that finishes off the
project to hide struct request_queue from file systems.
Diffstat:
Documentation/block/inline-encryption.rst | 24
Add a blk_crypto_cfg_supported helper that wraps
__blk_crypto_cfg_supported to retrieve the crypto_profile from the
request queue.
Signed-off-by: Christoph Hellwig
---
block/blk-crypto-profile.c | 7 +++
block/blk-crypto.c | 13 -
On Fri, 04 Nov 2022, Mikulas Patocka wrote:
> There's a crash in mempool_free when running the lvm test
> shell/lvchange-rebuild-raid.sh.
>
> The reason for the crash is this:
> * super_written calls atomic_dec_and_test(&mddev->pending_writes) and
> wake_up(&mddev->sb_wait). Then it calls
On 11/3/22 11:20 PM, Mikulas Patocka wrote:
On Thu, 3 Nov 2022, Mikulas Patocka wrote:
BTW, is the mempool_free from endio -> dec_count -> complete_io?
And io which caused the crash is from dm_io -> async_io / sync_io
-> dispatch_io, seems dm-raid1 can call it instead of dm-raid, so I
On 11/3/22 10:46 PM, Heming Zhao wrote:
On 11/3/22 11:47 AM, Guoqing Jiang wrote:
Hi,
On 11/3/22 12:27 AM, Mikulas Patocka wrote:
Hi
There's a crash in the test shell/lvchange-rebuild-raid.sh when running
the lvm testsuite. It can be reproduced by running "make check_local
On Thu, 2022-11-03 at 17:17 +0100, Xose Vazquez Perez wrote:
> Xose Vazquez Perez (3):
> multipath-tools: update hwtable text/info/comments
> multipath-tools: add PowerMax NVMe to hwtable
> multipath-tools: add more info for NetApp ontap prio
>
> README.md | 2 +-
>
On 11/2/22 19:13, Mike Christie wrote:
On 11/2/22 5:47 PM, Bart Van Assche wrote:
On 10/26/22 16:19, Mike Christie wrote:
+static inline enum scsi_pr_type block_pr_type_to_scsi(enum pr_type type)
+{
+ switch (type) {
+ case PR_WRITE_EXCLUSIVE:
+ return SCSI_PR_WRITE_EXCLUSIVE;
+
Hi
The patchset seems OK - but dm-integrity also has a limitation that the
bio vectors must be aligned on logical block size.
dm-writecache and dm-verity seem to handle unaligned bioset, but you
should check them anyway.
I'm not sure about dm-log-writes.
Mikulas
On Thu, 3 Nov 2022, Keith
No official config, just a "multipath -ll" output:
https://bugzilla.redhat.com/1686708#c0
Cc: Martin Wilck
Cc: Benjamin Marzinski
Cc: Christophe Varoqui
Cc: DM-DEVEL ML
Signed-off-by: Xose Vazquez Perez
---
libmultipath/hwtable.c | 6 ++
1 file changed, 6 insertions(+)
diff --git
Add Alletra 5000, FAS/AFF and E/EF Series info.
Compact some info.
Delete trivial/redundant comments.
Reformat LIO.
Cc: Martin Wilck
Cc: Benjamin Marzinski
Cc: Christophe Varoqui
Cc: DM-DEVEL ML
Signed-off-by: Xose Vazquez Perez
---
libmultipath/hwtable.c | 21 ++---
1 file
and format fixes.
Cc: George Martin
Cc: Martin Wilck
Cc: Benjamin Marzinski
Cc: Christophe Varoqui
Cc: DM-DEVEL ML
Signed-off-by: Xose Vazquez Perez
---
README.md | 2 +-
multipath/multipath.conf.5 | 10 +-
2 files changed, 6 insertions(+), 6 deletions(-)
diff
Xose Vazquez Perez (3):
multipath-tools: update hwtable text/info/comments
multipath-tools: add PowerMax NVMe to hwtable
multipath-tools: add more info for NetApp ontap prio
README.md | 2 +-
libmultipath/hwtable.c | 27 ---
There's a crash in mempool_free when running the lvm test
shell/lvchange-rebuild-raid.sh.
The reason for the crash is this:
* super_written calls atomic_dec_and_test(&mddev->pending_writes) and
wake_up(&mddev->sb_wait). Then it calls rdev_dec_pending(rdev, mddev)
and bio_put(bio).
* so, the process that
On Thu, 3 Nov 2022, Mikulas Patocka wrote:
> > BTW, is the mempool_free from endio -> dec_count -> complete_io?
> > And io which caused the crash is from dm_io -> async_io / sync_io
> > -> dispatch_io, seems dm-raid1 can call it instead of dm-raid, so I
> > suppose the io is for mirror image.
On 11/3/22 11:47 AM, Guoqing Jiang wrote:
Hi,
On 11/3/22 12:27 AM, Mikulas Patocka wrote:
Hi
There's a crash in the test shell/lvchange-rebuild-raid.sh when running
the lvm testsuite. It can be reproduced by running "make check_local
T=shell/lvchange-rebuild-raid.sh" in a loop.
I have
On Thu, 3 Nov 2022, Guoqing Jiang wrote:
> Hi,
>
> On 11/3/22 12:27 AM, Mikulas Patocka wrote:
> > Hi
> >
> > There's a crash in the test shell/lvchange-rebuild-raid.sh when running
> > the lvm testsuite. It can be reproduced by running "make check_local
> > T=shell/lvchange-rebuild-raid.sh"
On Wed, Nov 02, 2022 at 02:45:10PM -0400, Mikulas Patocka wrote:
> On Tue, 1 Nov 2022, Eric Biggers wrote:
> > Hi,
> >
> > I happened to notice the following QEMU bug report:
> >
> > https://gitlab.com/qemu-project/qemu/-/issues/1290
> >
> > I believe it's a regression from the following kernel
On 11/2/22 5:53 PM, Bart Van Assche wrote:
> On 10/26/22 16:19, Mike Christie wrote:
>> +struct pr_keys {
>> + u32 generation;
>> + u32 num_keys;
>> + u64 keys[];
>> +};
> Is my understanding correct that keys[] is treated as opaque data by the
> kernel? If so, is it necessary
On 11/2/22 5:47 PM, Bart Van Assche wrote:
> On 10/26/22 16:19, Mike Christie wrote:
>> +static inline enum scsi_pr_type block_pr_type_to_scsi(enum pr_type type)
>> +{
>> + switch (type) {
>> + case PR_WRITE_EXCLUSIVE:
>> + return SCSI_PR_WRITE_EXCLUSIVE;
>> + case
On 11/2/22 5:50 PM, Bart Van Assche wrote:
> On 10/26/22 16:19, Mike Christie wrote:
>> +struct pr_keys {
>> + u32 generation;
>> + u32 num_keys;
>> + u64 keys[];
>> +};
>> +
>> +struct pr_held_reservation {
>> + u64 key;
>> + u32 generation;
>> + enum
Hi all,
I am new to dm-devel. When using dm-thin via lvm, I found it difficult
to share dm-thin on multiple hosts.
The background is that I want to implement live migration of VMs in the
lvm + iSCSI environment, in which lvmlockd is used to coordinate access
to shared storage. There are
On Wed, 2 Nov 2022 10:14:52 -0600
Keith Busch wrote:
> This is what I'm coming up with. Only compile tested (still setting up
> an environment to actually run it).
>
> ---
> diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
> index 159c6806c19b..9334e58a4c9f 100644
> ---
[Cc'ing Dmitrii, who also reported the same issue]
On Tue, Nov 01, 2022 at 08:11:15PM -0700, Eric Biggers wrote:
> Hi,
>
> I happened to notice the following QEMU bug report:
>
> https://gitlab.com/qemu-project/qemu/-/issues/1290
>
> I believe it's a regression from the following kernel
On Wed, Nov 02, 2022 at 08:03:45PM +0300, Dmitrii Tcvetkov wrote:
>
> Applied on top 6.1-rc3, the issue still reproduces.
Yeah, I see that now. I needed to run a dm-crypt setup to figure out how
they're actually doing this, so now I have that up and running.
I think this type of usage will
If I run:
$ sudo dmsetup create baddev '--table=0 1024 zero'
$ sudo dmsetup load baddev '--table=0 1024 linear /dev/mapper/baddev 0'
$ sudo dmsetup suspend baddev
$ sudo dmsetup resume baddev
the kernel immediately panics. Console output indicates infinite
recursion in dm_block_ioctl. This is
On Wed, Nov 02, 2022 at 08:52:15AM -0600, Keith Busch wrote:
> [Cc'ing Dmitrii, who also reported the same issue]
>
> On Tue, Nov 01, 2022 at 08:11:15PM -0700, Eric Biggers wrote:
> > Hi,
> >
> > I happened to notice the following QEMU bug report:
> >
> >
Hi,
On 2022/11/02 22:17, Christoph Hellwig wrote:
On Wed, Nov 02, 2022 at 08:17:37PM +0800, Yu Kuai wrote:
I think this is still not safe.
Indeed - wrong open_mutex.
+	/*
+	 * del_gendisk drops the initial reference to bd_holder_dir, so we need
+	 * to keep our own here to
On 2022/11/02 20:17, Yu Kuai wrote:
Hi,
On 2022/11/02 14:48, Christoph Hellwig wrote:
For gendisk that are not live or their partitions, the bd_holder_dir
pointer is not valid and the kobject might not have been allocated
yet or freed already. Check that the disk is live before creating the
linkage
Hi,
On 2022/11/02 14:48, Christoph Hellwig wrote:
For gendisk that are not live or their partitions, the bd_holder_dir
pointer is not valid and the kobject might not have been allocated
yet or freed already. Check that the disk is live before creating the
linkage and error out otherwise.
On 11/3/22 11:47 AM, Guoqing Jiang wrote:
[ 78.491429]
[ 78.491640] clone_endio+0xf4/0x1c0 [dm_mod]
[ 78.492072] clone_endio+0xf4/0x1c0 [dm_mod]
The clone_endio belongs to "clone" target_type.
Hmm, could be the "clone_endio" from dm.c instead of dm-clone-target.c.
[