When multipathd calls domap(), it should also print the reason at log
level 2. It already does this on every code path except when domap() is
called by the path checker. Also, if __setup_multipath() deletes the
device, it should log that.
Signed-off-by: Benjamin Marzinski
---
multipathd/main.c |
dm_flush_maps() was failing if there were no device-mapper devices at
all, instead of returning success, even though in that case there is nothing to do.
Fixes: "libmultipath: make dm_flush_maps only return 0 on success"
Reviewed-by: Martin Wilck
Signed-off-by: Benjamin Marzinski
---
libmultipath/devmapper.c | 2
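A minimal sketch of the rule the dm_flush_maps() fix above describes: when there
are no device-mapper devices at all there is nothing to flush, so the operation
should report success rather than failure. The helper name flush_one_map() and the
argument types are illustrative assumptions, not the real devmapper.c interfaces.

/* Hypothetical illustration only: success when the map list is empty. */
extern int flush_one_map(const char *name);	/* assumed per-map flush helper */

int flush_all_maps(const char *const *names, int n)
{
	int i, failed = 0;

	if (n == 0)
		return 0;	/* no maps: nothing to do counts as success */
	for (i = 0; i < n; i++)
		if (flush_one_map(names[i]) != 0)
			failed = 1;
	return failed;
}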
pathcountgr() is never used except by pathcount(), and neither is the
special case for PATH_WILD. Simplify this and turn it into a helper
function that is called by pathcount() and will be used again in a future
patch. Leave count_active_paths() alone for the sake of compiler
optimization.
Also
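A minimal sketch of the refactoring described in the pathcountgr()/pathcount()
message above: a single helper counts paths in a given state across all path
groups, and pathcount() becomes a thin wrapper over it so a later patch can reuse
the helper. The structure layouts are simplified assumptions, not libmultipath's
definitions.

/* Hypothetical illustration of the helper/wrapper split. */
struct path {
	int state;
};

struct pathgroup {
	struct path *paths;
	int npaths;
};

struct multipath {
	struct pathgroup *pg;
	int npgs;
};

static int count_paths_in_state(const struct multipath *mpp, int state)
{
	int i, j, count = 0;

	for (i = 0; i < mpp->npgs; i++)
		for (j = 0; j < mpp->pg[i].npaths; j++)
			if (mpp->pg[i].paths[j].state == state)
				count++;
	return count;
}

int pathcount(const struct multipath *mpp, int state)
{
	return count_paths_in_state(mpp, state);
}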
Patches 0003 & 0004 fix an issue that I've seen with paths whose checker
takes too long when multipathd is starting up and creating devices.
The others are minor build fixes or small cleanups to my previous
patchset.
Changes in v2:
- patch 0003 no longer refactors count_active_paths(), as
delegate_to_multipathd() was returning success even if the multipathd
command failed. Also, if the command failed with NOT_DELEGATED,
it shouldn't print any errors, since multipath will try to issue the
command itself.
Fixes: "multipath: delegate flushing maps to multipathd"
Reviewed-by: Martin Wilck
Signed-off-by: Benjamin Marzinski
---
kpartx/kpartx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kpartx/kpartx.c b/kpartx/kpartx.c
index c24ad6d9..653ce0c8 100644
--- a/kpartx/kpartx.c
+++ b/kpartx/kpartx.c
@@ -738,7 +738,7 @@ struct block {
When multipath loads a table, it signals to udev if there are no active
paths. Multipath wasn't counting pending paths as active. This meant
that if all the paths were pending, udev would treat the device as not
ready, and not run kpartx on it. Even if the pending paths later
became active and
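A minimal sketch of the behavior described in the kpartx/udev message above: when
deciding whether to tell udev that a freshly loaded map has usable paths, paths
whose checker is still pending are counted the same as active ones, so a map whose
paths are all pending is not flagged as unusable. The state names are modeled on
libmultipath's path states but defined locally here for illustration.

/* Hypothetical illustration: treat pending paths as usable for the udev signal. */
enum path_state { PATH_DOWN, PATH_UP, PATH_GHOST, PATH_PENDING };

int map_has_usable_paths(const enum path_state *states, int npaths)
{
	int i;

	for (i = 0; i < npaths; i++)
		if (states[i] == PATH_UP || states[i] == PATH_GHOST ||
		    states[i] == PATH_PENDING)
			return 1;	/* at least one active or pending path */
	return 0;			/* tell udev the device is not ready */
}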
If the map doesn't unset its hwe pointer before orphaning all the paths,
multipathd will print a warning message in orphan_path() because of
commit "libmultipath: warn if freeing path that holds mpp->hwe".
Signed-off-by: Benjamin Marzinski
---
libmultipath/structs_vec.c | 1 +
multipathd/main.c
On Sat, 8 Aug 2020, Chuck Lever wrote:
> My interest is in code integrity enforcement for executables stored
> in NFS files.
>
> My struggle with IPE is that due to its dependence on dm-verity, it
> does not seem to be able to protect content that is stored separately
> from its execution
Hi!
> > > > (eg, a specification) will be critical for remote filesystems.
> > > >
> > > > If any of this is to be supported by a remote filesystem, then we
> > > > need an unencumbered description of the new metadata format
> > > > rather than code. GPL-encumbered formats cannot be contributed
On Tue, 2020-08-11 at 10:48 -0400, Chuck Lever wrote:
> Mimi's earlier point is that any IMA metadata format that involves
> unsigned digests is exposed to an alteration attack at rest or in
> transit, thus will not provide a robust end-to-end integrity
> guarantee.
I don't believe that is Mimi's
On Tue, 2020-08-11 at 10:48 -0400, Chuck Lever wrote:
> > On Aug 11, 2020, at 1:43 AM, James Bottomley wrote:
> >
> > On Mon, 2020-08-10 at 19:36 -0400, Chuck Lever wrote:
> > > > On Aug 10, 2020, at 11:35 AM, James Bottomley
> > > > wrote:
[...]
> > > > The first basic is
On Tue, 2020-08-11 at 10:48 -0400, Chuck Lever wrote:
> > On Aug 11, 2020, at 1:43 AM, James Bottomley
> > wrote:
> > On Mon, 2020-08-10 at 19:36 -0400, Chuck Lever wrote:
[...]
> > > Thanks for the help! I just want to emphasize that documentation
> > > (eg, a specification) will be critical for
On Mon, 2020-08-10 at 21:27, Schremmer, Steven wrote:
> An RDAC array configured to run with Linux DM-MP should never report
> that it supports both implicit and explicit ALUA. If the array is
> configured to run with scsi_dh_rdac then it reports TPGS=0 (none) and
> should use the rdac prio.
On Tue, 2020-08-11 at 09:14 +0800, Zhiqiang Liu wrote:
>
> On 2020/8/10 22:34, Martin Wilck wrote:
> > Hi Liu,
> >
> > thanks again for your valuable contributions and meticulous code
> > review. I've added your patches in my upstream-queue branch now:
> >
> >
On Tue, 2020-08-11 at 11:23 +0800, Zhiqiang Liu wrote:
> In the vector_alloc_slot func, if REALLOC fails, it means the new slot
> allocation failed. However, it just updates v->allocated and then
> returns the old v->slot without a new slot. So the caller will take
> the last old slot as the newly allocated
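A minimal sketch of the realloc handling the report above argues for: update the
vector's bookkeeping only after the reallocation succeeds, so a failed REALLOC
never leaves v->allocated pointing past the real array. The vector layout is a
simplified assumption, not libmultipath's actual definition.

#include <stdlib.h>

struct vector_s {
	int allocated;
	void **slot;
};

void *vector_alloc_slot(struct vector_s *v)
{
	void **new_slot;

	if (!v)
		return NULL;
	new_slot = realloc(v->slot, sizeof(void *) * (v->allocated + 1));
	if (!new_slot)
		return NULL;	/* leave v->allocated and v->slot untouched */
	v->slot = new_slot;
	v->slot[v->allocated] = NULL;
	v->allocated++;
	return v->slot;
}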
Hi Lixiaokeng,
thanks again. I still have minor issues, see below.
On Tue, 2020-08-11 at 15:23 +0800, lixiaokeng wrote:
> In the set_ble_device func, if blist is NULL or ble is NULL,
> the vendor and product aren't freed. We think it is not
> reasonable that strdup(XXX) is used as set_ble_device
>
In the set_ble_device func, if blist is NULL or ble is NULL,
the vendor and product aren't freed. We think it is not
reasonable that strdup(XXX) is used as the set_ble_device
and store_ble functions' parameter.
Here we call strdup() in the store_ble and set_ble_device
functions and the string will be freed if
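A minimal sketch of the ownership model proposed above: store_ble() duplicates the
string itself and cleans up on failure, so callers such as set_ble_device() never
need to free vendor/product strings on error paths. The blentry layout and return
codes are simplified assumptions.

#include <stdlib.h>
#include <string.h>

struct blentry {
	char *str;
	int origin;
};

int store_ble(struct blentry **out, const char *str, int origin)
{
	struct blentry *ble;

	if (!out || !str)
		return 1;
	ble = calloc(1, sizeof(*ble));
	if (!ble)
		return 1;
	ble->str = strdup(str);	/* the callee keeps its own copy */
	if (!ble->str) {
		free(ble);
		return 1;	/* the caller's string is untouched, nothing leaks */
	}
	ble->origin = origin;
	*out = ble;
	return 0;
}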
Hello Martin:
Thanks for your reviews. I will modify this patch with your advice
and send it again.
On 2020/8/10 21:22, Martin Wilck wrote:
> Hello Lixiaokeng,
>
> On Thu, 2020-07-30 at 21:27 +0800, lixiaokeng wrote:
>> Hi.
>> I'm very sorry for subject mistake in first mail.
>>
>> In
An RDAC array configured to run with Linux DM-MP should never report that it
supports both implicit and explicit ALUA. If the array is configured to run
with scsi_dh_rdac then it reports TPGS=0 (none) and should use the rdac prio.
The more modern configuration is to report TPGS=01b (implicit)
> On Aug 10, 2020, at 11:35 AM, James Bottomley
> wrote:
>
> On Sun, 2020-08-09 at 13:16 -0400, Mimi Zohar wrote:
>> On Sat, 2020-08-08 at 13:47 -0400, Chuck Lever wrote:
On Aug 5, 2020, at 2:15 PM, Mimi Zohar
wrote:
>>
>>
>>
If block layer integrity was enough, there
I cannot agree with you more.
The root cause of the conflict is REQ_FAILFAST_TRANSPORT.
REQ_FAILFAST_TRANSPORT may have been designed for SCSI, because the SCSI
protocol does not define a local retry mechanism. SCSI implements a fuzzy
local retry mechanism, so it needs REQ_FAILFAST_TRANSPORT for multipath
On 2020/8/11 12:20, Mike Snitzer wrote:
On Mon, Aug 10 2020 at 11:32pm -0400,
Chao Leng wrote:
On 2020/8/11 1:22, Mike Snitzer wrote:
On Mon, Aug 10 2020 at 10:36am -0400,
Mike Snitzer wrote:
On Fri, Aug 07 2020 at 7:35pm -0400,
Sagi Grimberg wrote:
Hey Mike,
...
I think NVMe
On 2020/8/11 1:22, Mike Snitzer wrote:
On Mon, Aug 10 2020 at 10:36am -0400,
Mike Snitzer wrote:
On Fri, Aug 07 2020 at 7:35pm -0400,
Sagi Grimberg wrote:
Hey Mike,
...
I think NVMe can easily fix this by having an earlier stage of checking,
e.g. nvme_local_retry_req(), that
On Mon, 2020-08-10 at 10:13 -0700, James Bottomley wrote:
> On Mon, 2020-08-10 at 12:35 -0400, Mimi Zohar wrote:
> > On Mon, 2020-08-10 at 08:35 -0700, James Bottomley wrote:
> [...]
> > > > Up to now, verifying remote filesystem file integrity has been
> > > > out of scope for IMA. With
On Mon, 2020-08-10 at 08:35 -0700, James Bottomley wrote:
> On Sun, 2020-08-09 at 13:16 -0400, Mimi Zohar wrote:
> > On Sat, 2020-08-08 at 13:47 -0400, Chuck Lever wrote:
> > > > On Aug 5, 2020, at 2:15 PM, Mimi Zohar
> > > > wrote:
> >
> >
> >
> > > > If block layer integrity was enough,