[Expired for multipath-tools (Ubuntu) because there has been no activity
for 60 days.]
** Changed in: multipath-tools (Ubuntu)
Status: Incomplete => Expired
Hi,
I reordered this slightly to split into topics properly.
> The problem is that I don't reach "3. finally once I hit my 60 second limit
> on the paths (dev_loss_tmo) they are considered dead". The paths are never
> removed and stay "running" forever. Maybe it is related to the fact that
> there is only one HBA active.
Yes, it was the same on 18.04 also.
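For the record, a quick way to see the stuck state straight from sysfs (a
sketch using the standard SCSI and FC transport class attributes; output of
course depends on the setup):

# SCSI device states: these stay "running" even after the fabric is gone
$ grep . /sys/class/scsi_device/*/device/state
# Remote port states as seen by the FC transport class
$ grep . /sys/class/fc_remote_ports/rport-*/port_state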
I have also checked whether this was any different in the past, on Bionic
(thanks Frank for the system). It behaved the same way (i.e. no
regression).
Unmapping device:
Jan 25 05:07:15 hwe0006 kernel: sd 1:0:0:1074151462: [sdc] tag#1877 FAILED
Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Bonus for my theory that these timeouts are about the paths going down and not
the disks.
If I unplug the adapter (= 2 paths) it goes through this:
1. it immediately detects that something is wrong. I/O might still be
queued.
Jan 25 09:27:52 s1lp5 multipathd[782]: checker failed path 65:32 in ma
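For reference, one way to watch these checker events live while pulling the
adapter (assuming multipathd logs to the journal, as on 20.04):

$ journalctl -fu multipathd.service | grep --line-buffered 'checker'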
# Argument why (a) isn't a problem: the "paths are never going away"
And this is (as seen in sysfs) on the rports, which are the links between
local and remote FC ports. In my case, for example, two ports on each side
doing NxM => 4 rports.
-rw-r--r-- 1 root root 4096 Jan 21 15:51
/sys/cla
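A quick loop to enumerate the rports with their timeout settings (a sketch,
assuming the standard fc_remote_ports sysfs attributes):

# One line per remote port: current state and dev_loss_tmo
$ for r in /sys/class/fc_remote_ports/rport-*; do
    echo "$r: state=$(cat $r/port_state) dev_loss_tmo=$(cat $r/dev_loss_tmo)"
  done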
# Argument why (b) might in fact be a problem: "disk should be detected as
new, and not mapped onto the old paths/devices"
Obviously one could just say "have a static LUN Id plan and don't map
back the old LUN", but that is evasive. In a perfect world something in
kernel/udev/multipath would recognize the remapped LUN as a new disk
instead of mapping it onto the old paths/devices.
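Until then, a manual workaround sketch for dropping a stale path by hand
before re-mapping (sdc is just a hypothetical stale path device here):

# Tell multipathd to forget the path, then remove the SCSI device itself
$ sudo multipathd -k'del path sdc'
$ echo 1 | sudo tee /sys/block/sdc/device/delete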
I'm not 100% sure, but I wondered what "to expect" from dev_loss_tmo.
Reading more docs I think it is more like:
"I/O is held in flight, since the target might come back"
After the timeout I'd assume it will kill the remaining I/O.
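To illustrate the two knobs involved (a sketch, assuming the standard FC
transport class attributes; the rport name is a placeholder):

# fast_io_fail_tmo: how long I/O is held before it is failed back to multipath
# dev_loss_tmo: how long until the rport and its SCSI devices are removed
$ cat /sys/class/fc_remote_ports/rport-1:0-0/fast_io_fail_tmo
$ echo 60 | sudo tee /sys/class/fc_remote_ports/rport-1:0-0/dev_loss_tmo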
This makes me think that this bug is about two things:
a) "faulty paths are never removed"
b) "disk should be detected as new, and not mapped onto the old paths/devices"
Trying to recreate on 20.04
# enable my FC adapters
$ sudo chccwdev -e 0.0.e000
$ sudo chccwdev -e 0.0.e100
# Ensure and check I have a one-minute limit set (default would be infinite)
$ for f in /sys/devices/css0/0.0.*/0.0.*/host*/rport-*/fc_remote_ports/rport-*/*loss_tmo; do
    b=$(basename $f); echo "$b=$(cat $f)"
  done
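And if it is not set yet, the same globbing can be used to write the value
(a sketch; adjust the 60 seconds as needed):

$ for f in /sys/devices/css0/0.0.*/0.0.*/host*/rport-*/fc_remote_ports/rport-*/dev_loss_tmo; do
    echo 60 | sudo tee $f
  done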
** Changed in: multipath-tools (Ubuntu)
Importance: Undecided => High
https://bugs.launchpad.net/bugs/1911999
Title:
faulty paths are not removed