Stefan,

On Monday 29 July 2019 21:28:50 CEST Stefan G. Weichinger wrote:
> At a customer I have to check through an older gentoo server.
> 
> The former admin is not available anymore and among other things I have
> to check how the SAN storage is attached.

If you ever encounter that admin, make sure you hide the body :)

> As I have to plan a new installation with minimal downtime I'd like to
> understand that multipath-stuff before any changes ;-)

I use multipath, but only with internal HBAs connected to internal 
backplanes.

> The system runs:
> 
> sys-fs/multipath-tools-0.5.0-r1

I use "sys-fs/multipath-tools-0.7.9"

> and has a multipath.conf:

Same here

> (rm-ed comments)
> 
> defaults {
> #  udev_dir                /dev
>   polling_interval        15
> #  selector                "round-robin 0"
>   path_grouping_policy    group_by_prio
>   failback                5
>   path_checker            tur
> #  prio_callout            "/sbin/mpath_prio_tpc /dev/%n"
>   rr_min_io               100
>   rr_weight               uniform
>   no_path_retry           queue
>   user_friendly_names     yes
> 
> }
> blacklist {
>   devnode cciss
>   devnode fd
>   devnode hd
>   devnode md
>   devnode sr
>   devnode scd
>   devnode st
>   devnode ram
>   devnode raw
>   devnode loop
>   devnode sda
>   devnode sdb
> }
> 
> multipaths {
>   multipath {
>     wwid  3600c0ff0001e91b2c1bae25601000000
>     ## To find your wwid, please use /usr/bin/sg_vpd --page=di /dev/DEVICE.
>     ## The address will be a 0x6. Remove the 0x and replace it with 3.
>     alias MSA2040_SAMBA_storage
>   }
> }

This looks like a default one. Mine is far simpler:
***
defaults {
        path_grouping_policy    multibus
        path_selector   "queue-length 0"
        rr_min_io_rq    100
}
***

Do you have any files in "/etc/multipath"?  I have 2:
"bindings" (which only contains comments)
"wwids" (which, aside from comments, lists the IDs of the hard drives).

Both of these files mention they are automatically maintained.
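
If you want a quick look at what multipath has recorded on that box, something 
like this should do (these paths are the multipath-tools defaults, so adjust if 
your build uses a different state directory):
***
# ls -l /etc/multipath/
# cat /etc/multipath/bindings   # alias <-> wwid mapping (used with user_friendly_names)
# cat /etc/multipath/wwids      # wwids multipath has claimed so far
***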

I don't "hide" devices from multipath and let it figure it out by itself

> "multipath -l" and "-ll" show nothing.

Then multipath is NOT working. I get the following (only showing the first 2 
devices):

***
35000cca25d8ec910 dm-4 HGST,HUS726040ALS210
size=3.6T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 0:0:20:0 sdt  65:48  active ready running
  |- 0:0:7:0  sdh  8:112  active ready running
  |- 1:0:7:0  sdaf 65:240 active ready running
  `- 1:0:20:0 sdar 66:176 active ready running
35000cca25d8b5e78 dm-7 HGST,HUS726040ALS210
size=3.6T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 0:0:21:0 sdu  65:64  active ready running
  |- 0:0:8:0  sdi  8:128  active ready running
  |- 1:0:8:0  sdag 66:0   active ready running
  `- 1:0:21:0 sdas 66:192 active ready running
***

As per the above, every physical disk is seen 4 times by the system.
I have 2 HBAs connected to the backplanes, and as these are SAS drives, every 
disk is connected twice to the backplanes.
In other words, there are 4 different paths to every single disk.
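
If you want to see how each sdX maps to an HBA and target, something along 
these lines works (lsscsi comes from sys-apps/lsscsi and may need to be 
installed; sdt is just the first path from the output above):
***
# lsscsi -t                      # every path shows up as its own sdX, with its transport address
# ls -l /sys/block/sdt/device    # the symlink shows which SCSI host (HBA) the path belongs to
***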

> dmesg:
> 
> # dmesg | grep multi
> [    1.144947] md: multipath personality registered for level -4
> [    1.145679] device-mapper: multipath: version 1.9.0 loaded
> [    1.145857] device-mapper: multipath round-robin: version 1.0.0 loaded
> [21827451.284100] device-mapper: table: 253:0: multipath: unknown path
> selector type
> [21827451.285432] device-mapper: table: 253:0: multipath: unknown path
> selector type
> [21827496.130239] device-mapper: table: 253:0: multipath: unknown path
> selector type
> [21827496.131379] device-mapper: table: 253:0: multipath: unknown path
> selector type
> [21827497.576482] device-mapper: table: 253:0: multipath: unknown path
> selector type
> 
> -
> 
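
Those "unknown path selector type" messages in dmesg are probably why the maps 
never get created: judging by the boot messages, only the round-robin selector 
is loaded, and the daemon is asking the kernel for a selector it doesn't have. 
A rough way to check (assuming a modular kernel, and /proc/config.gz only if 
IKCONFIG is enabled) would be:
***
# lsmod | grep dm_                       # look for dm_round_robin, dm_queue_length, dm_service_time
# zgrep 'DM_MULTIPATH' /proc/config.gz   # CONFIG_DM_MULTIPATH_QL / _ST enable the other selectors
***
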
> I see two devices sdc and sdd that should come from the SAN.

Interesting, are these supposed to be the same device? 
What do you get back from:

# /usr/bin/sg_vpd --page=di /dev/sdc
# /usr/bin/sg_vpd --page=di /dev/sdd
(As suggested in the multipath.conf file you listed above.)

On my system I get the following for "sdt" and "sdh" (the first disk listed in 
the multipath output above):
***
# /usr/bin/sg_vpd --page=di /dev/sdt
Device Identification VPD page:
  Addressed logical unit:
    designator type: NAA,  code set: Binary
      0x5000cca25d8ec910
  Target port:
    designator type: NAA,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      0x5000cca25d8ec911
    designator type: Relative target port,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      Relative target port: 0x1
  Target device that contains addressed lu:
    designator type: NAA,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      0x5000cca25d8ec913
    designator type: SCSI name string,  code set: UTF-8
      SCSI name string:
      naa.5000CCA25D8EC913
# /usr/bin/sg_vpd --page=di /dev/sdh
Device Identification VPD page:
  Addressed logical unit:
    designator type: NAA,  code set: Binary
      0x5000cca25d8ec910
  Target port:
    designator type: NAA,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      0x5000cca25d8ec912
    designator type: Relative target port,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      Relative target port: 0x2
  Target device that contains addressed lu:
    designator type: NAA,  code set: Binary
     transport: Serial Attached SCSI Protocol (SPL-4)
      0x5000cca25d8ec913
    designator type: SCSI name string,  code set: UTF-8
      SCSI name string:
      naa.5000CCA25D8EC913
***

The "wwid" entry for this would be "35000cca25d8ec910"
(The value for "Adressed logical unit", with "0x" replaced with "3")

If "sdc" and "sdd" are the same disk, the "Adressed logical unit" id should be 
the same for both.
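
A quicker cross-check, if the udev scsi_id helper is installed (the path 
differs between systems, sometimes /usr/lib/udev/scsi_id), is to let it print 
the already-mangled wwid directly, leading "3" included:
***
# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
***
If both commands print the same string, and it matches the wwid in the 
multipaths section of your config, then sdc and sdd are two paths to the same 
LUN.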

> Could someone help me to research this in more detail?

I can try.

> This is a production server, I can't change much ...

Understood; no changes have been recommended yet. We will get to those later.

> I would like to find out how to reliably mount these SAN-devices into a
> new OS (maybe a new gentoo installation is coming).

If it is old, I would suggest that as well.
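
As for mounting reliably in the new install: once multipath is working, use 
the stable names under /dev/mapper (the alias from multipath.conf) or 
/dev/disk/by-id, never the raw /dev/sdX nodes, because those can change order 
between boots. A rough sketch, with the mount point and filesystem type purely 
as placeholders:
***
# blkid /dev/mapper/MSA2040_SAMBA_storage        # check what is actually on the LUN first
# mount /dev/mapper/MSA2040_SAMBA_storage /mnt/san
***
If the LUN carries a partition table, the partitions show up as their own 
mapper devices as well (via kpartx or udev, depending on the version).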

--
Joost


