On Wednesday 14 August 2019 14:17:23 CEST Stefan G. Weichinger wrote:
> On 14.08.19 at 13:20, J. Roeleveld wrote:

> > See next item, make sure you do NOT mount both at the same time.
> 
> I understand and agree ;-)

good :)

> >> # /usr/bin/sg_vpd --page=di /dev/sdb
> >> 
> >> Device Identification VPD page:
> >>   Addressed logical unit:
> >>     designator type: NAA,  code set: Binary
> >>     
> >>       0x600605b00d0ce810217ccffe19f851e8
> > 
> > Yes, this one is different.
> > 
> > I checked the above ID and it looks like it is already correctly
> > configured. Is " multipathd " actually running?
> 
> no!

Then "multipath -l" will not show anything either. When you have a chance for 
downtime (and that disk can be unmounted) you could try the following:
1) stop all services requiring that "disk" to be mounted
2) umount that "disk"
3) start the "multipath" service
4) run "multipath -ll" to see if there is any output
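
Purely as a sketch (the mount point and service name here are assumptions, 
not taken from the actual server):

  # umount /mnt/fcdisk                <- whatever the real mount point is
  # rc-service multipathd start       <- or the "multipath" init script, depending on the install
  # multipath -ll

The -ll output should list the NAA id seen in sg_vpd above, with both paths 
underneath it.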

If yes, you can access the "disk" via the newly added entry under 
"/dev/mapper/".
If you modify "/etc/fstab" for this at that point, ensure multipath is 
started BEFORE the OS tries to mount it during boot.
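
A hypothetical fstab line, assuming the default WWID-based mapper name (the 
NAA id from sg_vpd, usually prefixed with "3") and an ext4 filesystem on a 
made-up mount point:

  /dev/mapper/3600605b00d0ce810217ccffe19f851e8  /mnt/fcdisk  ext4  defaults,nofail  0  2

With "user_friendly_names yes" in multipath.conf the mapper name would be 
something like "mpatha" instead; "nofail" only keeps a failed mount from 
blocking the boot, it does not replace the ordering requirement above.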

The other option (and the only option if "multipath -ll" still doesn't show 
anything) is to stop the "multipath" service and leave it all as-is.

> > If it were running correctly, you would mount " /dev/mapper/.... " instead
> > of " /dev/sdc " or " /dev/sdd ".
> > 
> >> In the first week of september I travel there and I have the job to
> >> reinstall that server using Debian Linux (yes, gentoo-users, I am
> >> getting OT here ;-)).
> > 
> > For something that doesn't get updated/managed often, Gentoo might not be
> > the best choice, I agree.
> > I would prefer Centos for this one though, as there is far more info on
> > multipath from Redhat.
> 
> I will consider this ...

The choice is yours. I just haven't found much info about multipath for other 
distributions. (And I could still use a decent document/guide describing all 
the different options)

> As I understand things here:
> 
> the former admin *tried to* setup multipath and somehow got stuck.

My guess: multipath wasn't enabled before the boot process tried to mount 
it. The following needs to be done (and finished) in sequence for it to work:

1) The OS needs to detect the disks (/dev/sdc + /dev/sdd). This requires 
modules to be loaded and the fibrechannel disks to be detected

2) multipathd needs to be running and have correctly identified the 
fibrechannel disk and the paths

3) The OS needs to mount the fibrechannel disk using the "/dev/mapper/..." 
entry created by multipath.
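
As an illustration only (the alias is made up, and the wwid should be 
whatever "multipath -ll" actually reports), a minimal /etc/multipath.conf 
that pins that LUN to a stable name could look like:

  defaults {
      user_friendly_names yes
  }
  multipaths {
      multipath {
          wwid   3600605b00d0ce810217ccffe19f851e8
          alias  fcdisk
      }
  }

Step 3 would then mount /dev/mapper/fcdisk instead of a raw /dev/sdX device.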

I run ZFS on top of the multipath entries, which makes it all a bit "simpler", 
as the HBA module is built-in and the "zfs"  services depend on "multipath".
All the mounting is done by the zfs services.
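For example (hypothetical pool and device names), creating a pool directly 
on the mapper entries looks like any other zpool:

  # zpool create tank mirror /dev/mapper/fcdisk1 /dev/mapper/fcdisk2

and the zfs import/mount services handle the mounting afterwards, provided 
they are ordered after multipathd.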

> That's why it isn't running and not used at all. He somehow mentioned
> this in an email back then when he was still working there.
> 
> So currently it seems to me that the storage is attached via "single
> path" (is that the term here?) only. "directly"= no redundancy

Exactly, and using non-guaranteed drive letters. (I know for a fact that they 
can change, as I've had disks move to different letters during subsequent 
boots. I do have 12 disks getting 4 entries each, which means 48 entries ;)
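
One way to see the stable identifiers regardless of which sdX letter a path 
got on that particular boot (output obviously differs per system):

  # ls -l /dev/disk/by-id/ | grep wwn

The wwn-0x... entries are derived from the same NAA id that sg_vpd printed 
and stay the same across boots, even when the sdX letters move around.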

> That means using the lpfc-kernel-module to run the FibreChannel-adapters
> ... which failed to come up / sync with a more recent gentoo kernel, as
> initially mentioned.

Are these modules not included in the main kernel?
They may also require firmware which, in some cases, has to match specific 
module/kernel versions.
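
A quick way to check on the running system (assuming /proc/config.gz is 
enabled in that kernel):

  # zgrep -i lpfc /proc/config.gz     <- CONFIG_SCSI_LPFC=y means built-in, =m means module
  # dmesg | grep -i lpfc              <- shows whether the HBAs and their firmware came up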

> (right now: 4.1.15-gentoo-r1 ... )

Old, but if it works, don't fix it. (Just don't expose it to the internet)

> I consider sending a Debian-OS on a SSD there and let the (low
> expertise) guy there boot from it. (or a stick). Which in fact is risky
> as he doesn't know anything about linux.

I wouldn't take that risk on a production server.

> Or I simply wait for my on-site-appointment and start testing when I am
> there.

Safest option.

> Maybe I am lucky and the debian lpfc stuff works from the start. And
> then I could test multipath as well.

You could test quickly with the Gentoo install that is currently present, as 
described above. The config should be the same regardless.

> I assume that maybe the adapters need a firmware update or so.

When I added a 2nd HBA to my server, I ended up patching the firmware on both 
to ensure they were identical.

> The current gentoo installation was done with "hardened" profile, not
> touched for years, no docs .... so it somehow seems way too much hassle
> to get it up to date again.

I migrated a few "hardened" profile installations to non-hardened, but it 
required preparing binary packages on a VM and reinstalling the whole lot 
with a lot of effort (empty /var/lib/portage/world, run emerge --depclean, 
rebuild @system with --emptytree, then re-populate /var/lib/portage/world 
and let that be installed using the previously prepared binaries).
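Roughly, from memory, so treat the exact flags and profile path as an 
illustration rather than a recipe:

  # eselect profile set <non-hardened profile>
  # cp /var/lib/portage/world /root/world.bak
  # : > /var/lib/portage/world
  # emerge --depclean
  # emerge --emptytree --usepkg @system
  # cp /root/world.bak /var/lib/portage/world
  # emerge --update --deep --newuse --usepkg @world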

A clean install is quicker and simpler.

> Additionally no experts on site there, so it
> should be low maintenance anyway.

A binary distro would be a better choice then. How far is this from your 
location?

--
Joost


