Hello,

Some time ago there was an attempt to upgrade a Linux system from SLES 11 SP4
to SLES 12 SP4, but it failed because disks were missing after the OS
upgrade.
We had already seen the error messages below on SLES 11 SP4:

hostname:~ # lvs | grep "Attr\|ao--p"
  LV                     VG   Attr       LSize   Pool Origin Data%  Move Log Copy%  Convert
  db2db_db2bmtg0_db2data vg0  -wi-ao--p   60.00g
  db2db_db2bmtg0_db2dump vg0  -wi-ao--p    2.00g
  is_appname             vg0  -wi-ao--p  700.00g
  is_appname_mclm        vg0  -wi-ao--p   44.25g
  is_appname_tss-mz      vg0  -wi-ao--p  250.00g
  opt_IBM_wstemp         vg0  -wi-ao--p   65.00g
hostname:~ #

Please note the last character of the Attr column, which is "p". According
to the manual page it means:
  (p)artial: One or more of the Physical Volumes this Logical Volume uses
is missing from the system.
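(To see which PV is actually missing, the UUIDs from the warnings can be
compared with the devices present; something like the following, with
field names taken from the lvm man pages:)

  # list every PV with its UUID and owning VG
  pvs -o pv_name,vg_name,pv_uuid,pv_size
  # show which devices back each LV; on our LVM version a missing PV
  # typically shows up as "unknown device" in the devices column
  lvs -o lv_name,vg_name,lv_attr,devices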

We thought the permanent solution was to migrate all the data (~80 file
systems) to a new volume group (from vg0 to vg1), and that migration was
performed (a rough sketch follows below).
  The old, faulty VG (= volume group) vg0 was not deleted together with its
LVs (logical volumes) and PVs (physical volumes), but these were no longer
used or mounted.
  After that, the error messages about the disks in use (the PVs and LVs)
disappeared on SLES 11. This was the status until the new attempt at the
Linux upgrade.
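(For the record, each file system was moved roughly like this; the device
name, LV size, and file system type here are only examples:)

  # new PV and VG on a fresh disk (example device name)
  pvcreate /dev/dasdx1
  vgcreate vg1 /dev/dasdx1
  # per file system: new LV, new FS, then a file-level copy
  lvcreate -L 60G -n db2db_db2bmtg0_db2data vg1
  mkfs.ext3 /dev/vg1/db2db_db2bmtg0_db2data
  mount /dev/vg1/db2db_db2bmtg0_db2data /mnt
  # copy the data over (source mount point is an example)
  rsync -aHAX /data/db2data/ /mnt/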

1) Last week I started the OS upgrade again. Before the upgrade I created a
full backup of the Linux OS (SLES 11) to a new disk.
(The Linux system itself lives on a single plain disk (DASD), without LVM.
The applications and their data are on LVM-managed PVs, LVs, and file
systems.)
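(The backup was a plain block copy; roughly like this, with example device
names, dasda being the system disk and dasdz the spare:)

  # block-level copy of the system DASD to the spare disk
  dd if=/dev/dasda of=/dev/dasdz bs=4M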

2) The upgrade was very slow and a struggle. (The Linux system only became
usable after three reboots, and there were many device (DASD)/LVM/FS
errors.)
On SLES 12 I saw many error messages similar to these:
WARNING: Device for PV x5zvM9-l42o-ifVM-zMWC-NgT5-dZ8G-afApuY not found or rejected by a filter.
...
Couldn't find device with uuid x5zvM9-l42o-ifVM-zMWC-NgT5-dZ8G-afApuY
...
  There are 24 physical volumes missing.
WARNING: Couldn't find all devices for LV vg0/db2db_db2bmtg0_db2actlog while checking used and assumed devices.
...
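(The "rejected by a filter" wording points at the devices/filter, or
global_filter, setting in /etc/lvm/lvm.conf on SLES 12, so that is worth
checking, e.g.:)

  # show any active filter lines in lvm.conf
  grep -E '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf
  # run a verbose scan and see which devices the filter rejects
  pvscan -vvv 2>&1 | grep -i filter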

3) Many error messages came from the old, unused, and corrupt VG (vg0) and
its LVs and PVs, so I deleted these (the PVs and LVs of vg0).
  After another reboot, all the error messages disappeared (on SLES 12 SP4,
after the upgrade).
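(Removing a VG with missing PVs usually needs roughly the steps below; the
pvremove device name is only an example:)

  # drop references to the missing PVs, then remove the VG itself
  vgreduce --removemissing --force vg0
  vgchange -an vg0
  vgremove vg0
  # wipe the LVM label from any remaining old PVs (example device name)
  pvremove /dev/dasdy1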

4) At this point I checked every file system (~80 of them, with fsck), and
every file system was clean.
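(With ~80 file systems a loop over the LV device paths helps; a read-only
check, assuming the LVs are not mounted:)

  # read-only fsck over every LV in the new VG
  for dev in $(lvs --noheadings -o lv_path vg1); do
      fsck -n "$dev"
  done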

5) I refreshed the LVM metadata cache (lvmetad).
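(On SLES 12 the lvmetad cache is repopulated with pvscan; as far as I know
the relevant commands are:)

  # check that the metadata caching daemon runs, then refill its cache
  systemctl status lvm2-lvmetad.service
  pvscan --cache --activate ay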

6) I booted this Linux system about 10 more times as a test, and I saw
similar error messages during every boot:
[FAILED] Failed to start LVM2 PV scan on device 94:617.
See 'systemctl status lvm2-pvscan@94:617.service' for details.
[FAILED] Failed to start LVM2 PV scan on device 94:517.
See 'systemctl status lvm2-pvscan@94:517.service' for details.
[FAILED] Failed to start LVM2 PV scan on device 94:473.
See 'systemctl status lvm2-pvscan@94:473.service' for details.
But at first the PVs, LVs, and FSs (file systems) were still good (no error
messages).
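(Major number 94 is the DASD block major, so every failed unit maps to one
DASD partition; after boot the details can be pulled like this:)

  # status and journal of one failed pvscan unit from the current boot
  systemctl status lvm2-pvscan@94:617.service
  journalctl -b -u lvm2-pvscan@94:617.service
  # check whether all DASDs came online at all
  lsdasd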

7) After some reboots, I saw the same error messages about the new VG (vg1)
as above (in step 2) about the old, faulty VG (vg0)!
This means that one or more physical volumes of the new VG were missing
from the Linux system.
My summary at this point: the situation is better than at the first upgrade
attempt, but LVM is still unstable on SLES 12, so the file systems are
sometimes available and sometimes not (it changes with every boot, from
good to faulty and back). The LVM metadata appears to be permanently
corrupted.
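(If the metadata really were corrupt, the archived copies that LVM keeps
under /etc/lvm/archive could be checked and restored from; e.g.:)

  # consistency check of the VG metadata
  vgck vg1
  # list the archived metadata versions kept for vg1
  vgcfgrestore --list vg1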

8) I created a backup of the faulty SLES 12 to a new disk, but the Linux
system has been restored to SLES 11 SP4, which provides stable LVs and FSs,
so the system is usable.

  As we have seen, SLES 12 is more sensitive to these errors than SLES 11,
so the file system errors may surface more easily on SLES 12.
But the customer is very impatient, because they want a stable, well
working SLES 12.

Could somebody please tell me what should be fixed to get stable PVs, LVs,
and FSs on SLES 12?

Regards,

Csaba Polgar

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390
