Hi Neal,
check the RHEL 7 Installation Guide; there is a section describing how
to add DASDs persistently:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/chap-post-installation-configuration-s390#sect-post-installation-dasds-setting-online-persistently-s390

Also make sure to run zipl after regenerating the initramfs: zipl
writes a boot map pointing at the physical disk blocks of the kernel
and initramfs, so a rebuilt image invalidates the old map. That said,
updating the boot parameters in /etc/zipl.conf and running zipl was
sufficient to extend my rootfs in the past, as described in the
Installation Guide; the initramfs did not have to be regenerated.
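
For reference, the relevant stanza in /etc/zipl.conf ends up looking
roughly like this (the kernel version, volume group name, and the
0.0.0100 bus ID below are placeholders for whatever the system
actually uses):

    [linux]
        image=/boot/vmlinuz-<kernel-version>
        ramdisk=/boot/initramfs-<kernel-version>.img
        parameters="root=/dev/mapper/rhel_example-root rd.dasd=0.0.0100 rd.dasd=0.0.010a"

followed by

    # zipl -V

so that the boot loader actually picks up the change.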

Regards,
Jan

On 14. 12. 18 15:14, Neal Scheffler wrote:
I understand now that the PV NAME displayed by pvdisplay is not important.

This is on RHEL 7.

Here is the story so far.
I added a volume to the server at address 10a.
I added 010a to dasd.conf.
The Linux group added the new volume to the root vg.
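
For anyone following along, those steps amount to roughly the
following; the /dev/dasdb name is just what 010a happened to come up
as on this system (check lsdasd on yours):

    # echo "0.0.010a" >> /etc/dasd.conf   # persistent across reboots
    # chccwdev -e 0.0.010a                # bring the DASD online now
    # dasdfmt -b 4096 -d cdl /dev/dasdb   # low-level format (destroys data!)
    # fdasd -a /dev/dasdb                 # auto-create a single partition
    # pvcreate /dev/dasdb1                # label the partition as an LVM PV
    # vgextend rhel_enyza032 /dev/dasdb1  # grow the root VG onto it
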
Upon next reboot, it failed with

Buffer I/O error on dev dm-2, logical block 2269168, async page read
sysroot.mount mount process exited, code=exited status=1
Failed to mount /sysroot.

Restored server.
Added 010a to dasd.conf again.
I then added rd.dasd=0.0.010a to the kernel parms in zipl.conf and ran zipl.
On next reboot it failed with:

Warning: could not boot...
Warning: /dev/mapper/rhel_enyza032-root does not exist
Warning: /dev/rhel_enyza032/root does not exist
Warning: /dev/rhel_enyza032/swap does not exist

Restored server again.
Added 010a to dasd.conf again.
Then I realized that a copy of /etc/dasd.conf is baked into the
initramfs, and of course that copy did not have dasd 010a in it.
Regenerated initramfs.
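On RHEL 7 that is typically:

    # dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
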
Have not rebooted server yet to verify, but I think we should be ok.

I think regenerating the initramfs is the key when expanding the root
vg onto a volume that was not already in dasd.conf.
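
One way to check before rebooting (assuming lsinitrd from the dracut
package is available) is to print the copy of dasd.conf inside the
image and confirm 010a is now in it:

    # lsinitrd /boot/initramfs-$(uname -r).img -f etc/dasd.conf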

Neal


On Fri, Dec 14, 2018 at 12:59 AM Mark Post <mp...@suse.com> wrote:
On 12/12/2018 at 11:39 AM, Neal Scheffler <vmwiz...@gmail.com> wrote:
The Linux group expanded the root vg to a second physical dasd volume.
Doing a pvdisplay shows the PV Name of /dev/dasdm1
zipl.conf was updated to include the parm "rd.dasd=0.0.010a" since
this is part of the root file system.

After reboot, dasd 010a is now /dev/dasdb1 so the server reboot failed.

What is the best way to handle this?
Should they be using /dev/disk/by-path/ccw-0.0.010a-part1 to reference
the new dasd?
This doesn't make much sense to me.  LVM doesn't care about the names of block 
devices, whether /dev/dasdb1 or /dev/disk/by-path/ccw-0.0.010a-part1.  Apart 
from any block devices that are excluded in /etc/lvm/lvm.conf, LVM looks at all 
available block devices to see if they have any LVM metadata on them, and uses 
them regardless of their name.
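
You can see that directly: LVM identifies a PV by the UUID stored in
its metadata, not by its device node. Something like this (UUIDs
elided) shows the same VG assembled from whatever names the PVs got
this boot:

    # pvs -o pv_name,vg_name,pv_uuid
      PV           VG             PV UUID
      /dev/dasda2  rhel_enyza032  <uuid of first PV>
      /dev/dasdb1  rhel_enyza032  <uuid of second PV>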

Exactly _how_ did the reboot fail?


Mark Post

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
----------------------------------------------------------------------