tsinik-dw commented on pull request #5335:
URL: https://github.com/apache/cloudstack/pull/5335#issuecomment-907182304
@shwstppr I saw your testing setup and adjusted the workflow accordingly
for my PoC. Keeping in mind that in production I cannot turn off the VMs while
upgrading ACS and XenServer, I followed these steps:
1. Deployed ACS 4.13.1 with two XenServer 7.0 hosts
2. Created several VMs (Windows 10 64-bit); XenOrchestra shows their BIOS
firmware setting as _bios_
3. Upgraded ACS using RPMs built from the add-bootmodetype-xen branch (reports
version CloudStack 4.15.2.0-SNAPSHOT) while keeping the VMs live
4. Upgraded the XenServer 7.0 hosts to XCP-ng 8.2 while keeping the VMs live
After migrating the VMs from the XenServer 7.0 host to the XCP-ng 8.2 host, I
saw that the BIOS firmware setting of the VMs changed from _bios_ to _(default)
bios_.
All VMs remained functional after the upgrade to XCP-ng 8.2 and rebooted
normally with no problems.
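In case it helps, this is roughly how I can check the hypervisor-side firmware
value for all VMs; a sketch using the XenAPI Python bindings (host address and
credentials are placeholders), on the assumption that an absent `firmware` key
in `HVM-boot-params` corresponds to the _(default) bios_ value XenOrchestra
shows:

```python
import XenAPI

# Connect to the pool master; host address and credentials are placeholders.
session = XenAPI.Session("https://xcp-host")
session.xenapi.login_with_password("root", "password")
try:
    for vm_ref in session.xenapi.VM.get_all():
        rec = session.xenapi.VM.get_record(vm_ref)
        if rec["is_a_template"] or rec["is_control_domain"]:
            continue  # skip templates and dom0
        # An absent "firmware" key seems to correspond to the "(default) bios"
        # value that XenOrchestra displays.
        firmware = rec["HVM_boot_params"].get("firmware", "(default) bios")
        print(f'{rec["name_label"]}: firmware={firmware}')
finally:
    session.xenapi.session.logout()
```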
To be honest, I couldn't understand exactly how this worked. Could the change
to the _(default) bios_ setting have been triggered by ACS during the reboot?
I hadn't checked the value of the BIOS firmware setting in my previous tests,
but I suppose it had been changed to UEFI there, which would explain why the
VMs couldn't start after a reboot. (I changed it to UEFI manually and saw that
the VMs indeed could not start again.)
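For reference, the same change can also be made directly on the hypervisor
side; a rough sketch using the XenAPI Python bindings (host address,
credentials and VM name are placeholders, and I believe the VM has to be
restarted for the change to take effect):

```python
import XenAPI

# Host address, credentials and VM name are placeholders.
session = XenAPI.Session("https://xcp-host")
session.xenapi.login_with_password("root", "password")
try:
    # Look the VM up by its name label (assumed unique here).
    vm_ref = session.xenapi.VM.get_by_name_label("win10-test")[0]
    # Clear any existing value, then set the firmware explicitly;
    # use "bios" instead of "uefi" to switch it back.
    session.xenapi.VM.remove_from_HVM_boot_params(vm_ref, "firmware")
    session.xenapi.VM.add_to_HVM_boot_params(vm_ref, "firmware", "uefi")
finally:
    session.xenapi.session.logout()
```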
Overall, the test was successful, but I'll run some more tests this weekend to
investigate the details further.
Lastly, I have a question. During VM creation I can set the bootType and
bootMode (through Advanced settings), but once the VM has been created there
doesn't appear to be a way to change them. Is it possible to change these
settings after the VM is created?
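For context, these are the settings I mean; a rough sketch of how they can be
passed at deploy time through the API, assuming the third-party `cs` Python
client and that the UI's bootType/bootMode correspond to the boottype/bootmode
parameters of deployVirtualMachine (endpoint, keys and UUIDs are placeholders):

```python
from cs import CloudStack

# Endpoint and API credentials are placeholders.
cs = CloudStack(
    endpoint="http://mgmt-server:8080/client/api",
    key="API_KEY",
    secret="SECRET_KEY",
)

# boottype is UEFI or BIOS; bootmode is LEGACY or SECURE (for UEFI).
vm = cs.deployVirtualMachine(
    zoneid="ZONE_UUID",
    templateid="TEMPLATE_UUID",
    serviceofferingid="OFFERING_UUID",
    boottype="UEFI",
    bootmode="LEGACY",
)
print(vm)
```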