harikrishna-patnala commented on code in PR #7799:
URL: https://github.com/apache/cloudstack/pull/7799#discussion_r1281775568
##########
engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java:
##########
@@ -74,18 +74,19 @@ private long getGuestOsIdFromHypervisorMapping(GuestOSHypervisorMapping mapping)
     }
     public void addGuestOsAndHypervisorMappings(long categoryId, String displayName, List<GuestOSHypervisorMapping> mappings) {
-        if (!addGuestOs(categoryId, displayName)) {
-            LOG.warn("Couldn't add the guest OS with category id: " + categoryId + " and display name: " + displayName);
-            return;
-        }
-
-        if (CollectionUtils.isEmpty(mappings)) {
-            return;
-        }
-
         long guestOsId = getGuestOsId(categoryId, displayName);
         if (guestOsId == 0) {
             LOG.debug("No guest OS found with category id: " + categoryId + " and display name: " + displayName);
+            if (!addGuestOs(categoryId, displayName)) {
+                LOG.warn("Couldn't add the guest OS with category id: " + categoryId + " and display name: " + displayName);
+                return;
+            }
+            guestOsId = getGuestOsId(categoryId, displayName);
+        } else {
+            // TODO: update is_user_defined to false
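For context, the diff above reorders the logic so that the guest OS is looked up first and addGuestOs runs only when no row exists. A minimal sketch of that flow, with a Map standing in for the guest_os table (class and field names here are hypothetical, not the actual GuestOsMapper/DAO API):

```java
import java.util.HashMap;
import java.util.Map;

public class GuestOsAddSketch {
    // stands in for the guest_os table in this sketch
    static final Map<String, Long> guestOsTable = new HashMap<>();
    static long nextId = 1;

    static String key(long categoryId, String displayName) {
        return categoryId + "/" + displayName;
    }

    // returns 0 when no guest OS row matches, mirroring getGuestOsId()
    static long getGuestOsId(long categoryId, String displayName) {
        return guestOsTable.getOrDefault(key(categoryId, displayName), 0L);
    }

    // lookup first; insert only on a miss, so re-running the upgrade
    // step does not create duplicate guest OS entries
    static long getOrAddGuestOs(long categoryId, String displayName) {
        long id = getGuestOsId(categoryId, displayName);
        if (id == 0) {
            id = nextId++;
            guestOsTable.put(key(categoryId, displayName), id);
        }
        return id;
    }

    public static void main(String[] args) {
        long first = getOrAddGuestOs(5, "Red Hat Enterprise Linux 9");
        long second = getOrAddGuestOs(5, "Red Hat Enterprise Linux 9");
        System.out.println(first == second); // prints "true": same row reused
    }
}
```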
Review Comment:
@DaanHoogland I'm thinking of not doing any merge operation, as it may also
affect the old entries in older versions.
I think the problem here is only that we have duplicate guest OS entries,
which is not a critical/major issue.
My proposal is to leave them as is and make the fix for future entries only.
Regarding "Red Hat Enterprise Linux 9" and other similar entries, instead of
doing a merge operation, I would suggest adding new mapping entries for both
names (RHEL and RHEL9*), so that whichever of the guest OSs the operator
uses, VM deployment behaves the same (otherwise, if a mapping is not found,
CS may fall back to "default").
@weizhouapache @DaanHoogland
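The proposal above can be sketched as follows: register the same hypervisor guest OS type under both duplicate display names so a lookup by either name resolves identically instead of falling back to "default". This is an illustrative sketch with hypothetical names, with a Map standing in for the guest_os_hypervisor table, not the real GuestOsMapper API:

```java
import java.util.HashMap;
import java.util.Map;

public class DualMappingSketch {
    // stands in for the guest_os_hypervisor table in this sketch
    static final Map<String, String> hypervisorMapping = new HashMap<>();

    // add one mapping entry per display name, all pointing at the
    // same hypervisor guest OS type
    static void addMappingForAllNames(String hypervisorGuestOsName, String... displayNames) {
        for (String name : displayNames) {
            hypervisorMapping.put(name, hypervisorGuestOsName);
        }
    }

    // without a matching entry, deployment would use the "default" type
    static String resolve(String displayName) {
        return hypervisorMapping.getOrDefault(displayName, "default");
    }

    public static void main(String[] args) {
        addMappingForAllNames("rhel9_64Guest",
                "Red Hat Enterprise Linux 9", "Red Hat Enterprise Linux 9 (64-bit)");
        // either duplicate name resolves to the same hypervisor type
        System.out.println(resolve("Red Hat Enterprise Linux 9 (64-bit)")); // prints "rhel9_64Guest"
    }
}
```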
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]