chandrakantrai commented on issue #11994:
URL: https://github.com/apache/cloudstack/issues/11994#issuecomment-3724012616
I have implemented a fix in `KubernetesClusterScaleWorker.java` (lines 358-389) to address the cluster scaling issue. The updated code is as follows:
```java
private void scaleKubernetesClusterOffering(KubernetesClusterNodeType nodeType, ServiceOffering serviceOffering,
        boolean updateNodeOffering, boolean updateClusterOffering) throws CloudRuntimeException {
    // Do not validate for clusters in Created state (no VMs exist yet to upgrade).
    // Validation will be done later only if we are actually upgrading existing VMs.
    List<KubernetesCluster.State> scalingStates = List.of(KubernetesCluster.State.Scaling,
            KubernetesCluster.State.ScalingStoppedCluster);
    if (!scalingStates.contains(kubernetesCluster.getState())) {
        stateTransitTo(kubernetesCluster.getId(), KubernetesCluster.Event.ScaleUpRequested);
    }
    if (KubernetesCluster.State.Created.equals(originalState)) {
        kubernetesCluster = updateKubernetesClusterEntryForNodeType(null, nodeType, serviceOffering,
                updateNodeOffering, updateClusterOffering);
        return;
    }
    final long size = getNodeCountForType(nodeType, kubernetesCluster);
    List<KubernetesClusterVmMapVO> vmList =
            kubernetesClusterVmMapDao.listByClusterIdAndVmType(kubernetesCluster.getId(), nodeType);
    final long tobeScaledVMCount = Math.min(vmList.size(), size);
    // Only validate hypervisor compatibility when upgrading existing VMs.
    // Horizontal scaling (adding new nodes) on KVM must not be blocked here.
    if (tobeScaledVMCount > 0) {
        validateKubernetesClusterScaleOfferingParameters();
    }
    for (long i = 0; i < tobeScaledVMCount; i++) {
        KubernetesClusterVmMapVO vmMapVO = vmList.get((int) i);
        UserVmVO userVM = userVmDao.findById(vmMapVO.getVmId());
        boolean result = false;
        try {
            result = userVmManager.upgradeVirtualMachine(userVM.getId(), serviceOffering.getId(),
                    new HashMap<String, String>());
        } catch (RuntimeException | ResourceUnavailableException | ManagementServerException
                | VirtualMachineMigrationException e) {
            logTransitStateAndThrow(Level.ERROR,
                    String.format("Scaling Kubernetes cluster : %s failed, unable to scale cluster VM : %s due to %s",
                            kubernetesCluster.getName(), userVM.getDisplayName(), e.getMessage()),
                    kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed, e);
        }
        if (!result) {
            logTransitStateAndThrow(Level.WARN,
                    String.format("Scaling Kubernetes cluster : %s failed, unable to scale cluster VM : %s",
                            kubernetesCluster.getName(), userVM.getDisplayName()),
                    kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
        }
        if (System.currentTimeMillis() > scaleTimeoutTime) {
            logTransitStateAndThrow(Level.WARN,
                    String.format("Scaling Kubernetes cluster : %s failed, scaling action timed out",
                            kubernetesCluster.getName()),
                    kubernetesCluster.getId(), KubernetesCluster.Event.OperationFailed);
        }
    }
    kubernetesCluster = updateKubernetesClusterEntryForNodeType(null, nodeType, serviceOffering,
            updateNodeOffering, updateClusterOffering);
}
```
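The core decision the fix makes is that offering validation (e.g. hypervisor compatibility) only runs when there is at least one existing VM to upgrade, so horizontal scale-up of a freshly created cluster is never blocked. That logic can be sketched in isolation (the class and method names below are illustrative only, not CloudStack APIs):

```java
public class ScaleValidationSketch {
    // Mirrors the fix: the number of VMs to upgrade is bounded by both the
    // current VM list size and the requested node count for the node type.
    static long toBeScaledVmCount(int existingVmCount, long requestedSize) {
        return Math.min(existingVmCount, requestedSize);
    }

    // Validation should only run when at least one existing VM will
    // actually be upgraded to the new service offering.
    static boolean shouldValidate(int existingVmCount, long requestedSize) {
        return toBeScaledVmCount(existingVmCount, requestedSize) > 0;
    }

    public static void main(String[] args) {
        // Cluster in Created state: no VMs yet, validation is skipped,
        // so scaling on KVM is not blocked.
        System.out.println(shouldValidate(0, 3)); // false
        // Vertical scaling of existing nodes: validation runs.
        System.out.println(shouldValidate(3, 3)); // true
    }
}
```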
**Status**
- **Confirmed**: Scale-up and scale-down are working correctly on KVM hypervisors.
- **Pending**: Testing on other hypervisors is still required.
**Testing Instructions**
If you would like to test this fix, I have built a patched `cloudstack-4.22.0.0.jar` file (linked below). You can apply it to your CloudStack Management Server using the following steps:
1. Stop the CloudStack management service.
2. Back up your existing JAR file: `cp /usr/share/cloudstack-management/lib/cloudstack-4.22.0.0.jar /usr/share/cloudstack-management/lib/cloudstack-4.22.0.0.jar.bak`
3. Replace the existing JAR file with the provided patched version.
4. Start the CloudStack management service.
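The steps above can be sketched as a shell snippet (this assumes a systemd-based install with the standard `cloudstack-management` service name and library path; adjust both to match your environment, and note the patched JAR filename/location is an assumption):

```shell
LIB_DIR=/usr/share/cloudstack-management/lib
JAR=cloudstack-4.22.0.0.jar

sudo systemctl stop cloudstack-management
# Back up the original JAR before overwriting it.
sudo cp "$LIB_DIR/$JAR" "$LIB_DIR/$JAR.bak"
# Install the patched JAR (assumed to be in the current directory).
sudo cp "./$JAR" "$LIB_DIR/$JAR"
sudo systemctl start cloudstack-management
```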
**Link for patched JAR file**: https://github.com/chandrakantrai/cloudstack4.22.0-Patch/blob/main/cloudstack-4.22.0.0.jar
@weizhouapache, thank you for pointing toward the relevant code section.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]