nimbustech-lab opened a new issue, #12108:
URL: https://github.com/apache/cloudstack/issues/12108

   ### problem
   
   I’m trying to enforce a consistent CPU socket configuration across VMs by using the `cpu.corespersocket` setting.
   However, when this setting is enabled, live CPU scaling fails with the message:
   
   `unhandled exception`
   
   If I remove the `cpu.corespersocket` setting, the same VM live-scales successfully, so the issue is directly related to that parameter.
   
   In some environments (especially those with CPU-based licensing tied to sockets), it is important that the socket count stays fixed (e.g., always 2 sockets regardless of the number of cores assigned).
   
   At the moment, CloudStack does not seem to reliably honor or handle this setting during live scaling operations.
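
   For reference, here is a minimal sketch of the topology math as vSphere sees it (illustration only, using pyVmomi names rather than CloudStack's Java plugin; all values are placeholders). It shows why a fixed cores-per-socket value alone does not give a fixed socket count:

```python
# Illustration only: ESXi receives a total vCPU count and a cores-per-socket
# value; the socket count is derived as their quotient, so a fixed
# cores-per-socket value does NOT pin the socket count.
from pyVmomi import vim

spec = vim.vm.ConfigSpec()
spec.numCPUs = 8            # total vCPUs after a live scale (placeholder)
spec.numCoresPerSocket = 2  # what cpu.corespersocket = 2 maps to

sockets = spec.numCPUs // spec.numCoresPerSocket
print(f"ESXi presents {sockets} sockets x {spec.numCoresPerSocket} cores")
# -> 4 sockets x 2 cores; always-2-sockets would instead require
#    numCoresPerSocket = numCPUs / 2, recomputed on every scale operation.
```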
   
   ### versions
   
   Apache CloudStack: 4.22.0
   Hypervisor: VMware vCenter + ESXi
   Infrastructure: Standard VMware cluster, shared storage
   VM type: User instance with dynamic scaling enabled
   
   ### The steps to reproduce the bug
   
   1. Set the following setting on any VM/instance:
   > `cpu.corespersocket = 2`
   2. Start the VM/instance
   3. Live-scale the VM/instance to a larger compute offering (see the API sketch after these steps)
   4. The `Unhandled exception` error pops up
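
   For anyone reproducing step 3 via the API, a minimal sketch using the community `cs` Python client (endpoint, keys, and UUIDs are placeholders):

```python
# Repro sketch for step 3 using the community "cs" CloudStack client
# (pip install cs). All credentials and UUIDs below are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="https://cloudstack.example.com/client/api",  # placeholder
    key="API_KEY",        # placeholder
    secret="SECRET_KEY",  # placeholder
)

# With cpu.corespersocket set on the instance, this live-scale call is the
# one that fails with "unhandled exception"; without the setting it succeeds.
result = api.scaleVirtualMachine(
    id="VM-UUID",                       # placeholder: running instance
    serviceofferingid="OFFERING-UUID",  # placeholder: larger compute offering
)
print(result)
```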
   
   ### What to do about it?
   
   - Please confirm whether the `cpu.corespersocket` setting is fully supported for VMware live scaling.
   - If this is a bug, kindly update the VM reconfigure logic to correctly compute (see the sketch after this list):
   `cores_per_socket = cpu.corespersocket`
   `sockets = total_vcpu / cpu.corespersocket`
   - If this is a limitation, please advise whether CloudStack can support a feature to force a fixed socket count (e.g., always 2 sockets) regardless of vCPU changes.
   - This is important for environments that rely on socket-based licensing.
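
   To make the ask concrete, here is a sketch of the expected reconfigure math in Python (function names are mine, not CloudStack's actual code paths):

```python
# Sketch of the expected reconfigure math (function names are illustrative,
# not CloudStack's actual code).

def topology_from_cores_per_socket(total_vcpu: int, cores_per_socket: int):
    """Honor cpu.corespersocket: sockets = total_vcpu / cores_per_socket."""
    if cores_per_socket <= 0 or total_vcpu % cores_per_socket != 0:
        # Reject the scale request with a clear validation error instead of
        # surfacing an "unhandled exception" when the topology doesn't divide.
        raise ValueError(
            f"{total_vcpu} vCPUs not divisible by {cores_per_socket} cores/socket")
    return total_vcpu // cores_per_socket, cores_per_socket  # (sockets, cores)

def topology_from_fixed_sockets(total_vcpu: int, sockets: int):
    """Proposed alternative: keep the socket count fixed across scaling."""
    if sockets <= 0 or total_vcpu % sockets != 0:
        raise ValueError(f"{total_vcpu} vCPUs not divisible by {sockets} sockets")
    return sockets, total_vcpu // sockets  # (sockets, cores per socket)

# Example: scaling from 4 to 8 vCPUs with a value of 2
print(topology_from_cores_per_socket(8, 2))  # (4, 2) -> socket count grows
print(topology_from_fixed_sockets(8, 2))     # (2, 4) -> sockets stay at 2
```

   The second helper is the behavior the socket-based licensing use case actually needs.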

