genegr opened a new pull request, #13059: URL: https://github.com/apache/cloudstack/pull/13059
### Description

Registering an adaptive-plugin-backed managed primary pool currently fails with `Capacity bytes not available from the storage provider, user provided capacity bytes must be specified`, even when `capacityBytes=` is actually passed to `createStoragePool`, whenever the provider cannot report capacity at that moment (for example, a FlashArray pod with no quota and no footprint yet, or a transient probe failure).

The root cause is in `AdaptiveDataStoreLifeCycleImpl.initialize()`: the user-supplied capacity was guarded behind `stats != null`, so any null stats caused a fall-through to the "no user capacity either" error branch even when the user *did* provide one. This change accepts the user-supplied value unconditionally and uses the provider stats only as an upper-bound sanity check when they are actually available. The "no user-provided capacity, no provider capacity" branch is preserved and still raises the same `InvalidParameterValueException`.

### Types of changes

- [x] Bugfix (non-breaking change which fixes an issue)

### Feature/Enhancement Scale or Bug Severity

Major for any deployment that uses the adaptive storage framework against a provider which cannot report capacity synchronously at pool-register time: registration will always fail regardless of what `capacityBytes` is passed.

### How Has This Been Tested?

Validated end-to-end on a 4.23-SNAPSHOT lab:

- Registered a FlashArray primary pool (`provider="Flash Array"`, `transport=nvme-tcp`) against an empty Purity pod with `capacitybytes=1099511627776` and `capacityiops=100000`. Before this change, the registration failed with the error above; after this change, the pool enters the `Up` state using the user-provided capacity.
- Registered a pool against a pod that *does* report stats; the pool's capacity comes from the provider as before, and the upper-bound check still rejects a user-supplied `capacityBytes` that exceeds the provider's capacity.
- Exercised a 20 GiB volume end-to-end (create, attach to a Rocky 9 VM, `mkfs.ext4` + SHA-256 write/verify, live-migrate between two KVM hosts with the data disk attached; no I/O interruption across the migration).

--
This is an automated message from the Apache Git Service.
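For illustration, the corrected capacity-selection behavior described above can be sketched as a small standalone method. This is a minimal sketch of the intended logic, not the actual CloudStack code: the class name, method name, simplified signature, and use of `IllegalArgumentException` in place of `InvalidParameterValueException` are all illustrative assumptions.

```java
// Sketch of the fixed selection logic in AdaptiveDataStoreLifeCycleImpl.initialize().
// Names and the simplified signature are illustrative, not CloudStack's actual API.
public class CapacityResolver {

    /**
     * Pick the pool capacity: use provider stats when available (sanity-checking
     * any user-supplied value against them), otherwise fall back to the
     * user-supplied value, and fail only when neither source provides one.
     */
    static long resolveCapacityBytes(Long userCapacityBytes, Long providerCapacityBytes) {
        if (providerCapacityBytes != null) {
            // Provider stats available: capacity comes from the provider, as before,
            // with the user value used only as an upper-bound sanity check.
            if (userCapacityBytes != null && userCapacityBytes > providerCapacityBytes) {
                throw new IllegalArgumentException(
                    "User provided capacity bytes exceed the provider's reported capacity");
            }
            return providerCapacityBytes;
        }
        if (userCapacityBytes != null) {
            // Fixed branch: accept the user-supplied value even when stats are null,
            // instead of falling through to the error below.
            return userCapacityBytes;
        }
        // Preserved branch: no capacity from either source.
        throw new IllegalArgumentException(
            "Capacity bytes not available from the storage provider, "
            + "user provided capacity bytes must be specified");
    }
}
```

Before the fix, the second branch was unreachable when stats were null, which is why registration failed even with `capacitybytes` set on the API call.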
