asheswook opened a new issue, #11273:
URL: https://github.com/apache/cloudstack/issues/11273
## Description
In a CloudStack environment using KVM, I have set up a Shared Mountpoint
primary storage backed by DRBD + LVM and formatted with GFS2. While the primary
storage is detected and shows as "Up" in the UI, it doesn't actually work as
primary storage.
- The system repeatedly attempts to create the system VMs, and every attempt remains stuck in the "Starting" state.
- No files are being created on the storage path even though the pool
appears online.
- I followed the official CloudStack quick start guide and the KVM host setup guide; all settings look correct.
- I have watched this problem for a few days; it is not resolved by rebooting or by waiting.
### What I have checked
- Permissions on the mountpoint are 777.
- SELinux is in permissive mode (`setenforce 0`).
- GFS2 status: GFS2 is mounted and functioning on both nodes.
- DRBD status: DRBD volume is Primary/UpToDate on both nodes.
- Read/write test: both nodes can successfully read and write to the shared mountpoint (`/mnt/ssd_primary`).
- Creating a volume manually via libvirt (virsh) works perfectly; the volume file appears under `/mnt/ssd_primary` (see the sketch after this list).
- No firewalld or iptables rules are in place.
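Roughly what the manual libvirt test looked like (the volume name and size here are just examples, not the exact ones I used):
```
# create a small test volume directly in the pool libvirt manages for this storage
virsh vol-create-as ebdbe393-d6c1-4f2c-963b-5dd5a796e90f test-vol.qcow2 1G --format qcow2

# the file shows up on the shared mountpoint
ls -l /mnt/ssd_primary/

# clean up
virsh vol-delete test-vol.qcow2 --pool ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
```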
### Observations
The following log entries repeat in agent.log:
```
2025-07-23 19:37:29,923 INFO [kvm.storage.LibvirtStorageAdaptor]
(AgentRequest-Handler-5:[]) (logid:) Asking libvirt to refresh storage pool
ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
2025-07-23 19:40:33,085 INFO [kvm.storage.LibvirtStorageAdaptor]
(AgentRequest-Handler-1:[]) (logid:) Trying to fetch storage pool
ebdbe393-d6c1-4f2c-963b-5dd5a796e90f from libvirt
2025-07-23 19:40:33,086 INFO [kvm.storage.LibvirtStorageAdaptor]
(AgentRequest-Handler-1:[]) (logid:) Asking libvirt to refresh storage pool
ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
2025-07-23 19:41:34,138 INFO [kvm.storage.LibvirtStorageAdaptor]
(AgentRequest-Handler-2:[]) (logid:) Trying to fetch storage pool
ebdbe393-d6c1-4f2c-963b-5dd5a796e90f from libvirt
```
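From these lines, the agent appears to be doing little more than asking libvirt to look up and refresh the pool. If it helps, the equivalent operations can be reproduced (and timed) by hand, using the UUID from the log above:
```
# equivalent of the "Trying to fetch" / "Asking libvirt to refresh" log lines
time virsh pool-info ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
time virsh pool-refresh ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
```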
Checking the pool directly with virsh:
```
virsh pool-info ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
State: running
Capacity: 1.73 TiB
Allocation: 259.22 MiB
Available: 1.73 TiB
virsh vol-list ebdbe393-d6c1-4f2c-963b-5dd5a796e90f
(no volumes)
```
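For completeness, I believe the pool definition held by libvirt can be compared against the expected mountpoint like this (again with the UUID from above):
```
# confirm the pool's target path is the shared mountpoint
virsh pool-dumpxml ebdbe393-d6c1-4f2c-963b-5dd5a796e90f | grep -A2 '<target>'

# confirm what the OS actually has mounted at that path
findmnt /mnt/ssd_primary
```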
### Environment
- Rocky Linux 9.6
- KVM
- CloudStack 4.20.1.0
- DRBD 9.2.1.4, LVM, GFS2 (I don't think this layer is the problem; I have repeatedly verified that volumes can be created and that both nodes read and write without locking issues; the checks I used are sketched after the package list below)
```
[root@node1 ~]# rpm -qa | grep -E "cloudstack|libvirt"
cloudstack-common-4.20.1.0-1.noarch
cloudstack-management-4.20.1.0-1.noarch
libvirt-libs-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-log-10.10.0-7.3.el9_6.x86_64
libvirt-client-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-common-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-core-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-rbd-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-nwfilter-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-network-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-lock-10.10.0-7.3.el9_6.x86_64
python3-libvirt-10.10.0-1.el9.x86_64
libvirt-client-qemu-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-plugin-lockd-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-config-network-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-config-nwfilter-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-logical-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-mpath-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-scsi-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-disk-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-nodedev-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-interface-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-secret-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-proxy-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-qemu-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-iscsi-10.10.0-7.3.el9_6.x86_64
libvirt-daemon-driver-storage-10.10.0-7.3.el9_6.x86_64
libvirt-10.10.0-7.3.el9_6.x86_64
cloudstack-agent-4.20.1.0-1.noarch
```
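The DRBD/GFS2 checks mentioned above were roughly the following (output omitted; the test file names are just examples):
```
# DRBD resources should be Primary/UpToDate on both nodes
drbdadm status

# GFS2 should be mounted at the primary storage path on both nodes
findmnt -t gfs2

# basic concurrent read/write smoke test from each node
echo "hello from $(hostname)" > /mnt/ssd_primary/rw-test-$(hostname)
cat /mnt/ssd_primary/rw-test-*
```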