On 31-08-2021 at 13:02, Mevludin Blazevic wrote:
Hi all,
I am trying to add Ceph RBD (Pacific) as a new primary storage for my fresh
CloudStack 4.15.1 installation. I currently have an NFS server running as
primary storage, and after connecting CloudStack to Ceph I would remove the
NFS server. Unfortunately, I run into the same problem no matter whether I add
the Ceph storage cluster-wide or zone-wide. The output of
/cloudstack/agent/agent.log is as follows:
2021-08-31 12:43:44,247 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-4:null) (logid:cb99bb9f) Asking libvirt to refresh
storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84
2021-08-31 12:44:40,699 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-5:null) (logid:cae1fff8) Attempting to create storage
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d (RBD) in libvirt
2021-08-31 12:44:40,701 WARN [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-5:null) (logid:cae1fff8) Storage pool
fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d was not found running in libvirt. Need to
create it.
2021-08-31 12:44:40,701 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-5:null) (logid:cae1fff8) Didn't find an existing storage
pool fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d by UUID, checking for pools with
duplicate paths
2021-08-31 12:44:44,286 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-3:null) (logid:725f7dcf) Trying to fetch storage pool
84aa6a27-0413-39ad-87ca-5e08078b9b84 from libvirt
2021-08-31 12:44:44,290 INFO [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-3:null) (logid:725f7dcf) Asking libvirt to refresh
storage pool 84aa6a27-0413-39ad-87ca-5e08078b9b84
2021-08-31 12:45:10,780 ERROR [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-5:null) (logid:cae1fff8) Failed to create RBD storage
pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the
pool 'cloudstack' exist?: No such file or directory
2021-08-31 12:45:10,780 ERROR [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-5:null) (logid:cae1fff8) Failed to create the RBD storage
pool, cleaning up the libvirt secret
2021-08-31 12:45:10,781 WARN [cloud.agent.Agent] (agentRequest-Handler-5:null)
(logid:cae1fff8) Caught:
com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool:
fc6d0942-21ac-3cd1-b9f3-9e158cf4d75d
at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1646)
at com.cloud.agent.Agent.processRequest(Agent.java:661)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
at com.cloud.utils.nio.Task.call(Task.java:83)
at com.cloud.utils.nio.Task.call(Task.java:29)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
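(For context: "failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?"
typically means libvirt could talk to the cluster, but librados found no pool
visible to that client under the exact name 'cloudstack'. The agent's request
corresponds roughly to defining the same RBD pool by hand on the KVM host; in the
sketch below only the MON address 192.168.1.4 is taken from this mail, while the
Ceph user 'client.cloudstack' is the example from the linked doc and the UUID and
key are placeholders.)

# Sketch only -- adjust the assumed values to your setup.
UUID=8c2b1a52-0b7e-4f19-9c3e-3d2a4b5c6d7e   # example UUID (from uuidgen)

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cloudstack secret</name>
  </usage>
</secret>
EOF
virsh secret-define secret.xml
# paste the base64 key that 'ceph auth get-key client.cloudstack' prints on a MON:
virsh secret-set-value --secret "$UUID" --base64 'AQ...replace-with-real-key...=='

cat > rbdpool.xml <<EOF
<pool type='rbd'>
  <name>rbd-test</name>
  <source>
    <host name='192.168.1.4' port='6789'/>
    <name>cloudstack</name>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='$UUID'/>
    </auth>
  </source>
</pool>
EOF
virsh pool-define rbdpool.xml
virsh pool-start rbd-test    # fails with the same IoCTX error if the pool name does not match
virsh pool-info rbd-test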
Details of my setup:
- Ceph Pacific is installed and configured in a test environment. Cluster
health is OK.
- The rbd pool and user were created as described in the Ceph docs:
https://docs.ceph.com/en/pacific/rbd/rbd-cloudstack/?highlight=cloudstack
Can you double-check there is a pool called 'cloudstack'?
$ ceph df
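If it is missing or spelled differently, that matches the IoCTX error above. To
list the pools, and (only if it really does not exist) create the pool and user
roughly as the linked doc does -- 'cloudstack' is just the doc's example name:

$ ceph osd lspools
$ ceph osd pool create cloudstack
$ rbd pool init cloudstack
$ ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=cloudstack'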
- The IP of my Ceph MON with the rbd pool is 192.168.1.4; the firewall is
disabled there.
I suggest you use a Round Robin DNS hostname pointing to all three MONs
of your Ceph cluster.
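As a hypothetical example (only 192.168.1.4 is from this thread; the name and the
other addresses are made up), that is simply several A records for one name:

ceph-mon.example.org.  IN A  192.168.1.4
ceph-mon.example.org.  IN A  192.168.1.5
ceph-mon.example.org.  IN A  192.168.1.6

You would then enter that hostname as the monitor address when adding the primary
storage, so the pool definition is not tied to a single MON.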
- I have also tried copying the keyring and ceph.conf from the monitor node to
the KVM machine (in the test environment I have only one KVM host), but the
problem stays the same.
Not needed. No configuration of Ceph is required on the KVM host.
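Your log shows libvirt already reaching the cluster, so the host side is probably
fine, but if you want to rule out missing RBD support there, two quick checks
(virsh pool-capabilities needs a reasonably recent libvirt):

$ ldconfig -p | grep librbd            # RBD client library installed?
$ virsh pool-capabilities | grep rbd   # libvirt built with the rbd storage backend?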
Wido
Do you have any ideas on how to resolve the problem?
Cheers,
Mevludin