Hi Daan and users,
Yes, I resolved the issue, but it is only a workaround. I have found that
the problem is reproducible, but I do not know whether I am the only one
to have noticed it, or whether there is a more elegant and definitive
solution.



On Fri, Jan 31, 2020 at 13:31 Daan Hoogland <daan.hoogl...@gmail.com> wrote:

> Sorry for not giving this any focus, Charlie.
> Do I read correctly that you resolved your issue?
>
> On Tue, Jan 28, 2020 at 2:46 PM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>
>> During this period I performed some tests and found a workaround for
>> the metrics problem.
>> I created a test environment in the laboratory with the same main
>> characteristics as production (ACS 4.11.2.0, all Ubuntu 16.04 OS, KVM,
>> NFS as shared storage, and advanced networking). Then I added a new
>> Ubuntu 18.04 primary storage.
>>
>> I created a new VM on the new storage server, and after a while the
>> metrics appeared as on the first storage, so the storage is working.
>>
>> I destroyed this VM, created a new one on the first (old) storage, and
>> then migrated it to the new storage.
>>
>> After migrating the disk from the first primary storage to the second
>> one, I encountered the same problem (the same error in agent.log:
>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>> volume:5a521eb0...) and the volume metrics did not appear or update.
>>
>> In this test environment I created a symbolic link in the filesystem of
>> the storage server, named after the uuid of the just-migrated disk (the
>> name that appears in the error message) and pointing to the file named
>> after its path field.
>>
>>
>> Here is an example to explain this better.
>>
>> I listed the volumes whose path differs from their uuid, which returns
>> the data of the migrated volumes:
>>
>> mysql> select id,uuid,path from volumes where uuid!=path;
>> +----+--------------------------------------+--------------------------------------+
>> | id | uuid                                 | path                                 |
>> +----+--------------------------------------+--------------------------------------+
>> | 10 | 5a521eb0-266f-4353-b4b2-1d63a483e5b5 | 165b92ba-68f1-4172-be35-bbe1d032cb7c |
>> | 12 | acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b | 56aa3961-dbc2-4f98-9246-a7497eef3214 |
>> +----+--------------------------------------+--------------------------------------+
>>
>> On the storage server I made the symbolic links (ln -s <path> <uuid>):
>>
>> # ln -s 165b92ba-68f1-4172-be35-bbe1d032cb7c 5a521eb0-266f-4353-b4b2-1d63a483e5b5
>> # ln -s 56aa3961-dbc2-4f98-9246-a7497eef3214 acb3bb29-9bac-4a2a-aefa-3c6ac1c2846b
>>
>> After doing this, I waited some time and then I found that metrics were
>> updated and the message in agent.log no longer appeared.
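>>
>> A minimal sketch of how these links could be generated in bulk (my
>> assumptions: the database is named "cloud", the mysql client can reach
>> it, and the primary storage is mounted at /srv/primary; it only echoes
>> the commands, so they can be reviewed before running):
>>
>> #!/bin/bash
>> # For every volume whose path no longer matches its uuid (i.e. the
>> # migrated volumes), print the "ln -s <path> <uuid>" command that
>> # creates a uuid-named symlink to the path-named file.
>> mysql -N -B cloud -e \
>>   "select path, uuid from volumes
>>    where uuid is not null and path is not null and uuid != path;" |
>> while read -r path uuid; do
>>   echo ln -s "/srv/primary/$path" "/srv/primary/$uuid"
>> done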
>>
>>
>> On Thu, Jan 23, 2020 at 17:53 Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>
>>> I still don't understand
>>> why com.cloud.hypervisor.kvm.storage.LibvirtStoragePool doesn't find
>>> the volume d93d3c0a-3859-4473-951d-9b5c5912c767, which exists as the
>>> file 39148fe1-842b-433a-8a7f-85e90f316e04...
>>>
>>> It's the only anomaly I have found. Where else can I look?
>>>
>>> On Mon, Jan 20, 2020 at 16:27 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>
>>>> But the record you sent earlier also says that it should be looking for
>>>> 39148fe1-842b-433a-8a7f-85e90f316e04, in the path field. The message might
>>>> be just that, a message.
>>>>
>>>> On Mon, Jan 20, 2020 at 3:35 PM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>>>
>>>>> I think that's the problem, because the logs I forwarded read:
>>>>> Can't find volume: d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>
>>>>> This is the volume ID of the migrated file, but it does not exist on
>>>>> primary storage (either the new one or the old one); it exists
>>>>> as 39148fe1-842b-433a-8a7f-85e90f316e04.
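>>>>>
>>>>> For example, a quick check on the storage server itself (the
>>>>> /srv/primary mount point comes from the "folder" column of the record
>>>>> quoted below; adjust if yours differs):
>>>>>
>>>>> # The uuid-named file is absent; the path-named file is present:
>>>>> ls -l /srv/primary/d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>> ls -l /srv/primary/39148fe1-842b-433a-8a7f-85e90f316e04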
>>>>>
>>>>> On Mon, Jan 20, 2020 at 12:35 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>>>
>>>>>> Also, can you see the primary storage being mounted?
>>>>>>
>>>>>>
>>>>>> On Mon, Jan 20, 2020 at 12:33 PM Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>>>>
>>>>>>> Why do you think that, Charlie? Is it in the logs like that somewhere?
>>>>>>>
>>>>>>> On Mon, Jan 20, 2020 at 9:52 AM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Daan,
>>>>>>>> in fact I do find the volume file
>>>>>>>> (39148fe1-842b-433a-8a7f-85e90f316e04) in the repository with id = 3
>>>>>>>> (the new one), but it seems to me that CloudStack goes looking for the
>>>>>>>> volume under its "old" name (path), which doesn't exist...
>>>>>>>>
>>>>>>>> On Sat, Jan 18, 2020 at 21:41 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Charlie,
>>>>>>>>> forgive my not replying in a timely manner. This might happen if
>>>>>>>>> the disk was migrated, in this case probably from the primary storage
>>>>>>>>> with id 1 to the one with id 3. The second record (pool_id 1) is
>>>>>>>>> removed, so you can ignore that one. The first seems legit. You should
>>>>>>>>> be able to find that disk on your primary storage with id 3.
>>>>>>>>> Hope this helps.
>>>>>>>>>
>>>>>>>>> On Thu, Jan 16, 2020 at 2:07 PM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Daan and users,
>>>>>>>>>> to explain better, I am showing the two records related to the disk
>>>>>>>>>> that generates the error message.
>>>>>>>>>>
>>>>>>>>>> The first query returns the data of the disk currently in use, whose
>>>>>>>>>> "uuid" equals the name searched for by
>>>>>>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk
>>>>>>>>>> and whose "path" field is different.
>>>>>>>>>>
>>>>>>>>>> In the second one we see that the "path" field equals the "uuid" of
>>>>>>>>>> the volume in use, but its "uuid" is null (and state=Expunged).
>>>>>>>>>>
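>>>>>>>>>> (As a compact sketch, both records can also be pulled at once, with
>>>>>>>>>> the column list trimmed for readability; the database name "cloud"
>>>>>>>>>> is the default and an assumption on my part:
>>>>>>>>>>
>>>>>>>>>> mysql cloud -e "select id, uuid, path, pool_id, state, removed
>>>>>>>>>>   from volumes
>>>>>>>>>>   where 'd93d3c0a-3859-4473-951d-9b5c5912c767' in (uuid, path);"
>>>>>>>>>>
>>>>>>>>>> The full rows follow.)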
>>>>>>>>>>
>>>>>>>>>> mysql> select * from volumes where uuid='d93d3c0a-3859-4473-951d-9b5c5912c767'\G
>>>>>>>>>> *************************** 1. row ***************************
>>>>>>>>>>                         id: 213
>>>>>>>>>>                 account_id: 2
>>>>>>>>>>                  domain_id: 1
>>>>>>>>>>                    pool_id: 3
>>>>>>>>>>               last_pool_id: 1
>>>>>>>>>>                instance_id: 148
>>>>>>>>>>                  device_id: 1
>>>>>>>>>>                       name: DATA-148
>>>>>>>>>>                       uuid: d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>>>>>>                       size: 53687091200
>>>>>>>>>>                     folder: /srv/primary
>>>>>>>>>>                       path: 39148fe1-842b-433a-8a7f-85e90f316e04
>>>>>>>>>>                     pod_id: NULL
>>>>>>>>>>             data_center_id: 1
>>>>>>>>>>                 iscsi_name: NULL
>>>>>>>>>>                    host_ip: NULL
>>>>>>>>>>                volume_type: DATADISK
>>>>>>>>>>                  pool_type: NULL
>>>>>>>>>>           disk_offering_id: 34
>>>>>>>>>>                template_id: NULL
>>>>>>>>>> first_snapshot_backup_uuid: NULL
>>>>>>>>>>                recreatable: 0
>>>>>>>>>>                    created: 2019-11-26 10:41:46
>>>>>>>>>>                   attached: NULL
>>>>>>>>>>                    updated: 2019-11-26 10:41:50
>>>>>>>>>>                    removed: NULL
>>>>>>>>>>                      state: Ready
>>>>>>>>>>                 chain_info: NULL
>>>>>>>>>>               update_count: 2
>>>>>>>>>>                  disk_type: NULL
>>>>>>>>>>     vm_snapshot_chain_size: NULL
>>>>>>>>>>                     iso_id: NULL
>>>>>>>>>>             display_volume: 1
>>>>>>>>>>                     format: QCOW2
>>>>>>>>>>                   min_iops: NULL
>>>>>>>>>>                   max_iops: NULL
>>>>>>>>>>              hv_ss_reserve: NULL
>>>>>>>>>>          provisioning_type: thin
>>>>>>>>>> 1 row in set (0.00 sec)
>>>>>>>>>>
>>>>>>>>>> mysql> select * from volumes where path='d93d3c0a-3859-4473-951d-9b5c5912c767'\G
>>>>>>>>>> *************************** 1. row ***************************
>>>>>>>>>>                         id: 212
>>>>>>>>>>                 account_id: 2
>>>>>>>>>>                  domain_id: 1
>>>>>>>>>>                    pool_id: 1
>>>>>>>>>>               last_pool_id: NULL
>>>>>>>>>>                instance_id: 148
>>>>>>>>>>                  device_id: 1
>>>>>>>>>>                       name: DATA-148
>>>>>>>>>>                       uuid: NULL
>>>>>>>>>>                       size: 53687091200
>>>>>>>>>>                     folder: NULL
>>>>>>>>>>                       path: d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>>>>>>                     pod_id: NULL
>>>>>>>>>>             data_center_id: 1
>>>>>>>>>>                 iscsi_name: NULL
>>>>>>>>>>                    host_ip: NULL
>>>>>>>>>>                volume_type: DATADISK
>>>>>>>>>>                  pool_type: NULL
>>>>>>>>>>           disk_offering_id: 34
>>>>>>>>>>                template_id: NULL
>>>>>>>>>> first_snapshot_backup_uuid: NULL
>>>>>>>>>>                recreatable: 0
>>>>>>>>>>                    created: 2019-11-26 10:38:23
>>>>>>>>>>                   attached: NULL
>>>>>>>>>>                    updated: 2019-11-26 10:41:50
>>>>>>>>>>                    removed: 2019-11-26 10:41:50
>>>>>>>>>>                      state: Expunged
>>>>>>>>>>                 chain_info: NULL
>>>>>>>>>>               update_count: 8
>>>>>>>>>>                  disk_type: NULL
>>>>>>>>>>     vm_snapshot_chain_size: NULL
>>>>>>>>>>                     iso_id: NULL
>>>>>>>>>>             display_volume: 1
>>>>>>>>>>                     format: QCOW2
>>>>>>>>>>                   min_iops: NULL
>>>>>>>>>>                   max_iops: NULL
>>>>>>>>>>              hv_ss_reserve: NULL
>>>>>>>>>>          provisioning_type: thin
>>>>>>>>>> 1 row in set (0.00 sec)
>>>>>>>>>>
>>>>>>>>>> On Tue, Jan 14, 2020 at 15:05 Daan Hoogland <daan.hoogl...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> So Charlie,
>>>>>>>>>>> d93d3c0a-3859-4473-951d-9b5c5912c767 is actually a valid disk?
>>>>>>>>>>> Does it exist on the backend NFS?
>>>>>>>>>>> And the pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c, does it exist
>>>>>>>>>>> both in CloudStack and on the backend?
>>>>>>>>>>>
>>>>>>>>>>> If both are answered with yes, you probably have a permissions
>>>>>>>>>>> issue, which might be in the network.
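>>>>>>>>>>>
>>>>>>>>>>> For instance, something like this on the KVM host could verify both
>>>>>>>>>>> (a sketch; it assumes virsh and showmount are installed, and the
>>>>>>>>>>> NFS server hostname below is only a placeholder):
>>>>>>>>>>>
>>>>>>>>>>> # Is the pool defined and running in libvirt?
>>>>>>>>>>> virsh pool-info 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>>>>>>>> # Is the volume visible inside the pool?
>>>>>>>>>>> virsh vol-list 9af0d1c6-85f2-3c55-94af-6ac17cb4024c | grep d93d3c0a
>>>>>>>>>>> # Does the NFS server export the share?
>>>>>>>>>>> showmount -e nfs-server.example.com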
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Tue, Jan 14, 2020 at 10:21 AM Charlie Holeowsky <charlie.holeow...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Daan and users,
>>>>>>>>>>>> the infrastructure is based on Linux. The management server, hosts
>>>>>>>>>>>> and storage are all Ubuntu 16.04, except the new storage server,
>>>>>>>>>>>> which is Ubuntu 18.04. The hypervisor is QEMU/KVM, with NFS used to
>>>>>>>>>>>> share the storage.
>>>>>>>>>>>>
>>>>>>>>>>>> We tried adding another primary storage, and when we created a VM
>>>>>>>>>>>> that uses it we found no problems: the statistics update and no
>>>>>>>>>>>> error messages appear.
>>>>>>>>>>>>
>>>>>>>>>>>> Here is the most complete excerpt from the agent logs:
>>>>>>>>>>>>
>>>>>>>>>>>> 2020-01-14 09:01:45,749 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-2:null) (logid:c3851d3a) Trying to fetch storage pool 171e90f4-511e-3b10-9310-b9eec0094be6 from libvirt
>>>>>>>>>>>> 2020-01-14 09:01:45,752 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-2:null) (logid:c3851d3a) Asking libvirt to refresh storage pool 171e90f4-511e-3b10-9310-b9eec0094be6
>>>>>>>>>>>> 2020-01-14 09:01:46,641 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:c3851d3a) Trying to fetch storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>>>>>>>>> 2020-01-14 09:01:46,643 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:c3851d3a) Asking libvirt to refresh storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>>>>>>>>> 2020-01-14 09:05:51,529 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:2765ff88) Trying to fetch storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>>>>>>>>> 2020-01-14 09:05:51,532 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:2765ff88) Asking libvirt to refresh storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>>>>>>>>> 2020-01-14 09:10:47,286 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-3:null) (logid:6d27b740) Trying to fetch storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>>>>>>>>> 2020-01-14 09:10:47,419 WARN  [cloud.agent.Agent] (agentRequest-Handler-3:null) (logid:6d27b740) Caught: com.cloud.utils.exception.CloudRuntimeException: Can't find volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>>>>>>>>>     at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>>>>>>>>>     at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>>>>>>>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>>>>>>>>     at java.lang.Thread.run(Thread.java:748)
>>>>>>>>>>>> 2020-01-14 09:20:48,390 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:ec72387b) Trying to fetch storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>>>>>>>>> 2020-01-14 09:20:48,536 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:ec72387b) Caught: com.cloud.utils.exception.CloudRuntimeException: Can't find volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>>>>>>>>>     at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>>>>>>>>>     at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>>>>>>>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>>>>>>>>     at java.lang.Thread.run(Thread.java:748)
>>>>>>>>>>>> 2020-01-14 09:25:15,259 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-5:null) (logid:1a7e082e) Trying to fetch storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>>>>>>>>> 2020-01-14 09:25:15,261 INFO  [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-5:null) (logid:1a7e082e) Asking libvirt to refresh storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> And here is the management server log:
>>>>>>>>>>>>
>>>>>>>>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.t.Request] (AgentManager-Handler-2:null) (logid:) Seq 15-705657766613619075: Processing:  { Ans: , MgmtId: 220777304233416, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: Can't find volume:d93d3c0a-3859-4473-951d-9b5c5912c767\n\tat com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)\n\tat com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)\n\tat com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)\n\tat com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)\n\tat com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)\n\tat com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)\n\tat com.cloud.agent.Agent.processRequest(Agent.java:645)\n\tat com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)\n\tat com.cloud.utils.nio.Task.call(Task.java:83)\n\tat com.cloud.utils.nio.Task.call(Task.java:29)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n","wait":0}}] }
>>>>>>>>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.t.Request] (StatsCollector-6:ctx-fd801d0a) (logid:ec72387b) Seq 15-705657766613619075: Received:  { Ans: , MgmtId: 220777304233416, via: 15(csdell017), Ver: v1, Flags: 10, { Answer } }
>>>>>>>>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.m.AgentManagerImpl] (StatsCollector-6:ctx-fd801d0a) (logid:ec72387b) Details from executing class com.cloud.agent.api.GetVolumeStatsCommand: com.cloud.utils.exception.CloudRuntimeException: Can't find volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>>>>>>>>>     at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>>>>>>>>>     at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>>>>>>>>>     at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>>>>>>>>>     at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>>>>>>>>>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>>>>>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>>>>>>>>     at java.lang.Thread.run(Thread.java:748)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 09/01/20 12:58, Daan Hoogland wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Charlie, I think you'll have to explain a bit more about your
>>>>>>>>>>>> environment to get an answer. What type of storage is it? Where did
>>>>>>>>>>>> you migrate the VM from and to? What type(s) of hypervisors are you
>>>>>>>>>>>> using? Though saying *the* agent logs suggests KVM, you are still
>>>>>>>>>>>> leaving people guessing a lot.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Daan
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Daan
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Daan
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Daan
>>>>>>
>>>>>
>>>>
>>>> --
>>>> Daan
>>>>
>>>
>
> --
> Daan
>
