Good point, we'd have to modify the destination XML, which is kind of hairy.
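
For context, roughly what that would mean on the destination side (a sketch
only; the VM name, destination host, and mountpoints below are placeholders,
and this isn't something the agent does today):

# dump the running domain's XML and point the disk source at the new mountpoint
virsh dumpxml i-2-34-VM > /tmp/i-2-34-VM.xml
sed -i 's|/mnt/OLD-POOL-UUID|/mnt/ALT-MOUNTPOINT|g' /tmp/i-2-34-VM.xml
# supply the edited XML for the destination as part of the live migration
virsh migrate --live i-2-34-VM qemu+tcp://dest-host/system --xml /tmp/i-2-34-VM.xml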

I actually can't reproduce this on Ubuntu 12.04 and the 4.2 RC. I mean,
'virsh pool-list' is blank as the bug reports, but so what? Is there a
separate bug we're discussing now?

I created NFS primary storage, launched a VM on it, then restarted
libvirt. I verified the pools were erased via 'virsh pool-list', and
then attempted to launch a new VM with the NFS storage tag. It
registered in libvirt and the VM is now running. My intent was to try
a 'umount -l', expecting that the existing VM's open file handles would
keep working and that it would allow a fresh mount of the primary
storage, but no issue was observed (a rough sketch of that check is
below, after the virsh output). Maybe I'm missing the root of the
issue reported.

root@devcloud-kvm-u:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a active     no  <--- nfs primary
609d4339-e66a-4298-909d-74dca7205a7b active     no
vg0                  active     no                      <--- clvm storage

root@devcloud-kvm-u:~# /etc/init.d/libvirt-bin restart
libvirt-bin stop/waiting
libvirt-bin start/running, process 7652

root@devcloud-kvm-u:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------

... wait for new vm to start

root@devcloud-kvm-u:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a active     no <--- nfs primary is back

... launch clvm storage-based vm, wait for it to start

root@devcloud-kvm-u:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
2fe9a944-505e-38cb-bf87-72623634be4a active     no
609d4339-e66a-4298-909d-74dca7205a7b active     no
vg0                  active     no   <--- clvm
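
For completeness, this is roughly the lazy-unmount check I had intended to
try, assuming CloudStack's default /mnt/<pool-uuid> mountpoint and a
placeholder NFS export (it turned out not to be needed, since the pool
re-registered cleanly on its own):

# lazily detach the stale mount; running VMs keep their open file handles
umount -l /mnt/2fe9a944-505e-38cb-bf87-72623634be4a
# remount the NFS export so new VMs get a fresh mount at the same path
mount -t nfs nfs-server:/export/primary /mnt/2fe9a944-505e-38cb-bf87-72623634be4a
# the pool should show up again in libvirt once the next VM starts
virsh pool-list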



On Mon, Sep 16, 2013 at 2:15 PM, Wido den Hollander <w...@widodh.nl> wrote:
>
>
> On 09/16/2013 07:16 PM, Marcus Sorensen wrote:
>>
>> I agree, it makes things more complicated. However, updating libvirt
>> should be a maintenance-mode thing; I've seen it lose track of VMs
>> between major libvirt upgrades and had to kill/restart all VMs to
>> re-register them with libvirt, so it's best to be in maintenance.
>>
>> If I remember right, the issues with persistent storage definitions
>> were primarily related to failure scenarios, things like a hard power
>> off of a host; the host would build up dozens of primary storage
>> definitions that it didn't need, creating unnecessary dependencies and
>> potential unnecessary outages (see the other email about what happens
>> when storage goes down).
>>
>> If Edison has a patch then that's great. One idea I had was that if
>> the pool fails to register due to the mountpoint already existing, we
>> could use an alternate mount point to register it. Old stuff would
>> keep working, and new stuff would use the new mount point.
>
>
> No! That will break migrations to machines which don't have that mountpoint.
> Backing devices with QCOW2 will also break in the long run.
>
> Please do not do that.
>
> Wido
>
>
>>
>> On Mon, Sep 16, 2013 at 3:43 AM, Wei Zhou (JIRA) <j...@apache.org> wrote:
>>>
>>>
>>>      [
>>> https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768197#comment-13768197
>>> ]
>>>
>>> Wei Zhou commented on CLOUDSTACK-3565:
>>> --------------------------------------
>>>
>>> Marcus,
>>>
>>> Changing the hosts to maintenance one by one makes upgrades/changes
>>> complicated.
>>> Edison has changed the source to support re-creating the storage pool
>>> in the fix for CLOUDSTACK-2729.
>>> However, the test failed in my environment.
>>>
>>> By the way, I did not see any other issues when I used CloudStack 4.0,
>>> where the storage pools were persistent.
>>> I do not know what issues it would lead to.
>>>
>>>> Restarting libvirtd service leading to destroy storage pool
>>>> -----------------------------------------------------------
>>>>
>>>>                  Key: CLOUDSTACK-3565
>>>>                  URL:
>>>> https://issues.apache.org/jira/browse/CLOUDSTACK-3565
>>>>              Project: CloudStack
>>>>           Issue Type: Bug
>>>>       Security Level: Public(Anyone can view this level - this is the
>>>> default.)
>>>>           Components: KVM
>>>>     Affects Versions: 4.2.0
>>>>          Environment: KVM
>>>> Branch 4.2
>>>>             Reporter: Rayees Namathponnan
>>>>             Assignee: Marcus Sorensen
>>>>             Priority: Blocker
>>>>               Labels: documentation
>>>>              Fix For: 4.2.0
>>>>
>>>>
>>>> Steps to reproduce
>>>> Step 1: Create a CloudStack setup on KVM
>>>> Step 2: From the KVM host, check "virsh pool-list"
>>>> Step 3: Stop and start the libvirtd service
>>>> Step 4: Check "virsh pool-list"
>>>> Actual Result
>>>> "virsh pool-list" is blank after restarting the libvirtd service
>>>> [root@Rack2Host12 agent]# virsh pool-list
>>>> Name                 State      Autostart
>>>> -----------------------------------------
>>>> 41b632b5-40b3-3024-a38b-ea259c72579f active     no
>>>> 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
>>>> fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
>>>> [root@Rack2Host12 agent]# service cloudstack-agent stop
>>>> Stopping Cloud Agent:
>>>> [root@Rack2Host12 agent]# virsh pool-list
>>>> Name                 State      Autostart
>>>> -----------------------------------------
>>>> 41b632b5-40b3-3024-a38b-ea259c72579f active     no
>>>> 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
>>>> fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
>>>> [root@Rack2Host12 agent]# virsh list
>>>>   Id    Name                           State
>>>> ----------------------------------------------------
>>>> [root@Rack2Host12 agent]# service libvirtd stop
>>>> Stopping libvirtd daemon:                                  [  OK  ]
>>>> [root@Rack2Host12 agent]# service libvirtd start
>>>> Starting libvirtd daemon:                                  [  OK  ]
>>>> [root@Rack2Host12 agent]# virsh pool-list
>>>> Name                 State      Autostart
>>>> -----------------------------------------
>>>
>>>
