This is an NFS-specific issue, I think. One thing to try as a test would be
"umount -l (storage pool mount)" before starting the agent after a libvirtd
restart. I still think the best route is to put the host in maintenance, but
a lazy unmount should let the NFS mount be re-registered in libvirt while
keeping existing handles to the pool working. We could perhaps do that in the
create-pool NFS code: if a pool's mount point is already in use but the pool
does not exist in libvirt, lazy-unmount it and re-create the pool. It's a
hack. In the end I think this is a libvirt issue that we get to work around:
inconsistent behavior between XML-defined and API-created storage pools.
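Roughly like this, as a manual test after restarting libvirtd. This is
untested, and the /mnt/<pool uuid> mount point is just an assumption about
where the agent has the NFS primary storage mounted, so substitute the real
mount path on the host:

    virsh pool-list                  # API-created pools are gone after the restart
    mount | grep /mnt/<pool uuid>    # the NFS mount itself should still be present
    umount -l /mnt/<pool uuid>       # lazy unmount: open handles keep working
    service cloudstack-agent start   # idea is the pool gets mounted and registered again on reconnect
    virsh pool-list                  # pool should be listed again

The point of -l (lazy) is that anything with volumes open on the mount keeps
its handles; the mount is only fully torn down once nothing references it.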
On Sep 11, 2013 3:36 AM, "Marcus Sorensen (JIRA)" <j...@apache.org> wrote:

>
>     [
> https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764160#comment-13764160]
>
> Marcus Sorensen commented on CLOUDSTACK-3565:
> ---------------------------------------------
>
> We don't want the pools to be persistent; it leads to other issues. If a
> host is going to be worked on (restarting libvirtd or anything else), the
> host needs to be in maintenance.
>
>
>
> > Restarting libvirtd service leading to destroy storage pool
> > -----------------------------------------------------------
> >
> >                 Key: CLOUDSTACK-3565
> >                 URL:
> https://issues.apache.org/jira/browse/CLOUDSTACK-3565
> >             Project: CloudStack
> >          Issue Type: Bug
> >      Security Level: Public(Anyone can view this level - this is the
> default.)
> >          Components: KVM
> >    Affects Versions: 4.2.0
> >         Environment: KVM
> > Branch 4.2
> >            Reporter: Rayees Namathponnan
> >            Assignee: Marcus Sorensen
> >            Priority: Blocker
> >              Labels: documentation
> >             Fix For: 4.2.0
> >
> >
> > Steps to reproduce
> > Step 1 : Create a CloudStack setup on KVM
> > Step 2 : From the KVM host, check "virsh pool-list"
> > Step 3 : Stop and start the libvirtd service
> > Step 4 : Check "virsh pool-list"
> > Actual Result
> > "virsh pool-list" is blank after restarting the libvirtd service
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
> > 41b632b5-40b3-3024-a38b-ea259c72579f active     no
> > 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
> > fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
> > [root@Rack2Host12 agent]# service cloudstack-agent stop
> > Stopping Cloud Agent:
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
> > 41b632b5-40b3-3024-a38b-ea259c72579f active     no
> > 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
> > fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
> > [root@Rack2Host12 agent]# virsh list
> >  Id    Name                           State
> > ----------------------------------------------------
> > [root@Rack2Host12 agent]# service libvirtd stop
> > Stopping libvirtd daemon:                                  [  OK  ]
> > [root@Rack2Host12 agent]# service libvirtd start
> > Starting libvirtd daemon:                                  [  OK  ]
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
>
