Since this is dependent on specific versions, I'm not clear on whether
there was any difference between persistent and non-persistent pools. It
was framed as though it was the CloudStack change that broke things, but
perhaps it was the libvirt update. If an NFS mount point is in use when a
*defined* NFS pool is started with an affected version, does it also fail?
I'm just trying to consider the case where we go back to persistently
defined storage and whether or not that would fix the issue. I will know
when I get a broken one to try (I will attempt stock CentOS next time).
Either way, we can probably assume that whether or not it currently works
for persistent pools, if someone on the libvirt dev team saw fit to add
the check in the pool.create() case, they could well add the same check
when starting persistent pools in the future.
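
For reference, here is the distinction I'm drawing, in libvirt-java terms.
This is just a sketch; the pool XML is a placeholder, and conn/poolXml
stand in for whatever the agent already has on hand:

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    Connect conn = new Connect("qemu:///system");
    String poolXml = "<pool type='netfs'>...</pool>"; // usual netfs definition

    // Transient pool: created and started in one step (virsh pool-create).
    // This is the path where the newer libvirt added the mount point check.
    StoragePool transientPool = conn.storagePoolCreateXML(poolXml, 0);

    // Persistent pool: defined, then started (virsh pool-define + pool-start).
    // Whether affected versions apply the same check here is the open question.
    StoragePool persistentPool = conn.storagePoolDefineXML(poolXml, 0);
    persistentPool.create(0);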

Just brainstorming, but it seems we should be able to handle the libvirt
'mount point in use' error somehow.

One idea, as mentioned, is to 'umount -l' the mount point, which should
keep existing activity working and allow the pool to be recreated, but it
just seems messy/hackish to me. I'd be OK with it though, because I think
it would make things 'just work' for the end user.
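
Roughly what I'm picturing in the agent; this is a sketch with a
hypothetical helper, and the substring match is an assumption about what
the libvirt error text actually says:

    import java.io.IOException;
    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    StoragePool createNetfsPool(Connect conn, String poolXml, String mountPoint)
            throws LibvirtException, IOException, InterruptedException {
        try {
            return conn.storagePoolCreateXML(poolXml, 0);
        } catch (LibvirtException e) {
            // Assumption: the failure message mentions the path being in use.
            if (e.getMessage() != null && e.getMessage().contains("in use")) {
                // Lazy unmount detaches the path immediately while letting
                // in-flight I/O on the old mount keep working.
                new ProcessBuilder("umount", "-l", mountPoint).start().waitFor();
                return conn.storagePoolCreateXML(poolXml, 0);
            }
            throw e;
        }
    }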

Another option to consider might be to handle the error by switching to a
dir-based pool, like local storage uses, when the 'already mounted' error
is caught. I don't think a dir pool cares whether something is mounted at
the target path, so that should also allow things to just work. The
downsides are that the NFS mount won't be cleaned up when the pool is
removed (a small problem compared to now) and that it may cause confusion
if someone goes inspecting their pool details.
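
Something like this, as a sketch (poolName/mountPoint are hypothetical
variables for the pool's existing name and mount point; the dir pool XML
mirrors what a local storage pool looks like):

    // Fall back to a dir pool over the existing NFS mount. Assumption:
    // starting a dir pool doesn't care that the path is a mount point.
    String dirPoolXml =
        "<pool type='dir'>" +
          "<name>" + poolName + "</name>" +
          "<target><path>" + mountPoint + "</path></target>" +
        "</pool>";
    StoragePool pool = conn.storagePoolCreateXML(dirPoolXml, 0);
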
On Sep 17, 2013 6:28 AM, "Marcus Sorensen (JIRA)" <j...@apache.org> wrote:

>
>     [
> https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13769475#comment-13769475]
>
> Marcus Sorensen commented on CLOUDSTACK-3565:
> ---------------------------------------------
>
> Ok, now I understand why it wasn't caught as an issue before. So is this
> preemptive, preparing for when newer libvirtd versions are included in
> supported platforms, or does it maybe affect CentOS 6.4? What are the
> criteria to reproduce?
> On Sep 17, 2013 3:59 AM, "Wido den Hollander (JIRA)" <j...@apache.org> wrote:
>
>
>
> > Restarting libvirtd service leading to destroy storage pool
> > -----------------------------------------------------------
> >
> >                 Key: CLOUDSTACK-3565
> >                 URL:
> https://issues.apache.org/jira/browse/CLOUDSTACK-3565
> >             Project: CloudStack
> >          Issue Type: Bug
> >      Security Level: Public(Anyone can view this level - this is the
> default.)
> >          Components: KVM
> >    Affects Versions: 4.2.0
> >         Environment: KVM
> > Branch 4.2
> >            Reporter: Rayees Namathponnan
> >            Assignee: Marcus Sorensen
> >            Priority: Blocker
> >              Labels: documentation
> >             Fix For: 4.2.0
> >
> >
> > Steps to reproduce
> > Step 1: Create a CloudStack setup on KVM
> > Step 2: From the KVM host, check "virsh pool-list"
> > Step 3: Stop and start the libvirtd service
> > Step 4: Check "virsh pool-list" again
> > Actual Result
> > "virsh pool-list" is blank after restarting the libvirtd service
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
> > 41b632b5-40b3-3024-a38b-ea259c72579f active     no
> > 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
> > fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
> > [root@Rack2Host12 agent]# service cloudstack-agent stop
> > Stopping Cloud Agent:
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
> > 41b632b5-40b3-3024-a38b-ea259c72579f active     no
> > 469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
> > fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
> > [root@Rack2Host12 agent]# virsh list
> >  Id    Name                           State
> > ----------------------------------------------------
> > [root@Rack2Host12 agent]# service libvirtd stop
> > Stopping libvirtd daemon:                                  [  OK  ]
> > [root@Rack2Host12 agent]# service libvirtd start
> > Starting libvirtd daemon:                                  [  OK  ]
> > [root@Rack2Host12 agent]# virsh pool-list
> > Name                 State      Autostart
> > -----------------------------------------
>
> --
> This message is automatically generated by JIRA.
> If you think it was sent incorrectly, please contact your JIRA
> administrators
> For more information on JIRA, see: http://www.atlassian.com/software/jira
>
