Yes, I will revert the changes and test this. I just did an update and saw a
large number of changes to OpenStack. I'll apply that, disable the patch,
verify the failure, and then apply the proposed fix.
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
I'm using RBD (their volume interface) and not a file system.
However, it appears that this IS working -- I restarted the server rather than
just the services, and now I can correctly snapshot a running instance.
This is all on a 13.10 base system, so I must have forgotten to restart some
I added that to virt-aa-helper, but it's still broken. The libvirt/apparmor file
is below, as is the log output.
I don't know if it matters, but I'm using Ceph for the volume store.
The instance-specific apparmor file is:
# DO NOT EDIT THIS FILE DIRECTLY. IT IS MANAGED BY LIBVIRT.
I concur that the apparmor solution works. I haven't tried to figure out
if it's a race or not.
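For context, the apparmor workaround in the Ceph/RBD case usually amounts to letting the qemu guests read the Ceph config and keyring. The fragment below uses the conventional Ceph paths and abstraction file; it is an assumption about what the workaround looks like, not a copy of the exact change applied here:

```
# Assumed addition to /etc/apparmor.d/abstractions/libvirt-qemu
# (paths are the Ceph defaults; adjust to the local layout)
  /etc/ceph/ceph.conf r,
  /etc/ceph/ceph.client.*.keyring r,
```

followed by reloading the apparmor profiles (e.g. `sudo service apparmor reload`).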
Is this an OpenStack issue or an Ubuntu issue? Sounds like OpenStack to
me.
Daniel Speichert (dasp) - were you using a ceph/RBD backend?
Were you trying to snapshot a QCOW image? Try it with a RAW image.
OpenStack has issues snapshotting QCOW (I think there's a cinder bug filed on this).
I can create a snapshot of RAW images and create a volume from that snapshot.
I can not,
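For reference, the RAW-image workflow above maps onto the Grizzly/Havana-era cinder CLI roughly as follows. The volume and snapshot IDs are hypothetical placeholders, and the commands are printed rather than executed so the sequence can be reviewed; this is a sketch, not a transcript from this cluster:

```shell
# Sketch only: the IDs below are hypothetical placeholders.
VOLUME_ID=vol-0000            # an existing volume backed by a RAW image
SNAPSHOT_ID=snap-0000         # the ID cinder would report for the new snapshot
echo "cinder snapshot-create --display-name raw-snap $VOLUME_ID"
echo "cinder create --snapshot-id $SNAPSHOT_ID --display-name vol-from-snap 10"
```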
In my previous comment, I mentioned that this was a Havana install -
that's incorrect. It's Grizzly from Ubuntu 13.10.
Daniel Speichert (dasp), did you check the contents of the snapshot that
you were able to make when the system was shut off? I too was able to
make a snapshot, but I had
Double-checking indicates that this may be specific to e.g.
/etc/apt/sources.list -- it might be that the image I'm using resets
that. I created a file (/foobar/baz), snapped, and booted from the snap.
The /etc/apt/sources.list was reset to the default mirror.
I'm having the same problem - I had a working Havana setup (with
Ceph/RBD as the cinder storage) and within the last few days, whammo,
this starts occurring. I had applied Ubuntu 13.10 updates across parts of
the cluster.
The nova-compute log from the node with the exception is below:
2013-11-03