[Yahoo-eng-team] [Bug 1484081] [NEW] libvirt error while taking second snapshot of network type disk

2015-08-12 Thread Deepak C Shetty
Public bug reported:

Consistently getting the below error from libvirt when I take a second
snapshot of a network-type disk ...

2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] Traceback (most recent call last):
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1749, in 
_volume_snapshot_create
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] 
domain.snapshotCreateXML(snapshot_xml, snap_flags)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 183, in doit
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 141, in proxy_call
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] rv = execute(f, *args, **kwargs)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 122, in execute
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] six.reraise(c, e, tb)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 80, in tworker
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] rv = meth(*args, **kwargs)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib64/python2.7/site-packages/libvirt.py, line 2472, in snapshotCreateXML
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] if ret is None:raise 
libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] libvirtError: End of file while reading 
data: Input/output error
2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] 


More details will follow in subsequent comments.
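
Per the traceback above, Nova's _volume_snapshot_create ends up in
domain.snapshotCreateXML(snapshot_xml, snap_flags). Below is a minimal,
illustrative reproduction driving the same libvirt API directly; the domain
name, disk target, Gluster host, image path and the exact flags are placeholder
assumptions, not values from this report:

    import libvirt

    # Illustrative domain snapshot XML for an external, disk-only snapshot of
    # a network (Gluster) disk. All names and paths here are hypothetical.
    snapshot_xml = """
    <domainsnapshot>
      <disks>
        <disk name='vdb' snapshot='external' type='network'>
          <source protocol='gluster' name='vol1/volume-snap.qcow2'>
            <host name='gluster-host' port='24007'/>
          </source>
        </disk>
      </disks>
    </domainsnapshot>
    """

    # Flags comparable to an external disk-only snapshot that reuses a
    # pre-created overlay and records no libvirt metadata (assumed, not
    # copied from Nova's code).
    flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
             | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT
             | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # hypothetical domain name

    # On the *second* snapshot of the network-type disk this call is where
    # "libvirtError: End of file while reading data: Input/output error"
    # surfaces in the report above.
    dom.snapshotCreateXML(snapshot_xml, flags)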

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cinder glusterfs libgfapi libvirt nova snapshot

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484081

Title:
  libvirt error while taking second snapshot of network type disk

Status in OpenStack Compute (nova):
  New

Bug description:
  Consistently getting the below error from libvirt when I take a second
  snapshot of a network-type disk ...

  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] Traceback (most recent call last):
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 1749, in 
_volume_snapshot_create
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] 
domain.snapshotCreateXML(snapshot_xml, snap_flags)
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 183, in doit
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 141, in proxy_call
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] rv = execute(f, *args, **kwargs)
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 122, in execute
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18] six.reraise(c, e, tb)
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 
423192d4-1845-4a14-aae1-1aeb6e0b7d18]   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 80, in tworker
  2015-08-12 11:10:30.473 TRACE nova.virt.libvirt.driver [instance: 

[Yahoo-eng-team] [Bug 1477110] Re: cinder delete Gluster snapshot failed

2015-07-24 Thread Deepak C Shetty
@Markus,
  The fix is in the Nova project, so the bug should be on Nova only. The effect
of this bug is seen in Cinder because Cinder calls Nova for online snapshot
create/delete operations with the GlusterFS backend.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477110

Title:
  cinder delete Gluster snapshot failed

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  I have a test where Cinder uses GlusterFS as storage.
  1. create an instance
  2. create a volume
  3. attach the volume to the instance
  4. take a snapshot of the volume
  5. delete the snapshot

  It gets an error.

  OS: CentOS 7
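
  For reference, the same sequence can be scripted against python-cinderclient
  roughly as below; this is a minimal sketch where the credentials, endpoint,
  names and the attach step are placeholder assumptions:

    from cinderclient import client

    # Hypothetical credentials/endpoint; substitute your own environment.
    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://controller:5000/v2.0')

    vol = cinder.volumes.create(size=1, name='test-vol')        # step 2
    # Step 3 (attach to the instance) is done through Nova, e.g.
    #   nova volume-attach <instance-id> <volume-id>
    snap = cinder.volume_snapshots.create(vol.id, force=True,   # step 4
                                          name='test-snap')
    cinder.volume_snapshots.delete(snap)                        # step 5: fails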

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1477110/+subscriptions



[Yahoo-eng-team] [Bug 1471726] [NEW] libvirt: blockCommit fails if domain is not running, for attached cinder volumes

2015-07-06 Thread Deepak C Shetty
Public bug reported:

Using a fairly recent devstack setup.

1) Create a cinder volume (using GlusterFS as the cinder backend) - cv1
2) Attach cv1 to vm1 (vm1 is a nova VM in the running state)
3) Create 2 snapshots of cv1 using cinder snapshot-create ... cv1-snap1, cv1-snap2
4) Stop the nova VM vm1 (note that cinder still reports the volume cv1 as 'in-use')
5) From cinder, delete cv1-snap1. Since cv1-snap1 is _not_ the active file, nova
tries to do a blockCommit and fails with the exception below:

2015-07-06 09:33:00.479 ERROR oslo_messaging.rpc.dispatcher 
[req-695dd8c5-2722-4cf2-ab0a-583b0dacd388 nova service] Exception during 
message handling: Requested operation is not valid: domain is not running
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py, line 142, in 
inner
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher return 
func(*args, **kwargs)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 89, in wrapped
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher payload)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 119, in __exit__
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 72, in wrapped
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/compute/manager.py, line 2954, in volume_snapshot_delete
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher snapshot_id, 
delete_info)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2024, in 
volume_snapshot_delete
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher base_file = 
delete_info['base_file']
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 119, in __exit__
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2017, in 
volume_snapshot_delete
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher snapshots.) % 
ver
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2003, in 
_volume_snapshot_delete
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher # paths are 
maintained relative by qemu.
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 183, in doit
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher result = 
proxy_call(self._autowrap, f, *args, **kwargs)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 141, in proxy_call
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher rv = execute(f, 
*args, **kwargs)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 122, in execute
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher six.reraise(c, 
e, tb)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/eventlet/tpool.py, line 80, in tworker
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher rv = 
meth(*args, **kwargs)
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib64/python2.7/site-packages/libvirt.py, line 642, in blockCommit
2015-07-06 09:33:00.479 TRACE oslo_messaging.rpc.dispatcher if ret == -1: 
raise libvirtError ('virDomainBlockCommit() failed', dom=self)
2015-07-06 09:33:00.479 TRACE 
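
The failing call is virDomainBlockCommit, which needs a running QEMU process to
perform the commit. A minimal illustrative sketch of the online call and the
offline situation follows; the domain name and disk target are placeholder
assumptions, not values from this report:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('vm1')      # hypothetical domain name
    state, _reason = dom.state()

    if state == libvirt.VIR_DOMAIN_RUNNING:
        # Online path: ask QEMU to commit the snapshot overlay down into its
        # backing file. base=None/top=None let libvirt pick the whole chain
        # and the active layer; disk target 'vdb' is illustrative.
        dom.blockCommit('vdb', None, None, 0, 0)
    else:
        # When the domain is shut off there is no QEMU process, so libvirt
        # raises "Requested operation is not valid: domain is not running".
        # The commit would have to happen on the image chain directly, e.g.
        #   qemu-img commit <overlay.qcow2>
        pass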

[Yahoo-eng-team] [Bug 1438027] Re: backing file path isn't relative path for file-based volume drivers using qcow2 snapshots

2015-04-02 Thread Deepak C Shetty
OT: I am not sure why the gerrit patch I posted didn't appear here
automatically, in spite of linking this bug in the patch!

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438027

Title:
  backing file path isn't relative path for file-based volume drivers
  using qcow2 snapshots

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  New

Bug description:
  For file-based volume drivers (e.g. GlusterFS) that use the qcow2
  snapshotting mechanism, the backing file path is not made relative after an
  online snapshot delete operation.

  In my testing with GlusterFS, I saw 2 scenarios where the backing file path
was using the nova mnt path instead of a relative path.
  e.g.:
/opt/stack/data/nova/mnt/f4c6ad7e3bba4ad1195b3b538efab64a/volume-518a8faa-f264-4c07-80d7-ff691278b5da.838a5847-cb90-4279-b50f-f8f995321665

  This is incorrect since, after the volume is detached, the above path is
  invalid on the cinder/storage node.

  The 2 scenarios are blockCommit and blockRebase for file-based volume
  drivers.
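
  One way to inspect and repair such an absolute backing-file reference by hand
  is qemu-img; a minimal sketch, where the overlay file name and mount prefix
  are placeholder assumptions:

    import json
    import subprocess

    overlay = '/mnt/glusterfs/volume-xxxx.snap-yyyy'  # hypothetical qcow2 overlay

    # Read the backing file recorded in the qcow2 header.
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--output=json', overlay]))
    backing = info.get('backing-filename')

    # If the backing file was recorded as an absolute nova mnt path, rewrite it
    # to a plain relative name without copying data; the 'unsafe' (-u) rebase
    # only changes the header and assumes the new reference is the same file.
    if backing and backing.startswith('/opt/stack/data/nova/mnt/'):
        relative = backing.rsplit('/', 1)[-1]
        subprocess.check_call(
            ['qemu-img', 'rebase', '-u', '-b', relative, overlay])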

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1438027/+subscriptions
