[Yahoo-eng-team] [Bug 1305860] [NEW] nova allows to resize from a flavor with disk_size=0 defined

2014-04-10 Thread Xavier Queralt
Public bug reported:

For some reason nova allows creating a flavor with a disk_size of 0; when
an instance is booted using such a flavor, its disk will use the native
base image size as the size of the ephemeral root volume (per the ops
docs).

Even though disk_size=0 means that the instance's disk can have any size,
nova will still allow resizing that instance to any flavor defining
disk_size != 0, but it won't shrink the disk if it is bigger than the new
disk_size.

Take for example the following:
 * we have an image of 2GB in glance
 * we boot an instance using this image and flavor1 (with disk_size=0)
   - The instance's disk will be 2GB (same as the image)
 * Afterwards, we resize this instance to flavor2 (with disk_size=1GB), which
   is allowed b/c new_disk_size > old_disk_size
 * The new instance will be using flavor2 on paper, but its ephemeral disk
   will still be 2GB (nova won't try to shrink the image)

A couple of solutions are possible here: either nova forbids resizing from
any flavor with disk_size=0, or nova checks the *real* size of the disk
before even trying to resize, failing if it is bigger than the size defined
by the new flavor. The drastic solution would be to forbid defining flavors
with disk_size=0, but that would likely break backwards compatibility.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305860

Title:
  nova allows to resize from a flavor with disk_size=0 defined

Status in OpenStack Compute (Nova):
  New

Bug description:
  For some reason nova allows creating a flavor with a disk_size of 0; when
  an instance is booted using such a flavor, its disk will use the native
  base image size as the size of the ephemeral root volume (per the ops
  docs).

  Even though disk_size=0 means that the instance's disk can have any size,
  nova will still allow resizing that instance to any flavor defining
  disk_size != 0, but it won't shrink the disk if it is bigger than the new
  disk_size.

  Take for example the following:
   * we have an image of 2GB in glance
   * we boot an instance using this image and flavor1 (with disk_size=0)
     - The instance's disk will be 2GB (same as the image)
   * Afterwards, we resize this instance to flavor2 (with disk_size=1GB),
     which is allowed b/c new_disk_size > old_disk_size
   * The new instance will be using flavor2 on paper, but its ephemeral
     disk will still be 2GB (nova won't try to shrink the image)

  A couple of solutions are possible here: either nova forbids resizing
  from any flavor with disk_size=0, or nova checks the *real* size of the
  disk before even trying to resize, failing if it is bigger than the size
  defined by the new flavor. The drastic solution would be to forbid
  defining flavors with disk_size=0, but that would likely break backwards
  compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297325] [NEW] swap and ephemeral devices defined in the flavor are not created as a block device mapping

2014-03-25 Thread Xavier Queralt
Public bug reported:

When booting an instance specifying the swap and/or ephemeral devices,
those will be created as a block device mapping in the database together
with the image and volumes if present.

If, instead, we rely on libvirt to define the swap and ephemeral devices
later from the specified instance type, those devices won't be added to
the block device mapping list.

To be consistent and to prevent errors when trying to guess the device
name from the existing block device mappings, we should create mappings
for those devices when they are present in the instance type. We should
create them in the API layer, before validating the block device mappings,
and only if no swap or ephemeral devices are defined by the user.
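The proposed API-layer step can be sketched roughly as below. This is not nova's actual code; the function and the exact BDM field values are illustrative, loosely following the block-device-mapping-v2 dict shape (source_type/destination_type/guest_format/volume_size).

```python
# Illustrative sketch: derive extra block device mappings from the flavor's
# swap and ephemeral sizes, but only when the user supplied none themselves.

def bdms_from_flavor(flavor, user_bdms):
    """Return extra BDM dicts for flavor-defined swap/ephemeral devices."""
    new_bdms = []
    has_swap = any(b.get('guest_format') == 'swap' for b in user_bdms)
    has_eph = any(b.get('source_type') == 'blank' and
                  b.get('guest_format') != 'swap' for b in user_bdms)
    if flavor.get('swap') and not has_swap:
        new_bdms.append({'source_type': 'blank',
                         'destination_type': 'local',
                         'guest_format': 'swap',
                         'volume_size': flavor['swap']})
    if flavor.get('ephemeral_gb') and not has_eph:
        new_bdms.append({'source_type': 'blank',
                         'destination_type': 'local',
                         'guest_format': None,
                         'volume_size': flavor['ephemeral_gb']})
    return new_bdms
```

With the mappings created up front, the device-name guessing code sees every disk the instance will actually get, whichever path defined it.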

** Affects: nova
 Importance: Medium
 Assignee: Xavier Queralt (xqueralt)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297325

Title:
  swap and ephemeral devices defined in the flavor are not created as a
  block device mapping

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting an instance specifying the swap and/or ephemeral devices,
  those will be created as a block device mapping in the database
  together with the image and volumes if present.

  If, instead, we rely on libvirt to define the swap and ephemeral
  devices later from the specified instance type, those devices won't be
  added to the block device mapping list.

  To be consistent and to prevent errors when trying to guess the device
  name from the existing block device mappings, we should create mappings
  for those devices when they are present in the instance type. We should
  create them in the API layer, before validating the block device
  mappings, and only if no swap or ephemeral devices are defined by the
  user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297325/+subscriptions



[Yahoo-eng-team] [Bug 1296590] [NEW] [libvirt] snapshots in progress are not cleaned when deleting an instance

2014-03-24 Thread Xavier Queralt
Public bug reported:

When creating an instance snapshot, if the instance is deleted part way
through, the snapshot may be left in the SAVING state: the instance either
disappears mid-process or moves to the deleting task_state.

Steps to reproduce:

$ nova boot --image image_id --flavor flavor test
$ nova image-create test test-snap
$ nova delete test

The image 'test-snap' will be left in the SAVING state although it
should be deleted when we detect the situation.
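One possible cleanup shape, as a sketch only: on instance delete, look for the instance's snapshot images still stuck in SAVING and remove them. The `image_api` interface here is an assumed stand-in, not nova's real glance wrapper.

```python
# Hypothetical cleanup hook: delete any snapshot image of this instance that
# is still stuck in the SAVING state when the instance itself is deleted.

def cleanup_pending_snapshots(image_api, context, instance_uuid):
    """Return the ids of the in-progress snapshots that were deleted."""
    deleted = []
    for image in image_api.list(context,
                                filters={'instance_uuid': instance_uuid}):
        if image['status'] == 'SAVING':
            image_api.delete(context, image['id'])
            deleted.append(image['id'])
    return deleted
```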

** Affects: nova
 Importance: Medium
 Assignee: Xavier Queralt (xqueralt)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296590

Title:
  [libvirt] snapshots in progress are not cleaned when deleting an
  instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating an instance snapshot, if the instance is deleted part way
  through, the snapshot may be left in the SAVING state: the instance
  either disappears mid-process or moves to the deleting task_state.

  Steps to reproduce:

  $ nova boot --image image_id --flavor flavor test
  $ nova image-create test test-snap
  $ nova delete test

  The image 'test-snap' will be left in the SAVING state although it
  should be deleted when we detect the situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296590/+subscriptions



[Yahoo-eng-team] [Bug 1290302] [NEW] Booting from volume fails when the device_name is not specified

2014-03-10 Thread Xavier Queralt
Public bug reported:

With current nova master, booting an instance using an existing volume
without specifying the device_name fails with the following traceback in
nova-compute:

$ nova boot --boot-volume f1f2de9c-eedf-41cf-9089-a41ec0706b3e --flavor
m1.custom --key-name default server --poll

2014-03-10 05:48:27.735 DEBUG oslo.messaging._drivers.amqp [-] UNIQUE_ID is 
38af8a6ad88649449919434f9facd899. from (pid=21192) _add_unique_id 
/opt/stack/oslo.messaging/oslo/messaging/_drivers/amqp.py:338
2014-03-10 05:48:27.750 ERROR nova.compute.manager 
[req-f7b15296-7be8-4cc5-9e8c-1ea3658baa7e admin admin] [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] Error: <class 
'nova.objects.block_device.BlockDeviceMapping'> object has no attribute 
'mount_device'
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] Traceback (most recent call last):
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/compute/manager.py, line 1242, in _build_instance
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] bdms)
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/compute/manager.py, line 1573, in 
_default_block_device_names
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] root_bdm)
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/compute/manager.py, line 1530, in 
_default_root_device_name
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] root_bdm)
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 5084, in 
default_root_device_name
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] cdrom_bus)
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/virt/libvirt/blockinfo.py, line 422, in get_root_info
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] if not get_device_name(root_bdm) and 
root_device_name:
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/virt/libvirt/blockinfo.py, line 392, in get_device_name
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] return bdm.get('device_name') or 
bdm.get('mount_device')
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/objects/base.py, line 411, in get
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] self.__class__, key))
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] AttributeError: <class 
'nova.objects.block_device.BlockDeviceMapping'> object has no attribute 
'mount_device'
2014-03-10 05:48:27.750 TRACE nova.compute.manager [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] 
2014-03-10 05:48:27.753 DEBUG nova.compute.utils 
[req-f7b15296-7be8-4cc5-9e8c-1ea3658baa7e admin admin] [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] Build of instance 
7ce5a209-49a3-417b-aa0d-0ccf066dc660 was re-scheduled: <class 
'nova.objects.block_device.BlockDeviceMapping'> object has no attribute 
'mount_device' from (pid=21192) notify_about_instance_usage 
/opt/stack/nova/nova/compute/utils.py:334
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] Traceback (most recent call last):
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/compute/manager.py, line 1134, in _run_instance
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] instance, image_meta, 
legacy_bdm_in_spec)
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660]   File 
/opt/stack/nova/nova/compute/manager.py, line 1293, in _build_instance
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] reason=unicode(exc_info[1]))
2014-03-10 05:48:27.753 TRACE nova.compute.utils [instance: 
7ce5a209-49a3-417b-aa0d-0ccf066dc660] RescheduledException: Build of instance 
7ce5a209-49a3-417b-aa0d-0ccf066dc660 was re-scheduled: <class 
'nova.objects.block_device.BlockDeviceMapping'> object has no attribute 
'mount_device'
2014-03-10 05:48:27.753 TRACE nova.compute.

** Affects: nova
 Importance: Undecided
 Status: 

[Yahoo-eng-team] [Bug 1254007] Re: Error attaching a cinder glousterfs volume with libvirt

2014-02-26 Thread Xavier Queralt
I've just checked the log of the volume.py module and I couldn't find any
version containing the extra whitespace mentioned in your report. Could it
be that it was introduced by mistake in your local checkout?

See
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L810

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254007

Title:
  Error attaching a cinder glousterfs volume with libvirt

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I've been investigating it.
  The problem seems to be related to
  nova/virt/libvirt/volume.py:
  the function _mount_glusterfs
  prepares a command to locally mount a glusterfs volume with this line of code:
  gluster_cmd.extend([glusterfs_share, mount_path, ' '])
  The trailing whitespace causes a mount error on my system.

  Using the original code i get:

  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/utils.py, line 177, in execute
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp return 
processutils.execute(*cmd, **kwargs)
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py, line 
178, in execute
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp cmd=' 
'.join(cmd))
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp 
ProcessExecutionError: Unexpected error while running command.
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp Command: 
sudo nova-rootwrap /etc/nova/rootwrap.conf mount -t glusterfs 
10.101.101.120:v_cinder /var/lib/cinder/mnt/8bf7d2294d80777975e85e1905255721
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp Exit code: 1
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp Stdout: ''
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp Stderr: 
Usage: mount -V : print version\n   mount -h   
  : print this help\n   mount: list mounted 
filesystems\n   mount -l : idem, including volume 
labels\nSo far the informational part. Next the mounting.\nThe command is 
`mount [-t fstype] something somewhere'.\nDetails found in /etc/fstab may be 
omitted.\n   mount -a [-t|-O] ... : mount all stuff from /etc/fstab\n   
mount device : mount device at the known place\n   mount 
directory  : mount known device here\n   mount -t type dev dir: 
ordinary mount command\nNote that one does not really mount a device, one 
mounts\na filesystem (of the given type) found on the device.\nOne can also 
mount an already visible directory tree elsewhere:\n   mount --bind olddir 
newdir\nor move a subtree:\n   mount --move olddir newdir\nOne can change 
the type 
 of mount containing the directory dir:\n   mount --make-shared dir\n   
mount --make-slave dir\n   mount --make-private dir\n   mount 
--make-unbindable dir\nOne can change the type of all the mounts in a mount 
subtree\ncontaining the directory dir:\n   mount --make-rshared dir\n   
mount --make-rslave dir\n   mount --make-rprivate dir\n   mount 
--make-runbindable dir\nA device can be given by name, say /dev/hda1 or 
/dev/cdrom,\nor by label, using  -L label  or by uuid, using  -U uuid .\nOther 
options: [-nfFrsvw] [-o options] [-p passwdfd].\nFor many more details, say  
man 8 mount .\n
  2013-11-22 13:32:23.272 6686 TRACE nova.openstack.common.rpc.amqp

  
  Simply replacing this line
  gluster_cmd.extend([glusterfs_share, mount_path, ' '])
  with this
  gluster_cmd.extend([glusterfs_share, mount_path])
  makes it work correctly: it mounts and attaches my cinder gluster volume.

  I don't understand why a trailing space can cause this kind of
  problem.
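As an editorial note on the puzzle above: each element of the command list becomes its own argv entry for mount, so the whitespace-only string is a *third* positional operand, which makes mount print its usage text. It is invisible in the logged error because the error message space-joins the list for display. A minimal illustration (the share and path are the ones from the log above):

```python
# Each list element is one argv entry. The stray ' ' element is not collapsed
# away; it reaches mount as an extra whitespace-only operand. Joining the list
# with spaces for logging, as the error message does, hides it completely.

gluster_cmd = ['mount', '-t', 'glusterfs']
glusterfs_share = '10.101.101.120:v_cinder'
mount_path = '/var/lib/cinder/mnt/8bf7d2294d80777975e85e1905255721'
gluster_cmd.extend([glusterfs_share, mount_path, ' '])

# What the ProcessExecutionError displays: a normal-looking two-operand command
logged = ' '.join(gluster_cmd)

# What mount actually receives after '-t glusterfs': three operands
operands = gluster_cmd[3:]
```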

  I'm using havana vanilla distribution on CENTOS 6.4

  ciao

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254007/+subscriptions



[Yahoo-eng-team] [Bug 1285209] [NEW] [libvirt] nfs and glusterfs volume drivers don't unmount their shares when detaching

2014-02-26 Thread Xavier Queralt
Public bug reported:

When attaching volumes from NFS or GlusterFS backends, nova mounts the
share in a temporary directory, this is reused when attaching another
volume from the same share so there is no need to mount it several
times.

On the other hand, when disconnecting those volumes nova doesn't even try
to unmount the share, which may remain mounted and unused. To clean up
after detach, nova could at least try to unmount the shares.
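One possible shape of that cleanup, as a sketch only (this is not nova's volume-driver API): attempt the unmount after detaching and tolerate failure, e.g. EBUSY when another attached volume still uses the same share.

```python
# Best-effort unmount sketch. The `run` parameter exists so the command runner
# can be swapped out; by default it shells out to umount via subprocess.

import subprocess

def umount_share(mount_path, run=subprocess.check_call):
    """Try to unmount the share; return True only if it was unmounted."""
    try:
        run(['umount', mount_path])
        return True
    except (subprocess.CalledProcessError, OSError):
        # most likely the share is still busy with another attached volume,
        # in which case leaving it mounted is the correct outcome
        return False
```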

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285209

Title:
  [libvirt] nfs and glusterfs volume drivers don't unmount their shares
  when detaching

Status in OpenStack Compute (Nova):
  New

Bug description:
  When attaching volumes from NFS or GlusterFS backends, nova mounts the
  share in a temporary directory, this is reused when attaching another
  volume from the same share so there is no need to mount it several
  times.

  On the other hand, when disconnecting those volumes nova doesn't even try
  to unmount the share, which may remain mounted and unused. To clean up
  after detach, nova could at least try to unmount the shares.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1285209/+subscriptions



[Yahoo-eng-team] [Bug 1281973] [NEW] Compute manager ignores image_meta from api when booting an instance

2014-02-19 Thread Xavier Queralt
Public bug reported:

When booting an instance, in the API layer, we get the image to be used
for this instance or, if the instance is booted from a volume, the image
metadata attached to the volume (if present). This allows us to know a
bit more of the volume through the inherited properties of the image
from where it was created. This same image metadata is then passed to
the selected compute manager for processing the request.

Right now the compute manager completely ignores the image metadata and
tries to obtain it only if the instance is booted from an image. This
prevents us from obtaining some properties present in the volume that
might be needed while starting the instance (think of hw_disk_bus).
Besides, it requires an extra call to glance that could be avoided.
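The suggested behaviour can be sketched as follows; the function and the `image_api` interface are illustrative stand-ins, not nova's real code paths.

```python
# Sketch: prefer the image metadata the API layer already resolved (which for
# boot-from-volume includes properties inherited from the source image), and
# only call glance when nothing was passed down.

def resolve_image_meta(image_api, context, instance, image_meta=None):
    if image_meta:
        # metadata resolved at the API layer: image boot, or volume-inherited
        return image_meta
    if instance.get('image_ref'):
        # image boot without metadata passed down: one glance call needed
        return image_api.show(context, instance['image_ref'])
    # volume boot with no metadata available at all
    return {}
```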

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281973

Title:
  Compute manager ignores image_meta from api when booting an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting an instance, in the API layer, we get the image to be
  used for this instance or, if the instance is booted from a volume,
  the image metadata attached to the volume (if present). This allows us
  to know a bit more of the volume through the inherited properties of
  the image from where it was created. This same image metadata is then
  passed to the selected compute manager for processing the request.

  Right now the compute manager completely ignores the image metadata and
  tries to obtain it only if the instance is booted from an image. This
  prevents us from obtaining some properties present in the volume that
  might be needed while starting the instance (think of hw_disk_bus).
  Besides, it requires an extra call to glance that could be avoided.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281973/+subscriptions



[Yahoo-eng-team] [Bug 1281989] [NEW] disk_bus may be ignored when booting from volume

2014-02-19 Thread Xavier Queralt
Public bug reported:

There are several ways of selecting the disk_bus for a BDM. One of them
is using the hw_disk_bus image property which, if present, is inherited
by volumes created from an image.

When booting from a volume that has this property defined, even though
we might think it will be considered when selecting the bus, it is
ignored because the code that guesses the bus and device name for volume
BDMs expects it to be defined in the same BDM.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281989

Title:
  disk_bus may be ignored when booting from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  There are several ways of selecting the disk_bus for a BDM. One of
  them is using the hw_disk_bus image property which, if present, is
  inherited by volumes created from an image.

  When booting from a volume that has this property defined, even though
  we might think it will be considered when selecting the bus, it is
  ignored because the code that guesses the bus and device name for
  volume BDMs expects it to be defined in the same BDM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281989/+subscriptions



[Yahoo-eng-team] [Bug 1266974] Re: nova work with glance SSL

2014-02-17 Thread Xavier Queralt
You need a newer version of eventlet. This was reported in [1] and fixed
with the patch in [2]. I'll try to update the packages in RDO but in the
meanwhile, could you open a bug in [3] for tracking it? Thanks.

[1] https://bitbucket.org/eventlet/eventlet/issue/136
[2] https://bitbucket.org/eventlet/eventlet/commits/609f230
[3] 
https://bugzilla.redhat.com/enter_bug.cgi?product=RDO&component=openstack-nova

** Changed in: nova
   Status: New => Invalid

** Changed in: python-glanceclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266974

Title:
  nova work with glance SSL

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Invalid

Bug description:
  My environment is:
  Nova api --https--> haproxy (SSL proxy) --http--> Glance api1
                                          |--http--> Glance api2

  I use centos + rdo rpm package(havana), my haproxy is 1.5_dev21.

  It works well if I configure nova.conf as follows:
  glance_api_servers=glanceapi1_ip:9292,glanceapi2_ip:9292

  But when I want nova api to talk to glance api over https, it doesn't work.
  My config in nova.conf is as follows:
  glance_api_servers=https://Glanceapi_VIP:443

  When I boot a VM, I get the error below:
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 1220, in create
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 
legacy_bdm=legacy_bdm)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 840, in 
_create_instance
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image_id, boot_meta 
= self._get_image(context, image_href)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/compute/api.py, line 620, in _get_image
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image = 
image_service.show(context, image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 292, in show
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 
_reraise_translated_image_exception(image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 290, in show
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack image = 
self._client.call(context, 1, 'get', image_id)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 214, in call
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack return 
getattr(client.images, method)(*args, **kwargs)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/v1/images.py, line 114, in get
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack % 
urllib.quote(str(image_id)))
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 293, in 
raw_request
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack return 
self._http_request(url, method, **kwargs)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 244, in 
_http_request
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack body_str = 
''.join([chunk for chunk in body_iter])
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 499, in 
__iter__
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack chunk = self.next()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/common/http.py, line 515, in 
next
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack chunk = 
self._resp.read(CHUNKSIZE)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/httplib.py, line 518, in read
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/httplib.py, line 499, in close
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self.fp.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib64/python2.6/socket.py, line 278, in close
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack self._sock.close()
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/eventlet/greenio.py, line 145, in __getattr__
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack attr = 
getattr(self.fd, name)
  2014-01-08 04:39:01.480 6011 TRACE nova.api.openstack 

[Yahoo-eng-team] [Bug 1274611] [NEW] nova-network bridge setup fails if the interface address has 'dynamic' flag

2014-01-30 Thread Xavier Queralt
Public bug reported:

While setting the bridge up, if the network interface has a dynamic
address, the 'dynamic' flag will be displayed in the ip addr show
command:

[fedora@dev1 devstack]$ ip addr show dev eth0 scope global
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
link/ether 52:54:00:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.2/24 brd 192.168.122.255 scope global dynamic eth0
   valid_lft 2225sec preferred_lft 2225sec

When later executing ip addr del with the IPv4 details, the 'dynamic'
flag is not accepted, which causes the command to fail and leaves the
bridge half configured.
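A workaround can be sketched as below: when rebuilding the `ip addr del` arguments from the `ip addr show` output, keep only the fields `ip addr del` accepts and drop trailing flags such as 'dynamic'. The helper is hypothetical, not nova-network's actual code.

```python
# Illustrative parser: extract 'CIDR [brd BROADCAST]' from one 'inet ...' line
# of `ip addr show`, discarding the scope, flags ('dynamic'), and label that
# `ip addr del` would reject.

def fields_for_addr_del(show_line):
    tokens = show_line.split()
    assert tokens[0] == 'inet'
    fields = [tokens[1]]                # the CIDR itself, e.g. 192.168.122.2/24
    if 'brd' in tokens:
        i = tokens.index('brd')
        fields += ['brd', tokens[i + 1]]
    # everything after the broadcast is dropped on purpose
    return fields
```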

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274611

Title:
  nova-network bridge setup fails if the interface address has 'dynamic'
  flag

Status in OpenStack Compute (Nova):
  New

Bug description:
  While setting the bridge up, if the network interface has a dynamic
  address, the 'dynamic' flag will be displayed in the ip addr show
  command:

  [fedora@dev1 devstack]$ ip addr show dev eth0 scope global
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP 
group default qlen 1000
  link/ether 52:54:00:00:00:01 brd ff:ff:ff:ff:ff:ff
  inet 192.168.122.2/24 brd 192.168.122.255 scope global dynamic eth0
 valid_lft 2225sec preferred_lft 2225sec

  When later executing ip addr del with the IPv4 details, the 'dynamic'
  flag is not accepted, which causes the command to fail and leaves the
  bridge half configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274611/+subscriptions



[Yahoo-eng-team] [Bug 1158807] Re: Qpid SSL protocol

2013-12-16 Thread Xavier Queralt
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1158807

Title:
  Qpid SSL protocol

Status in Cinder:
  Invalid
Status in Cinder grizzly series:
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  By default, TCP is used as the transport for QPID connections. If you
  would like to enable SSL, there is a flag 'qpid_protocol = ssl' available
  in nova.conf. However, the python-qpid client expects a transport type
  instead of a protocol. It seems to be a bug:

  Solution:
  
(https://github.com/openstack/nova/blob/master/nova/openstack/common/rpc/impl_qpid.py#L323)

  WRONG:self.connection.protocol = self.conf.qpid_protocol
  CORRECT:self.connection.transport = self.conf.qpid_protocol

  Regards,
  JuanFra.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1158807/+subscriptions



[Yahoo-eng-team] [Bug 1259183] [NEW] wsgi.Loader should ensure the config_path is absolute

2013-12-09 Thread Xavier Queralt
Public bug reported:

The nova-api service will fail to start when the nova-api command is
invoked from a directory containing a file with the same name as the one
specified in the configuration key 'api_paste_config', if that value is
not an absolute path (the default is 'api-paste.ini').

[fedora@devstack1 devstack]$ pwd
/home/fedora/devstack

[fedora@devstack1 devstack]$ grep api_paste_conf /etc/nova/nova.conf 
api_paste_config = api-paste.ini

[fedora@devstack1 devstack]$ touch api-paste.ini

[fedora@devstack1 devstack]$ nova-api
2013-12-09 09:18:40.082 DEBUG nova.wsgi [-] Loading app ec2 from api-paste.ini 
from (pid=4817) load_app /opt/stack/nova/nova/wsgi.py:485
2013-12-09 09:18:40.083 CRITICAL nova [-] Cannot resolve relative uri 
'config:api-paste.ini'; no relative_to keyword argument given
2013-12-09 09:18:40.083 TRACE nova Traceback (most recent call last):
2013-12-09 09:18:40.083 TRACE nova   File "/usr/bin/nova-api", line 10, in 
<module>
2013-12-09 09:18:40.083 TRACE nova     sys.exit(main())
2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/cmd/api.py", 
line 49, in main
2013-12-09 09:18:40.083 TRACE nova     max_url_len=16384)
2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 308, in __init__
2013-12-09 09:18:40.083 TRACE nova     self.app = self.loader.load_app(name)
2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/wsgi.py", line 
486, in load_app
2013-12-09 09:18:40.083 TRACE nova     return deploy.loadapp("config:%s" % 
self.config_path, name=name)
2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
2013-12-09 09:18:40.083 TRACE nova     return loadobj(APP, uri, name=name, **kw)
2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 271, in 
loadobj
2013-12-09 09:18:40.083 TRACE nova     global_conf=global_conf)
2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 296, in 
loadcontext
2013-12-09 09:18:40.083 TRACE nova     global_conf=global_conf)
2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 308, in 
_loadconfig
2013-12-09 09:18:40.083 TRACE nova     argument given" % uri)
2013-12-09 09:18:40.083 TRACE nova ValueError: Cannot resolve relative uri 
'config:api-paste.ini'; no relative_to keyword argument given
2013-12-09 09:18:40.083 TRACE nova
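A hedged sketch of one possible fix (the function name and search directories below are illustrative, not nova's actual code): have wsgi.Loader resolve the configured path to an absolute one before handing it to paste.deploy, which refuses relative URIs.

```python
import os.path

def resolve_config_path(config_path, search_dirs=('/etc/nova',)):
    """Return an absolute path for the paste config file.

    Relative paths are first looked up in the given config directories;
    as a last resort the path is resolved against the current working
    directory, so paste.deploy always receives an absolute path.
    """
    if os.path.isabs(config_path):
        return config_path
    for d in search_dirs:
        candidate = os.path.join(d, config_path)
        if os.path.exists(candidate):
            return candidate
    return os.path.abspath(config_path)
```

Loading could then use `deploy.loadapp("config:%s" % resolve_config_path(self.config_path), name=name)`, which avoids the "Cannot resolve relative uri" error regardless of the working directory.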

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259183

Title:
  wsgi.Loader should ensure the config_path is absolute

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova-api service will fail to start when the nova-api command is
  invoked from a directory containing a file with the same name as the
  one specified in the configuration key 'api_paste_config' if it is not
  an absolute path (the default is 'api-paste.ini').

  [fedora@devstack1 devstack]$ pwd
  /home/fedora/devstack

  [fedora@devstack1 devstack]$ grep api_paste_conf /etc/nova/nova.conf 
  api_paste_config = api-paste.ini

  [fedora@devstack1 devstack]$ touch api-paste.ini

  [fedora@devstack1 devstack]$ nova-api
  2013-12-09 09:18:40.082 DEBUG nova.wsgi [-] Loading app ec2 from 
api-paste.ini from (pid=4817) load_app /opt/stack/nova/nova/wsgi.py:485
  2013-12-09 09:18:40.083 CRITICAL nova [-] Cannot resolve relative uri 
'config:api-paste.ini'; no relative_to keyword argument given
  2013-12-09 09:18:40.083 TRACE nova Traceback (most recent call last):
  2013-12-09 09:18:40.083 TRACE nova   File "/usr/bin/nova-api", line 10, in 
<module>
  2013-12-09 09:18:40.083 TRACE nova     sys.exit(main())
  2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/cmd/api.py", 
line 49, in main
  2013-12-09 09:18:40.083 TRACE nova     max_url_len=16384)
  2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 308, in __init__
  2013-12-09 09:18:40.083 TRACE nova     self.app = self.loader.load_app(name)
  2013-12-09 09:18:40.083 TRACE nova   File "/opt/stack/nova/nova/wsgi.py", 
line 486, in load_app
  2013-12-09 09:18:40.083 TRACE nova     return deploy.loadapp("config:%s" % 
self.config_path, name=name)
  2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2013-12-09 09:18:40.083 TRACE nova     return loadobj(APP, uri, name=name, 
**kw)
  2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 271, in 
loadobj
  2013-12-09 09:18:40.083 TRACE nova     global_conf=global_conf)
  2013-12-09 09:18:40.083 TRACE nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 296, in 
loadcontext
  2013-12-09 09:18:40.083 TRACE nova     global_conf=global_conf)
  

[Yahoo-eng-team] [Bug 1251757] Re: On restart of QPID broker, fanout no longer works

2013-12-05 Thread Xavier Queralt
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251757

Title:
  On restart of QPID broker, fanout no longer works

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released
Status in Messaging API for OpenStack:
  Fix Released

Bug description:
  When the QPID broker is restarted, RPC servers attempt to re-connect.
  This re-connection process is not done correctly for fanout
  subscriptions - two subscriptions are established to the same fanout
  address.

  This problem is compounded by the fix to bug#1178375
  https://bugs.launchpad.net/oslo/+bug/1178375

  With this bug fix, when topology version 2 is used, the reconnect
  attempt uses a malformed subscriber address.

  For example, I have a simple RPC server script that attempts to
  service my-topic.   When it initially connects to the broker using
  topology-version 1, these are the subscriptions that are established:

  (py27)[kgiusti@t530 work (master)]$ ./my-server.py --topology=1 --auto-delete 
server-02
  Running server, name=server-02 exchange=my-exchange topic=my-topic 
namespace=my-namespace
  Using QPID topology version 1
  Enable auto-delete
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_489a3178fc704123b0e5e2fbee125247"}}

  When I restart the qpid broker, the server reconnects using the
  following subscriptions

  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  Recevr my-topic_fanout ; {"node": {"x-declare": {"auto-delete": true, 
"durable": false, "type": "fanout"}, "type": "topic"}, "create": "always", 
"link": {"x-declare": {"auto-delete": true, "exclusive": true, "durable": 
false}, "durable": true, "name": 
"my-topic_fanout_b40001afd9d946a582ead3b7b858b588"}}
  --- Note: subscribing twice to the same exclusive address!  (Bad!)
  Recevr openstack/my-topic.server-02 ; {"node": {"x-declare": {"auto-delete": 
true, "durable": true}, "type": "topic"}, "create": "always", "link": 
{"x-declare": {"auto-delete": true, "exclusive": false, "durable": false}, 
"durable": true, "name": "my-topic.server-02"}}
  Recevr openstack/my-topic ; {"node": {"x-declare": {"auto-delete": true, 
"durable": true}, "type": "topic"}, "create": "always", "link": {"x-declare": 
{"auto-delete": true, "exclusive": false, "durable": false}, "durable": true, 
"name": "my-topic"}}

  
  When using topology=2, the failure case is a bit different.  On reconnect, 
the fanout addresses are lacking proper topic names:

  Recevr amq.topic/topic/openstack/my-topic ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/fanout/ ; {"link": {"x-declare": {"auto-delete": true, 
"exclusive": true}}}
  Recevr amq.topic/topic/openstack/my-topic.server-02 ; {"link": {"x-declare": 
{"auto-delete": true, "durable": false}}}

  Note again - two subscriptions to fanout, and 'my-topic' is missing
  (it should be after that trailing /)

  FYI - my test RPC server and client can be accessed here:
  https://github.com/kgiusti/oslo-messaging-clients
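The invariants the reconnect code should preserve can be sketched as follows (illustrative Python under our own names, not oslo's implementation): a consumer holds exactly one subscription to its fanout address, and under topology v2 the topic name must follow the trailing slash.

```python
import uuid

def fanout_address_v2(topic):
    # The reported reconnect bug produced 'amq.topic/fanout/' with the
    # topic missing after the slash; the topic must always be appended.
    return 'amq.topic/fanout/%s' % topic

class FanoutConsumer(object):
    """Toy consumer showing the intended re-subscription behaviour."""

    def __init__(self, topic):
        self.topic = topic
        # Unique, exclusive queue name, generated once per consumer
        # (not once per reconnect attempt).
        self.queue = '%s_fanout_%s' % (topic, uuid.uuid4().hex)
        self.subscriptions = []

    def subscribe(self):
        # Re-subscribing after a broker restart must not stack a second
        # subscription on the same exclusive address; the bug was
        # exactly this duplication.
        addr = fanout_address_v2(self.topic)
        if addr not in self.subscriptions:
            self.subscriptions.append(addr)
```

Calling `subscribe()` twice (once at startup, once on simulated reconnect) leaves a single well-formed fanout subscription, which is the behaviour the fix needs to restore.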

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1251757/+subscriptions



[Yahoo-eng-team] [Bug 1246103] Re: Nova Scheduler Fails to Start Due to Missing cinderclient

2013-10-30 Thread Xavier Queralt
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: packstack
   Status: New => Invalid

** No longer affects: nova

** Also affects: nova
   Importance: Undecided
   Status: New

** Summary changed:

- Nova Scheduler Fails to Start Due to Missing cinderclient
+ encryptors module forces cert and scheduler services to depend on cinderclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246103

Title:
  encryptors module forces cert and scheduler services to depend on
  cinderclient

Status in OpenStack Compute (Nova):
  New
Status in Packstack:
  Invalid

Bug description:
  When Nova Scheduler is installed via packstack as the only explicitly
  installed service on a particular node, it will fail to start.  This
  is because it depends on the Python cinderclient library, which is not
  marked as a dependency in 'nova::scheduler' class in Packstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246103/+subscriptions



[Yahoo-eng-team] [Bug 1243727] Re: libvirt instance is killed and restarted during create snapshot for instance

2013-10-23 Thread Xavier Queralt
When using cold snapshots nova will stop the instance while copying the
disk and bring it back once done. This is the normal behaviour and won't
happen if your system is capable of doing live snapshots (qemu >= 1.3.0
and libvirt >= 1.0.0).
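The capability check described above can be sketched like this (version tuples and the helper name are illustrative, not nova's actual code): live snapshots require both a new enough qemu and a new enough libvirt, otherwise nova falls back to a cold snapshot and must stop the instance.

```python
# Hedged sketch: decide whether a live snapshot is possible, given the
# hypervisor versions as (major, minor, patch) tuples. The minimum
# versions match the ones mentioned above.
def can_live_snapshot(qemu_version, libvirt_version,
                      min_qemu=(1, 3, 0), min_libvirt=(1, 0, 0)):
    return qemu_version >= min_qemu and libvirt_version >= min_libvirt
```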

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243727

Title:
  libvirt instance is killed and restarted during create snapshot for
  instance

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am not sure if this is a bug but I don't think it's correct behaviour.
  When I create a snapshot image from a running instance, the libvirt domain 
will be killed and restarted: 

  [root@cougar07 ~(keystone_admin)]# virsh -r list 
   Id    Name                 State
  ----------------------------------
   12    instance-0036        running

  [root@cougar07 ~(keystone_admin)]# virsh -r list 
   Id    Name                 State
  ----------------------------------

  [root@cougar07 ~(keystone_admin)]# virsh -r list 
   Id    Name                 State
  ----------------------------------

  [root@cougar07 ~(keystone_admin)]# virsh -r list 
   Id    Name                 State
  ----------------------------------
   13    instance-0036        shut off

  [root@cougar07 ~(keystone_admin)]# virsh -r list 
   Id    Name                 State
  ----------------------------------
   13    instance-0036        running

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243727/+subscriptions



[Yahoo-eng-team] [Bug 1235358] Re: selected cylinder exceeds maximum supported by bios

2013-10-17 Thread Xavier Queralt
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: Incomplete => Invalid

** Summary changed:

- selected cylinder exceeds maximum supported by bios
+ glusterfs: invalid volume when source image virtual size is bigger than the 
requested size

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1235358

Title:
  glusterfs: invalid volume when source image virtual size is bigger
  than the requested size

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I created a volume from an image and booted an instance from it.
  When the instance boots I get this: 'selected cylinder exceeds maximum 
supported by bios'.
  If I boot an instance directly from the same image it boots with no issues, 
so the problem only appears when booting from the volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1235358/+subscriptions



[Yahoo-eng-team] [Bug 1240140] Re: fail to create a snapshot for instance in gluster

2013-10-15 Thread Xavier Queralt
This is a problem only with the version of qemu-img shipped in RHEL 6.5,
which lacks the -s option.
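One defensive approach (illustrative only; the helper below is ours, not nova's) is to probe the installed qemu-img's usage text for the -s flag before attempting the snapshot conversion, since old builds such as the 0.12.1 one in the report simply don't advertise it.

```python
# Hedged sketch: inspect qemu-img's usage text (as printed in the bug
# report above) and report whether the convert subcommand advertises
# the lowercase -s (internal snapshot) flag.
def convert_supports_snapshot_flag(usage_text):
    for line in usage_text.splitlines():
        line = line.strip()
        if line.startswith('convert'):
            return '-s ' in line
    return False
```

On the RHEL 6.5 build quoted above (whose convert syntax only lists uppercase `-S sparse_size`) this returns False, so the caller could fail early with a clear error instead of qemu-img exiting with "invalid option -- 's'".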

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240140

Title:
  fail to create a snapshot for instance in gluster

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We fail to create a snapshot for an instance booted from an image.
  Looking at the compute log I can see that qemu-img is exiting with code 1: 

  Command: qemu-img convert -f qcow2 -O qcow2 -s 
45b38831085a4b9ab7390eef35bc1070 
/var/lib/nova/instances/9d411b4a-49cf-478d-b464-ebe053daedd0/disk 
/var/lib/nova/instances/snapshots/tmpaiU9IA/45b38831085a4b9ab7390eef35bc1070
  2013-10-15 19:14:11.203 3065 TRACE nova.compute.manager [instance: 
9d411b4a-49cf-478d-b464-ebe053daedd0] Exit code: 1

  
  running the command manually: 

  [root@nott-vdsa ~(keystone_admin)]# qemu-img convert -f qcow2 -O qcow2 -s 
45b38831085a4b9ab7390eef35bc1070 
/var/lib/nova/instances/9d411b4a-49cf-478d-b464-ebe053daedd0/disk 
/var/lib/nova/instances/snapshots/tmpaiU9IA/45b38831085a4b9ab7390eef35bc1070fgfg
  convert: invalid option -- 's'
  qemu-img version 0.12.1, Copyright (c) 2004-2008 Fabrice Bellard
  usage: qemu-img command [command options]
  QEMU disk image utility

  Command syntax:
check [-f fmt] [--output=ofmt] [-r [leaks | all]] filename
create [-f fmt] [-o options] filename [size]
commit [-f fmt] [-t cache] filename
convert [-c] [-p] [-f fmt] [-t cache] [-O output_fmt] [-o options] [-S 
sparse_size] filename [filename2 [...]] output_filename
info [-f fmt] filename
map [-f fmt] [--output=ofmt] filename
snapshot [-l | -a snapshot | -c snapshot | -d snapshot] filename
rebase [-f fmt] [-t cache] [-p] [-u] -b backing_file [-F backing_fmt] 
filename
resize filename [+ | -]size

  
  not sure its related but I am working with gluster storage as backend for 
both cinder and glance

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240140/+subscriptions



[Yahoo-eng-team] [Bug 1223180] Re: Block device mapping is not updated in the swap volume

2013-09-17 Thread Xavier Queralt
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223180

Title:
  Block device mapping is not updated in the swap volume

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Swapping the volume succeeds, but the block device mapping is not updated.

  Tried Commit ID:b037993984229bb698050f20e8719b8c06ff2be3

  1. Try to swap volume from vol01 to vol02
  $ nova show aa2c1541-d298-478c-9a35-7759a07a77b0
  
+--+--+
  | Property | Value
|
  
+--+--+
  | status   | ACTIVE   
|
  | ...  | ...  
|
  | os-extended-volumes:volumes_attached | [{u'id': 
u'66230802-aed6-4a90-9dec-42fec910d13d'}]   |
  | ...  | ...  
|
  
+--+--+

  $ mysql -e "select id,device_name,volume_id from block_device_mapping where 
instance_uuid='aa2c1541-d298-478c-9a35-7759a07a77b0' and deleted=0"
  ++-+--+
  | id | device_name | volume_id|
  ++-+--+
  | 17 | /dev/vda| NULL |
  | 19 | /dev/vdb| 66230802-aed6-4a90-9dec-42fec910d13d |
  ++-+--+

  $ cinder list
  
+--+---+--+--+-+--+--+
  |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to  |
  
+--+---+--+--+-+--+--+
  | 66230802-aed6-4a90-9dec-42fec910d13d |   in-use  |vol01 |  1   |
 None|  False   | aa2c1541-d298-478c-9a35-7759a07a77b0 |
  | c3d51a52-764a-4007-976a-136b544c561b | available |vol02 |  1   |
 None|  False   |  |
  
+--+---+--+--+-+--+--+

  $ virsh domblklist instance-000e
  Target     Source
  ------------------------------------------------
  vda        
/opt/stack/data/nova/instances/aa2c1541-d298-478c-9a35-7759a07a77b0/disk
  vdb        
/dev/disk/by-path/ip-192.168.122.180:3260-iscsi-iqn.2010-10.org.openstack:volume-66230802-aed6-4a90-9dec-42fec910d13d-lun-1

  2. Swap volume is successful, but block device mapping is not updated.
  $ cinder list => OK
  
+--+---+--+--+-+--+--+
  |  ID  |   Status  | Display Name | Size | 
Volume Type | Bootable | Attached to  |
  
+--+---+--+--+-+--+--+
  | 66230802-aed6-4a90-9dec-42fec910d13d | available |vol01 |  1   |
 None|  False   |  |
  | c3d51a52-764a-4007-976a-136b544c561b |   in-use  |vol02 |  1   |
 None|  False   | aa2c1541-d298-478c-9a35-7759a07a77b0 |
  
+--+---+--+--+-+--+--+

  $ virsh domblklist instance-000e => OK
  Target     Source
  ------------------------------------------------
  vda        
/opt/stack/data/nova/instances/aa2c1541-d298-478c-9a35-7759a07a77b0/disk
  vdb        
/dev/disk/by-path/ip-192.168.122.218:3260-iscsi-iqn.2010-10.org.openstack:volume-c3d51a52-764a-4007-976a-136b544c561b-lun-1

  $ nova show aa2c1541-d298-478c-9a35-7759a07a77b0 => NG
  
+--+--+
  | Property | Value
|
  
+--+--+
  | status   | ACTIVE   
|
  | ...  | ...  
|
  | os-extended-volumes:volumes_attached | [{u'id': 

[Yahoo-eng-team] [Bug 1223374] Re: run what you can on multi action stopped by quota

2013-09-11 Thread Xavier Queralt
Nova will always start as many instances as it can fit within the current
quota (try it with the nova client).
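The behaviour described above can be sketched as follows (illustrative, under our own names; in the actual boot API the min_count/max_count request parameters play this role): nova caps the count at what quota allows and fails outright only when even the requested minimum does not fit.

```python
# Hedged sketch of partial multi-instance boot under quota.
def instances_to_boot(requested, quota_remaining, min_count=1):
    """Return how many instances to actually start.

    Boots up to `requested`, limited by remaining quota, but raises if
    even `min_count` instances cannot fit.
    """
    allowed = min(requested, quota_remaining)
    if allowed < min_count:
        raise ValueError('Quota exceeded: cannot boot even %d instance(s)'
                         % min_count)
    return allowed
```

With 7 instances of quota left and 100 requested, this starts 7 and lets quota block the rest, which is the expected result the reporter describes.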

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223374

Title:
  run what you can on multi action stopped by quota

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I ran 100 instances with a large flavor on my host and all of them failed on 
quota. 
  I think we need to improve that: run whatever we can and fail the rest.

  to reproduce: 
  install an AIO
  try to exceed the allowed quota by running a large quantity of instances with 
a large flavor (I ran 100 of the largest one)

  results: 
  none of the instances will run. 

  expected results: 
  some of the instances should run - the rest should be blocked by quota

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1223374/+subscriptions
