[Yahoo-eng-team] [Bug 1306218] Re: rebuild does not allow changing SSH keys

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306218

Title:
  rebuild does not allow changing SSH keys

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The compute API allows optionally setting adminPass and key_name for
  server creates, but only allows optionally setting adminPass for
  server rebuilds.

  This means that the key_data for an instance is effectively immutable.
  The key pair is retrieved during a server create and used to set
  key_data, but it appears that it can never be changed afterwards.
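
  A minimal python-novaclient sketch of the asymmetry described above (the
  client arguments, image/flavor IDs and key name are placeholders, not
  values from this report):

    from novaclient import client

    nova = client.Client("2", "user", "password", "project",
                         "http://keystone:5000/v2.0")

    # key_name can be supplied at create time...
    server = nova.servers.create(name="vm1", image="IMAGE_ID",
                                 flavor="FLAVOR_ID", key_name="my-keypair")

    # ...but rebuild() only accepts a new admin password; there is no
    # key_name argument, so the key_data set at create time stays in place.
    server.rebuild(image="IMAGE_ID", password="new-admin-pass")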

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302545] Re: Boot volumes API race

2016-05-17 Thread Markus Zoeller (markus_z)
The blueprint series "generic resource pools" [1] should solve this (very
old) issue.

References:
[1] 
https://github.com/openstack/nova-specs/blob/63db6163968f9e25c4b6cb121c21660092bd4d88/specs/newton/approved/generic-resource-pools.rst

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302545

Title:
  Boot volumes API race

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When there is a race for a volume between 2 or more instances, it is
  possible for more than one to pass the API check. All of them will get
  scheduled as a result, and only one will actually successfully attach
  the volume, while others will go to ERROR.

  This is not ideal since we can reserve the volume in the API, thus
  making it a bit more user friendly when there is a race (the user will
  be informed immediately instead of seeing an errored instance).
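
  A rough python-cinderclient sketch of the "reserve in the API" idea
  (credentials and the volume id are placeholders; the real fix would live
  in nova's API layer, not in client code):

    from cinderclient import client as cinder_client

    cinder = cinder_client.Client("2", "user", "password", "project",
                                  "http://keystone:5000/v2.0")

    def claim_volume_or_fail(volume_id):
        # Reserve the volume up front so a second boot request fails
        # immediately instead of producing an ERROR instance later.
        volume = cinder.volumes.get(volume_id)
        if volume.status != "available":
            raise RuntimeError("volume %s is already in use" % volume_id)
        cinder.volumes.reserve(volume)      # status becomes 'attaching'

    def release_volume(volume_id):
        # Roll back the reservation if scheduling or attach fails later.
        cinder.volumes.unreserve(volume_id)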

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298217] Re: Unable to connect VM to a specific subnet

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298217

Title:
  Unable to connect VM to a specific subnet

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  A VM can only be connected (on creation or later) to a network, not to a
  specific subnet of that network.
  The same goes for ports - you cannot create a port on a specific subnet
  in a network.

  This is inconsistent with router-interface-add, which targets a
  specific subnet instead of a network.

  It also greatly limits the user's ability to manage VMs in their own
  network.

  Port and VM creation should allow the user to target a specific subnet.
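
  Neutron itself can already pin a port to a subnet through fixed_ips; the
  gap described above is that the nova boot path cannot express this. A
  hedged workaround sketch with python-neutronclient and python-novaclient
  (all IDs and credentials are placeholders):

    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    neutron = neutron_client.Client(username="user", password="password",
                                    tenant_name="project",
                                    auth_url="http://keystone:5000/v2.0")
    nova = nova_client.Client("2", "user", "password", "project",
                              "http://keystone:5000/v2.0")

    # Create a port pinned to one specific subnet of the network...
    port = neutron.create_port({"port": {
        "network_id": "NETWORK_ID",
        "fixed_ips": [{"subnet_id": "SUBNET_ID"}],
    }})["port"]

    # ...and boot the instance with that pre-created port.
    nova.servers.create(name="vm1", image="IMAGE_ID", flavor="FLAVOR_ID",
                        nics=[{"port-id": port["id"]}])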

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294587] Re: Old style Images with vhd files tarred along with a folder are not handled in nova xen plugin

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294587

Title:
  Old style Images with vhd files tarred along with a folder are not
  handled in nova xen plugin

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There are some legacy snapshots which are uploaded into swift, with
  vhd files bundled in a folder called "image".

  They are in the following format:

  image/
      snap.vhd
      image.vhd

  Right now, in the glance plugin, after downloading the image, when we try
  to handle old style images, we expect the vhd files to be present directly
  in the staging_path.

https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py#167

  But in the case of these legacy images, a folder called "image"
  containing the vhd files is downloaded into the staging path.

  So this level of nesting in the downloaded image is not supported today
  when handling old style images.
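
  A minimal sketch (not the actual plugin code) of tolerating the legacy
  layout: if the tarball expanded into an extra "image/" directory inside
  the staging path, move the vhd files up one level before the usual
  handling.

    import os
    import shutil

    def flatten_legacy_image_dir(staging_path):
        legacy_dir = os.path.join(staging_path, "image")
        if not os.path.isdir(legacy_dir):
            return  # already the expected flat layout
        for name in os.listdir(legacy_dir):
            if name.endswith(".vhd"):
                shutil.move(os.path.join(legacy_dir, name),
                            os.path.join(staging_path, name))
        os.rmdir(legacy_dir)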

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281853] Re: Add method to bulk delete keypairs

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

** Changed in: python-novaclient
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281853

Title:
  Add method to bulk delete keypairs

Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  Opinion

Bug description:
  A user should be able to delete keypairs in bulk.  I'm currently
  deleting about 100,000 of them and expect it to take about 13 hours.
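
  In the absence of a bulk-delete API, a hedged client-side workaround that
  at least issues the individual DELETE calls concurrently with
  python-novaclient (credentials are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    from novaclient import client

    nova = client.Client("2", "user", "password", "project",
                         "http://keystone:5000/v2.0")

    keypairs = nova.keypairs.list()
    with ThreadPoolExecutor(max_workers=20) as pool:
        # Still one request per keypair, but run in parallel.
        list(pool.map(nova.keypairs.delete, keypairs))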

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293444] Re: filter: aggregate image props isolation needs a strict option

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293444

Title:
  filter: aggregate image props isolation needs a strict option

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The filter AggregateImagePropertiesIsolation needs an option to
  provide a way that an image without key does not satisfy the request.

  Strict isolation False:

           |  key=foo  |  key=xxx  |  (no key)
  ---------+-----------+-----------+-----------
  key=foo  |  True     |  False    |  True
  key=bar  |  False    |  False    |  True
  (no key) |  True     |  True     |  True

  Strict isolation True:

           |  key=foo  |  key=xxx  |  (no key)
  ---------+-----------+-----------+-----------
  key=foo  |  True     |  False    |  False
  key=bar  |  False    |  False    |  False
  (no key) |  False    |  False    |  False
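
  A simplified sketch of the requested behaviour (not the actual filter
  code): with strict isolation enabled, an image that lacks the aggregate's
  metadata key no longer passes the host.

    def host_passes(aggregate_metadata, image_properties, strict=False):
        for key, allowed_values in aggregate_metadata.items():
            if key not in image_properties:
                if strict:
                    return False   # image without the key fails the request
                continue           # default behaviour: missing key is ignored
            if image_properties[key] not in allowed_values:
                return False
        return True

    # Aggregate requires key=foo:
    assert host_passes({"key": {"foo"}}, {}) is True             # lenient
    assert host_passes({"key": {"foo"}}, {}, strict=True) is False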

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294098] Re: Serial console in vmware

2016-05-17 Thread Markus Zoeller (markus_z)
Looks like this got implemented with [1].

References:
[1] https://blueprints.launchpad.net/nova/+spec/vmware-console-log

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294098

Title:
  Serial console in vmware

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I would like to request a feature similar to what exists in KVM - a serial
  console (log).

  It should be easy to do, I think. Just reconfigure the VM one more time
  and add a serial port redirected to a file in the VM directory (for
  example $DATASTORE/$VM_ID/console.log), the same as it is in KVM.
  It might be harder to expose it through nova console-log, but having a
  console.log text file in the datastore would be good enough (for now).
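
  A rough pyVmomi sketch of the reconfiguration suggested above (the helper
  name and datastore path are assumptions, and this is not what the nova
  driver ended up doing): add a serial port backed by a file in the VM's
  datastore directory.

    from pyVmomi import vim

    def add_file_backed_serial_port(vm, datastore_path):
        # datastore_path e.g. "[datastore1] vm-uuid/console.log"
        backing = vim.vm.device.VirtualSerialPort.FileBackingInfo()
        backing.fileName = datastore_path

        serial = vim.vm.device.VirtualSerialPort()
        serial.backing = backing
        serial.yesAutoConnect = True

        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        change.device = serial

        spec = vim.vm.ConfigSpec(deviceChange=[change])
        return vm.ReconfigVM_Task(spec=spec)   # caller waits on the task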

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290455] Re: libvirt inject_data assumes instance with kernel_id doesn't contain a partition table

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290455

Title:
  libvirt inject_data assumes instance with kernel_id doesn't contain a
  partition table

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  libvirt/driver.py passes partition=None to disk.inject_data() for any
  instance with kernel_id set. partition=None means that inject_data
  will attempt to mount the whole image, i.e. assuming there is no
  partition table. While this may be true for EC2, it is not safe to
  assume that Xen images don't contain partition tables. This should
  check something more directly related to the disk image. In fact,
  ideally it would leave it up to libguestfs to work it out, as
  libguestfs is very good at this.
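
  A hedged sketch of letting libguestfs decide instead of keying off
  kernel_id (assuming the libguestfs Python bindings are available): probe
  the image for a partition table before choosing the partition argument
  for inject_data().

    import guestfs

    def image_has_partition_table(image_path):
        g = guestfs.GuestFS(python_return_dict=True)
        g.add_drive_opts(image_path, readonly=1)
        g.launch()
        try:
            return len(g.list_partitions()) > 0
        finally:
            g.close()

    # Pass partition=None (mount the whole image) only when no partition
    # table exists; otherwise mount a partition instead of the raw device.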

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276888] Re: Not return volume type id but name when nova gets volume object from cinder

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276888

Title:
  Not return volume type id but name when nova gets volume object from
  cinder

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When nova gets a volume object from cinder and untranslates it,
  volume_type_id is set from vol.volume_type.

  Actually, volume_type is returned as a name, not an id.

  However, some unit tests define volume_type_id with different types of
  values:

  1. in nova/tests/fake_volumes.py

  'volume_type_id': 99

  2. in nova/test/api/openstack/fakes.py

  'volume_type_id': 'fakevoltype'

  I think we should rename volume_type_id to volume_type so the name matches
  the meaning of the value.

  Otherwise, volume_type_id should consistently hold one type of value.
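
  A small sketch of the mismatch (not the actual untranslate helper, and
  with the field list trimmed): the value copied into the untranslated dict
  comes from vol.volume_type, which cinder returns as a name, while the key
  suggests an id.

    def _untranslate_volume(vol):
        return {
            "id": vol.id,
            "size": vol.size,
            # Suggestion from this report: either rename the key to
            # "volume_type"...
            "volume_type": getattr(vol, "volume_type", None),
            # ...or keep "volume_type_id" but guarantee it always holds an id.
        }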

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275675] Re: Version change in ObjectField does not work with back-levelling

2016-05-17 Thread Markus Zoeller (markus_z)
This is now over 2 years old and the comments in [1] indicate that this
is the responsibility of "oslo.versionedobjects" now. The ML discussion
[2] doesn't show a clear path forward either. I'm closing it as "won't
fix".

References:
[1] https://review.openstack.org/#/c/202554/
[2] http://lists.openstack.org/pipermail/openstack-dev/2014-February/026151.html

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275675

Title:
  Version change in ObjectField does not work with back-levelling

Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo.versionedobjects:
  New

Bug description:
  When a NovaObject primitive is deserialized the object version is
  checked and an IncompatibleObjectVersion exception is raised if the
  serialized primitive is labelled with a version that is not known
  locally. The exception indicates what version is known locally, and
  the deserialization attempts to backport the primitive to the local
  version.

  If a NovaObject A has an ObjectField b containing NovaObject B and it
  is B that has the incompatible version, the version number in the
  exception will be the locally supported version for B. The
  deserialization will then attempt to backport the primitive of object A
  to the locally supported version number for object B.
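
  An illustrative sketch (not oslo.versionedobjects code) of the mismatch:
  A is known locally at 1.3 and carries an ObjectField holding B, which the
  remote side serialized at 1.2 while only B 1.1 is known locally.

    local_versions = {"A": "1.3", "B": "1.1"}

    remote_primitive = {
        "name": "A", "version": "1.3",
        "data": {"b": {"name": "B", "version": "1.2", "data": {}}},
    }

    # The nested B raises IncompatibleObjectVersion with supported="1.1",
    # and the deserializer then asks for the *A* primitive to be backported
    # to "1.1" -- a version number that only makes sense for B.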

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1275675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1226425] Re: Problem specifying VMware network name in FlatNetworking

2016-05-17 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226425

Title:
  Problem specifying VMware network name in FlatNetworking

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Unable to use an existing network (port group) on an ESX host if the
  network's name is not a valid Linux bridge identifier.

  2013-09-17 06:02:47.365 TRACE nova.openstack.common.rpc.amqp
  ^[[01;35m^[[00m[u'Traceback (most recent call last):\n', u'  File
  "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 421, in
  _process_data\n**args)\n', u'  File
  "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172,
  in dispatch\nresult = getattr(proxyobj, method)(ctxt,
  **kwargs)\n', u'  File "/opt/stack/nova/nova/network/floating_ips.py",
  line 189, in deallocate_for_instance\nsuper(FloatingIP,
  self).deallocate_for_instance(context, **kwargs)\n', u'  File
  "/opt/stack/nova/nova/network/manager.py", line 530, in
  deallocate_for_instance\nself.deallocate_fixed_ip(context,
  fixed_ip[\'address\'], host=host)\n', u'  File
  "/opt/stack/nova/nova/network/manager.py", line 237, in
  deallocate_fixed_ip\naddress)\n', u'  File
  "/opt/stack/nova/nova/network/manager.py", line 918, in
  deallocate_fixed_ip\nself._teardown_network_on_host(context,
  network)\n', u'  File "/opt/stack/nova/nova/network/manager.py", line
  1629, in _teardown_network_on_host\n
  self.driver.update_dhcp(elevated, dev, network)\n', u'  File
  "/opt/stack/nova/nova/network/linux_net.py", line 981, in
  update_dhcp\nrestart_dhcp(context, dev, network_ref)\n', u'  File
  "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in
  inner\nreturn f(*args, **kwargs)\n', u'  File
  "/opt/stack/nova/nova/network/linux_net.py", line 1098, in
  restart_dhcp\n_add_dnsmasq_accept_rules(dev)\n', u'  File
  "/opt/stack/nova/nova/network/linux_net.py", line 910, in
  _add_dnsmasq_accept_rules\niptables_manager.apply()\n', u'  File
  "/opt/stack/nova/nova/network/linux_net.py", line 421, in apply\n
  self._apply()\n', u'  File
  "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in
  inner\nreturn f(*args, **kwargs)\n', u'  File
  "/opt/stack/nova/nova/network/linux_net.py", line 450, in _apply\n
  attempts=5)\n', u'  File "/opt/stack/nova/nova/network/linux_net.py",
  line 1189, in _execute\nreturn utils.execute(*cmd, **kwargs)\n',
  u'  File "/opt/stack/nova/nova/utils.py", line 167, in execute\n
  return processutils.execute(*cmd, **kwargs)\n', u'  File
  "/opt/stack/nova/nova/openstack/common/processutils.py", line 178, in
  execute\ncmd=\' \'.join(cmd))\n', u'ProcessExecutionError:
  Unexpected error while running command.\nCommand: sudo nova-rootwrap
  /etc/nova/rootwrap.conf iptables-restore -c\nExit code: 2\nStdout:
  \'\'\nStderr: "Bad argument `Network\'\\nError occurred at line:
  21\\nTry `iptables-restore -h\' or \'iptables-restore --help\' for
  more information.\\n"\n']

  Triage Info :-
  With flat networking and VMware driver, the config parameter 
flat_network_bridge is associated with the port group on the ESX host. This 
parameter is used to create a port group or to find an existing port group. 
This is also used as the name of the bridge containing the flat interface.

  Eg. If there exists a network named 'VM Network' on a vswitch
  associated with a host in vCenter, you cannot specify the same in
  flat_network_bridge because a linux bridge cannot take this name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274325] Re: Security-groups not working with cells using nova-network

2016-05-17 Thread Markus Zoeller (markus_z)
nova-network is deprecated with the Newton release [1], which means it
is unlikely that this gets solved. I'm closing it with "won't fix".

References:
[1] 
https://github.com/openstack/nova/commit/7d5fc486823117ba7a0a9005142ef87059ef74cd

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274325

Title:
  Security-groups not working with cells using nova-network

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Security groups are not working with cells using nova-network.
  Only the API cell database is updated when adding rules; these are not
  propagated to the child cells.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270238] Re: libvirt driver doesn't support disk re-size down

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

In case you want to work on that, consider writing a blueprint [1] and a
spec [2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint
(and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270238

Title:
  libvirt driver doesn't support disk re-size down

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently the libvirt driver doesn't support resizing a disk down.

  During a resize down everything appears to run well and the instance is
  updated to the new flavor with the new disk size,
  but in reality the disk is not resized and keeps its original size.

  We need to add support for resizing down:
  1. resizing the fs
  2. resizing the image

  For step one we have to be sure we work with only one partition
  and that we don't erase data.

  What should be done to support ntfs?
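
  A hedged sketch of the two steps, assuming a raw image that holds a single
  ext4 filesystem with no partition table (the constraint mentioned above);
  ntfs would need different tooling, and newer qemu-img versions require the
  explicit --shrink flag.

    import subprocess

    def shrink_disk(image_path, new_size):       # new_size e.g. "10G"
        subprocess.check_call(["e2fsck", "-f", "-y", image_path])
        # 1. shrink the filesystem inside the image
        subprocess.check_call(["resize2fs", image_path, new_size])
        # 2. shrink the image file itself
        subprocess.check_call(["qemu-img", "resize", "--shrink",
                               image_path, new_size])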

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270726] Re: Different instance_name_template config in controller node and compute node will lead to VM not be deleted actually

2016-05-17 Thread Markus Zoeller (markus_z)
Sounds like a real bug, therefore setting the importance to "low". But
the report is pretty old, so I'm closing it. If you could reproduce this
issue on a supported release [1], please reopen this bug report.

References:
[1] http://releases.openstack.org/

** Changed in: nova
   Importance: Wishlist => Low

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270726

Title:
  Different instance_name_template config in controller node and compute
  node will lead to VM not be deleted actually

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  If we configure instance_name_template differently on the controller node
  and the compute node, the VM will not actually be deleted on the compute
  node, even though the instance is deleted in OpenStack.

  The reason for the issue is:
  When booting an instance, the instance's name is generated on the
  controller node using the controller's instance_name_template config and
  is passed to nova-compute, where it is used as the VM's name.
  But when deleting an instance, the instance's name is not passed from the
  controller node to the compute node; it is generated on the compute node
  using the compute node's instance_name_template config. So nova-compute
  will not find the VM to delete, because it looks it up by the name
  generated on the compute node.

  Nova should use the instance_name_template of either the controller node
  or the compute node consistently.
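
  A minimal illustration of the mismatch: the name used at boot comes from
  the controller's template, while the name looked up at delete is rendered
  from the compute node's template (template values below are examples).

    controller_template = "instance-%08x"   # controller's nova.conf
    compute_template = "vm-%08x"            # compute node's nova.conf (differs!)

    instance_id = 42
    name_used_at_boot = controller_template % instance_id    # "instance-0000002a"
    name_used_at_delete = compute_template % instance_id     # "vm-0000002a"

    assert name_used_at_boot != name_used_at_delete
    # The domain created at boot is never found when deleting.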

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269060] Re: Calls to get_fixed_ip are not implemented when using neutron

2016-05-17 Thread Markus Zoeller (markus_z)
It raises a NotImplementedError in the meantime:
https://github.com/openstack/nova/blob/1335abb5da5acdbe3596f0d6443efc65ea075b90/nova/network/neutronv2/api.py#L1317

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269060

Title:
  Calls to get_fixed_ip are not implemented when using neutron

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The get_fixed_ip() the nova.network.neutronv2.api module is not
  implemented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262038] Re: There is the delete_on_termination option for attaching volume when creating a server but not for an existing server

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262038

Title:
  There is the delete_on_termination option for attaching volume when
  creating a server but not for an existing server

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There is a delete_on_termination option for attaching a volume when
  creating a server, but not for an existing server. So when deleting the
  server there can be two outcomes even though the delete_on_termination
  option is true: in one case the attached volume is deleted, in the other
  it is not.

  When attaching a volume while creating a server, the API request contains
  'block_device_mapping', such as:
  "block_device_mapping": [
      {
          "volume_id": "",
          "device_name": "/dev/vdc",
          "delete_on_termination": "true"
      }
  ]

  It can contain the 'delete_on_termination' option.

  But when attaching a volume to an existing server, there is no
  'delete_on_termination' option; the POST data looks like:
  {
      "volumeAttachment": {
          "volumeId": "",
          "device": "/dev/sdb"
      }
  }
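
  A sketch of what the attach request could look like if the option were
  also accepted for existing servers (the extra field below is the requested
  addition, not part of the API described above):

    proposed_volume_attachment = {
        "volumeAttachment": {
            "volumeId": "VOLUME_ID",
            "device": "/dev/sdb",
            "delete_on_termination": True,   # requested new field
        }
    }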

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259262] Re: there is no api or cli to enable/disable a flavor

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

NOTE: This got introduced with change [1].

References:
[1] https://github.com/openstack/nova/commit/f371198

** Changed in: nova
   Status: Confirmed => Opinion

** Changed in: python-novaclient
   Status: Confirmed => Opinion

** Tags added: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259262

Title:
  there is no api or cli to enable/disable a flavor

Status in OpenStack Compute (nova):
  Opinion
Status in python-novaclient:
  Opinion

Bug description:
  There is an extended property 'disabled' defined for a flavor, but
  there is no API or CLI to change its value.

  Customers need a way to temporarily make a flavor invisible to the
  public by setting 'disabled' to true.
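
  Lacking an enable/disable API, a hedged workaround sketch with
  python-novaclient: recreate the flavor as non-public and grant access only
  to selected tenants, which hides it from the public flavor list much like
  'disabled' would (names, credentials and IDs are placeholders).

    from novaclient import client

    nova = client.Client("2", "admin", "password", "admin-project",
                         "http://keystone:5000/v2.0")

    flavor = nova.flavors.find(name="m1.special")
    nova.flavors.delete(flavor)              # flavors cannot be updated in place
    private = nova.flavors.create(name="m1.special", ram=flavor.ram,
                                  vcpus=flavor.vcpus, disk=flavor.disk,
                                  is_public=False)
    nova.flavor_access.add_tenant_access(private, "TENANT_ID")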

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254722] Re: Security group record in Server Details should include ID as well as Name

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254722

Title:
  Security group record in Server Details should include ID as well as
  Name

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  This is one of the areas where the Nova v2 API is inconsistent,
  exposed in this case because of the move towards Neutron.

  In Nova-network a security group name is mandatory, and must be unique within 
the tenant.
  In Neutron the name is optional, and all operations are based on the uuid.

  However, although all SecGroup operations are based on id/uuid, the
  security group element of a server details response only includes
  'name'. To add to the confusion, if no name is defined for a Neutron
  security group then the name is set to the uuid.

  The proposal is to add an 'id' field to the security group element, at
  least for the Neutron driver and preferably for Nova Network as well.

  Because this is generated in code which is shared between the v2 and v3
  APIs, we need to decide whether the v2 API can be extended with
  this additional field, or if it will have to remove the id entries
  (perhaps conditionally on whether an API extension is enabled).
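
  A sketch of the proposed server-details fragment: today only "name" is
  returned, and the report asks for "id" to be added alongside it (the uuid
  below is an example value).

    security_groups_today = [{"name": "web"}]

    security_groups_proposed = [
        {"name": "web", "id": "a7f1c9e2-3f7c-4c3b-9d4e-0f6a1b2c3d4e"},
    ]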

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243306] Re: consoleauth cannot be run in HA configuration without external memcache

2016-05-17 Thread Markus Zoeller (markus_z)
In addition to comment #13:

This bug report is pretty old, so I'm closing it.
Please re-test this issue once [1] has merged. If the problem is still there,
reopen this report.

References:
[1] https://review.openstack.org/#/c/301158/

** Changed in: nova
   Importance: Wishlist => Low

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243306

Title:
  consoleauth cannot be run in HA configuration without external
  memcache

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Running more than one consoleauth service causes silent failures where
  tokens simply don't get authenticated, because only one of the
  processes has it cached.

  There are two ways to fix this:
  - process sending the new token has to use the fanout queue rather than a 
direct message, so that all consoleauth services are updated
  - token can be sent to the database, rather than consoleauth directly - this 
allows restarting services and adding new ones without creating new problems

  Ideally both ways could be implemented at the same time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1243306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233335] Re: Nova calls into neutron as admin circumventing fixed-ip on shared network

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/125

Title:
  Nova calls into neutron as admin circumventing fixed-ip on shared
  network

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  In Neutron, on shared networks the default policy is to not allow
  tenants to specify their own fixed IPs. This is done so that one
  cannot deliberately try to impersonate another tenant's instance after it
  has been deleted. The reason this works anyway is that nova is calling
  into neutron as admin.

  $quantum port-create --fixed-ip  ip_address=10.2.0.44  shared-net
  {"NeutronError": "Policy doesn't allow create_port to be performed."}

  ^Fails

  
  $ nova boot --image cirros-0.3.1-x86_64-uec  --nic 
net-id=abce62c9-2d83-42ea-ada2-fd24e14af842,v4-fixed-ip=10.2.0.44 --flavor 1 
vm23
  ^Succeeds

  Marking as a security vulnerability though it's probably not really a
  big deal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1222990] Re: Cannot specify subset of PCI devices for PCI passthrough

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in.

Feature requests for nova are done with blueprints [1] and with specs
[2]. I recommend reading [3] if you have not done so yet. The effort to
implement the requested feature is then driven only by the blueprint (and spec).

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1222990

Title:
  Cannot specify subset of PCI devices for PCI passthrough

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I went through the recent code merge of the PCI passthrough
  implementation. I see some drawbacks in the implementation, and it would
  be a great improvement if the following could be accommodated.

  1. Ability to specify a sub-set of PCI devices to be exported to Openstack 
via entries in nova.conf of the compute node.
  Example: pci_passthrough_whitelist=[{"product_id":"1520", "vendor_id":"8086", 
"deviceids":":06:00.0, :06:00.1, :06:00.2, :06:00.3"}]

  2.  Group PCI devices, so that when creating a flavor we can choose
  how many interfaces from each group need to be presented to the
  guest. The group name can be specified in nova.conf of the control
  node if needed.

  For example, if we have two SR-IOV cards where each is connected
  physically to a different network and each has 32 virtual functions,
  there is no way in the current implementation to spin up a VM with one
  interface (VF) from each group.

  Example:
  pci_alias={"product_id":"1520", "name":"IN", "deviceids":":06:00.0, 
:06:00.2"}
  pci_alias={"product_id":"1520", "name":"OUT", "deviceids":":06:00.1, 
:06:00.3"}

  3. 'nova show', 'nova hypervisor-stats' and 'nova hypervisor-show'
  should show (a) the PCI device associated with the VM, (b) how many
  groups there are and what their current usage is, and (c) individual
  hypervisor PCI device usage, respectively.

  4. Some ordering which enables the guest to predict which PCI device
  belongs to which network.
  For example, in the alphabetical order of the network names? With this,
  the guest can predict that the first interface will be from the IN group
  and the second will be from the OUT group.

  Is there any chance that we can see these in Havana ?

  We have an implementation that does these at:
  https://github.com/CiscoSystems/nova/tree/grizzly-multitool but is not
  based on current implementation.

  Thanks,
  Shesha

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1222990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218994] Re: file based disk images do not get scrubbed on delete

2016-05-17 Thread Markus Zoeller (markus_z)
Closed as "Opinion" in comment #5 and wrongly re-opened by follow-up
updates of this bug report.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218994

Title:
  file based disk images do not get scrubbed on delete

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Right now, LVM backed instances can be scrubbed (overwritten with
  zeros using dd) upon deletion.  However, there is no such option with
  file backed images.  While it is true that fallocate can handle some
  of this by returning 0s to the instance when reading any unwritten
  parts of the file, there are some cases where it is not desirable to
  enable fallocate.

  What would be preferred is a set of options similar to what cinder has
  implemented, so the operator can choose to shred or zero out the file,
  based on their organization's own internal data policies. A zero-out
  option satisfies those who must ensure they scrub tenant data upon
  deletion, and shred would satisfy those beholden to DoD 5220-22.

  This would of course make file-backed disks vulnerable to
  https://bugs.launchpad.net/nova/+bug/889299 but that might not be a
  bad thing considering it's quite old.

  Attached an initial patch for nova/virt/libvirt/driver.py that
  performs the same LVM zero scrub routine to disk backed files, however
  it lacks any flags to enable or disable it right now.
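
  A hedged sketch of a zero-out option for file-backed disks, mirroring the
  dd-based LVM scrub mentioned above; chunked writes keep memory use flat.

    import os

    def zero_scrub(path, chunk_mb=16):
        size = os.path.getsize(path)
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                step = min(len(chunk), size - written)
                f.write(chunk[:step])
                written += step
            f.flush()
            os.fsync(f.fileno())   # make sure the zeros actually hit the disk
        os.unlink(path)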

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199019] Re: Allow admin to specify CPU topology as metadata for glance images

2016-05-17 Thread Markus Zoeller (markus_z)
IIUC, this is implemented with
https://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-vcpu-topology.html

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199019

Title:
  Allow admin to specify CPU topology as metadata for glance images

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Certain operating systems (namely non-server versions of Microsoft
  Windows) limit the number of physical CPUs (sockets) to two without
  restricting the number of cores per socket. This effectively limits the
  number of vCPUs that can be assigned to a non-server version of Microsoft
  Windows to two. The administrator should be able to configure the
  vCPU/core topology as metadata for images in glance. This would also
  address problems with software licensed based on a certain number of
  physical processors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218528] Re: openvswitch-nova in XenServer doesn't work with bonding

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.


** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218528

Title:
  openvswitch-nova in XenServer doesn't work with bonding

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Hi!

  The init.d script openvswitch-nova does a for loop over all network
  interfaces eth*, except those set in INTERFACES in
  /etc/sysconfig/openvswitch-nova.

  The process executed by ovs_configure_base_flows.py for each network
  card is:

  1. get the bridge of that interface: ovs-vsctl iface-to-br ethX
  2. delete flows: ovs-ofctl del-flows xapi1
  3. allow traffic from the physical NIC: ovs-ofctl add-flow bridge 
"priority=2,in_port=pnic_ofport_ethX,actions=normal"
  4. allow traffic from the management interface: ovs-ofctl add-flow bridge 
"priority=2,in_port=LOCAL,actions=normal"
  5. drop traffic by default: ovs-ofctl add-flow bridge 
"priority=1,actions=drop"

  This works for eth0, but when it is executed for eth1, it deletes the
  flows for eth0.

  In this situation active/active bonding does not work; the traffic over
  one network card (eth0) is dropped.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1196416] Re: The verify_resize status is not accurate of admin action migrate.

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1196416

Title:
  The verify_resize status is not accurate of admin action migrate.

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The verify_resize status is not accurate for the admin migrate action; it
  gets users confused between resize and migrate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217026] Re: Nova os-services:update should be requested by service id to respect REST principles

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.


** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217026

Title:
  Nova os-services:update should be requested by service id to respect
  REST principles

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently the os-services:update operation is requested against v3/os-
  services/{enable, disable}, which is not the right approach and was one
  of the things to be fixed in the v3 plans.

  The right way to do it is to show the service id when listing services
  and to use that id to create a request to enable/disable it against
  v3/os-services/{id} with a body describing the action to be performed.
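
  A sketch of the proposed RESTful form (the URL and body shape are the
  report's suggestion, not the current API), using python-requests with a
  placeholder endpoint and token:

    import requests

    service_id = "4"   # taken from a prior GET /v3/os-services listing
    resp = requests.put(
        "http://nova-api:8774/v3/os-services/%s" % service_id,
        headers={"X-Auth-Token": "TOKEN"},
        json={"service": {"status": "disabled",
                          "disabled_reason": "maintenance"}},
    )
    resp.raise_for_status()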

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1192192] Re: Nova initiated Live Migration regression for vmware VCDriver

2016-05-17 Thread Markus Zoeller (markus_z)
The support matrix clarifies that this is not supported and the code
raises an exception. The VMWare folks surely know this limitation of
their driver. End users and developers are therefore informed. I see no
need to keep this bug report open.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1192192

Title:
  Nova initiated Live Migration regression for vmware VCDriver

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova's Live Migration feature should work with hosts in the same
  cluster.  We can't specify hosts in a cluster to move between! That
  makes the live-migration feature effectively disabled.

  > I've found here
  > that ESX/VC
  > drivers supports live-migration, also i found related method in
  > code, it uses vmware API "MigrateVM_Task" function.
  >
  > But i couldn't understand how i should use live-migration:
  >
  >- standalone ESXi hosts not supports any migration. Therefore
  >VMWareESXDriver also not supports migration. Correct, if am wrong.
  >- In case vCenter (VMWareVCDriver) i could use vMotion to migrate VMs
  >between members of cluster. But nova sees cluster as a single "host" and
  >thru "nova live-migration VM" scheduler raise exception "NoValidHost: No
  >valid host was found."
  >
  > My question is: What is the use-case of this function?

  See: http://docs.openstack.org/trunk/openstack-compute/admin/content/live-migration-usage.html

  Note: one possible fix for this is to implement a migration strategy
  that can move between hosts not in the same cluster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1192192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1185083] Re: VMware: Simplify logic get_network_with_the_name

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.


** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1185083

Title:
  VMware: Simplify logic get_network_with_the_name

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The method get_network_with_the_name features several points of repeated code 
that does not need to be repeated.
  * network._type is always a unique and valid argument for get_dynamic_property
  * networks will usually have 0 or 1 result, more results will be due to 
networks sharing the same name which will make them indistinguishable (this 
should be considered an exceptional case)
  * multiple return points should not be necessary particularly since we are 
working with 0 or 1 valid values.

  See:

  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/network_util.py#L32
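
  A hedged sketch of the simplification suggested above (the helper name is
  a placeholder for the existing vmwareapi utility calls): one query, one
  return point, and more than one match treated as exceptional.

    def get_network_with_the_name(session, network_name):
        # placeholder helper standing in for the vim property queries
        networks = _find_networks_by_name(session, network_name)
        if not networks:
            return None
        if len(networks) > 1:
            # Several networks sharing the same name are indistinguishable.
            raise RuntimeError("multiple networks named %r" % network_name)
        return networks[0]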

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1185083/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1183885] Re: pxe boot a guest in nova is not possible anymore

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.


** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1183885

Title:
  pxe boot a guest in nova is not possible anymore

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Hi guys,

  libvirt.xml.template support has been removed between diablo and
  folsom (I guess) but we still need a way of PXE booting a guest OS in
  some situations. All the forums I've read point us to modifying
  libvirt.xml.template, but that's now deprecated.

  This patch permits the configuration of PXE boots in a hackish way:

   --- /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py.orig  2013-05-10 16:25:17.787862177 +0000
   +++ /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py       2013-05-10 16:26:39.442022870 +0000
   @@ -87,6 +87,9 @@
    LOG = logging.getLogger(__name__)

    libvirt_opts = [
   +    cfg.StrOpt('default_guest_boot_dev',
   +               default='hd',
   +               help='Sets the default guest boot device'),
        cfg.StrOpt('rescue_image_id',
                   default=None,
                   help='Rescue ami image'),
   @@ -1792,7 +1795,7 @@
                                     instance['name'],
                                     "ramdisk")
        else:
   -        guest.os_boot_dev = "hd"
   +        guest.os_boot_dev = CONF.default_guest_boot_dev

        if CONF.libvirt_type != "lxc" and CONF.libvirt_type != "uml":
            guest.acpi = True

  This may not be the best way as I would prefer to have two boot
  devices in order to try "network" boot and then "hd" boot if the first
  one failed.   This is not a bug but we lost a possibility when
  libvirt.xml.template usage was deprecated and some people will find it
  hard to get it working easily.

  Thank you very much,

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1183885/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180664] Re: How to update flavor parameters

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

NOTE:
Changing the vcpu, memory or disk of a flavor has a big impact. Imagine an 
OpenStack cloud where 1000 instances were launched with example flavor "A" (1 
VCPU, 1GB memory). After that, we change this specific flavor "A" by increasing 
the amount of VCPUs from 1 to 2. This means all 1000 instances would need to be 
rebuilt with 2 VCPUs. I doubt that this is the desired use case. To be honest, 
I don't understand the use case described in comment #2.
Nevertheless, feature requests for nova are done with blueprints [1] and with 
specs [2]. I recommend reading [3] if you have not done so yet. The effort to 
implement the requested feature is then driven only by the blueprint (and spec).

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180664

Title:
  How to update flavor parameters

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I have created new flavors using the REST API, but I have not found any
  API for updating the parameters of a flavor like vcpu, memory, disk, etc.
  Let me know how I can proceed regarding this.
  I have Ubuntu 12.10 and Grizzly installed.
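
  There is indeed no update API; a hedged python-novaclient sketch of the
  usual workaround is to delete the flavor and recreate it with the new
  values (existing instances keep their original resources; credentials and
  names are placeholders):

    from novaclient import client

    nova = client.Client("2", "admin", "password", "admin-project",
                         "http://keystone:5000/v2.0")

    old = nova.flavors.find(name="m1.custom")
    nova.flavors.delete(old)
    nova.flavors.create(name="m1.custom", ram=2048, vcpus=2, disk=old.disk,
                        flavorid=old.id)   # keep the same id if desired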

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174802] Re: More attributes of Floating IP should be published for metering

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1174802

Title:
  More attributes of Floating IP should be published for metering

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There are some attributes of a floating IP that exist in the DB but are
  not published, such as project_id, host, auto_assigned, etc. However,
  these attributes are important for metering. So this bug is opened to
  track publishing more attributes of the floating IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1174802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1172808] Re: Nova fails on Quantum port quota too late

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

FWIW, I guess this is also part of the bp "get-me-a-network":
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/get-
me-a-network.html

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1172808

Title:
  Nova fails on Quantum port quota too late

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently Nova will only hit any port quota limit in Quantum in the
  compute manager - as that's where the code to create ports exists -
  resulting in the instance going to an error state (after it's bounced
  through three hosts).

  Seems to me that for Quantum the ports should be created in the API
  call (so that the error can be sent back to the user), and the port
  then passed down to the compute manager.

  (Since a user can pass a port into the server create call I'm assuming
  this would be OK)
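
  For illustration, the early-failure behaviour the report asks for can be
  approximated today from the client side by creating the port first and
  passing it to the boot call. A hedged sketch (assuming authenticated
  python-neutronclient and python-novaclient objects named neutron and nova,
  and placeholder IDs):

    # Create the port up front so a port-quota failure is reported
    # synchronously to the user ...
    port = neutron.create_port({"port": {"network_id": NETWORK_ID}})["port"]

    # ... then hand the pre-created port to Nova instead of letting
    # nova-compute allocate one later.
    server = nova.servers.create(
        name="demo",
        image=IMAGE_ID,
        flavor=FLAVOR_ID,
        nics=[{"port-id": port["id"]}],
    )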

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1172808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1171921] Re: vsphere + vlanmanager still creates per-host port groups, not distributed port groups

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.


** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171921

Title:
  vsphere + vlanmanager still creates per-host port groups, not
  distributed port groups

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  using havana master.

  when running with the VMwareVCDriver along with the nova vlanmanager,
  the port-group vlans that are created are created per-host, rather
  than having a single distributed port-group that applies across the
  whole cluster.

  While this behavior is unexpected in terms of what the administrator
  expects to see, I believe it is still functionally correct, so this is
  just a wishlist issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1171601] Re: dbapi_use_tpool exposes problems in eventlet

2016-05-17 Thread Markus Zoeller (markus_z)
The "use_tpool" option (formerly "dbapi_use_tpool") isn't anymore part
of Nova with commit [1]. "oslo.db" on the other hand is still using this
[2]. Therefore I'm removing "nova" as affected project but add "oslo.db"
instead.

Unfortunately the eventlet fix is sill open [3].

References:
[1] 
https://github.com/openstack/nova/commit/1b83b2f#diff-83e0a63a67e7c5f5759f002a5fde8da9L51
[2] 
http://git.openstack.org/cgit/openstack/oslo.db/tree/oslo_db/concurrency.py#n29
[3] 
https://bitbucket.org/eventlet/eventlet/pull-requests/29/fix-use-of-semaphore-with-tpool-issue-137/diff

** Project changed: nova => oslo.db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171601

Title:
  dbapi_use_tpool exposes problems in eventlet

Status in oslo.db:
  Confirmed

Bug description:
  The dbapi_use_tpool option doesn't work completely because of problems
  in eventlet.  Even though this is technically an eventlet issue, it's
  important for Nova so this bug is to track the issue getting fixed in
  eventlet.

  There is a patch in progress here:

  https://bitbucket.org/eventlet/eventlet/pull-request/29/fix-use-of-
  semaphore-with-tpool-issue-137/diff

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.db/+bug/1171601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1039065] Re: scheduler hints should persist with instance for use in migration, resize, and evacuate

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1039065

Title:
  scheduler hints should persist with instance for use in migration,
  resize, and evacuate

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  If you've used a scheduler hint such as picking resources that are in
  close proximity to a particular ip address.

  ex:
  'scheduler_hints': {'cidr': '/32',
                      'build_near_host_ip': affinity_ip}

  
  Now you want to migrate to a different host.  At no point during the 
migration will the hints be revisited. The original hints are not even saved! 

  On migration the hints should be applied against the destination. If
  the destination does not pass the filter, an (overridable?) error should
  be raised.
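
  For context, a hedged sketch of how such hints are supplied today
  (python-novaclient; the image/flavor IDs and affinity_ip are placeholders):
  they are only consumed by the scheduler at boot time, and nothing persists
  them for later migrations, which is exactly what this report is about.

    server = nova.servers.create(
        name="db-node",
        image=IMAGE_ID,
        flavor=FLAVOR_ID,
        # consumed once by SimpleCIDRAffinityFilter at create time
        scheduler_hints={"build_near_host_ip": affinity_ip, "cidr": "/24"},
    )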

  I have an initial implementation that can be found at : 
git://gitorious.org/nova/nova.git

 branch:  respect_scheduler_hints_during_migration

  I'll submit the patches via review.openstack.org.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1039065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1083602] Re: Private flavors can't be used

2016-05-17 Thread Markus Zoeller (markus_z)
The last check was done 1 year ago and this report describes multiple
issues at once. There is not enough time to double-check each bug
report. I'm closing this one. If you want to work on that, consider
opening new bug reports, each one with one single and very specific
issue. Otherwise it's not possible to solve them completely.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1083602

Title:
  Private flavors can't be used

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I'm testing nova client 2.9.0.

  I worked as a user with admin privileges.

  I created a private flavor with command:

  $ nova flavor-create myFlavor 10 512 1 1 --is-public false
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
  | ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
  | 10 | myFlavor | 512       | 1    | 0         |      | 1     | 1           | False     | {}          |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

  Then, I associated the just created flavor to the tenant I was using
  with command:

  $ nova flavor-access-add 10 admin
  +-----------+-----------+
  | Flavor_ID | Tenant_ID |
  +-----------+-----------+
  | 10        | admin     |
  +-----------+-----------+

  If I list the flavors, with command nova flavor-list, the new flavor
  is not displayed:

  $ nova flavor-list
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
  | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
  | 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
  | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
  | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
  | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
  | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

  Even in the case I list with this other command, the new flavor is not
  shown:

  $ nova flavor-access-list --flavor=10
  ERROR: Failed to get access list for public flavor type.

  $ nova flavor-access-list --tenant admin
  ERROR: Sorry, query by tenant not supported.

  Please note that I get the same results if I don't associate the
  private flavor to any project.

  According to the specification I found here
  https://blueprints.launchpad.net/nova/+spec/project-specific-flavors,
  It seems to me that private flavor types are not working.
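
  As an editorial side note (not from the original report): by default
  "nova flavor-list" only returns public flavors, so a private flavor not
  showing up there is expected. A hedged sketch of how an admin can list
  private flavors with python-novaclient (assuming an authenticated client
  object named nova):

    # is_public=None asks for both public and private flavors;
    # is_public=False restricts the listing to private ones.
    all_flavors = nova.flavors.list(is_public=None)
    private_only = nova.flavors.list(is_public=False)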

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1083602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1116593] Re: Need mechanism to show instance device -> cinder volume mapping.

2016-05-17 Thread Markus Zoeller (markus_z)
Looks like it could be done purely in the novaclient.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1116593

Title:
  Need mechanism to show instance device -> cinder volume mapping.

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Fix Released

Bug description:
  When working with instances where multiple cinder volumes have been
  attached using device "auto" it would be useful to be able to see
  which instance device maps to which cinder volume by ID and/or display
  name.

  Ideally I would like to be able to see this from within the instance
  but being able to view it via one of the nova commands would be a
  suitable alternative.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1116593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1118066] Re: Nova should confirm quota requests against Keystone

2016-05-17 Thread Markus Zoeller (markus_z)
see comments #26 + #22

This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1118066

Title:
  Nova should confirm quota requests against Keystone

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  os-quota-sets API should check requests for /v2/:tenant/os-quota-sets/
  against Keystone to ensure that :tenant does exist.

  POST requests to a non-existent tenant should fail with a 400 error
  code.

  GET requests to a non-existent tenant may fail with a 400 error code.
  Current behavior is to return 200 with the default quotas. A slightly
  incompatible change would be to return a 302 redirect to /v2/:tenant
  /os-quota-sets/defaults in this case.

  Edit (2014-01-22)

  Original Description
  
  GET /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  returns 200 with the default quotas.

  Moreover
  POST /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  with updated quotas succeeds and that metadata is saved!

  I'm not sure if this is a bug or not. I cannot find any documentation
  on this interface.
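
  For illustration only, a hedged sketch of the kind of check the report asks
  for, using keystoneclient v3 (the helper name and the way the 400 would be
  raised are assumptions, not Nova code):

    from keystoneclient import exceptions as ks_exc

    def assert_tenant_exists(keystone, tenant_id):
        """Reject quota requests for tenants Keystone does not know."""
        try:
            keystone.projects.get(tenant_id)
        except ks_exc.NotFound:
            # the API layer would translate this into an HTTP 400
            raise ValueError("unknown tenant: %s" % tenant_id)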

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1118066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1161411] Re: Neutron SecGroup API should validate group name

2016-05-17 Thread Markus Zoeller (markus_z)
See comment #9 (the reset to "confirmed" was by mistake)

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1161411

Title:
  Neutron SecGroup API should validate group name

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The validation of names and groups comes from the security group API
  driver validate_property() method, and the NativeNovaSecurityGroupAPI
  and NativeQuantumSecurityGroupAPI in Nova have different rules (Nova
  blocks empty names and descriptions and quantum does no validation).

  Normally an empty name is not an issue, since all SG APIs are based on
  the ID of the group.   However the action to add an instance to a
  security group takes a name not an ID - and will not accept an empty
  name.Hence for consistency with Nova behavior the
  QuantumSecurityGroupAPI should perform the same validation as Nova.
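
  For illustration, a hedged sketch of the kind of validate_property()
  behaviour the report asks the Quantum/Neutron driver to share with the
  native Nova driver (an editorial sketch, not actual Nova code):

    def validate_property(value, property_name, max_len=255):
        """Reject empty or overlong security group names/descriptions."""
        value = (value or "").strip()
        if not value:
            raise ValueError("Security group %s cannot be empty" % property_name)
        if len(value) > max_len:
            raise ValueError("Security group %s is too long" % property_name)
        return value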

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1161411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 970693] Re: No error displayed on volume attachment fail (need to get errors that happen after initial response)

2016-05-17 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/970693

Title:
  No error displayed on volume attachment fail (need to get errors that
  happen after initial response)

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When trying to attach a volume over an existing device (i.e. we have an
  instance with a disk volume, and we try to attach another volume on
  /dev/hda):

  http://db.tt/OzTWgDJR

  The operation finishes without the actual error, just the notification
  above for starting the process. In the table below the status changes
  from 'Attaching' back to 'Available', without showing a reason for
  the attachment failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/970693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1037562] Re: Different secgroups can't be applied to different interfaces

2016-05-17 Thread Markus Zoeller (markus_z)
Invalid for Nova according to comment #3. It is also older than 1 year
without any progress.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1037562

Title:
  Different secgroups can't be applied to different interfaces

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  With the coming of quantum it's quite reasonable to start an instance
  attached to multiple network segments with different purposes.  But
  there's only one security group for the whole machine, so I can't
  define different firewalling for the internal and external interfaces
  of a VM, for example.

  secgroups should apply to interfaces, not machines as a whole.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1037562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580008] Re: The unit test test_versions is failing on the gate

2016-05-11 Thread Markus Zoeller (markus_z)
Solved with https://review.openstack.org/#/c/307938/

** Changed in: nova
 Assignee: Ken'ichi Ohmichi (oomichi) => Dan Smith (danms)

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1580008

Title:
  The unit test test_versions is failing on the gate

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/10/312910/9/check/gate-nova-
  python27-db/ad6e438/testr_results.html.gz

  ft700.2: 
nova.tests.unit.objects.test_objects.TestObjectVersions.test_versions_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "nova/tests/unit/objects/test_objects.py", line 1224, in test_versions
  'Some objects have changed; please make sure the '
File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 493, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {'BuildRequest': '1.0-fea0b079bddc45f3150f16be5515a2a8',
   'HVSpec': '1.2-db672e73304da86139086d003f3977e7',
   'ImageMetaProps': '1.12-6a132dee47931447bf86c03c7006d96c',
   'InstanceExternalEvent': '1.1-6e446ceaae5f475ead255946dd443417',
   'Migration': '1.4-17979b9f2ae7f28d97043a220b2a8350',
   'VirtCPUFeature': '1.0-3310718d8c72309259a6e39bdefe83ee',
   'VirtCPUModel': '1.0-6a5cc9f322729fc70ddc6733bacd57d3'}
  actual= {'BuildRequest': '1.0-efac033d3b771f9e74f9d3256e54c628',
   'HVSpec': '1.2-e6cf4455367f301baa926e3972978d55',
   'ImageMetaProps': '1.12-98afab51edf6c51614885a45afde017b',
   'InstanceExternalEvent': '1.1-650fd97a215616fb4c73645a96ba',
   'Migration': '1.4-fa7b43248cb56d7d3a2d6765b64c6ea1',
   'VirtCPUFeature': '1.0-ea2464bdd09084bd388e5f61d5d4fc86',
   'VirtCPUModel': '1.0-5e1864af9227f698326203d7249796b5'}
  : Some objects have changed; please make sure the versions have been bumped, 
and then update their hashes here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1580008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577821] Re: User cannot set arbitary IDs for images

2016-05-10 Thread Markus Zoeller (markus_z)
@Asad:

It seems that this is a feature request. Feature requests for nova are
done with blueprints [1] and with specs [2]. I'll recommend to read [3]
if not yet done. To focus here on bugs which are failures/errors/faults,
I close this one as "Opinion". The effort to implement the requested
feature is then driven only by the blueprint (and spec).

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1577821

Title:
  User cannot set arbitary IDs for images

Status in Glance:
  New
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The glance command line enables the user to supply an ID for the image
  while creating it using the `glance image-create` command.

  However, the backend imposes that the ID of an image be a UUID.

  There is no point in letting the user supply an ID as a parameter if the
  ID needs to be a UUID.
  The user should be able to set a custom ID for images as needed.

  Also, the regular expression for the image ID is hard coded in the
  backend. It will be nice if it is configurable in `schema-image.json`
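
  For illustration (an editorial note, not from the report): as long as the
  backend enforces UUIDs, a "custom" image ID still has to be a valid,
  pre-generated UUID. A minimal stdlib sketch:

    import uuid

    custom_id = str(uuid.uuid4())   # acceptable as a user-supplied image ID
    # glance image-create --id <custom_id> --name my-image ...

    uuid.UUID("not-a-uuid")         # raises ValueError -> rejected by the API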

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1577821/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576464] Re: MQ timeout when lunching an instance

2016-05-10 Thread Markus Zoeller (markus_z)
Closing it as invalid because of comment #3

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576464

Title:
  MQ timeout when lunching an instance

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I try to create an instance, I got this error.

  
  2016-04-28 20:52:43.060 3559 INFO nova.osapi_compute.wsgi.server 
[req-3e221cd7-0e1b-4d43-ba8e-f735fdeb04c2 5ff80e6795b44fb989a21267ff11b419 
f0d1841f00c64ffea8216ec1b1052aa5 - - -] 192.168.40.201 "GET 
/v2.1/f0d1841f00c64ffea8216ec1b1052aa5/flavors/1 HTTP/1.1" status: 200 len: 680 
time: 0.0173349
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
[req-90eaf672-3119-4e5f-aca1-5e80995f057f 5ff80e6795b44fb989a21267ff11b419 
f0d1841f00c64ffea8216ec1b1052aa5 - - -] Unexpected exception in API method
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
630, in create
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 154, in inner
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1556, in create
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1139, in 
_create_instance
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
reservation_id, max_count)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 834, in 
_validate_and_build_base_options
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
requested_networks, max_count)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 448, in 
_check_requested_networks
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
max_count)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 49, in wrapped
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
func(self, context, *args, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/api.py", line 399, in 
validate_networks
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions 
requested_networks)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/rpcapi.py", line 212, in 
validate_networks
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
self.client.call(ctxt, 'validate_networks', networks=networks)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 413, in 
call
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions return 
self.prepare().call(ctxt, method, **kwargs)
  2016-04-28 20:53:43.351 3559 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in 
call
  2016-04-28 20:53:43.351 3

[Yahoo-eng-team] [Bug 1580000] Re: cpu model Haswell doesn't work on openstack mitaka

2016-05-10 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/158

Title:
  cpu model Haswell doesn't work on openstack mitaka

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===

  I'm running devstack Mitaka with networking-ovs-dpdk. I set the cpu model to 
Haswell in the flavor, but the guest is created with the default qemu cpu model:
  model name  : QEMU Virtual CPU version 2.3.0

  It worked when I was running on Kilo.

  Steps to reproduce
  ==

  Simply run devstack mitaka, then:
  nova flavor-create CIRROS-PERF auto 1024 80 6
  openstack flavor set CIRROS-PERF --property hw:cpu_model=Haswell
  nova --os-tenant-name admin boot --poll --image cirros-0.3.4-x86_64-uec 
--flavor CIRROS-PERF --nic net-id=d34b5e4c-b39b-4787-8e8d-ebd9464932c5 --nic 
net-id=7556233d-a09d-421a-bace-59500971539a demo-vm-cirros-1

  More Info
  =

  stack@dl-360-115:/opt/stack/nova$ nova --version
  3.3.1

  stack@dl-360-115:~/devstack-mitaka$ virsh --version
  1.2.16

  stack@dl-360-115:~/devstack-mitaka$ sudo kvm --version
  QEMU emulator version 2.3.0 (Debian 1:2.3+dfsg-5ubuntu9.2), Copyright (c) 
2003-2008 Fabrice Bellard

  stack@dl-360-115:~/devstack-mitaka$ neutron --version
  4.1.1

  flavor:
  stack@dl-360-115:~/devstack-mitaka$ nova flavor-show TG-PERF | grep model
  | extra_specs| {"hw:cpu_model": "Haswell"}  |

  stack@dl-360-115:~/nvp/scripts$ nova show demo-vm-cirros-1|grep flav
  | flavor   | TG-PERF 
(ce4dfd95-86d7-4d8d-a0da-f955ac999390)

  stack@dl-360-115:/opt/stack/nova$ ps -ef | grep qemu | grep Haswell
  stack@dl-360-115:/opt/stack/nova$ 

  On Kilo it works:
  root@BASE-CCP-CPN-N0001-NETCLM:~# ps -ef | grep qemu | grep Haswell
  root  2688 1 18 May09 ?03:27:34 /usr/bin/qemu-system-x86_64 
-name instance-01d8 -S -machine pc-i440fx-2.5,accel=kvm,usb=off -cpu 
Haswell-noTSX,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+dca,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme
 ...
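
  For reference (an editorial note, not part of the report): with the libvirt
  driver the guest CPU model is normally taken from nova.conf on the compute
  node rather than from a flavor extra spec. A hedged excerpt, assuming the
  Mitaka option names:

    [libvirt]
    virt_type = kvm
    cpu_mode = custom
    cpu_model = Haswell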

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268676] Re: Nova does not work with domain scoped token

2016-05-10 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in. If you decide to work on this
consider using a blueprint [1] (with a spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268676

Title:
  Nova does not work with domain scoped token

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova commands validate if a project id is present in context. Since
  Keystone V3, a domain scoped token can be generated, which do not add
  a project id to the context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373525] Re: Pass instance's name to neutron

2016-05-04 Thread Markus Zoeller (markus_z)
It looks like the initial request can be solved in another way.
Closing it as "Opinion".

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373525

Title:
  Pass instance's name to neutron

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova, while creating or updating a port for an instance, does not pass
  a name in the port's request body.

  Though the device-id is set to the VM's UUID, it would help to pass the name
along with it, for the ease of correlating the port with the instance.
  The instance's name would become the port's name, making it easier to correlate
them.
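
  As an illustration of that "other way" (a hedged sketch, assuming an
  authenticated python-neutronclient object named neutron and a nova server
  object named server): the correlation can be established from outside Nova
  by naming the ports after the instance once it is up.

    # Find the ports Neutron created for this instance and give them
    # the instance's name, so port listings are easy to correlate.
    for port in neutron.list_ports(device_id=server.id)["ports"]:
        neutron.update_port(port["id"], {"port": {"name": server.name}})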

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568820] Re: Duplicate sections in generated config

2016-05-04 Thread Markus Zoeller (markus_z)
As said in comment #12, the "wrong" usage of oslo.config gets fixed in
Nova with the mentioned changes. As oslo.config released a fix for this,
I don't see the need to provide a temporary fix in Nova, that's why I
change the status to "won't fix".

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
 Assignee: Tin Lam (tl3438) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568820

Title:
  Duplicate sections in generated config

Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo.config:
  Fix Released

Bug description:
  nova.conf as generated by oslo.config (for nova
  f05bbb1279598fa82ff5694e35d9de96315b1916) contains duplicate neutron and
  xenserver sections:

  # PYTHONPATH=. oslo-config-generator --config-file=etc/nova/nova-config-generator.conf ; grep "^\[" etc/nova/nova.conf.sample | sort | uniq -c | grep " 2 "
2 [neutron]
2 [xenserver]

  This appears to be because these groups are being registered with both
  oslo_config.cfg.OptGroup and str in places

  e.g. 
  http://git.openstack.org/cgit/openstack/nova/tree/nova/api/metadata/handler.py
  CONF.register_opts(metadata_proxy_opts, 'neutron')

  and
  http://git.openstack.org/cgit/openstack/nova/tree/nova/conf/neutron.py
  neutron_group = cfg.OptGroup('neutron', title='Neutron Options')
  conf.register_group(neutron_group)
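
  For illustration (an editorial sketch, not Nova code): registering the group
  once as an OptGroup object and always passing that object, never the bare
  string, avoids the mismatch described above.

    from oslo_config import cfg

    neutron_group = cfg.OptGroup('neutron', title='Neutron Options')

    def register_opts(conf, opts):
        # register the group object once and reuse it for every option set
        conf.register_group(neutron_group)
        conf.register_opts(opts, group=neutron_group)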

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577370] Re: Duplicate lines in /etc/nova/policy.json

2016-05-04 Thread Markus Zoeller (markus_z)
I added "oslo.policy" because of this part of the report description:

> I don't know if it can impact the policy, but it may be better
> to raise an error when a rule is declared more than once.

@oslo folks: Would be good to know if this is a valid request.

** Also affects: oslo.policy
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1577370

Title:
  Duplicate lines in /etc/nova/policy.json

Status in OpenStack Compute (nova):
  Triaged
Status in oslo.policy:
  New

Bug description:
  The default /etc/nova/policy.json released with Liberty contains the
following declarations twice:
  "compute:delete": "",
  "compute:soft_delete": "",
  "compute:force_delete": "",

  I don't know if it can impact the policy, but it may be better to raise
  an error when a rule is declared more than once.
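
  For illustration only (an editorial sketch, not oslo.policy code), duplicate
  rules can be detected while loading the JSON file:

    import json

    def load_policy_strict(path):
        def reject_duplicates(pairs):
            rules = {}
            for key, value in pairs:
                if key in rules:
                    raise ValueError("duplicate policy rule: %s" % key)
                rules[key] = value
            return rules

        with open(path) as f:
            # object_pairs_hook sees every key/value pair, including repeats
            return json.load(f, object_pairs_hook=reject_duplicates)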

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1577370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1554231] Re: Clean up warnings about keystoneclient.adapter.Adapter

2016-05-03 Thread Markus Zoeller (markus_z)
These warnings get triggered by python-cinderclient, which uses
deprecated keystoneclient code. Nova cannot prevent that. The Cinder
folks should look into that.

Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_cinder.py", line 238, in 
test_volume_without_attachment
volume = self.api.get(self.context, '5678')
  File "nova/volume/cinder.py", line 232, in wrapper
res = method(self, ctx, *args, **kwargs)
  File "nova/volume/cinder.py", line 255, in wrapper
res = method(self, ctx, volume_id, *args, **kwargs)
  File "nova/volume/cinder.py", line 301, in get
item = cinderclient(context).volumes.get(volume_id)
  File "nova/volume/cinder.py", line 147, in cinderclient
**service_parameters)
  File 
"/home/mzoeller/git/nova/.tox/py27/lib/python2.7/site-packages/cinderclient/client.py",
 line 634, in Client
return client_class(*args, **kwargs)
  File 
"/home/mzoeller/git/nova/.tox/py27/lib/python2.7/site-packages/cinderclient/v2/client.py",
 line 117, in __init__
**kwargs)
  File 
"/home/mzoeller/git/nova/.tox/py27/lib/python2.7/site-packages/cinderclient/client.py",
 line 551, in _construct_http_client
**kwargs)
  File 
"/home/mzoeller/git/nova/.tox/py27/lib/python2.7/site-packages/positional/__init__.py",
 line 101, in inner
return wrapped(*args, **kwargs)
  File 
"/home/mzoeller/git/nova/.tox/py27/lib/python2.7/site-packages/keystoneclient/adapter.py",
 line 54, in __init__
raise Exception("mzoeller: baem!!")
Exception: mzoeller: baem!!

** Project changed: nova => python-cinderclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554231

Title:
  Clean up warnings about keystoneclient.adapter.Adapter

Status in python-cinderclient:
  Confirmed

Bug description:
  Nova test runs output a bunch of warnings about
  keystoneclient.adapter.Adapter:

  Captured stderr:
  
  
/home/jaypipes/repos/nova/.tox/py27/local/lib/python2.7/site-packages/keystoneclient/adapter.py:57:
 DeprecationWarning: keystoneclient.adapter.Adapter is deprecated as of the 
2.1.0 release in favor of keystoneauth1.adapter.Adapter. It will be removed in 
future releases.
'removed in future releases.', DeprecationWarning)

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-cinderclient/+bug/1554231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576245] Re: block live-migration fails if source/dest have different instance dir

2016-05-03 Thread Markus Zoeller (markus_z)
According to ML post [1] this bug report is invalid.

[1] http://lists.openstack.org/pipermail/openstack-
dev/2016-May/093744.html

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Eli Qiao (taget-9) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576245

Title:
  block live-migration fails if source/dest have different instance dir

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  block live migration fails if using the libvirt python interface
  migrateToURI3 if source/dest have different instance dirs.

  migrateToURI3 requires an XML on the dest host, but we don't update the
  instance dir.

  error logs:

File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1833, in 
migrateToURI3
  if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
  libvirtError: Unable to pre-create chardev file 
'/opt/stack/data/nova/instances/e73cc732-8b75-4743-8583-64da9bd7fee0/console.log':
 No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1576245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450294] Re: Enable password support for vnc session

2016-04-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in.

** Tags removed: low-hanging-fruit

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450294

Title:
  Enable password support for vnc session

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  qemu supports password-based authentication for client connections by
  adding the password option to -vnc, as below [1]:
  -vnc 0.0.0.0:1,password -k en-us
  The qemu/libvirt XML configuration file provides the VNC password in clear text.

  but OpenStack doesn't support configuring a VNC password; see the
  following code:
  if ((CONF.vnc_enabled and
       virt_type not in ('lxc', 'uml'))):
      graphics = vconfig.LibvirtConfigGuestGraphics()
      graphics.type = "vnc"
      graphics.keymap = CONF.vnc_keymap
      graphics.listen = CONF.vncserver_listen
      guest.add_device(graphics)
      add_video_driver = True

  
  [1], http://www.cyberciti.biz/faq/linux-kvm-vnc-for-guest-machine/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410003] Re: Performance Issue on Nova API about Nova Quota Usage

2016-04-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come in.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410003

Title:
  Performance Issue on Nova API  about Nova Quota Usage

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  We have a requirement to collect quota usage information at project level
with an admin role account. We noticed that the only API we can currently
work with gets them one by one:
  Request: GET /v2/{tenant_id}/limits/?tenant_id={tenant_id}
  Refer to:
  
http://docs.openstack.org/api/openstack-compute/2/content/GET_os-used-limits-for-admins-v2_getCustomerLimits__v2__tenant_id__limits__tenant_id__ext-compute_limits_admins.html

  In a production env we maintain at least 1K projects. So with this
  API, we have to get the summary with 1K HTTP requests (O(n), where n is
  the number of projects).
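
  For illustration (an editorial sketch, not the reporter's prototype; it
  assumes authenticated keystoneclient v3 and python-novaclient objects named
  keystone and nova), this is the O(n) pattern being described:

    # one "used limits" request per project -- O(n) round trips
    usages = {}
    for project in keystone.projects.list():
        limits = nova.limits.get(tenant_id=project.id)
        usages[project.id] = list(limits.absolute)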

  This would cause low performance if we check the quota usages
  frequently. I would hope for an API similar to the instance summary
  (/v2/{tenant_id}/servers/detail?all_tenants=True), which uses an
  "all_tenants" parameter to return the summary list we want.

  That would totally solve the performance issue we met (O(n) -> O(1)). I just
wrote a prototype to get this done and hope it is helpful to describe the
issue. Code link is here:
  https://github.com/henryzzq/nova/compare/stable/icehouse?expand=1 

  Also attached a design doc about this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573875] Re: Nova able to start VM 2 times after failed to live migrate it

2016-04-27 Thread Markus Zoeller (markus_z)
OK, after re-reading the bug description, I think upstream Nova should
take a look to address this issue:

> - If nova able to start instances two times with same rbd block
> device, it's a really big hole in the system I think [...]

I change the description of the bug report to make that clear.

** Changed in: nova
   Status: Invalid => New

** Summary changed:

- Nova able to start VM 2 times after failed to live migrate it
+ The same ceph rbd device is used by multiple instances

** Tags added: ceph libvirt live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573875

Title:
  The same ceph rbd device is used by multiple instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  I've faced a strange problem with nova.
  A few enviromental details:
   - We use Ubuntu 14.04 LTS
   - We use Kilo from Ubuntu cloud archive
   - We use KVM as Hypervisor with the stocked qemu 2.2
   - We got Ceph as shared storage with libvirt-rbd devices
   - OVS neutron based networking, but it's all the same with other solutions I 
think.

  So, the workflow needed to reproduce the bug:
   - Start a Windows guest (Linux distros are not affected, as far as I saw)
   - Live migrate this VM to another host (okay, I know, it doesn't fit 100% into
the cloud conception, but we must use it)

  What happened then is really wrong behavior:
   - The VM starts to migrate (virsh list shows it on the new host)
   - On the source side, virsh list tells me the instance is stopped
   - After a few seconds, the destination host just removes the instance, and the
source changes its state back to running
   - The network becomes unavailable
   - Horizon reports that the instance is in shut off state, and it definitely is
not (the VNC is still available, for example)
   - The user can click the 'Start instance' button, and the instance will be
started at the destination
   - We see these lines in the libvirt log: "qemu-system-x86_64: load
of migration failed: Invalid argument"

  After a few Google searches with this error, I found this site:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1472500
  It's not the exact error, but it tells us a really important fact: those
errors came with qemu 2.2, and it has been fixed in 2.3...

  First of all, I installed 2 CentOS compute nodes, which come with
  qemu 2.3 by default, and the Windows migration started to work as it
  did for Linux guests before.

  Unfortunately, we must use Ubuntu, so we needed to find a workaround,
  which had been done yesterday...

  What I did:
   - Added Mitaka repository (which came out two days before)
   - Run this command (I cannot dist-upgrade openstack now): apt-get install 
qemu-system qemu-system-arm qemu-system-common qemu-system-mips 
qemu-system-misc qemu-system-ppc qemu-system-sparc qemu-system-x86 qemu-utils 
seabios libvirt-bin
   - Left qemu 2.5 installed
   - The migration tests show us that these new packages solve the issue

  What I want/advise to repair this:
   - First of all, it would be nice to be able to install qemu 2.5 from the original Kilo
repository, so that I can upgrade without any 'quick and dirty' method
(adding and removing the Mitaka repo just to install qemu). This is ASAP for us, because if
we don't get this by next weekend, I'll have to choose the quick and dirty way
(but I don't want to rush anybody... just telling :) )

   - If nova is able to start instances twice with the same rbd block
  device, it's a really big hole in the system I think... we just
  corrupted 2 test Windows 7 guests with a few clicks... Some safety
  check should be implemented which collects the instances (and their
  states) from KVM at every VM start, and if the algorithm sees there
  is a guest running with the same name (or some kind of UUID maybe),
  it just doesn't start another copy (see the sketch after this list)...

   - Some kind of check would also be useful which automatically
  compares the VM states in the database with those on the
  hypervisor side at a given interval (this check could be disabled,
  and the checking interval should be configurable, imho)
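
  For illustration of the safety check suggested above (an editorial sketch
  using libvirt-python; the URI and instance name are placeholders):

    import libvirt

    def domain_already_present(uri, instance_name):
        """Return True if a domain with this name is already defined/running."""
        conn = libvirt.open(uri)
        try:
            conn.lookupByName(instance_name)
            return True
        except libvirt.libvirtError:
            return False
        finally:
            conn.close()

    # e.g. refuse to start a second copy:
    # if domain_already_present("qemu+ssh://other-host/system", "instance-01d8"):
    #     raise RuntimeError("instance already running elsewhere")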

  I've not found any clue that those things on the Nova side were already
  fixed in Liberty or Mitaka... am I right, or did something just escape
  my attention?

  If any further information needed, feel free to ask :)

  Regards, 
   Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1573875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1575661] Re: can not deploy a partition image to Ironic node

2016-04-27 Thread Markus Zoeller (markus_z)
** Also affects: ironic
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575661

Title:
  can not deploy a partition image to Ironic node

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Using fresh master of DevStack, I can not deploy partition images to
  Ironic nodes via Nova.

  I have two images in Glance - a kernel image and a partition image with
  the kernel_id property set.

  I have configured Ironic nodes and nova flavor with capabilities:
  "boot_option: local" as described in [0].

  When I try to boot a nova instance with the partition image and the
  configured flavor, the instance goes to error:

  $openstack server list
  +--------------------------------------+--------+--------+----------+
  | ID                                   | Name   | Status | Networks |
  +--------------------------------------+--------+--------+----------+
  | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | parted | ERROR  |          |
  +--------------------------------------+--------+--------+----------+

  The instance is assigned to an Ironic node, but the node is not moved to
  the deploying state

  $openstack baremetal list
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
  | UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
  | 95d3353f-61a6-44ba-8485-2881d1138ce1 | node-0 | None                                 | power off   | available          | False       |
  | 48112a56-8f8b-42fc-b143-742cf4856e78 | node-1 | 6cde85d2-47ad-446b-9a1f-960dbcca5199 | power off   | available          | False       |
  | c66a1035-5edf-434b-9d09-39ecc9069e02 | node-2 | None                                 | power off   | available          | False       |
  +--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+

  In n-cpu.log I see the following errors:

  2016-04-27 15:26:13.190 ERROR ironicclient.common.http 
[req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] Error contacting Ironic 
server: Instance 6cde85d2-47ad-446b-9a1f-960dbcca5199 is already associated with
   a node, it cannot be associated with this other node 
c66a1035-5edf-434b-9d09-39ecc9069e02 (HTTP 409). Attempt 2 of 2
  2016-04-27 15:26:13.190 ERROR nova.compute.manager 
[req-077efca4-1776-443b-bd70-0769c09a0e54 demo demo] [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] Instance failed to spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] Traceback (most recent call last):
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2209, in _build_resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] yield resources
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2055, in _build_and_run_instance
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] block_device_info=block_device_info)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 698, in spawn
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] self._add_driver_fields(node, 
instance, image_meta, flavor)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/driver.py", line 366, in _add_driver_fields
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] retry_on_conflict=False)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/nova/nova/virt/ironic/client_wrapper.py", line 139, in call
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] return self._multi_getattr(client, 
method)(*args, **kwargs)
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199]   File 
"/opt/stack/python-ironicclient/ironicclient/v1/node.py", line 198, in update
  2016-04-27 15:26:13.190 TRACE nova.compute.manager [instance: 
6cde85d2-47ad-446b-9a1f-960dbcca5199] method=http_method)
  2016-04-27 15:26:13.

[Yahoo-eng-team] [Bug 1572013] Re: missing parameter explaination in "Servers" section of v2.1 compute api

2016-04-26 Thread Markus Zoeller (markus_z)
I agree with Matt here, let's just close this bug report and use the
blueprint "api-ref-in-rst" to drive this. The effort is described on the
ML [1] and the wiki [2]. The file the bug reporter is referring to
has a comment at the top which describes the needed cleanup tasks [3].

Long story short, push the patch you planned to do but reference the bp
instead of the bug.

References:
[1] [openstack-dev] [nova] api-ref content verification phase doc push
 http://lists.openstack.org/pipermail/openstack-dev/2016-April/092936.html
[2] https://wiki.openstack.org/wiki/NovaAPIRef#Parameter_Verification
[3] https://github.com/openstack/nova/blob/master/api-ref/source/servers.inc#L3

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572013

Title:
  missing parameter explaination in "Servers" section of v2.1 compute
  api

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  URL:  http://developer.openstack.org/api-ref-compute-v2.1.html
  I think the request parameters listed in "GET /v2.1/{tenant_id}/servers" of
the "Servers" section are not complete. When I want to get all servers of all
tenants, there should be "?all_tenants=true" in the URL, as I read in the
python-novaclient source code, and it actually works after testing; but there is
no specific description of "all_tenants" listed in the "Request parameters"
that follow in the API documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1575285] Re: _BroadcastMessage._send_response raises TypeError

2016-04-26 Thread Markus Zoeller (markus_z)
It looks like this piece of code [1] is 3 years old without having had any
negative effect. These interfaces are all private and contained in one
single module without leaking to the outside world, so I guess it's not
worth making a patch for that.

References:
[1] 
https://git.openstack.org/cgit/openstack/nova/tree/nova/cells/messaging.py?id=f9a868e86ce11f786538547c301b805bd68a1697#n462

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575285

Title:
  _BroadcastMessage._send_response raises TypeError

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  The class _BaseMessage defines a method named _send_json_responses,
  which takes a named parameter neighbor_only. Later on in the same
  class, another method _send_response makes a call to
  _send_json_responses (on line 285), setting neighbor_only explicitly.

  However, a subclass of _BaseMessage, _BroadcastMessage overrides
  _send_json_responses with a definition that does not have
  neighbor_only as a named parameter. Therefore if _send_response is
  ever called on an object of type _BroadcastMessage, a TypeError will
  be raised.

  One option would be to change the definition of
  _BroadcastMessage._send_json_responses to allow neighbor_only to be
  passed even though it is not required.

  def _send_json_responses(self, json_responses, neighbor_only=None):
      """Responses to broadcast messages always need to go to the
      neighbor cell from which we received this message.  That
      cell aggregates the responses and makes sure to forward them
      to the correct source.
      """
      return super(_BroadcastMessage, self)._send_json_responses(
          json_responses, neighbor_only=True, fanout=True)
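  As a minimal sketch of the failure mode (hypothetical stand-in classes,
  not the actual nova code):

      class Base(object):
          def _send_json_responses(self, responses, neighbor_only=False,
                                   fanout=False):
              return responses

          def _send_response(self, responses):
              # sets neighbor_only explicitly, like _BaseMessage._send_response
              return self._send_json_responses(responses, neighbor_only=True)

      class Broadcast(Base):
          # the override drops the neighbor_only keyword from its signature
          def _send_json_responses(self, responses):
              return super(Broadcast, self)._send_json_responses(
                  responses, neighbor_only=True, fanout=True)

      # raises TypeError: _send_json_responses() got an unexpected
      # keyword argument 'neighbor_only'
      Broadcast()._send_response(['reply'])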

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1575285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1574384] Re: Libvirt console parameter incorrect for AMD64 KVM

2016-04-26 Thread Markus Zoeller (markus_z)
Nova doesn't set the cmdline if it is not defined in the image
properties. Your installed libvirt version then uses a default for that
platform, so it sounds like an issue in libvirt itself. You can double-
check in the nova-compute logs: there should be a generated
"libvirt.xml" *without* any cmdline. I'm closing this bug report.

-besides of that-

Kilo is only supported for security fixes and this issue doesn't sound
like one.


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1574384

Title:
  Libvirt console parameter incorrect for AMD64 KVM

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I intentionally make the title of this bug similar to #1259323. Since then, 
the default cmdline for libvirt was probably changed to
  root=/dev/vda console=tty0 console=ttyS0 console=ttyAMA0
  I can't find when the change could have occurred. It is weird that I'm getting 
it on a fairly standard OpenStack Kilo installation on Ubuntu 14.04, AMD64 
platform.
  The default cmdline is applied because I'm launching instances in the AMI 
format.
  The consequence is that I do not see the output of init scripts in the 
console log. Everything between the last kernel message and the login prompt is 
missing. Because of that, I do not see, e.g., the generated root password of 
the image. The graphical console of course works.
  The kernel documentation explains it - the LAST console= statement is where 
/dev/console is redirected. Kernel messages go to all of them. The login prompt 
is then generated by a getty configured in /etc/inittab.
  It can be fixed using image properties on image upload to glance, such as:
  glance image-create ... --prop os_command_line="root=/dev/vda console=tty0 
console=ttyS0"
  There are also properties for setting kernel and ramdisk on the command line, 
so it's not a big problem to add one more definition, but it took me a few 
hours to figure it out...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1574384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1573875] Re: Nova able to start VM 2 times after failed to live migrate it

2016-04-26 Thread Markus Zoeller (markus_z)
> - First of all, it would be nice to install qemu 2.5 with the 
> original kilo repository, [...]

The upstream Nova project is not responsible for installing system-level
packages. You might want to move this bug to the ubuntu-cloud-archive
project.

> - If nova able to start instances two times with same rbd block 
> device, it's a really big hole in the system I think [...]

The upstream Kilo release only has support for security issues [1] and
this doesn't sound like one. Please check if this still happens on the
Mitaka release or the current Newton master code. If this is the case,
please reopen this bug report.

> - Some kind of checking also would usefull, which automatically 
> checks and compare the VM states in the database, and also in 
> hypervisors side in a given interval (this check may can be disabled,
> and checking interval should be able to configured imho)

The config option "sync_power_state_interval" in the "nova.conf" file
should do exactly that.
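For illustration only (the value is just an example; the option lives in
the [DEFAULT] section and takes an interval in seconds, with a negative
value disabling the periodic task):

    [DEFAULT]
    sync_power_state_interval = 600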

References:
[1] http://releases.openstack.org/

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573875

Title:
  Nova able to start VM 2 times after failed to live migrate it

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi,

  I've faced a strange problem with nova.
  A few environmental details:
   - We use Ubuntu 14.04 LTS
   - We use Kilo from the Ubuntu cloud archive
   - We use KVM as hypervisor with the stock qemu 2.2
   - We have Ceph as shared storage with libvirt-rbd devices
   - OVS neutron based networking, but it's all the same with other solutions I 
think.

  So, the workflow needed to reproduce the bug:
   - Start a Windows guest (Linux distros are not affected as far as I saw)
   - Live migrate this VM to another host (okay, I know, it doesn't fit 100% into 
the cloud concept, but we must use it)

  What happened then is really wrong behavior:
   - The VM starts to migrate (virsh list shows it on the new host)
   - On the source side, virsh list tells me the instance is stopped
   - After a few seconds, the destination host just removes the instance, and the 
source changes its state back to running
   - The network becomes unavailable
   - Horizon reports that the instance is in shut off state, and it definitely is 
not (the VNC console is still available, for example)
   - The user can click on the 'Start instance' button, and the instance will be 
started at the destination 
   - We see these lines in the corresponding libvirt log: "qemu-system-x86_64: load 
of migration failed: Invalid argument"

  After a few Google searches with this error, I found this site: 
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1472500
  It's not the exact error, but it tells us a really important fact: those 
errors came with qemu 2.2 and had been fixed in 2.3...

  First of all, I installed 2 CentOS compute nodes, which come with
  qemu 2.3 by default, and the Windows migration started to work as
  Linux guests did before.

  Unfortunately, we must use Ubuntu, so we needed to find a workaround,
  which had been done yesterday...

  What I did:
   - Added Mitaka repository (which came out two days before)
   - Run this command (I cannot dist-upgrade openstack now): apt-get install 
qemu-system qemu-system-arm qemu-system-common qemu-system-mips 
qemu-system-misc qemu-system-ppc qemu-system-sparc qemu-system-x86 qemu-utils 
seabios libvirt-bin
   - Left qemu 2.5 installed
   - The migration tests show us that the new packages solve the issue

  What I want/advise to repair this:
   - First of all, it would be nice to install qemu 2.5 from the original kilo 
repository, so I am able to upgrade without any 'quick and dirty' method 
(adding and removing the Mitaka repo just to install qemu). It is ASAP for us, 
because if we don't get this by next weekend, I will have to choose the quick and 
dirty way (but I don't want to rush anybody... just telling :) )

   - If nova is able to start instances two times with the same rbd block
  device, it's a really big hole in the system I think... we just
  corrupted 2 test Windows 7 guests with a few clicks... Some security
  check should be implemented, which collects the instances (and their
  states) from kvm whenever a VM starts, and if the algorithm sees there
  are guests running with the same name (or some kind of uuid maybe)
  it just does not start another copy...

   - Some kind of check would also be useful, which automatically
  checks and compares the VM states in the database and on the
  hypervisor side at a given interval (this check may be disabled,
  and the checking interval should be configurable imho)

  I haven't found any clue that those things on the nova side were fixed
  earlier in liberty or mitaka... am I right, or did something escape
  my attention?

  If any further information is needed, feel free to ask :)

  Regards, 
   P

[Yahoo-eng-team] [Bug 1575154] Re: Ubuntu 16.04, Unexpected API Error, , Nova

2016-04-26 Thread Markus Zoeller (markus_z)
Closed as requested in comment #2.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1575154

Title:
  Ubuntu 16.04, Unexpected API Error,, Nova

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Creating an instance fails:
  Command: openstack server create --flavor 
42b791ba-3330-4a8c-b8bc-5b8afe94b7ce --image 
6b5bfd61-8c35-44e0-be49-da12a6988cd1 --nic 
net-id=16ed9203-beaf-42f0-bf37-b7bdf68839f1 --security-group default --key-name 
testinstance public-instance --debug

  (I'm following: http://docs.openstack.org/liberty/install-guide-ubuntu
  /launch-instance-public.html, all commands nova flavor-list, nova
  image-list,  work without issues)

  
  boot_args: ['public-instance', , ]
  boot_kwargs: {'files': {}, 'userdata': None, 'availability_zone': None, 
'nics': [{'port-id': '', 'net-id': u'16ed9203-beaf-42f0-bf37-b7bdf68839f1', 
'v4-fixed-ip': '', 'v6-fixed-ip': ''}], 'block_device_mapping': {}, 
'max_count': 1, 'meta': None, 'key_name': 'testinstance', 'min_count': 1, 
'scheduler_hints': {}, 'reservation_id': None, 'security_groups': ['default'], 
'config_drive': None}
  REQ: curl -g -i -X POST 
http://172.24.33.142:8774/v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}fadf0d364f2eb45845658431ef42d15ddf1d4e8a" -d '{"server": {"name": 
"public-instance", "imageRef": "6b5bfd61-8c35-44e0-be49-da12a6988cd1", 
"key_name": "testinstance", "flavorRef": 
"42b791ba-3330-4a8c-b8bc-5b8afe94b7ce", "max_count": 1, "min_count": 1, 
"networks": [{"uuid": "16ed9203-beaf-42f0-bf37-b7bdf68839f1"}], 
"security_groups": [{"name": "default"}]}}'
  "POST /v2.1/172f573ed78a4fc7b3460ea4a3ac6dbb/servers HTTP/1.1" 500 216
  RESP: [500] Content-Length: 216 X-Compute-Request-Id: 
req-9bc78003-48a4-4059-9dad-ed7ee468c931 Vary: X-OpenStack-Nova-API-Version 
Connection: keep-alive X-Openstack-Nova-Api-Version: 2.1 Date: Tue, 26 Apr 2016 
12:01:13 GMT Content-Type: application/json; charset=UTF-8 
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}

  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-9bc78003-48a4-4059-9dad-ed7ee468c931)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", 
line 38, in run
  return super(Command, self).run(parsed_args)
File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 92, in run
  column_names, data = self.take_action(parsed_args)
File 
"/usr/lib/python2.7/dist-packages/openstackclient/compute/v2/server.py", line 
519, in take_action
  server = compute_client.servers.create(*boot_args, **boot_kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 
1233, in create
  **boot_kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/v2/servers.py", line 667, 
in _boot
  return_raw=return_raw, **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 345, in 
_create
  resp, body = self.api.client.post(url, body=body)
File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 179, 
in post
  return self.request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 94, in 
request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-9bc78003-48a4-4059-9dad-ed7ee468c931)
  clean_up CreateServer: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-9bc78003-48a4-4059-9dad-ed7ee468c931)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 118, 
in run
  ret_val = super(OpenStackShell, self).run(argv)
File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 255, in run
  result = self.run_subcommand(remainder)
File "/usr/lib/python2.7/dist-packages/openstackclient/shell.py", line 153, 
in run_subcommand
  ret_value = super(OpenStackShell, self).run_subcommand(argv)
File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 374, in 
run_subcommand
  result = cmd.run(parsed_args)
File "/usr/lib/python2.7/dist-packages/openstackclient/common/command.py", 
line 38, in ru

[Yahoo-eng-team] [Bug 1399815] Re: targeted migrations/evacuations skip scheduler validation

2016-04-18 Thread Markus Zoeller (markus_z)
This is an old feature request (= "wishlist" bug report) which 
wasn't able to draw enough attention for an implementation. Also, 
the effort to implement this needs to be driven by a blueprint + spec 
file. I'm closing it as "Opinion".

If you want to work on this, consider reading [1] and create a blueprint
at [2] and a spec-file at [3]. If you need assistance, reach out on the
IRC channel #openstack-nova or use the mailing list.

References:
[1] https://wiki.openstack.org/wiki/Blueprints
[2] https://blueprints.launchpad.net/nova/
[3] https://github.com/openstack/nova-specs

** Changed in: nova
   Status: Confirmed => Opinion

** Changed in: nova
 Assignee: Jennifer Mulsow (jmulsow) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1399815

Title:
  targeted migrations/evacuations skip scheduler validation

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  This was observed in the Juno release.

  Because targeted live and cold migrations do not go through the
  scheduler for policy-based decision making, a VM could be migrated to
  a host that would violate the policy of the server-group.

  If a VM belongs to a server group, the group policy will need to be checked 
in the compute manager at the time of migration to ensure that:
  1. VMs in a server group with affinity rule can't be migrated.
  2. VMs in a server group with anti-affinity rule don't move to a host that 
would violate the rule.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1399815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380493] Re: Root device size is not consistent with flavor

2016-04-18 Thread Markus Zoeller (markus_z)
The review [1] has multiple comments which doubt that this is a
genuine bug (though it seems to be an unexpected behavior). I also 
didn't find a ML discussion about possible action items which allow
progress, that's why I'm closing this report.

If you want to work on this, consider discussing the possible action
items on the ML first.

References:
[1] https://review.openstack.org/#/c/128497

** Changed in: nova
   Status: Confirmed => Opinion

** Changed in: nova
 Assignee: Luo Gangyi (luogangyi) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380493

Title:
  Root device size is not consistent with flavor

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  If we choose "boot from image" and the image is a file, we will get an 
instance with a local disk. The size of the local disk equals the flavor.
  If we choose "boot from image(create new volume)", we will get an instance 
with a volume. The size of the volume equals the size we inputed in Dashboard.

  However, if  we choose "boot from image" and the image is a snapshot,
  we will get an instance with a volume. And the size of the volume
  equals the size of snapshot instead of the size of flavor.

  I am not sure whether it is a bug or it is designed  intentionally.
  But I believe making the size of volume being consistent with flavor
  is better in that situation.

  How to reproduce:
  1. Use "boot from image (create new volume)" to create a new volume-backed 
instance.
  2. Take a snapshot of this volume-backed instance. This operation will add 
a snapshot-based image in Glance.
  3. Use "boot from image" and choose the snapshot-based image which was 
created before. We will get a volume-backed instance.
   The size of the root device (a volume) equals the size of the snapshot 
instead of the size of the flavor.

  
  This problem exists in both Icehouse and Juno. I uploaded two patches to fix 
this problem.
  However, as Andrew Laski (alaski) commented, "the size of the volume has 
no relation to the disk size of the flavor being used". It is confusing. In my 
solution, I simply use the size defined in the flavor as the volume size when the 
user doesn't give the size of the volume explicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1380493/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446741] Re: global value for shutdown_timeout

2016-04-18 Thread Markus Zoeller (markus_z)
As mentioned in review [1] this will be driven by jichenjc with a 
blueprint. No need for this bug report anymore.

References:
[1] https://review.openstack.org/#/c/177217/

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446741

Title:
  global value for shutdown_timeout

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  There's a shutdown_timeout defined in nova.conf, which implies that all
  virtual machines in the cloud should manage to turn off in time, no matter
  what OS they have installed. Not logical from my point of view.

  You have the `image_os_shutdown_timeout` property, which is used instead of
  shutdown_timeout, but it is stored in system_metadata and cannot be
  overwritten.
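  For illustration, the two knobs being discussed (values are hypothetical,
  and the glance-level property name is assumed to be os_shutdown_timeout,
  which nova stores as image_os_shutdown_timeout in system_metadata):

      # nova.conf: cloud-wide default graceful shutdown timeout in seconds
      [DEFAULT]
      shutdown_timeout = 60

      # per-image override, set as an image property (uuid is a placeholder)
      glance image-update --property os_shutdown_timeout=120 <image-uuid>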

  As a result the user only has ACPI shutdown and no access to libvirt's
  destroy method.

  This is what pisses me off the most in AWS: you are unable to forcefully
  turn off a VPS when it hangs!

  I use Juno and I think this is a bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272489] Re: floating ip bulk list doesn't return ids

2016-04-18 Thread Markus Zoeller (markus_z)
This is an old feature request (= "wishlist" bug report) which 
wasn't able to draw enough attention for an implementation. Also, 
the effort to implement this needs to be driven by a blueprint + spec 
file. I'm closing it as "Opinion".

If you want to work on this, consider reading [1] and create a blueprint
at [2] and a spec-file at [3]. If you need assistance, reach out on the
IRC channel #openstack-nova or use the mailing list.

References:
[1] https://wiki.openstack.org/wiki/Blueprints
[2] https://blueprints.launchpad.net/nova/
[3] https://github.com/openstack/nova-specs

** Changed in: nova
   Status: Confirmed => Opinion

** Changed in: nova
 Assignee: Xiao Li Xu (xiao-li-xu) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272489

Title:
  floating ip bulk list doesn't return ids

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  nova floating-ip-bulk-list allows an admin to retrieve the list of
  floating ips, but they can't perform a delete on individual ips
  because the delete is keyed off of id and there is no way to determine
  the id of the ip address through the api

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 917644] Re: when deleting a network, the IPs belonging to the subnet should also be deleted

2016-04-18 Thread Markus Zoeller (markus_z)
This is pretty old and more a "wishlist" item than a faulty behavior. If
you want to work on that, consider proposing a trivial blueprint for
that (maybe with a spec). Just drop by during a Nova meeting, see "open
discussion" at https://wiki.openstack.org/wiki/Meetings/Nova

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/917644

Title:
  when deleting a network, the IPs belonging to the subnet should also be
  deleted

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When deleting a network using nova-manage network delete, IPs in the
  fixed_ips table should also be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/917644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566025] Re: Unable to delete security groups; security_group table 'deleted' field needs migration

2016-04-18 Thread Markus Zoeller (markus_z)
According to [1], the "deleted" column has been an Integer since at least
the Havana release. It looks like you missed running the database
migrations for each release. Please be aware that release upgrades
must be done one after the other; skipping a release is not supported.
Based on this, this bug report is "invalid" IMO.
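For reference, the per-release schema migrations are applied during each
upgrade step with (illustrative; run once per release, in order):

    nova-manage db sync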

If you have questions about this, please contact me on IRC in the channel
#openstack-nova; my name there is "markus_z".

References:
[1] 
https://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/migrate_repo/versions/160_havana.py?id=36388f0510140ab4e2ffd63d7feff901a9cc4e58#n788

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: Anseela M M (anseela-m00) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566025

Title:
  Unable to delete security groups; security_group table 'deleted' field
  needs migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  My long-standing Nova installation has the following columns in the
  security_groups table:

   +-+--+--+-+-++
   | Field   | Type | Null | Key | Default | Extra  |
   +-+--+--+-+-++
   | created_at  | datetime | YES  | | NULL||
   | updated_at  | datetime | YES  | | NULL||
   | deleted_at  | datetime | YES  | | NULL||
   | deleted | tinyint(1)   | YES  | MUL | NULL||
   | id  | int(11)  | NO   | PRI | NULL| auto_increment |
   | name| varchar(255) | YES  | | NULL||
   | description | varchar(255) | YES  | | NULL||
   | user_id | varchar(255) | YES  | | NULL||
   | project_id  | varchar(255) | YES  | | NULL||
   +-+--+--+-+-++

  A more recent install looks like this:

   +-+--+--+-+-++
   | Field   | Type | Null | Key | Default | Extra  |
   +-+--+--+-+-++
   | created_at  | datetime | YES  | | NULL||
   | updated_at  | datetime | YES  | | NULL||
   | deleted_at  | datetime | YES  | | NULL||
   | id  | int(11)  | NO   | PRI | NULL| auto_increment |
   | name| varchar(255) | YES  | | NULL||
   | description | varchar(255) | YES  | | NULL||
   | user_id | varchar(255) | YES  | | NULL||
   | project_id  | varchar(255) | YES  | MUL | NULL||
   | deleted | int(11)  | YES  | | NULL||
   +-+--+--+-+-++

  Note that the 'deleted' field has changed types.  It now stores a
  group ID upon deletion.  But, the old table can't store that group ID
  because of the tinyint data type.  This means that security groups
  cannot be deleted.

  I haven't yet located the source of this regression, but presumably it
  happened when the table definition was changed to use
  models.SoftDeleteMixin, and the accompanying migration change was
  overlooked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1566025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570270] [NEW] nova.sample.conf: The xenserver docs have a wrong indentation

2016-04-14 Thread Markus Zoeller (markus_z)
Public bug reported:

Version: Nova Newton master b335318 Jenkins 2016-04-13

Steps to reproduce:
* checkout nova code
* in base folder execute: "tox -e genconfig"
* check section "[xenserver]" in file "etc/nova/nova.conf.sample"

There is too much whitespace between the "#" on the left side until
the actual doc starts. The multiline comment at [2] is not properly
formatted. See the other sections for the expected result.

See the rendered result at [1]. Launchpad crops the superfluous
whitespace, so I cannot show an example here.

References:
[1] http://docs.openstack.org/developer/nova/sample_config.html
[2] 
https://github.com/openstack/nova/blob/b335318a6254e0e4752bcf0665579527b628c963/nova/conf/xenserver.py

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: config doc low-hanging-fruit

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

** Tags added: config doc low-hanging-fruit

** Description changed:

  Version: Nova Newton master b335318 Jenkins 2016-04-13
  
  Steps to reproduce:
- * checkout nova code
- * in base folder execute: "tox -e genconfig"
- * check section "[xenserver]" in file "etc/nova/nova.conf.sample" 
+ * checkout nova code
+ * in base folder execute: "tox -e genconfig"
+ * check section "[xenserver]" in file "etc/nova/nova.conf.sample"
  
  There is too much whitespace between the "#" on the left side until
  the actual doc starts. The multiline comment at [2] is not properly
  formatted. See the other sections for the expected result.
  
- Example:
- [xenserver]
- 
- #
- # From nova.conf
- #
- 
- #
- #Number of seconds to wait for agent's reply to a request.
- #
- #Nova configures/performs certain administrative actions on a
- #server with the help of an agent that's installed on the
- # server.
- #The communication between Nova and the agent is achieved via
- #sharing messages, called records, over xenstore, a shared
- #storage across all the domains on a Xenserver host.
- #Operations performed by the agent on behalf of nova are:
- #'version',' key_init',
- # 'password','resetnetwork','inject_file',
- #and 'agentupdate'.
+ See the rendered result at [1]. Launchpad crops the superfluous
+ whitespace, so I cannot show an example here.
  
  References:
  [1] http://docs.openstack.org/developer/nova/sample_config.html
  [2] 
https://github.com/openstack/nova/blob/b335318a6254e0e4752bcf0665579527b628c963/nova/conf/xenserver.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570270

Title:
  nova.sample.conf: The xenserver docs have a wrong indentation

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Version: Nova Newton master b335318 Jenkins 2016-04-13

  Steps to reproduce:
  * checkout nova code
  * in base folder execute: "tox -e genconfig"
  * check section "[xenserver]" in file "etc/nova/nova.conf.sample"

  There is too much whitespace between the "#" on the left side until
  the actual doc starts. The multiline comment at [2] is not properly
  formatted. See the other sections for the expected result.

  See the rendered result at [1]. Launchpad crops the superfluous
  whitespace, so I cannot show an example here.

  References:
  [1] http://docs.openstack.org/developer/nova/sample_config.html
  [2] 
https://github.com/openstack/nova/blob/b335318a6254e0e4752bcf0665579527b628c963/nova/conf/xenserver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550671] Re: The limit of injected file content length is incorrect

2016-04-12 Thread Markus Zoeller (markus_z)
Alex Xu informed me that this report is still valid and got discussed in
the nova-api meeting [1]. That's why I'm setting the status to
"confirmed".

[1]
http://eavesdrop.openstack.org/meetings/nova_api/2016/nova_api.2016-03-08-12.00.log.html

** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550671

Title:
  The limit of injected file content length is incorrect

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  According to 
http://developer.openstack.org/api-ref-compute-v2.1.html#createServer , the 
parameter personality
  has limits on path length and content length. It is pointed out that 
the content should be a base64 encoded
  string and that the content length limit applies to the base64 decoded raw data, 
not the base64 encoded data.

  But in the current implementation:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L266-L277
  we are checking the base64 encoded data.

  That is, if for example the quota limit for content length is 256 and the user 
provides a file with a length of 256, an
  exception.OnsetFileContentLimitExceeded error will be raised, because after base64 
encoding the file length grows.
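  A quick illustration of that length growth in plain Python (independent
  of nova):

      import base64

      raw = b'x' * 256                 # exactly at a 256 byte quota
      encoded = base64.b64encode(raw)

      print(len(raw))      # 256 -> the decoded content fits the quota
      print(len(encoded))  # 344 -> the encoded form exceeds it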

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550671] Re: The limit of injected file content length is incorrect

2016-04-12 Thread Markus Zoeller (markus_z)
Based on Alex Xu's comment in [1] it looks like this bug report is
invalid. If you disagree, reopen the report and add your reasoning.

References:
[1] https://review.openstack.org/#/c/285649/

** Tags added: api

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: Zhenyu Zheng (zhengzhenyu) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1550671

Title:
  The limit of injected file content length is incorrect

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  According to 
http://developer.openstack.org/api-ref-compute-v2.1.html#createServer , the 
parameter personality
  has limits on path length and content length. It is pointed out that 
the content should be a base64 encoded
  string and that the content length limit applies to the base64 decoded raw data, 
not the base64 encoded data.

  But in the current implementation:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L266-L277
  we are checking the base64 encoded data.

  That is, if for example the quota limit for content length is 256 and the user 
provides a file with a length of 256, an
  exception.OnsetFileContentLimitExceeded error will be raised, because after base64 
encoding the file length grows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1550671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531988] Re: py34 tests that use the stubbed out fake image service race fail a lot

2016-04-05 Thread Markus Zoeller (markus_z)
This doesn't have any hits in logstash anymore. I'm closing this bug report
and removing the bug signature in elastic-recheck.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531988

Title:
  py34 tests that use the stubbed out fake image service race fail a lot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Failures like this:

  http://logs.openstack.org/26/224726/16/gate/gate-nova-
  python34/1dda5ee/console.html#_2016-01-07_18_17_49_116

  2016-01-07 18:17:49.115 | 
nova.tests.unit.virt.vmwareapi.test_configdrive.ConfigDriveTestCase.test_create_vm_without_config_drive
  2016-01-07 18:17:49.115 | 
---
  2016-01-07 18:17:49.115 | 
  2016-01-07 18:17:49.115 | Captured traceback:
  2016-01-07 18:17:49.115 | ~~~
  2016-01-07 18:17:49.115 | b'Traceback (most recent call last):'
  2016-01-07 18:17:49.115 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/.tox/py34/lib/python3.4/site-packages/mock/mock.py",
 line 1305, in patched'
  2016-01-07 18:17:49.115 | b'return func(*args, **keywargs)'
  2016-01-07 18:17:49.115 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/nova/tests/unit/virt/vmwareapi/test_configdrive.py",
 line 89, in setUp'
  2016-01-07 18:17:49.116 | b'metadata = image_service.show(context, 
image_id)'
  2016-01-07 18:17:49.116 | b'  File 
"/home/jenkins/workspace/gate-nova-python34/nova/tests/unit/image/fake.py", 
line 184, in show'
  2016-01-07 18:17:49.116 | b'raise 
exception.ImageNotFound(image_id=image_id)'
  2016-01-07 18:17:49.116 | b'nova.exception.ImageNotFound: Image 
70a599e0-31e7-49b7-b260-868f441e862b could not be found.'
  2016-01-07 18:17:49.116 | b''
  2016-01-07 18:17:49.116 | 
  2016-01-07 18:17:49.116 | Captured pythonlogging:
  2016-01-07 18:17:49.116 | ~~~
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,578 INFO 
[oslo_vmware.api] Successfully established new session; session ID is 3818e.'
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,579 INFO 
[nova.virt.vmwareapi.driver] VMware vCenter version: 5.1.0'
  2016-01-07 18:17:49.116 | b'2016-01-07 18:02:26,587 WARNING 
[nova.tests.unit.image.fake] Unable to find image id 
70a599e0-31e7-49b7-b260-868f441e862b.  Have images: {}'
  2016-01-07 18:17:49.116 | b''

  Have been showing up a lot recently in the py34 job for nova:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22b'%20%20%20%20raise%20exception.ImageNotFound(image_id%3Dimage_id)'%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
  %22gate-nova-python34%5C%22

  A lot of them were in the vmwareapi driver tests which dims
  blacklisted for py34 yesterday:

  https://review.openstack.org/#/c/264368/

  But we're still hitting them.

  I have a change up to stop using the stubs.Set (mox) calls with the
  fake image service stub out code here:

  https://review.openstack.org/#/c/264393/

  This bug is for tracking those failures in elastic-recheck so we can
  get them off the uncategorized bugs page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1016633] Re: Bad performance problem with nova.virt.firewall

2016-04-05 Thread Markus Zoeller (markus_z)
@Hans Lindgren: Thanks for the feedback. This bug report is really old
and I doubt that the current "medium" importance is still valid. I'm
closing this as Fix Released, thanks for your patch.

@David Kranz (+ other stakeholders): Please double-check if the 
issue is fixed from your perspective. Use the current master (Newton) 
code for that. If it is not fixed, please reopen and provide some
information how you tested this.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1016633

Title:
  Bad performance problem with nova.virt.firewall

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I was trying to figure out why creating 1,2,4 servers in parallel on an 
8-core machine did not show any speedup. I found a
  problem shown in this log snippet with 4 servers. The pair of calls producing 
the debug messages are separated only by
  a single call. 

  def prepare_instance_filter(self, instance, network_info):
  # make sure this is legacy nw_info
  network_info = self._handle_network_info_model(network_info)

  self.instances[instance['id']] = instance
  self.network_infos[instance['id']] = network_info
  self.add_filters_for_instance(instance)
  LOG.debug(_('Filters added to instance'), instance=instance)
  self.refresh_provider_fw_rules()
  LOG.debug(_('Provider Firewall Rules refreshed'), instance=instance)
  self.iptables.apply()

  Note the interleaving of the last two calls in this log snippet and
  how long they take:

  
  Jun 22 10:52:09 xg06eth0 2012-06-22 10:52:09 DEBUG nova.virt.firewall 
[req-14689766-cc17-4d8d-85bb-c4c19a2fc88d demo demo] [instance: 
4c5a43af-04fd-4aa0-818e-8e0c5384b279] Filters added to instance from 
(pid=15704) prepare_instance_filter /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:10 xg06eth0 2012-06-22 10:52:10 DEBUG
  nova.virt.firewall [req-14689766-cc17-4d8d-85bb-c4c19a2fc88d demo
  demo] [instance: 4c5a43af-04fd-4aa0-818e-8e0c5384b279] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:18 xg06eth0 2012-06-22 10:52:18 DEBUG
  nova.virt.firewall [req-c9ed42e0-1eed-418a-ba37-132bcc26735c demo
  demo] [instance: df15e7d6-657e-4fd7-a4eb-6aab1bd63d5b] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:19 xg06eth0 2012-06-22 10:52:19 DEBUG
  nova.virt.firewall [req-c9ed42e0-1eed-418a-ba37-132bcc26735c demo
  demo] [instance: df15e7d6-657e-4fd7-a4eb-6aab1bd63d5b] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:19 xg06eth0 2012-06-22 10:52:19 DEBUG
  nova.virt.firewall [req-2daf4cb8-73c5-487a-9bf6-bea08125b461 demo
  demo] [instance: 765212a6-cc23-4d5a-b252-5fa6b5f8331e] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:25 xg06eth0 2012-06-22 10:52:25 DEBUG
  nova.virt.firewall [req-5618e93e-3af1-4c65-b826-9d38850a215d demo
  demo] [instance: fa6423ac-82b8-419b-a077-f2d44d081771] Filters added
  to instance from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:151

  Jun 22 10:52:38 xg06eth0 2012-06-22 10:52:38 DEBUG
  nova.virt.firewall [req-2daf4cb8-73c5-487a-9bf6-bea08125b461 demo
  demo] [instance: 765212a6-cc23-4d5a-b252-5fa6b5f8331e] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

  Jun 22 10:52:52 xg06eth0 2012-06-22 10:52:52 DEBUG
  nova.virt.firewall [req-5618e93e-3af1-4c65-b826-9d38850a215d demo
  demo] [instance: fa6423ac-82b8-419b-a077-f2d44d081771] Provider
  Firewall Rules refreshed from (pid=15704) prepare_instance_filter
  /opt/stack/nova/nova/virt/firewall.py:153

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1016633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566159] Re: ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

2016-04-05 Thread Markus Zoeller (markus_z)
*** This bug is a duplicate of bug 1527925 ***
https://bugs.launchpad.net/bugs/1527925

This looks like a (popular) configuration issue:
* https://bugs.launchpad.net/nova/+bug/1514480
* https://bugs.launchpad.net/nova/+bug/1523889
* https://bugs.launchpad.net/nova/+bug/1523224
* https://bugs.launchpad.net/nova/+bug/1525819

Please double-check the passwords in the glance config files and
the nova config file.

** This bug has been marked a duplicate of bug 1527925
   glanceclient.exc.HTTPInternalServerError when running nova image-list

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1566159

Title:
  ERROR nova.api.openstack.extensions HTTPInternalServerError: 500

Status in OpenStack Compute (nova):
  New

Bug description:
  2016-04-05 10:54:42.673 989 INFO oslo_service.service [-] Child 1100 exited 
with status 0
  2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1096 exited 
with status 0
  2016-04-05 10:54:42.676 989 INFO oslo_service.service [-] Child 1076 exited 
with status 0
  2016-04-05 10:54:42.680 989 INFO oslo_service.service [-] Child 1090 exited 
with status 0
  2016-04-05 10:54:42.681 989 INFO oslo_service.service [-] Child 1077 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1091 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1079 exited 
with status 0
  2016-04-05 10:54:42.682 989 INFO oslo_service.service [-] Child 1094 exited 
with status 0
  2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1075 exited 
with status 0
  2016-04-05 10:54:42.683 989 INFO oslo_service.service [-] Child 1071 exited 
with status 0
  2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1097 exited 
with status 0
  2016-04-05 10:54:42.684 989 INFO oslo_service.service [-] Child 1092 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1098 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1099 exited 
with status 0
  2016-04-05 10:54:42.685 989 INFO oslo_service.service [-] Child 1073 exited 
with status 0
  2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1067 exited 
with status 0
  2016-04-05 10:54:42.706 989 INFO oslo_service.service [-] Child 1068 exited 
with status 0
  2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1069 exited 
with status 0
  2016-04-05 10:54:42.707 989 INFO oslo_service.service [-] Child 1072 exited 
with status 0
  2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1066 exited 
with status 0
  2016-04-05 10:54:42.708 989 INFO oslo_service.service [-] Child 1074 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1078 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1080 exited 
with status 0
  2016-04-05 10:54:42.709 989 INFO oslo_service.service [-] Child 1081 exited 
with status 0
  2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1093 killed 
by signal 15
  2016-04-05 10:54:42.710 989 INFO oslo_service.service [-] Child 1095 killed 
by signal 15
  2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1101 exited 
with status 0
  2016-04-05 10:54:42.711 989 INFO oslo_service.service [-] Child 1102 exited 
with status 0
  2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1103 exited 
with status 0
  2016-04-05 10:54:42.712 989 INFO oslo_service.service [-] Child 1104 exited 
with status 0
  2016-04-05 10:54:42.713 989 INFO oslo_service.service [-] Child 1105 exited 
with status 0
  2016-04-05 10:54:46.299 1259 INFO oslo_service.periodic_task [-] Skipping 
periodic task _periodic_update_dns because its interval is negative
  2016-04-05 10:54:46.529 1259 INFO nova.api.openstack [-] Loaded extensions: 
['extensions', 'flavors', 'image-metadata', 'image-size', 'images', 'ips', 
'limits', 'os-access-ips', 'os-admin-actions', 'os-admin-password', 
'os-agents', 'os-aggregates', 'os-assisted-volume-snapshots', 
'os-attach-interfaces', 'os-availability-zone', 'os-baremetal-nodes', 
'os-block-device-mapping', 'os-cells', 'os-certificates', 'os-cloudpipe', 
'os-config-drive', 'os-console-auth-tokens', 'os-console-output', 
'os-consoles', 'os-create-backup', 'os-deferred-delete', 'os-disk-config', 
'os-evacuate', 'os-extended-availability-zone', 
'os-extended-server-attributes', 'os-extended-status', 'os-extended-volumes', 
'os-fixed-ips', 'os-flavor-access', 'os-flavor-extra-specs', 
'os-flavor-manage', 'os-flavor-rxtx', 'os-floating-ip-dns', 
'os-floating-ip-pools', 'os-floating-ips', 'os-floating-ips-bulk', 'os-fping', 
'os-hide-server-addresses', 'os-hosts', 'os-hypervisors', 
'os-instance-actions', 'os-instance-usage-audi
 t-log', 'os-keypairs', 'os-lock-server', 'os-migrate-server', 'os-migrations', 
'os-multinic', 

[Yahoo-eng-team] [Bug 1565824] [NEW] config option generation doesn't work with itertools.chain

2016-04-04 Thread Markus Zoeller (markus_z)
Public bug reported:

Config options code like this doesn't generate output in the
sample.config file:

ALL_OPTS = itertools.chain(
   compute_opts,
   resource_tracker_opts,
   allocation_ratio_opts
   )


def register_opts(conf):
conf.register_opts(ALL_OPTS)


def list_opts():
return {'DEFAULT': ALL_OPTS}

The reason is that the iterator created by "itertools.chain" is
exhausted after being consumed in "register_opts". A simple complete
example:

import itertools

a = [1, 2]
b = [3, 4]

ab = itertools.chain(a, b)

print("printing 'ab' for the first time")
for i in ab:
print(i)

print("printing 'ab' for the second time")
for i in ab:
print(i)

The combined list 'ab' won't get printed a second time. The same thing
happens when the oslo.config generator wants to print the sample.config
file. This means we use either:

ab = list(itertools.chain(a, b))

or

ab = a + b
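Either way the sequence is materialized once and can then be iterated as
often as needed, e.g. continuing the example above (sketch):

    ab = list(itertools.chain(a, b))

    for i in ab:   # prints 1 2 3 4
        print(i)
    for i in ab:   # prints 1 2 3 4 again
        print(i)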

** Affects: nova
 Importance: High
 Assignee: Markus Zoeller (markus_z) (mzoeller)
 Status: Confirmed

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Markus Zoeller (markus_z) (mzoeller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1565824

Title:
  config option generation doesn't work with itertools.chain

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Config options code like this doesn't generate output in the
  sample.config file:

  ALL_OPTS = itertools.chain(
 compute_opts,
 resource_tracker_opts,
 allocation_ratio_opts
 )

  
  def register_opts(conf):
  conf.register_opts(ALL_OPTS)

  
  def list_opts():
  return {'DEFAULT': ALL_OPTS}

  The reason is that the iterator created by "itertools.chain" is
  exhausted after being consumed in "register_opts". A simple complete
  example:

  import itertools

  a = [1, 2]
  b = [3, 4]

  ab = itertools.chain(a, b)

  print("printing 'ab' for the first time")
  for i in ab:
print(i)

  print("printing 'ab' for the second time")
  for i in ab:
print(i)

  The combined list 'ab' won't get printed a second time. The same thing
  happens when the oslo.config generator wants to print the
  sample.config file. This means we use either:

  ab = list(itertools.chain(a, b))

  or

  ab = a + b

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1565824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562842] Re: instance going to scheduler when invalid v4-fixed-ip is provided in nova boot command

2016-04-01 Thread Markus Zoeller (markus_z)
@Abhilash:

root cause
--
The nova conductor log file "n-cond.log" you attached shows the error:

Invalid input received: 
Fixed IP 1.0.0.255 is not a valid ip address for network
97adf977-8b62-4996-a800-bbbdaf9c0fd9.

This gets raised in Liberty code at [1]. It's also there in Mitaka and
current master (Newton). 
As you already mentioned, this happens after the scheduler chose one 
compute host and the spawn of the instance has started.

conclusion
--
The CLI doesn't offer any client side validation. It returns the response
created on the server side. Because the spawn of an instance is a long
running task, the "launch" REST API triggers the creation asynchronously.
This means the CLI only returns that the trigger of the creation got
accepted. It doesn't make a statement about whether this will succeed.
I don't see a benefit in introducing a "pre-validation" before the
scheduler decides where to place an instance. I'm closing the bug report
as "Opinion/wishlist" as it doesn't look like a bug to me. If you think
this is wrong, please reopen it and add your reasoning.

References:
[1] 
https://github.com/openstack/nova/blob/acb2dc5e27a85b9148599f1c4dd59e317752f125/nova/network/neutronv2/api.py#L367-L370

** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Tags added: network neutron scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562842

Title:
  instance going to scheduler when invalid v4-fixed-ip is provided in
  nova boot command

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When providing an invalid v4-fixed-ip in the nova boot command, the instance
  will be queued with the scheduler but fail to boot. Instead, there should be
  a check on the client side or the server side, and an error message should be
  thrown so that the VM does not get queued with the scheduler.

  
  neutron net-list
  
+--+-+--+
  | id   | name| subnets
  |
  
+--+-+--+
  | 97adf977-8b62-4996-a800-bbbdaf9c0fd9 | public  | 
35364205-3b7d-47db-8bf3-e1590416d9f1 2001:db8::/64   |
  |  | | 
ddacd805-5e07-4e63-a827-fd094c61e84a 172.24.4.0/24   |
  | f9976ad3-33fb-44a4-b25f-e8356de9e7d2 | private | 
6427f306-2646-479e-b47b-5a115f020d1c 10.0.0.0/24 |
  |  | | 
4de2d8be-0c1a-4b9c-aabf-3a896a7c67c0 fd07:4910:1fa6::/64 |
  
+--+-+--+

  
  nova boot abi_1 --image a258964a-4250-4b6d-9fe9-492ac2b3d8da --flavor m1.tiny 
--nic net-id=97adf977-8b62-4996-a800-bbbdaf9c0fd9,v6-fixed-ip=1.0.0.255
  (note: 1.0.0.255 is broadcast ip for the network, should be invalid.)

  
+--++
  | Property | Value
  |
  
+--++
  | OS-DCF:diskConfig| MANUAL   
  |
  | OS-EXT-AZ:availability_zone  |  
  |
  | OS-EXT-SRV-ATTR:host | -
  |
  | OS-EXT-SRV-ATTR:hostname | abi-1
  |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -
  |
  | OS-EXT-SRV-ATTR:instance_name| instance-004e
  |
  | OS-EXT-SRV-ATTR:kernel_id| e2a5fdda-7422-4e41-b59a-fa2dad82583e 
  |
  | OS-EXT-SRV-ATTR:launch_index | 0
  |
  | OS-EXT-SRV-ATTR:ramdisk_id   | 4ea6bc99-2c01-459b-9ce1-b6bc6a51ad79 
  |
  | OS-EXT-SRV-ATTR:reservation_id   | r-wt958hij   
  |
  | OS-EXT-SRV-ATTR:root_device_name | -
  |
  | OS-EXT-SRV-ATTR:user_data| -
  |
  | OS-EXT-STS:power_state   | 0
  |
  | OS-EXT-STS:task_state| scheduling  

[Yahoo-eng-team] [Bug 1515768] Re: Instance creation fails with libvirtError: Unable to create tap device: Device or resource busy

2016-03-30 Thread Markus Zoeller (markus_z)
According to the abandoned review [1] (from the last Nova assignee
yong sheng gong) this got fixed with patch [2]. Patch [2] didn't
mention this in its commit message, which is why this bug report looks
like it is still open. I'm closing it manually.

References:
[1] https://review.openstack.org/#/c/252824/
[2] https://review.openstack.org/#/c/252565/

** Changed in: nova
   Status: In Progress => Fix Released

** Tags added: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515768

Title:
  Instance creation fails with libvirtError: Unable to create tap
  device: Device or resource busy

Status in heat:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in tacker:
  Fix Committed

Bug description:
  Summary:
  This issue is observed frequently on Jenkins gate and has been reproducible 
in local setup too.

  Steps:
  Initiate 3 stack create requests at once in a script:

  heat stack-create -f /home/stack/template_file stack1
  heat stack-create -f /home/stack/template_file stack2
  heat stack-create -f /home/stack/template_file stack3

  using the following HOT file:
  http://paste.openstack.org/show/479920/

  
  One of the stack creations fails with CreateFailed: Resource Create Failed: 
Conflict: Resources. vdu3: Port Is Still In Use.

  From the nova logs, there are duplicate bridges created for one of the
  servers. The qemu xml fails with libvirtError: Unable to create tap
  device tapd3a3d9e9-5d: Device or resource busy. See timestamp
  2015-11-25 23:03:14.940 in n-cpu.log

  Attaching the relevant n-cpu.log, q-svc.log and h-eng.log

  Observation:
  The 1st network interface for the nova instance is a Neutron Port resource 
provided in the HOT template.
  Nova sends a PUT request to update the port information. It also sends 2 POST 
requests for the 2nd and 3rd network interfaces.
  Neutron receives the PUT request and sends a network-changed event while 
nova is still waiting for the POST responses for the 2 ports.
  If the network-changed event is received before the 3rd port POST response is 
received, refresh_cache is acquired by the nova service.
  Nova sends a query for port information, updates the cache and releases the 
lock.
  By then, the POST requests are completed, which acquires the cache lock again and 
sends a request for network info. refresh_cache is updated twice and contains a 
duplicate set of ports.
  Network vifs are built for all 6 ports and the qemu xml is built based on that.
  libvirt complains about the duplicate bridges in the xml with "Device or resource 
busy".

  Version and environment:
  Devstack Master

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1515768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496334] Re: Nova-compute launch slowly because lots of instances is init one by one

2016-03-30 Thread Markus Zoeller (markus_z)
Sean Dague's quote from the abandoned review [1]:

> Booting an instance is expensive, and triggers io load, which may
> increase failure rates of these booting. Doing in parallel is not
> really guaranteed to be faster than serial, and in some situations
> will actually be slower.
> 
> So this isn't a simple bug fix. It's something which we really should
> have a spec for. There definitely has to be a max number which 
> is < 200. It should also have some real world boot data to see how
> this plays out in real world situations.

I'm closing it as "Opinion/Wishlist". If you decide to work on this
consider using a blueprint [2] (with a spec [3]). I'll recommend to 
read [4] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://review.openstack.org/#/c/223572/
[2] https://blueprints.launchpad.net/nova/
[3] https://github.com/openstack/nova-specs
[4] https://wiki.openstack.org/wiki/Blueprints
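
If someone picks this up, a minimal sketch of the bounded-concurrency idea
discussed above (illustrative only; the pool size and the init_instance stub
are assumptions, not Nova code):

    import eventlet
    eventlet.monkey_patch()

    def init_instance(instance):
        eventlet.sleep(0.1)              # stand-in for the real per-instance work

    instances = range(200)
    pool = eventlet.GreenPool(size=10)   # "a max number which is < 200"
    for instance in instances:
        pool.spawn_n(init_instance, instance)
    pool.waitall()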

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
 Assignee: Rui Chen (kiwik-chenrui) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496334

Title:
  Nova-compute launch slowly because lots of instances is init one by
  one

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  1. code base

  $ git log -1
  commit b492942744e09276e3ba4dcf0196143c521a1662
  Merge: 920abc9 9706454
  Author: Jenkins 
  Date:   Thu Sep 3 00:05:04 2015 +

  Merge "Fix bodies on consolidate-console-api"

  2. Reproduce steps:

  The issue happens with the VMware driver; think about the following case:
  * 200 active instances run on one nova-compute host that maps to one vCenter 
Cluster; batch-delete all instances, so all of them are in the "deleting" 
task_state.
  * The nova-compute process stops and restarts while all instances are in the 
"deleting" task_state.
  * nova-compute starts to init the 200 deleting instances one by one. The 
workflow of the VMware driver is: power off the instance, wait for the task to 
finish, then delete the instance.
  * After all the deleting instances are handled, nova-compute is set to the 
"up" state and continues to work.

  Step 3 spends lots of time on serial init_instance. In my
  performance test environment, nova-compute spent about 15 minutes
  to finish init_instance.

  With other drivers, like libvirt, nova-compute manages fewer instances
  than with the VMware driver (maybe fewer than 50 instances), so those
  drivers are less likely to face the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 947261] Re: format command line app documentation using sphinx features

2016-03-30 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going 
to move it to "Opinion / Wishlist", which is an easily-obtainable queue 
of older requests that have come on. If you want to work on this,
just push a patch to Gerrit, there's no need to track this with a
bug report.

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/947261

Title:
  format command line app documentation using sphinx features

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Sphinx provides directives and roles for describing command line
  programs. We should use them for nova-manage.

  http://sphinx.pocoo.org/domains.html?highlight=program#directive-
  program

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/947261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253571] Re: libivrt+xen does not support copy on write images

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come on. If you decide to work on this
consider using a blueprint [1] (with a spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253571

Title:
  libivrt+xen does not support copy on write images

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The blktap2 (http://wiki.xen.org/wiki/Blktap2) does not support qcow2
  images. We should check if libvirt+xen is being used, and raise an
  error if nova is configured to use cow images.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1169769] Re: start guests on host reboot

2016-03-29 Thread Markus Zoeller (markus_z)
I think comment #8 is right. Looks like this is implemented with 
commit [1] and should work through [2] as requested in the original
description.

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ba4946d0d3c73e5d9f67f42203d103bf98563458
[2] 
https://github.com/openstack/nova/blob/af8d078d97a4ce7be48fa20572164f0cc79cbd21/nova/compute/manager.py#L1177-L1185
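
For operators, a minimal nova.conf sketch of that behaviour (assuming the
resume_guests_state_on_host_boot option; double-check the name against the
config reference of your release):

    [DEFAULT]
    # Only guests recorded as running in the database are restarted.
    resume_guests_state_on_host_boot = True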

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1169769

Title:
  start guests on host reboot

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Story:

  A host has a power fluctuation and reboots.

  The VM state in the DB is Active.
  The VM state in the driver is Shutoff.

  The sysadmin goes in to xenserver and xe starts all the instances.

  nova.compute.manager.Manager._run_image_cache_manager_pass runs,
    it finds that the DB vs driver states are incompatible,
    it says the driver is kind, and updates the DB

  now all the VMs that sysadmin started are down again.

  -
  Work around:

  After host reboots, shut down the compute service, start the VMs,
  start the compute service

  -
  Background:

  There used to be an option: 'start_guests_on_host_boot'

  The trouble with it was that it started VMs, even if the DB state was
  suspended.

  It was removed here: https://review.openstack.org/#/c/16698/

  --
  Suggested fix:

  Re-add a nova.conf option: 'start_guests_on_host_boot'

  But make it submit to the DB state, and only start a guest if it is
  labeled as running in the DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1169769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1166321] Re: manually add a disabled service to nova-manage

2016-03-29 Thread Markus Zoeller (markus_z)
Looks like this is implemented with commit [1] (a new service gets 
created in "disabled" mode). This disabled service can then be enabled
with "nova service-enable  " [2].

References:
[1] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5a25de893f34cb9b05996406488188b6ed47fca1
[2] http://docs.openstack.org/cli-reference/nova.html#nova-service-enable

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova
 Assignee: AKdebuggers (akdebuggers) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1166321

Title:
  manually add a disabled service to nova-manage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There is no way in the nova-manage commands to register an additional
  service node, e.g. I had to manually add the service through the
  database, which seems like the wrong way to do things. I would
  suggest:

  nova-manage service add --host compute123 --service nova-compute

  I would also suggest that any command that does this automatically
  sets the service to disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1166321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 816406] Re: Service stats needs to be unified across virt layer

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come on. This bug can be reopened (set back
to "New") if someone decides to work on this.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/816406

Title:
  Service stats needs to be unified across virt layer

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  XenAPI, LibVirt and other virt layers all report their stats (disk,
  ram, network, etc) differently. We haven't really solidified what the
  base statistics being reported should be and what units they should be
  in. For things like the distributed scheduler to work across all virt
  layers, this should really be nailed down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/816406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1102705] Re: Add a flag for passing user-specified options to dnsmasq

2016-03-29 Thread Markus Zoeller (markus_z)
Looks like this is implemented with commit [1].

References:
[1] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=52f2479aac7b2fc84c23dba9f337cbfcde6e06e2

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1102705

Title:
  Add a flag for passing user-specified options to dnsmasq

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It would be nice, and relatively future-proof, if passing options to
  dnsmasq could be done via a general config option through nova.conf
  instead of having to patch linux_net.py each time.

  Current defined options can be left alone for backwards-compatibility.
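
  A minimal sketch of what such a flag looks like in use (assuming the
  dnsmasq_config_file option referenced by the fix above; the file paths and
  the MTU example are illustrative):

    # /etc/nova/nova.conf
    [DEFAULT]
    dnsmasq_config_file = /etc/nova/dnsmasq-nova.conf

    # /etc/nova/dnsmasq-nova.conf (plain dnsmasq syntax)
    dhcp-option=26,1454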

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1102705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1155228] Re: Use shorter "serial" strings for disks & apply them for all disks, not just volumes

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come on. If you decide to work on this
consider using a blueprint [1] (with a spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1155228

Title:
  Use shorter "serial" strings for disks & apply them for all disks, not
  just volumes

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When attaching Cinder volumes to an instance, Nova is setting the
  <serial> XML element to contain the UUID of the Cinder volume. The
  idea is that this is exposed to the guest OS, allowing it to set up a
  stable device path in /dev/disk/by-id/ to identify the device.

  The problem of reliably identifying disks passed to an instance really
  applies to all disks, not merely those backed by Cinder volumes. As
  such Nova should apply a <serial> XML element to every disk that is
  configured.

  The current approach of using a UUID doesn't really work for non-
  volume backed disks, since they have no UUIDs. Even for Cinder volumes
  the use of UUIDs is flawed, because they are too long to fit in the
  available space. As a result rather than the UUID, the guest OS is
  seeing a truncated UUID string. UUIDs are pretty user unfriendly too,
  whether truncated or not.

  We ought to come up with a plan for better serial strings that can
  apply to all disks, and can avoid being truncated.
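
  As a minimal illustration of the truncation (the 20-byte limit below is an
  assumption about virtio-blk serials, not a value taken from this report):

    import uuid

    VIRTIO_BLK_SERIAL_LEN = 20               # assumed guest-visible limit
    serial = str(uuid.uuid4())               # 36 characters
    print(serial[:VIRTIO_BLK_SERIAL_LEN])    # what the guest would see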

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1155228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1101839] Re: Don't use the local compute time when syncing

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going
to move it to "Opinion / Wishlist", which is an easily-obtainable queue
of older requests that have come on. If you decide to work on this
consider using a blueprint [1] (with spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1101839

Title:
  Don't use the local compute time when syncing

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Right now there is a strong tendency to rely on NTP for determining if
  services are up or down, especially compute nodes. This has been
  problematic since it is very fragile in its implementation (aka when
  NTP gets slightly out of sync on any compute node then that compute
  node will no longer be useable). It seems simpler to let the database
  decide what is "time" using its own internal functions like NOW() and
  such and not worry about time being in sync on the other nodes...

  Examples of this:

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L502
  (note the time is from the caller, not from the db)... and
  
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L276

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1101839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1057689] Re: Return string instead of an integer for the aggregate id in the aggregates extension

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going to move 
it to "Opinion / Wishlist", which is an easily-obtainable queue of older 
requests that have come on. If you decide to work on this
consider using a blueprint [1] (with spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1057689

Title:
  Return string instead of an integer for the aggregate id in the
  aggregates extension

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  In the next API version the aggregates id should be a string instead
  of an integer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1057689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1078909] Re: FEATURE REQUEST: option to limit size of network creation; batch/transaction creation

2016-03-29 Thread Markus Zoeller (markus_z)
This wishlist bug has been open a year without any activity. I'm going to move 
it to "Opinion / Wishlist", which is an easily-obtainable queue of older 
requests that have come on. If you decide to work on this
consider using a blueprint [1] (with spec [2]). I'll recommend to 
read [3] if not yet done. 

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

References:
[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1078909

Title:
  FEATURE REQUEST: option to limit size of network creation;
  batch/transaction creation

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I'm not sure is this is strictly-speaking the right place to file
  this, but will do so for want of a better alternative for now.

  I recently made use of the RackSpace OpenStack Private Cloud installer/ISO, 
Alamo v.1.0.1
  During the install process, I was prompted for my [private] network block, 
and I provided it as 192.168.0.0/8

  The install process proceeded (overnight), but was getting nowhere.
  Upon closer inspection of the process list (`top` & `ps aux`), I found the 
following to explain the crazy load on the system:
  "/usr/bin/python /usr/bin/nova-manage network create --multi_host=T 
--label=public --fixed_range_v4=192.168.0.0/8 --num_networks=1 
--network_size=16777214 --bridge=br0 --bridge_interface=eth1 --dns1=8.8.8.8 
--dns2=8.8.4.4"
  note: trying to fill up the *entire* network range of 16+ *MILLION* addresses

  Found this issue with the help of a very nice RS tech @ IRC.

  The prevailing consensus is that there should probably be a little bit
  more smarts in this subsystem - either get verification for networks
  larger than 1,000 or 10,000 addresses (anything that'll take more than,
  say, an hour to process; probably not really a possibility for a batch
  process), or batch the DB process into a transaction to run more
  efficiently.
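
  A minimal sketch of such a guard (illustrative Python; the threshold and the
  idea of requiring explicit confirmation are assumptions):

    import ipaddress

    def check_fixed_range(cidr, threshold=10000):
        net = ipaddress.ip_network(cidr, strict=False)
        if net.num_addresses > threshold:
            raise ValueError("%s contains %d addresses; refusing without "
                             "explicit confirmation"
                             % (cidr, net.num_addresses))

    check_fixed_range("192.168.0.0/24")    # fine: 256 addresses
    # check_fixed_range("192.168.0.0/8")   # would raise: ~16.7 million addresses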

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1078909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 972320] Re: allow force terminate of instances

2016-03-29 Thread Markus Zoeller (markus_z)
Looks like this is implemented with commit [1] and available as CLI
command "nova force-delete " [2].

[1] 
https://git.openstack.org/cgit/openstack/nova/commit/?id=71b7298788045d4832dd8ec44cba3785955aa847
[2] http://docs.openstack.org/cli-reference/nova.html#nova-force-delete

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/972320

Title:
  allow force terminate of instances

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a small feature request really.

  It would be nice to have an extra flag or command to allow a "force
  terminate".

  I am thinking of cases like this:
  https://answers.launchpad.net/nova/+question/191046

  The admin could do with an operation to forcably update the DB to say
  the image is no longer on the system. We should not force people to
  edit the DB by hand. Of course, this should never happen, but a common
  cause is miss-configuration of the system.

  It would also be good to find out when a hypervisor appears to be out
  of sync with the database and allow an admin to go and solve that
  issue, rather than having to search around the logs and check the
  actual hypervisor. Perhaps use the notification system to notify there
  is a problem on the hypervisor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/972320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 912684] Re: XenAPI instances: support for randomly-named and uncompressed images

2016-03-29 Thread Markus Zoeller (markus_z)
Based on comment #2 the effort is driven by bp "bare-vhd-image" [1].
I'm going to move it to "Opinion / Wishlist" to avoid tracking
the progress with two different work items (bp <-> bug report).

References:
[1] https://blueprints.launchpad.net/nova/+spec/bare-vhd-image

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/912684

Title:
  XenAPI instances: support for randomly-named and uncompressed images

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  To install a Xen instance, the image name must be exactly
  "image.vhd" and the image must be compressed with tar and gzip. The
  correct behaviour would be to accept any image name and not require
  the image to be compressed.

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/912684/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563324] Re: test test test

2016-03-29 Thread Markus Zoeller (markus_z)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563324

Title:
  test test test

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  just a test, close me

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563324] [NEW] test test test

2016-03-29 Thread Markus Zoeller (markus_z)
Public bug reported:

just a test, close me

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1563324

Title:
  test test test

Status in OpenStack Compute (nova):
  New

Bug description:
  just a test, close me

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1563324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561337] Re: Unable to launch instance

2016-03-24 Thread Markus Zoeller (markus_z)
*** This bug is a duplicate of bug 1534273 ***
https://bugs.launchpad.net/bugs/1534273

@Arun:
It's very likely that this is a configuration issue and it sounds like a 
duplicate of bug 1534273. This log entry in particular points to it:

2016-03-24 10:13:14.307 14413 ERROR 
nova.api.openstack.extensions
BadRequest: Expecting to find username or userId in 
passwordCredentials - the server could not comply with the 
request since it is either malformed or otherwise incorrect. 
The client is assumed to be in error. (HTTP 400) 
(Request-ID: req-3fac70af-c83e-457d-9acb-d8e969f0a05c)

Please double-check if the Keystone authentication settings in
"/etc/nova/nova.conf" are correct [1].

References:
[1] 
http://docs.openstack.org/liberty/install-guide-ubuntu/nova-controller-install.html
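
The section to double-check typically has this shape (a sketch following the
Liberty install guide; the controller host name and the password are
placeholders):

    [keystone_authtoken]
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    auth_plugin = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = nova
    password = NOVA_PASS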

** This bug has been marked a duplicate of bug 1534273
   Keystone configuration options for nova.conf missing from Redhat/CentOS 
install guide

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1561337

Title:
  Unable to launch instance

Status in OpenStack Compute (nova):
  New

Bug description:
  I installed OpenStack Liberty using the official guide for Ubuntu
  14.04. I am unable to launch an instance.

  Here's the log from nova-api.log

  
  2016-03-24 10:12:53.412 14413 INFO nova.osapi_compute.wsgi.server 
[req-ec45686b-ad24-4949-83bb-42b3ed336b94 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-quota-sets/b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 568 time: 0.0969541

  2016-03-24 10:12:57.869 14412 INFO nova.osapi_compute.wsgi.server 
[req-dcc90aa0-618f-4328-ace0-0e50d3a7bb53 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/servers/detail?all_tenants=True&tenant_id=b3338b63521d4fb7a87011108e9b1107
 HTTP/1.1" status: 200 len: 211 time: 3.3184321
  2016-03-24 10:12:59.651 14412 INFO nova.osapi_compute.wsgi.server 
[req-95cb7922-c703-4036-ba13-005dff79741e 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] 192.168.1.213 "GET 
/v2/b3338b63521d4fb7a87011108e9b1107/os-keypairs HTTP/1.1" status: 200 len: 212 
time: 0.0333679
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
[req-2efac7ae-b1ae-475c-bb03-ab7f28b8ac3d 55db47d40b91474399879d1003883561 
b3338b63521d4fb7a87011108e9b1107 - - -] Unexpected exception in API method
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1181, in 
_create_instance
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
  2016-03-24 10:13:14.307 14413 ERROR nova.api.openstack.extensions   File 
"/usr/lib/py

[Yahoo-eng-team] [Bug 1530294] Re: can't resize when use_cow_images is True

2016-03-23 Thread Markus Zoeller (markus_z)
@kaka:
As stated in comments 1-3 we need more information to solve this.
This bug report has had the status "Incomplete" for more than 30 days.
To keep the bug list sane, I close this bug with "Invalid".
If you have more information, please set the bug back to "New" and
use the report template found at [1].

References:
[1] https://wiki.openstack.org/wiki/Nova/BugsTeam/BugReportTemplate

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530294

Title:
  can't resize when use_cow_images is True

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  nova.conf:
  #force_raw_images = true
  use_cow_images = True

  error log:
  ] u'qemu-img resize 
/var/lib/nova/instances/727fd979-d02e-4a9b-8b7c-9488ead6c18b/disk 42949672960' 
failed. Not Retrying. execute 
/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:308
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager 
[req-c7949bf1-d9a1-46bd-8dd6-5bc26c32585c 7f50e4ce47aa4d28b78b3b5937f3a382 
65a1edd1dad24b15a4f27bb0d7dcb4d6 - - -] [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Setting instance vm_state to ERROR
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Traceback (most recent call last):
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3934, in 
finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] disk_info, image)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3900, in 
_finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] old_instance_type)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] six.reraise(self.type_, self.value, 
self.tb)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3895, in 
_finish_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] block_device_info, power_on)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6836, in 
finish_migration
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] self._disk_resize(image, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6815, in 
_disk_resize
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] disk.extend(image, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/disk/api.py", line 190, in extend
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] utils.execute('qemu-img', 'resize', 
image.path, size)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 390, in execute
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] return processutils.execute(*cmd, 
**kwargs)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b]   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 275, 
in execute
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] cmd=sanitized_cmd)
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] ProcessExecutionError: Unexpected error 
while running command.
  2015-12-31 16:33:16.696 18243 ERROR nova.compute.manager [instance: 
727fd979-d02e-4a9b-8b7c-9488ead6c18b] Command: qemu-img resize 
/var/lib/nova/instances/727fd979-d02e-4a9b-8b7c-9488ead6c18b/disk 42949672960
  2015-12-31 16:33:16.696 18243 ERROR nova.comp

[Yahoo-eng-team] [Bug 1510504] Re: Keypairs list results not limited on database server-side

2016-03-22 Thread Markus Zoeller (markus_z)
Looks like this is an RFE and will be driven by the (not yet approved) bp.

** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Importance: Low => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510504

Title:
  Keypairs list results not limited on database server-side

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Nova's list results are not limited by the database query. Users can
  only get a list of all keypairs. This is inefficient. We should pass
  marker and limit parameters to the database query itself.
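
  A minimal sketch of the marker/limit semantics being asked for (plain Python
  for illustration, not the actual DB API):

    def page(names, marker=None, limit=1000):
        names = sorted(names)
        if marker is not None:
            names = [n for n in names if n > marker]
        return names[:limit]

    keypairs = ["kp-%04d" % i for i in range(2500)]
    first = page(keypairs)                      # rows 0..999
    second = page(keypairs, marker=first[-1])   # rows 1000..1999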

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1510504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496929] Re: VMware NSX: instance launch failed: TooManyExternalNetworks: More than one external network exists

2016-03-22 Thread Markus Zoeller (markus_z)
@El Mehd:
This bug report was opened against Juno (which has reached its EOL) [1].
I'm going to deprecate it with "invalid". If the issue arises again in 
master code or a supported stable release, just reopen the report by
setting it to "New".

References:
[1] http://releases.openstack.org/

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496929

Title:
  VMware NSX: instance launch failed: TooManyExternalNetworks: More than
  one external network exists

Status in OpenStack Compute (nova):
  Invalid
Status in vmware-nsx:
  New

Bug description:
  Hello, I followed the documentation
  " http://docs.openstack.org/kilo/config-reference/content/vmware.html "
  to connect ESXi with OpenStack Juno. I put the following configuration on
  the compute node, in the nova.conf file:

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver
   
  [vmware]
  host_ip=
  host_username=
  host_password=
  cluster_name=
  datastore_regex=

  And in the nova-compute.conf :

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver

  
  But in vain: on the Juno OpenStack Dashboard, when I want to launch an 
instance, I get the error " Error: Failed to launch instance "Test": Please 
try again later [Error: No valid host was found. ]. ". Is there an idea for 
how to launch an instance on my ESXi?

  attached the logs on the controller and compute node:

  ==> nova-conductor

  ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call 
last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", 
line 2054, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2185, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
0c1ee287-edfe-4258-bb43-db23338bbe90 was re-scheduled: Network could not be 
found for bridge br-int\n']
  2015-09-17 15:31:34.921 2432 WARNING nova.scheduler.driver 
[req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] NoValidHost exception with message: 'No 
valid host was found.'

  
  => neutron 
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] Returning exception More than one 
external network exists to caller
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/l3_rpc.py", line 
149, in get_external_network_id\nnet_id = 
self.plugin.get_external_network_id(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/db/external_net_db.py", line 161, in 
get_external_network_id\nraise n_exc.TooManyExternalNetworks()\n', 
'TooManyExternalNetworks: More than one e
 xternal network exists\n']

  
  =>  compute Node / nova-compute

  2015-09-17 15:28:22.323 5944 ERROR oslo.vmware.common.loopingcall [-] in 
fixed duration looping call
  2015-09-17 15:31:33.550 5944 ERROR nova.compute.manager [-] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] Instance failed to spawn

  
  => nova-network / nova-compute

  2015-09-17 11:21:10.840 1363 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on ControllerNode01:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 3 seconds.
  2015-09-17 11:23:02.874 1363 ERROR nova.openstack.common.periodic_task [-] 
Error during VlanManager._disassociate_stale_fixed_ips: Timed out waiting for a 
reply to message ID b6d62061352e4590a37cbc0438ea3ef0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296967] Re: instances stuck with task_state of REBOOTING after controller switchover

2016-03-22 Thread Markus Zoeller (markus_z)
@Chris Friesen:
This bug report was opened against Havana and the confirmation was
from Melanie (comment #3) during the Juno cycle. You also state
that this is an issue which comes up in certain race conditions
which are hard to reproduce. Given the age and conditions of this
bug report, there is almost no chance to make progress here.
I'm going to deprecate it with "won't fix". If the issue
arises again, just reopen the report by setting it to "New".

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
   Importance: High => Undecided

** Changed in: nova
Milestone: ongoing => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296967

Title:
  instances stuck with task_state of REBOOTING after controller
  switchover

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We were doing some testing of Havana and have run into a scenario that
  ended up with two instances stuck with a task_state of REBOOTING
  following a reboot of the controller:

  1) We reboot the controller.
  2) Right after it comes back up something calls compute.api.API.reboot() on 
an instance.
  3) That sets instance.task_state = task_states.REBOOTING and then calls 
instance.save() to update the database.
  4) Then it calls self.compute_rpcapi.reboot_instance() which does an rpc cast.
  5) That message gets dropped on the floor due to communication issues between 
the controller and the compute.
  6) Now we're stuck with a task_state of REBOOTING.

  Currently when doing a reboot we set the REBOOTING task_state in the
  database in compute-api and then send an RPC cast. That seems awfully
  risky given that if that message gets lost or the call fails for any
  reason we could end up stuck in the REBOOTING state forever.  I think
  it might make sense to have the power state audit clear the REBOOTING
  state if appropriate, but others with more experience should make that
  call.

  It didn't happen to us, but I think we could get into this state
  another way:

  1) nova-compute was running reboot_instance()
  2) we reboot the controller
  3) reboot_instance() times out trying to update the instance with the the new 
power state and a task_state of None.
  4) Later on in _sync_power_states() we would update the power_state, but 
nothing would update the task_state.

  The timeline that I have looks like this.  We had some buggy code that
  sent all the instances for a reboot when the controller came up.  The
  first two are in the controller logs below, and these are the ones
  that failed.

  controller: (running everything but nova-compute)
  nova-api log:

  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.712 8187 INFO 
nova.compute.api [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.898 8187 INFO 
nova.osapi_compute.wsgi.server [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1" status: 202 len: 185 time: 0.2299521
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.152 8128 INFO 
nova.compute.api [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
17169e6d-6693-4e95-9900-ba250dad5a39] API::reboot reboot_type=SOFT
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.273 8128 INFO 
nova.osapi_compute.wsgi.server [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/17169e6d-6693-4e95-9900-ba250dad5a39/action
 HTTP/1.1" status: 202 len: 185 time: 0.1583798

  After this there are other reboot requests for the other instances,
  and those ones passed.

  Interestingly, we later see this
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.476 8134 INFO 
nova.compute.api [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.477 8134 INFO 
nova.osapi_compute.wsgi.server [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1" status: 409 len: 303 time: 0.1177511
  /var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:48.831 8143 INFO 
nova.compute.api [req-afeb680b-91fd-4446-b4d8-fd264541369d 
8162b2e247704e218ed13094889a5244 48c9875f2ed

[Yahoo-eng-team] [Bug 1554195] Re: Nova (juno) ignores logging_*_format_string in syslog output

2016-03-19 Thread Markus Zoeller (markus_z)
@George Shuklin:
The Juno release has its EOL in December 2015 [1]. This means the
upstream development doesn't provide any fixes for that anymore.

References:
[1] http://releases.openstack.org/

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554195

Title:
  Nova (juno) ignores logging_*_format_string in syslog output

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Confirmed

Bug description:
  Nova in Juno ignores the following settings in the configuration file 
([DEFAULT] section):
  logging_context_format_string
  logging_default_format_string
  logging_debug_format_suffix
  logging_exception_prefix

  when sending logs via syslog. Log entries on stderr / in log files are
  fine (use logging_*_format).

  Steps to reproduce:

  1. set up custom logging stings and enable syslog:

  [DEFAULT]
  logging_default_format_string=MYSTYLE-DEFAULT-%(message)s
  logging_context_format_string=MYSTYLE-CONTEXT-%(message)s
  use_syslog=true

  2. restart nova and perform some actions

  3. Check the syslog content

  Expected behaviour: MYSTYLE- prefix in all messages.
  Actual behaviour: no changes in log message styles.

  This bug is specific to Juno version of nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557888] Re: Nova CLI - No message on nova-agent deletion

2016-03-19 Thread Markus Zoeller (markus_z)
OK, got it now, so it's about the feedback of the python-novaclient.
It looks like we have 3 different strategies to provide feedback:

1. Print a message that the task is accepted
2. Print a table with details of the deleted object
3. Print nothing 

Feedback 1 is useful for long-running asynchronous tasks like the
creation of instances:

stack@stack:~$ nova boot --image cirros-0.3.4-x86_64-uec \
--flavor m1.tiny my-own-instance
# [... snip instance details ...]
stack@stack:~$ nova delete my-own-instance
Request to delete server my-own-instance has been accepted.
stack@stack:~$

Feedback 2 is used for deleting a flavor:

stack@stack:~$ nova flavor-create my-own-flavor 12345 512 0 3
# [... snip flavor details ...]
stack@stack:~$ nova flavor-delete my-own-flavor
+---+---+---+--+---+--+[...]
| ID| Name  | Memory_MB | Disk | Ephemeral | Swap |[...]
+---+---+---+--+---+--+[...]
| 12345 | my-own-flavor | 512   | 0| 0 |  |[...]
+---+---+---+--+---+--+[...]
stack@stack:~$ 

Feedback 3 is used for deleting an agent (as you found out) and also for
deleting a keypair:

stack@stack:~$ nova agent-create linux x86 1.0 http://dummy.com \
0e49760580a20076fbba7b1e3ccd20e2 libvirt
# [... snip agent details ...]
stack@stack:~$ nova agent-delete 1
stack@stack:~$ 

stack@stack:~$ nova keypair-add my-own-keypair
# [... snip keypair details ...]
stack@stack:~$ nova keypair-delete my-own-keypair
stack@stack:~$ 

Incomplete:
I'd say that "nova agent-delete" doesn't fall into the "feedback 1"
category as it isn't a long running task. Because other "nova *-delete"
commands also don't provide feedback, I'm not sure if this is a valid
bug report. I'm going to ask around.

Test Env:
I tested with Nova master (Mitaka cycle), commit 859ff48

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557888

Title:
  Nova CLI - No message on nova-agent deletion

Status in python-novaclient:
  Incomplete

Bug description:
  In Nova, when deleting a nova-agent, no message or alert is generated.
  But for other commands, e.g. "nova delete <server>", a proper message is
  generated after deleting an instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1557888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557902] Re: Wait for all success after nova boot with poll

2016-03-19 Thread Markus Zoeller (markus_z)
Confirmed: 
The novaclient uses only one instance to show the progress [1] and
doesn't consider the real number of created instances. AFAIK the REST
API to create instances will always return the first instance and not
a full list of all created instances. Blueprints to change that [2]
didn't get implemented. There was a bug about that too (within the 
last 12 months) but I can't find it anymore. IIRC we accepted this
behavior, but don't pin me down on this.

However, there is the REST API request parameter "return_reservation_id"
which could maybe be used in the python-novaclient to list all instances
matching this reservation_id [3]. And then the progress can be polled
per instance. If this would make the testing more reliable it's worth
a shot IMO.

References:
[1] novaclient; Mitaka; create; poll for one instance: 
https://github.com/openstack/python-novaclient/blob/b80d8cb6e6cd1e86c7dc3c99c3e7d92641c00097/novaclient/v2/shell.py#L591-L592
[2] 
https://blueprints.launchpad.net/nova/+spec/return-all-servers-during-multiple-create
[3] nova api-ref; V2.1; create multiple instances: 
http://developer.openstack.org/api-ref-compute-v2.1.html#os-multiple-create-v2.1
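
A rough sketch of that per-instance polling (illustrative only; it assumes the
reservation id is already known and that the reservation_id server filter is
available in the deployment):

    import time

    def wait_for_all(nova, reservation_id, timeout=600, poll_interval=5):
        """Poll until every server of the reservation is ACTIVE or ERROR."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            servers = nova.servers.list(
                search_opts={'reservation_id': reservation_id})
            if servers and {s.status for s in servers} <= {'ACTIVE', 'ERROR'}:
                return servers
            time.sleep(poll_interval)
        raise RuntimeError("timed out waiting for reservation %s"
                           % reservation_id)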

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => Low

** Changed in: python-novaclient
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1557902

Title:
  Wait for all success after nova boot with poll

Status in python-novaclient:
  In Progress

Bug description:
  Now we can use nova boot with the poll parameter for one instance. But if
  we want to boot multiple instances, it returns as soon as the first
  instance succeeds or fails.

  It would be much better for testing to return the result of all
  instances when booting with the max and min count parameters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1557902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558459] [NEW] api-guide: server concepts: multiple format and style issues

2016-03-19 Thread Markus Zoeller (markus_z)
Public bug reported:

This bug report is based on [1]. The "server concepts" page of the "api
guide" has multiple format and style issues:

* The code blocks are not correctly indented under their bullet points.
* The "Example ..." texts are rendered within the previous code blocks.
* The continuous texts (like "there are 2 servers existing ...") are placed 
within the code blocks.
* For reasons I don't get the last 2 code examples are rendered with an 
orange color font.
* Bullet point "Resource Optimization" has an unintentional quote due to a 
missing space.
* The json examples could be within "code-block:: json" to be more specific 
as json looks very similar to python dicts but is more sensitive when it 
comes to validation.

Nova version:
Seen with commit 230958c002736444bfb36c9f0845f4f4e5253d0e

References:
[1] https://review.openstack.org/#/c/292137/5
[2] http://developer.openstack.org/api-guide/compute/server_concepts.html

** Affects: nova
 Importance: Low
 Status: Confirmed


** Tags: doc low-hanging-fruit

** Tags added: doc low-hanging-fruit

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558459

Title:
  api-guide: server concepts: multiple format and style issues

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This bug report is based on [1]. The "server concepts" page of the
  "api guide" has multiple format and style issues:

  * The code blocks are not correctly indented under their bullet points.
  * The "Example ..." texts are rendered within the previous code blocks.
  * The continuous texts (like "there are 2 servers existing ...") are 
placed within the code blocks.
  * For reasons I don't get the last 2 code examples are rendered with an 
orange color font.
  * Bullet point "Resource Optimization" has an unintentional quote due to a 
missing space.
  * The json examples could be within "code-block:: json" to be more 
specific as json looks very similar to python dicts but is more sensitive 
when it comes to validation.

  Nova version:
  Seen with commit 230958c002736444bfb36c9f0845f4f4e5253d0e

  References:
  [1] https://review.openstack.org/#/c/292137/5
  [2] http://developer.openstack.org/api-guide/compute/server_concepts.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1558459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1186354] Re: Limits API doesn't work with Neutron

2016-03-19 Thread Markus Zoeller (markus_z)
Blueprint limits-quota-usage-from-neutron [1] will drive the effort. It
didn't make the cut for Mitaka [2] and needs to be re-proposed for
Newton.

References:
[1] https://blueprints.launchpad.net/nova/+spec/limits-quota-usage-from-neutron
[2] https://review.openstack.org/#/c/206735/

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1186354

Title:
  Limits API doesn't work with Neutron

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  The limits API is defined to return quota and usage values for
  floating ips and security groups, but on a system configured to use
  Quantum these values need to be retrieved from the quantum client
  rather than the Nova DB.
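
  A rough sketch of what "retrieved from the quantum client" means in practice
  (illustrative only; the auth values and project id are placeholders, and the
  response keys assume the Neutron v2 API):

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client as neutron_client

    auth = v3.Password(auth_url="http://controller:35357/v3",
                       username="nova", password="NOVA_PASS",
                       project_name="service",
                       user_domain_id="default", project_domain_id="default")
    neutron = neutron_client.Client(session=ks_session.Session(auth=auth))

    tenant_id = "PROJECT_ID"  # placeholder
    quota = neutron.show_quota(tenant_id)['quota']['floatingip']
    used = len(neutron.list_floatingips(tenant_id=tenant_id)['floatingips'])
    print("floating ips: %d used of %d allowed" % (used, quota))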

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1186354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

