[Yahoo-eng-team] [Bug 1489194] Re: hw_scsi_model from glance image is not used when booting instance from new volume

2015-08-27 Thread jichenjc
Seems not related to nova based on comment #1.

** Changed in: nova
   Status: Triaged => New

** Project changed: nova => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489194

Title:
  hw_scsi_model from glance image is not used when booting instance from
  new volume

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an instance backed by a cinder volume, the disk device /dev/vda 
is used regardless of image settings. I am using the following image metadata 
to set virtio-scsi driver on my instances:
  hw_disk_bus=scsi
  hw_scsi_model=virtio-scsi

  When I boot instances using a normal root device (boot from image),
  they are using /dev/sda and virtio-scsi as expected. When booting from
  volume (either with a new volume or an existing image-based volume),
  they use <target dev='vda' bus='virtio'/>, ignoring the image
  metadata.

  According to this spec:
  http://specs.openstack.org/openstack/nova-specs/specs/juno/approved/add-virtio-scsi-bus-for-bdm.html

  A work item was: Nova retrieve “hw_scsi_model” property from
  volume’s glance_image_metadata when booting from cinder volume

  I would expect this work is what would implement setting virtio-scsi
  on volume backed instances, however none of the reviews I have looked
  through for that spec appear to implement anything regarding volume
  backed instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489474] [NEW] Lack of federated token user object validation

2015-08-27 Thread Marek Denis
Public bug reported:

In our tests it would be better to add validation of the federated user
structure in the token.
The check should ensure the required attributes are present and that some of
them meet specified criteria (e.g. user_id should always be URL safe).
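
A minimal sketch of such a check (the required attribute names and the
URL-safe character set below are assumptions for illustration, not
keystone's spec):

    import re

    REQUIRED = ('id', 'name', 'OS-FEDERATION')    # assumed attribute names
    URL_SAFE = re.compile(r'^[A-Za-z0-9._~-]+$')  # RFC 3986 unreserved chars

    def validate_federated_user(user):
        missing = [attr for attr in REQUIRED if attr not in user]
        if missing:
            raise ValueError('missing attributes: %s' % ', '.join(missing))
        if not URL_SAFE.match(user['id']):
            raise ValueError('user_id is not url safe: %r' % user['id'])

    validate_federated_user({'id': 'abc-123', 'name': 'alice',
                             'OS-FEDERATION': {'identity_provider': 'idp1'}})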

** Affects: keystone
 Importance: Wishlist
 Assignee: Marek Denis (marek-denis)
 Status: In Progress


** Tags: federation test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489474

Title:
  Lack of federated token user object validation

Status in Keystone:
  In Progress

Bug description:
  In our tests it would be better to add validation of the federated user
  structure in the token.
  The check should ensure the required attributes are present and that some
  of them meet specified criteria (e.g. user_id should always be URL safe).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489474/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489442] [NEW] Invalid order of volumes with adding a volume in boot operation

2015-08-27 Thread Feodor Tersin
Public bug reported:

If an image has several volumes in its block device mapping (bdm), and a
user adds one more volume for the boot operation, the new volume is not
simply appended to the volume list but becomes the second device. This can
lead to problems if software on the image's root device has settings that
point to the other volumes.

For example (a short demo follows the list):
1 the image is a snapshot of a volume-backed instance which had vda and vdb
volumes
2 the instance ran an SQL server which used both vda and vdb for its database
3 if a user runs a new instance from the image, the device names are either
restored (with xen) or reassigned (libvirt) to the same names, because the
order of devices passed to libvirt is the same as it was for the original
instance
4 if a user runs a new instance while adding a new volume, the volume list
becomes vda, new, vdb
5 in this case libvirt reassigns device names to vda=vda, new=vdb, vdb=vdc
6 as a result the SQL server will not find its data on vdb
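
A tiny runnable demo of the ordering difference (illustrative only):

    image_bdms = ['vda-snapshot', 'vdb-snapshot']  # order from the image bdm

    # Reported behavior: the added volume becomes the second device, so
    # libvirt renames everything that follows it.
    reported = [image_bdms[0], 'new-volume'] + image_bdms[1:]
    print(reported)   # ['vda-snapshot', 'new-volume', 'vdb-snapshot']
    # -> assigned vda, vdb, vdc: the old vdb data now lives on vdc

    # Expected behavior: append, preserving the image's device order.
    expected = image_bdms + ['new-volume']
    print(expected)   # ['vda-snapshot', 'vdb-snapshot', 'new-volume']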

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489442

Title:
  Invalid order of volumes with adding a volume in boot operation

Status in OpenStack Compute (nova):
  New

Bug description:
  If an image has several volumes in its block device mapping (bdm), and a
  user adds one more volume for the boot operation, the new volume is not
  simply appended to the volume list but becomes the second device. This can
  lead to problems if software on the image's root device has settings that
  point to the other volumes.

  For example:
  1 the image is a snapshot of a volume-backed instance which had vda and vdb
  volumes
  2 the instance ran an SQL server which used both vda and vdb for its
  database
  3 if a user runs a new instance from the image, the device names are either
  restored (with xen) or reassigned (libvirt) to the same names, because the
  order of devices passed to libvirt is the same as it was for the original
  instance
  4 if a user runs a new instance while adding a new volume, the volume list
  becomes vda, new, vdb
  5 in this case libvirt reassigns device names to vda=vda, new=vdb, vdb=vdc
  6 as a result the SQL server will not find its data on vdb

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302351] Re: v2 API can't create image - Attribute 'file' is reserved.

2015-08-27 Thread Stuart McLaren
Seems to work ok now

$ glance --os-image-api-version 2  image-create --container-format bare
--disk-format raw --name trusty2 --file /etc/fstab

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 9cb02fe7fcac26f8a25d6db3109063ae     |
| container_format | bare                                 |
| created_at       | 2015-08-27T16:45:40Z                 |
| disk_format      | raw                                  |
| id               | b16dbfb2-f435-40a8-aa08-2aadaee99cd1 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | trusty2                              |
| owner            | 411423405e10431fb9c47ac5b2446557     |
| protected        | False                                |
| size             | 145                                  |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2015-08-27T16:45:40Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+


** Changed in: python-glanceclient
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1302351

Title:
  v2 API can't create image - Attribute 'file' is reserved.

Status in Glance:
  Invalid
Status in python-glanceclient:
  Fix Released

Bug description:
  Trying to create an image with V2 API and get the following error:

  glance --os-image-api-version 2 --os-image-url http://glance-icehouse:9292/ 
image-create --container-format bare --disk-format raw --name trusty2 --file 
trusty-server-cloudimg-amd64-disk1.img 
  Request returned failure status.
  403 Forbidden
  Attribute 'file' is reserved.
  (HTTP 403)

  It works fine if I do  --os-image-api-version 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1302351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461752] Re: Error at listener's barbican container validation

2015-08-27 Thread Adam Harwell
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461752

Title:
  Error at listener's barbican container validation

Status in neutron:
  Invalid

Bug description:
  Validation of the barbican container associated with a listener (tls
  and sni) at the plugin layer throws an error.

  In validate_tls_container(), a container_id is passed whereas it
  expects a container_ref.

  def _validate_tls(self, listener, curr_listener=None):

      def validate_tls_container(container_ref):
          ...

      def validate_tls_containers(to_validate):
          for container_ref in to_validate:
              validate_tls_container(container_ref)
      ...
      if len(to_validate) > 0:
          validate_tls_containers(to_validate)

  # to_validate is a list of container_ids.

  # In barbican_cert_manager.py's get_cert(), cert_ref is a UUID instead
  # of a ref_url for the container.

  def get_cert(cert_ref, service_name='Octavia', resource_ref=None,
               check_only=False, **kwargs):

      ...
      :param cert_ref: the UUID of the cert to retrieve
      ...

      cert_container = connection.containers.get(
          container_ref=cert_ref)

  # The container_ref above is a UUID, whereas connection.containers.get()
  # expects a reference URL.

  We should build the ref_url from the container UUID; the following
  should fix the issue.

  diff --git a/neutron_lbaas/common/cert_manager/barbican_cert_manager.py b/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
  index 1ad38ee..8d3c3c4 100644
  --- a/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
  +++ b/neutron_lbaas/common/cert_manager/barbican_cert_manager.py
  @@ -219,6 +222,9 @@ class CertManager(cert_manager.CertManager):

           connection = BarbicanKeystoneAuth.get_barbican_client()

  +        if self.is_UUID(cert_ref):
  +            cert_ref = self.get_cert_ref_url(cert_ref)
  +
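
  For reference, a minimal standalone sketch of the two helpers the patch
  assumes (is_UUID and get_cert_ref_url are the reporter's proposed names,
  not an existing neutron-lbaas API; the endpoint below is a placeholder):

    import uuid

    BARBICAN_ENDPOINT = 'http://localhost:9311/v1'  # assumed endpoint

    def is_UUID(value):
        # True if value parses as a UUID (a bare id rather than a ref URL).
        try:
            uuid.UUID(value)
            return True
        except (TypeError, ValueError, AttributeError):
            return False

    def get_cert_ref_url(cert_ref):
        # Build the container ref URL that barbicanclient expects.
        return '%s/containers/%s' % (BARBICAN_ENDPOINT, cert_ref)

    ref = '0b8d5af0-c156-46ad-b4c6-882a84824ce2'
    if is_UUID(ref):
        ref = get_cert_ref_url(ref)
    print(ref)  # http://localhost:9311/v1/containers/0b8d5af0-...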

  Error log:
  ---------------------------------------------------------------------------
  ERROR neutron_lbaas.common.cert_manager.barbican_cert_manager [req-a5e704fb-f04b-45f2-9c50-f3bfebe09afd admin 5ca9fcbf4652456a9bd53582b86bd0e9] Error getting 0b8d5af0-c156-46ad-b4c6-882a84824ce2
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager Traceback (most recent call last):
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/neutron-lbaas/neutron_lbaas/common/cert_manager/barbican_cert_manager.py", line 228, in get_cert
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     container_ref=cert_ref
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/python-barbicanclient/barbicanclient/containers.py", line 528, in get
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     base.validate_ref(container_ref, 'Container')
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager   File "/opt/stack/python-barbicanclient/barbicanclient/base.py", line 35, in validate_ref
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager     raise ValueError('{0} incorrectly specified.'.format(entity))
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager ValueError: Container incorrectly specified.
  2015-06-04 09:58:38.126 TRACE neutron_lbaas.common.cert_manager.barbican_cert_manager
  2015-06-04 09:58:38.167 INFO neutron.api.v2.resource [req-a5e704fb-f04b-45f2-9c50-f3bfebe09afd admin 5ca9fcbf4652456a9bd53582b86bd0e9] create failed (client error): TLS container 0b8d5af0-c156-46ad-b4c6-882a84824ce2 could not be found
  ---------------------------------------------------------------------------

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461752/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489555] [NEW] nova rbd volume attach to running instance in Kilo is failing

2015-08-27 Thread venkat bokka
Public bug reported:

I am using OpenStack Kilo on Debian, Ceph firefly (0.80.7), libvirt 1.2.9.

I am able to create a cinder volume with Ceph as the backend, but when I try
to attach the volume to a running instance it fails with the error:
libvirtError: internal error: unable to execute QEMU command 'device_add':
Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2'

Please find the attached nova-compute log.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: nova-compute.log
   
https://bugs.launchpad.net/bugs/1489555/+attachment/4453451/+files/nova-compute.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489555

Title:
  nova rbd volume attach to running instance in Kilo is failing

Status in OpenStack Compute (nova):
  New

Bug description:
  I am using OpenStack Kilo on Debian, Ceph firefly (0.80.7), libvirt
  1.2.9.

  I am able to create a cinder volume with Ceph as the backend, but when I
  try to attach the volume to a running instance it fails with the error:
  libvirtError: internal error: unable to execute QEMU command 'device_add':
  Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2'

  Please find the attached nova-compute log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484322] Re: Loads service plugins fault-tolerantly

2015-08-27 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484322

Title:
  Loads service plugins fault-tolerantly

Status in neutron:
  Won't Fix

Bug description:
  I notice the following code can be optimized:

  def _load_service_plugins(self):
      """Loads service plugins.

      Starts from the core plugin and checks if it supports
      advanced services then loads classes provided in configuration.
      """
      # load services from the core plugin first
      self._load_services_from_core_plugin()

      plugin_providers = cfg.CONF.service_plugins
      LOG.debug("Loading service plugins: %s", plugin_providers)
      for provider in plugin_providers:
          if provider == '':
              continue

          LOG.info(_LI("Loading Plugin: %s"), provider)
          plugin_inst = self._get_plugin_instance('neutron.service_plugins',
                                                  provider)

          # only one implementation of svc_type allowed
          # specifying more than one plugin
          # for the same type is a fatal exception
          if plugin_inst.get_plugin_type() in self.service_plugins:
              raise ValueError(_("Multiple plugins for service "
                                 "%s were configured") %
                               plugin_inst.get_plugin_type())

  If provider == '' is skipped so that a configuration mistake does not
  break the neutron service, why not treat the case where
  plugin_inst.get_plugin_type() is already in self.service_plugins the same
  way? If someone mistakenly configures the same plugin twice, we could
  quietly skip the second one. So I think the following change would be
  better:

  if plugin_inst.get_plugin_type() in self.service_plugins:
      continue

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1484322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483322] Re: python-memcached get_multi is much faster than get when getting multiple values

2015-08-27 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.cache
   Status: Fix Committed => Fix Released

** Changed in: oslo.cache
 Milestone: None => 0.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483322

Title:
  python-memcached get_multi is much faster than get when getting
  multiple values

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.cache:
  Fix Released

Bug description:
  nova uses memcached via python-memcached's get function.

  When multiple items are retrieved, it does so in a for .. in .. loop;
  in this case get_multi has better performance.

  In my case, here is the test result:

  get 2.3020670414
  get_multi 0.0353858470917
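
  A sketch of the difference (requires python-memcached and a reachable
  memcached; the timings above are the reporter's, not reproduced here):

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])
    keys = ['item-%d' % i for i in range(1000)]

    # One network round trip per key:
    values = [mc.get(k) for k in keys]

    # One batched request for all keys; returns {key: value} for hits:
    values = mc.get_multi(keys)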

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461822] Re: Lack of password complexity verification in Keystone

2015-08-27 Thread David Stanek
Marking as WONTFIX because we are actively trying not to build a full
IdP solution into Keystone.

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461822

Title:
  Lack of password complexity verification in Keystone

Status in Keystone:
  Won't Fix

Bug description:
  Currently, we can specify an arbitrary string as the password when
  creating a user (or updating a user's password) through keystone. In
  normal use cases the user's password shouldn't be weak, because a weak
  password may cause potential security issues.

  Keystone should add a mechanism to perform password complexity
  verification, and to fit different scenarios this mechanism should be
  able to be enabled or disabled by a config option. The checking rules
  should follow general industry standards.

  There is a similar situation about instance's password in Nova, see
  bug[1] and mail thread[2].

  [1] https://bugs.launchpad.net/nova/+bug/1461431
  [2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/065600.html
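
  For illustration only (keystone marked this Won't Fix), a typical
  complexity rule can be expressed as a single regex; the exact policy
  below is an assumption, not an OpenStack standard:

    import re

    # >= 8 chars with lower, upper, digit and symbol character classes.
    POLICY = re.compile(
        r'^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$')

    def password_ok(password):
        return POLICY.match(password) is not None

    print(password_ok('weak'))          # False
    print(password_ok('Str0ng!Pass'))   # True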

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462152] Re: python-memcache (and therefore) token memcache persistence driver does not support ipv6

2015-08-27 Thread David Stanek
Marking as WONTFIX since there isn't any work to be done on Keystone.

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1462152

Title:
  python-memcache (and therefore) token memcache persistence driver does
  not support ipv6

Status in Keystone:
  Won't Fix
Status in openstack-manuals:
  Confirmed

Bug description:
  (morganfainberg):
  OpenStack Manuals (for both Master and Kilo) need to be updated to eliminate 
the recommendation to use the memcache token persistence backend. The memcache 
token persistence backend is a poor choice due to performance concerns of the 
code itself, the fact that it is assumed that the token backend is stable 
storage (memcached is not) and can expose security concerns if restarted in 
some scenarios (PKI tokens and revoked tokens becoming valid again), and 
finally that the python-memcache library is all around poor (it will not work 
with IPv6 and is not Python3 compatible, among other poor design choices).

  
  The memcache backend driver that utilizes python-memcache does not support 
IPv6.
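
  The root cause is the server-string parsing: a naive host:port split cuts
  a bare IPv6 address at the first colon. The sketch below illustrates the
  failure mode and a bracket-aware alternative; it is not python-memcache's
  actual code:

    addr = '2001:db8:1000:1:f816:3eff:fe2a:f9c7:11211'
    host, port = addr.split(':', 1)
    print(host, port)   # '2001' 'db8:1000:...': garbage

    def parse_server(s):
        # '[v6addr]:port' form
        if s.startswith('['):
            host, _, port = s[1:].partition(']:')
            return host, int(port)
        # hostname or IPv4 form: split on the last colon
        host, _, port = s.rpartition(':')
        return host, int(port)

    print(parse_server('[2001:db8::1]:11211'))   # ('2001:db8::1', 11211)
    print(parse_server('keystone-1:11211'))      # ('keystone-1', 11211)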

  I have included three scenarios (A, B and C) that will reproduce the
  bug and a control test that succeeds with same configuration using
  IPv4-resolving hostname.

  To reproduce scenario A: Bare IPv6 address in config
  1) Configure keystone according to 
http://docs.openstack.org/kilo/install-guide/install/apt/content/keystone-install.html
  2) In section [memcache] in /etc/keystone/keystone.conf change servers = line:
   servers = 
2001:db8:1000:1:f816:3eff:fe2a:f9c7:11211,2001:db8:1000:1:f816:3eff:fee9:9ce3:11211,2001:db8:1000:1:f816:3eff:fead:8f7f:11211
  3) Restart keystone/apache
  4) Attempt to issue token:
   openstack --os-auth-url http://192.168.0.15:35357   --os-project-name admin 
--os-username admin --os-auth-type password   token issue

  ERROR: openstack An unexpected error prevented the server from
  fulfilling your request: Unable to parse connection string:
  2001:db8:1000:1:f816:3eff:fe2a:f9c7:11211 (Disable debug mode to
  suppress these details.) (HTTP 500) (Request-ID: req-7c2bfd39-4b83
  -462b-92c6-f75f7677c8e5)

  To reproduce scenario B: IPv6 address enclosed in brackets
  1) Configure keystone according to 
http://docs.openstack.org/kilo/install-guide/install/apt/content/keystone-install.html
  2) In section [memcache] in /etc/keystone/keystone.conf change servers = line:
   servers = 
[2001:db8:1000:1:f816:3eff:fe2a:f9c7]:11211,[2001:db8:1000:1:f816:3eff:fee9:9ce3]:11211,[2001:db8:1000:1:f816:3eff:fead:8f7f]:11211
  3) Restart keystone/apache
  4) Attempt to issue token:
   openstack --os-auth-url http://192.168.0.15:35357   --os-project-name admin 
--os-username admin --os-auth-type password   token issue

  ERROR: openstack An unexpected error prevented the server from
  fulfilling your request: Unable to parse connection string:
  [2001:db8:1000:1:f816:3eff:fe2a:f9c7]:11211 (Disable debug mode to
  suppress these details.) (HTTP 500) (Request-ID: req-
  869eb953-74af-4336-b3e1-dc3a417180f9)

  To reproduce scenario C: hostname that resolves to IPv6-only address
  1) Configure keystone according to 
http://docs.openstack.org/kilo/install-guide/install/apt/content/keystone-install.html
  2) In section [memcache] in /etc/keystone/keystone.conf change servers = line:
   servers = keystone-1:11211,keystone-2:11211,keystone-3:11211

  3) Edit /etc/hosts:
  2001:db8:1000:1:f816:3eff:fe2a:f9c7   keystone-1
  2001:db8:1000:1:f816:3eff:fee9:9ce3   keystone-2
  2001:db8:1000:1:f816:3eff:fead:8f7f   keystone-3

  3) Restart keystone/apache
  4) Attempt to issue token:

  openstack --os-auth-url http://192.168.0.15:35357   --os-project-name admin 
--os-username admin --os-auth-type password   token issue
  Password:
  ERROR: openstack Maximum lock attempts on 
_lockusertokens-30dbbe8174b24174a3a24d1ae554ab17 occurred. (Disable debug mode 
to suppress these details.) (HTTP 500) (Request-ID: 
req-efd53eae-4bcf-4fd9-bab2-dd4c86fb9798)

  Control test:
  1) Configure keystone according to 
http://docs.openstack.org/kilo/install-guide/install/apt/content/keystone-install.html
  2) In section [memcache] in /etc/keystone/keystone.conf change servers = line:
   servers = keystone-1:11211,keystone-2:11211,keystone-3:11211

  3) Edit /etc/hosts:
  192.168.0.15  keystone-1
  192.168.0.14  keystone-2
  192.168.0.16  keystone-3

  3) Restart keystone/apache
  4) Attempt to issue token:

  openstack --os-auth-url http://192.168.0.15:35357   --os-project-name admin 
--os-username admin --os-auth-type password   token issue
  Password:
  +------------+----------------------+
  | Field      | Value                |
  +------------+----------------------+
  | expires    | 2015-06-05T00:31:30Z

[Yahoo-eng-team] [Bug 1489562] [NEW] Support docker image type in ng launch instance

2015-08-27 Thread Justin Pomeroy
Public bug reported:

The angular Launch Instance workflow does not show the correct image
type for Docker images in the Source allocated and available tables.
These show as RAW but when the container format is 'docker' the image
type should show as DOCKER.
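
A sketch of the display rule being asked for (an assumed helper for
illustration, not horizon's actual code):

    def displayed_image_type(disk_format, container_format):
        # Prefer the container format when it identifies a docker image.
        if container_format == 'docker':
            return 'DOCKER'
        return (disk_format or '').upper()

    print(displayed_image_type('raw', 'docker'))  # DOCKER (expected)
    print(displayed_image_type('raw', 'bare'))    # RAW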

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489562

Title:
  Support docker image type in ng launch instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The angular Launch Instance workflow does not show the correct image
  type for Docker images in the Source allocated and available tables.
  These show as RAW but when the container format is 'docker' the image
  type should show as DOCKER.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489592] [NEW] Restarting OVS agent on network node breaks networking

2015-08-27 Thread Ryan Moats
Public bug reported:

Set up a three-node devstack (compute+network+controller) running DVR on
OVS+VXLAN and configure:

external net --- router --- internal net --- instance

Ping continuously from the instance to 8.8.8.8.

Restart the OVS agent on the network node.
After the restart, the ping to 8.8.8.8 halts and the instance cannot ping
the dhcp server address of its network.

At this point restarting the dhcp and/or L3 agent does not help
re-establish communication; pretty much the only way out is to restack the
network node.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489592

Title:
  Restarting OVS agent on network node breaks networking

Status in neutron:
  New

Bug description:
  Set up a three-node devstack (compute+network+controller) running DVR on
  OVS+VXLAN and configure:

  external net --- router --- internal net --- instance

  Ping continuously from the instance to 8.8.8.8.

  Restart the OVS agent on the network node.
  After the restart, the ping to 8.8.8.8 halts and the instance cannot ping
  the dhcp server address of its network.

  At this point restarting the dhcp and/or L3 agent does not help
  re-establish communication; pretty much the only way out is to restack
  the network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1210261] Re: remove openstack.common.context

2015-08-27 Thread Aditi Rajagopal
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1210261

Title:
  remove openstack.common.context

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.log:
  Fix Released

Bug description:
  Relates to https://bugs.launchpad.net/neutron/+bug/1208734, and
  according to
  https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L87,
  I think we'd better remove openstack/common/context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1210261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489576] [NEW] key error in del l3_agent.pd.routers[router['id']]['subnets']

2015-08-27 Thread Adolfo Duarte
Public bug reported:

currently the following error is seen in the q-l3 or q-vpn agents when
deleting a router (steps to reproduce below):

Exit code: 0
Stdin:
Stdout: lo
Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
2015-08-27 19:08:38.007 12027 DEBUG neutron.agent.linux.utils [-] Command: ['ip', 'netns', 'delete', u'qrouter-f53025cc-5159-480e-a7f3-19122aa330a3']
Exit code: 0
Stdin:
Stdout:
Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
2015-08-27 19:08:38.009 12027 DEBUG neutron.callbacks.manager [-] Notify callbacks for router, after_delete _notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:133
2015-08-27 19:08:38.009 12027 DEBUG neutron.callbacks.manager [-] Calling callback neutron.agent.linux.pd.remove_router _notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:140
2015-08-27 19:08:38.010 12027 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" acquired by "remove_router" :: waited 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:251
2015-08-27 19:08:38.010 12027 DEBUG oslo_concurrency.lockutils [-] Lock "l3-agent-pd" released by "remove_router" :: held 0.000s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:262
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager [-] Error during notification for neutron.agent.linux.pd.remove_router router, after_delete
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager Traceback (most recent call last):
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/opt/stack/neutron/neutron/callbacks/manager.py", line 141, in _notify_loop
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     callback(resource, event, trigger, **kwargs)
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 252, in inner
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     return f(*args, **kwargs)
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager   File "/opt/stack/neutron/neutron/agent/linux/pd.py", line 307, in remove_router
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager     del l3_agent.pd.routers[router['id']]['subnets']
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager KeyError: 'id'
2015-08-27 19:08:38.012 12027 ERROR neutron.callbacks.manager
2015-08-27 19:08:38.015 12027 DEBUG neutron.callbacks.manager [-] Calling callback neutron_vpnaas.services.vpn.vpn_service.router_removed_actions _notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:140
2015-08-27 19:08:47.697 12027 DEBUG oslo_service.loopingcall [-] Fixed interval looping call 'neutron_vpnaas.services.vpn.agent.VPNAgent._report_state' sleeping for 30.00 seconds _run_loop /usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121
2015-08-27 19:08:47.756 12027 DEBUG oslo_service.loopingcall [-] Fixed interval looping call 'neutron.service.Service.report_state' sleeping for 30.00 seconds _run_loop /usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121


STEPS TO REPRODUCE:
Create a stack with a neutron server (q-vpn) node, a network server (q-l3)
node, and a compute node.
Then execute the following:

neutron net-create public --router:external
neutron subnet-create public 123.0.0.0/24 --disable-dhcp
for id in $(neutron security-group-list | grep -v " id " | grep -v "\-\-" | awk '{print $2}'); do neutron security-group-delete $id; done
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 1 --port-range-max 65535 --direction ingress default
neutron security-group-rule-create --protocol udp --port-range-min 1 --port-range-max 65535 --direction ingress default
neutron net-create private
neutron subnet-create private 103.0.0.0/24 --name private

neutron router-create r1 --distributed=False --ha=False
neutron router-gateway-set r1 public

Then execute:
neutron router-delete r1

You will see the error above.
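
A defensive sketch of the failing cleanup (illustrative only; the real
remove_router lives in neutron/agent/linux/pd.py): the callback's router
dict lacks an 'id' key, so a guarded lookup avoids the KeyError:

    def remove_router_entry(pd_routers, router, **kwargs):
        # The payload may carry the id in the router dict or as router_id.
        router_id = router.get('id') or kwargs.get('router_id')
        entry = pd_routers.get(router_id) if router_id else None
        if entry is not None:
            del entry['subnets']

    pd_routers = {'f53025cc': {'subnets': {}}}
    remove_router_entry(pd_routers, {}, router_id='f53025cc')
    print(pd_routers)   # {'f53025cc': {}}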

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489576

Title:
  key error in del l3_agent.pd.routers[router['id']]['subnets']

Status in neutron:
  New

Bug description:
  currently the following error is seen in the q-l3 or q-vpn agents when
  deleting a router (steps to reproduce below):

  Exit code: 0
  Stdin:
  Stdout: lo
  Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
  2015-08-27 19:08:38.007 12027 DEBUG neutron.agent.linux.utils [-] Command: ['ip', 'netns', 'delete', u'qrouter-f53025cc-5159-480e-a7f3-19122aa330a3']
  Exit code: 0
  Stdin:
  Stdout:
  Stderr:  execute /opt/stack/neutron/neutron/agent/linux/utils.py:150
  2015-08-27 19:08:38.009 12027 DEBUG 

[Yahoo-eng-team] [Bug 1383571] Re: The fip namespace can be destroyed on L3 agent restart

2015-08-27 Thread Carl Baldwin
** Changed in: neutron
   Status: Confirmed => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1383571

Title:
  The fip namespace can be destroyed on L3 agent restart

Status in neutron:
  Invalid

Bug description:
  The scenario is described in a recent patch review [1].  The patch did
  not introduce the problem but it was noticed during review of the
  patch.

  [1]
  https://review.openstack.org/#/c/128131/5/neutron/agent/l3_agent.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1383571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407893] Re: nova-network does not use interface for floating ip-SNAT-rule

2015-08-27 Thread Matt Davis
** Also affects: centos
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407893

Title:
  nova-network does not use interface for floating ip-SNAT-rule

Status in OpenStack Compute (nova):
  Expired
Status in CentOS:
  New

Bug description:
  I created a pool of floating IPs with

  nova-manage floating create --ip_range=10.10.251.8/29  --pool testnetz
  --interface vlan251

  But nova-network does use the default public_interface when creating
  the SNAT-Rule:

  Chain nova-network-float-snat (1 references)
   pkts bytes target  prot opt in   out   source        destination
      0     0 SNAT    all  --  *    *     192.168.90.3  192.168.90.3  to:10.10.251.10
      2   168 SNAT    all  --  *    eth0  192.168.90.3  0.0.0.0/0     to:10.10.251.10

  instead of using the given one.

  Applying this patch
  ---
  *** nova/network/floating_ips.py.orig    Tue Jan  6 10:06:19 2015
  --- nova/network/floating_ips.py Tue Jan  6 10:06:43 2015
  ***************
  *** 90,96 ****
                msg = _('Fixed ip %s not found') % floating_ip.fixed_ip_id
                LOG.debug(msg)
                continue
  !         interface = CONF.public_interface or floating_ip.interface
            try:
                self.l3driver.add_floating_ip(floating_ip.address,
                                              fixed_ip.address,
  --- 90,96 ----
                msg = _('Fixed ip %s not found') % floating_ip.fixed_ip_id
                LOG.debug(msg)
                continue
  !         interface = floating_ip.interface or CONF.public_interface
            try:
                self.l3driver.add_floating_ip(floating_ip.address,
                                              fixed_ip.address,
  ***************
  *** 354,360 ****
        def _associate_floating_ip(self, context, floating_address, fixed_address,
                                   interface, instance_uuid):
            """Performs db and driver calls to associate floating ip & fixed ip."""
  !         interface = CONF.public_interface or interface

            @utils.synchronized(unicode(floating_address))
            def do_associate():
  --- 354,360 ----
        def _associate_floating_ip(self, context, floating_address, fixed_address,
                                   interface, instance_uuid):
            """Performs db and driver calls to associate floating ip & fixed ip."""
  !         interface = interface or CONF.public_interface

            @utils.synchronized(unicode(floating_address))
            def do_associate():
  ***************
  *** 602,608 ****
            floating_ip.host = dest
            floating_ip.save()

  !         interface = CONF.public_interface or floating_ip.interface
            fixed_ip = floating_ip.fixed_ip
            self.l3driver.add_floating_ip(floating_ip.address,
                                          fixed_ip.address,
  --- 602,608 ----
            floating_ip.host = dest
            floating_ip.save()

  !         interface = floating_ip.interface or CONF.public_interface
            fixed_ip = floating_ip.fixed_ip
            self.l3driver.add_floating_ip(floating_ip.address,
                                          fixed_ip.address,
  ---

  changes this to the expected behavior: use the default only if none is
  given on creation of the floating ip pool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID 255 not recognized

2015-08-27 Thread Matt Riedemann
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID 255 not recognized

Status in Cinder:
  New
Status in Cinder kilo series:
  In Progress
Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in os-brick:
  Fix Committed

Bug description:
  (s390 architecture/System z Series only) FC LUNs with LUN ID 255 are not
  recognized by either Cinder or Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the
  unit_add command with a properly formatted LUN string.
  The string is set correctly for LUN IDs <= 0xff, but not for LUN IDs in
  the range between 0xff and 0xffff.
  Due to this the volumes do not get properly added to the hypervisor
  configuration and the hypervisor does not find them.

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!
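
  A sketch of the formatting at issue (the exact 64-bit FCP LUN layout here
  is an assumption for illustration, not copied from the os-brick patch):
  the LUN ID occupies the top 16 bits of the unit_add string.

    def fcp_lun_string(lun_id):
        # Encode the LUN ID into the leading 16 bits of the 64-bit LUN.
        if not 0 <= lun_id <= 0xffff:
            raise ValueError('LUN ID out of range for this encoding')
        return '0x%04x000000000000' % lun_id

    print(fcp_lun_string(0xff))    # 0x00ff000000000000
    print(fcp_lun_string(0x100))   # 0x0100000000000000: the range that broke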

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407893] Re: nova-network does not use interface for floating ip-SNAT-rule

2015-08-27 Thread Matt Davis
This is still an issue... has anyone looked at this?

Here is my patch (which seems to be working). It also has the effect of
allowing me to allocate floating IPs to different public networks (which
was my objective).

** No longer affects: centos

** Patch added: floating_ips.py.patch
   
https://bugs.launchpad.net/nova/+bug/1407893/+attachment/4453496/+files/floating_ips.py.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407893

Title:
  nova-network does not use interface for floating ip-SNAT-rule

Status in OpenStack Compute (nova):
  Expired

Bug description:
  I created a pool of floating IPs with

  nova-manage floating create --ip_range=10.10.251.8/29  --pool testnetz
  --interface vlan251

  But nova-network does use the default public_interface when creating
  the SNAT-Rule:

  Chain nova-network-float-snat (1 references)
   pkts bytes target  prot opt in   out   source        destination
      0     0 SNAT    all  --  *    *     192.168.90.3  192.168.90.3  to:10.10.251.10
      2   168 SNAT    all  --  *    eth0  192.168.90.3  0.0.0.0/0     to:10.10.251.10

  instead of using the given one.

  Applying this patch
  ---
  *** nova/network/floating_ips.py.orig    Tue Jan  6 10:06:19 2015
  --- nova/network/floating_ips.py Tue Jan  6 10:06:43 2015
  ***************
  *** 90,96 ****
                msg = _('Fixed ip %s not found') % floating_ip.fixed_ip_id
                LOG.debug(msg)
                continue
  !         interface = CONF.public_interface or floating_ip.interface
            try:
                self.l3driver.add_floating_ip(floating_ip.address,
                                              fixed_ip.address,
  --- 90,96 ----
                msg = _('Fixed ip %s not found') % floating_ip.fixed_ip_id
                LOG.debug(msg)
                continue
  !         interface = floating_ip.interface or CONF.public_interface
            try:
                self.l3driver.add_floating_ip(floating_ip.address,
                                              fixed_ip.address,
  ***************
  *** 354,360 ****
        def _associate_floating_ip(self, context, floating_address, fixed_address,
                                   interface, instance_uuid):
            """Performs db and driver calls to associate floating ip & fixed ip."""
  !         interface = CONF.public_interface or interface

            @utils.synchronized(unicode(floating_address))
            def do_associate():
  --- 354,360 ----
        def _associate_floating_ip(self, context, floating_address, fixed_address,
                                   interface, instance_uuid):
            """Performs db and driver calls to associate floating ip & fixed ip."""
  !         interface = interface or CONF.public_interface

            @utils.synchronized(unicode(floating_address))
            def do_associate():
  ***************
  *** 602,608 ****
            floating_ip.host = dest
            floating_ip.save()

  !         interface = CONF.public_interface or floating_ip.interface
            fixed_ip = floating_ip.fixed_ip
            self.l3driver.add_floating_ip(floating_ip.address,
                                          fixed_ip.address,
  --- 602,608 ----
            floating_ip.host = dest
            floating_ip.save()

  !         interface = floating_ip.interface or CONF.public_interface
            fixed_ip = floating_ip.fixed_ip
            self.l3driver.add_floating_ip(floating_ip.address,
                                          fixed_ip.address,
  ---

  changes this to the expected behavior: use the default only if none is
  given on creation of the floating ip pool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482101] Re: No response is received while changing image's status

2015-08-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/215793
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=2ed6a71c5fe526b7a0a5e718e8cd636e5d2cb424
Submitter: Jenkins
Branch: master

commit 2ed6a71c5fe526b7a0a5e718e8cd636e5d2cb424
Author: Diane Fleming difle...@cisco.com
Date:   Fri Aug 21 16:43:32 2015 -0500

Update response code to 204 for reactivate and deactivate image

Also, made a few small edits

Change-Id: I34105fbcf3ceee994751f6bc4b9b4b19d742ff5d
Closes-Bug: #1482101


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482101

Title:
  No response is received while changing image's status

Status in Glance:
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  While changing the status of any image in glance no status code or
  response is returned.

  Steps to reproduce:-

  1. Get a token using curl request using admin's credentials.
  2. Deactivate any image using curl POST request.
  3. Check the status of same image using curl request.
  4. Reactivate the same image using curl POST request.
  5. Check the status of same image using curl request.

  
  Observations:
  1. On both status changes, the Glance API doesn't return any status code
  or status message showing success or failure of the action.
  2. But when checking the status of the image with a curl request, it
  shows the status change was successful.

  
  Requirement:

  1. This needs to be fixed, as otherwise it is not possible for a user to
  confirm whether the action succeeded.
  2. For automation purposes, no test case based on an image's status
  change can be automated if no status is returned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489581] Re: test_create_ebs_image_and_check_boot is race failing

2015-08-27 Thread Matt Riedemann
Adding nova at low priority since, in the attach of the snapshot bdm
here:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/block_device.py#n329

nova could be checking the status of the snapshot and waiting for it to
become available (or time out), like it does with the wait_func in that
method when checking that the volume it created is available before
returning from the attach method.
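
As a rough sketch (a hypothetical helper, not the actual wait_func wiring
in block_device.py; volume_api.get_snapshot is assumed to mirror nova's
cinder wrapper), the wait could look like:

    import time

    def wait_for_snapshot(volume_api, context, snapshot_id,
                          timeout=60, interval=2):
        # Poll until the snapshot leaves 'creating', as is already done
        # for volumes before the bdm attach returns.
        deadline = time.time() + timeout
        while time.time() < deadline:
            snapshot = volume_api.get_snapshot(context, snapshot_id)
            if snapshot['status'] == 'available':
                return snapshot
            if snapshot['status'] == 'error':
                raise RuntimeError('snapshot %s went to error' % snapshot_id)
            time.sleep(interval)
        raise RuntimeError('timed out waiting for snapshot %s' % snapshot_id)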

** Changed in: tempest
   Importance: Undecided => High

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

** Tags added: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489581

Title:
  test_create_ebs_image_and_check_boot is race failing

Status in OpenStack Compute (nova):
  Triaged
Status in tempest:
  In Progress

Bug description:
  http://logs.openstack.org/97/217197/3/check/gate-tempest-dsvm-full-ceph/cb1771f/console.html#_2015-08-27_16_17_42_279

  noted here:

  
https://review.openstack.org/#/c/213621/13/tempest/scenario/test_volume_boot_pattern.py

  patch: https://review.openstack.org/#/c/217804/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361413] Re: LBaaS documentation is outdated , shows listeners instead of VIPs

2015-08-27 Thread Aditi Rajagopal
Marking the neutron section of this bug as invalid. The real fix
needed to go into openstack-api-site.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361413

Title:
  LBaaS documentation is outdated , shows listeners instead of VIPs

Status in neutron:
  Invalid
Status in openstack-api-site:
  Fix Released

Bug description:
  The documentation for the LBaaS REST API endpoints listed on the official
  docs website does not match the REST API exposed by neutron.
  Documentation URL:
  http://developer.openstack.org/api-ref-networking-v2.html#lbaas

  In the API docs there is a reference to /listeners. However, neutron
  doesn't have an API for /listeners, it only has an API for /vips

  Below is a curl command demonstrating the issue:
  Listing VIPs: *WORKS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/vips -X GET -H "X-Auth-Token: 5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"

  Listing Listeners: *FAILS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/listeners -X GET -H "X-Auth-Token: 5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-neutronclient"

  
  Openstack icehouse deployment.
  Running neutron version 2.3.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489609] [NEW] DB Model migrate: move cisco db l3_models to networking-cisco

2015-08-27 Thread Shweta P
Public bug reported:

Move neutron/plugins/cisco/db/l3/l3_models.py from neutron to
networking-cisco

** Affects: neutron
 Importance: Undecided
 Assignee: Shweta P (shweta-ap05)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Shweta P (shweta-ap05)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489609

Title:
  DB Model migrate: move cisco db l3_models to networking-cisco

Status in neutron:
  In Progress

Bug description:
  Move neutron/plugins/cisco/db/l3/l3_models.py from neutron to
  networking-cisco

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489609/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489611] [NEW] Error in console when opening ng launch instance wizard

2015-08-27 Thread Justin Pomeroy
Public bug reported:

Recently the strings for the Source step of the angular Launch Instance
wizard were moved out of the controller into the HTML.  At the same time
the help controller was removed because it was no longer needed.
However, the reference to the controller in the HTML was not removed so
an error is displayed in the browser console when opening the wizard.

Error: [ng:areq] Argument 'LaunchInstanceSourceHelpController' is not a
function, got undefined...

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489611

Title:
  Error in console when opening ng launch instance wizard

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Recently the strings for the Source step of the angular Launch
  Instance wizard were moved out of the controller into the HTML.  At
  the same time the help controller was removed because it was no longer
  needed.  However, the reference to the controller in the HTML was not
  removed so an error is displayed in the browser console when opening
  the wizard.

  Error: [ng:areq] Argument 'LaunchInstanceSourceHelpController' is not
  a function, got undefined...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489618] [NEW] Available items count in ng launch instance does not update when changing source

2015-08-27 Thread Justin Pomeroy
Public bug reported:

In the angular Launch Instance wizard, when changing the boot source the
count displayed next to the table of available items does not update.
For example, by default the boot source is set to Image and the count
displayed reflects the number of images.  If you change the boot source
to Volume the count does not update to reflect the number of volumes
available.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489618

Title:
  Available items count in ng launch instance does not update when
  changing source

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the angular Launch Instance wizard, when changing the boot source
  the count displayed next to the table of available items does not
  update.  For example, by default the boot source is set to Image and
  the count displayed reflects the number of images.  If you change the
  boot source to Volume the count does not update to reflect the number
  of volumes available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489619] [NEW] Selected item disappears in ng launch instance after changing source

2015-08-27 Thread Justin Pomeroy
Public bug reported:

On the Select Source page of the angular Launch Instance wizard, if you
move an item from the Available table into the Allocated table and then
change the Boot Source, the selected item disappears.  If you switch
back to the previous boot source that item is no longer found in the
Available table and does not seem to show up again until closing and re-
opening the wizard.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489619

Title:
  Selected item disappears in ng launch instance after changing source

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  On the Select Source page of the angular Launch Instance wizard, if
  you move an item from the Available table into the Allocated table and
  then change the Boot Source, the selected item disappears.  If you
  switch back to the previous boot source that item is no longer found
  in the Available table and does not seem to show up again until
  closing and re-opening the wizard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489627] [NEW] Incorrect use of os.path.join() in nova/api/openstack/common.py

2015-08-27 Thread Ed Leafe
Public bug reported:

Three of the link manipulation methods in nova/api/openstack/common.py
rejoin the URL parts by using os.path.join(). This is incorrect, as it
is OS-dependent, and can result in invalid URLs under Windows. Generally
the urlparse module would be the best choice, but since these URL
fragments aren't created with urlparse.urlparse() or
urlparse.urlsplit(), the equivalent reconstruction methods in that
module won't work. It is simpler and cleaner to just use "/".join().

Additionally, there are no unit tests for these methods, so tests will
have to be added first before we can fix the methods, so that we have
some assurance that we are not breaking anything.
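
A quick demo of the portability problem (the URL below is illustrative;
ntpath stands in for os.path on Windows, so this runs anywhere):

    import ntpath
    import posixpath

    parts = ('http://compute.example.com/v2.1', 'servers', 'detail')
    print(ntpath.join(*parts))     # ...v2.1\servers\detail  (broken URL)
    print(posixpath.join(*parts))  # ...v2.1/servers/detail
    print('/'.join(parts))         # same URL, with no OS dependence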

** Affects: nova
 Importance: Undecided
 Assignee: Ed Leafe (ed-leafe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ed Leafe (ed-leafe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489627

Title:
  Incorrect use of os.path.join() in nova/api/openstack/common.py

Status in OpenStack Compute (nova):
  New

Bug description:
  Three of the link manipulation methods in nova/api/openstack/common.py
  rejoin the URL parts by using os.path.join(). This is incorrect, as it
  is OS-dependent, and can result in invalid URLs under Windows.
  Generally the urlparse module would be the best choice, but since
  these URL fragments aren't created with urlparse.urlparse() or
  urlparse.urlsplit(), the equivalent reconstruction methods in that
  module won't work. It is simpler and cleaner to just use "/".join().

  Additionally, there are no unit tests for these methods, so tests will
  have to be added first before we can fix the methods, so that we have
  some assurance that we are not breaking anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489650] [NEW] Prefix delegation testing issues

2015-08-27 Thread Assaf Muller
Public bug reported:

The pd, dibbler and agent side changes lack functional tests. There is
no test that validates that the entire feature works (Full stack or
Tempest).

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Prefix delegation agent-side functional testing non-existent
+ Prefix delegation testing issues

** Description changed:

- The pd, dibbler and agent side changes lack functional tests.
+ The pd, dibbler and agent side changes lack functional tests. There is
+ no test that validates that the entire feature works (Full stack or
+ Tempest).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489650

Title:
  Prefix delegation testing issues

Status in neutron:
  New

Bug description:
  The pd, dibbler and agent side changes lack functional tests. There is
  no test that validates that the entire feature works (Full stack or
  Tempest).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1132879] Re: server reboot hard and rebuild are flaky in tempest when ssh is enabled

2015-08-27 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1132879

Title:
  server reboot hard and rebuild are flaky in tempest when ssh is
  enabled

Status in OpenStack Compute (nova):
  Won't Fix
Status in tempest:
  Invalid

Bug description:
  Working on enabling back ssh access to VMs in tempest tests:

  https://review.openstack.org/#/c/22415/
  https://blueprints.launchpad.net/tempest/+spec/ssh-auth-strategy

  On the gate devstack with nova networking the hard reboot and rebuild
  test are sometimes passing and sometimes not.

  On the gate devstack with quantum networking the hard reboot and
  rebuild tests are systematically not passing, and blocking the overall
  blueprint implementation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1132879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489716] [NEW] Patching an object which could be gone during test cleanup randomly fails at TestQosPlugin

2015-08-27 Thread Miguel Angel Ajo
Public bug reported:

http://logs.openstack.org/48/217048/1/gate/gate-neutron-
python27/dc83518/testr_results.html.gz

ft1.8256: 
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_bandwidth_limit_rules_for_policy_with_filters_StringException:
 Empty attachments:
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

pythonlogging:'': {{{
2015-08-27 22:16:19,871 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
2015-08-27 22:16:19,872  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
2015-08-27 22:16:19,873 INFO [neutron.manager] Loading Plugin: qos
2015-08-27 22:16:19,877 INFO 
[neutron.services.qos.notification_drivers.manager] Loading message_queue 
(Message queue updates) notification driver for QoS plugin
}}}

Traceback (most recent call last):
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1804, in _patch_stopall
patch.stop()
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1510, in stop
return self.__exit__()
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1480, in __exit__
setattr(self.target, self.attribute, self.temp_original)
ReferenceError: weakly-referenced object no longer exists
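
The underlying failure mode, independent of mock, is a write-back
through a dead weak reference; a minimal standalone reproduction
(hypothetical class, not the plugin code):

import gc
import weakref

class Plugin(object):
    attr = 'original'

plugin = Plugin()
proxy = weakref.proxy(plugin)
saved = proxy.attr   # reading through the proxy works while alive
del plugin
gc.collect()
proxy.attr = saved   # ReferenceError: weakly-referenced object no
                     # longer exists -- the same setattr mock's
                     # __exit__ performs on a collected target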

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489716

Title:
  Patching an object which could be gone during test cleanup randomly
  fails at TestQosPlugin

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/48/217048/1/gate/gate-neutron-
  python27/dc83518/testr_results.html.gz

  ft1.8256: 
neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_get_policy_bandwidth_limit_rules_for_policy_with_filters_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stderr
stdout

  pythonlogging:'': {{{
  2015-08-27 22:16:19,871 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
  2015-08-27 22:16:19,872  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
  2015-08-27 22:16:19,873 INFO [neutron.manager] Loading Plugin: qos
  2015-08-27 22:16:19,877 INFO 
[neutron.services.qos.notification_drivers.manager] Loading message_queue 
(Message queue updates) notification driver for QoS plugin
  }}}

  Traceback (most recent call last):
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1804, in _patch_stopall
  patch.stop()
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1510, in stop
  return self.__exit__()
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py,
 line 1480, in __exit__
  setattr(self.target, self.attribute, self.temp_original)
  ReferenceError: weakly-referenced object no longer exists

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483104] Re: VMware: host-describe only return the result of one VC cluster when we map one nova-compute to multiple VC cluster

2015-08-27 Thread Matt Riedemann
Per commit 2f7403bd7200a01e350cde9182c273562e0c9c62 we no longer support
the 1:M mapping described here.

** Changed in: nova
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483104

Title:
  VMware: host-describe only return the result of one VC cluster when we
  map one nova-compute to multiple VC cluster

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We use the nova vCenter driver and configure one nova-compute to manage
  two VC clusters, like this:

  CONF.vmware.cluster_name = ClusterA, ClusterB

  When we use 'nova host-describe' to get statistical information (total,
  used_now) for the nova-compute host, it only returns the result of one
  VC cluster.

  We checked objects.ComputeNode.get_first_node_by_host_for_old_compat();
  it only returns the first compute node and ignores the others.

  @base.remotable_classmethod
  def get_first_node_by_host_for_old_compat(cls, context, host,
                                            use_slave=False):
      computes = ComputeNodeList.get_all_by_host(context, host, use_slave)
      # FIXME(sbauza): Some hypervisors (VMware, Ironic) can return multiple
      # nodes per host, we should return all the nodes and modify the
      # callers instead.
      # Arbitrarily returning the first node.
      return computes[0]

  Code base:

  $ git log -1
  commit 744f98c8e5bc911a87cbdee28d6ccc3d914c8238
  Merge: cdfd489 0adde67
  Author: Jenkins jenk...@review.openstack.org
  Date:   Sun Aug 9 19:14:36 2015 +

  Merge libvirt: convert GlusterFS driver to
  LibvirtBaseFileSystemVolumeDriver

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489690] [NEW] neutron-openvswitch-agent leak sg iptables rules

2015-08-27 Thread Zhangqi Chen
Public bug reported:

In the function 'treat_devices_added_or_updated', a port that no longer
exists on 'br-int' is added to 'skipped_devices' and returned to the
parent function 'process_network_ports', which then removes these ports
from port_info['current'].
If a port is updated due to 'sg_member' but is deleted just as
'treat_devices_added_or_updated' runs, the port is added to
'skipped_devices' and then removed from port_info['current']. On the next
'scan_ports' the port is not in 'registered_ports', so it is never added
to port_info['removed'], and its chains and rules are never removed.
These stale chains and rules stay in iptables until the ovs-agent or the
compute node is restarted.
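
A minimal sketch of the bookkeeping gap with plain sets (illustrative
only, not the actual agent code):

registered_ports = {'port-a'}            # agent state after iteration N-1

# iteration N: the port is still on br-int when scanned ...
cur_ports = {'port-a'}
removed = registered_ports - cur_ports   # {} -- nothing to clean up yet
# ... but it is deleted while being treated:
skipped_devices = {'port-a'}
cur_ports -= skipped_devices             # process_network_ports() prunes it
registered_ports = cur_ports             # agent remembers the pruned set

# iteration N+1: the port is gone from br-int entirely
cur_ports = set()
removed = registered_ports - cur_ports   # {} again -- the port never shows
                                         # up in 'removed', so its iptables
                                         # chains leak until agent restart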

** Affects: neutron
 Importance: Undecided
 Assignee: Zhangqi Chen (chenzhangqi79)
 Status: New


** Tags: sg-fw

** Changed in: neutron
 Assignee: (unassigned) = Zhangqi Chen (chenzhangqi79)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489690

Title:
  neutron-openvswitch-agent leak sg iptables rules

Status in neutron:
  New

Bug description:
  In the function 'treat_devices_added_or_updated', a port that no longer
  exists on 'br-int' is added to 'skipped_devices' and returned to the
  parent function 'process_network_ports', which then removes these ports
  from port_info['current'].
  If a port is updated due to 'sg_member' but is deleted just as
  'treat_devices_added_or_updated' runs, the port is added to
  'skipped_devices' and then removed from port_info['current']. On the next
  'scan_ports' the port is not in 'registered_ports', so it is never added
  to port_info['removed'], and its chains and rules are never removed.
  These stale chains and rules stay in iptables until the ovs-agent or the
  compute node is restarted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 948520] Re: nova-rootwrap does a poor job of validating parameters

2015-08-27 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/948520

Title:
  nova-rootwrap does a poor job of validating parameters

Status in Cinder:
  Triaged
Status in OpenStack Compute (nova):
  Invalid
Status in oslo-incubator:
  Invalid

Bug description:
  Although nova-rootwrap does limit which commands can be run as root,
  it doesn't validate the parameters passed through.

  For example,  '/bin/dd' is allowed by by 'nova/rootwrap/compute.py'
  which can be exploited in the following fashion:

  'sudo nova-rootwrap dd if=/tmp/mypw of=/etc/passwd'

  This means that if someone can get nova user access they can gain root
  access.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/948520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357063] Re: nova.virt.driver Emitting event log message in stable/icehouse doesn't show anything

2015-08-27 Thread Davanum Srinivas (DIMS)
Icehouse is EOL - https://wiki.openstack.org/wiki/Releases

** Changed in: nova
   Status: Confirmed = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357063

Title:
  nova.virt.driver Emitting event log message in stable/icehouse
  doesn't show anything

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  This is fixed on master with commit
  8c98b601f2db1f078d5f42ab94043d9939608f73 but is useless on
  stable/icehouse, here is an example snip from a stable/icehouse
  tempest run of what this looks like in the n-cpu log:

  2014-08-14 16:18:53.311 473 DEBUG nova.virt.driver [-] Emitting event
  emit_event /opt/stack/new/nova/nova/virt/driver.py:1207

  It would be really nice to use that information in trying to debug
  what's causing all of these hits for InstanceInfoCacheNotFound stack
  traces:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXhjZXB0aW9uIGRpc3BhdGNoaW5nIGV2ZW50XCIgQU5EIG1lc3NhZ2U6XCJJbmZvIGNhY2hlIGZvciBpbnN0YW5jZVwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIE5PVCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODA0NzMxMzM5Nn0=

  We should backport that repr fix to stable/icehouse for serviceability
  purposes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384127] Re: Fail-fast if initctl isnt present

2015-08-27 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384127

Title:
  Fail-fast if initctl isnt present

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  If initctl isn't present in the image, a shutdown will take (by
  default) 60 seconds to time out.

  In the case of initctl not being present, we figure this out
  immediately by way of this error from libvirt:

  error: Operation not supported: Container does not provide an initctl
  pipe

  If we detect this, we should abort the retry loop immediately, and
  switch to the unclean shutdown (_destroy_instance).
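
  A minimal sketch of the proposed fail-fast, assuming the libvirt error
  text quoted above is stable (illustrative only, not nova's code):

  import libvirt

  def clean_shutdown_or_bail(dom, destroy_fallback):
      try:
          dom.shutdown()          # ask the guest for a clean shutdown
      except libvirt.libvirtError as e:
          if 'initctl' in (e.get_error_message() or ''):
              # a clean shutdown can never succeed in this container;
              # skip the 60-second retry loop and destroy immediately
              return destroy_fallback()
          raise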

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489703] [NEW] Styles and Assets No Longer Necessary

2015-08-27 Thread Diana Whitten
Public bug reported:

There are two styles in horizon.scss that look like they are no longer
used.  In addition, one of the rules is pulling in an asset,
right_droparrow.png that is only used in this style.  It looks like all
of these can be removed.

** Affects: horizon
 Importance: Undecided
 Assignee: bryjen (bryan-jen)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489703

Title:
  Styles and Assets No Longer Necessary

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are two styles in horizon.scss that look like they are no longer
  used.  In addition, one of the rules is pulling in an asset,
  right_droparrow.png that is only used in this style.  It looks like
  all of these can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1037227] Re: Ordering of extensions can affect the correctness of extensions

2015-08-27 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Confirmed = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1037227

Title:
  Ordering of extensions can affect the correctness of extensions

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Bug 1037201 appears to have found a bug in the disk_config extension
  that has been around for a while. It doesn't check if the 'server' key
  exists before attempting to use it.

  There is a unit test that checks that a 400 result code is returned,
  and it passes (mostly, see the aforementioned bug).

  It mostly works because of other extensions (such as scheduler_hints)
  that might possibly check for the missing 'server' key before the
  disk_config extensions.

  Extensions should be checked independently to ensure that these sorts
  of bugs don't get hidden.
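
  A minimal sketch of the independent guard each extension needs
  (hypothetical handler, not the actual disk_config code):

  import webob.exc

  def extension_handler(body):
      # validate independently; never rely on an earlier extension
      # having already rejected a malformed request
      if not isinstance(body, dict) or 'server' not in body:
          raise webob.exc.HTTPBadRequest(explanation="'server' is required")
      return body['server']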

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1037227/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489694] [NEW] neutron-usage-audit lack audit period time

2015-08-27 Thread Deliang Fan
Public bug reported:

When neutron-usage-audit is executed, audit_period_beginning and
audit_period_ending are not included in the notification message; these
fields are important for events such as network.exists, router.exists, etc.
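
A minimal sketch of the missing fields (hypothetical emit helper; the
audit_period_* keys mirror what other OpenStack *.exists notifications
carry):

def emit_network_exists(notifier, context, network, period_start, period_end):
    payload = {
        'network': network,
        'audit_period_beginning': str(period_start),
        'audit_period_ending': str(period_end),
    }
    notifier.info(context, 'network.exists', payload)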

** Affects: neutron
 Importance: Undecided
 Assignee: Deliang Fan (vanderliang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Deliang Fan (vanderliang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489694

Title:
  neutron-usage-audit lack audit period time

Status in neutron:
  New

Bug description:
  When neutron-usage-audit is executed, audit_period_beginning and
  audit_period_ending are not included in the notification message; these
  fields are important for events such as network.exists, router.exists, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489678] [NEW] return inappropriate message when run net-list -F {fields} from neutronclient

2015-08-27 Thread zhaobo
Public bug reported:

version: 2.6.0

Running neutron net-list -F with a field name the resource does not
contain returns an inappropriate table; the expected behavior is to
return nothing.

normal:
neutron net-list
+--+--+--+
| id   | name | subnets 
 |
+--+--+--+
| d6424edb-103f-4aa8-8098-86199630bb1c | public   | 
309abcb3-da67-4948-9b26-f8788057d685 172.24.4.0/24   |
|  |  | 
a69a8e17-21ff-4170-a6ff-6eb59c7932af 2001:db8::/64   |
| 38653292-71cb-4dd4-b55f-750224ce484a | private  | 
9d334a61-dbb8-4271-9a06-8f1dd5a933c1 10.0.0.0/24 |
|  |  | 
aed807fb-4167-4388-bdfe-2b9c0843e0cd fdca:d8ff:39ca::/64 |
+--+--+--+


repro:
run neutron net-list -F abc
it returned:
++
||
++

++

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489678

Title:
  return inappropriate message when run net-list -F {fields} from
  neutronclient

Status in neutron:
  New

Bug description:
  version: 2.6.0

  Running neutron net-list -F with a field name the resource does not
  contain returns an inappropriate table; the expected behavior is to
  return nothing.

  normal:
  neutron net-list
  
+--+--+--+
  | id   | name | subnets   
   |
  
+--+--+--+
  | d6424edb-103f-4aa8-8098-86199630bb1c | public   | 
309abcb3-da67-4948-9b26-f8788057d685 172.24.4.0/24   |
  |  |  | 
a69a8e17-21ff-4170-a6ff-6eb59c7932af 2001:db8::/64   |
  | 38653292-71cb-4dd4-b55f-750224ce484a | private  | 
9d334a61-dbb8-4271-9a06-8f1dd5a933c1 10.0.0.0/24 |
  |  |  | 
aed807fb-4167-4388-bdfe-2b9c0843e0cd fdca:d8ff:39ca::/64 |
  
+--+--+--+

  
  repro:
  run neutron net-list -F abc
  it returned:
  ++
  ||
  ++

  ++

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489374] [NEW] neutron-sanity-check returns error while importing ml2_sriov cfg opts

2015-08-27 Thread Sridhar Gaddam
Public bug reported:

neutron-sanity-check -h
No handlers could be found for logger neutron.quota
Traceback (most recent call last):
  File /usr/local/bin/neutron-sanity-check, line 6, in module
from neutron.cmd.sanity_check import main
  File /opt/stack/neutron/neutron/cmd/sanity_check.py, line 37, in module
'neutron.plugins.ml2.drivers.mech_sriov.mech_driver')
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line 2089, 
in import_group
self._get_group(group)
  File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line 2407, 
in _get_group
raise NoSuchGroupError(group_name)
oslo_config.cfg.NoSuchGroupError: no such group: ml2_sriov
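
A minimal sketch of what import_group() expects (hypothetical option
name; the imported module must register the group as a side effect of
being imported):

from oslo_config import cfg

cfg.CONF.register_opts(
    [cfg.ListOpt('supported_pci_vendor_devs', default=[])],
    group='ml2_sriov')                          # the group now exists

cfg.CONF.import_group('ml2_sriov', __name__)    # no NoSuchGroupError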

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Sridhar Gaddam (sridhargaddam)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489374

Title:
  neutron-sanity-check returns error while importing ml2_sriov cfg opts

Status in neutron:
  In Progress

Bug description:
  neutron-sanity-check -h
  No handlers could be found for logger neutron.quota
  Traceback (most recent call last):
File /usr/local/bin/neutron-sanity-check, line 6, in module
  from neutron.cmd.sanity_check import main
File /opt/stack/neutron/neutron/cmd/sanity_check.py, line 37, in module
  'neutron.plugins.ml2.drivers.mech_sriov.mech_driver')
File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line 
2089, in import_group
  self._get_group(group)
File /usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py, line 
2407, in _get_group
  raise NoSuchGroupError(group_name)
  oslo_config.cfg.NoSuchGroupError: no such group: ml2_sriov

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489283] [NEW] inapt spelling of a word

2015-08-27 Thread JuPing
Public bug reported:

There is an incorrect spelling in the file
nova/doc/source/v2/2.0_general_info.rst:
  line5:...API is defined as a ReSTful HTTP service...
  line39:-  ReSTful web services
The word 'ReSTful' should be spelled 'RESTful'.

** Affects: nova
 Importance: Undecided
 Assignee: JuPing (jup-fnst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = JuPing (jup-fnst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489283

Title:
  inapt spelling of a word

Status in OpenStack Compute (nova):
  New

Bug description:
  There is an incorrect spelling in the file
  nova/doc/source/v2/2.0_general_info.rst:
    line5:...API is defined as a ReSTful HTTP service...
    line39:-  ReSTful web services
  The word 'ReSTful' should be spelled 'RESTful'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489291] [NEW] Add tags to neutron resources

2015-08-27 Thread Gal Sagie
Public bug reported:

In most popular REST API interfaces, objects in the domain model can be
tagged with zero or more simple strings. These strings may then be used
to group and categorize objects in the domain model.

Neutron resources in the current DB model do not contain any tags, and
there is no generic, consistent way for the user to add tags and/or any
other data.
Adding tags to resources can be useful for management and
orchestration in OpenStack, if it is done at the API level
and IS NOT backend-specific data.

The following use cases refer to adding tags to networks, but the same
can be applicable to any other Neutron resource (core resource and router):

1) Ability to map different networks in different OpenStack locations
   to one logically same network (for Multi site OpenStack)

2) Ability to map Id's from different management/orchestration systems to
   OpenStack networks in mixed environments, for example for project Kuryr,
map docker network id to neutron network id

3) Leverage tags by deployment tools

spec : https://review.openstack.org/#/c/216021/

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) = Gal Sagie (gal-sagie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489291

Title:
  Add tags to neutron resources

Status in neutron:
  New

Bug description:
  In most popular REST API interfaces, objects in the domain model can be
  tagged with zero or more simple strings. These strings may then be used
  to group and categorize objects in the domain model.

  Neutron resources in the current DB model do not contain any tags, and
  there is no generic, consistent way for the user to add tags and/or any
  other data.
  Adding tags to resources can be useful for management and
  orchestration in OpenStack, if it is done at the API level
  and IS NOT backend-specific data.

  The following use cases refer to adding tags to networks, but the same
  can be applicable to any other Neutron resource (core resource and router):

  1) Ability to map different networks in different OpenStack locations
 to one logically same network (for Multi site OpenStack)

  2) Ability to map Id's from different management/orchestration systems to
 OpenStack networks in mixed environments, for example for project Kuryr,
  map docker network id to neutron network id

  3) Leverage tags by deployment tools

  spec : https://review.openstack.org/#/c/216021/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488809] Re: [Juno][UCA] Non default configuration sections ignored for nova.conf

2015-08-27 Thread Davanum Srinivas (DIMS)
So https://wiki.openstack.org/wiki/ReleaseNotes/2014.1.5 shows that The
2014.1.5 release is a Icehouse bugfix update, so that's icehouse and
NOT juno. If you inspect for example nova/network/neutronv2/api.py in
your environment, you will see

cfg.StrOpt('neutron_admin_username',
  help='Username for connecting to neutron in admin 
context'),

and NOT

cfg.StrOpt('admin_username',
   help='Username for connecting to neutron in admin context',
   deprecated_group='DEFAULT',
   deprecated_name='neutron_admin_username'),

Hope that helps.
Dims
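
For context, a minimal sketch of how the newer option style resolves
either section (illustrative; option names as in the snippet above):

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.StrOpt('admin_username',
               deprecated_group='DEFAULT',
               deprecated_name='neutron_admin_username'),
], group='neutron')

# With this registration, either stanza is honored:
#   [neutron]                      [DEFAULT]
#   admin_username = admin    or   neutron_admin_username = admin
print(CONF.neutron.admin_username)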

** Changed in: oslo.config
 Assignee: (unassigned) = Davanum Srinivas (DIMS) (dims-v)

** Changed in: nova
 Assignee: (unassigned) = Davanum Srinivas (DIMS) (dims-v)

** Changed in: oslo.config
   Importance: Undecided = Medium

** Changed in: nova
   Importance: Undecided = Medium

** Changed in: oslo.config
   Status: New = Invalid

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488809

Title:
  [Juno][UCA] Non default configuration sections ignored for nova.conf

Status in ubuntu-cloud-archive:
  New
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  Non-default configuration sections [glance], [neutron] are ignored in
  nova.conf when installed from UCA packages:

  How to reproduce:
  1) Install and configure OpenStack Juno Nova with Neutron at compute node 
using UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages):
  python-oslo.config 1:1.2.1-0ubuntu2
  python-oslo.messaging 1.3.0-0ubuntu1.2
  python-oslo.rootwrap 1.2.0-0ubuntu1
  nova-common 1:2014.1.5-0ubuntu1.2
  python-nova 1:2014.1.5-0ubuntu1.2
  neutron-common 1:2014.1.5-0ubuntu1

  /etc/nova/nova.conf example:
  [DEFAULT]
  debug=True
  ...
  [glance]
  api_servers=10.0.0.3:9292

  [neutron]
  admin_auth_url=http://10.0.0.3:5000/v2.0
  admin_username=admin
  admin_tenant_name=services
  admin_password=admin
  url=http://10.0.0.3:9696
  ...

  2) From nova log, check which values has been applied:
  # grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
  2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

  Expected:
  configuration options to be applied from [glance], [neutron] sections 
according to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

  Actual:
  Defaults for the deprecated options were applied from the [DEFAULT] section 
instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1488809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449639] Re: RBD: On image creation error, image is not deleted

2015-08-27 Thread Flavio Percoco
** Also affects: glance-store/kilo
   Importance: Undecided
   Status: New

** Changed in: glance-store/kilo
   Status: New = In Progress

** Changed in: glance-store/kilo
   Importance: Undecided = Medium

** Changed in: glance-store/kilo
 Assignee: (unassigned) = Gorka Eguileor (gorka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1449639

Title:
  RBD: On image creation error, image is not deleted

Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in glance_store:
  Fix Released
Status in glance_store kilo series:
  In Progress

Bug description:
  When an exception rises while adding/creating an image, and the image
  has been created, this new image is not properly deleted.

  The fault lies in the `_delete_image` call of the Store.add method
  that is providing incorrect arguments.

  This also affects Glance (Icehouse), since back then glance_store
  functionality was included there.
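
  A minimal sketch of the cleanup-on-failure pattern at issue
  (hypothetical names; the real fix corrects the _delete_image()
  arguments):

  def add(self, image_id, image_file, image_size):
      loc = self._create_image(image_id, image_size)   # image now exists
      try:
          self._write_data(loc, image_file)
      except Exception:
          # clean up with the arguments _delete_image() actually expects
          self._delete_image(loc.image, loc.snapshot)
          raise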

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1449639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489268] Re: [VPNaaS] DVR unit tests in VPNaaS failing

2015-08-27 Thread venkata anil
There is a fix already in progress.
https://review.openstack.org/#q,I297f550a824785061d43237a98f079d7b0fa99ab,n,z

so marking this bug as invalid.

** Changed in: neutron
   Status: In Progress = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489268

Title:
  [VPNaaS] DVR unit tests in VPNaaS failing

Status in neutron:
  Invalid

Bug description:
  VPNaaS unit tests for DVR are failing with below error

  AttributeError: 'DvrEdgeRouter' object has no attribute
  'create_snat_namespace'

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
  ipsec_process)
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
  self._make_dvr_edge_router_info_for_test()
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
  router.create_snat_namespace()
  AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'

  
  The following 12 test cases related to dvr_edge_router are failing

  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461678] Re: nova error handling causes glance to keep unlinked files open, wasting space

2015-08-27 Thread Erno Kuvaja
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
   Status: New = Fix Committed

** Changed in: python-glanceclient
   Importance: Undecided = Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461678

Title:
  nova error handling causes glance to keep unlinked files open, wasting
  space

Status in OpenStack Compute (nova):
  Fix Committed
Status in python-glanceclient:
  Fix Committed

Bug description:
  When creating larger glance images (like a 10GB CentOS7 image), if we
run into a situation where we run out of room on the destination device,
  we cannot recover the space from glance. glance-api will have open
  unlinked files, so a TONNE of space is unavailable until we restart
  glance-api.

  Nova will try to reschedule the instance 3 times, so you should see this in 
nova-conductor.log:
  u'RescheduledException: Build of instance 
98ca2c0d-44b2-48a6-b1af-55f4b2db73c1 was re-scheduled: [Errno 28] No space left 
on device\n']

  The problem is this code in
  nova.image.glance.GlanceImageService.download():

  if data is None:
      return image_chunks
  else:
      try:
          for chunk in image_chunks:
              data.write(chunk)
      finally:
          if close_file:
              data.close()

  image_chunks is an iterator.  If we take an exception (like we can't
  write the file because the filesystem is full) then we will stop
  iterating over the chunks.  If we don't iterate over all the chunks
  then glance will keep the file open.
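
  A safer loop would close the chunk iterator even when the write fails,
  so glance-api can drop its reference to the unlinked file; a minimal
  sketch (illustrative, not necessarily the actual fix):

  try:
      for chunk in image_chunks:
          data.write(chunk)
  finally:
      if hasattr(image_chunks, 'close'):
          image_chunks.close()    # release the server-side file handle
      if close_file:
          data.close()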

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489300] [NEW] when launch vm the step SelectProjectUserAction should not display.

2015-08-27 Thread Sun Jing
Public bug reported:

When launching a VM from the dashboard, the first workflow step,
SelectProjectUserAction, should be hidden, but it is displayed.
In file:
\openstack\horizon\openstack_dashboard\dashboards\project\instances\workflows\create_instance.py

class SelectProjectUserAction(workflows.Action):
    project_id = forms.ChoiceField(label=_("Project"))
    user_id = forms.ChoiceField(label=_("User"))

    def __init__(self, request, *args, **kwargs):
        super(SelectProjectUserAction, self).__init__(request, *args, **kwargs)
        # Set our project choices
        projects = [(tenant.id, tenant.name)
                    for tenant in request.user.authorized_tenants]
        self.fields['project_id'].choices = projects

        # Set our user options
        users = [(request.user.id, request.user.username)]
        self.fields['user_id'].choices = users

    class Meta(object):
        name = _("Project & User")
        # Unusable permission so this is always hidden. However, we
        # keep this step in the workflow for validation/verification purposes.
        permissions = ("!",)

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: create vm.PNG
   
https://bugs.launchpad.net/bugs/1489300/+attachment/4453209/+files/create%20vm.PNG

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489300

Title:
  when launch vm the step SelectProjectUserAction should  not display.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching a VM from the dashboard, the first workflow step,
  SelectProjectUserAction, should be hidden, but it is displayed.
  In file:
  \openstack\horizon\openstack_dashboard\dashboards\project\instances\workflows\create_instance.py

  class SelectProjectUserAction(workflows.Action):
      project_id = forms.ChoiceField(label=_("Project"))
      user_id = forms.ChoiceField(label=_("User"))

      def __init__(self, request, *args, **kwargs):
          super(SelectProjectUserAction, self).__init__(request, *args,
                                                        **kwargs)
          # Set our project choices
          projects = [(tenant.id, tenant.name)
                      for tenant in request.user.authorized_tenants]
          self.fields['project_id'].choices = projects

          # Set our user options
          users = [(request.user.id, request.user.username)]
          self.fields['user_id'].choices = users

      class Meta(object):
          name = _("Project & User")
          # Unusable permission so this is always hidden. However, we
          # keep this step in the workflow for validation/verification
          # purposes.
          permissions = ("!",)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489304] [NEW] Lack of volume status checks when detaching volume in rebuild.

2015-08-27 Thread Zhenyu Zheng
Public bug reported:

Currently, when rebuilding an instance with a volume attached, the Nova
compute manager directly calls _detach_volume(), which skips the volume
status checks (volume_api.check_detach) and does not set the volume to
'detaching' (volume_api.begin_detaching) on the Cinder side. This is
different from the normal volume detach process.

Besides, when rebuilding, we should only allow detaching volumes with
'in-use' status; a volume in a status such as 'retyping' should not be
allowed.
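
A minimal sketch of the ordering the report asks for (method names as
in the description; simplified, not the actual compute manager code):

def detach_volume_for_rebuild(self, context, volume, instance, bdm):
    self.volume_api.check_detach(context, volume)           # reject e.g. 'retyping'
    self.volume_api.begin_detaching(context, volume['id'])  # 'in-use' -> 'detaching'
    self._detach_volume(context, instance, bdm)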

** Affects: nova
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Zhenyu Zheng (zhengzhenyu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489304

Title:
  Lack of volume status checks when detaching volume in rebuild.

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, when rebuilding an instance with a volume attached, the Nova
  compute manager directly calls _detach_volume(), which skips the volume
  status checks (volume_api.check_detach) and does not set the volume to
  'detaching' (volume_api.begin_detaching) on the Cinder side. This is
  different from the normal volume detach process.

  Besides, when rebuilding, we should only allow detaching volumes with
  'in-use' status; a volume in a status such as 'retyping' should not be
  allowed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488809] Re: [Juno][UCA] Non default configuration sections ignored for nova.conf

2015-08-27 Thread Bogdan Dobrelya
** Changed in: cloud-archive
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488809

Title:
  [Juno][UCA] Non default configuration sections ignored for nova.conf

Status in ubuntu-cloud-archive:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  Non-default configuration sections [glance], [neutron] are ignored in
  nova.conf when installed from UCA packages:

  How to reproduce:
  1) Install and configure OpenStack Juno Nova with Neutron at compute node 
using UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages):
  python-oslo.config 1:1.2.1-0ubuntu2
  python-oslo.messaging 1.3.0-0ubuntu1.2
  python-oslo.rootwrap 1.2.0-0ubuntu1
  nova-common 1:2014.1.5-0ubuntu1.2
  python-nova 1:2014.1.5-0ubuntu1.2
  neutron-common 1:2014.1.5-0ubuntu1

  /etc/nova/nova.conf example:
  [DEFAULT]
  debug=True
  ...
  [glance]
  api_servers=10.0.0.3:9292

  [neutron]
  admin_auth_url=http://10.0.0.3:5000/v2.0
  admin_username=admin
  admin_tenant_name=services
  admin_password=admin
  url=http://10.0.0.3:9696
  ...

  2) From nova log, check which values has been applied:
  # grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
  2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

  Expected:
  configuration options to be applied from [glance], [neutron] sections 
according to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

  Actual:
  Defaults for the deprecated options were applied from the [DEFAULT] section 
instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1488809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489439] [NEW] /detail URL is nonsensical and breaks common patterns

2015-08-27 Thread Rob Cresswell
Public bug reported:

Many details pages are displayed at `panel/id/detail`. This
contrasts with normal navigation patterns, where we expect
objects/id to show the information about a specific object, not a
404 page.

The URLs should be changed across Horizon so that /id shows the
details pages.
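
A minimal sketch (hypothetical panel urls.py) serving the detail view
at /<id>/ instead of /<id>/detail:

from django.conf.urls import url

from openstack_dashboard.dashboards.project.instances import views

urlpatterns = [
    url(r'^(?P<instance_id>[^/]+)/$',
        views.DetailView.as_view(), name='detail'),
]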

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489439

Title:
  /detail URL is nonsensical and breaks common patterns

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Many details pages are displayed at `panel/id/detail`. This
  contrasts with normal navigation patterns, where we expect
  objects/id to show the information about a specific object, not a
  404 page.

  The URLs should be changed across Horizon so that /id shows the
  details pages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489671] [NEW] Neutron L3 sync_routers logic process all router ports from database when even sync for a specific router

2015-08-27 Thread zhu zhu
Public bug reported:

Recreate Steps:
1) Create multiple routers and add router interfaces (neutron router
ports) from different networks to each router.
For example, below there are 4 routers with 4, 2, 1, and 2 ports
respectively (so 9 router ports in the database in total).
[root@controller ~]# neutron router-list
+--+---+---+-+---+
| id   | name  | external_gateway_info 
| distributed | ha|
+--+---+---+-+---+
| b2b466d2-1b1a-488d-af92-9d83d1c0f2c0 | routername1   | null  
| False   | False |
| 919f4312-41d1-47a8-b2b5-dc7f14d3f331 | routername2   | null  
| False   | False |
| 2854df21-7fe8-4968-a372-3c4a5c3d4ecf | routername3   | null  
| False   | False |
| daf51173-0084-4881-9ba3-0a9ac80d7d7b | routername4   | null  
| False   | False |
+--+---+---+-+---+

[root@controller ~]# neutron router-port-list routername1
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| 6194f014-e7c1-4d0b-835f-3cbf94839b9b |  | fa:16:3e:a9:43:7a | 
{subnet_id: 84b1e75e-9ce3-4a85-a9c6-32133fca081d, ip_address: 77.0.0.1} 
|
| bcac4f23-b74d-4cb3-8bbe-f1d59dff724f |  | fa:16:3e:72:59:a1 | 
{subnet_id: 80dc7dfe-d353-4c51-8882-934da8bbbe8b, ip_address: 77.1.0.1} 
|
| 39bb4b6c-e439-43a3-85f2-cade8bce8d3c |  | fa:16:3e:9a:65:e6 | 
{subnet_id: b54cb217-98b8-41e1-8b6f-fb69d84fcb56, ip_address: 80.0.0.1} 
|
| 3349d441-4679-4176-9f6f-497d39b37c74 |  | fa:16:3e:eb:43:b5 | 
{subnet_id: 8fad7ca7-ae0d-4764-92d9-a5e23e806eba, ip_address: 81.0.0.1} 
|
+--+--+---+-+
[root@controller ~]# neutron router-port-list routername2
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| 77ac0964-57bf-4ed2-8822-332779e427f2 |  | fa:16:3e:ea:83:f8 | 
{subnet_id: 2f07dbf4-9c5c-477c-b992-1d3dd284b987, ip_address: 95.0.0.1} 
|
| aeeb920e-5c73-45ba-8fe9-f6dafabdab68 |  | fa:16:3e:ee:43:a8 | 
{subnet_id: 15c55c9f-2051-4b4d-9628-552b86543e4e, ip_address: 97.0.0.1} 
|
+--+--+---+-+
[root@controller ~]# neutron router-port-list routername3
+--+--+---+-+
| id   | name | mac_address   | fixed_ips   
|
+--+--+---+-+
| f792ac7d-0bdd-4dbe-bafb-7822ce388c71 |  | fa:16:3e:fe:b7:f7 | 
{subnet_id: b62990de-0468-4efd-adaf-d421351c6a8b, ip_address: 66.0.0.1} 
|
+--+--+---+-+
[root@controller ~]# neutron router-port-list routername4
+--+--+---+---+
| id   | name | mac_address   | fixed_ips   
  |
+--+--+---+---+
| d1fded02-d378-4d92-bf0f-31cdc93ab365 |  | fa:16:3e:3f:b1:2a | 
{subnet_id: b55cdcb2-e0e8-4110-8a90-3030930bd3d7, ip_address: 
10.10.10.1} |
| 0f8addf1-6c7e-49e9-9f25-f8709718865f |  | fa:16:3e:19:70:84 | 
{subnet_id: 089ae5d9-84ca-412c-9842-b20a5a0bb68d, ip_address: 
20.20.20.1} |

[Yahoo-eng-team] [Bug 1489669] [NEW] Policy check returns HTTP status instead of JSON

2015-08-27 Thread Thai Tran
Public bug reported:

Policy check today returns a JSON object containing an allowed flag that
can either be true or false. This requires that we check the response
object for the flag. It should instead just return a 204 for allowed, or
a 406 for not allowed (401 is already taken for unauthorized, which
redirects the user to the logout screen; that is undesirable since we may
want to hide content without kicking the user out).

We have future plans to batch policy checks and cache them, but for now,
the plan is for check to do a singular policy check that expects a
boolean. This is also more inline with the plans we have for hz-if-
policies directive.
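
A minimal sketch of the proposed contract (hypothetical Django view;
the rules are assumed to be parsed from the request elsewhere):

from django.http import HttpResponse

from openstack_dashboard import policy

def policy_check(request, rules):
    # rules: iterable of (service, rule) pairs to evaluate
    allowed = policy.check(rules, request)
    return HttpResponse(status=204 if allowed else 406)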

** Affects: horizon
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489669

Title:
  Policy check returns HTTP status instead of JSON

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Policy check today returns a JSON object containing an allowed flag
  that can either be true or false. This requires that we check the
  response object for the flag. It should instead just return a 204 for
  allowed, or a 406 for not allowed (401 is already taken for
  unauthorized, which redirects the user to the logout screen; that is
  undesirable since we may want to hide content without kicking the user
  out).

  We have future plans to batch policy checks and cache them, but for
  now, the plan is for check to do a singular policy check that expects
  a boolean. This is also more inline with the plans we have for hz-if-
  policies directive.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp