[Yahoo-eng-team] [Bug 1461406] [NEW] libvirt: missing iotune parse for LibvirtConfigGuestDisk

2015-06-02 Thread ChangBo Guo(gcb)
Public bug reported:

We support instance disk IO control with iotune, like:

  <iotune>
    <read_bytes_sec>102400</read_bytes_sec>
  </iotune>

We set iotune in class LibvirtConfigGuestDisk in libvirt/config.py, but the
parse_dom method doesn't parse the iotune options yet.
That needs to be fixed.
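
For illustration, a minimal sketch of what the missing parsing could look
like (purely an assumption about the shape of the fix, not the actual patch;
the disk_* attribute names mirror the ones used when formatting the iotune
element, and the sub-element names come from libvirt's iotune schema):

  from lxml import etree

  # Hedged sketch: copy <iotune> child values from a parsed <disk> element
  # onto a config object, the way parse_dom() would.
  IOTUNE_FIELDS = ('read_bytes_sec', 'write_bytes_sec', 'total_bytes_sec',
                   'read_iops_sec', 'write_iops_sec', 'total_iops_sec')

  def parse_iotune(disk_elem, conf):
      for iotune in disk_elem.findall('iotune'):
          for sub in iotune:
              if sub.tag in IOTUNE_FIELDS:
                  setattr(conf, 'disk_' + sub.tag, int(sub.text))

  # Usage:
  #   disk = etree.fromstring('<disk><iotune>'
  #                           '<read_bytes_sec>102400</read_bytes_sec>'
  #                           '</iotune></disk>')
  #   parse_iotune(disk, some_disk_config)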

** Affects: nova
 Importance: Undecided
 Assignee: ChangBo Guo(gcb) (glongwave)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => ChangBo Guo(gcb) (glongwave)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing  iotune parse for  LibvirtConfigGuestDisk

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  We support instance disk IO control with iotune, like:

    <iotune>
      <read_bytes_sec>102400</read_bytes_sec>
    </iotune>

  We set iotune in class LibvirtConfigGuestDisk in libvirt/config.py, but the
  parse_dom method doesn't parse the iotune options yet.
  That needs to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461391] [NEW] gazillion 'ClientException: The server has either erred or is incapable of performing the requested operation' traces

2015-06-02 Thread Armando Migliaccio
Public bug reported:

2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova [-] Failed to notify 
nova on events: [{'tag': u'27846c36-9dc4-4822-a2a2-fd3fd0cca019', 'name': 
'network-vif-deleted', 'server_uuid': u'f6aab00c-b64e-4054-8f26-4dd232d7bbf7'}]
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova Traceback (most 
recent call last):
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/opt/stack/new/neutron/neutron/notifiers/nova.py", line 252, in send_events
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova batched_events)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/v2/contrib/server_external_events.py",
 line 39, in create
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova return_raw=True)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/base.py", line 161, in 
_create
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 176, 
in post
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova return 
self.request(url, 'POST', **kwargs)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 104, in 
request
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova ClientException: The 
server has either erred or is incapable of performing the requested operation. 
(HTTP 500) (Request-ID: req-b385d794-86fc-4b06-a183-a2de985f99d5)
2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova 

Most likely introduced by:

https://review.openstack.org/#/c/178666/

Logstash:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ2xpZW50RXhjZXB0aW9uOiBUaGUgc2VydmVyIGhhcyBlaXRoZXIgZXJyZWQgb3IgaXMgaW5jYXBhYmxlIG9mIHBlcmZvcm1pbmcgdGhlIHJlcXVlc3RlZCBvcGVyYXRpb24uXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzMzMDk3OTk4NTB9

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: Confirmed

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461391

Title:
  gazillion 'ClientException: The server has either erred or is
  incapable of performing the requested operation' traces

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova [-] Failed to 
notify nova on events: [{'tag': u'27846c36-9dc4-4822-a2a2-fd3fd0cca019', 
'name': 'network-vif-deleted', 'server_uuid': 
u'f6aab00c-b64e-4054-8f26-4dd232d7bbf7'}]
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova Traceback (most 
recent call last):
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/opt/stack/new/neutron/neutron/notifiers/nova.py", line 252, in send_events
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova batched_events)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/v2/contrib/server_external_events.py",
 line 39, in create
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova 
return_raw=True)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/base.py", line 161, in 
_create
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova _resp, body = 
self.api.client.post(url, body=body)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 176, 
in post
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova return 
self.request(url, 'POST', **kwargs)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova   File 
"/usr/local/lib/python2.7/dist-packages/novaclient/client.py", line 104, in 
request
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova ClientException: 
The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-b385d794-86fc-4b06-a183-a2de985f99d5)
  2015-06-03 02:46:50.553 22041 ERROR neutron.notifiers.nova 

  Most likely introduced by:

  http

[Yahoo-eng-team] [Bug 1439689] Re: Cannot see update notifications from nova.api

2015-06-02 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1439689

Title:
  Cannot see update notifications from nova.api

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  Running devstack K, trying to get the notification from nova.api,
  especially the instance update.

  Enabled in config:
  [DEFAULT]
  notification_driver=nova.openstack.common.notifier.rpc_notifier
  notification_topics=notifications,monitor
  notify_on_state_change=vm_and_task_state
  notify_on_any_change=True
  instance_usage_audit=True
  instance_usage_audit_period=hour

  I can see some notifications coming (rabbitmqctl list_queues | grep
  notifications.info), but when I rename an instance (which calls into
  servers.py - update) I don't see the message count increasing, meaning
  the message is not being sent to rabbitmq.

  I can see the action being called:
  2015-04-02 15:12:23.738 DEBUG nova.api.openstack.wsgi 
[req-b8fc8e5e-c339-4329-9f01-3d6e57d2f14b admin admin] Action: 'update', 
calling method: >, 
body: {"server": {"name": "instance3"}}  but I don't see the notification.
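
  One way to double-check whether the update notification ever reaches the
  broker is to drain the queue directly; a rough sketch using kombu (the
  broker URL is an assumption - use the rabbit settings from nova.conf):

    from kombu import Connection

    # Hedged sketch: print every message on notifications.info and look for
    # an event_type of compute.instance.update after renaming an instance.
    with Connection('amqp://guest:guest@localhost:5672//') as conn:
        queue = conn.SimpleQueue('notifications.info')
        try:
            while True:
                msg = queue.get(block=True, timeout=10)
                print(msg.payload)
                msg.ack()
        except queue.Empty:
            pass
        finally:
            queue.close()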

  Related: https://ask.openstack.org/en/question/62331/how-to-receive-
  nova-notification-computeinstanceupdate/

  /opt/stack/nova# git log -1
  commit fc5e6315afb7fc90c6f80bd1dfed0babfa979f2f
  Merge: 86c8611 af4ce3e
  Author: Jenkins 
  Date:   Fri Mar 27 04:10:19 2015 +

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1439689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423301] Re: AttributeError if create user with no email

2015-06-02 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1423301

Title:
  AttributeError if create user with no email

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Create User without filling in email address.

  When the table is rendered, you will get an error for those newly
  created users.  Please see the image.  If you examine what Keystone
  returns, there is no email field, whereas the default users without an
  email still default to 'null'.
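
  A minimal defensive sketch of the kind of accessor that avoids the error
  (hypothetical helper, not Horizon's actual column code):

    # Hedged sketch: Keystone may omit the 'email' attribute entirely for
    # users created without one, so fall back to None instead of raising
    # AttributeError when the table is rendered.
    def get_user_email(user):
        return getattr(user, 'email', None)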

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1423301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461102] Re: cascade in orm relationships shadows ON DELETE CASCADE

2015-06-02 Thread Salvatore Orlando
** Changed in: neutron
   Status: New => Opinion

** Changed in: neutron
   Importance: Medium => Wishlist

** Changed in: neutron
Milestone: liberty-1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlalchemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that SQLAlchemy is performing ORM-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3], there is no need to add the complexity of
  ORM relationship cascades if foreign keys are correctly configured on
  the database, and the passive_deletes option should be used.

  Enabling that option in place of all the cascade options on the
  relationships caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries is
  just .5ms), but surely it is something worth doing - if nothing else
  because it seems Neutron is not using SQLAlchemy in the correct way.
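
  A simplified sketch of the passive_deletes approach (table and column
  names are illustrative, not Neutron's actual models):

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(String(36), primary_key=True)
        # passive_deletes=True tells the ORM not to load and delete the
        # children itself; the database's ON DELETE CASCADE removes them,
        # so deleting a subnet emits a single DELETE statement.
        dns_nameservers = relationship('DNSNameServer', passive_deletes=True)

    class DNSNameServer(Base):
        __tablename__ = 'dnsnameservers'
        address = Column(String(128), primary_key=True)
        subnet_id = Column(String(36),
                           ForeignKey('subnets.id', ondelete='CASCADE'),
                           primary_key=True)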

  As someone who's been making this mistake for ages, for what it's worth
  this has been a moment where I realized that sometimes it's
  good to be told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348244] Re: debug log messages need to be unicode

2015-06-02 Thread Liang Chen
** Description changed:

+ [Impact]
+ 
+  * Nova services fail to start because they cannot connect to rsyslog
+ 
+ [Test Case]
+ 
+  * Set use_syslog to True in nova.conf, stop rsyslog service and
+ restart nova services.
+ 
+ [Regression Potential]
+ 
+  * None
+ 
+ When nova services log to syslog, we should make sure the dependency on
+ the upstart jobs is set prior to the nova-* services start.
+ 
+ 
  Debug logs should be:
-   
- LOG.debug("message")  should be LOG.debug(u"message")
+ 
+ LOG.debug("message")  should be LOG.debug(u"message")
  
  Before the translation of debug log messages was removed, the
  translation was returning unicode.   Now that they are no longer
  translated they need to be explicitly marked as unicode.
  
  This was confirmed by discussion with dhellman.   See
  2014-07-23T13:48:23 in this log http://eavesdrop.openstack.org/irclogs
  /%23openstack-oslo/%23openstack-oslo.2014-07-23.log
  
  The problem was discovered when an exception was used as replacement
  text in a debug log message:
  
-LOG.debug("Failed to mount image %(ex)s)", {'ex': e})
+    LOG.debug("Failed to mount image %(ex)s)", {'ex': e})
  
  In particular it was discovered as part of enabling lazy translation,
  where the exception message is replaced with an object that does not
  support str().   Note that this would also fail without lazy enabled, if
  a translation for the exception message was provided that was unicode.
  
+ Example trace:
  
- Example trace: 
- 
-  Traceback (most recent call last):
-   File "nova/tests/virt/disk/test_api.py", line 78, in 
test_can_resize_need_fs_type_specified
- self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
-   File "nova/virt/disk/api.py", line 208, in is_image_partitionless
- fs.setup()
-   File "nova/virt/disk/vfs/localfs.py", line 80, in setup
- LOG.debug("Failed to mount image %(ex)s)", {'ex': e})
-   File "/usr/lib/python2.7/logging/__init__.py", line 1412, in debug
- self.logger.debug(msg, *args, **kwargs)
-   File "/usr/lib/python2.7/logging/__init__.py", line 1128, in debug
- self._log(DEBUG, msg, args, **kwargs)
-   File "/usr/lib/python2.7/logging/__init__.py", line 1258, in _log
- self.handle(record)
-   File "/usr/lib/python2.7/logging/__init__.py", line 1268, in handle
- self.callHandlers(record)
-   File "/usr/lib/python2.7/logging/__init__.py", line 1308, in callHandlers
- hdlr.handle(record)
-   File "nova/test.py", line 212, in handle
- self.format(record)
-   File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
- return fmt.format(record)
-   File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
- record.message = record.getMessage()
-   File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
- msg = msg % self.args
-   File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py",
 line 167, in __str__
- raise UnicodeError(msg)
+  Traceback (most recent call last):
+   File "nova/tests/virt/disk/test_api.py", line 78, in 
test_can_resize_need_fs_type_specified
+ self.assertFalse(api.is_image_partitionless(imgfile, use_cow=True))
+   File "nova/virt/disk/api.py", line 208, in is_image_partitionless
+ fs.setup()
+   File "nova/virt/disk/vfs/localfs.py", line 80, in setup
+ LOG.debug("Failed to mount image %(ex)s)", {'ex': e})
+   File "/usr/lib/python2.7/logging/__init__.py", line 1412, in debug
+ self.logger.debug(msg, *args, **kwargs)
+   File "/usr/lib/python2.7/logging/__init__.py", line 1128, in debug
+ self._log(DEBUG, msg, args, **kwargs)
+   File "/usr/lib/python2.7/logging/__init__.py", line 1258, in _log
+ self.handle(record)
+   File "/usr/lib/python2.7/logging/__init__.py", line 1268, in handle
+ self.callHandlers(record)
+   File "/usr/lib/python2.7/logging/__init__.py", line 1308, in callHandlers
+ hdlr.handle(record)
+   File "nova/test.py", line 212, in handle
+ self.format(record)
+   File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
+ return fmt.format(record)
+   File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
+ record.message = record.getMessage()
+   File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
+ msg = msg % self.args
+   File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/oslo/i18n/_message.py",
 line 167, in __str__
+ raise UnicodeError(msg)
  UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.
  ==
  FAIL: nova.tests.virt.disk.test_api.APITestCase.test_resize2fs_e2fsck_fails
  tags: worker-3

** Description changed:

  [Impact]
  
-  * Nova services fail to start because they cannot connect to rsyslog
+  * Nova services fail to start because they cannot connect to rsyslog
  
  [Te

[Yahoo-eng-team] [Bug 1218942] Re: Dependency resolution does not create objects on demand

2015-06-02 Thread David Stanek
We are in the process of removing our current DI implementation.

** Changed in: keystone
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1218942

Title:
  Dependency resolution does not create objects on demand

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  The issue is clearest with the circular dependency code between
  identity and assignment, but we see it throughout the code base: one
  component that requires another component has to be sure that the
  dependency has been initialized prior to access.

  This form of Dependency Injection is not new. The general pattern
  followed by Spring etc. is that the app has a two-phase process. During
  the first phase (early in the application) components register as
  fulfilling dependencies and declare what dependencies they require. After
  a certain point (once the source files have all been parsed) object
  instantiation can begin. There is no deliberate "instantiate all
  objects" stage, as you may end up creating objects that you do not
  need.
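
  A toy sketch of the register-then-resolve-on-demand pattern described
  above (purely illustrative, not Keystone's actual @dependency code):

    # Hedged sketch: phase one registers factories; objects are only
    # instantiated when something actually asks for them, so circular
    # dependencies resolve lazily.
    _registry = {}
    _instances = {}

    def provides(name):
        def decorator(factory):
            _registry[name] = factory
            return factory
        return decorator

    def resolve(name):
        if name not in _instances:
            _instances[name] = _registry[name]()
        return _instances[name]

    @provides('identity_api')
    class IdentityManager(object):
        @property
        def assignment_api(self):
            return resolve('assignment_api')

    @provides('assignment_api')
    class AssignmentManager(object):
        @property
        def identity_api(self):
            return resolve('identity_api')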

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1218942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461325] [NEW] keyerror in OVS agent port_delete handler

2015-06-02 Thread Kevin Benton
Public bug reported:

[req-d746e623-8c6e-4e4d-b246-8ca689e0b8ad None None] Error while processing VIF 
ports
2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1521, in rpc_loop
2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.deleted_ports -= 
port_info['removed']
2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 'removed'
2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent
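
The 'removed' key is evidently absent from port_info on some iterations; a
minimal sketch of the defensive handling (helper name and structure are
hypothetical, the real change belongs in rpc_loop):

  # Hedged sketch: treat a missing 'removed' key as "no ports removed this
  # iteration" instead of raising KeyError.
  def prune_deleted_ports(deleted_ports, port_info):
      return deleted_ports - port_info.get('removed', set())

  # prune_deleted_ports({'a', 'b'}, {})                  -> {'a', 'b'}
  # prune_deleted_ports({'a', 'b'}, {'removed': {'a'}})  -> {'b'}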

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461325

Title:
  keyerror in OVS agent port_delete handler

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  [req-d746e623-8c6e-4e4d-b246-8ca689e0b8ad None None] Error while processing 
VIF ports
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1521, in rpc_loop
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.deleted_ports -= 
port_info['removed']
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 'removed'
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341420] Re: gap between scheduler selection and claim causes spurious failures when the instance is the last one to fit

2015-06-02 Thread Michael Davies
This is not an Ironic bug.  It's just triggered by using Ironic.  This
is a Nova scheduling bug (and a PITA to fix :(

** No longer affects: ironic

** Tags removed: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341420

Title:
  gap between scheduler selection and claim causes spurious failures
  when the instance is the last one to fit

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  There is a race between the scheduler in select_destinations, which
  selects a set of hosts, and the nova compute manager, which claims
  resources on those hosts when building the instance. The race is
  particularly noticeable with Ironic, where every request will consume a
  full host, but can turn up on libvirt etc too. Multiple schedulers
  will likely exacerbate this too unless they are in a version of python
  with randomised dictionary ordering, in which case they will make it
  better :).

  I've put https://review.openstack.org/106677 up to remove a comment
  which comes from before we introduced this race.

  One mitigating aspect to the race in the filter scheduler _schedule
  method attempts to randomly select hosts to avoid returning the same
  host in repeated requests, but the default minimum set it selects from
  is size 1 - so when heat requests a single instance, the same
  candidate is chosen every time. Setting that number higher can avoid
  all concurrent requests hitting the same host, but it will still be a
  race, and still likely to fail fairly hard at near-capacity situations
  (e.g. deploying all machines in a cluster with Ironic and Heat).

  Folk wanting to reproduce this: take a decent size cloud - e.g. 5 or
  10 hypervisor hosts (KVM is fine). Deploy up to 1 VM left of capacity
  on each hypervisor. Then deploy a bunch of VMs one at a time but very
  close together - e.g. use the python API to get cached keystone
  credentials, and boot 5 in a loop.

  If using Ironic you will want https://review.openstack.org/106676 to
  let you see which host is being returned from the selection.

  Possible fixes:
   - have the scheduler be a bit smarter about returning hosts - e.g. track 
destination selection counts since the last refresh and weight hosts by that 
count as well
   - reinstate actioning claims into the scheduler, allowing the audit to 
correct any claimed-but-not-started resource counts asynchronously
   - special case the retry behaviour if there are lots of resources available 
elsewhere in the cluster.

  Stats wise, I was just testing a 29 instance deployment with ironic and a
  heat stack, with 45 machines to deploy onto (so 45 hosts in the
  scheduler set) and 4 failed with this race - which means they
  rescheduled and failed 3 times each - or 12 cases of scheduler racing
  *at minimum*.

  background chat

  15:43 < lifeless> mikal: around? I need to sanity check something
  15:44 < lifeless> ulp, nope, am sure of it. filing a bug.
  15:45 < mikal> lifeless: ok
  15:46 < lifeless> mikal: oh, you're here, I will run it past you :)
  15:46 < lifeless> mikal: if you have ~5m
  15:46 < mikal> Sure
  15:46 < lifeless> so, symptoms
  15:46 < lifeless> nova boot <...> --num-instances 45 -> works fairly 
reliably. Some minor timeout related things to fix but nothing dramatic.
  15:47 < lifeless> heat create-stack <...> with a stack with 45 instances in 
it -> about 50% of instances fail to come up
  15:47 < lifeless> this is with Ironic
  15:47 < mikal> Sure
  15:47 < lifeless> the failure on all the instances is the retry-three-times 
failure-of-death
  15:47 < lifeless> what I believe is happening is this
  15:48 < lifeless> the scheduler is allocating the same weighed list of hosts 
for requests that happen close enough together
  15:49 < lifeless> and I believe its able to do that because the target hosts 
(from select_destinations) need to actually hit the compute node manager and 
have
  15:49 < lifeless> with rt.instance_claim(context, instance, 
limits):
  15:49 < lifeless> happen in _build_and_run_instance
  15:49 < lifeless> before the resource usage is assigned
  15:49 < mikal> Is heat making 45 separate requests to the nova API?
  15:49 < lifeless> eys
  15:49 < lifeless> yes
  15:49 < lifeless> thats the key difference
  15:50 < lifeless> same flavour, same image
  15:50 < openstackgerrit> Sam Morrison proposed a change to openstack/nova: 
Remove cell api overrides for lock and unlock  
https://review.openstack.org/89487
  15:50 < mikal> And you have enough quota for these instances, right?
  15:50 < lifeless> yes
  15:51 < mikal> I'd have to dig deeper to have an answer, but it sure does 
seem worth filing a bug for
  15:51 < lifeless> my theory is that there is enough time between 
select_destinations in the conductor, and _build_and_run_instance in compute 
for another request to come in the front door and be sch

[Yahoo-eng-team] [Bug 1456441] Re: keystone wsgi does not read files in /etc/keystone/*

2015-06-02 Thread Alan Pevec
Why is the status Invalid? As comment 10 says, group=keystone should be
added to httpd/wsgi-keystone.conf.
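
For reference, a hedged sketch of the directive in question (daemon name and
process options vary with the distro packaging):

  WSGIDaemonProcess keystone-public processes=5 threads=1 \
      user=keystone group=keystone display-name=%{GROUP}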

** Changed in: keystone
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456441

Title:
  keystone wsgi does not read files in /etc/keystone/*

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in OpenStack Identity (Keystone):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Keystone launched through wsgi was not picking up the connection
  string in /etc/keystone/keystone.conf.  Manual run of keystone-all
  identified that there was a collision with /usr/share/keystone
  /keystone-dist.conf which contains an entry for connection in
  [database].

  Commented out the line in keystone-dist.conf and restarted apache.
  Correct connection string was then picked up from
  /etc/keystone/keystone.conf.

  CentOS 7 minimal install
  Followed install guide for Kilo.
  Encountered error Access denied keystone@localhost at step of creating 
openstack service identity because the wrong credentials from the .conf 
conflict were being passed to mariadb.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1456441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461299] [NEW] Failure on list users when using ldap domain configuration from database

2015-06-02 Thread Roxana Gherle
Public bug reported:

With domain_specific_drivers_enabled set to true, and a domain configured
with an LDAP backend whose configuration is stored in the database, the
keystone user list API fails with the following error:

openstack user list --domain domainX
ERROR: openstack An unexpected error prevented the server from fulfilling your 
request: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead. (Disable 
debug mode to suppress these details.) (HTTP 500)

** Affects: keystone
 Importance: Undecided
 Assignee: Roxana Gherle (roxana-gherle)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Roxana Gherle (roxana-gherle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461299

Title:
  Failure on list users when using ldap domain configuration from
  database

Status in OpenStack Identity (Keystone):
  New

Bug description:
  With domain_specific_drivers_enabled set to true, and a domain configured
  with an LDAP backend whose configuration is stored in the database, the
  keystone user list API fails with the following error:

  openstack user list --domain domainX
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead. (Disable 
debug mode to suppress these details.) (HTTP 500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread John Dickinson
Swift doesn't yet use oslo policy (incubated or library), so this bug
doesn't apply

** Changed in: swift
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in OpenStack Object Storage (Swift):
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.
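
  For projects making the switch, a minimal sketch of consuming the
  graduated library (the rule name is illustrative):

    from oslo_config import cfg
    from oslo_policy import policy

    # Hedged sketch: the Enforcer now comes from oslo.policy instead of
    # each project's synced openstack/common/policy.py copy.
    CONF = cfg.CONF
    enforcer = policy.Enforcer(CONF)

    def is_admin(credentials, target):
        # 'context_is_admin' is an illustrative rule from policy.json.
        return enforcer.enforce('context_is_admin', target, credentials)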

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461217] Re: Kilo Docker: Hypervisor Type Not Defined

2015-06-02 Thread Claudiu Belu
** Also affects: nova-docker
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461217

Title:
  Kilo Docker:  Hypervisor Type Not Defined

Status in OpenStack Compute (Nova):
  New
Status in Nova Docker Driver:
  New

Bug description:
  On a multinode setup based on Ubuntu 14.04, with the nova versions below:
   # dpkg -l | grep nova
  ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
  ii  nova-compute1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
  ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API
  ,  did the following:
  1- Followed https://wiki.openstack.org/wiki/Docker procedure
  2- Added nova to libvirtd group, to avoid /var/run/libvirt/libvirt-sock 
access problem
  3- Updated nova-compute.conf to include 
compute_driver=novadocker.virt.docker.DockerDriver and virt_type=docker

  Upon starting a new docker instance, got the following error:
  2015-06-01 17:02:59.784 42703 ERROR nova.openstack.common.threadgroup 
[req-9f37298d-828c-4ac5-9834-320cc082f92b - - - - -] 'module' object has no 
attribute 'DOCKER'
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
x.wait()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 497, 
in run_service
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
service.start()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 183, in start
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1291, in 
pre_start_hook
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6240, in 
update_available_resource
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 376, 
in update_available_resource
  2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
resources = self.driver.get_available_resource(self.nodename)
  2015-06-01

[Yahoo-eng-team] [Bug 1461251] [NEW] Stop using deprecated oslo_utils.timeutils.isotime

2015-06-02 Thread Brant Knudson
Public bug reported:

oslo_utils.timeutils.isotime() is deprecated as of 1.6 so we need to
stop using it.

This breaks unit tests in keystone since we've got a check for calling
deprecated functions.
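
A minimal sketch of a drop-in replacement (assuming the old helper's output
format of ISO 8601 UTC with a 'Z' suffix and no sub-second part):

  import datetime

  # Hedged sketch: build the timestamp string directly rather than calling
  # the deprecated oslo_utils.timeutils.isotime().
  def isotime(at=None):
      at = at or datetime.datetime.utcnow()
      return at.strftime('%Y-%m-%dT%H:%M:%S') + 'Z'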

** Affects: keystone
 Importance: Critical
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

** Changed in: keystone
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461251

Title:
  Stop using deprecated oslo_utils.timeutils.isotime

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  oslo_utils.timeutils.isotime() is deprecated as of 1.6 so we need to
  stop using it.

  This breaks unit tests in keystone since we've got a check for calling
  deprecated functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461238] [NEW] Following QuickStart instructions, "run_test.sh" fails with "Please install 'python-virtualenv'", which doesn't exist as a yum package

2015-06-02 Thread David M. Karr
Public bug reported:

I'm trying to check out and build Horizon so I can investigate some
changes.

I'm following the instructions at
http://docs.openstack.org/developer/horizon/quickstart.html .

I'm using a CentOS 6.6 VM.

When I ran "./run_tests.sh", it said this:

% ./run_tests.sh 
Checking environment.
Environment not found. Install? (Y/n) y
Fetching new src packages...
which: no virtualenv in 
(/home/dk068x/frameworks/node-v0.10.32-linux-x64/bin:/home/dk068x/frameworks/gradle-2.3/bin:/home/dk068x/frameworks/apache-ant-1.9.3/bin:/home/dk068x/frameworks/apache-maven-3.2.1/bin:/home/dk068x/bin:/home/dk068x/frameworks/groovy-2.4.1/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin)
Please install 'python-virtualenv'.
-

This is curious, because I pasted the earlier "yum install" line as described 
in the page.  I ran it again:

% sudo yum install gcc git-core python-devel python-virtualenv openssl-devel 
libffi-devel which
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
Package gcc-4.4.7-11.el6.x86_64 already installed and latest version
Package git-1.7.1-3.el6_4.1.x86_64 already installed and latest version
Package python-devel-2.6.6-52.el6.x86_64 already installed and latest version
No package python-virtualenv available.
Package openssl-devel-1.0.1e-30.el6_6.5.x86_64 already installed and latest 
version
Package libffi-devel-3.0.5-3.2.el6.x86_64 already installed and latest version
Package which-2.19-6.el6.x86_64 already installed and latest version
Nothing to do
-

Is there another repo I'm supposed to be configuring here?
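
For what it's worth, on CentOS 6 python-virtualenv normally comes from the
EPEL repository rather than the base repos, so enabling EPEL first should
make the quoted yum line work (hedged suggestion, repo name assumed):

  sudo yum install epel-release
  sudo yum install python-virtualenv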

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461238

Title:
  Following QuickStart instructions, "run_test.sh" fails with "Please
  install 'python-virtualenv'", which doesn't exist as a yum package

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm trying to check out and build Horizon so I can investigate some
  changes.

  I'm following the instructions at
  http://docs.openstack.org/developer/horizon/quickstart.html .

  I'm using a CentOS 6.6 VM.

  When I ran "./run_tests.sh", it said this:
  
  % ./run_tests.sh 
  Checking environment.
  Environment not found. Install? (Y/n) y
  Fetching new src packages...
  which: no virtualenv in 
(/home/dk068x/frameworks/node-v0.10.32-linux-x64/bin:/home/dk068x/frameworks/gradle-2.3/bin:/home/dk068x/frameworks/apache-ant-1.9.3/bin:/home/dk068x/frameworks/apache-maven-3.2.1/bin:/home/dk068x/bin:/home/dk068x/frameworks/groovy-2.4.1/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin)
  Please install 'python-virtualenv'.
  -

  This is curious, because I pasted the earlier "yum install" line as described 
in the page.  I ran it again:
  
  % sudo yum install gcc git-core python-devel python-virtualenv openssl-devel 
libffi-devel which
  Loaded plugins: fastestmirror, refresh-packagekit, security
  Setting up Install Process
  Loading mirror speeds from cached hostfile
  Package gcc-4.4.7-11.el6.x86_64 already installed and latest version
  Package git-1.7.1-3.el6_4.1.x86_64 already installed and latest version
  Package python-devel-2.6.6-52.el6.x86_64 already installed and latest version
  No package python-virtualenv available.
  Package openssl-devel-1.0.1e-30.el6_6.5.x86_64 already installed and latest 
version
  Package libffi-devel-3.0.5-3.2.el6.x86_64 already installed and latest version
  Package which-2.19-6.el6.x86_64 already installed and latest version
  Nothing to do
  -

  Is there another repo I'm supposed to be configuring here?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461217] [NEW] Kilo Docker: Hypervisor Type Not Defined

2015-06-02 Thread Nastooh
Public bug reported:

On a multinode setup based on Ubuntu 14.04, with the nova versions below:
 # dpkg -l | grep nova
ii  nova-common 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - common files
ii  nova-compute1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node base
ii  nova-compute-kvm1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute - compute node libvirt support
ii  python-nova 1:2015.1.0-0ubuntu1~cloud0
all  OpenStack Compute Python libraries
ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API
,  did the following:
1- Followed https://wiki.openstack.org/wiki/Docker procedure
2- Added nova to libvirtd group, to avoid /var/run/libvirt/libvirt-sock access 
problem
3- Updated nova-compute.conf to include 
compute_driver=novadocker.virt.docker.DockerDriver and virt_type=docker

Upon starting a new docker instance, got the following error:
2015-06-01 17:02:59.784 42703 ERROR nova.openstack.common.threadgroup 
[req-9f37298d-828c-4ac5-9834-320cc082f92b - - - - -] 'module' object has no 
attribute 'DOCKER'
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
x.wait()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 497, 
in run_service
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
service.start()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/service.py", line 183, in start
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1291, in 
pre_start_hook
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6240, in 
update_available_resource
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 376, 
in update_available_resource
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
resources = self.driver.get_available_resource(self.nodename)
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup   File 
"/home/ubuntu/dockerdrv/src/novadocker/novadocker/virt/docker/driver.py", line 
312, in get_available_resource
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
(arch.I686, hv_type.DOCKER, vm_mode.EXE),
2015-06-01 17:02:59.784 42703 TRACE nova.openstack.common.threadgroup 
AttributeError: 'module' object has no attribute 'DOCKER'
2015-06-01 17:02:59.784 42703 TRA

[Yahoo-eng-team] [Bug 1461201] [NEW] Check for systemd (in distros/rhel.py) is fragile

2015-06-02 Thread Lars Kellogg-Stedman
Public bug reported:

The existing cloud-init code determines if systemd is in use by looking
at the distribution name and version.  This is prone to error because:

- RHEL derivatives other than CentOS (e.g., Scientific Linux) will fail this 
test, and
- Distributions that are not derived from RHEL also use systemd
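
A minimal sketch of a more robust check (this mirrors what systemd's own
sd_booted() does; it is not cloud-init's current code):

  import os

  # Hedged sketch: detect systemd from the running system rather than the
  # distro name/version; /run/systemd/system exists only when systemd is
  # the active init system.
  def uses_systemd():
      return os.path.isdir('/run/systemd/system')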

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1461201

Title:
  Check for systemd (in distros/rhel.py) is fragile

Status in Init scripts for use on cloud images:
  New

Bug description:
  The existing cloud-init code determines if systemd is in use by
  looking at the distribution name and version.  This is prone to error
  because:

  - RHEL derivatives other than CentOS (e.g., Scientific Linux) will fail this 
test, and
  - Distributions that are not derived from RHEL also use systemd

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1461201/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461183] [NEW] keystone/tests/unit/test_v3.py:RestfulTestCase.load_sample_data still uses the assignment_api

2015-06-02 Thread Lance Bragstad
Public bug reported:

All test classes that inherit
keystone/tests/unit/test_v3.py:RestfulTestCase run a load_sample_data
method [0]. This method creates some sample data to test with and it
still uses the assignment API, which has been deprecated. This method
should be refactored to use the resource API instead.


[0] 
https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/keystone/tests/unit/test_v3.py#L235-L240
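
A hedged sketch of the intended refactor (method names are assumed from the
keystone managers at the time; grants themselves stay on assignment_api):

  # Before (deprecated path):
  #     self.assignment_api.create_domain(self.domain_id, self.domain)
  #     self.assignment_api.create_project(self.project_id, self.project)
  # After -- domains and projects are resources, not assignments:
  #     self.resource_api.create_domain(self.domain_id, self.domain)
  #     self.resource_api.create_project(self.project_id, self.project)
  # Role assignments (e.g. create_grant) remain on assignment_api.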

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: test-improvement

** Tags added: test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461183

Title:
  keystone/tests/unit/test_v3.py:RestfulTestCase.load_sample_data still
  uses the assignment_api

Status in OpenStack Identity (Keystone):
  New

Bug description:
  All test classes that inherit
  keystone/tests/unit/test_v3.py:RestfulTestCase run a load_sample_data
  method [0]. This method creates some sample data to test with and it
  still uses the assignment API, which has been deprecated. This method
  should be refactored to use the resource API instead.

  
  [0] 
https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/keystone/tests/unit/test_v3.py#L235-L240

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Samuel de Medeiros Queiroz
Manila change 'Use oslo_policy lib instead of oslo-incubator code'

https://github.com/openstack/manila/commit/a4a60b1328443f6a1d5a85884f029e3fa683c142

** Also affects: swift
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in OpenStack Object Storage (Swift):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Sergey Lukjanov
** Changed in: sahara
   Status: New => Fix Released

** Changed in: sahara
   Importance: Undecided => Medium

** Changed in: sahara
 Assignee: (unassigned) => Sergey Lukjanov (slukjanov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  Fix Released
Status in OpenStack Object Storage (Swift):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Ruby Loo
https://review.openstack.org/#/c/162501/

** Changed in: ironic
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461172] [NEW] neutron.tests.functional.agent.test_l3_agent.MetadataL3AgentTestCase.test_access_to_metadata_proxy times out intermittently

2015-06-02 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/90/186290/2/gate/gate-neutron-dsvm-
functional/599b742/console.html.gz#_2015-05-29_19_44_46_232

2015-05-29 19:44:46.232 | 2015-05-29 19:44:46.177 | 
neutron.tests.functional.agent.test_l3_agent.MetadataL3AgentTestCase.test_access_to_metadata_proxy
2015-05-29 19:44:46.232 | 2015-05-29 19:44:46.178 | 
--
2015-05-29 19:44:46.232 | 2015-05-29 19:44:46.179 | 
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.180 | Captured traceback:
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.181 | ~~~
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.182 | Traceback (most recent 
call last):
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.183 |   File 
"neutron/tests/functional/agent/test_l3_agent.py", line 900, in 
test_access_to_metadata_proxy
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.185 | self.fail('metadata 
proxy unreachable on %s before timeout' % url)
2015-05-29 19:44:46.233 | 2015-05-29 19:44:46.186 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 666, in fail
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.187 | raise 
self.failureException(msg)
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.188 | AssertionError: 
metadata proxy unreachable on http://169.254.169.254:80 before timeout
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.189 | 
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.191 | 
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.192 | Captured stdout:
2015-05-29 19:44:46.234 | 2015-05-29 19:44:46.193 | 
2015-05-29 19:44:46.235 | 2015-05-29 19:44:46.194 | 2015-05-29 19:43:27.257 
15907 INFO neutron.common.config [req-8a89f51c-7d0f-4317-a897-c68b90af405d - - 
- - -] Logging enabled!
2015-05-29 19:44:46.235 | 2015-05-29 19:44:46.195 | 2015-05-29 19:43:27.257 
15907 INFO neutron.common.config [req-8a89f51c-7d0f-4317-a897-c68b90af405d - - 
- - -] 
/opt/stack/new/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/subunit/run.py
 version 2015.2.0.dev499
2015-05-29 19:44:46.235 | 2015-05-29 19:44:46.196 | 2015-05-29 19:43:27.331 
15907 INFO eventlet.wsgi.server [-] (15907) wsgi starting up on http:///:t/
2015-05-29 19:44:46.235 | 2015-05-29 19:44:46.197 | 2015-05-29 19:43:30.762 
15907 INFO eventlet.wsgi.server [-] (15907) wsgi starting up on http:///:t/
2015-05-29 19:44:46.235 | 2015-05-29 19:44:46.198 | 2015-05-29 19:43:31.229 
15907 ERROR neutron.agent.linux.utils [req-d285dd8a-7dab-41a8-9692-57608f0cd9b2 
- - - - -] 
2015-05-29 19:44:46.236 | 2015-05-29 19:44:46.199 | Command: ['ip', 
'netns', 'exec', 'func-7b734de4-1879-4c2e-87f5-070911a7b223', 'curl', 
'--max-time', 60, '-D-', 'http://169.254.169.254:80']
2015-05-29 19:44:46.261 | 2015-05-29 19:44:46.200 | Exit code: 7
2015-05-29 19:44:46.261 | 2015-05-29 19:44:46.201 | Stdin: 
2015-05-29 19:44:46.261 | 2015-05-29 19:44:46.202 | Stdout: 
2015-05-29 19:44:46.261 | 2015-05-29 19:44:46.203 | Stderr:   % Total% 
Received % Xferd  Average Speed   TimeTime Time  Current
2015-05-29 19:44:46.261 | 2015-05-29 19:44:46.204 | 
 Dload  Upload   Total   SpentLeft  Speed
2015-05-29 19:44:46.262 | 2015-05-29 19:44:46.205 | 
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
0curl: (7) Failed to connect to 169.254.169.254 port 80: Connection refused
2015-05-29 19:44:46.262 | 2015-05-29 19:44:46.206 | 
2015-05-29 19:44:46.262 | 2015-05-29 19:44:46.207 | 2015-05-29 19:43:31.293 
15907 INFO eventlet.wsgi.server [-] (15907) wsgi exited, is_accepting=True

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQXNzZXJ0aW9uRXJyb3I6IG1ldGFkYXRhIHByb3h5IHVucmVhY2hhYmxlIG9uXCIgQU5EIG1lc3NhZ2U6XCJiZWZvcmUgdGltZW91dFwiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDMzMjY0NzAwNjIzfQ==

20 hits in 7 days, all failures, check and gate queues.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461172

Title:
  
neutron.tests.functional.agent.test_l3_agent.MetadataL3AgentTestCase.test_access_to_metadata_proxy
  times out intermittently

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/90/186290/2/gate/gate-neutron-dsvm-
  functional/599b742/console.html.gz#_2015-05-29_19_44_46_232

  2015-05-29 19:44:46.232 | 2015-05-29 19:44:46.177 | 
neutron.tests.functional.agent.test_l3_agent.MetadataL3AgentTestCase.test_access_to_metadata_proxy
  2015-05-29 19:44:46.232 | 2015-05-29 19:44:46.178 | 
---

[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread John Dickinson
** No longer affects: swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Kiall Mac Innes
** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461158] [NEW] image table throws errors into log files trying to display size attr

2015-06-02 Thread Eric Peterson
Public bug reported:

I am seeing the images table fill up our log file with messages that are
completely useless / silly.  There should be a getattr() with a default
of None so this type of situation is handled more gracefully.


2015-06-01 20:50:11,741 32061 WARNING horizon.tables.base ^[[31;1mThe attribute 
size doesn't exist on .^[[0m
2015-06-01 20:50:42,434 32062 WARNING horizon.tables.base ^[[31;1mThe attribute 
size doesn't exist on .^[[0m
2015-06-01 20:50:42,434 32063 WARNING horizon.tables.base ^[[31;1mThe attribute 
size doesn't exist on .^[[0m
2015-06-01 20:51:13,339 32062 WARNING horizon.tables.base ^[[31;1mThe attribute 
size doesn't exist on .^[[0m

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461158

Title:
  image table throws errors into log files trying to display size attr

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am seeing the images table fill up our log file with messages that
  are completely useless / silly.  There should be a getattr() with a
  default of None so this type of situation is handled more gracefully.

  
  2015-06-01 20:50:11,741 32061 WARNING horizon.tables.base ^[[31;1mThe 
attribute size doesn't exist on .^[[0m
  2015-06-01 20:50:42,434 32062 WARNING horizon.tables.base ^[[31;1mThe 
attribute size doesn't exist on .^[[0m
  2015-06-01 20:50:42,434 32063 WARNING horizon.tables.base ^[[31;1mThe 
attribute size doesn't exist on .^[[0m
  2015-06-01 20:51:13,339 32062 WARNING horizon.tables.base ^[[31;1mThe 
attribute size doesn't exist on .^[[0m

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461148] [NEW] Setting L3 agent status to "down" should update HA router states

2015-06-02 Thread Assaf Muller
Public bug reported:

The main use case of L3 HA is recovering from the death of a machine that
is running an L3 agent. In this case, with bp/report-ha-router-master
merged, any active routers on that node will remain active in the
Neutron DB (as the dead agent cannot update the server about anything). A
backup node will pick up the routers previously active on the dead node
and will update their status, resulting in the Neutron DB having the
router 'active' on two different nodes. This can mess up l2pop, as HA
router interfaces will now be arbitrarily hosted on any of the 'active'
hosts.

The solution would be that when an L3 agent is marked as dead, the HA
router states on that agent are changed from active to standby, and the
router ports' 'host' value is updated to point to the new active agent.
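
A rough sketch of the intended state transition, using a simplified stand-in
data model rather than Neutron's real L3 HA schema or RPC/DB APIs:

    # Simplified stand-in model; not Neutron's actual schema or API.
    def demote_dead_agent(bindings, dead_agent_id):
        # Flip the dead agent's 'active' bindings to 'standby' so the DB no
        # longer reports the router active on two hosts at once.
        for binding in bindings:
            if binding['agent_id'] == dead_agent_id and binding['state'] == 'active':
                binding['state'] = 'standby'

    bindings = [{'router_id': 'r1', 'agent_id': 'agent-dead', 'state': 'active'},
                {'router_id': 'r1', 'agent_id': 'agent-alive', 'state': 'active'}]
    demote_dead_agent(bindings, 'agent-dead')
    # bindings now shows 'agent-dead' as standby and 'agent-alive' as active.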

Note: This bug is at least partially coupled with
https://bugs.launchpad.net/neutron/+bug/1365476. Ideally we could solve
the two bugs in two separate patches with no dependencies.

** Affects: neutron
 Importance: High
 Assignee: Mike Kolesnik (mkolesni)
 Status: New


** Tags: l2-pop l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461148

Title:
  Setting L3 agent status to "down" should update HA router states

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The main use case of L3 HA is recovering from the death of a machine
  that is running an L3 agent. In this case, with bp/report-ha-router-
  master merged, any active routers on that node will remain active in
  the Neutron DB (as the dead agent cannot update the server about
  anything). A backup node will pick up the routers previously active on
  the dead node and will update their status, resulting in the Neutron
  DB having the router 'active' on two different nodes. This can mess up
  l2pop, as HA router interfaces will now be arbitrarily hosted on any of
  the 'active' hosts.

  The solution would be that when an L3 agent is marked as dead, the HA
  router states on that agent are changed from active to standby, and
  the router ports' 'host' value is updated to point to the new active
  agent.

  Note: This bug is at least partially coupled with
  https://bugs.launchpad.net/neutron/+bug/1365476. Ideally we could
  solve the two bugs in two separate patches with no dependencies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461133] [NEW] Modular Layer 3 (ML3) Framework

2015-06-02 Thread Paul Carver
Public bug reported:

There are a variety of hardware and software options available to handle layer 
3 (routing) in Neutron environments with various tradeoffs. Currently a single 
Neutron instance can be configured to support only one routing mechanism at a 
time and this leads to a need to build multiple OpenStack zones based on 
different requirements.
This RFE is analogous to the ML2 framework. I would like to see a standard 
vendor neutral framework/API for creating/maintaining L3 routing constructs 
with a standard way for vendors/developers to build mechanism drivers to effect 
the desired routing on a variety of hardware and software platforms.
In terms of broader scope (perhaps not initial implementation) there are a 
number of L3 related developments taking place that could benefit from 
separating the logical (aka "type") constructs from the implementation (aka 
"mechanism") constructs, e.g. BGP VPNs, IPSec/SSL VPNs, Service Chaining, QoS.

The vision here is that the OpenStack community would standardize on
what virtual routers can do, then individual companies/people with an
interest in specific L3 implementations would build mechanism drivers to
do those things. An essential criterion is that it should be possible to
mix mechanisms within a single OpenStack zone rather than building
entirely separate Nova/Neutron/compute-node environments based on a
single L3 mechanism.

Some examples of ways to handle L3 currently: L3 agent on x86, SDN
software Contrail, Nuage, NSX, OVN, Plumgrid, and others, in hardware on
a variety of vendors' switch/router platforms Arista, Cisco, others.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461133

Title:
  Modular Layer 3 (ML3) Framework

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are a variety of hardware and software options available to handle 
layer 3 (routing) in Neutron environments with various tradeoffs. Currently a 
single Neutron instance can be configured to support only one routing mechanism 
at a time and this leads to a need to build multiple OpenStack zones based on 
different requirements.
  This RFE is analogous to the ML2 framework. I would like to see a standard 
vendor neutral framework/API for creating/maintaining L3 routing constructs 
with a standard way for vendors/developers to build mechanism drivers to effect 
the desired routing on a variety of hardware and software platforms.
  In terms of broader scope (perhaps not initial implementation) there are a 
number of L3 related developments taking place that could benefit from 
separating the logical (aka "type") constructs from the implementation (aka 
"mechanism") constructs, e.g. BGP VPNs, IPSec/SSL VPNs, Service Chaining, QoS.

  The vision here is that the OpenStack community would standardize on
  what virtual routers can do, then individual companies/people with an
  interest in specific L3 implementations would build mechanism drivers
  to do those things. An essential criterion is that it should be
  possible to mix mechanisms within a single OpenStack zone rather than
  building entirely separate Nova/Neutron/compute-node environments based
  on a single L3 mechanism.

  Some examples of ways to handle L3 currently: L3 agent on x86, SDN
  software Contrail, Nuage, NSX, OVN, Plumgrid, and others, in hardware
  on a variety of vendors' switch/router platforms Arista, Cisco,
  others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455397] Re: VMs which do not belong to a project will become garbage data, maybe wasting resources

2015-06-02 Thread Dolph Mathews
*** This bug is a duplicate of bug 967832 ***
https://bugs.launchpad.net/bugs/967832

** This bug has been marked a duplicate of bug 967832
   Resources owned by a project/tenant are not cleaned up after that project is 
deleted from keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1455397

Title:
  VMs which do not belong to a project will become garbage data, maybe
  wasting resources

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Now, when a user logs in to the dashboard as an administrator, he can delete 
a project without considering whether the project has VMs; furthermore, Keystone 
doesn't consider that either.
  So, in the nova database table 'instance' the VM data will always exist. No 
user will be able to use those VMs again. It is garbage data.
  Maybe it will waste resources too.

  I think one must delete the VMs first, then delete the project, or Nova
  should check whether the VMs are still valid; the ones without an
  effective project should be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1455397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461118] [NEW] limit maximum length of gen_random_resource

2015-06-02 Thread Martin Pavlásek
Public bug reported:

For example, the name of a flavor can't be longer than 25 characters, but this
function produces much longer strings, which can lead to exceptions
like 'abcdefgh' != 'abc'.
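
A minimal illustration of the idea; gen_random_resource_name below is a
stand-in, not the actual integration-test helper:

    # Illustrative stand-in, not the real integration-test helper.
    import random
    import string

    def gen_random_resource_name(resource="flavor", prefix="test", max_length=25):
        suffix = "".join(random.choice(string.ascii_lowercase) for _ in range(10))
        name = "%s_%s_%s" % (prefix, resource, suffix)
        # Truncate so fields with a length limit (e.g. a 25-character flavor
        # name) do not get silently cut by the API, which is what produced
        # mismatches like 'abcdefgh' != 'abc'.
        return name[:max_length]

    print(len(gen_random_resource_name()))  # always <= 25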

** Affects: horizon
 Importance: Undecided
 Assignee: Martin Pavlásek (mpavlase)
 Status: In Progress


** Tags: integration-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461118

Title:
  limit maximum length of gen_random_resource

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  For example, the name of a flavor can't be longer than 25 characters,
  but this function produces much longer strings, which can lead to
  exceptions like 'abcdefgh' != 'abc'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461103] [NEW] Creation of juno_initial migration is required

2015-06-02 Thread Ann Kamyshnikova
Public bug reported:

Havana is deprecated now, so the havana_initial migration should be removed
and replaced with juno_initial.

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Summary changed:

- Creation of juno_initial migration required
+ Creation of juno_initial migration is required

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461103

Title:
  Creation of juno_initial migration is required

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Havana is deprecated now, so the havana_initial migration should be
  removed and replaced with juno_initial.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461102] [NEW] cascade in orm relationships shadows ON DELETE CASCADE

2015-06-02 Thread Salvatore Orlando
Public bug reported:

In [1] there is a good discussion on how the 'cascade' property
specified for sqlalchemy.orm.relationship interacts with the 'ON DELETE
CASCADE' specified in DDL.

I stumbled on this when I was doing some DB access profiling and noticed
multiple DELETE statements were emitted for a delete subnet operation
[2], whereas I expected a single DELETE statement only; I expected that
the cascade behaviour configured on db tables would have taken care of
DNS servers, host routes, etc.

What is happening is that sqlalchemy is performing orm-level cascading
rather than relying on the database foreign key cascade options. And
it's doing this because we told it to do so. As the SQLAlchemy
documentation points out [3] there is no need to add the complexity of
orm relationships if foreign keys are correctly configured on the
database, and the passive_deletes option should be used.

Enabling this option in place of all the cascade options on the relationship 
caused a single DELETE statement to be issued [4].
This is not a massive issue (possibly the time spent in extra queries is just 
.5ms), but surely it is something worth doing - if nothing else because it 
seems Neutron is not using SQLAlchemy in the correct way.

As someone who's been making this mistake for ages, for what it's worth
this has been a moment where I realized that sometimes it's good
to be told RTFM.


[1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
[2] http://paste.openstack.org/show/256289/
[3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
[4] http://paste.openstack.org/show/256301/
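
A generic SQLAlchemy sketch of the pattern discussed above; the models are
illustrative, not Neutron's actual ones. With ondelete='CASCADE' on the
foreign key and passive_deletes=True on the relationship, deleting the parent
emits a single DELETE and the database removes the child rows:

    # Generic sketch; model/table names are illustrative, not Neutron's.
    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Subnet(Base):
        __tablename__ = 'subnets'
        id = Column(String(36), primary_key=True)
        # passive_deletes=True tells the ORM not to load and delete children
        # itself; the FK-level ON DELETE CASCADE does the work instead.
        dns_servers = relationship('DNSServer', passive_deletes=True)

    class DNSServer(Base):
        __tablename__ = 'dnsservers'
        address = Column(String(64), primary_key=True)
        subnet_id = Column(String(36),
                           ForeignKey('subnets.id', ondelete='CASCADE'),
                           primary_key=True)

Deleting a Subnet through a session then issues one DELETE against subnets,
and the backend's foreign key cascade removes the matching dnsservers rows.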

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: db

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461102

Title:
  cascade in orm relationships shadows ON DELETE CASCADE

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In [1] there is a good discussion on how the 'cascade' property
  specified for sqlalchemy.orm.relationship interacts with the 'ON DELETE
  CASCADE' specified in DDL.

  I stumbled on this when I was doing some DB access profiling and
  noticed multiple DELETE statements were emitted for a delete subnet
  operation [2], whereas I expected a single DELETE statement only; I
  expected that the cascade behaviour configured on db tables would have
  taken care of DNS servers, host routes, etc.

  What is happening is that sqlalchemy is performing orm-level cascading
  rather than relying on the database foreign key cascade options. And
  it's doing this because we told it to do so. As the SQLAlchemy
  documentation points out [3] there is no need to add the complexity of
  orm relationships if foreign keys are correctly configured on the
  database, and the passive_deletes option should be used.

  Enabling this option in place of all the cascade options on the relationship 
caused a single DELETE statement to be issued [4].
  This is not a massive issue (possibly the time spent in extra queries is just 
.5ms), but surely it is something worth doing - if nothing else because it 
seems Neutron is not using SQLAlchemy in the correct way.

  As someone who's been making this mistake for ages, for what it's worth
  this has been a moment where I realized that sometimes it's
  good to be told RTFM.

  
  [1] http://docs.sqlalchemy.org/en/latest/orm/cascades.html
  [2] http://paste.openstack.org/show/256289/
  [3] http://docs.sqlalchemy.org/en/latest/orm/collections.html#passive-deletes
  [4] http://paste.openstack.org/show/256301/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Samuel de Medeiros Queiroz
I am re-adding the projects which were marked as 'no longer affects' and
then invalidating them, so that we can keep track of the status of this
change for the whole OpenStack ecosystem.

People who marked as 'no longer affects' and respective projects are:

Samuel Merritt (torgomatic) on swift
Ruby Loo (rloo) on ironic
Thomas Herve (therve) on heat
Sergey Reshetnyak (sreshetniak) on sahara
Valeriy Ponomaryov (vponomaryov) on manila
Tim Simmons (tim-simmons-t) on designate

** Also affects: swift
   Importance: Undecided
   Status: New

** Also affects: ironic
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: sahara
   Importance: Undecided
   Status: New

** Also affects: manila
   Importance: Undecided
   Status: New

** Also affects: designate
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Orchestration API (Heat):
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Manila:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in OpenStack Object Storage (Swift):
  New
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461095] [NEW] User's tokens are not revoked

2015-06-02 Thread Vlad Okhrimenko
Public bug reported:

Steps:
1. Login to dashboard as admin
2. Create project (as example - `project_1`)
3. Create Member-user.
4. add Member-user  to `project_1`
5. In another browser login as Member-user
6. Go to `/project/instance` (the behavior is the same for other pages - 
`volumes`, `images`, `identity`)
7. Refresh (or go to the page) 3-5 times. Stay on this page.
8. Then, as admin, remove Member-user from `project_1`
9. As Member-user, try to go to `/project/instance` -- you don't get an error

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461095

Title:
  User's tokens are not revoked

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:
  1. Login to dashboard as admin
  2. Create project (as example - `project_1`)
  3. Create Member-user.
  4. add Member-user  to `project_1`
  5. In another browser login as Member-user
  6. Go to `/project/instance` (the behavior is the same for other pages - 
`volumes`, `images`, `identity`)
  7. Refresh (or go to the page) 3-5 times. Stay on this page.
  8. Then, as admin, remove Member-user from `project_1`
  9. As Member-user, try to go to `/project/instance` -- you don't get an error

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Chris Dent
Fixed in https://review.openstack.org/#/c/162881/

** Changed in: ceilometer
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  Confirmed
Status in OpenStack Magnum:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461081] [NEW] SMBFS volume attach race condition

2015-06-02 Thread Lucian Petrut
Public bug reported:

When the SMBFS volume backend is used and a volume is detached, the
corresponding SMB share is detached if it is no longer used.

This can cause issues if, at the same time, a different volume stored on
the same share is being attached, as the corresponding disk image will not
be available.

This affects the Libvirt driver as well as the Hyper-V one. The issue
can easily be fixed by using the share path as a lock when performing
attach/detach volume operations.

Trace: http://paste.openstack.org/show/256096/
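
A minimal sketch of the suggested locking pattern; the bookkeeping below is a
trivial stand-in for the real SMBFS volume code, only the per-share lock is
the point:

    # Illustrative sketch: serialize attach/detach per SMB share by locking on
    # the share path, so a detach cannot drop the share while another volume
    # on the same share is still being attached.
    from oslo_concurrency import lockutils

    mounted_shares = set()
    disks_in_use = {}  # share path -> set of attached disk names

    def attach_volume(share_path, disk_name):
        with lockutils.lock(share_path):
            mounted_shares.add(share_path)
            disks_in_use.setdefault(share_path, set()).add(disk_name)

    def detach_volume(share_path, disk_name):
        with lockutils.lock(share_path):
            disks_in_use.get(share_path, set()).discard(disk_name)
            if not disks_in_use.get(share_path):
                mounted_shares.discard(share_path)  # unmount only when unused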

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt smbfs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461081

Title:
  SMBFS volume attach race condition

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the SMBFS volume backend is used and a volume is detached, the
  corresponding SMB share is detached if it is no longer used.

  This can cause issues if, at the same time, a different volume stored
  on the same share is being attached, as the corresponding disk image
  will not be available.

  This affects the Libvirt driver as well as the Hyper-V one. The issue
  can easily be fixed by using the share path as a lock when performing
  attach/detach volume operations.

  Trace: http://paste.openstack.org/show/256096/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-06-02 Thread Lingxian Kong
Currently, the policy mechanism is not supported in Mistral.

** Changed in: mistral
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  Confirmed
Status in OpenStack Congress:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in OpenStack Magnum:
  New
Status in Mistral:
  Invalid
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Rally:
  Invalid
Status in Openstack Database (Trove):
  Invalid

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461068] [NEW] Add STT Type Driver to ML2

2015-06-02 Thread Gal Sagie
Public bug reported:

Add STT type driver for ML2

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

** Summary changed:

- Add STT Type Driver
+ Add STT Type Driver to ML2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461068

Title:
  Add STT Type Driver to ML2

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Add STT type driver for ML2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461069] [NEW] Add Geneve type driver for ML2

2015-06-02 Thread Gal Sagie
Public bug reported:

Add Geneve type driver for ML2

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461069

Title:
  Add Geneve type driver for ML2

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Add Geneve type driver for ML2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461065] [NEW] Security groups may break

2015-06-02 Thread Gary Kotton
Public bug reported:

Commit
https://github.com/openstack/nova/commit/171e5f8b127610d93a230a6f692d8fd5ea0d0301
converted instance dicts to objects. There are cases for the security
groups where these should still be dicts. This will cause security group
updates to break.

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: kilo-backport-potential

** Tags added: kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461065

Title:
  Security groups may break

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Commit
  
https://github.com/openstack/nova/commit/171e5f8b127610d93a230a6f692d8fd5ea0d0301
  converted instance dicts to objects. There are cases for the security
  groups where these should still be dicts. This will cause security
  group updates to break.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461055] [NEW] Can't delete instance stuck in deleting task

2015-06-02 Thread Tzach Shefi
Public bug reported:

Description of problem:  On a Juno HA deployment, nova over shared nfs
storage, when I deleted an instance it was "deleted":

2015-06-02 11:57:36.273 3505 INFO nova.virt.libvirt.driver [req-
4cc54412-a449-4c7a-bbe1-b21d202bcfe7 None] [instance:
7b6c8ad5-7633-4d53-9f84-93b12a701cd3] Deletion of
/var/lib/nova/instances/7b6c8ad5-7633-4d53-9f84-93b12a701cd3_del
complete

Also, the instance wasn't found with virsh list all. 
Yet nova list and Horizon both still show this instance as stuck in the 
deleting task, two hours+ have passed since I deleted it. 

Version-Release number of selected component (if applicable):
rhel 7.1
python-nova-2014.2.2-19.el7ost.noarch
openstack-nova-compute-2014.2.2-19.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
openstack-nova-common-2014.2.2-19.el7ost.noarch

How reproducible:
Unsure, it doesn't happen with every instance deletion, but happened more than 
this one time. 

Steps to Reproduce:
1. Boot an instance
2. Delete instance 
3. Instance is stuck in the deleting task in nova/Horizon. 

Actual results:
Stuck with a phantom "deleting" instance, which is basically already dead from 
Virsh's point of view. 

Expected results:
Instance should get deleted including from nova list/Horizon. 

Additional info:


Workaround: doing an openstack-service restart for nova on the compute node 
fixed my problem. The instance is totally gone from Nova/Horizon. 

instance virsh id instance-0d4d.log
instanceID  7b6c8ad5-7633-4d53-9f84-93b12a701cd3

| OS-EXT-STS:power_state   | 1  
|
| OS-EXT-STS:task_state| deleting   
|
| OS-EXT-STS:vm_state  | deleted
|
| OS-SRV-USG:launched_at   | 2015-05-28T11:06:33.00 
|
| OS-SRV-USG:terminated_at | 2015-06-02T08:57:37.00 
|
.. |
| status   | DELETED

Attached nova log from compute and controller.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "logs"
   
https://bugs.launchpad.net/bugs/1461055/+attachment/4408565/+files/logs.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461055

Title:
  Can't delete instance stuck in deleting task

Status in OpenStack Compute (Nova):
  New

Bug description:
  Description of problem:  On a Juno HA deployment, nova over shared nfs
  storage, when I deleted an instance it was "deleted":

  2015-06-02 11:57:36.273 3505 INFO nova.virt.libvirt.driver [req-
  4cc54412-a449-4c7a-bbe1-b21d202bcfe7 None] [instance:
  7b6c8ad5-7633-4d53-9f84-93b12a701cd3] Deletion of
  /var/lib/nova/instances/7b6c8ad5-7633-4d53-9f84-93b12a701cd3_del
  complete

  Also, the instance wasn't found with virsh list all. 
  Yet nova list and Horizon both still show this instance as stuck in the 
deleting task, two hours+ have passed since I deleted it. 

  Version-Release number of selected component (if applicable):
  rhel 7.1
  python-nova-2014.2.2-19.el7ost.noarch
  openstack-nova-compute-2014.2.2-19.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-common-2014.2.2-19.el7ost.noarch

  How reproducible:
  Unsure, it doesn't happen with every instance deletion, but happened more 
than this one time. 

  Steps to Reproduce:
  1. Boot an instance
  2. Delete instance 
  3. Instance is stuck in the deleting task in nova/Horizon. 

  Actual results:
  Stuck with a phantom "deleting" instance, which is basically already dead 
from Virsh's point of view. 

  Expected results:
  Instance should get deleted including from nova list/Horizon. 

  Additional info:

  
  Workaround: doing an openstack-service restart for nova on the compute node 
fixed my problem. The instance is totally gone from Nova/Horizon. 

  instance virsh id instance-0d4d.log
  instanceID  7b6c8ad5-7633-4d53-9f84-93b12a701cd3

  | OS-EXT-STS:power_state   | 1
  |
  | OS-EXT-STS:task_state| deleting 
  |
  | OS-EXT-STS:vm_state  | deleted  
  |
  | OS-SRV-USG:launched_at   | 2015-05-28T11:06:33.00   
  |
  | OS-SRV-USG:terminated_at | 2015-06-02T08:57:37.00   

[Yahoo-eng-team] [Bug 1461047] [NEW] description column is missing in firewall tables

2015-06-02 Thread Masco Kaliyamoorthy
Public bug reported:

In all the firewall tables the 'description' column is missing.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461047

Title:
  description column is missing in firewall tables

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In all the firewall tables the 'description' column is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461031] [NEW] Federation docs say domain is identified by name not id

2015-06-02 Thread Marek Denis
Public bug reported:

In [0], in the token example, we can see that the service domain is identified
by 'name', whereas [1] sets the 'id' field. The docs should reflect the code.

[0] 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#authenticating
[1] 
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L529-L533

** Affects: keystone
 Importance: Low
 Assignee: Marek Denis (marek-denis)
 Status: In Progress


** Tags: documentation

** Changed in: keystone
   Importance: Undecided => Low

** Changed in: keystone
 Assignee: (unassigned) => Marek Denis (marek-denis)

** Tags added: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461031

Title:
  Federation docs say domain is identified by name not id

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  In [0], in the token example, we can see that the service domain is
  identified by 'name', whereas [1] sets the 'id' field. The docs should
  reflect the code.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3-os-federation-ext.html#authenticating
  [1] 
https://github.com/openstack/keystone/blob/master/keystone/contrib/federation/utils.py#L529-L533

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461031/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461024] [NEW] Notification is not sent on security group rule creation

2015-06-02 Thread Oleg Bondarev
Public bug reported:

Security group rule before/after_create notifications are done in
create_security_group_rule() from SecurityGroupDbMixin.

But currently SecurityGroupServerRpcMixin is used to support the security group 
extension in plugins. 
It is derived from SecurityGroupDbMixin. Both have a create_security_group_rule() 
method, so in SecurityGroupServerRpcMixin it is overridden. 
Hence create_security_group_rule() from SecurityGroupDbMixin is not used => 
notifications are not sent.
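
A stripped-down illustration of the override problem; the classes below only
mimic the inheritance relationship and the 'events' list stands in for the
real before/after_create notifications:

    # Stripped-down illustration; 'events' stands in for the notifications.
    events = []

    class SecurityGroupDbMixin(object):
        def create_security_group_rule(self, context, rule):
            events.append('before_create')
            db_rule = dict(rule, id='rule-1')
            events.append('after_create')
            return db_rule

    class SecurityGroupServerRpcMixin(SecurityGroupDbMixin):
        def create_security_group_rule(self, context, rule):
            # Overrides the parent and never calls it, so the notifications
            # above are skipped entirely.
            return dict(rule, id='rule-1')

    SecurityGroupServerRpcMixin().create_security_group_rule(None, {'port': 22})
    print(events)  # [] -- no notifications were sent

One way out is for the overriding method to call the parent's implementation
(or to emit the notifications itself) so the events fire regardless of which
mixin is in use.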

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461024

Title:
  Notification is not sent on security group rule creation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Security group rule before/after_create notifications are done in
  create_security_group_rule() from SecurityGroupDbMixin.

  But currently SecurityGroupServerRpcMixin is used to support the security group 
extension in plugins. 
  It is derived from SecurityGroupDbMixin. Both have a 
create_security_group_rule() method, so in SecurityGroupServerRpcMixin it is 
overridden. 
  Hence create_security_group_rule() from SecurityGroupDbMixin is not used => 
notifications are not sent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461018] [NEW] description column is missing in vpn tables

2015-06-02 Thread Masco Kaliyamoorthy
Public bug reported:

The description column is not present in the VPN tables.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1461018

Title:
  description column is missing in vpn tables

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The description column is not present in the VPN tables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1461018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1041332] Re: Image delete notification sends old image info instead of deleted image info

2015-06-02 Thread Erno Kuvaja
Cleaning up; let's open this again if it is still an issue.

** Changed in: glance
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1041332

Title:
  Image delete notification sends old image info instead of deleted
  image info

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  One of the earlier commits (51faf289) introduced this change:

  
  -image = registry.delete_image_metadata(req.context, id)
  +registry.delete_image_metadata(req.context, id)

  so later when the notification is sent out

   self.notifier.info('image.delete', image)

  it sends the old image info instead of deleted image info. This in
  turn results in the image having fields with deleted=False and
  deleted_at=null

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1041332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461000] [NEW] [rfe] openvswitch based firewall driver

2015-06-02 Thread Jakub Libosvar
Public bug reported:

Nowadays, when using openvswitch-agent with security groups we must use
hybrid bridging, i.e. per instance we have both an openvswitch bridge and
a linux bridge. The rationale behind this approach is to set filtering
rules matching on the given linux bridge.

We can get rid of the linux bridge if filtering is done directly in
openvswitch via openflow rules. The benefits of this approach are better
throughput in the data plane due to removal of the linux bridge and faster
rule filtering due to not using the physdev extension in iptables. Another
improvement is in the control plane, because currently setting rules via
the iptables firewall driver doesn't scale well.

This RFE requests a new firewall driver that is capable of filtering
packets based on specified security groups using openvswitch only.
Requirement for OVS is to have conntrack support which is planned to be
released with OVS 2.4.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461000

Title:
  [rfe] openvswitch based firewall driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Nowadays, when using openvswitch-agent with security groups we must
  use hybrid bridging, i.e. per instance we have both an openvswitch
  bridge and a linux bridge. The rationale behind this approach is to set
  filtering rules matching on the given linux bridge.

  We can get rid of the linux bridge if filtering is done directly in
  openvswitch via openflow rules. The benefits of this approach are
  better throughput in the data plane due to removal of the linux bridge
  and faster rule filtering due to not using the physdev extension in
  iptables. Another improvement is in the control plane, because
  currently setting rules via the iptables firewall driver doesn't scale
  well.

  This RFE requests a new firewall driver that is capable of filtering
  packets based on specified security groups using openvswitch only.
  Requirement for OVS is to have conntrack support which is planned to
  be released with OVS 2.4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460060] Re: Glance v1 and v2 api returns 500 while passing --min-ram and --min-disk greater than 2^(31) max value

2015-06-02 Thread Erno Kuvaja
On our fail-early principle we should not send these values to the
server in the first place. This does not mean that we shouldn't fix the
server side as well.
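
A minimal client-side check in the spirit of the comment above; the limit and
the error wording are illustrative, not python-glanceclient's actual
validation:

    # Illustrative fail-early check; not python-glanceclient's actual code.
    INT32_MAX = 2 ** 31 - 1

    def validate_int32(name, value):
        value = int(value)
        if not 0 <= value <= INT32_MAX:
            raise ValueError("%s must be between 0 and %d, got %d"
                             % (name, INT32_MAX, value))
        return value

    validate_int32("min-disk", 100)          # fine
    try:
        validate_int32("min-ram", 2 ** 31)   # rejected before any request
    except ValueError as exc:
        print(exc)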

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1460060

Title:
  Glance v1 and v2 api returns 500 while passing --min-ram and --min-
  disk greater than 2^(31) max value

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Python client library for Glance:
  New

Bug description:
  glance image-create --name test --container-format bare --disk-format raw 
--file delete_images.py --min-disk 100
  HTTPInternalServerError (HTTP 500)

  glance image-create --name test --container-format bare --disk-format raw 
--file delete_images.py --min-ram 100
  HTTPInternalServerError (HTTP 500)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1460060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455397] Re: VMs which do not belong to a project will become garbage data, maybe wasting resources

2015-06-02 Thread Markus Zoeller
I added "Keystone" as possible affected project. I'm not sure if the
responsibility of checking this is in Nova.

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1455397

Title:
  VMs which do not belong to a project will become garbage data, maybe
  wasting resources

Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Now, when a user logs in to the dashboard as an administrator, he can delete 
a project without considering whether the project has VMs; furthermore, Keystone 
doesn't consider that either.
  So, in the nova database table 'instance' the VM data will always exist. No 
user will be able to use those VMs again. It is garbage data.
  Maybe it will waste resources too.

  I think one must delete the VMs first, then delete the project, or Nova
  should check whether the VMs are still valid; the ones without an
  effective project should be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1455397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460973] [NEW] libvirt: serial console service causes live migration to fail

2015-06-02 Thread Alexander Schmidt
Public bug reported:

When I try to live migrate an instance that uses the serial console, the
migration fails due to a libvirt error.

I do not have a solution for this problem yet; please let me know if
anyone else hits this or if it might be a configuration problem.

Command I ran:

# nova live-migration demo-3

Logs on the source hypervisor:

2015-06-02 10:34:02.774 75696 INFO nova.virt.block_device 
[req-3db85858-4602-46d7-b6c4-104ed301d721 7992de4b6d3f415180345e505a534ada 
d35cb5f0ec6f4a26bdbc7b19dc5c2ca8 - - -] preserve multipath_id 
36005076307ffc6a6110d
2015-06-02 10:34:08.147 75696 ERROR nova.virt.libvirt.driver [-] [instance: 
85618d64-9019-410f-a5cf-b3f2ed508270] Live Migration failure: internal error: 
process exited while connecting to monitor: 2015-06-02T08:34:07.958411Z 
qemu-system-s390x: -chardev 
socket,id=charconsole0,host=myhost,port=10007,server,nowait: Failed to bind 
socket: Cannot assign requested address

2015-06-02 10:34:08.175 75696 ERROR nova.virt.libvirt.driver [-]
[instance: 85618d64-9019-410f-a5cf-b3f2ed508270] Migration operation has
aborted

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460973

Title:
  libvirt: serial console service causes live migration to fail

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I try to live migrate an instance that uses the serial console,
  the migration fails due to a libvirt error.

  I do not have a solution for this problem yet; please let me know if
  anyone else hits this or if it might be a configuration problem.

  Command I ran:

  # nova live-migration demo-3

  Logs on the source hypervisor:

  2015-06-02 10:34:02.774 75696 INFO nova.virt.block_device 
[req-3db85858-4602-46d7-b6c4-104ed301d721 7992de4b6d3f415180345e505a534ada 
d35cb5f0ec6f4a26bdbc7b19dc5c2ca8 - - -] preserve multipath_id 
36005076307ffc6a6110d
  2015-06-02 10:34:08.147 75696 ERROR nova.virt.libvirt.driver [-] [instance: 
85618d64-9019-410f-a5cf-b3f2ed508270] Live Migration failure: internal error: 
process exited while connecting to monitor: 2015-06-02T08:34:07.958411Z 
qemu-system-s390x: -chardev 
socket,id=charconsole0,host=myhost,port=10007,server,nowait: Failed to bind 
socket: Cannot assign requested address

  2015-06-02 10:34:08.175 75696 ERROR nova.virt.libvirt.driver [-]
  [instance: 85618d64-9019-410f-a5cf-b3f2ed508270] Migration operation
  has aborted

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp