[Yahoo-eng-team] [Bug 1324417] Re: fwaas:shared firewall rule is not able to use when it is already attached in other tenant's firewall policy

2014-07-08 Thread Koteswara Rao Kelam
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324417

Title:
  fwaas:shared firewall rule is not able to use when it is already
  attached in other tenant's firewall policy

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  DESCRIPTION:
  A firewall rule shared by the admin cannot be used in a tenant's firewall
  policy when the rule is already attached to another tenant's or the admin's
  firewall policy.
  Steps to Reproduce:
  1. Create a firewall rule r1 with shared = True from the admin tenant.
  2. Create a firewall policy p1 and attach the above firewall rule r1 from the
     admin tenant.
  3. Try to create a firewall policy from another tenant with the above
     firewall rule r1.
  Actual Results:
  The CLI throws an error saying the rule is in use and does not create the
  firewall policy.
   

  root@IGA-OSC:~#  fwrc --protocol icmp --action deny --name a2 --shared
  Created a new firewall_rule:
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | action                  | deny                                 |
  | description             |                                      |
  | destination_ip_address  |                                      |
  | destination_port        |                                      |
  | enabled                 | True                                 |
  | firewall_policy_id      |                                      |
  | id                      | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
  | ip_version              | 4                                    |
  | name                    | a2                                   |
  | position                |                                      |
  | protocol                | icmp                                 |
  | shared                  | True                                 |
  | source_ip_address       |                                      |
  | source_port             |                                      |
  | tenant_id               | 0ad385e00e97476e9456945c079a21ea     |
  +-------------------------+--------------------------------------+
  root@IGA-OSC:~#  fwpc ap --firewall-rule a2
  Created a new firewall_policy:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | audited        | False                                |
  | description    |                                      |
  | firewall_rules | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
  | id             | 800bea29-f165-421e-8e56-a0ec9af2bfc0 |
  | name           | ap                                   |
  | shared         | False                                |
  | tenant_id      | 0ad385e00e97476e9456945c079a21ea     |
  +----------------+--------------------------------------+
  root@IGA-OSC:~# fwrs a2
  +-------------------------+--------------------------------------+
  | Field                   | Value                                |
  +-------------------------+--------------------------------------+
  | action                  | deny                                 |
  | description             |                                      |
  | destination_ip_address  |                                      |
  | destination_port        |                                      |
  | enabled                 | True                                 |
  | firewall_policy_id      | 800bea29-f165-421e-8e56-a0ec9af2bfc0 |
  | id                      | 15f3c1a8-f813-4809-ab44-00d12f7ff8ad |
  | ip_version              | 4                                    |
  | name                    | a2                                   |
  | position                | 1                                    |
  | protocol                | icmp                                 |
  | source_ip_address       |                                      |
  | source_port             |                                      |
  | tenant_id               | 0ad385e00e97476e9456945c079a21ea     |
  +-------------------------+--------------------------------------+

  From other tenant
  =================

  root@IGA-OSC:~# fwpc p3 --firewall-rule a2
  409-{u'NeutronError': {u'message': u'Firewall Rule 
15f3c1a8-f813-4809-ab44-00d12f7ff8ad is being used.', u'type': 
u'FirewallRuleInUse', u'detail': u''}}
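
  The fwrc/fwpc/fwrs commands above appear to be local shell aliases for the
  neutron firewall CLI. A minimal python-neutronclient sketch of the same
  reproduction follows; the credentials and auth_url are placeholders, and the
  create_firewall_rule/create_firewall_policy calls assume the FWaaS extension
  is enabled.

      from neutronclient.v2_0 import client

      # Admin tenant: create a shared rule and attach it to a policy.
      admin = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
      rule = admin.create_firewall_rule(
          {'firewall_rule': {'name': 'a2', 'protocol': 'icmp',
                             'action': 'deny', 'shared': True}})
      rule_id = rule['firewall_rule']['id']
      admin.create_firewall_policy(
          {'firewall_policy': {'name': 'ap', 'firewall_rules': [rule_id]}})

      # Other tenant: reusing the shared rule fails with 409 FirewallRuleInUse.
      demo = client.Client(username='demo', password='secret',
                           tenant_name='demo',
                           auth_url='http://controller:5000/v2.0')
      demo.create_firewall_policy(
          {'firewall_policy': {'name': 'p3', 'firewall_rules': [rule_id]}})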

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1233546] Re: Wrong Docstring format in some scheduler/driver.py methods

2014-07-08 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233546

Title:
  Wrong Docstring format in some scheduler/driver.py methods

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  In "handle_scheduler_error" method:
  Move the description comment to be the method Docstring 
  In "instance_update_db" method:
  Fix the Docstring format in "instance_update_db"methods
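
  An illustrative before/after of the requested change (the signature below is
  a placeholder, not the actual one from scheduler/driver.py):

      # Before: the description lives in a comment instead of a docstring.
      def handle_scheduler_error(context, ex, instance_uuid):
          # Handle a scheduling error by setting the instance to ERROR
          # and sending a notification.
          ...

      # After: the same description becomes the method docstring.
      def handle_scheduler_error(context, ex, instance_uuid):
          """Handle a scheduling error.

          Set the instance to ERROR and send a notification.
          """
          ...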

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1233546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255001] Re: Fix exception for os-migrateLive

2014-07-08 Thread Joe Gordon
https://review.openstack.org/58469 was merged

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255001

Title:
  Fix exception for os-migrateLive

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Several exceptions are not handled correctly in the os-migrateLive action:
  1. If the server state conflicts with live migration, a 400 is raised; we
  should raise HTTPConflict instead of HTTPBadRequest.
  2. The error message is not accurate for several exceptions, such as:
  exception.NoValidHost,
  exception.InvalidLocalStorage,
  exception.InvalidSharedStorage,
  exception.MigrationPreCheckError
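
  A sketch of the suggested mapping using webob exceptions; the handler below
  is illustrative, not the actual os-migrateLive extension code, and the
  compute API call is abbreviated:

      import webob.exc as exc

      from nova import exception

      def _migrate_live(self, req, id, body):
          try:
              self.compute_api.live_migrate(...)  # placeholder call
          except exception.InstanceInvalidState as e:
              # A state conflict should be a 409, not a 400.
              raise exc.HTTPConflict(explanation=e.format_message())
          except (exception.NoValidHost,
                  exception.InvalidLocalStorage,
                  exception.InvalidSharedStorage,
                  exception.MigrationPreCheckError) as e:
              # Return the specific error message instead of a generic one.
              raise exc.HTTPBadRequest(explanation=e.format_message())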

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1255001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 981294] Re: use FQDN for hostnames of services

2014-07-08 Thread Joe Gordon
This approach was attempted but caused some issues: please see
https://review.openstack.org/#/c/24080/

** Changed in: nova
   Status: In Progress => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/981294

Title:
  use FQDN for hostnames of services

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  At the moment a running service registers in the database using its
  short hostname. I would prefer to see the FQDN in the database so that
  there is a unique identifier for each host; the short name of two hosts
  could be the same in a bigger environment working with several
  availability zones.

  The short name could be used if the FQDN is not available on a system.
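
  A minimal sketch of the requested behaviour using only the standard library
  (this is not the nova code, just an illustration of the FQDN-with-fallback
  idea):

      import socket

      def service_host():
          """Prefer the FQDN as the unique identifier; fall back to the
          short hostname if no FQDN is configured on the system."""
          fqdn = socket.getfqdn()
          if fqdn and '.' in fqdn:
              return fqdn
          return socket.gethostname()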

  
  mysql> select host from services;
  +--------+
  | host   |
  +--------+
  | os0007 |
  | plum   |
  | plum   |
  | fig    |
  | fig    |
  +--------+
  5 rows in set (0.01 sec)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/981294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1126375] Re: Disable libvirt base file cleanup by default

2014-07-08 Thread Joe Gordon
patch was abandoned

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1126375

Title:
  Disable libvirt base file cleanup by default

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hello,

  This is related to the following previous bugs:

  bug #1029674
  bug #1078594

  I see that the image_cache_manager_interval was temporarily changed to
  0 to effectively disable images from being removed. It looks like it
  has been restored to a non-zero number.

  Both as a systems administrator responsible for OpenStack environments
  and as someone who was bit by this feature being enabled, I feel that
  remove_unused_base_images should be False as default. Having a
  "delete" setting turned on by default can be very dangerous. No matter
  how many precautions are taken, some scenarios will still slip by. In
  my opinion, it is a better operational problem to have someone report
  an abundance of unused images that they can opt-in to delete rather
  than having a user report images that have been deleted without them
  understanding why and not being able to restore them.

  If I understand the difference between remove_unused_base_images and
  image_cache_manager_interval, the latter will not enable any of the
  image cache features to run at all. In an odd way, I still find the
  cache manager useful to run - as long as if nothing is removed without
  the user explicitly configuring Nova to do so. If the logs are
  reporting that certain files are up for deletion, this can help the
  user understand what will happen if they turn
  remove_unused_base_images on.

  Please let me know if you have any questions or if I am not
  understanding something correctly.

  Thanks,
  Joe

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1126375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189385] Re: quantum-server hung up it's listening port

2014-07-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189385

Title:
  quantum-server hung up it's listening port

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  After running for a week or so under load, quantum-server hung up its
  listening socket, but it still had 173 other sockets open. This
  naturally caused everything to grind to a halt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338844] Re: "FixedIpLimitExceeded: Maximum number of fixed ips exceeded" in tempest nova-network runs since 7/4

2014-07-08 Thread Joe Gordon
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
   Importance: Undecided => High

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338844

Title:
  "FixedIpLimitExceeded: Maximum number of fixed ips exceeded" in
  tempest nova-network runs since 7/4

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  New

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQnVpbGRBYm9ydEV4Y2VwdGlvbjogQnVpbGQgb2YgaW5zdGFuY2VcIiBBTkQgbWVzc2FnZTpcImFib3J0ZWQ6IEZhaWxlZCB0byBhbGxvY2F0ZSB0aGUgbmV0d29yayhzKSB3aXRoIGVycm9yIE1heGltdW0gbnVtYmVyIG9mIGZpeGVkIGlwcyBleGNlZWRlZCwgbm90IHJlc2NoZWR1bGluZy5cIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNDc3OTE1MzY1MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Saw it here:

  http://logs.openstack.org/63/98563/5/check/check-tempest-dsvm-
  postgres-full/1472e7b/logs/screen-n-cpu.txt.gz?level=TRACE

  Looks like it's only in jobs using nova-network.

  Started on 7/4, 70 failures in 7 days, check and gate, multiple
  changes.

  Maybe related to https://review.openstack.org/104581.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1080921] Re: Things that should not be in the main process

2014-07-08 Thread Joe Gordon
** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1080921

Title:
  Things that should not be in the main process

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  There seem to be quite a lot of 'manager.periodic_task' methods that
  should really be done in background processes.

  For example:

  - report_driver_status
  - sync_power_states
  - reclaim_queued_deletes
  - update_available_resource
  - cleanup_running_deleted_instances
  - run_image_cache_manager_pass

  In a production system it is highly unlikely that you want to
  temporarily make your compute node 'inactive/blocking' due to the
  potential of periodic tasks running (which might cause blocking
  calls). In fact it seems pretty odd that it was even considered a
  possibility to allow this to happen to begin with. These seem much
  better served by separate processes that can run as often as they want
  (and will not affect the main compute process).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1080921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1048562] Re: Request for "list/search of available options" in nova api

2014-07-08 Thread Joe Gordon
This is a feature request and not a bug per se.

** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1048562

Title:
  Request for "list/search of available options" in nova api

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  One can search with filters in the URL, as in the example below:

  http://dev.splunk.com/view/rest-tutorial-overview/SP-CAAADP8

  search Filters the returned objects by matching any eligible field
  value against the search expression. For example: search=foo would
  match any object that has the value 'foo' as a substring of at least
  one of the eligible fields. The search can also be restricted to a
  single field as follows: search=field_name%3Dfield_value (which when
  URL decoded is: field_name=field_value).

  In the same way, I need to be able to frame a query to the nova API that
  returns the options available from the API, such as server, server/detail,
  images, etc. This could considerably ease QA's ability to do manual testing
  using curl across API upgrades, and sufficiently advanced users would not
  have to read the API docs for every minor doubt or clarification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1048562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339039] Re: ERROR: Store for scheme glance.store.rbd.Store not found

2014-07-08 Thread Zhi Yan Liu
According to the error message "Store for scheme glance.store.rbd.Store
not found", your configuration of "default_store" is wrong; in your case
it should be 'rbd' instead of 'glance.store.rbd.Store'. As the option's
help message explains, the value should be the scheme of the store, not
the module name of the store driver. Please try it again.
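
With that change the configuration excerpt from the report would look roughly
like this (the known_stores line, which does take module paths, can stay as
it is):

    default_store = rbd
    known_stores = glance.store.rbd.Store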

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339039

Title:
  ERROR: Store for scheme glance.store.rbd.Store not found

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  glance-api is unable to initialize the rbd storage backend.

  Running glance-api with only rbd backend configured and disabling all
  the other backends causes the service to exit with the following
  error:

  root@control:~# glance-api
2 2014-07-07 17:48:03.950 1874 DEBUG glance.store [-] Attempting to import 
store glance.store.rbd.Store _get_store_class 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:168
3 2014-07-07 17:48:03.955 1874 DEBUG glance.store [-] Registering store 
 with schemes ('rbd',) create_stores 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:210
4 2014-07-07 17:48:03.955 1874 DEBUG glance.store.base [-] Late loading 
location class glance.store.rbd.StoreLocation get_store_location_class 
/usr/lib/python2.7/dist-packages/glance/store/base.py:80
5 2014-07-07 17:48:03.956 1874 DEBUG glance.store.location [-] Registering 
scheme rbd with {'store_class': , 
'location_class': } 
register_scheme_map /usr/lib/python2.7/dist-packages/glance/store/location.py:86
6 2014-07-07 17:48:03.958 1874 DEBUG glance.api.policy [-] Loading policy 
from /etc/glance/policy.json _read_policy_file 
/usr/lib/python2.7/dist-packages/glance/api/policy.py:106
7 2014-07-07 17:48:03.959 1874 DEBUG glance.api.policy [-] Loaded policy 
rules: {u'get_task': '@', u'get_image_location': '@', u'add_image': '@', 
u'modify_image': '@', u'manage_image_cache': 'role:admin', 
u'delete_member': '@', u'get_images': '@', u'delete_image': '@', 
u'publicize_image': '@', u'get_member': '@', u'add_member': '@', 
u'set_image_location': '@', u'get_image': '@', u'modify_member': '@', 
u'context_is_admin': 'role:admin', u'upload_image': '@', u'modify_task': 
'@', u'get_members': '@', u'get_tasks': '@', u'add_task': '@', u'default': '@', 
u'delete_image_location': '@', u'copy_from': '@', u'download_image': '@'} 
load_rules /usr/lib/python2.7/dist-packages/glance/api/policy.py:85
8 ERROR: Store for scheme glance.store.rbd.Store not found

  root@control:~# cat /etc/glance/glance-api.conf
  [...]
  default_store = glance.store.rbd.Store
  known_stores = glance.store.rbd.Store
  [...]

  root@control:~# dpkg -l | grep glance
  ii  glance  1:2014.1-0ubuntu1  
all  OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api  1:2014.1-0ubuntu1  
all  OpenStack Image Registry and Delivery Service - API
  ii  glance-common   1:2014.1-0ubuntu1  
all  OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry 1:2014.1-0ubuntu1  
all  OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance   1:2014.1-0ubuntu1  
all  OpenStack Image Registry and Delivery Service - Python library
  ii  python-glanceclient 1:0.12.0-0ubuntu1  
all  Client library for Openstack glance server.

  After some time of debugging I figured out that the problem is caused
  by the function "get_store_from_scheme(context, scheme, loc=None)".
  The argument "context" is evaluated to "glance.store.rbd.Store", not to
  "rbd".

  Applying the attached quick and dirty fix patch has solved the
  problem.

  Further investigation needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339462] [NEW] vmware: cannot use adaptertype Paravirtual

2014-07-08 Thread Koichi Yoshigoe
Public bug reported:

nova icehouse Ubuntu14.04 
Version: 1:2014.1-0ubuntu1.2

Current Nova vmwareapi cannot configure guest VM's SCSI controller type as 
"Paravirtual".
(though this requires vmwaretools running on the guest VM)

** Affects: nova
 Importance: Undecided
 Assignee: Koichi Yoshigoe (degdoo)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Koichi Yoshigoe (degdoo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339462

Title:
  vmware: cannot use adaptertype Paravirtual

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova icehouse Ubuntu14.04 
  Version: 1:2014.1-0ubuntu1.2

  Current Nova vmwareapi cannot configure guest VM's SCSI controller type as 
"Paravirtual".
  (though this requires vmwaretools running on the guest VM)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339439] [NEW] TypeError: object of type 'NoneType' has no len()

2014-07-08 Thread angeloudy
Public bug reported:

2014-07-09 17:31:21.408 31964 ERROR keystone.common.wsgi [-] object of type 
'NoneType' has no len()
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in 
__call__
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi result = 
method(context, **params)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 98, in 
authenticate
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi context, auth)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 272, in 
_authenticate_local
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi password=password)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/notifications.py", line 253, in 
wrapper
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi result = 
f(wrapped_self, context, user_id, *args, **kwargs)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 189, in 
wrapper
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 281, in 
authenticate
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi ref = 
driver.authenticate(user_id, password)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/backends/sql.py", line 110, 
in authenticate
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi user_ref = 
self._get_user(session, user_id)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/backends/sql.py", line 136, 
in _get_user
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi user_ref = 
session.query(User).get(user_id)
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib64/python2.6/site-packages/sqlalchemy/orm/query.py", line 799, in get
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi if len(ident) != 
len(mapper.primary_key):
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi TypeError: object of 
type 'NoneType' has no len()
2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi
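
The traceback shows a None user_id reaching SQLAlchemy's query.get(). A sketch
of the kind of guard that would turn this into a clean authentication failure
(illustrative only, not the actual keystone fix):

    from keystone import exception

    def _get_user(self, session, user_id):
        # Reject a missing user id before it reaches query.get(), which
        # otherwise fails with "object of type 'NoneType' has no len()".
        if user_id is None:
            raise exception.UserNotFound(user_id=user_id)
        user_ref = session.query(User).get(user_id)
        if not user_ref:
            raise exception.UserNotFound(user_id=user_id)
        return user_ref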

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1339439

Title:
  TypeError: object of type 'NoneType' has no len()

Status in OpenStack Identity (Keystone):
  New

Bug description:
  2014-07-09 17:31:21.408 31964 ERROR keystone.common.wsgi [-] object of type 
'NoneType' has no len()
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 207, in 
__call__
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi result = 
method(context, **params)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 98, in 
authenticate
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi context, auth)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 272, in 
_authenticate_local
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi 
password=password)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/notifications.py", line 253, in 
wrapper
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi result = 
f(wrapped_self, context, user_id, *args, **kwargs)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 189, in 
wrapper
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 281, in 
authenticate
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi ref = 
driver.authenticate(user_id, password)
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.6/site-packages/keystone/identity/backends/sql.py", line 110, 
in authenticate
  2014-07-09 17:31:21.408 31964 TRACE keystone.common.wsgi user_ref = 
self._get_user(se

[Yahoo-eng-team] [Bug 1339401] [NEW] NSX: don't check router interface on network delete

2014-07-08 Thread Salvatore Orlando
Public bug reported:

Since commit b50e66f, router interfaces are no longer automatically deleted
when a network is deleted. Instead, a 409 response code is returned.

The NSX plugin still has logic to ensure correct backend state in case router
interfaces are present on network_delete. This logic is useless and can be
removed. It also performs some unnecessary queries, so it affects plugin
performance as well.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339401

Title:
  NSX: don't check router interface on network delete

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Since commit b50e66f, router interfaces are no longer automatically deleted
  when a network is deleted. Instead, a 409 response code is returned.

  The NSX plugin still has logic to ensure correct backend state in case
  router interfaces are present on network_delete. This logic is useless and
  can be removed. It also performs some unnecessary queries, so it affects
  plugin performance as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339386] [NEW] Reboot should not require a Glance.show

2014-07-08 Thread Rick Harris
Public bug reported:

When a host is rebooted, we use `resume_state_on_host_boot` to spin back
up the instances.

In `libvirt` this translates to a bunch of `_hard_reboot` calls.

The problem is that `_hard_reboot` calls `_get_guest_xml`, which then
calls `get_image_metadata` (since no `image_meta` is passed in). This in
turn triggers a call to `glance.show`, which will fail.

The reason the call fails is that the glanceclient needs user
credentials to make this call, but since this is a server-side
triggered action (host rebooting), we don't have a user-request context.

At a high level, this is an issue of user impersonation for server-side
triggered actions, which we don't have a good story for yet.

We do, however, have a workaround for this particular case.

We can use the cached image_metadata that we store with the instance.

In fact `_hard_reboot` is already using it, so we just need to pass that
`image_meta` into `_get_guest_xml` and it will work.
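
A sketch of the proposed change; _rebuild_image_meta below is a hypothetical
helper standing in for however the cached metadata is read back from
instance.system_metadata, and the _get_guest_xml signature is abbreviated:

    # Inside the libvirt driver's _hard_reboot(), roughly:
    image_meta = _rebuild_image_meta(instance.system_metadata)  # no Glance call
    xml = self._get_guest_xml(context, instance, network_info, disk_info,
                              image_meta=image_meta,
                              block_device_info=block_device_info)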

** Affects: nova
 Importance: Undecided
 Assignee: Rick Harris (rconradharris)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Rick Harris (rconradharris)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339386

Title:
  Reboot should not require a Glance.show

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When a host is rebooted, we use `resume_state_on_host_boot` to spin
  back up the instances.

  In `libvirt` this translates to a bunch of `_hard_reboot` calls.

  The problem is that `_hard_reboot` calls `_get_guest_xml`, which then
  calls `get_image_metadata` (since no `image_meta` is passed in). This
  in turn triggers a call to `glance.show`, which will fail.

  The reason the call fails is that the glanceclient needs user
  credentials to make this call, but since this is a server-side
  triggered action (host rebooting), we don't have a user-request
  context.

  At a high level, this is an issue of user impersonation for server-
  side triggered actions, which we don't have a good story for yet.

  We do, however, have a workaround for this particular case.

  We can use the cached image_metadata that we store with the instance.

  In fact `_hard_reboot` is already using it, so we just need to pass
  that `image_meta` into `_get_guest_xml` and it will work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339382] [NEW] openrc does not use region to find auth_url

2014-07-08 Thread Matt Fischer
Public bug reported:

In the code that generates the openrc file, the region is not passed to
the code that finds the keystone endpoint. As a result, you end up with
the first endpoint in the list.
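
A sketch of the intended selection, filtering a keystone service catalog (as
returned in a token) by region; this is illustrative, not the Horizon code:

    def identity_url_for_region(service_catalog, region):
        """Return the first identity publicURL in the requested region,
        falling back to the first endpoint if the region is not found."""
        fallback = None
        for service in service_catalog:
            if service.get('type') != 'identity':
                continue
            for endpoint in service.get('endpoints', []):
                fallback = fallback or endpoint.get('publicURL')
                if endpoint.get('region') == region:
                    return endpoint.get('publicURL')
        return fallback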

** Affects: horizon
 Importance: Undecided
 Assignee: Matt Fischer (mfisch)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Matt Fischer (mfisch)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339382

Title:
  openrc does not use region to find auth_url

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the code that generates the openrc file, the region is not passed to
  the code that finds the keystone endpoint. As a result, you end up with
  the first endpoint in the list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339271] Re: H104 cleanup in __init__ files

2014-07-08 Thread Fawad Khaliq
*** This bug is a duplicate of bug 1329017 ***
https://bugs.launchpad.net/bugs/1329017

Duplicate of https://bugs.launchpad.net/neutron/+bug/1329017

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339271

Title:
  H104 cleanup in __init__ files

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  As per H104 in [1] guidelines, files with no code shouldn’t contain
  any license header nor comments, and must be left completely empty.

  We have quite a few __init__.py files that will fail this hacking
  rule. These need to be cleaned up.

  [1] http://docs.openstack.org/developer/hacking/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339342] [NEW] VMware: boot from sparse image results in OS not found

2014-07-08 Thread Eric Brown
Public bug reported:

I am attempting to boot an instance from the cirros  image found here:
http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk

So I originally imported the image without setting any of the vmware
properties.  So when I go to boot from this image, I get "Operating
System not found" in the VM.

This was my user error, so I then used the glance command line image-
update to set those properties after the image was already created.
Then I tried another boot from this image.  I got the same result,
"Operating System not found".

However, if I set the properties for the image at image-create time,
everything works. It also works if I do not boot the image before doing
an image-update. So it definitely seems as though some of the metadata
is cached.

To Recreate:
- use glance image-create to import the image
http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk
(do not set any of the vmware_* properties)
- boot from this image; notice it fails to find the OS, as expected
- use glance image-update to modify the image metadata so that it properly has
--property vmware_adaptertype="ide" --property vmware_disktype="sparse" set
- boot from this image again; notice it still fails, which is unexpected

** Affects: nova
 Importance: Medium
 Assignee: Arnaud Legendre (arnaudleg)
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339342

Title:
  VMware: boot from sparse image results in OS not found

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am attempting to boot an instance from the cirros  image found here:
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk

  So I originally imported the image without setting any of the vmware
  properties.  So when I go to boot from this image, I get "Operating
  System not found" in the VM.

  This was my user error, so I then used the glance command line image-
  update to set those properties after the image was already created.
  Then I tried another boot from this image.  I got the same result,
  "Operating System not found".

  However, if I set the properties for the image at image-create time,
  everything works. It also works if I do not boot the image before doing
  an image-update. So it definitely seems as though some of the metadata
  is cached.

  To Recreate:
  - use glance image-create to import the image
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk
  (do not set any of the vmware_* properties)
  - boot from this image; notice it fails to find the OS, as expected
  - use glance image-update to modify the image metadata so that it properly
  has --property vmware_adaptertype="ide" --property vmware_disktype="sparse"
  set
  - boot from this image again; notice it still fails, which is unexpected

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315095] Re: grenade nova network (n-net) fails to start

2014-07-08 Thread Attila Fazekas
The same logstash query finds this error in other jobs as well, and those
are runtime issues.

http://logs.openstack.org/94/105194/2/check/check-tempest-dsvm-postgres-
full/f029225/logs/screen-n-net.txt.gz#_2014-07-07_17_24_19_012

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315095

Title:
  grenade nova network (n-net) fails to start

Status in Grenade - OpenStack upgrade testing:
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  Here we see that n-net never started logging to its screen:

  http://logs.openstack.org/02/91502/1/check/check-grenade-
  dsvm/912e89e/logs/new/

  The errors in n-cpu seem to support that the n-net service never
  started.

  According to http://logs.openstack.org/02/91502/1/check/check-grenade-
  dsvm/912e89e/logs/grenade.sh.log.2014-05-01-042623, circa "2014-05-01
  04:31:15.580" the interesting bits should be in:

  /opt/stack/status/stack/n-net.failure

  But I don't see that captured.

  I'm not sure why n-net did not start.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1315095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1231351] Re: nova-bm keeps a copy of baremetal image after deployment

2014-07-08 Thread Robert Collins
This is fixed for Ironic but we won't be trying to fix it for nova-bm -
thats deprecated.

** Changed in: tripleo
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1231351

Title:
  nova-bm keeps a copy of baremetal image after deployment

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Triaged
Status in tripleo - openstack on openstack:
  Won't Fix

Bug description:
  This is unneeded; it would be equivalent to nova keeping a pristine copy
  of every VM image deployed, and glance is the right place to do that.
  This pushes the disk requirements for bare metal hypervisors way up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1231351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329995] Re: Sporadic tempest failures: "The server could not comply with the request since it is either malformed or otherwise incorrect"

2014-07-08 Thread Clark Boylan
This appears to be a nova bug. Tempest has asked nova to perform a task
and it failed.

If you still believe this is an Infra bug please update this bug with
information on why that is the case so that we can debug it further and
fix it.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ci
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329995

Title:
  Sporadic tempest failures: "The server could not comply with the
  request since it is either malformed or otherwise incorrect"

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  Incomplete

Bug description:
  In one of my Tempest review runs, I'm seeing the following error fail
  some tests:

   Traceback (most recent call last):
File "tempest/services/compute/xml/servers_client.py", line 388, in 
wait_for_server_status
  raise_on_error=raise_on_error)
File "tempest/common/waiters.py", line 86, in wait_for_server_status
  _console_dump(client, server_id)
File "tempest/common/waiters.py", line 27, in _console_dump
  resp, output = client.get_console_output(server_id, None)
File "tempest/services/compute/xml/servers_client.py", line 596, in 
get_console_output
  length=length)
File "tempest/services/compute/xml/servers_client.py", line 439, in action
  resp, body = self.post("servers/%s/action" % server_id, str(doc))
File "tempest/common/rest_client.py", line 209, in post
  return self.request('POST', url, extra_headers, headers, body)
File "tempest/common/rest_client.py", line 419, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 468, in _error_checker
  raise exceptions.BadRequest(resp_body)
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since it 
is either malformed or otherwise incorrect.', 'code': '400'}

  Full log for the run here: http://logs.openstack.org/93/98693/5/check
  /check-tempest-dsvm-full-icehouse/71d6c8c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334015] Re: "Server could not comply with the request" for .test_create_image_from_stopped_server

2014-07-08 Thread Clark Boylan
This appears to be a nova bug. Tempest asked nova to perform an action
and the resulting response was unexpected. It may be a tempest bug but
usually it is an issue in the project being tested.

I have marked this bug as Incomplete for the Infra side, please feel
free to add more info indicating why this might be an Infra bug if you
have it.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ci
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334015

Title:
  "Server could not comply with the request" for
  .test_create_image_from_stopped_server

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  Incomplete

Bug description:
  The following gate test failed for patchset
  https://review.openstack.org/#/c/98693/:

  
tempest.api.compute.images.test_images_negative.ImagesNegativeTestXML.test_create_image_from_stopped_server

  The console.log showed the following traceback:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "tempest/api/compute/images/test_images_negative.py", line 63, in 
test_create_image_from_stopped_server
  resp, server = self.create_test_server(wait_until='ACTIVE')
File "tempest/api/compute/base.py", line 247, in create_test_server
  raise ex
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since 
it is either malformed or otherwise incorrect.', 'code': '400'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339273] [NEW] Sphinx documentation build failed in stable/havana: source_dir is not a directory

2014-07-08 Thread Tristan Cacqueray
Public bug reported:

Documentation is not building in stable/havana:

$ tox -evenv -- python setup.py build_sphinx
venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
venv runtests: PYTHONHASHSEED='1422458638'
venv runtests: commands[0] | python setup.py build_sphinx
running build_sphinx
error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'

** Affects: horizon
 Importance: Undecided
 Assignee: Tristan Cacqueray (tristan-cacqueray)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Tristan Cacqueray (tristan-cacqueray)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334137] Re: [Errno 2] No such file or directory in gate-nova-python26

2014-07-08 Thread Clark Boylan
This looks like a nova test fixture bug. I have added nova to the bug
and marked the Infra side incomplete. If you can provide more info that
indicates this is an Infra bug please do and we can update the bug and
hopefully fix it.

All that said I think this is a bug in nova.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ci
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334137

Title:
  [Errno 2] No such file or directory in gate-nova-python26

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Core Infrastructure:
  Incomplete

Bug description:
  Getting a lot of failed tests in a single gate run with errors like

  Traceback (most recent call last):
File "nova/tests/integrated/test_api_samples.py", line 1945, in setUp
  self.uuid = self._post_server()
File "nova/tests/integrated/test_api_samples.py", line 180, in _post_server
  response = self._do_post('servers', 'server-post-req', subs)
File "nova/tests/integrated/api_samples_test_base.py", line 312, in _do_post
  body = self._read_template(name) % subs
File "nova/tests/integrated/api_samples_test_base.py", line 101, in 
_read_template
  with open(template) as inf:
  IOError: [Errno 2] No such file or directory: 
'/home/jenkins/workspace/gate-nova-python26/CA/nova/tests/integrated/api_samples/os-admin-actions/server-post-req.xml.tpl'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339271] [NEW] H104 cleanup in __init__ files

2014-07-08 Thread Fawad Khaliq
Public bug reported:

As per H104 in [1] guidelines, files with no code shouldn’t contain any
license header nor comments, and must be left completely empty.

We have quite a few __init__.py files that will fail this hacking rule.
These need to be cleaned up.

[1] http://docs.openstack.org/developer/hacking/

** Affects: neutron
 Importance: Undecided
 Assignee: Fawad Khaliq (fawadkhaliq)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Fawad Khaliq (fawadkhaliq)

** Description changed:

  As per H104 in [1] guidelines, files with no code shouldn’t contain any
  license header nor comments, and must be left completely empty.
  
  We have quite a few __init__.py files that will fail this hacking rule.
  These need to be cleaned up.
+ 
+ [1] http://docs.openstack.org/developer/hacking/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339271

Title:
  H104 cleanup in __init__ files

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As per H104 in [1] guidelines, files with no code shouldn’t contain
  any license header nor comments, and must be left completely empty.

  We have quite a few __init__.py files that will fail this hacking
  rule. These need to be cleaned up.

  [1] http://docs.openstack.org/developer/hacking/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339266] [NEW] LBaaS: Member object doesn't contains VM instance id

2014-07-08 Thread Anil Vishnoi
Public bug reported:

I would rather consider this an enhancement than a bug. I created two VM
instances in my OpenStack setup:

VM1 (id - 3d248e17-922c-4619-aa60-d30684108c01) & VM2 (id - acb0148f-
508d-4c6f-a9b1-4d160960764d).

I created a pool (A) and added both of these VM instances as pool
members. But when I look at the details of member VM1 (from the member
tab in the Horizon LBaaS view), I don't see the VM instance id.

Member Details:
ID
b6e2e91a-56f3-4da2-8abb-934db5b3bc91
Project ID
5d787eb9991a4030beba90cf0ea2461d
Pool ID
3715ddb0-acab-4a37-9db8-dcb47edc88ed
Address
1.1.1.2
Protocol Port
8000
Weight
1
Admin State Up
Yes
Status
ACTIVE 


I think by the definition of a pool member this is fine, but I believe having
a reference to the pool member's original resource object id in the member
object would make things a bit more convenient. For example, I am writing my
own LBaaS driver in which, for each pool member, I want to fetch a few stats
from the ceilometer service, but the ceilometer service requires the instance
id in the REST API to fetch the stats. In the current state, to find the
instance id from the member object details, I need to fetch all the VM details
for the tenant and then match based on the private/floating IP of the VM. I
feel this is a cumbersome approach.

So if the member object could provide the resource id of the associated
resource, it would make things more convenient in terms of associating the
member with its OpenStack resource.
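
A sketch of the workaround described above, matching a member to a VM by
address with python-novaclient (the `nova` client object is assumed to be
already authenticated):

    def instance_id_for_member(nova, member_address):
        """Return the id of the server whose fixed/floating IP matches
        the pool member address, or None if nothing matches."""
        for server in nova.servers.list():
            for addresses in server.addresses.values():
                if any(a.get('addr') == member_address for a in addresses):
                    return server.id
        return None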

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lb lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339266

Title:
  LBaaS: Member object doesn't contains VM instance id

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I would rather consider this an enhancement than a bug. I created two VM
  instances in my OpenStack setup:

  VM1 (id - 3d248e17-922c-4619-aa60-d30684108c01) & VM2 (id - acb0148f-
  508d-4c6f-a9b1-4d160960764d).

  I created a pool (A) and added both of these VM instances as pool
  members. But when I look at the details of member VM1 (from the member
  tab in the Horizon LBaaS view), I don't see the VM instance id.

  Member Details:
  ID
  b6e2e91a-56f3-4da2-8abb-934db5b3bc91
  Project ID
  5d787eb9991a4030beba90cf0ea2461d
  Pool ID
  3715ddb0-acab-4a37-9db8-dcb47edc88ed
  Address
  1.1.1.2
  Protocol Port
  8000
  Weight
  1
  Admin State Up
  Yes
  Status
  ACTIVE 

  
  I think by the definition of a pool member this is fine, but I believe
  having a reference to the pool member's original resource object id in the
  member object would make things a bit more convenient. For example, I am
  writing my own LBaaS driver in which, for each pool member, I want to fetch
  a few stats from the ceilometer service, but the ceilometer service requires
  the instance id in the REST API to fetch the stats. In the current state, to
  find the instance id from the member object details, I need to fetch all the
  VM details for the tenant and then match based on the private/floating IP of
  the VM. I feel this is a cumbersome approach.

  So if the member object could provide the resource id of the associated
  resource, it would make things more convenient in terms of associating
  the member with its OpenStack resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339235] [NEW] UnexpectedTaskStateError: Unexpected task state: expecting (u'powering-off', ) but the actual state is None

2014-07-08 Thread Matt Riedemann
*** This bug is a duplicate of bug 1320628 ***
https://bugs.launchpad.net/bugs/1320628

Public bug reported:

This is showing up all over the n-cpu logs on teardown of tempest tests:

UnexpectedTaskStateError: Unexpected task state: expecting (u'powering-
off',) but the actual state is None

For example:

http://logs.openstack.org/06/103206/4/check/check-tempest-dsvm-postgres-
full/b5e8f3c/logs/screen-n-cpu.txt.gz?level=TRACE

We have nearly 40K hits on this in logstash in 7 days:

message:"UnexpectedTaskStateError: Unexpected task state: expecting (u
'powering-off',) but the actual state is None" AND
tags:"screen-n-cpu.txt"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVW5leHBlY3RlZFRhc2tTdGF0ZUVycm9yOiBVbmV4cGVjdGVkIHRhc2sgc3RhdGU6IGV4cGVjdGluZyAodSdwb3dlcmluZy1vZmYnLCkgYnV0IHRoZSBhY3R1YWwgc3RhdGUgaXMgTm9uZVwiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDA0ODQxMjQ3MDk4fQ==

This is the interesting traceback from the compute manager:

2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher payload)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 272, in decorated_function
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
LOG.info(_("Task possibly preempted: %s") % e.format_message())
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 266, in decorated_function
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 330, in decorated_function
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 308, in decorated_function
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 296, in decorated_function
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2356, in stop_instance
2014-07-08 00:46:47.922 18853 TRACE oslo.messaging.rpc.dispatcher 
instance.save(expected_task_state=task_states.POWERING_OFF)
2014-07-08 00:46:

[Yahoo-eng-team] [Bug 1328245] Re: libvirt does not store connection_info after BFV setup

2014-07-08 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/icehouse
   Status: New => In Progress

** Changed in: nova/havana
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/havana
 Assignee: (unassigned) => Dan Smith (danms)

** Changed in: nova/icehouse
 Assignee: (unassigned) => Dan Smith (danms)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328245

Title:
  libvirt does not store connection_info after BFV setup

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  In Progress

Bug description:
  If booting from a volume, the virt driver does the setup of the volume
  with cinder before starting the instance. This differs from the attach
  volume case, which is managed by nova itself. Since the connect
  operation could yield new details in the connection_info structure
  that need to be persisted until teardown time, it is important that
  the connection_info be written back after connect completes. Nova's
  attach_volume() does this, but libvirt does not. Specifically in the
  case of the fibre channel code, this means we don't persist
  information about multipath devices which means we don't fully tear
  down everything at disconnect time.

  This is present in at least Havana, and I expect it is present in
  Icehouse and master as well.
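
  A minimal sketch of the missing persistence step, using stand-ins for
  Nova's real objects (volume_api, bdm, connector and save_bdm are
  placeholders here, not the actual libvirt driver code):

import json

def connect_volume_and_persist(volume_api, context, bdm, connector, save_bdm):
    # Ask the volume service how to reach the volume; the reply may contain
    # details (e.g. multipath device info) that were not known beforehand.
    connection_info = volume_api.initialize_connection(
        context, bdm['volume_id'], connector)
    # Write the enriched connection_info back to the block device mapping so
    # that disconnect at teardown time works from exactly what connect gave us.
    bdm['connection_info'] = json.dumps(connection_info)
    save_bdm(bdm)
    return connection_info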

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339232] [NEW] Debug logs for unit tests appear to contain some corrupted characters

2014-07-08 Thread Henry Nash
Public bug reported:

When running our unit tests as part of Jenkins, the "output" files are
merged into one output file using subunit.  The resulting files appear
to contain corrupted characters, e.g.:

>From a jenkins test:

³+@žS¹_.Ô-ˆ@‹keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_list_projects_for_endpoint_defaultœ<ú¦³+`idS¹_/Û÷Q(@‹keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_list_projects_for_endpoint_defaulttext/plain;charset="utf8"pythonlogging:''h™Adding
 cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed to 
event `identity.OS-TRUST:trust.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` subscribed 
to event `identity.OS-OAUTH1:consumer.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
.

>From a local test:

test: 
keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_check_endpoint_project_assoc
time: 2014-07-03 17:05:53.955299Z
successful: 
keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_check_endpoint_project_assoc
 [ multipart
Content-Type: text/plain;charset="utf8"
pythonlogging:''
4343
Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed to 
event `identity.OS-TRUST:trust.deleted`
Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` subscribed 
to event `identity.OS-OAUTH1:consumer.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
.

Is this some kind of artefact of not handling the subunit output
correctly?

** Affects: keystone
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1339232

Title:
  Debug logs for unit tests appear to contain some corrupted characters

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When running our unit tests as part of Jenkins, the "output" files are
  merged into one output file using subunit.  The resulting files appear
  to contain corrupted characters, e.g.:

  From a jenkins test:

  
³+@žS¹_.Ô-ˆ@‹keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_list_projects_for_endpoint_defaultœ<ú¦³+`idS¹_/Û÷Q(@‹keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_list_projects_for_endpoint_defaulttext/plain;charset="utf8"pythonlogging:''h™Adding
 cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed 
to event `identity.OS-TRUST:trust.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` 
subscribed to event `identity.OS-OAUTH1:consumer.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
  .

  From a local test:

  test: 
keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_check_endpoint_project_assoc
  time: 2014-07-03 17:05:53.955299Z
  successful: 
keystone.tests.test_associate_project_endpoint_extension.AssociateEndpointProjectFilterCRUDTestCase.test_check_endpoint_project_assoc
 [ multipart
  Content-Type: text/plain;charset="utf8"
  pythonlogging:''
  4343
  Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed 
to event `identity.OS-TRUST:trust.deleted`
  Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` 
subscribed to event `identity.OS-OAUTH1:consumer.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_to

[Yahoo-eng-team] [Bug 1339197] [NEW] Linuxbridge agent test for VXLAN module requires /lib/modules

2014-07-08 Thread Hugh Saunders
Public bug reported:

The Linuxbridge agent VXLAN test[1] uses modinfo which requires
/lib/modules to be available. This leads  to a false negative result on
LXC containers where /lib/modules is not mounted.

Expected behaviour: VXLAN module detected if loaded.
Current behaviour: VXLAN module only detected if loaded and /lib/modules is 
available.

Example:

root@neutron-agents:/# modinfo vxlan
libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
moddep file '/lib/modules/3.13.0-30-generic/modules.dep.bin'
modinfo: ERROR: Module alias vxlan not found.

However the module is available:

root@neutron-agents:/# lsmod |grep ^vxlan
vxlan  37619  0

Trace that led to this:

2014-07-08 16:29:14.946 16452 CRITICAL neutron [-] VXLAN Network unsupported.
2014-07-08 16:29:14.946 16452 TRACE neutron Traceback (most recent call last):
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/bin/neutron-linuxbridge-agent", line 10, in 
2014-07-08 16:29:14.946 16452 TRACE neutron sys.exit(main())
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1031, in main
2014-07-08 16:29:14.946 16452 TRACE neutron root_helper)
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 816, in __init__
2014-07-08 16:29:14.946 16452 TRACE neutron 
self.setup_linux_bridge(interface_mappings)
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 883, in setup_linux_bridge
2014-07-08 16:29:14.946 16452 TRACE neutron self.br_mgr = 
LinuxBridgeManager(interface_mappings, self.root_helper)
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 82, in __init__
2014-07-08 16:29:14.946 16452 TRACE neutron self.check_vxlan_support()
2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 576, in check_vxlan_support
2014-07-08 16:29:14.946 16452 TRACE neutron raise 
exceptions.VxlanNetworkUnsupported()
2014-07-08 16:29:14.946 16452 TRACE neutron VxlanNetworkUnsupported: VXLAN 
Network unsupported.
2014-07-08 16:29:14.946 16452 TRACE neutron

[1]
https://github.com/openstack/neutron/blob/master/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py#L564-L569
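
A minimal sketch of a check that does not depend on /lib/modules, assuming it
is acceptable to read /proc/modules (the same data lsmod prints); this is only
an illustration, not the patch proposed for review:

def vxlan_module_loaded():
    # Works inside containers where /lib/modules is not mounted, unlike
    # modinfo. Caveat: a vxlan driver built into the kernel (non-modular)
    # would not show up here and would need an extra check.
    try:
        with open('/proc/modules') as modules:
            return any(line.split()[0] == 'vxlan' for line in modules)
    except IOError:
        return False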

** Affects: neutron
 Importance: Undecided
 Assignee: Hugh Saunders (hughsaunders)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Hugh Saunders (hughsaunders)

** Description changed:

- The VXLAN test[1] uses modinfo which requires /lib/modules to be
- available. This leads  to a false negative result on LXC containers
- where /lib/modules is not mounted.
+ The Linuxbridge agent VXLAN test[1] uses modinfo which requires
+ /lib/modules to be available. This leads  to a false negative result on
+ LXC containers where /lib/modules is not mounted.
  
- Expected behaviour: VXLAN module detected if loaded. 
+ Expected behaviour: VXLAN module detected if loaded.
  Current behvarious: VXLAN module only detected if loaded and /lib/modules is 
available.
  
  Example:
  
  root@neutron-agents:/# modinfo vxlan
  libkmod: ERROR ../libkmod/libkmod.c:556 kmod_search_moddep: could not open 
moddep file '/lib/modules/3.13.0-30-generic/modules.dep.bin'
  modinfo: ERROR: Module alias vxlan not found.
  
  However the module is available:
  
  root@neutron-agents:/# lsmod |grep ^vxlan
  vxlan  37619  0
- 
  
  Trace that lead to this:
  
  2014-07-08 16:29:14.946 16452 CRITICAL neutron [-] VXLAN Network unsupported.
  2014-07-08 16:29:14.946 16452 TRACE neutron Traceback (most recent call last):
  2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/bin/neutron-linuxbridge-agent", line 10, in 
  2014-07-08 16:29:14.946 16452 TRACE neutron sys.exit(main())
  2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1031, in main
  2014-07-08 16:29:14.946 16452 TRACE neutron root_helper)
  2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 816, in __init__
  2014-07-08 16:29:14.946 16452 TRACE neutron 
self.setup_linux_bridge(interface_mappings)
  2014-07-08 16:29:14.946 16452 TRACE neutron   File 
"/usr/local/lib/python2.7/dist-packages/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 883, in setup_linux_bridge
  2014-07-08 16:29:14.946 16452 TRACE neutron self.br_mgr = 
LinuxBrid

[Yahoo-eng-team] [Bug 1337768] Re: keystone v2 api change_password authz require also update_user authz

2014-07-08 Thread Dolph Mathews
This is by design in v2 - that password update call is intended for
administrators. In v3, we support a self-service password change that
requires the user's existing password:

  https://github.com/openstack/identity-api/blob/master/v3/src/markdown
/identity-api-v3.md#change-user-password-post-usersuser_idpassword
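
For illustration, a minimal sketch of that v3 self-service call (the URL,
token and ids are placeholders for your own deployment, not values from this
bug):

import json
import requests

def change_own_password(keystone_url, token, user_id, old_password, new_password):
    # POST /v3/users/{user_id}/password with the user's existing password;
    # Keystone answers 204 No Content on success.
    body = {'user': {'original_password': old_password,
                     'password': new_password}}
    resp = requests.post(
        '%s/v3/users/%s/password' % (keystone_url, user_id),
        headers={'X-Auth-Token': token, 'Content-Type': 'application/json'},
        data=json.dumps(body))
    resp.raise_for_status()
    return resp.status_code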

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1337768

Title:
  keystone v2 api change_password authz require also update_user authz

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  In v2 the set_user_password controller method calls update_user, which
  means that setting only 'identity:change_password' to 'rule:owner' will
  not work unless 'identity:update_user' is also changed to
  'rule:owner' or similar.

  
https://github.com/openstack/keystone/blob/stable/icehouse/keystone/identity/controllers.py#L237-239

  NOTE: Stating the obvious, I picked up 'rule:owner' as an example,
  which is what makes sense in our case, but the problem is not specific
  to this rule.
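
  For illustration only, mirroring the reporter's 'rule:owner' example (which
  may not suit every deployment), the v2 policy would need both entries
  relaxed together, e.g.:

  "identity:change_password": "rule:admin_required or rule:owner",
  "identity:update_user": "rule:admin_required or rule:owner",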

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1337768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239902] Re: neutron-server should be called neutron-api to conform with the rest of OpenStack

2014-07-08 Thread Dean Troyer
DevStack hasn't even removed the 'q-' prefixes from Neutron process
window names.  I'd accept a patch for that, and if it included changing
q-svc to something else that would be fine.

** Changed in: devstack
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239902

Title:
  neutron-server should be called neutron-api to conform with the rest
  of OpenStack

Status in devstack - openstack dev environments:
  Opinion
Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  The rest of OpenStack calls its api server *-api, so to make things
  more uniform neutron should rename neutron-server as neutron-api

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1239902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338550] Re: V3 API project/user/group list only work with domain scoped token

2014-07-08 Thread Dolph Mathews
This is by design. Project, user and group collections are owned by the
domain, and therefore the policy requires domain-level authorization to
administer those collections.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1338550

Title:
  V3 API project/user/group list  only work with domain scoped token

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  From the policy.json of the V3 API:

  "admin_and_matching_domain_id": "rule:admin_required and 
domain_id:%(domain_id)s",
  "identity:list_projects": "rule:admin_required and 
domain_id:%(domain_id)s",
  ...
  "identity:list_users": "rule:cloud_admin or 
rule:admin_and_matching_domain_id",

  This specifies that if an admin user of a domain asks for GET
  /v3/users?domain_id= then the call will only work if the
  token was scoped to this domain but not if it was scoped to a project
  in that domain.

  A patch is coming soon that hopefully will clarify more.
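
  For reference, a sketch of requesting a domain-scoped v3 token (auth_url and
  the credential values are placeholders; the body follows the standard v3
  authentication format):

import json
import requests

def get_domain_scoped_token(auth_url, username, password, domain_id):
    body = {
        'auth': {
            'identity': {
                'methods': ['password'],
                'password': {
                    'user': {
                        'name': username,
                        'password': password,
                        'domain': {'id': domain_id},
                    },
                },
            },
            # Scoping to the domain itself (not to a project in it) is what
            # the quoted policy rules check for.
            'scope': {'domain': {'id': domain_id}},
        },
    }
    resp = requests.post('%s/auth/tokens' % auth_url,
                         headers={'Content-Type': 'application/json'},
                         data=json.dumps(body))
    resp.raise_for_status()
    return resp.headers['X-Subject-Token']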

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1338550/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339104] Re: error launching instance: OpenStack console is currently unavailable, please try again later

2014-07-08 Thread John Vrbanac
This doesn't affect Barbican. Moving to the project referred to in the
description.

** Project changed: barbican => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339104

Title:
  error launching instance: OpenStack console is currently unavailable,
  please try again later

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have created the VM using the Xen hypervisor and am trying to launch it
using the OpenStack dashboard (Horizon), but it shows the error "console is
currently unavailable, please try again later" when I check the console log.
  The base operating system is CentOS 6.5, with Xen hypervisor 4.0 and
OpenStack installed on the controller node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339184] [NEW] L3 Agent should advertise the right RPC version

2014-07-08 Thread Armando Migliaccio
Public bug reported:

When looking at the following:

- https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L66
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L216

It looks like the L3 Agent does not advertise the right supported RPC
version, and instead only uses BASE_RPC_API_VERSION.

Unless I am completely off here, this should be fixed.
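
For illustration only (class, attribute and version names here are made up,
not neutron's actual code), an RPC client that pins a specific version instead
of a generic base version might look like:

from oslo.config import cfg
from oslo import messaging

L3_RPC_API_VERSION = '1.2'  # hypothetical value, for illustration only

class L3PluginApi(object):
    def __init__(self, topic, host):
        self.host = host
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic=topic, version=L3_RPC_API_VERSION)
        self.client = messaging.RPCClient(transport, target)

    def sync_routers(self, context, router_ids=None):
        # Pin the call to the version that actually introduced the method.
        cctxt = self.client.prepare(version=L3_RPC_API_VERSION)
        return cctxt.call(context, 'sync_routers',
                          host=self.host, router_ids=router_ids)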

** Affects: neutron
 Importance: Undecided
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339184

Title:
  L3 Agent should advertise the right RPC version

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When looking at the following:

  - 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L66
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l3_agent.py#L216

  It looks like the L3 Agent does not advertise the right supported RPC
  version, and instead only uses BASE_RPC_API_VERSION.

  Unless I am completely off here, this should be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339104] [NEW] error launching instance: OpenStack console is currently unavailable, please try again later

2014-07-08 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I have created the VM using the Xen hypervisor and am trying to launch it using
the OpenStack dashboard (Horizon), but it shows the error "console is currently
unavailable, please try again later" when I check the console log.
The base operating system is CentOS 6.5, with Xen hypervisor 4.0 and OpenStack
installed on the controller node.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
error launching instance: OpenStack console is currently unavailable, please try 
again later
https://bugs.launchpad.net/bugs/1339104
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323578] Re: randomly failing rebuild on ssh-key errors on a setup which is configured for non-migration resize

2014-07-08 Thread Daniel Berrange
Proposing a blueprint/nova-specs is only appropriate if the submitter
actually intends to provide a solution. We don't want nova-specs for
arbitrary user-reported wish-list items.

User wish-list items like this should be status:
confirmed/triaged, importance: wishlist


** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323578

Title:
  randomly failing rebuild on ssh-key errors on a setup which is
  configured for non-migration resize

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  I created a setup with 3 computes but configured nova.conf with the following 
on each of the computes: 
  allow_resize_to_same_host=True
  scheduler_default_filters=AllHostsFilter 

  When I try to resize it fails constantly on an ssh-key error, so I added
  this on my hosts:

  [root@orange-vdsf ~(keystone_admin)]# cat /var/lib/nova/.ssh/config
  Host *
  StrictHostKeyChecking no
  #UserKnownHostsFile=/dev/null 

  After adding the config,
  it seems that sometimes we try to migrate an instance - which still 
fails migration and therefore the resize fails - and other times we resize 
locally and that succeeds. 
  I could not understand when we select to migrate and when we select to resize 
locally, but it might be a scheduler issue. 

  [root@orange-vdsf ~(keystone_admin)]# egrep 
a400d2f2-80ce-4ebf-9847-8885b7742fec /var/log/nova/*
  /var/log/nova/nova-api.log:2014-05-20 13:45:02.892 22298 INFO 
nova.osapi_compute.wsgi.server [req-ba38a1d5-80e4-4051-a89b-7daeae20edbf 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1793 time: 0.1236081
  /var/log/nova/nova-api.log:2014-05-20 13:45:06.822 22295 INFO 
nova.osapi_compute.wsgi.server [req-ce846da7-5757-47da-99b2-cdc2d1dd15b7 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1923 time: 0.1979871
  /var/log/nova/nova-api.log:2014-05-20 13:45:09.193 22295 INFO 
nova.osapi_compute.wsgi.server [req-4b03f451-bfd1-49c2-ae41-9d2576f10a66 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1923 time: 0.1024799
  /var/log/nova/nova-api.log:2014-05-20 13:45:11.735 22295 INFO 
nova.osapi_compute.wsgi.server [req-f1db26ba-a097-4377-9a78-2e3b5b5f4a8a 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1923 time: 0.2173102
  /var/log/nova/nova-api.log:2014-05-20 13:45:14.081 22295 INFO 
nova.osapi_compute.wsgi.server [req-547bcbe4-974e-47d3-89a4-996a67a6be41 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1923 time: 0.1151090
  /var/log/nova/nova-api.log:2014-05-20 13:45:16.632 22295 INFO 
nova.osapi_compute.wsgi.server [req-2efc51c8-2cd1-4aa0-a7b6-c75e275f7657 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1940 time: 0.1693149
  /var/log/nova/nova-api.log:2014-05-20 13:46:15.071 22295 INFO 
nova.osapi_compute.wsgi.server [req-f2053fdc-be29-4c3d-bca0-647080997ea5 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1940 time: 0.0634232
  /var/log/nova/nova-api.log:2014-05-20 13:46:18.156 22295 INFO 
nova.osapi_compute.wsgi.server [req-4f8e2c79-c885-4805-8af2-72cfe6f41bee 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec
 HTTP/1.1" status: 200 len: 1940 time: 0.0809522
  /var/log/nova/nova-api.log:2014-05-20 13:46:18.696 22295 INFO 
nova.osapi_compute.wsgi.server [req-b51803d7-d892-47cd-8402-d9730a1ecfb5 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"POST 
/v2/4ad766166539403189f2caca1ba306aa/servers/a400d2f2-80ce-4ebf-9847-8885b7742fec/action
 HTTP/1.1" status: 202 len: 185 time: 0.5187571
  /var/log/nova/nova-api.log:2014-05-20 13:46:20.989 22295 INFO 
nova.osapi_compute.wsgi.server [req-f4a6b905-b66d-4373-9103-c97a99e96625 
c9062d562d9f41e4a1fdce36a4f176f6 4ad766166539403189f2caca1ba306aa] 10.35.160.73 
"GET 
/v

[Yahoo-eng-team] [Bug 1339107] [NEW] Kyestone: Auth token not in the request header

2014-07-08 Thread Nicolae Paladi
Public bug reported:

Hi, I am using CentOS 6.4, deployed OpenStack Icehouse with Packstack.

After the deployment, the admin user is not authorized for some commands,
e.g. nova list, neutron net-list, etc.

Similar to the bug described in 
https://bugs.launchpad.net/keystone/+bug/1289935,
however the solution patch does not apply.

Some output:

2014-07-08 16:52:11.063 1649 INFO eventlet.wsgi.server [-] 10.0.230.14 - - 
[08/Jul/2014 16:52:11] "POST /v2.0/tokens HTTP/1.1" 200 7520 0.201348
2014-07-08 16:52:11.079 1649 DEBUG keystone.middleware.core [-] Auth token not 
in the request header. Will not build auth context. process_request 
/usr/lib/python2.6/site-packages/keystone/middleware/core.py:271
2014-07-08 16:52:11.081 1649 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.6/site-packages/keystone/common/wsgi.py:181
2014-07-08 16:52:11.086 1649 DEBUG keystone.notifications [-] CADF Event: 
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': 
{'typeURI': 'service/security/account/user', 'host': {'agent': 
'python-neutronclient', 'address': '10.0.230.14'}, 'id': 
'openstack:ca12b898-95bb-4705-8455-6122aae81752', 'name': 
u'77aabd14a2e1453489dec37d7b174e58'}, 'target': {'typeURI': 
'service/security/account/user', 'id': 
'openstack:c9028777-2e4b-4c8a-bf07-4175e1c1f5e9'}, 'observer': {'typeURI': 
'service/security', 'id': 'openstack:669df929-fca7-4f71-99cf-0e2af4e981fa'}, 
'eventType': 'activity', 'eventTime': '2014-07-08T14:52:11.086573+', 
'action': 'authenticate', 'outcome': 'pending', 'id': 
'openstack:0d35b838-3cc9-46ed-bdf6-e384583d0982'} _send_audit_notification 
/usr/lib/python2.6/site-packages/keystone/notifications.py:289


Identical to the issue mentioned here:
https://www.redhat.com/archives/rdo-list/2014-June/msg00067.html

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1339107

Title:
  Keystone: Auth token not in the request header

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Hi, I am using CentOS 6.4, deployed OpenStack Icehouse with Packstack.

  After the deployment, the admin user is not authorized for some commands,
  e.g. nova list, neutron net-list, etc.

  Similar to the bug described in 
https://bugs.launchpad.net/keystone/+bug/1289935,
  however the solution patch does not apply.

  Some output:

  2014-07-08 16:52:11.063 1649 INFO eventlet.wsgi.server [-] 10.0.230.14 - - 
[08/Jul/2014 16:52:11] "POST /v2.0/tokens HTTP/1.1" 200 7520 0.201348
  2014-07-08 16:52:11.079 1649 DEBUG keystone.middleware.core [-] Auth token 
not in the request header. Will not build auth context. process_request 
/usr/lib/python2.6/site-packages/keystone/middleware/core.py:271
  2014-07-08 16:52:11.081 1649 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.6/site-packages/keystone/common/wsgi.py:181
  2014-07-08 16:52:11.086 1649 DEBUG keystone.notifications [-] CADF Event: 
{'typeURI': 'http://schemas.dmtf.org/cloud/audit/1.0/event', 'initiator': 
{'typeURI': 'service/security/account/user', 'host': {'agent': 
'python-neutronclient', 'address': '10.0.230.14'}, 'id': 
'openstack:ca12b898-95bb-4705-8455-6122aae81752', 'name': 
u'77aabd14a2e1453489dec37d7b174e58'}, 'target': {'typeURI': 
'service/security/account/user', 'id': 
'openstack:c9028777-2e4b-4c8a-bf07-4175e1c1f5e9'}, 'observer': {'typeURI': 
'service/security', 'id': 'openstack:669df929-fca7-4f71-99cf-0e2af4e981fa'}, 
'eventType': 'activity', 'eventTime': '2014-07-08T14:52:11.086573+', 
'action': 'authenticate', 'outcome': 'pending', 'id': 
'openstack:0d35b838-3cc9-46ed-bdf6-e384583d0982'} _send_audit_notification 
/usr/lib/python2.6/site-packages/keystone/notifications.py:289


  Identical to the issue mentioned here:
  https://www.redhat.com/archives/rdo-list/2014-June/msg00067.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1339107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339098] [NEW] detach_interface may hide issues due to async call

2014-07-08 Thread Drew Thorstensen
Public bug reported:

The detach_interface runs to the compute host via a cast rpc invocation
(async).  As such, the validation that is done on the compute manager
(example: an incorrect port id being passed in) is lost and the HTTP
response code returned to the user is always 202.  Users would need to
look in the logs to determine the error (and it would be indicated to
them that nothing was wrong).

The attach_interface is a synchronous (call) rpc invocation.  This
enables validation to be done and the error codes returned up to the
user.

This behavior should be consistent between the two calls.  Propose that
the detach_interface switch to a 'call' instead of a 'cast' to have
similar behavior.

It appears that detach_volume also has a similar issue.
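
A short sketch of the difference being proposed (client is assumed to be an
oslo.messaging RPCClient prepared for the compute topic; names are
illustrative, not Nova's actual RPC API code):

def detach_interface(client, context, instance, port_id):
    cctxt = client.prepare(server=instance['host'])
    # Fire-and-forget: validation errors on the compute node only show up in
    # its logs, and the API has already answered 202 by then.
    cctxt.cast(context, 'detach_interface',
               instance=instance, port_id=port_id)
    # Synchronous alternative: the compute manager's exception propagates back
    # to the caller, so the API layer can turn it into a proper error code.
    # return cctxt.call(context, 'detach_interface',
    #                   instance=instance, port_id=port_id)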

** Affects: nova
 Importance: Undecided
 Assignee: Drew Thorstensen (thorst)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Thorstensen (thorst)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1339098

Title:
  detach_interface may hide issues due to async call

Status in OpenStack Compute (Nova):
  New

Bug description:
  The detach_interface runs to the compute host via a cast rpc
  invocation (async).  As such, the validation that is done on the
  compute manager (example: an incorrect port id being passed in) is
  lost and the HTTP response code returned to the user is always 202.
  Users would need to look in the logs to determine the error (while the
  response indicated to them that nothing was wrong).

  The attach_interface is a synchronous (call) rpc invocation.  This
  enables validation to be done and the error codes returned up to the
  user.

  This behavior should be consistent between the two calls.  Propose
  that the detach_interface switch to a 'call' instead of a 'cast' to
  have similar behavior.

  It appears that detach_volume also has a similar issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1339098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1307424] Re: undercloud won't come up: seed metadata server 404s

2014-07-08 Thread Jay Dobies
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1307424

Title:
  undercloud won't come up: seed metadata server 404s

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  With a fresh update and a new deployment across the board, the
  undercloud won't boot. It has a route to 169.254.169.254, back to the
  seed IP (192.0.2.1). The nova-api metadata service is responding to
  requests to it, but attempting to fetch

  http://169.254.169.254/latest/meta-data

  returns an HTTP 404.

  On the seed node, the nova-api.log is full of these:

  2014-04-14 10:21:16.346 3939 ERROR nova.api.metadata.handler [-]
  Failed to get metadata for ip: 192.0.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1307424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338881] Re: VMware: Unable to validate session when start nova compute service

2014-07-08 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338881

Title:
  VMware: Unable to validate session when start nova compute service

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
We are using a non-administrator account to connect to vCenter when starting
the compute service. In vCenter we defined a separate role (you can see it in
the attachment) for this account and allow it to access only the cluster that
is used to provision VMs, which is separate from the management cluster.

  I can use this user/password to log in to vCenter, but I hit the following
error when starting the compute service.
  So I want to know what kind of privileges should be assigned to this account.
  2014-07-08 05:26:55.485 30556 WARNING nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Session 
527362cd-b3d2-0ba9-0be8-b7dd3200e9f1 is inactive!
  2014-07-08 05:27:06.479 30556 ERROR suds.client [-] 
  http://schemas.xmlsoap.org/soap/envelope/"; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; 
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/";>
 

   SessionManager
   527362cd-b3d2-0ba9-0be8-b7dd3200e9f1

 
  
  2014-07-08 05:27:06.483 30556 DEBUG nova.virt.vmwareapi.driver 
[req-35ad4408-f0d3-423a-a211-c7200ae8da3c None None] Server raised fault: 
'Permission to perform this operation was denied.'

  
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/vmwareapi/error_util.py", line 123, 
in retrievepropertiesex_fault_checker
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
exc_msg_list))
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup 
VimFaultException: Error(s) NotAuthenticated occurred in the call to 
RetrievePropertiesEx
  2014-07-08 05:27:44.310 30556 TRACE nova.openstack.common.threadgroup

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339045] [NEW] Cannot open network namespace

2014-07-08 Thread moorryan
Public bug reported:

While booting the TripleO undercloud I have occasionally (about 8 in 10
attempts) seen errors in neutron-dhcp-agent.log of:

2014-07-08 10:12:07.374 3905 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-e967d587-5999-4333-8405-76b32398b121', 'ip', 
'link', 'set', 'tap4ac4d6ab-d7', 'up']
Exit code: 1
Stdout: ''
Stderr: 'Cannot open network namespace 
"qdhcp-e967d587-5999-4333-8405-76b32398b121": No such file or directory\n'
2014-07-08 10:12:07.419 3905 ERROR neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qdhcp-e967d587-5999-4333-8405-76b32398b121', 'ip', 
'-o', 'link', 'show', 'tap4ac4d6ab-d7']
Exit code: 1
Stdout: ''
Stderr: 'Cannot open network namespace 
"qdhcp-e967d587-5999-4333-8405-76b32398b121": No such file or directory\n'


In all cases these ERRORs are preceded by the ERROR:

2014-07-08 10:12:07.151 3905 ERROR neutron.agent.dhcp_agent 
[req-523b2309-18c9-4a2d-ad65-7586db10f956 None] Failed reporting state!
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/dhcp_agent.py",
 line 576, in _report_state
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent 
self.state_rpc.report_state(ctx, self.agent_state, self.use_call)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/agent/rpc.py",
 line 70, in report_state
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent return 
self.call(context, msg, topic=self.topic)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/common/rpc.py",
 line 161, in call
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent context, msg, 
rpc_method='call', **kwargs)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/neutron/common/rpc.py",
 line 185, in __call_rpc_method
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent return 
func(context, msg['method'], **msg['args'])
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent 
wait_for_reply=True, timeout=timeout)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/transport.py",
 line 90, in _send
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent timeout=timeout)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 412, in send
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent return 
self._send(target, ctxt, message, wait_for_reply, timeout)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 403, in _send
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent result = 
self._waiter.wait(msg_id, timeout)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 267, in wait
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent reply, ending = 
self._poll_connection(msg_id, timeout)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent   File 
"/opt/stack/venvs/neutron/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 217, in _poll_connection
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent % msg_id)
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent MessagingTimeout: 
Timed out waiting for a reply to message ID 21fd392ebb3a43228447f380334d05b0
2014-07-08 10:12:07.151 3905 TRACE neutron.agent.dhcp_agent 
2014-07-08 10:12:07.157 3905 WARNING neutron.openstack.common.loopingcall [-] 
task run outlasted interval by 30.119771 sec

If I then wait for the undercloud to fail and log in to the seed host, I can 
manually run the command that failed previously:
root@ubuntu:/var/log/upstart# sudo /usr/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf ip netns exec 
'qdhcp-e967d587-5999-4333-8405-76b32398b121' ip link set tap4ac4d6ab-d7 up

I can replicate the ERROR message by attempting to exec a command for a
non-existent namespace.
-

[Yahoo-eng-team] [Bug 1339039] [NEW] ERROR: Store for scheme glance.store.rbd.Store not found

2014-07-08 Thread Roman Jerger
Public bug reported:

glance-api is unable to initialize the rbd storage backend.

Running glance-api with only rbd backend configured and disabling all
the other backends causes the service to exit with the following error:

root@control:~# glance-api
  2 2014-07-07 17:48:03.950 1874 DEBUG glance.store [-] Attempting to import 
store glance.store.rbd.Store _get_store_class 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:168
  3 2014-07-07 17:48:03.955 1874 DEBUG glance.store [-] Registering store 
 with schemes ('rbd',) create_stores 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:210
  4 2014-07-07 17:48:03.955 1874 DEBUG glance.store.base [-] Late loading 
location class glance.store.rbd.StoreLocation get_store_location_class 
/usr/lib/python2.7/dist-packages/glance/store/base.py:80
  5 2014-07-07 17:48:03.956 1874 DEBUG glance.store.location [-] Registering 
scheme rbd with {'store_class': , 
'location_class': } 
register_scheme_map /usr/lib/python2.7/dist-packages/glance/store/location.py:86
  6 2014-07-07 17:48:03.958 1874 DEBUG glance.api.policy [-] Loading policy 
from /etc/glance/policy.json _read_policy_file 
/usr/lib/python2.7/dist-packages/glance/api/policy.py:106
  7 2014-07-07 17:48:03.959 1874 DEBUG glance.api.policy [-] Loaded policy 
rules: {u'get_task': '@', u'get_image_location': '@', u'add_image': '@', 
u'modify_image': '@', u'manage_image_cache': 'role:admin', 
u'delete_member': '@', u'get_images': '@', u'delete_image': '@', 
u'publicize_image': '@', u'get_member': '@', u'add_member': '@', 
u'set_image_location': '@', u'get_image': '@', u'modify_member': '@', 
u'context_is_admin': 'role:admin', u'upload_image': '@', u'modify_task': 
'@', u'get_members': '@', u'get_tasks': '@', u'add_task': '@', u'default': '@', 
u'delete_image_location': '@', u'copy_from': '@', u'download_image': '@'} 
load_rules /usr/lib/python2.7/dist-packages/glance/api/policy.py:85
  8 ERROR: Store for scheme glance.store.rbd.Store not found

root@control:~# cat /etc/glance/glance-api.conf
[...]
default_store = glance.store.rbd.Store
known_stores = glance.store.rbd.Store
[...]

root@control:~# dpkg -l | grep glance
ii  glance  1:2014.1-0ubuntu1  all  
OpenStack Image Registry and Delivery Service - Daemons
ii  glance-api  1:2014.1-0ubuntu1  all  
OpenStack Image Registry and Delivery Service - API
ii  glance-common   1:2014.1-0ubuntu1  all  
OpenStack Image Registry and Delivery Service - Common
ii  glance-registry 1:2014.1-0ubuntu1  all  
OpenStack Image Registry and Delivery Service - Registry
ii  python-glance   1:2014.1-0ubuntu1  all  
OpenStack Image Registry and Delivery Service - Python library
ii  python-glanceclient 1:0.12.0-0ubuntu1  all  
Client library for Openstack glance server.

After some time of debugging I figured out that the problem is caused by
the function "get_store_from_scheme(context, scheme, loc=None)".  The
argument "context" evaluates to "glance.store.rbd.Store" rather than to
"rbd".

Applying the attached quick-and-dirty fix patch solved the problem.

Further investigation needed.
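
For comparison, on the assumption that the Icehouse store loader expects a
scheme name for default_store while known_stores takes import paths, a
configuration along these lines may avoid the error without patching the code
(untested here):

[DEFAULT]
default_store = rbd
known_stores = glance.store.rbd.Store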

** Affects: glance
 Importance: Undecided
 Status: New

** Patch added: "glance.store.__init__.py.patch"
   
https://bugs.launchpad.net/bugs/1339039/+attachment/4147975/+files/glance.store.__init__.py.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339039

Title:
  ERROR: Store for scheme glance.store.rbd.Store not found

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  glance-api is unable to initialize the rbd storage backend.

  Running glance-api with only rbd backend configured and disabling all
  the other backends causes the service to exit with the following
  error:

  root@control:~# glance-api
2 2014-07-07 17:48:03.950 1874 DEBUG glance.store [-] Attempting to import 
store glance.store.rbd.Store _get_store_class 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:168
3 2014-07-07 17:48:03.955 1874 DEBUG glance.store [-] Registering store 
 with schemes ('rbd',) create_stores 
/usr/lib/python2.7/dist-packages/glance/store/__init__.py:210
4 2014-07-07 17:48:03.955 1874 DEBUG glance.store.base [-] Late loading 
location class glance.store.rbd.StoreLocation get_store_location_class 
/usr/lib/python2.7/dist-packages/glance/store/base.py:80
5 2014-07-07 17:48:03.956 1874 DEBUG glance.store.location [-] Registering 
scheme rbd with {'store_class': , 
'location_class': } 
register_scheme_map /usr/lib/python2.7/dist-packages/glance/store/location.py:86
6 2014-07-07 17:48:03.958 1874 D

[Yahoo-eng-team] [Bug 1339028] [NEW] Update custom route on router does not take effect

2014-07-08 Thread Xurong Yang
Public bug reported:

1. create a router
2. create a network with subnet 4.6.72.0/23
3. attach the above subnet to the router
4. update the router with route {destination: 4.6.72.0/23, nexthop: 4.6.72.10}, 
success
5. remove the above route from the router, success
6. update the router with the same route again; the operation succeeds, but the 
route isn't added to the router namespace, so it does not take effect

This problem is caused by removing the connected route, so when adding
the route the second time, the "ip route replace" command fails.

I think we need to restrict the modification of connected routes.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1339028

Title:
  Update custom route on router does not take effect

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. create a router
  2. create a network with subnet 4.6.72.0/23
  3. attach the above subnet to the router
  4. update the router with route {destination: 4.6.72.0/23, nexthop: 
4.6.72.10}, success
  5. remove the above route from the router, success
  6. update the router with the same route again; the operation succeeds, but 
the route isn't added to the router namespace, so it does not take effect

  This problem is caused by removing the connected route, so when adding
  the route the second time, the "ip route replace" command fails.

  I think we need to restrict the modification of connected routes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1339028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339023] [NEW] It is impossible to import selenium submodules inside 'selenium.py' file

2014-07-08 Thread Timur Sufiev
Public bug reported:

Currently there are 2 such files inside horizon tests:
horizon.test.tests.selenium and openstack_dashboard.test.tests.selenium
- so it is impossible to import any submodules of the selenium package
inside them. The most obvious solution is to rename both into, say,
'selenium_tests.py'.
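
A tiny sketch of the shadowing problem (relying on Python 2's implicit
relative imports):

# Inside a test module literally named selenium.py, this import resolves to
# the test module itself rather than the installed selenium distribution,
# so any submodule lookup fails at import time.
from selenium import webdriver  # ImportError: cannot import name webdriver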

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1339023

Title:
  It is impossible to import selenium submodules inside 'selenium.py'
  file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently there are 2 such files inside horizon tests:
  horizon.test.tests.selenium and
  openstack_dashboard.test.tests.selenium - so it would be impossible to
  import any submodules of selenium package inside them. The most
  obvious solution is to rename both into, say, 'selenium_tests.py'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1339023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333137] Re: Can't launch Nova instance: no boot image available

2014-07-08 Thread Amit Prakash Pandey
I tried to reproduce this bug and I don't see this error. I have not
created any image on my own but still I see a few in my devstack install
(master).

So I am marking this as Invalid!

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1333137

Title:
  Can't launch Nova instance: no boot image available

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Testing step:
  1:login as admin
  2:go to project/instance/launch instance
  3:No image available whether what flavor is choosed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1333137/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338977] [NEW] Using OpenvSwitch, all devices with name "tapxxx" attached to OVS bridge can not be synchronized successfully when ovs_use_veth is set to True

2014-07-08 Thread Yang Yu
Public bug reported:

OpenStack will create a device attached to the OVS integration bridge as
a dhcp server, so whenever you create a network in OpenStack, there will
be a tap device created and attached to the OVS integration bridge such
as below.

stack@vm:/opt/stack/tempest$ sudo ovs-vsctl show
67b6d3bf-ff99-45da-9fea-a9d17385bc9d
Bridge br-int
Port "tap865468b3-57"
tag: 7
Interface "tap865468b3-57"
type: internal
Port "eth1"
Interface "eth1"
Port "tapbd8e5831-c9"
Interface "tapbd8e5831-c9"
type: internal
ovs_version: "1.4.6"

stack@vm:/opt/stack/tempest$ sudo ip netns exec 
qdhcp-dd25353d-dfc1-4bf1-a67b-a24fe6a29058 ip a 
20: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever
24: ns-any:  mtu 1500 qdisc noop state DOWN qlen 1000
link/ether 56:b7:ff:07:dc:85 brd ff:ff:ff:ff:ff:ff
50: tap865468b3-57:  mtu 1500 qdisc noqueue 
state UNKNOWN 
link/ether 00:50:56:9b:95:48 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global tap865468b3-57
inet 169.254.169.254/16 brd 169.254.255.255 scope global tap865468b3-57
inet6 fe80::250:56ff:fe9b:9548/64 scope link 
   valid_lft forever preferred_lft forever

And when you remove the integration bridge carelessly, you can create
the bridge with the same name manually, and restart the dhcp agent to get
all tap devices back.

But the devices cannot be set up correctly when ovs_use_veth is set to
True. In that case there will be two peer devices created, one named
ns-xxx and the other named tapxxx. We need
to make sure the tapxxx device is attached to the OVS integration bridge
automatically in the scenario above.

** Affects: neutron
 Importance: Undecided
 Assignee: Yang Yu (yuyangbj)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Yang Yu (yuyangbj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338977

Title:
  Using OpenvSwitch,  all devices with name "tapxxx" attached to OVS
  bridge can not be synchronized successfully when ovs_use_veth is set
  to True

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  OpenStack will create a device attached to the OVS integration bridge
  as a dhcp server, so whenever you create a network in OpenStack, there
  will be a tap device created and attached to the OVS integration
  bridge such as below.

  stack@vm:/opt/stack/tempest$ sudo ovs-vsctl show
  67b6d3bf-ff99-45da-9fea-a9d17385bc9d
  Bridge br-int
  Port "tap865468b3-57"
  tag: 7
  Interface "tap865468b3-57"
  type: internal
  Port "eth1"
  Interface "eth1"
  Port "tapbd8e5831-c9"
  Interface "tapbd8e5831-c9"
  type: internal
  ovs_version: "1.4.6"

  stack@vm:/opt/stack/tempest$ sudo ip netns exec 
qdhcp-dd25353d-dfc1-4bf1-a67b-a24fe6a29058 ip a 
  20: lo:  mtu 16436 qdisc noqueue state UNKNOWN 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  24: ns-any:  mtu 1500 qdisc noop state DOWN qlen 1000
  link/ether 56:b7:ff:07:dc:85 brd ff:ff:ff:ff:ff:ff
  50: tap865468b3-57:  mtu 1500 qdisc noqueue 
state UNKNOWN 
  link/ether 00:50:56:9b:95:48 brd ff:ff:ff:ff:ff:ff
  inet 10.0.0.2/24 brd 10.0.0.255 scope global tap865468b3-57
  inet 169.254.169.254/16 brd 169.254.255.255 scope global tap865468b3-57
  inet6 fe80::250:56ff:fe9b:9548/64 scope link 
 valid_lft forever preferred_lft forever

  And when you remove the integration bridge carelessly, you can create
  the bridge with the same name manually, and restart the dhcp agent to get
  all tap devices back.

  But the devices cannot be set up correctly when ovs_use_veth is set to
  True. In that case there will be two peer devices created, one named
  ns-xxx and the other named tapxxx. We need to make sure the tapxxx device
  is attached to the OVS integration bridge automatically in the scenario
  above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file

2014-07-08 Thread Ilya Shakhat
** Changed in: stackalytics
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285478

Title:
  Enforce alphabetical ordering in requirements file

Status in Blazar:
  Triaged
Status in Cinder:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  In Progress
Status in Python client library for Ironic:
  Fix Committed
Status in Python client library for Neutron:
  Invalid
Status in Trove client binding:
  In Progress
Status in OpenStack contribution dashboard:
  Fix Released
Status in Storyboard database creator:
  In Progress
Status in Tempest:
  In Progress
Status in Openstack Database (Trove):
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  
  Sorting requirements files in alphabetical order makes them more
  readable and makes it easy to check whether a specific library is
  already listed. Hacking doesn't check *.txt files. We already
  enforced this check in oslo-incubator:
  https://review.openstack.org/#/c/66090/.

  This bug is used to track syncing this gating check to the other
  projects.

  How to sync this to other projects:

  1. Copy tools/requirements_style_check.sh to project/tools.

  2. Run tools/requirements_style_check.sh requirements.txt
  test-requirements.txt (a minimal sketch of this kind of ordering
  check is shown after these steps).

  3. Fix the violations.
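
  For reference, a minimal Python stand-in for the ordering check,
  assuming plain pip-style requirements files with one specifier per
  line (this only illustrates the rule being enforced; it is not the
  actual requirements_style_check.sh script):

  # Report requirement lines that are not in case-insensitive
  # alphabetical order.
  import sys


  def check_sorted(path):
      with open(path) as handle:
          lines = [line.strip() for line in handle]
      # Ignore blank lines and comments, as pip does.
      reqs = [line for line in lines
              if line and not line.startswith('#')]
      violations = []
      for previous, current in zip(reqs, reqs[1:]):
          if current.lower() < previous.lower():
              violations.append((previous, current))
      return violations


  if __name__ == '__main__':
      exit_code = 0
      paths = sys.argv[1:] or ['requirements.txt',
                               'test-requirements.txt']
      for path in paths:
          for previous, current in check_sorted(path):
              print('%s: %r should come before %r'
                    % (path, current, previous))
              exit_code = 1
      sys.exit(exit_code)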

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1285478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338938] [NEW] dhcp scheduler should stop redundant agent

2014-07-08 Thread Xurong Yang
Public bug reported:

The scheduler compares the number of active DHCP agents hosting a
network against cfg.CONF.dhcp_agents_per_network. Suppose the DHCP
agents start correctly and some of them later go down (the host goes
down or the dhcp-agent process is killed); during this period the
network is rescheduled onto other healthy agents to recover service.
But when the downed agents come back up, the network is hosted by more
agents than dhcp_agents_per_network, so some of them are redundant.

if len(dhcp_agents) >= agents_per_network:
    LOG.debug(_('Network %s is hosted already'),
              network['id'])
    return

IMO, we need to stop the redundant agents in the case above.
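
A minimal sketch of how the surplus agents could be picked, assuming we
only have the list of agents currently hosting the network and the
configured limit (the helper and dictionary keys below are
illustrative, not existing Neutron code; actually removing the network
from an agent would still go through the scheduler/DB layer):

import datetime


def pick_redundant_agents(hosting_agents, agents_per_network):
    # Return the agents that exceed the per-network limit, preferring
    # to keep the agents with the freshest heartbeat.
    if len(hosting_agents) <= agents_per_network:
        return []
    ranked = sorted(hosting_agents,
                    key=lambda agent: agent['heartbeat_timestamp'],
                    reverse=True)
    return ranked[agents_per_network:]


if __name__ == '__main__':
    now = datetime.datetime.utcnow()
    agents = [
        {'id': 'agent-1', 'heartbeat_timestamp': now},
        {'id': 'agent-2',
         'heartbeat_timestamp': now - datetime.timedelta(minutes=5)},
        {'id': 'agent-3',
         'heartbeat_timestamp': now - datetime.timedelta(seconds=10)},
    ]
    for agent in pick_redundant_agents(agents, agents_per_network=2):
        print('network should be removed from %s' % agent['id'])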

** Affects: neutron
 Importance: Undecided
 Assignee: Xurong Yang (idopra)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338938

Title:
  dhcp scheduler should stop redundant agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The scheduler compares the number of active DHCP agents hosting a
  network against cfg.CONF.dhcp_agents_per_network. Suppose the DHCP
  agents start correctly and some of them later go down (the host goes
  down or the dhcp-agent process is killed); during this period the
  network is rescheduled onto other healthy agents to recover service.
  But when the downed agents come back up, the network is hosted by
  more agents than dhcp_agents_per_network, so some of them are
  redundant.

  if len(dhcp_agents) >= agents_per_network:
      LOG.debug(_('Network %s is hosted already'),
                network['id'])
      return

  IMO, we need to stop the redundant agents in the case above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338938/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp