[Yahoo-eng-team] [Bug 1188218] Re: Support standard ceilometer compute metrics with nova baremetal

2015-07-07 Thread gordon chung
https://github.com/openstack/ceilometer/commit/683ead74af36c88575ca8bce312ecf1428a5cc80

** Changed in: ceilometer
   Status: Triaged => Fix Released

** Changed in: ceilometer
 Assignee: (unassigned) => Zhai, Edwin (edwin-zhai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1188218

Title:
  Support standard ceilometer compute metrics with nova baremetal

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Won't Fix
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  I guess this is a subset of bug #1004468 and
  https://blueprints.launchpad.net/ceilometer/+spec/non-libvirt-hw

  However, it's a bit different for nova-baremetal. There is no
  hypervisor we can query for CPU, disk and network statistics so we
  can't just add another plugin for ceilometer's compute agent.

  Instead, we will need an agent which runs inside each baremetal
  instance and posts samples to ceilometer's public /meters/ API
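
  For illustration only, a minimal agent-side sketch of such a POST (the
  endpoint, token handling, resource id and sample values are placeholders;
  the field names follow the ceilometer v2 samples API):

     import json
     import requests

     CEILOMETER = "http://ceilometer.example.com:8777"   # assumed endpoint
     TOKEN = "..."                                        # keystone token

     # one cpu_util sample for this baremetal instance
     sample = [{
         "counter_name": "cpu_util",
         "counter_type": "gauge",
         "counter_unit": "%",
         "counter_volume": 12.5,
         "resource_id": "baremetal-instance-uuid",        # placeholder
     }]

     resp = requests.post(
         "%s/v2/meters/cpu_util" % CEILOMETER,
         headers={"X-Auth-Token": TOKEN,
                  "Content-Type": "application/json"},
         data=json.dumps(sample))
     resp.raise_for_status()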

  At first glance, these look like the counters which require a guest
  agent:

   cpu                       CPU time used
   cpu_util                  CPU utilisation
   disk.read.request         Number of read requests
   disk.write.request        Number of write requests
   disk.read.bytes           Volume of read in B
   disk.write.bytes          Volume of write in B
   network.incoming.bytes    number of incoming bytes on the network
   network.outgoing.bytes    number of outgoing bytes on the network
   network.incoming.packets  number of incoming packets
   network.outgoing.packets  number of outgoing packets

  For the other compute counters, we can add baremetal support to the
  ceilometer compute agent - e.g. these counters:

   instance Duration of instance
   instance:type  Duration of instance type (openstack types)
   memory   Volume of RAM in MB
   cpus Number of VCPUs
   disk.root.size   Size of root disk in GB
   disk.ephemeral.size  Size of ephemeral disk in GB

  One thing to consider is access control to these counters - we
  probably don't usually allow tenants to update these counters, but in
  this case the tenant will require that ability.

  It's unclear whether this guest agent would live in ceilometer, nova
  baremetal or ironic. It's interfacing with (what should be) a very
  stable ceilometer API, so there's no particular need for it to live in
  ceilometer.

  I'm also adding a tripleo task, since I expect tripleo will want these
  metrics available for things like auto-scaling or simply resource
  monitoring. We'd need at least a diskimage-builder element which
  includes the guest agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1188218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472347] [NEW] With multiple Neutron api/rpc workers enabled, intermittent failure deleting dhcp_port

2015-07-07 Thread Danny Choi
Public bug reported:

Neutron multiple workers are enabled as follows in neutron.conf:
   - api_workers=3
   - rpc_workers=3

The following were configured:
   - 20 tenants
   - Each tenant had 5 tenant networks
   - For each network, one VM at each Compute node (2) for a total of 10 VMs
   - Total 100 VLANs/200 VMs
 
A script which did the following at tenant-1 (a rough sketch follows the list):
   - Delete all 10 VMs
   - For each network, delete its router interface
   - Delete the subnet
   - Delete the network
   - Re-create the network, subnet and router interface
   - For each network, launch 2 VMs (one at each Compute node)
   - Repeat steps 1 – 6
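
A rough sketch of such a churn loop with python-neutronclient, invoked
repeatedly for each of tenant-1's networks (credentials, names and CIDRs are
placeholders, and the VM boot/delete steps via nova are omitted; this is not
the actual script from this report):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='tenant-1',
                            auth_url='http://controller:5000/v2.0')

    def churn(network_id, subnet_id, router_id):
        # steps 2-4: detach the router interface, delete subnet and network
        neutron.remove_interface_router(router_id, {'subnet_id': subnet_id})
        neutron.delete_subnet(subnet_id)
        neutron.delete_network(network_id)
        # step 5: re-create the network, subnet and router interface
        net = neutron.create_network({'network': {'name': 'net-1'}})
        new_net_id = net['network']['id']
        sub = neutron.create_subnet({'subnet': {'network_id': new_net_id,
                                                'cidr': '10.0.1.0/24',
                                                'ip_version': 4}})
        new_subnet_id = sub['subnet']['id']
        neutron.add_interface_router(router_id, {'subnet_id': new_subnet_id})
        return new_net_id, new_subnet_id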

Intermittently the following delete port error is encountered:

2015-07-06 16:17:51.903 43190 DEBUG neutron.plugins.ml2.plugin 
[req-f18af2a1-0047-4301-9fa1-01632fa5b2b8 None] Calling delete_port for 
fcf17b5d-235c-466b-b54b-ce80acca7359 owned by network:dhcp delete_p
ort /usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py:1076
2015-07-06 16:17:51.904 43216 ERROR oslo.db.sqlalchemy.exc_filters 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a ] DB exception wrapped.
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters Traceback 
(most recent call last):
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 781, in 
fetchall
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters l = 
self.process_rows(self._fetchall_impl())
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 750, in 
_fetchall_impl
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
self._non_result()
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 755, in 
_non_result
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters This 
result object does not return rows. 
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
ResourceClosedError: This result object does not return rows. It has been 
closed automatically.
2015-07-06 16:17:51.904 43216 TRACE oslo.db.sqlalchemy.exc_filters 
2015-07-06 16:17:51.906 43216 DEBUG neutron.openstack.common.lockutils 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a ] Releasing semaphore db-access 
lock /usr/lib/python2.7/site-packages/neutron/openstack
/common/lockutils.py:238
2015-07-06 16:17:51.906 43216 ERROR neutron.api.v2.resource 
[req-cbb23fa8-5043-405c-a569-fcfdc912555a None] delete failed
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 81, in 
resource
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 476, in delete
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 680, in 
delete_network
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource continue
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/openstack/common/excutils.py, line 
82, in __exit__
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py, line 640, in 
delete_network
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
with_lockmode('update').all())
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py, line 2300, in all
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource return 
list(self)
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py, line 66, in 
instances
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource fetch = 
cursor.fetchall()
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py, line 787, in 
fetchall
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource self.cursor, 
self.context)
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource   File 
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py, line 1156, in 
_handle_dbapi_exception
2015-07-06 16:17:51.906 43216 TRACE neutron.api.v2.resource 
util.raise_from_cause(newraise, exc_info)
2015-07-06 16:17:51.906 

[Yahoo-eng-team] [Bug 1320617] Re: [Image] failed to reach ACTIVE status within the required time (196 s). Current status: SAVING

2015-07-07 Thread gordon chung
marking this invalid from ceilometer pov, since this hasn't cropped up.
as eglynn mentioned, there were various changes to address issues... not
really sure if consuming 7.5% cpu is the root of issue regardless.

** Changed in: ceilometer
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320617

Title:
  [Image] failed to reach ACTIVE status within the required time (196
  s). Current status: SAVING

Status in OpenStack Telemetry (Ceilometer):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/08/92608/1/gate/gate-tempest-dsvm-postgres-
  full/57b137a/console.html.gz

  The relevant part of the test case:
  1. Boot server1
  2. Boot server2 wait until active
  3. wait until server1 ACTIVE
  4. Create snapshot from server1, wait until ACTIVE with 196 sec timeout
  5. Cleanup, die with the first failure

  Normally the test case would create 2 additional images at the beginning,
  but now it died at the first image creation.
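
  For context, the tempest waiter in the traceback below is essentially a
  poll-with-timeout loop; a minimal generic sketch (the client call is a
  placeholder, not the actual tempest code):

     import time

     def wait_for_image_active(client, image_id, timeout=196, interval=5):
         deadline = time.time() + timeout
         status = None
         while time.time() < deadline:
             status = client.show_image(image_id)['status']  # placeholder
             if status == 'ACTIVE':
                 return
             time.sleep(interval)
         raise Exception('Image %s failed to reach ACTIVE status within the '
                         'required time (%s s). Current status: %s'
                         % (image_id, timeout, status))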

  2014-05-09 21:44:09.857 | Captured traceback:
  2014-05-09 21:44:09.858 | ~~~
  2014-05-09 21:44:09.858 | Traceback (most recent call last):
  2014-05-09 21:44:09.858 |   File 
tempest/api/compute/images/test_list_image_filters.py, line 45, in setUpClass
  2014-05-09 21:44:09.858 | cls.server1['id'], wait_until='ACTIVE')
  2014-05-09 21:44:09.858 |   File tempest/api/compute/base.py, line 302, 
in create_image_from_server
  2014-05-09 21:44:09.858 | kwargs['wait_until'])
  2014-05-09 21:44:09.858 |   File 
tempest/services/compute/xml/images_client.py, line 140, in 
wait_for_image_status
  2014-05-09 21:44:09.858 | waiters.wait_for_image_status(self, 
image_id, status)
  2014-05-09 21:44:09.858 |   File tempest/common/waiters.py, line 129, 
in wait_for_image_status
  2014-05-09 21:44:09.858 | raise exceptions.TimeoutException(message)
  2014-05-09 21:44:09.859 | TimeoutException: Request timed out
  2014-05-09 21:44:09.859 | Details: (ListImageFiltersTestXML:setUpClass) 
Image 20b6e7a9-f65d-4d17-b025-59f9237ff8cb failed to reach ACTIVE status within 
the required time (196 s). Current status: SAVING.

  logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIgQU5EIG1lc3NhZ2U6XCJDdXJyZW50IHN0YXR1czogU0FWSU5HXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDA0MTEyNDcwMzksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  message:failed to reach ACTIVE status within the required time AND
  filename:console.html AND message:Current status: SAVING

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1320617/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471934] Re: tox -e docs doesn't build module index

2015-07-07 Thread Matt Riedemann
The solution for the sphinx stuff is to configure sphinx using the
conf.py in the doc source dir; in nova that means updating this
file:

https://github.com/openstack/nova/blob/master/doc/source/conf.py#L85

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471934

Title:
  tox -e docs doesn't build module index

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  With this change https://review.openstack.org/#/c/121737/
  autodoc_index_modules was set to 0 in the [pbr] section of setup.cfg
  so that you can no longer get to git/nova/doc/build/html/py-
  modindex.html locally to verify that module docstrings are correct.

  We should enable autodoc_index_modules for the docs tox target again.
  The original point of the previous change was to not list the module
  index in the home page, which is fine.
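
  For reference, the pbr option in question lives in setup.cfg; re-enabling
  it would look something like this (a sketch, not the actual patch):

     [pbr]
     autodoc_index_modules = True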

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472098] [NEW] Horizon Application level module structure is unclear

2015-07-07 Thread Shaoquan Chen
Public bug reported:

Horizon's client side is architected around Angular modules, providers,
etc. The application-level module is defined by a list of other modules.
What modules should be listed there, and why, is not very clear.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472098

Title:
  Horizon Application level module structure is unclear

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon's client side is architected around Angular modules, providers,
  etc. The application-level module is defined by a list of other modules.
  What modules should be listed there, and why, is not very clear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472167] [NEW] Fix column width for Firewalls' drag and drop forms

2015-07-07 Thread Tatiana Ovchinnikova
Public bug reported:

Firewalls' Add Policy and Add Firewall forms have columns with drag
and drop mechanisms. These columns should have a fixed width: they look
strange if there are no items in corresponding lists or items' names are
too short or too long.

** Affects: horizon
 Importance: Undecided
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472167

Title:
  Fix column width for Firewalls' drag and drop forms

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Firewalls' Add Policy and Add Firewall forms have columns with
  drag and drop mechanisms. These columns should have a fixed width:
  they look strange if there are no items in corresponding lists or
  items' names are too short or too long.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472163] [NEW] DVR: in some cases router is not scheduled to all needed hosts

2015-07-07 Thread Oleg Bondarev
Public bug reported:

Scenario
 - have vm running on a subnet not connected to a router
 - create router with gateway:
   neutron router-create router1 -- --external-gateway-info type=dict 
network_id=uuid
 - connect subnet to a router
 - check l3 agents hosting router
In this case only csnat portion of the router will be scheduled to dvr_snat l3 
agent on network node.
Router will not be scheduled to a dvr l3 agent on compute node hosting VM, as 
well as to other l3 agents on networks nodes hosting dhcp ports.

This is a regression from commit
3794b4a83e68041e24b715135f0ccf09a5631178 where scheduler will return
once snat portion is scheduled.

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472163

Title:
  DVR: in some cases router is not scheduled to all needed hosts

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Scenario
   - have vm running on a subnet not connected to a router
   - create router with gateway:
 neutron router-create router1 -- --external-gateway-info type=dict 
network_id=uuid
   - connect subnet to a router
   - check l3 agents hosting router
  In this case only csnat portion of the router will be scheduled to dvr_snat 
l3 agent on network node.
  Router will not be scheduled to a dvr l3 agent on compute node hosting VM, as 
well as to other l3 agents on networks nodes hosting dhcp ports.

  This is a regression from commit
  3794b4a83e68041e24b715135f0ccf09a5631178 where scheduler will return
  once snat portion is scheduled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472099] [NEW] when deleting a domain, the role assignment of a user (a user from another domain) in this domain isn't deleted.

2015-07-07 Thread lumeihong
Public bug reported:

when deleting a domain, the role assignment of a user (a user from another
domain) in this domain isn't deleted.
the steps are as follows:
1) create a user with domain_id = default, user_name = myuser
2) create a domain named test_domain
3) grant a role to myuser (the user's name) in test_domain (the domain's name)
4) delete the domain named test_domain

but myuser's role in test_domain still exists in the assignment table.
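
A sketch of the reproduction with python-keystoneclient v3 (the token,
endpoint, names and password are placeholders, not taken from this report):

    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://keystone:35357/v3')

    user = ks.users.create(name='myuser', domain='default', password='secret')
    domain = ks.domains.create(name='test_domain')
    role = ks.roles.create(name='myrole')
    ks.roles.grant(role, user=user, domain=domain)

    ks.domains.update(domain, enabled=False)  # a domain must be disabled first
    ks.domains.delete(domain)

    # Expected: no assignment rows left for myuser in test_domain.
    # Per this bug, the row is still present in the assignment table.
    print(ks.role_assignments.list(user=user))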

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1472099

Title:
  when deleting a domain, the role assignment of a user (a user from
  another domain) in this domain isn't deleted.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  when deleting a domain, the role assignment of a user (a user from another
  domain) in this domain isn't deleted.
  the steps are as follows:
  1) create a user with domain_id = default, user_name = myuser
  2) create a domain named test_domain
  3) grant a role to myuser (the user's name) in test_domain (the domain's name)
  4) delete the domain named test_domain

  but myuser's role in test_domain still exists in the assignment table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1472099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472361] [NEW] COMMON_PREFIXES removal fallout

2015-07-07 Thread Armando Migliaccio
Public bug reported:

Change 18bc67d56faef30a0f73429a5ee580e052858cb5 has caused a bit of an
unexpected havoc.

** Affects: neutron
 Importance: Critical
 Assignee: Armando Migliaccio (armando-migliaccio)
 Status: In Progress

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Armando Migliaccio (armando-migliaccio)

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472361

Title:
  COMMON_PREFIXES removal fallout

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Change 18bc67d56faef30a0f73429a5ee580e052858cb5 has caused a bit of an
  unexpected havoc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472385] [NEW] Duplicate keywords for translation

2015-07-07 Thread Thai Tran
Public bug reported:

There is a duplicate keyword in the list of keywords specified for translation. 
This is incorrect and should be ugettext_noop.
https://github.com/openstack/horizon/blob/master/run_tests.sh#L428

** Affects: horizon
 Importance: Low
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472385

Title:
  Duplicate keywords for translation

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is a duplicate keyword in the list of keywords specified for 
translation. This is incorrect and should be ugettext_noop.
  https://github.com/openstack/horizon/blob/master/run_tests.sh#L428

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471934] Re: tox -e docs doesn't build module index

2015-07-07 Thread Matt Riedemann
Re-opening, comment 3 was for the pbr bug.

** Changed in: nova
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471934

Title:
  tox -e docs doesn't build module index

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  With this change https://review.openstack.org/#/c/121737/
  autodoc_index_modules was set to 0 in the [pbr] section of setup.cfg
  so that you can no longer get to git/nova/doc/build/html/py-
  modindex.html locally to verify that module docstrings are correct.

  We should enable autodoc_index_modules for the docs tox target again.
  The original point of the previous change was to not list the module
  index in the home page, which is fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472358] [NEW] Add --yes flag to run_tests.sh

2015-07-07 Thread Ana Krivokapić
Public bug reported:

Currently run_tests.sh asks for Y/n user input e.g. to confirm update
of the virtual env. This makes it hard for scripting as explicit user
input is required. We should add a flag to the run_tests.sh script, e.g.
--yes, to do these changes without user confirmation.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472358

Title:
  Add --yes flag to run_tests.sh

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently run_tests.sh asks for Y/n user input e.g. to confirm
  update of the virtual env. This makes it hard for scripting as
  explicit user input is required. We should add a flag to the
  run_tests.sh script, e.g. --yes, to do these changes without user
  confirmation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468934] Re: Neutron might use robust quota enforcement

2015-07-07 Thread Salvatore Orlando
Sorted.
I was wondering indeed where did the rfe bug go!

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: vmware-nsx

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468934

Title:
  Neutron might use robust quota enforcement

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron can allow exceeding the quota in certain cases.  Some
  investigation revealed that quotas in Neutron are subject to a race
  where parallel requests can each check quota and find there is just
  enough left to fulfill its individual request.
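
  An illustration of that check-then-act race (toy code, not Neutron's):

     import threading
     import time

     QUOTA = 1
     created = []

     def create_resource():
         # each API worker counts existing resources, sees room, then creates
         if len(created) < QUOTA:          # check
             time.sleep(0.01)              # window for a parallel request
             created.append(object())      # act

     workers = [threading.Thread(target=create_resource) for _ in range(2)]
     for w in workers:
         w.start()
     for w in workers:
         w.join()
     print(len(created))  # prints 2: a quota of 1 has been exceeded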

  Neutron has no concept of reservation and optimistically assumes that
  a resource count before performing a request is all that's needed.

  Also, it does not take into account at all that API operations might
  create resources as a side effect, and that resources can be created
  even from RPC calls.

  The goal of this RFE is to ensure quota enforcement is done in a decent way 
in Neutron.
  Yeah, even quota management is pretty terrible, but let's start with quota 
enforcement

  Oh... by the way, the patches are already under review [1]

  Note: I am filing this RFE as the patches [1] did not land by the
  liberty-1 deadline and I failed to resubmit the already approved Kilo
  spec [2] because I'm an indolent procrastinator.

  
  [1] 
https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/better-quotas,n,z
  [2] 
http://specs.openstack.org/openstack/neutron-specs/specs/kilo-backlog/better-quotas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472410] [NEW] Improve Angular modules management

2015-07-07 Thread Shaoquan Chen
Public bug reported:

There is room for improving Horizon's Angular module management:

- All of Horizon's built-in modules should be managed as a tree; child-modules 
of the same tree should not depend on each other directly.
- Library modules should be managed at the application module level, so that:
   - we have a clear picture of what external modules are being used in Horizon.
   - the code is clearer.

This can also improve over-mocked tests.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472410

Title:
  Improve Angular modules management

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is room for improving Horizon's Angular module management:

  - All of Horizon's built-in modules should be managed as a tree; child-modules 
of the same tree should not depend on each other directly.
  - Library modules should be managed at the application module level, so that:
     - we have a clear picture of what external modules are being used in Horizon.
     - the code is clearer.

  This can also improve over-mocked tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472304] [NEW] Generating allocation pools with ::/64 gives inconsistent results

2015-07-07 Thread John Davidge
Public bug reported:

Calling _allocate_pools_for_subnet when subnet['cidr'] = ::/64 returns a
pool that begins with an IPv4 address and ends with an IPv6 address. For
example:

{start: 0.0.0.2, end: :::::}

IP version should be kept consistent during the pool generation to
prevent this.
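
The mixed result is consistent with building pool boundaries from bare
integers, since netaddr.IPAddress() defaults to IPv4 for small values (an
assumption about the root cause, not a statement from this report):

    import netaddr

    subnet = netaddr.IPNetwork('::/64')
    # the IP version is lost when only the integer is passed:
    print(netaddr.IPAddress(subnet.first + 2))                  # 0.0.0.2
    # keeping the subnet's version gives the expected IPv6 form:
    print(netaddr.IPAddress(subnet.first + 2, subnet.version))  # ::2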

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472304

Title:
  Generating allocation pools with ::/64 gives inconsistent results

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Calling _allocate_pools_for_subnet when subnet['cidr'] = ::/64 returns
  a pool that begins with an IPv4 address and ends with an IPv6 address.
  For example:

  {start: 0.0.0.2, end: :::::}

  IP version should be kept consistent during the pool generation to
  prevent this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469867] Re: Stop using deprecated oslo_utils.timeutils.strtime

2015-07-07 Thread Doug Hellmann
** Changed in: keystonemiddleware
   Status: Fix Committed => Fix Released

** Changed in: keystonemiddleware
 Milestone: None => 2.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469867

Title:
  Stop using deprecated oslo_utils.timeutils.strtime

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Released
Status in Python client library for Keystone:
  In Progress

Bug description:
  
  Keystone unit tests are failing because they're still using the deprecated 
oslo_utils.timeutils.strtime function. We need to stop using the function.

  DeprecationWarning: Using function/method
  'oslo_utils.timeutils.strtime()' is deprecated in version '1.6' and
  will be removed in a future version: use either
  datetime.datetime.isoformat() or datetime.datetime.strftime() instead
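
  A minimal sketch of the replacement at a call site (hedged; actual call
  sites and format strings vary):

     import datetime

     now = datetime.datetime.utcnow()

     # before (deprecated):
     #   from oslo_utils import timeutils
     #   timestamp = timeutils.strtime(now)
     # after:
     timestamp = now.isoformat()
     # or, with an explicit format string:
     timestamp = now.strftime('%Y-%m-%dT%H:%M:%S.%f')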

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472306] [NEW] Broken ascii diagram in materialized path spec

2015-07-07 Thread Alexander Makarov
Public bug reported:

In the problem description of the spec:
https://github.com/openstack/keystone-specs/blob/master/specs/liberty/materialize-project-hierarchy.rst

A tree diagram is represented in variable width font because of the mistake in 
syntax:
a string ending with '::' must precede the pseudo-graphics block for it to be 
displayed in a monospace font.
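
For reference, a minimal example of the reST rule described above (not the
spec's actual diagram):

    The project hierarchy looks like this::

        A
        +-- B
        +-- C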

** Affects: keystone
 Importance: Undecided
 Assignee: Alexander Makarov (amakarov)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Alexander Makarov (amakarov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1472306

Title:
  Broken ascii diagram in materialized path spec

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  In the problem description of the spec:
  
https://github.com/openstack/keystone-specs/blob/master/specs/liberty/materialize-project-hierarchy.rst

  A tree diagram is represented in variable width font because of the mistake 
in syntax:
  a string ending with '::' must precede the pseudo-graphics block for it to be 
displayed in a monospace font.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1472306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472309] [NEW] py34 tox env does not support regex

2015-07-07 Thread Ihar Hrachyshka
Public bug reported:

If I try to execute a single test by calling:

tox -e py34 name-of-test, it still executes the whole test suite.

** Affects: neutron
 Importance: Undecided
 Assignee: Cyril Roelandt (cyril-roelandt)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Cyril Roelandt (cyril-roelandt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472309

Title:
  py34 tox env does not support regex

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If I try to execute a single test by calling:

  tox -e py34 name-of-test, it still executes the whole test suite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472433] [NEW] Do not use dnsmasq process as nameserver

2015-07-07 Thread Aaron Rosen
Public bug reported:

Previously, if one had more than one dhcp port configured on a network, a
host booted on this network got a resolv.conf that contained a nameserver
entry for each dhcp port. If there was only 1 dhcp port, it would get the
dns server configured as cfg.CONF.dnsmasq_dns_servers. This patch removes
the code that sets the nameserver to the dhcp agent instead of the one
configured.
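
For reference, the configured servers come from the DHCP agent's
configuration file; a minimal example (the addresses are placeholders):

    [DEFAULT]
    dnsmasq_dns_servers = 8.8.8.8,8.8.4.4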

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472433

Title:
  Do not use dnsmasq process as nameserver

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Previously, if one had more than one dhcp port configured on a network, a
  host booted on this network got a resolv.conf that contained a nameserver
  entry for each dhcp port. If there was only 1 dhcp port, it would get the
  dns server configured as cfg.CONF.dnsmasq_dns_servers. This patch removes
  the code that sets the nameserver to the dhcp agent instead of the one
  configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472244] [NEW] Remove dependency on Tempest from neutron

2015-07-07 Thread Dmitry Ratushnyy
Public bug reported:

As everyone knows, all neutron API tests were copied from the tempest tree
into the neutron tree, and these tests are already running on the neutron
gates.

But at the moment running these tests has some external dependencies
(TEMPEST_CONFIG_DIR and the tempest.conf file itself).

Having a dependency on a third-party component for API tests is
unacceptable.

This bug is about completely removing the dependency on tempest from
neutron tests by changing how the tests check the neutron configuration.

The CONF object, which is used by tests during skip checks and other
methods, should be refactored to not rely on tempest.conf.

Removing this dependency will simplify running the neutron tests.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472244

Title:
  Remove dependency on Tempest from neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  As everyone knows, all neutron API tests were copied from the tempest tree
  into the neutron tree, and these tests are already running on the neutron
  gates.

  But at the moment running these tests has some external dependencies
  (TEMPEST_CONFIG_DIR and the tempest.conf file itself).

  Having a dependency on a third-party component for API tests is
  unacceptable.

  This bug is about completely removing the dependency on tempest from
  neutron tests by changing how the tests check the neutron configuration.

  The CONF object, which is used by tests during skip checks and other
  methods, should be refactored to not rely on tempest.conf.

  Removing this dependency will simplify running the neutron tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472243] [NEW] Router interface add port with a mac address raise runtime error

2015-07-07 Thread Tristan Cacqueray
Public bug reported:

Trace:
ERROR neutron.agent.l3.agent [-] Failed to process compatible router 
'1794ed9d-68d6-402c-a4e5-8041de4c4186'
TRACE neutron.agent.l3.agent Traceback (most recent call last):
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py, line 452, in 
_process_router_update
TRACE neutron.agent.l3.agent self._process_router_if_compatible(router)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py, line 406, in 
_process_router_if_compatible
TRACE neutron.agent.l3.agent self._process_updated_router(router)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py, line 420, in 
_process_updated_router
TRACE neutron.agent.l3.agent ri.process(self)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/common/utils.py, line 346, in call
TRACE neutron.agent.l3.agent self.logger(e)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
TRACE neutron.agent.l3.agent six.reraise(self.type_, self.value, self.tb)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/common/utils.py, line 343, in call
TRACE neutron.agent.l3.agent return func(*args, **kwargs)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py, line 605, 
in process
TRACE neutron.agent.l3.agent self._process_internal_ports()
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py, line 361, 
in _process_internal_ports
TRACE neutron.agent.l3.agent self.internal_network_added(p)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py, line 312, 
in internal_network_added
TRACE neutron.agent.l3.agent INTERNAL_DEV_PREFIX)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py, line 288, 
in _internal_network_added
TRACE neutron.agent.l3.agent prefix=prefix)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/interface.py, line 252, 
in plug
TRACE neutron.agent.l3.agent ns_dev.link.set_address(mac_address)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 270, in 
set_address
TRACE neutron.agent.l3.agent self._as_root([], ('set', self.name, 
'address', mac_address))
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 222, in 
_as_root
TRACE neutron.agent.l3.agent use_root_namespace=use_root_namespace)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 69, in 
_as_root
TRACE neutron.agent.l3.agent log_fail_as_error=self.log_fail_as_error)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py, line 78, in 
_execute
TRACE neutron.agent.l3.agent log_fail_as_error=log_fail_as_error)
TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 137, in 
execute
TRACE neutron.agent.l3.agent raise RuntimeError(m)
TRACE neutron.agent.l3.agent RuntimeError: 
TRACE neutron.agent.l3.agent Command: ['sudo', 'neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'link', 'set', 'qr-a848e3a3-ce', 'address', 
'00:00:00:00:00:00']
TRACE neutron.agent.l3.agent Exit code: 2
TRACE neutron.agent.l3.agent Stdin: 
TRACE neutron.agent.l3.agent Stdout: 
TRACE neutron.agent.l3.agent Stderr: RTNETLINK answers: Cannot assign requested 
address


Steps to reproduce:
router_id=$(neutron router-create test | grep ' id ' | awk '{ print $4 }')
neutron net-create test
neutron subnet-create test 192.168.0.1/24
port_id=$(neutron port-create --mac_address '00:00:00:00:00:00' test | grep ' 
id ' | awk '{ print $4 }')
neutron router-interface-add $router_id port=$port_id

Impact:
Raise RuntimeError instead of NeutronError

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472243

Title:
  Router interface add port with a mac address raise runtime error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Trace:
  ERROR neutron.agent.l3.agent [-] Failed to process compatible router 
'1794ed9d-68d6-402c-a4e5-8041de4c4186'
  TRACE neutron.agent.l3.agent Traceback (most recent call last):
  TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py, line 452, in 
_process_router_update
  TRACE neutron.agent.l3.agent self._process_router_if_compatible(router)
  TRACE neutron.agent.l3.agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py, line 406, in 

[Yahoo-eng-team] [Bug 1472242] [NEW] Router interface add port without subnet raise indexerror

2015-07-07 Thread Tristan Cacqueray
Public bug reported:

Trace:
ERROR neutron.api.v2.resource [req-dbf179d1-62ac-4537-be15-c2088669f75c ] 
add_router_interface failed
TRACE neutron.api.v2.resource Traceback (most recent call last):
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 83, in 
resource
TRACE neutron.api.v2.resource result = method(request=request, **args)
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 207, in 
_handle_action
TRACE neutron.api.v2.resource return getattr(self._plugin, name)(*arg_list, 
**kwargs)
TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py, line 306, in 
add_router_interface
TRACE neutron.api.v2.resource router_id, port['tenant_id'], port['id'], 
subnets[-1]['id'],
TRACE neutron.api.v2.resource IndexError: list index out of range


Steps to reproduce:
router_id=$(neutron router-create test | grep ' id ' | awk '{ print $4 }')
neutron net-create test
port_id=$(neutron port-create test | grep ' id ' | awk '{ print $4 }')
neutron router-interface-add $router_id port=$port_id


Impact:
Raise an IndexError exception instead of NeutronError
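
One possible shape of a guard (an assumption, not the actual fix that was
merged): validate the port's fixed IPs before indexing into its subnets.

    from neutron.common import exceptions as n_exc

    def _require_fixed_ip_subnets(port):
        # Hypothetical helper: reject interface ports that have no fixed IPs
        # instead of letting subnets[-1] raise IndexError later on.
        subnets = [ip['subnet_id'] for ip in port.get('fixed_ips', [])]
        if not subnets:
            raise n_exc.BadRequest(
                resource='router',
                msg='Router interface port must have at least one fixed IP')
        return subnets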

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472242

Title:
  Router interface add port without subnet raise indexerror

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Trace:
  ERROR neutron.api.v2.resource [req-dbf179d1-62ac-4537-be15-c2088669f75c ] 
add_router_interface failed
  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py, line 83, in 
resource
  TRACE neutron.api.v2.resource result = method(request=request, **args)
  TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py, line 207, in 
_handle_action
  TRACE neutron.api.v2.resource return getattr(self._plugin, 
name)(*arg_list, **kwargs)
  TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py, line 306, in 
add_router_interface
  TRACE neutron.api.v2.resource router_id, port['tenant_id'], port['id'], 
subnets[-1]['id'],
  TRACE neutron.api.v2.resource IndexError: list index out of range

  
  Steps to reproduce:
  router_id=$(neutron router-create test | grep ' id ' | awk '{ print $4 }')
  neutron net-create test
  port_id=$(neutron port-create test | grep ' id ' | awk '{ print $4 }')
  neutron router-interface-add $router_id port=$port_id

  
  Impact:
  Raise an IndexError exception instead of NeutronError

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472184] [NEW] volume attachment to windows VM, mount point is always linux format

2015-07-07 Thread zhu zhu
Public bug reported:

When attaching a volume to a Windows instance (without specifying the device
name), the mount point will always be in Linux format, such as /dev/sdb,
whether from the CLI or the Horizon GUI.

Checking the OpenStack nova code, it seems to only allow device names in
/dev/* format regardless of the instance type:

https://github.com/openstack/nova/blob/master/nova/block_device.py#L562

Just want to ask whether there is any consideration behind this design, or
whether some interface could be provided from the virt driver layer for the
device name format.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472184

Title:
  volume attachment to windows VM, mount point is always linux format

Status in OpenStack Compute (Nova):
  New

Bug description:
  When attaching a volume to a Windows instance (without specifying the device
  name), the mount point will always be in Linux format, such as /dev/sdb,
  whether from the CLI or the Horizon GUI.

  Checking the OpenStack nova code, it seems to only allow device names in
  /dev/* format regardless of the instance type:

  https://github.com/openstack/nova/blob/master/nova/block_device.py#L562

  Just want to ask whether there is any consideration behind this design, or
  whether some interface could be provided from the virt driver layer for the
  device name format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1472184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472205] [NEW] DVR: csnat portion is not rescheduled after snat host goes down

2015-07-07 Thread Oleg Bondarev
Public bug reported:

Currently l3 auto rescheduling mechanism does not take into account DVR router 
csnat portion.
So if l3 agent (node) hosting csnat portion of a dvr router goes down, external 
connectivity is lost for all VMs without floating IPs.

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472205

Title:
  DVR: csnat portion is not rescheduled after snat host goes down

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently l3 auto rescheduling mechanism does not take into account DVR 
router csnat portion.
  So if l3 agent (node) hosting csnat portion of a dvr router goes down, 
external connectivity is lost for all VMs without floating IPs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472459] [NEW] test setup leakage in current (2015-07-08) master HEAD

2015-07-07 Thread Richard Jones
Public bug reported:

Running a subset of the test suite via:

   ./run_tests.sh  openstack_dashboard.test.api_tests

results in a test error:

 ==
 ERROR: test_url_for 
(openstack_dashboard.test.api_tests.base_tests.ApiHelperTests)
 --
 Traceback (most recent call last):
  File 
/Users/richard/src/openstack/horizon/openstack_dashboard/test/api_tests/base_tests.py,
 line 192, in test_url_for
url = api_base.url_for(self.request, 'image')
  File /Users/richard/src/openstack/horizon/openstack_dashboard/api/base.py, 
line 311, in url_for
catalog = request.user.service_catalog
  File 
/Users/richard/src/openstack/horizon/.venv/lib/python2.7/site-packages/django/utils/functional.py,
 line 225, in inner
return func(self._wrapped, *args)
 AttributeError: 'AnonymousUser' object has no attribute 'service_catalog'
 --

which goes away when the scope of the tests is raised up to:

  ./run_tests.sh  openstack_dashboard.test

which implies the setup of the tests in some other part of Horizon is
causing a side-effect that's being relied upon in the api tests.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472459

Title:
  test setup leakage in current (2015-07-08) master HEAD

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Running a subset of the test suite via:

 ./run_tests.sh  openstack_dashboard.test.api_tests

  results in a test error:

   ==
   ERROR: test_url_for 
(openstack_dashboard.test.api_tests.base_tests.ApiHelperTests)
   --
   Traceback (most recent call last):
File 
/Users/richard/src/openstack/horizon/openstack_dashboard/test/api_tests/base_tests.py,
 line 192, in test_url_for
  url = api_base.url_for(self.request, 'image')
File 
/Users/richard/src/openstack/horizon/openstack_dashboard/api/base.py, line 
311, in url_for
  catalog = request.user.service_catalog
File 
/Users/richard/src/openstack/horizon/.venv/lib/python2.7/site-packages/django/utils/functional.py,
 line 225, in inner
  return func(self._wrapped, *args)
   AttributeError: 'AnonymousUser' object has no attribute 'service_catalog'
   --

  which goes away when the scope of the tests is raised up to:

./run_tests.sh  openstack_dashboard.test

  which implies the setup of the tests in some other part of Horizon is
  causing a side-effect that's being relied upon in the api tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472458] [NEW] Arista ML2 VLAN driver should ignore non-VLAN network types

2015-07-07 Thread Sukhdev Kapur
Public bug reported:

Arista ML2 VLAN driver should process only VLAN based networks. Any
other network type (e.g. vxlan) should be ignored.
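
A minimal sketch of the kind of guard implied, following the ML2 mechanism
driver API (hypothetical code, not the actual Arista driver):

    class AristaVlanGuardSketch(object):
        @staticmethod
        def _vlan_segments(context):
            # ML2 segment dicts carry 'network_type', 'physical_network', ...
            return [seg for seg in (context.network_segments or [])
                    if seg.get('network_type') == 'vlan']

        def create_network_postcommit(self, context):
            if not self._vlan_segments(context):
                return  # ignore vxlan/gre/flat networks entirely
            # ... existing VLAN provisioning logic ...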

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Arista ML2 VLAN driver should ignore any other network types
+ Arista ML2 VLAN driver should ignore non-VLAN network types

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472458

Title:
  Arista ML2 VLAN driver should ignore non-VLAN network types

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Arista ML2 VLAN driver should process only VLAN based networks. Any
  other network type (e.g. vxlan) should be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472445] [NEW] Cannot override compute_driver config

2015-07-07 Thread Mathieu Gagné
Public bug reported:

All nova::compute::* manifests enforce a hardcoded value for the
compute_driver config and do not allow the user to override it.

A user should be able to override the compute_driver with their own. A
common use case is to use a local or derivative version of the compute
driver which adds features and/or bugfixes.

** Affects: puppet-nova
 Importance: Undecided
 Assignee: Mathieu Gagné (mgagne)
 Status: New

** Project changed: nova => puppet-nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1472445

Title:
  Cannot override compute_driver config

Status in Puppet module for Nova:
  New

Bug description:
  All nova::compute::* manifests enforce a hardcoded value for the
  compute_driver config and do not allow the user to override it.

  A user should be able to override the compute_driver with their own. A
  common use case is to use a local or derivative version of the compute
  driver which adds features and/or bugfixes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/puppet-nova/+bug/1472445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472452] [NEW] arp spoofing protection flow install failed

2015-07-07 Thread shihanzhang
Public bug reported:

Now the ovs-agent fails to install the arp spoofing protection flow for new
VMs, because it first installs the arp spoofing protection flow in function
'treat_devices_added_or_updated':

    def treat_devices_added_or_updated(self, devices, ovs_restarted):
        ...
        if self.prevent_arp_spoofing:
            self.setup_arp_spoofing_protection(self.int_br, port, details)

but then in function '_bind_devices' it clears all flows for this new port,
so the arp spoofing protection flow is also cleared:

    def _bind_devices(self, need_binding_ports):
        ...
        if cur_tag != lvm.vlan:
            self.int_br.set_db_attribute(
                "Port", port.port_name, "tag", lvm.vlan)
            if port.ofport != -1:
                # NOTE(yamamoto): Remove possible drop_port flow
                # installed by port_dead.
                self.int_br.delete_flows(in_port=port.ofport)

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

** Description changed:

  The ovs-agent currently fails to install the ARP spoofing protection flows
  for new VMs, because it first installs the ARP spoofing protection flow in
  the function 'treat_devices_added_or_updated':
- def treat_devices_added_or_updated(self, devices, ovs_restarted):
-     ...
-     ...
-     if 'port_id' in details:
-         LOG.info(_LI("Port %(device)s updated. Details: %(details)s"),
-                  {'device': device, 'details': details})
-         need_binding = self.treat_vif_port(port, details['port_id'],
-                                            details['network_id'],
-                                            details['network_type'],
-                                            details['physical_network'],
-                                            details['segmentation_id'],
-                                            details['admin_state_up'],
-                                            details['fixed_ips'],
-                                            details['device_owner'],
-                                            ovs_restarted)
-         if self.prevent_arp_spoofing:
-             self.setup_arp_spoofing_protection(self.int_br,
-                                                port, details)
+ def treat_devices_added_or_updated(self, devices, ovs_restarted):
+     ...
+     ...
+     if 'port_id' in details:
+         if self.prevent_arp_spoofing:
+             self.setup_arp_spoofing_protection(self.int_br,
+                                                port, details)

  but then, in the function '_bind_devices', it clears all flows for this
  new port, so the ARP spoofing protection flow is also removed

- def _bind_devices(self, need_binding_ports):
-     ...
-
-     if cur_tag != lvm.vlan:
-         self.int_br.set_db_attribute(
-             "Port", port.port_name, "tag", lvm.vlan)
-         if port.ofport != -1:
-             # NOTE(yamamoto): Remove possible drop_port flow
-             # installed by port_dead.
-             self.int_br.delete_flows(in_port=port.ofport)
+ def _bind_devices(self, need_binding_ports):
+     ...
+
+     if cur_tag != lvm.vlan:
+         self.int_br.set_db_attribute(
+             "Port", port.port_name, "tag", lvm.vlan)
+         if port.ofport != -1:
+             # NOTE(yamamoto): Remove possible drop_port flow
+             # installed by port_dead.
+             self.int_br.delete_flows(in_port=port.ofport)

** Description changed:

  The ovs-agent currently fails to install the ARP spoofing protection flows
  for new VMs, because it first installs the ARP spoofing protection flow in
  the function 'treat_devices_added_or_updated':
  def treat_devices_added_or_updated(self, devices, ovs_restarted):
      ...
      ...
-     if 'port_id' in details:
-         if self.prevent_arp_spoofing:
-             self.setup_arp_spoofing_protection(self.int_br,
-                                                port, details)
+
+     if self.prevent_arp_spoofing:
+         self.setup_arp_spoofing_protection(self.int_br, port, details)

  but then, in the function '_bind_devices', it clears all flows for this
  new port, so the ARP spoofing protection flow is also removed

  def _bind_devices(self, need_binding_ports):
      ...
  
  

[Yahoo-eng-team] [Bug 1472447] [NEW] Remove old trans filter

2015-07-07 Thread Thai Tran
Public bug reported:

Once we make the move to angular-gettext, we will no longer need the
trans filter. Since we are not using it anywhere at the moment, we can
safely remove it now and ensure that there will be no future conflict.

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472447

Title:
  Remove old trans filter

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Once we make the move to angular-gettext, we will no longer need the
  trans filter. Since we are not using it anywhere at the moment, we can
  safely remove it now and ensure that there will be no future conflict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472449] [NEW] download error when the image locations is blank

2015-07-07 Thread Long Quan Sha
Public bug reported:


When the image's locations list is empty, downloading the image produces a
Python traceback, and the error message does not describe the actual problem.

[root@vm134 pe]# glance image-show 9be94a27-367f-4a26-ae7a-045db3cb7332
+--+--+
| Property | Value|
+--+--+
| checksum | None |
| container_format | None |
| created_at   | 2015-07-02T09:09:22Z |
| disk_format  | None |
| id   | 9be94a27-367f-4a26-ae7a-045db3cb7332 |
| locations| []   |
| min_disk | 0|
| min_ram  | 0|
| name | test |
| owner| e4b36a5b654942328943a835339a6289 |
| protected| False|
| size | None |
| status   | queued   |
| tags | []   |
| updated_at   | 2015-07-02T09:09:22Z |
| virtual_size | None |
| visibility   | private  |
+--+--+
[root@vm134 pe]# glance image-download 9be94a27-367f-4a26-ae7a-045db3cb7332 --file myimg
iter() returned non-iterator of type 'NoneType'
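
A rough sketch of the kind of guard the report implies -- a hypothetical
helper, not the actual Glance code -- which would turn the confusing
TypeError into a clear message:

def checked_image_data(image, data_iter):
    """Return an iterator over image data, or fail with a clear message."""
    # A queued image with no locations has no stored bits yet, so there is
    # nothing to download; saying so beats "iter() returned non-iterator".
    if image.get('status') == 'queued' or not image.get('locations'):
        raise ValueError("Image %s has no image data to download"
                         % image['id'])
    return iter(data_iter)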

** Affects: glance
 Importance: Undecided
 Assignee: Long Quan Sha (shalq)
 Status: New

** Changed in: glance
 Assignee: (unassigned) = Long Quan Sha (shalq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1472449

Title:
  download error when the image locations is blank

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  
  When the image's locations list is empty, downloading the image produces a
  Python traceback, and the error message does not describe the actual problem.

  [root@vm134 pe]# glance image-show 9be94a27-367f-4a26-ae7a-045db3cb7332
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | None |
  | created_at   | 2015-07-02T09:09:22Z |
  | disk_format  | None |
  | id   | 9be94a27-367f-4a26-ae7a-045db3cb7332 |
  | locations| []   |
  | min_disk | 0|
  | min_ram  | 0|
  | name | test |
  | owner| e4b36a5b654942328943a835339a6289 |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2015-07-02T09:09:22Z |
  | virtual_size | None |
  | visibility   | private  |
  +--+--+
  [root@vm134 pe]# glance image-download 9be94a27-367f-4a26-ae7a-045db3cb7332 --file myimg
  iter() returned non-iterator of type 'NoneType'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1472449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449531] Re: remote stack fails on autoscaling - redelegation depth

2015-07-07 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1449531

Title:
  remote stack fails on autoscaling - redelegation depth

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  On an auto-scaling event (via heat-cfn-api), where the autoscaling group
  contains a resource of type OS::Heat::Stack, we get an error:
  HTTPInternalServerError: ERROR: Remote error: Forbidden Remaining
  redelegation depth of 0 out of allowed range of [0..3]
  We tried with the default config and also with the following
  keystone.conf:
  [trust]
  allow_redelegation = true
  max_redelegation_count = 3
  enabled = true

  Logs and an example template are attached; we scale out by posting to the
  scale-up URL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1449531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262914] Re: Unnecessary data copy during cold snapshot

2015-07-07 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262914

Title:
  Unnecessary data copy during cold snapshot

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  When creating a cold snapshot, LibvirtDriver.snapshot() creates a
  local copy of the VM image before uploading from that copy into a new
  image in Glance.

  In the case of snapshotting a local file-backed VM to Swift, that's one
  copy too many: if the target format matches the source format, the
  local file can be uploaded directly, halving the time it takes to
  create a snapshot. In the case of snapshotting an RBD-backed VM to
  RBD-backed Glance, that's two copies too many: a copy-on-write clone of
  the VM drive could obviate the need to copy any data at all.

  I think that instead of passing the target location as a temporary
  file path under snapshots_directory, LibvirtDriver.snapshot() should
  pass image metadata to Image.snapshot_extract() and let the image
  backend figure out and return the target location.
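
  A rough sketch of the interface this proposes -- the class and argument
  names below are invented for illustration and are not Nova's actual code:

    class FlatFileImage(object):
        """Stand-in for a file-backed image backend."""

        def __init__(self, path, disk_format):
            self.path = path
            self.disk_format = disk_format

        def snapshot_extract(self, image_meta, snapshots_dir):
            # If the requested snapshot format already matches the backing
            # file, upload it directly and skip the local copy entirely.
            if image_meta.get('disk_format') == self.disk_format:
                return self.path
            # Otherwise convert into a temporary file as today (not shown)
            # and return that path; an RBD backend could instead return a
            # copy-on-write clone and avoid copying any data at all.
            return '%s/%s.converted' % (snapshots_dir, image_meta['id'])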

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1262914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470047] Re: CLI fails to report an error after creating a snapshot from instance

2015-07-07 Thread wangxiyuan
The reason the image gets deleted is that Nova didn't pass the 'size'
parameter to Glance, so the request didn't go through Glance's quota check.
I therefore think this can be fixed in Nova.

** Project changed: glance = nova

** Changed in: nova
 Assignee: (unassigned) = wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470047

Title:
  CLI fails to report an error after creating a snapshot from instance

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  In Progress

Bug description:
  Description of problem:
  The CLI fails to report an error and gets stuck at Server snapshotting... 0
  when a user tries to save a snapshot of an instance while their quota is
  too small.

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.17.0-2.el7ost.noarch
  python-glance-2015.1.0-6.el7ost.noarch
  python-glance-store-0.4.0-1.el7ost.noarch
  openstack-glance-2015.1.0-6.el7ost.noarch
  openstack-nova-api-2015.1.0-13.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Edit /etc/glance/glance-api.conf and set user_storage_quota to a value
  too small for a snapshot created from the instance
  2. openstack-service restart glance
  3. Create a snapshot from instance via command line: 'nova image-create 
instanceName snapName --poll'

  Actual results:
  The CLI fails to report an error and gets stuck at Server snapshotting... 0

  Expected results:
  An ERROR should appear indicating that the quota is too small

  
  Additional info:
  log
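
  For reference, the kind of client-side poll loop that would surface this
  failure instead of hanging might look like the following -- a hypothetical
  sketch, not novaclient's actual code:

    import time

    def wait_for_snapshot(image_client, image_id, interval=2, timeout=600):
        """Poll a snapshot image; fail loudly if it disappears or errors."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                image = image_client.images.get(image_id)
            except Exception:
                # Glance deleted the image (e.g. user_storage_quota exceeded
                # during upload), so report it instead of polling forever.
                raise RuntimeError("Snapshot image %s was deleted" % image_id)
            if image.status == 'active':
                return image
            if image.status in ('killed', 'deleted'):
                raise RuntimeError("Snapshot failed: image is %s"
                                   % image.status)
            time.sleep(interval)
        raise RuntimeError("Timed out waiting for snapshot %s" % image_id)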

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472281] [NEW] router:external networks are not visible

2015-07-07 Thread Yves-Gwenael Bourhis
Public bug reported:

Since Icehouse, the common Neutron policy is to have external networks
with router:external = True and shared = False (they used to be shared
before).

The issue is that we cannot see the external network to obtain its ID
(and its ID is necessary, e.g. in the orchestration panel when launching
a Heat template).

Currently, a user needs to use the CLI to obtain the public network ID.
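
For reference, the lookup users currently fall back to looks roughly like
this with python-neutronclient -- the credentials and endpoint below are
placeholders:

from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Filter on the router:external attribute to find the public network's ID.
for net in neutron.list_networks(**{'router:external': True})['networks']:
    print("%s %s" % (net['id'], net['name']))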

** Affects: horizon
 Importance: Undecided
 Assignee: Yves-Gwenael Bourhis (yves-gwenael-bourhis)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472281

Title:
  router:external networks are not visible

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Since Icehouse, the common Neutron policy is to have external networks
  with router:external = True and shared = False (they used to be shared
  before).

  The issue is that we cannot see the external network to obtain its ID
  (and its ID is necessary, e.g. in the orchestration panel when launching
  a Heat template).

  Currently, a user needs to use the CLI to obtain the public network
  ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472285] [NEW] set default domain dynamically

2015-07-07 Thread Richard Megginson
Public bug reported:

In order to set the default_domain_id, Keystone must first be running,
and you must issue a domain create command to create the default domain
and retrieve the id of the domain in order to set it in keystone.conf,
then restart Keystone for the change to take effect.  This causes
problems for puppet based installers because puppet does not want to
restart Keystone more than once.
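
For illustration, the static workflow being described looks roughly like
this with python-keystoneclient v3 -- the token and endpoint are
placeholders, and this is not a proposed implementation:

from keystoneclient.v3 import client

keystone = client.Client(token='ADMIN_TOKEN',
                         endpoint='http://controller:35357/v3')

domain = keystone.domains.create(name='defdomain', enabled=True,
                                 description='my default domain')

# Today this id must then be written into keystone.conf ([identity]
# default_domain_id) by hand, followed by a Keystone restart -- the extra
# restart is exactly the step this report wants to eliminate.
print(domain.id)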

Instead, what would be preferable is the ability to set the
default_domain_id dynamically.

For example, when using `openstack`, add an option `--default-domain`::

$ openstack domain create defdomain --enable --description 'my
default domain' --default-domain

This would create the domain and make it the default domain, without
having to restart Keystone for the change in default_domain_id to take
effect.

It doesn't have to be implemented this way, just a suggestion, but the
method to set the default_domain_id should be exposed via the
`openstack` cli.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1472285

Title:
  set default domain dynamically

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In order to set the default_domain_id, Keystone must first be running,
  and you must issue a domain create command to create the default
  domain and retrieve the id of the domain in order to set it in
  keystone.conf, then restart Keystone for the change to take effect.
  This causes problems for puppet based installers because puppet does
  not want to restart Keystone more than once.

  Instead, what would be preferable is the ability to set the
  default_domain_id dynamically.

  For example, when using `openstack`, add an option `--default-
  domain`::

  $ openstack domain create defdomain --enable --description 'my
  default domain' --default-domain

  This would create the domain and make it the default domain, without
  having to restart Keystone for the change in default_domain_id to take
  effect.

  It doesn't have to be implemented this way, just a suggestion, but the
  method to set the default_domain_id should be exposed via the
  `openstack` cli.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1472285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp