[Yahoo-eng-team] [Bug 1485858] [NEW] Tenants cannot access or attach QoS policies even if shared, but can create policies

2015-08-17 Thread Miguel Angel Ajo
Public bug reported:

policy.json is not correctly configured. Per the specification it should:
* not allow tenants to create policies or rules by default.
* allow tenants to attach ports or networks to shared policies.

Please note that policy access is controlled by business logic:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/objects/qos/policy.py?h=feature/qos#n68
to guarantee that network/port attachment permissions are properly checked.
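For illustration, policy.json entries along these lines would express the two requirements above. The rule names here are illustrative, not necessarily the exact entries in the neutron policy file:

```json
{
    "create_policy": "rule:admin_only",
    "create_policy_bandwidth_limit_rule": "rule:admin_only",
    "update_network:qos_policy_id": "rule:admin_or_owner",
    "update_port:qos_policy_id": "rule:admin_or_owner"
}
```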

** Affects: neutron
 Importance: Undecided
 Assignee: Miguel Angel Ajo (mangelajo)
 Status: Confirmed


** Tags: qos

** Changed in: neutron
 Assignee: (unassigned) => Miguel Angel Ajo (mangelajo)

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485858

Title:
  Tenants cannot access or attach QoS policies even if shared, but can
  create policies

Status in neutron:
  Confirmed

Bug description:
  policy.json is not correctly configured. Per the specification it should:
  * not allow tenants to create policies or rules by default.
  * allow tenants to attach ports or networks to shared policies.

  Please note that policy access is controlled by business logic:
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/objects/qos/policy.py?h=feature/qos#n68
  to guarantee that network/port attachment permissions are properly checked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485712] Re: Can't set parent_id of project for hierarchical multi-tenancy

2015-08-17 Thread Dolph Mathews
This is truly by design. But by disallowing it today, we've given
ourselves the option to allow it in the future (we can't do the
opposite: take an API feature away). The consequences of a mutable
hierarchy are complicated and affect the rest of OpenStack (think
quotas, for example), and the risk of unintended consequences, bugs, and
security vulnerabilities is simply too great.

It's far too early in hierarchical multitenancy's lifecycle to consider
introducing all that extra complexity today. Perhaps once hierarchical
multitenancy is a mature feature and we have a strong grasp of the
consequences of a mutable hierarchy we could have that discussion again.

** Changed in: keystone
   Status: New => Won't Fix

** Changed in: keystone
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485712

Title:
  Can't set parent_id of project for hierarchical multi-tenancy

Status in Keystone:
  Won't Fix

Bug description:
  I understand from reading some of the specs for hierarchical
  multi-tenancy that you cannot change the parent_id of a project. But I
  would've hoped that if a project was created without a parent_id, you
  could later assign a parent; otherwise there's no clear migration path
  for projects that wish to utilize this feature, and they would have to
  destroy/recreate the project.

  When I tried it I got a 403, "Update of `parent_id` is not allowed.".
  Is this just in place until it can be supported later, or is this
  truly by design?

  thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485712/+subscriptions



[Yahoo-eng-team] [Bug 1485694] Re: Keystone raises an exception when it receives incorrectly encoded parameters

2015-08-17 Thread David Stanek
** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Confirmed

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485694

Title:
  Keystone raises an exception when it receives incorrectly encoded
  parameters

Status in Keystone:
  Confirmed
Status in keystone package in Ubuntu:
  Confirmed

Bug description:
  The following command will cause an exception:

  $ curl -g -i -X GET
  http://localhost:35357/v3/users?name=nonexit%E8nt -H "User-Agent:
  python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
  ADMIN"

  This command works as expected:

  $ curl -g -i -X GET
  http://localhost:35357/v3/users?name=nonexit%C3%A8nt -H "User-Agent:
  python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token:
  ADMIN"

  The exception occurs fairly deep in the WebOb library while it is
  trying to parse the parameters out of the URL.
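The difference between the two requests comes down to UTF-8 validity: %C3%A8 decodes to the two-byte UTF-8 sequence for "è", while the lone byte %E8 is not valid UTF-8 on its own. A stdlib sketch of the failure mode, assuming (as the report suggests) the percent-decoded bytes are ultimately interpreted as UTF-8:

```python
# %C3%A8 is well-formed UTF-8 for "è"; the single byte %E8 is not, so
# decoding it raises UnicodeDecodeError deep inside whatever parses the
# query string.
from urllib.parse import unquote_to_bytes

def decode_param(raw):
    # Percent-decode to raw bytes, then interpret the bytes as UTF-8.
    return unquote_to_bytes(raw).decode("utf-8")

print(decode_param("nonexit%C3%A8nt"))  # nonexitènt

try:
    decode_param("nonexit%E8nt")
except UnicodeDecodeError as exc:
    print("failed:", exc.reason)
```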

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485694/+subscriptions



[Yahoo-eng-team] [Bug 1485635] Re: Delete image should not progress as long as instances using it

2015-08-17 Thread Ian Cordasco
So most services in OpenStack (with the exception of Nova) don't tend to
know anything about the services consuming them. Cinder doesn't know if
an instance still depends on a volume, and we don't know if an instance
still relies on an image. In fact, I don't see a reason why we should be
aware of that. Perhaps what is actually desired is a way for an owner to
prevent an image from being deleted, but I'm not sure we want yet more
logic around that, and I'm not quite certain there's a great deal of
value in it.

** Changed in: glance
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485635

Title:
  Delete image should not progress as long as instances using it

Status in Glance:
  Opinion

Bug description:
  Currently it is possible to delete glance images even if instances are
  still referencing them.
  This is in particular an issue once you have deleted the image and try
  to resize/migrate instances to a new host.
  The new host can't download the image from glance, and the instance
  can't start anymore due to the missing qemu backing file. In the case
  of a resize, the action aborts because the coalescing of the qemu
  file, which happens during resize, cannot be completed without the
  qemu backing file.
  This issue usually requires manual intervention to reset the instance
  action and state, plus a manual search/copy of the base image file to
  the target host.
  The ideal state would be to prevent image-delete requests as long as
  active instances are still referencing the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485635/+subscriptions



[Yahoo-eng-team] [Bug 1478464] Re: Binding:host_id changes when creating vm

2015-08-17 Thread Kevin Benton
You can't control where a VM will be placed with the host_id parameter.
If you attach a port to a VM, nova updates the host_id field with the
hostname of the compute node.

If you are trying to boot the VM on a specific host, you need to do that
with nova scheduling.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478464

Title:
  Binding:host_id  changes when creating vm

Status in neutron:
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
   I used the parameter binding:host_id to create a port, trying to
allocate the port to a specified host. For example:
 neutron port-create e77c556b-7ec8-415a-8b92-98f2f4f3784f
--binding:host_id=dvr-compute1.novalocal
  Then I created a vm attached to the port created above:
 nova boot --flavor=1 --image=cirros-0.3.4-x86_64 --nic
port-id=2063c4db-7388-49dc-a2e0-9ffa579f870c testvm
  After the vm was created, the neutron port-show result showed that the
value of binding:host_id had changed.
  Furthermore, I changed --binding:host_id back to the original one. The
value of --binding:host_id changed again, but not to the host that the
vm (created before) belongs to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478464/+subscriptions



[Yahoo-eng-team] [Bug 1485819] [NEW] db retry mechanism should use new oslo_db exception checker arg

2015-08-17 Thread Kevin Benton
Public bug reported:

We currently have a decorator to convert db exceptions into retry
requests to work with the exceptions the oslo_db wrapper is expecting:
https://github.com/openstack/neutron/blob/cb9b44e57430745cd2cfa9ccd3d63fff322af707/neutron/db/api.py#L79

This is no longer necessary since the wrapper now accepts a function to 
determine if additional exception types should be caught:
https://github.com/openstack/oslo.db/commit/c7f938ceaa92292aec7df4454566c116c4f6ad8d
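Conceptually, the new oslo_db argument lets the caller pass a predicate that marks extra exception types as retriable, instead of converting them up front. The toy retry decorator below illustrates that shape; the names (retry_db_call, RetryRequest, DeadlockError) are illustrative, not the actual oslo_db or neutron API:

```python
# Toy sketch of a DB retry wrapper that consults a caller-supplied
# exception checker, in the spirit of the oslo_db change linked above.
import functools

class DeadlockError(Exception):
    """Stand-in for an error the wrapper retries on by default."""

class RetryRequest(Exception):
    """Stand-in for an extra exception type the caller wants retried."""

def retry_db_call(max_retries=3, exception_checker=lambda exc: False):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    # Retry on built-in retriable errors *or* anything
                    # the checker flags; otherwise re-raise immediately.
                    retriable = isinstance(exc, DeadlockError)
                    if not (retriable or exception_checker(exc)):
                        raise
                    if attempt == max_retries - 1:
                        raise
        return wrapper
    return decorator

calls = {"n": 0}

@retry_db_call(exception_checker=lambda e: isinstance(e, RetryRequest))
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RetryRequest()
    return "ok"

print(flaky())  # ok, after two retried RetryRequests
```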

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485819

Title:
  db retry mechanism should use new oslo_db exception checker arg

Status in neutron:
  New

Bug description:
  We currently have a decorator to convert db exceptions into retry
  requests to work with the exceptions the oslo_db wrapper is expecting:
  
https://github.com/openstack/neutron/blob/cb9b44e57430745cd2cfa9ccd3d63fff322af707/neutron/db/api.py#L79

  This is no longer necessary since the wrapper now accepts a function to 
determine if additional exception types should be caught:
  
https://github.com/openstack/oslo.db/commit/c7f938ceaa92292aec7df4454566c116c4f6ad8d

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485819/+subscriptions



[Yahoo-eng-team] [Bug 1485809] [NEW] create_router calls create_port within a DB transaction

2015-08-17 Thread Ivar Lazzaro
Public bug reported:

When a router is created, the L3 plugin manages the gateway interface
information within a DB transaction. This eventually leads to a call to
create_port on the core plugin, which can potentially be a slow operation
that causes the DB transaction to time out.
This behavior is correctly handled in the router_update case.
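A minimal sketch of the difference (illustrative only, not actual neutron code): in the problematic path the slow port creation runs while the transaction is still open; in the fixed shape it runs after the commit, as the update path already does.

```python
# Contrast holding a transaction open across a slow call vs. doing the
# slow work after commit. The "transaction" here just records events.
import contextlib

@contextlib.contextmanager
def db_transaction(log):
    log.append("BEGIN")
    yield
    log.append("COMMIT")

def slow_create_port(log):
    log.append("create_port (potentially slow)")

def create_router_bad(log):
    with db_transaction(log):
        log.append("insert router row")
        slow_create_port(log)   # slow call keeps the transaction open

def create_router_good(log):
    with db_transaction(log):
        log.append("insert router row")
    slow_create_port(log)       # slow call runs after commit

bad, good = [], []
create_router_bad(bad)
create_router_good(good)
print(good)  # ['BEGIN', 'insert router row', 'COMMIT', 'create_port (potentially slow)']
```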

** Affects: neutron
 Importance: Undecided
 Assignee: Ivar Lazzaro (mmaleckk)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ivar Lazzaro (mmaleckk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485809

Title:
  create_router calls create_port within a DB transaction

Status in neutron:
  New

Bug description:
  When a router is created, the L3 plugin manages the gateway interface
  information within a DB transaction. This eventually leads to a call to
  create_port on the core plugin, which can potentially be a slow
  operation that causes the DB transaction to time out.
  This behavior is correctly handled in the router_update case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485809/+subscriptions



[Yahoo-eng-team] [Bug 1470625] Re: Mechanism to register and run all external neutron alembic migrations automatically

2015-08-17 Thread Henry Gessau
** Summary changed:

- Mechanism to register and run all external alembic migrations automatically
+ Mechanism to register and run all external neutron alembic migrations 
automatically

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
 Assignee: (unassigned) => Henry Gessau (gessau)

** Changed in: networking-cisco
   Importance: Undecided => High

** Changed in: neutron
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470625

Title:
  Mechanism to register and run all external neutron alembic migrations
  automatically

Status in devstack:
  New
Status in networking-cisco:
  New
Status in networking-l2gw:
  In Progress
Status in neutron:
  Fix Committed

Bug description:
  For alembic migration branches that are out-of-tree, we need a
  mechanism whereby the external code can register its branches when it
  is installed, and then neutron will provide automation of running all
  installed external migration branches when neutron-db-manage is used
  for upgrading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1470625/+subscriptions



[Yahoo-eng-team] [Bug 1485795] [NEW] fwaas unit tests for the API extension are not runnable

2015-08-17 Thread Sean M. Collins
Public bug reported:

When running tox in the neutron-fwaas repo, none of the unit tests
under /tests/unit/extensions are run.

When forcing them to run by overriding
OS_TEST_PATH=./neutron_fwaas/tests/unit/extensions, the following error
occurs.

scollins@Sean-Collins-MBPr15 ~/src/openstack/neutron-fwaas ⚡ » tox -e py27
py27 develop-inst-nodeps: /Users/scollins/src/openstack/neutron-fwaas
py27 runtests: PYTHONHASHSEED='483791098'
py27 runtests: commands[0] | sh tools/pretty_tox.sh
running testr
Traceback (most recent call last):
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py",
 line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
  File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py",
 line 72, in _run_code
exec code in run_globals
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/subunit/run.py",
 line 149, in <module>
main()
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/subunit/run.py",
 line 145, in main
stdout=stdout, exit=False)
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/testtools/run.py",
 line 171, in __init__
self.parseArgs(argv)
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/unittest2/main.py",
 line 113, in parseArgs
self._do_discovery(argv[2:])
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/testtools/run.py",
 line 211, in _do_discovery
super(TestProgram, self)._do_discovery(argv, Loader=Loader)
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/unittest2/main.py",
 line 223, in _do_discovery
self.test = loader.discover(self.start, self.pattern, self.top)
  File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 364, in discover
raise ImportError('Start directory is not importable: %r' % start_dir)
ImportError: Start directory is not importable: 
'/Users/scollins/src/openstack/neutron-fwaas/neutron_fwaas/tests/unit/extensions'
Non-zero exit code (1) from test listing.
error: testr failed (3)
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./neutron_fwaas/tests/unit} --list
The test run didn't actually run any tests
ERROR: InvocationError: '/bin/sh tools/pretty_tox.sh '
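For what it's worth, "Start directory is not importable" from unittest/unittest2 discovery usually means the start directory is not a package, i.e. it lacks an __init__.py. A minimal stdlib reproduction of that error (this is an assumption about the cause here, not an inspection of the actual neutron-fwaas tree; the same check appears in the unittest2 loader in the traceback above):

```python
# Reproduce "Start directory is not importable" with a directory that
# has no __init__.py, then show that adding one fixes discovery.
import os
import tempfile
import unittest

with tempfile.TemporaryDirectory() as top:
    pkg = os.path.join(top, "extdemo")
    os.mkdir(pkg)
    try:
        unittest.TestLoader().discover(pkg, top_level_dir=top)
        outcome = "ok"
    except ImportError:
        outcome = "not importable"

    # Making the directory a package lets discovery proceed.
    open(os.path.join(pkg, "__init__.py"), "w").close()
    unittest.TestLoader().discover(pkg, top_level_dir=top)

print(outcome)
```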

** Affects: neutron
 Importance: Undecided
 Assignee: Sean M. Collins (scollins)
 Status: In Progress


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485795

Title:
  fwaas unit tests for the API extension are not runnable

Status in neutron:
  In Progress

Bug description:
  When running tox in the neutron-fwaas repo, none of the unit tests
  under /tests/unit/extensions are run.

  When forcing them to run by overriding
  OS_TEST_PATH=./neutron_fwaas/tests/unit/extensions, the following
  error occurs.

  scollins@Sean-Collins-MBPr15 ~/src/openstack/neutron-fwaas ⚡ » tox -e py27
  py27 develop-inst-nodeps: /Users/scollins/src/openstack/neutron-fwaas
  py27 runtests: PYTHONHASHSEED='483791098'
  py27 runtests: commands[0] | sh tools/pretty_tox.sh
  running testr
  Traceback (most recent call last):
File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py",
 line 162, in _run_module_as_main
  "__main__", fname, loader, pkg_name)
File 
"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py",
 line 72, in _run_code
  exec code in run_globals
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/subunit/run.py",
 line 149, in <module>
  main()
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/subunit/run.py",
 line 145, in main
  stdout=stdout, exit=False)
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/testtools/run.py",
 line 171, in __init__
  self.parseArgs(argv)
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/unittest2/main.py",
 line 113, in parseArgs
  self._do_discovery(argv[2:])
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/testtools/run.py",
 line 211, in _do_discovery
  super(TestProgram, self)._do_discovery(argv, Loader=Loader)
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-packages/unittest2/main.py",
 line 223, in _do_discovery
  self.test = loader.discover(self.start, self.pattern, self.top)
File 
"/Users/scollins/src/openstack/neutron-fwaas/.tox/py27/lib/python2.7/site-pa

[Yahoo-eng-team] [Bug 1485792] [NEW] Glance creates an image with an incorrect location

2015-08-17 Thread Kairat Kushaev
Public bug reported:

I tried to create images from a location written like an SCP path:

# glance image-create --name LINUX-64 --is-public True --disk-format iso
--container-format bare --progress --location
http://:~/ubuntu-14.04.2-server-amd64.iso

# glance image-create --name LINUX-64-2 --is-public True --disk-format
iso --container-format bare --progress --copy-from
http://:~/ubuntu-14.04.2-server-amd64.iso

The glance client accepted the wrong location, and as a result I got
images in Glance with Active status and 0 size.
The same behavior was noticed with aki and ari images.

The expectation is that the glance client will prevent creation of an
image from a malformed source.
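A sketch of the kind of client-side validation the report asks for: reject a location whose host part is empty before handing it to Glance. This is illustrative user-side code, not the actual glanceclient fix:

```python
# Reject --location values with no usable host: urlparse gives
# hostname None for URLs like "http://:~/file.iso".
from urllib.parse import urlparse

def looks_like_valid_location(url):
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname is not None

print(looks_like_valid_location("http://:~/ubuntu-14.04.2-server-amd64.iso"))  # False
print(looks_like_valid_location("http://example.com/ubuntu.iso"))              # True
```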

** Affects: glance
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Kairat Kushaev (kkushaev)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485792

Title:
  Glance creates an image with an incorrect location

Status in Glance:
  New

Bug description:
  I tried to create images from a location written like an SCP path:

  # glance image-create --name LINUX-64 --is-public True --disk-format
  iso --container-format bare --progress --location
  http://:~/ubuntu-14.04.2-server-amd64.iso

  # glance image-create --name LINUX-64-2 --is-public True --disk-format
  iso --container-format bare --progress --copy-from
  http://:~/ubuntu-14.04.2-server-amd64.iso

  The glance client accepted the wrong location, and as a result I got
  images in Glance with Active status and 0 size.
  The same behavior was noticed with aki and ari images.

  The expectation is that the glance client will prevent creation of an
  image from a malformed source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485792/+subscriptions



[Yahoo-eng-team] [Bug 1485788] [NEW] default review branch changed to feature/qos on neutron master branch

2015-08-17 Thread sean mooney
Public bug reported:

the default review branch for the neutron master was updated by the follow 
change when the 
feature/qos branch was merged  to master branch.
this bug track reverting the change  intrduced by 
I2d35d0659bd3f06c570ba99e8b8a41b620253e75
https://review.openstack.org/#/c/189627/

** Affects: neutron
 Importance: Undecided
 Assignee: sean mooney (sean-k-mooney)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485788

Title:
  default review branch changed to feature/qos on neutron master branch

Status in neutron:
  In Progress

Bug description:
  The default review branch for neutron master was updated by the
  following change when the feature/qos branch was merged to the master
  branch.
  This bug tracks reverting the change introduced by
  I2d35d0659bd3f06c570ba99e8b8a41b620253e75
  https://review.openstack.org/#/c/189627/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485788/+subscriptions



[Yahoo-eng-team] [Bug 1485778] [NEW] Angular tables should use hz-page-header directive

2015-08-17 Thread Cindy Lu
Public bug reported:

We should use this new directive:
https://review.openstack.org/#/c/201661/

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485778

Title:
  Angular tables should use hz-page-header directive

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We should use this new directive:
  https://review.openstack.org/#/c/201661/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485778/+subscriptions



[Yahoo-eng-team] [Bug 1485767] [NEW] deleted flavors no longer have extra_specs

2015-08-17 Thread Sean Wilcox
Public bug reported:

When a flavor with extra_specs is deleted, the extra_specs are marked as
deleted as well.  Instances built with the deleted flavor of course still
point to the deleted flavor row.  When looking up the extra_specs for
the flavor associated with the instance, none are returned because the
lookup does not take deleted rows into account.

For example (running on devstack):

$ nova flavor-create --is-public true bar 23 8192 23 6
$ nova flavor-key 23 set foobar=baz

$ nova flavor-show 23
+----------------------------+---------------------------------+
| Property                   | Value                           |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled   | False                           |
| OS-FLV-EXT-DATA:ephemeral  | 0                               |
| disk                       | 23                              |
| extra_specs                | {"zonecfg:brand": "solaris-kz"} |
| id                         | 23                              |
| name                       | bar                             |
| os-flavor-access:is_public | True                            |
| ram                        | 8192                            |
| rxtx_factor                | 1.0                             |
| swap                       |                                 |
| vcpus                      | 6                               |
+----------------------------+---------------------------------+

$ nova flavor-delete 23  (yes, there is a bug against this and it
probably shouldn't work... but it is good to show the problem)

$ nova flavor-show 23
+----------------------------+-------+
| Property                   | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 23    |
| extra_specs                | N/A   |   <--- extra_specs are not set.
| id                         | 23    |
| name                       | bar   |
| os-flavor-access:is_public | True  |
| ram                        | 8192  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 6     |
+----------------------------+-------+


If you remove the deletion markings in the instance_type_extra_specs
table for the rows, then the extra_specs will show up.  It's not so much
the nova flavor-show output above that creates a problem; it's when code
uses an instance object's instance_type_id with
nova.objects.flavor.Flavor.get_by_id() that the extra_specs are not
loaded, and therefore it appears there are no extra_specs for the flavor
that the instance was created with.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485767

Title:
  deleted flavors no longer have extra_specs

Status in OpenStack Compute (nova):
  New

Bug description:
  When a flavor with extra_specs is deleted, the extra_specs are marked
  as deleted as well.  Instances built with the deleted flavor of course
  still point to the deleted flavor row.  When looking up the
  extra_specs for the flavor associated with the instance, none are
  returned because the lookup does not take deleted rows into account.

  For example (running on devstack):

  $ nova flavor-create --is-public true bar 23 8192 23 6
  $ nova flavor-key 23 set foobar=baz

  $ nova flavor-show 23
  +----------------------------+---------------------------------+
  | Property                   | Value                           |
  +----------------------------+---------------------------------+
  | OS-FLV-DISABLED:disabled   | False                           |
  | OS-FLV-EXT-DATA:ephemeral  | 0                               |
  | disk                       | 23                              |
  | extra_specs                | {"zonecfg:brand": "solaris-kz"} |
  | id                         | 23                              |
  | name                       | bar                             |
  | os-flavor-access:is_public | True                            |
  | ram                        | 8192                            |
  | rxtx_factor                | 1.0                             |
  | swap                       |                                 |
  | vcpus                      | 6                               |
  +----------------------------+---------------------------------+

  $ nova flavor-delete 23  (yes, there is a bug against this and it
  probably shouldn't work... but it is good to show the problem)

  $ nova flavor-show 23
  +----------------------------+-------+
  | Property                   | Value |
  +----------------------------+-------+
  | OS-FLV-DISABLED:disabled   | False |
  | OS-FLV-EXT-DATA:ephemeral  | 0     |
  | disk                       | 23    |
  | extra_specs 

[Yahoo-eng-team] [Bug 1482744] Re: HealthcheckResult class source is dumped in glance logs on every dsvm test run

2015-08-17 Thread Matt Riedemann
** Changed in: oslo.middleware
   Status: In Progress => Fix Committed

** Changed in: glance
   Status: Confirmed => Invalid

** Changed in: oslo.middleware
   Importance: Undecided => Low

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482744

Title:
  HealthcheckResult class source is dumped in glance logs on every dsvm
  test run

Status in oslo.middleware:
  Fix Committed

Bug description:
  This class source is dumped in a devstack run in both g-api and g-reg
  logs:

  http://logs.openstack.org/00/209200/11/check/gate-neutron-lbaasv1
  -dsvm-api/e205cb7/logs/screen-g-api.txt.gz#_2015-08-07_18_47_55_313

  class HealthcheckResult(tuple):
  'HealthcheckResult(available, reason)'

  __slots__ = ()

  _fields = ('available', 'reason')

  def __new__(_cls, available, reason):
  'Create new instance of HealthcheckResult(available, reason)'
  return _tuple.__new__(_cls, (available, reason))

  @classmethod
  def _make(cls, iterable, new=tuple.__new__, len=len):
  'Make a new HealthcheckResult object from a sequence or iterable'
  result = new(cls, iterable)
  if len(result) != 2:
  raise TypeError('Expected 2 arguments, got %d' % len(result))
  return result

  def __repr__(self):
  'Return a nicely formatted representation string'
  return 'HealthcheckResult(available=%r, reason=%r)' % self

  def _asdict(self):
  'Return a new OrderedDict which maps field names to their values'
  return OrderedDict(zip(self._fields, self))

  def _replace(_self, **kwds):
  'Return a new HealthcheckResult object replacing specified fields 
with new values'
  result = _self._make(map(kwds.pop, ('available', 'reason'), _self))
  if kwds:
  raise ValueError('Got unexpected field names: %r' % kwds.keys())
  return result

  def __getnewargs__(self):
  'Return self as a plain tuple.  Used by copy and pickle.'
  return tuple(self)

  __dict__ = _property(_asdict)

  def __getstate__(self):
  'Exclude the OrderedDict from pickling'
  pass

  available = _property(_itemgetter(0), doc='Alias for field number
  0')

  reason = _property(_itemgetter(1), doc='Alias for field number 1')

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.middleware/+bug/1482744/+subscriptions



[Yahoo-eng-team] [Bug 1485764] [NEW] left hand nav border color does not work on big tables

2015-08-17 Thread Eric Peterson
Public bug reported:

When the data table displayed requires vertical scrolling, the left-hand
nav's background coloring eventually cuts off, because it is sized to
100% of the display area.

It looks like:
http://pasteboard.co/2Ni16M2C.png

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485764

Title:
  left hand nav border color does not work on big tables

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the data table displayed requires vertical scrolling, the
  left-hand nav's background coloring eventually cuts off, because it is
  sized to 100% of the display area.

  It looks like:
  http://pasteboard.co/2Ni16M2C.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485764/+subscriptions



[Yahoo-eng-team] [Bug 1485746] [NEW] Inconsistent karma conf for xstatic files

2015-08-17 Thread Thai Tran
Public bug reported:

The xstatic file lists in horizon/karma.conf and dashboard/karma.conf
are missing the smart-table reference, causing any tests that rely on
smart-table to fail on the dashboard side.

** Affects: horizon
 Importance: High
 Assignee: Thai Tran (tqtran)
 Status: In Progress


** Tags: low-hanging-fruit

** Summary changed:

- Inconsistent karma conf
+ Inconsistent karma conf for xstatic files

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485746

Title:
  Inconsistent karma conf for xstatic files

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The xstatic files in horizon/karma.conf and dashboard/karma.conf are
  missing the smart-table reference, causing any tests that rely on
  smart-table to fail on the dashboard side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485732] [NEW] subnet-update of allocation_pool does not prevent orphaning existing ports

2015-08-17 Thread John Kasperski
Public bug reported:

An error should be returned when subnet-update is used to modify the
allocation_pool such that existing neutron ports are no longer included.
This operation should not be permitted.

Currently the existing allocated neutron ports are not verified when a
subnet allocation pool is changed.  This can lead to inconsistent
accounting: for example, a subnet could have 50 allocated neutron ports
while its allocation pool range includes only 10 IP addresses, none of
which are allocated.

** Affects: neutron
 Importance: Undecided
 Assignee: John Kasperski (jckasper)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Kasperski (jckasper)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485732

Title:
  subnet-update of allocation_pool does not prevent orphaning existing
  ports

Status in neutron:
  New

Bug description:
  An error should be returned when subnet-update is used to modify the
  allocation_pool such that existing neutron ports are no longer
  included.   This operation should not be permitted.

  Currently the existing allocated neutron ports are not verified when a
  subnet allocation pool is changed.  This can lead to unusual
  statistics such as:   there could be 50 allocated neutron objects
  associated with a subnet, however the allocation pool range only
  includes 10 IP addresses and all 10 of those are not allocated.
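
  The check the report asks for can be sketched as: on pool update, collect
  every allocated IP that would fall outside the new ranges and reject the
  update if any exist. A minimal, self-contained sketch (function name and
  data shapes are illustrative, not neutron's actual IPAM code):

```python
import ipaddress

def pool_update_orphans(allocated_ips, new_pools):
    """Return the allocated IPs that would fall outside the new
    allocation pools.  A non-empty result means the subnet-update
    should be rejected, per this report.

    allocated_ips: iterable of IP strings already assigned to ports
    new_pools: iterable of (start, end) IP string pairs
    """
    ranges = [(ipaddress.ip_address(s), ipaddress.ip_address(e))
              for s, e in new_pools]
    return [ip_str for ip_str in allocated_ips
            if not any(start <= ipaddress.ip_address(ip_str) <= end
                       for start, end in ranges)]

# Shrinking the pool below an address already in use should be flagged:
print(pool_update_orphans(['10.0.0.50'], [('10.0.0.2', '10.0.0.10')]))
# A pool that still covers every allocation is fine:
print(pool_update_orphans(['10.0.0.5'], [('10.0.0.2', '10.0.0.10')]))
```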

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485718] [NEW] Devstack neutron intermittently fails on neutron-debug probe-create --device-owner compute

2015-08-17 Thread Ramy Asselin
Public bug reported:

I'm seeing devstack neutron fail to stack on our third-party CI system.
This is an intermittent failure.

2015-08-17 15:33:14.786 | + setup_neutron_debug
2015-08-17 15:33:14.786 | + [[ True == \T\r\u\e ]]
2015-08-17 15:33:14.787 | ++ _get_net_id public
2015-08-17 15:33:14.787 | ++ neutron --os-tenant-name admin --os-username admin 
--os-password secretadmin net-list
2015-08-17 15:33:14.788 | ++ grep public
2015-08-17 15:33:14.788 | ++ awk '{print $2}'
2015-08-17 15:33:16.062 | + public_net_id=3e00553c-83df-4ed0-bf35-a07cd64be710
2015-08-17 15:33:16.062 | + neutron-debug --os-tenant-name admin --os-username 
admin --os-password secretadmin probe-create --device-owner compute 
3e00553c-83df-4ed0-bf35-a07cd64be710
2015-08-17 15:33:17.219 | Option "verbose" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
2015-08-17 15:33:17.838 | 2015-08-17 15:33:17.837 8559 WARNING oslo_config.cfg 
[-] Option "use_namespaces" from group "DEFAULT" is deprecated for removal.  
Its value may be silently ignored in the future.
2015-08-17 15:33:18.472 | ++ _get_net_id private
2015-08-17 15:33:18.472 | ++ neutron --os-tenant-name admin --os-username admin 
--os-password secretadmin net-list
2015-08-17 15:33:18.473 | ++ grep private
2015-08-17 15:33:18.473 | ++ awk '{print $2}'
2015-08-17 15:33:20.081 | + private_net_id=faa3a43c-fc68-4da6-9337-f354ce184482
2015-08-17 15:33:20.081 | + neutron-debug --os-tenant-name admin --os-username 
admin --os-password secretadmin probe-create --device-owner compute 
faa3a43c-fc68-4da6-9337-f354ce184482
2015-08-17 15:33:21.225 | Option "verbose" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
2015-08-17 15:33:21.822 | 2015-08-17 15:33:21.821 8699 WARNING oslo_config.cfg 
[-] Option "use_namespaces" from group "DEFAULT" is deprecated for removal.  
Its value may be silently ignored in the future.
2015-08-17 15:33:22.381 | 2015-08-17 15:33:22.380 8699 ERROR 
neutron.agent.linux.utils [-] 
2015-08-17 15:33:22.381 | Command: ['ip', 'netns', 'exec', 
u'qprobe-b1c0edee-1e71-479b-94ed-fe96ad8b2041', 'ip', '-6', 'addr', 'add', 
'fd2d:8d14:3bb8:0:f816:3eff:feca:6779/64', 'scope', 'global', 'dev', 
u'tapb1c0edee-1e']
2015-08-17 15:33:22.381 | Exit code: 2
2015-08-17 15:33:22.381 | Stdin: 
2015-08-17 15:33:22.382 | Stdout: 
2015-08-17 15:33:22.382 | Stderr: RTNETLINK answers: File exists
2015-08-17 15:33:22.382 | 
2015-08-17 15:33:22.382 | 2015-08-17 15:33:22.380 8699 ERROR 
neutronclient.shell [-] 
2015-08-17 15:33:22.383 | Command: ['ip', 'netns', 'exec', 
u'qprobe-b1c0edee-1e71-479b-94ed-fe96ad8b2041', 'ip', '-6', 'addr', 'add', 
'fd2d:8d14:3bb8:0:f816:3eff:feca:6779/64', 'scope', 'global', 'dev', 
u'tapb1c0edee-1e']
2015-08-17 15:33:22.383 | Exit code: 2
2015-08-17 15:33:22.383 | Stdin: 
2015-08-17 15:33:22.383 | Stdout: 
2015-08-17 15:33:22.383 | Stderr: RTNETLINK answers: File exists
2015-08-17 15:33:22.383 | 

Full logs and configuration available here for 30 days:
http://15.126.198.151/59/213659/1/check/3par-fc-driver-master-client-pip-eos14-dsvm/cc7f606/logs/devstacklog.txt.gz#_2015-08-17_15_33_20_081

I will add additional log entries to this bug as I see them.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485718

Title:
  Devstack neutron intermittently fails on neutron-debug probe-create
  --device-owner compute

Status in neutron:
  New

Bug description:
  I'm seeing devstack neutron fail to stack on our third-party CI
  system.  This is an intermittent failure.

  2015-08-17 15:33:14.786 | + setup_neutron_debug
  2015-08-17 15:33:14.786 | + [[ True == \T\r\u\e ]]
  2015-08-17 15:33:14.787 | ++ _get_net_id public
  2015-08-17 15:33:14.787 | ++ neutron --os-tenant-name admin --os-username 
admin --os-password secretadmin net-list
  2015-08-17 15:33:14.788 | ++ grep public
  2015-08-17 15:33:14.788 | ++ awk '{print $2}'
  2015-08-17 15:33:16.062 | + public_net_id=3e00553c-83df-4ed0-bf35-a07cd64be710
  2015-08-17 15:33:16.062 | + neutron-debug --os-tenant-name admin 
--os-username admin --os-password secretadmin probe-create --device-owner 
compute 3e00553c-83df-4ed0-bf35-a07cd64be710
  2015-08-17 15:33:17.219 | Option "verbose" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
  2015-08-17 15:33:17.838 | 2015-08-17 15:33:17.837 8559 WARNING 
oslo_config.cfg [-] Option "use_namespaces" from group "DEFAULT" is deprecated 
for removal.  Its value may be silently ignored in the future.
  2015-08-17 15:33:18.472 | ++ _get_net_id private
  2015-08-17 15:33:18.472 | ++ neutron --os-tenant-name admin --os-username 
admin --os-password secretadmin net-list
  2015-08-17 15:33:18.473 | ++ grep private
  2015-08-17 15:33:18.473 | ++ awk '{print $2}'
  2015-08-17 15:3

[Yahoo-eng-team] [Bug 1485712] [NEW] Can't set parent_id of project for hierarchical multi-tenancy

2015-08-17 Thread Alex Ortiz
Public bug reported:

I understand from reading some of the specs for hierarchical multi-
tenancy that you cannot change the parent_id of a project. But I would
have hoped that if a project was created without a parent_id, you could
later assign a parent; otherwise there's no clear migration path for
projects that wish to utilize this feature, and they would have to be
destroyed and recreated.

When I tried it I got a 403, "Update of `parent_id` is not allowed.". Is
this just in place until it can be supported later, or is this truly by
design?

thanks.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: hierarchical-multitenancy

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485712

Title:
  Can't set parent_id of project for hierarchical multi-tenancy

Status in Keystone:
  New

Bug description:
  I understand from reading some of the specs for hierarchical multi-
  tenancy that you cannot change the parent_id of a project. But I
  would have hoped that if a project was created without a parent_id,
  you could later assign a parent; otherwise there's no clear migration
  path for projects that wish to utilize this feature, and they would
  have to be destroyed and recreated.

  When I tried it I got a 403, "Update of `parent_id` is not allowed.".
  Is this just in place until it can be supported later, or is this
  truly by design?

  thanks.
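
  The guard the reporter hits can be sketched as follows: any change to
  parent_id, including setting one where none existed, is refused. The
  function name and dict shapes below are illustrative stand-ins, not the
  actual keystone code.

```python
def update_project(current, updates):
    """Sketch of keystone's current behaviour as described above:
    reject any attempt to change parent_id, even None -> some parent.
    Illustrative only; not keystone's real API.
    """
    if 'parent_id' in updates and updates['parent_id'] != current.get('parent_id'):
        # This is the 403 the reporter sees:
        raise PermissionError('Update of `parent_id` is not allowed.')
    merged = dict(current)
    merged.update(updates)
    return merged

proj = {'id': 'p1', 'name': 'demo', 'parent_id': None}
try:
    update_project(proj, {'parent_id': 'p0'})  # setting a parent later...
except PermissionError as e:
    print(e)  # ...is refused, just like changing an existing one
```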

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485690] [NEW] L3 port failure when router gateway is set from router-create API

2015-08-17 Thread Kiran
Public bug reported:

This failure occurs when
[ml2]
mechanism_driver = cisco_apic_juno
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins = apic_l3_juno

Failure snippet: https://gist.github.com/a32c627869a10551880c.git

Steps to reproduce
------------------

$ export auth_token=$(keystone token-get | awk '/id/{print $4}' | head -n1)

$ curl -v -i -X POST -H "X-Auth-Token: $auth_token" -H "Content-
type:application/json" -d '{"router": {"external_gateway_info":
{"network_id": "EXT_NET_ID"}, "name": "router1", "admin_state_up":
false}}' http://10.101.1.40:9696/v2.0/routers

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485690

Title:
  L3 port failure when router gateway is set from router-create API

Status in neutron:
  New

Bug description:
  This failure occurs when
  [ml2]
  mechanism_driver = cisco_apic_juno
  [DEFAULT]
  core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
  service_plugins = apic_l3_juno

  Failure snippet: https://gist.github.com/a32c627869a10551880c.git

  Steps to reproduce
  ------------------

  $ export auth_token=$(keystone token-get | awk '/id/{print $4}' | head
  -n1)

  $ curl -v -i -X POST -H "X-Auth-Token: $auth_token" -H "Content-
  type:application/json" -d '{"router": {"external_gateway_info":
  {"network_id": "EXT_NET_ID"}, "name": "router1", "admin_state_up":
  false}}' http://10.101.1.40:9696/v2.0/routers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1435693] Re: A number of places where we LOG messages fail to use the _L{X} formatting

2015-08-17 Thread Dolph Mathews
I thought this was backportable since it's only adding translation
strings to stable/kilo (not modifying things that may have already been
translated).

** Changed in: keystone/kilo
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1435693

Title:
  A number of places where we LOG messages fail to use the _L{X}
  formatting

Status in Keystone:
  Fix Committed
Status in Keystone kilo series:
  Invalid

Bug description:
  These should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1435693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485687] [NEW] key install from source doc missing libffi-devel (fedora)

2015-08-17 Thread algerwang
Public bug reported:

In http://docs.openstack.org/developer/keystone/setup.html [Fedora 19+].

when I execute the command "python tools/install_venv.py", the following
error appears:

building '_cffi_backend' extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/c
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 
-mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 
-mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DUSE__THREAD 
-I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python2.7 -c 
c/_cffi_backend.c -o build/temp.linux-x86_64-2.7/c/_cffi_backend.o
c/_cffi_backend.c:13:17: fatal error: ffi.h: No such file or directory

error: command 'gcc' failed with exit status 1


Command "/usr/bin/python -c "import setuptools, 
tokenize;__file__='/tmp/pip-build-scQlSO/cffi/setup.py';exec(compile(getattr(tokenize,
 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
install --record /tmp/pip-BRdoTg-record/install-record.txt 
--single-version-externally-managed --compile" failed with error code 1 in 
/tmp/pip-build-scQlSO/cffi
Command "tools/with_venv.sh pip install --upgrade -r 
/root/keystone/requirements.txt -r /root/keystone/test-requirements.txt" failed.

** Affects: keystone
 Importance: Undecided
 Assignee: algerwang (wang-weijie)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => algerwang (wang-weijie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485687

Title:
  key install from source doc missing libffi-devel (fedora)

Status in Keystone:
  New

Bug description:
  In http://docs.openstack.org/developer/keystone/setup.html [Fedora
  19+].

  when I execute the command "python tools/install_venv.py", the
  following error appears:

  building '_cffi_backend' extension
  creating build/temp.linux-x86_64-2.7
  creating build/temp.linux-x86_64-2.7/c
  gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 
-mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 
-mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DUSE__THREAD 
-I/usr/include/ffi -I/usr/include/libffi -I/usr/include/python2.7 -c 
c/_cffi_backend.c -o build/temp.linux-x86_64-2.7/c/_cffi_backend.o
  c/_cffi_backend.c:13:17: fatal error: ffi.h: No such file or directory
  
  error: command 'gcc' failed with exit status 1

  
  Command "/usr/bin/python -c "import setuptools, 
tokenize;__file__='/tmp/pip-build-scQlSO/cffi/setup.py';exec(compile(getattr(tokenize,
 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" 
install --record /tmp/pip-BRdoTg-record/install-record.txt 
--single-version-externally-managed --compile" failed with error code 1 in 
/tmp/pip-build-scQlSO/cffi
  Command "tools/with_venv.sh pip install --upgrade -r 
/root/keystone/requirements.txt -r /root/keystone/test-requirements.txt" failed.
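
  As the bug title says, the setup doc is missing the libffi headers that
  cffi needs at build time. A hedged sketch of the missing step (package
  name taken from the title; yum was the Fedora 19-era tool, newer releases
  use dnf):

```shell
# Install the libffi headers that building the cffi C extension requires,
# then retry the venv setup.  (Use `dnf` on newer Fedora releases.)
sudo yum install -y libffi-devel
python tools/install_venv.py
```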

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483321] Re: Kilo nova-compute unable to register with Juno nova-conductor

2015-08-17 Thread Markus Zoeller (markus_z)
@Nick Jones:

It seems that this setup is not a supported model [1]:

"No, that's not valid behaviour. You need to upgrade the controller 
infrastructure (conductor, API nodes, etc) before any compute nodes."

I'll close this bug as "Invalid" and remove the assignee because of
this.

[1] http://lists.openstack.org/pipermail/openstack-
dev/2015-August/072237.html

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: pawan (pawansolanki) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483321

Title:
  Kilo nova-compute unable to register with Juno nova-conductor

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When deploying a compute node running Kilo against a control node
  running Juno, nova-compute fails to start with the following
  exceptions thrown by nova-conductor:

  2015-08-10 16:56:02.236 977 ERROR oslo.messaging.rpc.dispatcher 
[req-1d9be6ed-7b53-4dc8-bb3a-995a7ca7e359 ] Exception during message handling: 
Version 1.12 of Service is not supported
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 408, in 
object_class_action
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher objver)
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 288, in 
obj_class_from_name
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher 
supported=latest_ver)
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher 
IncompatibleObjectVersion: Version 1.12 of Service is not supported
  2015-08-10 16:56:02.236 977 TRACE oslo.messaging.rpc.dispatcher 
  2015-08-10 16:56:02.237 977 ERROR oslo.messaging._drivers.common 
[req-1d9be6ed-7b53-4dc8-bb3a-995a7ca7e359 ] Returning exception Version 1.12 of 
Service is not supported to caller
  2015-08-10 16:56:02.237 977 ERROR oslo.messaging._drivers.common 
[req-1d9be6ed-7b53-4dc8-bb3a-995a7ca7e359 ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 408, in 
object_class_action\nobjver)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/objects/base.py", line 288, in 
obj_class_from_name\nsupported=latest_ver)\n', 'IncompatibleObjectVersion: 
Version 1.12 of Service is not supported\n']
  2015-08-10 16:56:04.212 976 WARNING nova.context [-] Arguments dropped when 
creating context: {u'read_only': False, u'domain': None, u'show_deleted': 
False, u'user_identity': u'- - - - -', u'project_domain': None, 
u'resource_uuid': None, u'user_domain': None}

  2015-08-10 16:56:04.244 972 ERROR oslo.messaging.rpc.dispatcher 
[req-eb961fea-3b0a-4c34-b774-7c8f4312467f ] Exception during message handling: 
Version 1.16 of InstanceList is not supported
  2015-08-10 16:56:04.244 972 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-10 16:56:04.244 972 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
  2015-08-10 16:56:04.244 972 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-08-10 16:56:04.244 972 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
  2015-08-10 16:56:04.244 

[Yahoo-eng-team] [Bug 1477432] Re: Swift object Cross Site Scripting (XSS) attack

2015-08-17 Thread Jeremy Stanley
*** This bug is a duplicate of bug 1463698 ***
https://bugs.launchpad.net/bugs/1463698

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

** This bug has been marked a duplicate of bug 1463698
   XSS

** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
- 
- 
  The browser opens *.http objects instead of downloading them.
  
  XSS flaws occur when an application includes user-supplied data in a page 
sent to the browser without properly validating or escaping that content.
  Cross-site scripting attacks are a type of injection attack, in which 
malicious scripts are injected into the otherwise benign and trusted web sites. 
Cross-site scripting (XSS) attacks occur when an attacker uses a web 
application to send malicious code, generally in the form of a browser-side 
script, to a different end user.
  
  Affected URL:
  /horizon/project/containers/
  
  Fix: the browser should download, not open, *.http objects

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477432

Title:
  Swift object Cross Site Scripting (XSS) attack

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  The browser opens *.http objects instead of downloading them.

  XSS flaws occur when an application includes user-supplied data in a page 
sent to the browser without properly validating or escaping that content.
  Cross-site scripting attacks are a type of injection attack, in which 
malicious scripts are injected into the otherwise benign and trusted web sites. 
Cross-site scripting (XSS) attacks occur when an attacker uses a web 
application to send malicious code, generally in the form of a browser-side 
script, to a different end user.

  Affected URL:
  /horizon/project/containers/

  Fix: the browser should download, not open, *.http objects
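
  The suggested fix boils down to serving objects with headers that force a
  download rather than in-origin rendering. A minimal sketch of such a header
  builder (illustrative only, not horizon's actual implementation):

```python
def download_headers(object_name, content_type):
    """Build response headers so the browser downloads a Swift object
    instead of rendering it in the dashboard's origin, which is the
    fix this report proposes.  Illustrative sketch only.
    """
    # Content types a browser would render/execute in the horizon origin:
    risky = {'text/html', 'application/xhtml+xml', 'image/svg+xml'}
    if content_type in risky:
        content_type = 'application/octet-stream'
    # Strip characters that could break or smuggle extra headers:
    safe_name = object_name.replace('"', '').replace('\r', '').replace('\n', '')
    return {
        'Content-Type': content_type,
        'Content-Disposition': 'attachment; filename="%s"' % safe_name,
        # Keep browsers from sniffing the body back into HTML:
        'X-Content-Type-Options': 'nosniff',
    }

headers = download_headers('payload.http', 'text/html')
print(headers['Content-Disposition'])  # attachment; filename="payload.http"
```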

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478464] Re: Binding:host_id changes when creating vm

2015-08-17 Thread Jeremy Stanley
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478464

Title:
  Binding:host_id  changes when creating vm

Status in neutron:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
   I use the parameter binding:host_id when creating a port, trying to 
allocate the port to a specified host. For example:
 neutron port-create e77c556b-7ec8-415a-8b92-98f2f4f3784f  
--binding:host_id=dvr-compute1.novalocal
  Then I create a vm assigned to the port created above:
 nova boot --flavor=1 --image=cirros-0.3.4-x86_64 --nic 
port-id=2063c4db-7388-49dc-a2e0-9ffa579f870c testvm
  After the vm is created, the neutron port-show output shows that the value 
of binding:host_id has changed.
  Furthermore, I changed --binding:host_id back to the original value. The 
value changed again, but not to the host that the vm (created earlier) 
belongs to.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485198] Re: Cannot boot if bdm specifies to create a new volume from an image or a snapshot

2015-08-17 Thread Matt Riedemann
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485198

Title:
  Cannot boot if bdm specifies to create a new volume from an image or a
  snapshot

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Since that https://review.openstack.org/#/c/182994/ is merged, Cinder
  doesn't allow an empty volume name. But Nova specifies it for a new
  volume when the volume is created from a snapshot or a volume. Thus
  boot operation fails on compute node.

  Steps to reproduce:
  1 Boot an instance from a volume which should be autocreated from an image
  nova boot inst --block-device 
source=image,dest=volume,bootindex=0,size=1,id= --flavor m1.nano

  2 Wait a bit and look at the instance state
  nova show inst

  Expected result: the instance state is active.
  Actual result: the instance state is error, current stage is block device 
mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1485198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485635] [NEW] Delete image should not progress as long as instances using it

2015-08-17 Thread Bjoern Teipel
Public bug reported:

Currently it is possible to delete glance images even if instances are still 
referencing them.
This is in particular an issue once you have deleted the image and then try to 
resize/migrate instances to a new host.
The new host can't download the image from glance, and the instance can't start 
anymore due to the missing qemu backing file. In the case of a resize, the 
action aborts because the coalescing of the qemu file that happens during 
resize cannot be completed without the qemu backing file.
This issue usually requires manual intervention to reset the instance action 
and state and to manually search for and copy the base image file to the 
target host.
The ideal behaviour would be to reject image-delete requests as long as active 
instances are still referencing the image.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485635

Title:
  Delete image should not progress as long as instances using it

Status in Glance:
  New

Bug description:
  Currently it is possible to delete glance images even if instances are 
still referencing them.
  This is in particular an issue once you have deleted the image and then 
try to resize/migrate instances to a new host.
  The new host can't download the image from glance, and the instance can't 
start anymore due to the missing qemu backing file. In the case of a resize, 
the action aborts because the coalescing of the qemu file that happens 
during resize cannot be completed without the qemu backing file.
  This issue usually requires manual intervention to reset the instance 
action and state and to manually search for and copy the base image file to 
the target host.
  The ideal behaviour would be to reject image-delete requests as long as 
active instances are still referencing the image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485631] [NEW] CPU/RAM overcommit treated differently by "normal" and "NUMA topology" case

2015-08-17 Thread Chris Friesen
Public bug reported:

Currently in the NUMA topology case (so multi-node guest, dedicated
CPUs, hugepages in the guest, etc.) a single guest is not allowed to
consume more CPU/RAM than the host actually has in total regardless of
the specified overcommit ratio.  In other words, the overcommit ratio
only applies when the host resources are being used by multiple guests.
A given host resource can only be used once by any particular guest.

So as an example, if the host has 2 pCPUs in total for guests, a single
guest instance is not allowed to use more than 2CPUs but you might be
able to have 16 such instances running. (Assuming default CPU overcommit
ratio.)

However, this is not true when the NUMA topology is not involved.  In
that case a host with 2 pCPUs would allow a guest with 3 vCPUs to be
spawned.

We should pick one behaviour as "correct" and adjust the other one to
match.  Given that the NUMA topology case was discussed more recently,
it seems reasonable to select it as the "correct" behaviour.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485631

Title:
  CPU/RAM overcommit treated differently by "normal" and "NUMA topology"
  case

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently in the NUMA topology case (so multi-node guest, dedicated
  CPUs, hugepages in the guest, etc.) a single guest is not allowed to
  consume more CPU/RAM than the host actually has in total regardless of
  the specified overcommit ratio.  In other words, the overcommit ratio
  only applies when the host resources are being used by multiple
  guests.  A given host resource can only be used once by any particular
  guest.

  So as an example, if the host has 2 pCPUs in total for guests, a
  single guest instance is not allowed to use more than 2 vCPUs, but
  you might be able to have 16 such instances running (assuming the
  default CPU overcommit ratio).

  However, this is not true when the NUMA topology is not involved.  In
  that case a host with 2 pCPUs would allow a guest with 3 vCPUs to be
  spawned.

  We should pick one behaviour as "correct" and adjust the other one to
  match.  Given that the NUMA topology case was discussed more recently,
  it seems reasonable to select it as the "correct" behaviour.
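  For concreteness, the two behaviours can be sketched as follows
  (illustrative names, not Nova's actual code):

```python
CPU_ALLOCATION_RATIO = 16.0  # Nova's default cpu_allocation_ratio


def fits_normal(host_pcpus, guest_vcpus, ratio=CPU_ALLOCATION_RATIO):
    """Non-NUMA path: a single guest may exceed the host's physical
    CPUs as long as it stays under host_pcpus * ratio."""
    return guest_vcpus <= host_pcpus * ratio


def fits_numa(host_pcpus, guest_vcpus):
    """NUMA-topology path: a single guest is capped at the host's
    physical CPUs, regardless of the overcommit ratio."""
    return guest_vcpus <= host_pcpus


# Host with 2 pCPUs, guest asking for 3 vCPUs: the normal path accepts,
# the NUMA path rejects -- the inconsistency described in this report.
```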

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485631/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485627] [NEW] no error is output if you specify the incorrect url for upload to image location

2015-08-17 Thread Alexey Galkin
Public bug reported:

Steps to reproduce:

1. Log into OpenStack Horizon dashboard. 
2. Navigate to Project > Compute > Images.
3. Click on ‘Create Image’ button.
4. Set Image name = ‘New_vm_test’, provide Image source = ‘Image Location’,
provide an invalid link = 'http://www.google.com/any-invalid-image.iso', select
an image format and click on ‘Create Image’ button.

Expected Result:

Error message '404 Not Found: HTTP datastore could not find image at URI.
(HTTP 404)' is shown and the image is not created.

Actual Result:

No error is output and the image is created with status: queued.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: glance horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485627

Title:
  no error is output if you specify the incorrect url for upload to
  image location

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Log into OpenStack Horizon dashboard. 
  2. Navigate to Project > Compute > Images.
  3. Click on ‘Create Image’ button.
  4. Set Image name = ‘New_vm_test’, provide Image source = ‘Image Location’,
provide an invalid link = 'http://www.google.com/any-invalid-image.iso', select
an image format and click on ‘Create Image’ button.

  Expected Result:

  Error message '404 Not Found: HTTP datastore could not find image at
URI. (HTTP 404)' is shown and the image is not created.

  Actual Result:

  No error is output and the image is created with status: queued.
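  The expected behaviour amounts to probing the remote location before
  accepting it. A minimal sketch of such a check (hypothetical helper,
  not actual Horizon/Glance code):

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


def status_ok(status):
    """An image location is usable only for non-error HTTP statuses;
    404 and friends should surface an error instead of 'queued'."""
    return 200 <= status < 400


def check_image_location(url, timeout=5):
    """Issue a HEAD request to the image location and decide whether it
    can be used. Hypothetical helper, not shipped by Horizon or Glance."""
    try:
        resp = urlopen(Request(url, method="HEAD"), timeout=timeout)
        return status_ok(resp.getcode())
    except (HTTPError, URLError):
        return False
```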

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484742] Re: NUMATopologyFilter doesn't account for CPU/RAM overcommit

2015-08-17 Thread Chris Friesen
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484742

Title:
  NUMATopologyFilter doesn't account for CPU/RAM overcommit

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  There seems to be a bug in the NUMATopologyFilter where it doesn't
  properly account for cpu_allocation_ratio or ram_allocation_ratio.
  (Detected on stable/kilo, not sure if it applies to current master.)

  To reproduce:

  1) Create a flavor with a moderate number of CPUs (5, for example) and
  enable hugepages by setting   "hw:mem_page_size=2048" in the flavor
  extra specs.  Do not specify dedicated CPUs on the flavor.

  2) Ensure that the available compute nodes have fewer CPUs free than
  the number of CPUs in the flavor above.

  3) Ensure that the "cpu_allocation_ratio" is big enough that
  "num_free_cpus * cpu_allocation_ratio" is more than the number of CPUs
  in the flavor above.

  4) Enable the NUMATopologyFilter for the nova filter scheduler.

  5) Try to boot an instance with the specified flavor.

  This should pass, because we're not using dedicated CPUs and so the
  "cpu_allocation_ratio" should apply.  However, the NUMATopologyFilter
  returns 0 hosts.

  It seems like the NUMATopologyFilter is failing to properly account
  for the cpu_allocation_ratio when checking whether an instance can fit
  onto a given host.
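  The check the reporter expects can be sketched like this (illustrative
  names, not the actual NUMATopologyFilter code):

```python
def host_passes(num_free_cpus, flavor_vcpus, cpu_allocation_ratio,
                dedicated_cpus=False):
    """Sketch of the expected admission check: pinned (dedicated) CPUs
    cannot be overcommitted, but shared CPUs should honour the ratio."""
    if dedicated_cpus:
        return flavor_vcpus <= num_free_cpus
    return flavor_vcpus <= num_free_cpus * cpu_allocation_ratio


# Repro from the report: 5 shared vCPUs on a host with fewer free pCPUs
# should pass once the ratio is applied, yet the filter returns 0 hosts.
```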

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1193424] Re: Cleanup scrubber tries to decrypt a URI that is already decrypted

2015-08-17 Thread Flavio Percoco
Moved the bug to Invalid because the patch was abandoned and there hasn't
been any activity in over a year.

** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1193424

Title:
  Cleanup scrubber tries to decrypt a URI that is already decrypted

Status in Glance:
  Invalid

Bug description:
  Cleanup scrubber calls RegistryClient.get_images_detailed() to get a
  list of pending_delete images, which contains decrypted locations.
  However, the deletion method tries to decrypt the locations again,
  that causes the problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1193424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485198] Re: Cannot boot if bdm specifies to create a new volume from an image or a snapshot

2015-08-17 Thread Xing Yang
Fix proposed to Cinder:

https://review.openstack.org/#/c/213723/

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

** Changed in: cinder
Milestone: None => liberty-3

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485198

Title:
  Cannot boot if bdm specifies to create a new volume from an image or a
  snapshot

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Since that https://review.openstack.org/#/c/182994/ is merged, Cinder
  doesn't allow an empty volume name. But Nova specifies it for a new
  volume when the volume is created from a snapshot or a volume. Thus
  boot operation fails on compute node.

  Steps to reproduce:
  1. Boot an instance from a volume which should be auto-created from an image
  nova boot inst --block-device 
source=image,dest=volume,bootindex=0,size=1,id= --flavor m1.nano

  2. Wait a bit and look at the instance state
  nova show inst

  Expected result: the instance state is active.
  Actual result: the instance state is error, current stage is block device 
mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1485198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485606] Re: Unable to enable dhcp for networkid

2015-08-17 Thread Assaf Muller
This looks like more of a rabbit issue than a Neutron issue. It is most
likely deployment specific: Bad configuration etc.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485606

Title:
  Unable to enable dhcp for networkid

Status in neutron:
  Invalid

Bug description:
   Installed Mirantis OpenStack 6.0 release on Ubuntu. A Neutron VLAN
  network is created but the VMs created on the private network are not
  getting a DHCP IP address.

  Topology : 1 fuel node, 1 controller and 2 compute nodes.

  All eth0s are connected to L2 switch for PXE boot, 
  eth1s are connected to uplink switch for connectivity, 
  eth2 are connected to L2 switch for Management/Storage network and 
  eth3s are connected to L2 switch for Private network

  Verified the connections between Fuel, controller and compute nodes,
  everything is proper. Seeing "Unable to enable dhcp for "
  in /var/log/neutron/dhcp-agent.log.

  2015-07-20 10:05:23.255 10139 ERROR neutron.agent.dhcp_agent 
[req-fe1305ed-17da-4863-9b04-ca418495256d None] Unable to enable dhcp for 
0aff4aa3-e393-499d-b3bc-5c90dd3655b1.
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 129, in 
call_driver
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 204, in 
enable
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
interface_name = self.device_manager.setup(self.network)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 929, in 
setup
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent port = 
self.setup_dhcp_port(network)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 910, in 
setup_dhcp_port
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent dhcp_port = 
self.plugin.create_dhcp_port({'port': port_dict})
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 439, in 
create_dhcp_port
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
host=self.host))
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
method(*args, **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 161, in call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent context, 
msg, rpc_method='call', **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 187, in 
__call_rpc_method
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
func(context, msg['method'], **msg['args'])
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in 
call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
self.prepare().call(ctxt, method, **kwargs)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in 
call
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
retry=self.retry)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
timeout=timeout, retry=retry)
  2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
434, in send


  
  This is the server log output - seeing an AMQP connection issue

  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
(class_id, method_id), ConnectionError)
  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection 
closure with reason 'shutdown'
  2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit
  2015-07-20 10:01:30.215 9808 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 5.0 seconds ...
  2015-07-20 10:01:30.21

[Yahoo-eng-team] [Bug 1485606] [NEW] Unable to enable dhcp for networkid

2015-08-17 Thread karthi palaniappan
Public bug reported:

 Installed Mirantis OpenStack 6.0 release on Ubuntu. A Neutron VLAN
network is created but the VMs created on the private network are not
getting a DHCP IP address.

Topology : 1 fuel node, 1 controller and 2 compute nodes.

All eth0s are connected to L2 switch for PXE boot, 
eth1s are connected to uplink switch for connectivity, 
eth2 are connected to L2 switch for Management/Storage network and 
eth3s are connected to L2 switch for Private network

Verified the connections between Fuel, controller and compute nodes,
everything is proper. Seeing "Unable to enable dhcp for " in
/var/log/neutron/dhcp-agent.log.

2015-07-20 10:05:23.255 10139 ERROR neutron.agent.dhcp_agent 
[req-fe1305ed-17da-4863-9b04-ca418495256d None] Unable to enable dhcp for 
0aff4aa3-e393-499d-b3bc-5c90dd3655b1.
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent Traceback (most 
recent call last):
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 129, in 
call_driver
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
getattr(driver, action)(**action_kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 204, in 
enable
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent interface_name 
= self.device_manager.setup(self.network)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 929, in 
setup
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent port = 
self.setup_dhcp_port(network)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/dhcp.py", line 910, in 
setup_dhcp_port
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent dhcp_port = 
self.plugin.create_dhcp_port({'port': port_dict})
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/dhcp_agent.py", line 439, in 
create_dhcp_port
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
host=self.host))
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 34, in wrapper
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
method(*args, **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 161, in call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent context, msg, 
rpc_method='call', **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 187, in 
__call_rpc_method
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
func(context, msg['method'], **msg['args'])
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in 
call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent return 
self.prepare().call(ctxt, method, **kwargs)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in 
call
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
retry=self.retry)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent 
timeout=timeout, retry=retry)
2015-07-20 10:05:23.255 10139 TRACE neutron.agent.dhcp_agent   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
434, in send


This is the server log output - seeing an AMQP connection issue

2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
(class_id, method_id), ConnectionError)
2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit 
ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection 
closure with reason 'shutdown'
2015-07-20 10:01:30.204 9808 TRACE oslo.messaging._drivers.impl_rabbit
2015-07-20 10:01:30.215 9808 INFO oslo.messaging._drivers.impl_rabbit [-] 
Delaying reconnect for 5.0 seconds ...
2015-07-20 10:01:30.216 9808 ERROR oslo.messaging._drivers.impl_rabbit [-] 
Failed to consume message from queue: (0, 0): (320) CONNECTION_FORCED - broker 
forced connection closure with reason 'shutdown'
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit 
Traceback (most recent call last):
2015-07-20 10:01:30.216 9808 TRACE oslo.messaging._drivers.impl_rabbit   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 
681, in ensure
2015-07-20 10:01:30.216 9808 TRACE o

[Yahoo-eng-team] [Bug 1485604] [NEW] Logs must contain the request ID

2015-08-17 Thread Brant Knudson
Public bug reported:


The keystone log file doesn't have the request ID like the other projects. The 
log file should contain the request ID so that it's easier to debug.

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485604

Title:
  Logs must contain the request ID

Status in Keystone:
  In Progress

Bug description:
  
  The keystone log file doesn't have the request ID like the other projects. 
The log file should contain the request ID so that it's easier to debug.
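  For reference, oslo.log exposes the request ID through its context
  format string; a hedged example of the relevant option (the exact
  default format varies between releases):

```ini
[DEFAULT]
# Include %(request_id)s in context-aware log records. Hedged example;
# check the oslo.log defaults for your release.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
```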

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485450] Re: [Sahara] Can't save selected "Proxy gateway"

2015-08-17 Thread Vitaly Gridnev
** Project changed: horizon => mos

** Changed in: mos
 Assignee: (unassigned) => Vitaly Gridnev (vgridnev)

** Changed in: mos
Milestone: None => 7.0

** Changed in: mos
   Status: New => Confirmed

** Changed in: mos
   Importance: Undecided => Medium

** Changed in: mos
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485450

Title:
  [Sahara] Can't save selected "Proxy gateway"

Status in Mirantis OpenStack:
  Confirmed

Bug description:
  ENVIRONMENT:
ISO: 164

  STEPS TO REPRODUCE:
  1. Navigate to "Node group templates"
  2. Click on "Edit template"
  3. Select "Proxy gateway"
  4. Click "Save"
  5. Click on "Edit template" of edited template

  EXPECTED RESULT:
  "Proxy gateway" selected

  ACTUAL RESULT:
  "Proxy gateway" unselected

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1485450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485578] [NEW] It is not possible to select AZ for new Cinder volume during the VM creation

2015-08-17 Thread Timur Nurlygayanov
Public bug reported:

Steps To Reproduce:
1. Deploy OpenStack cluster with several Nova availability zones, for example, 
'nova1' and 'nova2' and with several Cinder availability zones, for example, 
'storage1' and 'storage2' (availability zones for Nova and Cinder should be 
different).
2. Login to Horizon dashboard and navigate to Project > Instances
3. Click on 'Launch Instance' button
4. Set all required parameters, select Nova AZ 'nova1' for new VM and select 
Instance Boot Source = "Boot from image (creates new volume)"
5. Click on 'Launch' button

Observed Result:
Instance will fail with "Failure prepping block device" error (please see 
attached screenshot horizon_az_bug.png)

Expected Result:
As a user I expect the Horizon UI to provide the ability to select the
availability zone for the new volume when I create a new volume and boot a VM
from it. We can't reuse the Nova AZ as the availability zone for the Cinder
volume because they are different availability zones (we can have, for
example, one Nova availability zone and many Cinder availability zones, or
one Cinder AZ and many Nova AZs - it depends on the user's needs).

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "horizon_az_bug.png"
   
https://bugs.launchpad.net/bugs/1485578/+attachment/4445910/+files/horizon_az_bug.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485578

Title:
  It is not possible to select AZ for new Cinder volume during the VM
  creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps To Reproduce:
  1. Deploy OpenStack cluster with several Nova availability zones, for 
example, 'nova1' and 'nova2' and with several Cinder availability zones, for 
example, 'storage1' and 'storage2' (availability zones for Nova and Cinder 
should be different).
  2. Login to Horizon dashboard and navigate to Project > Instances
  3. Click on 'Launch Instance' button
  4. Set all required parameters, select Nova AZ 'nova1' for new VM and select 
Instance Boot Source = "Boot from image (creates new volume)"
  5. Click on 'Launch' button

  Observed Result:
  Instance will fail with "Failure prepping block device" error (please see 
attached screenshot horizon_az_bug.png)

  Expected Result:
  As a user I expect the Horizon UI to provide the ability to select the
availability zone for the new volume when I create a new volume and boot a
VM from it. We can't reuse the Nova AZ as the availability zone for the
Cinder volume because they are different availability zones (we can have,
for example, one Nova availability zone and many Cinder availability zones,
or one Cinder AZ and many Nova AZs - it depends on the user's needs).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472999] Re: filter doesn't handle unicode characters

2015-08-17 Thread George Peristerakis
** Changed in: nova
 Assignee: (unassigned) => George Peristerakis (george-peristerakis)

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1472999

Title:
  filter doesn't handle unicode characters

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in python-glanceclient:
  New
Status in python-novaclient:
  In Progress

Bug description:
  1. Go to project/instances
  2. Insert 'ölk' into the filter field
  3. Apply the filter
  4. The following error appears:
  UnicodeEncodeError at /project/instances/

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Request Method:   GET
  Request URL:  http://localhost:8000/project/instances/
  Django Version:   1.8.2
  Exception Type:   UnicodeEncodeError
  Exception Value:  

  'ascii' codec can't encode character u'\xf6' in position 0: ordinal
  not in range(128)

  Exception Location:   /usr/lib64/python2.7/urllib.py in urlencode, line 1347
  Python Executable:/usr/bin/python
  Python Version:   2.7.10
  Python Path:  

  ['/home/mrunge/work/horizon',
   '/usr/lib64/python27.zip',
   '/usr/lib64/python2.7',
   '/usr/lib64/python2.7/plat-linux2',
   '/usr/lib64/python2.7/lib-tk',
   '/usr/lib64/python2.7/lib-old',
   '/usr/lib64/python2.7/lib-dynload',
   '/usr/lib64/python2.7/site-packages',
   '/usr/lib64/python2.7/site-packages/gtk-2.0',
   '/usr/lib/python2.7/site-packages',
   '/home/mrunge/work/horizon/openstack_dashboard']

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1472999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485553] [NEW] Does not report appropriate error if user ID is invalid

2015-08-17 Thread priynk
Public bug reported:

Here are the steps to reproduce:

1. Create a project
2. Create a user in the project
3. Create and assign a role to the project user
4. Try to revoke the role using a modified (truncated) project user ID

commands:

> openstack --insecure role list --user ad28fec1b20d4e54b004a9f0fadc7ab9 
> --project 627db9080f184b7d92408ff42c19a132
+----------------------------------+---------+--------------+------+
| ID                               | Name    | Project      | User |
+----------------------------------+---------+--------------+------+
| 37cb4f08d18f496ba3a451fa5a8bf17a | service | project_name | test |
+----------------------------------+---------+--------------+------+

> curl --insecure
https://keystone:35357/v3/projects/627db9080f184b7d92408ff42c19a132/users/ad28/roles/37cb4f08d18f496ba3a451fa5a8bf17a
-X DELETE -H "X-Auth-Token:" -H "Content-Type: application/json"

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1485553

Title:
  Does not report appropriate error if user ID is invalid

Status in Keystone:
  New

Bug description:
  Here are the steps to reproduce:

  1. Create a project
  2. Create a user in the project
  3. Create and assign a role to the project user
  4. Try to revoke the role using a modified (truncated) project user ID

  commands:

  > openstack --insecure role list --user ad28fec1b20d4e54b004a9f0fadc7ab9 
--project 627db9080f184b7d92408ff42c19a132
  +----------------------------------+---------+--------------+------+
  | ID                               | Name    | Project      | User |
  +----------------------------------+---------+--------------+------+
  | 37cb4f08d18f496ba3a451fa5a8bf17a | service | project_name | test |
  +----------------------------------+---------+--------------+------+

  > curl --insecure
  
https://keystone:35357/v3/projects/627db9080f184b7d92408ff42c19a132/users/ad28/roles/37cb4f08d18f496ba3a451fa5a8bf17a
  -X DELETE -H "X-Auth-Token:" -H "Content-Type:
  application/json"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1485553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485416] Re: Soft reboot doesn't work for bare metal.

2015-08-17 Thread Tony Breeds
This is pretty clearly operating as intended:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/ironic/driver.py#n892

There are changes in progress to support soft reboot via ACPI (or
similar)

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485416

Title:
  Soft reboot doesn't work for bare metal.

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When we use ironic, we can't reboot a bare metal instance with a graceful 
shutdown.
  We execute "nova reboot" command without "--hard" option. However, it 
performs a hard reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485537] [NEW] adopt oslo_config.fixture.Config

2015-08-17 Thread Ihar Hrachyshka
Public bug reported:

Currently, the base test class tries to mock out oslo.config autodiscovery
to isolate tests from external configuration files, but does not do so
completely. E.g., the policy.d directory is still accessed by oslo.policy
code, leading to bugs like bug 1484553.

Adopting the fixture should isolate us from all configuration files that
are external to the unit test tree.

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485537

Title:
  adopt oslo_config.fixture.Config

Status in neutron:
  New

Bug description:
  Currently, the base test class tries to mock out oslo.config
  autodiscovery to isolate tests from external configuration files, but
  does not do so completely. E.g., the policy.d directory is still
  accessed by oslo.policy code, leading to bugs like bug 1484553.

  Adopting the fixture should isolate us from all configuration files
  that are external to the unit test tree.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485529] [NEW] The API for getting console connection info works only for RDP

2015-08-17 Thread Radoslav Gerganov
Public bug reported:

There is an API (os-console-auth-tokens) which returns the connection
info corresponding to a given console token.  However, this API works
only for RDP consoles:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/console_auth_tokens.py#L49

We need the same API for MKS consoles as well.  Also I don't see any
reason why we should check the console type at all.

** Affects: nova
 Importance: Medium
 Assignee: Radoslav Gerganov (rgerganov)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485529

Title:
  The API for getting console connection info works only for RDP

Status in OpenStack Compute (nova):
  New

Bug description:
  There is an API (os-console-auth-tokens) which returns the connection
  info corresponding to a given console token.  However, this API works
  only for RDP consoles:

  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/console_auth_tokens.py#L49

  We need the same API for MKS consoles as well.  Also I don't see any
  reason why we should check the console type at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485529/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485509] [NEW] Firewall doesn't work for instances with floating IPs in DVR mode

2015-08-17 Thread Sergey Kolekonov
Public bug reported:

FWaaS doesn't seem to be fully compatible with Neutron DVR at the
moment.

With a firewall created, firewall rules appear in the SNAT namespace on
the network node. That is fine while instances have no floating IPs
assigned. But when I assign a floating IP to an instance, the firewall
rules still exist only in the SNAT namespaces; they should also exist
on the compute node, so traffic simply bypasses the firewall rules in
that case.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485509

Title:
  Firewall doesn't work for instances with floating IPs in DVR mode

Status in neutron:
  New

Bug description:
  FWaaS doesn't seem to be fully compatible with Neutron DVR at the
  moment.

  With a firewall created, firewall rules appear in the SNAT namespace
  on the network node. That is fine while instances have no floating
  IPs assigned. But when I assign a floating IP to an instance, the
  firewall rules still exist only in the SNAT namespaces; they should
  also exist on the compute node, so traffic simply bypasses the
  firewall rules in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485476] [NEW] Glance returned 200 status code when we add element to nonexistent property

2015-08-17 Thread dshakhray
*** This bug is a duplicate of bug 1485478 ***
https://bugs.launchpad.net/bugs/1485478

Public bug reported:

ENVIRONMENT: devstack, Glance (master, 14.08.2015)

STEPS TO REPRODUCE:
We tried to add an element to a nonexistent property of an artifact by
sending the request:

curl -i -X PATCH \
  -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
  -H "Content-Type: application/json" \
  -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
  http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

EXPECTED RESULT:
status code 400 with an error description

ACTUAL RESULT:
HTTP/1.1 200 OK
Content-Length: 517
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
Date: Mon, 17 Aug 2015 06:55:46 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-14T13:42:13.00", "type_name":
"MyArtifact", "updated_at": "2015-08-14T14:10:17.00", "visibility":
"public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142", "type_version":
"2.0", "state": "creating", "version": "12.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
null, "screenshots": [], "int_list": null, "name": "artifact-3"}
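
What the server arguably should do is validate each patch operation
against the artifact type's declared properties and reject unknown
paths with a 400. A minimal stand-in sketch of that check (the property
set and the apply_patch helper are illustrative, not Glance code):

```python
# Illustrative sketch, not Glance code: validate a JSON Patch "add"
# against the artifact type's declared properties and reject unknown
# paths instead of silently returning 200.
KNOWN_PROPERTIES = {"description", "tags", "prop1", "prop2",
                    "image_file", "screenshots", "int_list"}


def apply_patch(doc, patch):
    for op in patch:
        # First path segment names the target property, e.g. "/banana".
        prop = op["path"].lstrip("/").split("/", 1)[0]
        if op["op"] == "add" and prop not in KNOWN_PROPERTIES:
            # A real WSGI handler would raise HTTPBadRequest (400) here.
            raise ValueError("400: unknown property %r" % prop)
        doc[prop] = op["value"]
    return doc
```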

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts glance

** This bug has been marked a duplicate of bug 1485478
   Glance returned 200 status code when we add element to nonexistent property

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485476

Title:
  Glance returned 200 status code when we add element to nonexistent
  property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 14.08.2015)

  STEPS TO REPRODUCE:
  We tried to add an element to a nonexistent property of an artifact
  by sending the request:

  curl -i -X PATCH \
    -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
    -H "Content-Type: application/json" \
    -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
    http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

  EXPECTED RESULT:
  status code 400 with an error description

  ACTUAL RESULT:
  HTTP/1.1 200 OK
  Content-Length: 517
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
  Date: Mon, 17 Aug 2015 06:55:46 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-14T13:42:13.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-14T14:10:17.00",
  "visibility": "public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142",
  "type_version": "2.0", "state": "creating", "version": "12.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "artifact-3"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485474] [NEW] Glance returned 200 status code when we add element to nonexistent property

2015-08-17 Thread dshakhray
*** This bug is a duplicate of bug 1485478 ***
https://bugs.launchpad.net/bugs/1485478

Public bug reported:

ENVIRONMENT: devstack, Glance (master, 14.08.2015)

STEPS TO REPRODUCE:
We tried to add an element to a nonexistent property of an artifact by
sending the request:

curl -i -X PATCH \
  -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
  -H "Content-Type: application/json" \
  -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
  http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

EXPECTED RESULT:
status code 400 with an error description

ACTUAL RESULT:
HTTP/1.1 200 OK
Content-Length: 517
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
Date: Mon, 17 Aug 2015 06:55:46 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-14T13:42:13.00", "type_name":
"MyArtifact", "updated_at": "2015-08-14T14:10:17.00", "visibility":
"public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142", "type_version":
"2.0", "state": "creating", "version": "12.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
null, "screenshots": [], "int_list": null, "name": "artifact-3"}

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts glance

** This bug has been marked a duplicate of bug 1485476
   Glance returned 200 status code when we add element to nonexistent property

** This bug is no longer a duplicate of bug 1485476
   Glance returned 200 status code when we add element to nonexistent property
** This bug has been marked a duplicate of bug 1485478
   Glance returned 200 status code when we add element to nonexistent property

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485474

Title:
  Glance returned 200 status code when we add element to nonexistent
  property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 14.08.2015)

  STEPS TO REPRODUCE:
  We tried to add an element to a nonexistent property of an artifact
  by sending the request:

  curl -i -X PATCH \
    -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
    -H "Content-Type: application/json" \
    -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
    http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

  EXPECTED RESULT:
  status code 400 with an error description

  ACTUAL RESULT:
  HTTP/1.1 200 OK
  Content-Length: 517
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
  Date: Mon, 17 Aug 2015 06:55:46 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-14T13:42:13.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-14T14:10:17.00",
  "visibility": "public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142",
  "type_version": "2.0", "state": "creating", "version": "12.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "artifact-3"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485474/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485475] [NEW] Glance returned 200 status code when we add element to nonexistent property

2015-08-17 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 14.08.2015)

STEPS TO REPRODUCE:
We tried to add an element to a nonexistent property of an artifact by
sending the request:

curl -i -X PATCH \
  -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
  -H "Content-Type: application/json" \
  -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
  http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

EXPECTED RESULT:
status code 400 with an error description

ACTUAL RESULT:
HTTP/1.1 200 OK
Content-Length: 517
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
Date: Mon, 17 Aug 2015 06:55:46 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-14T13:42:13.00", "type_name":
"MyArtifact", "updated_at": "2015-08-14T14:10:17.00", "visibility":
"public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142", "type_version":
"2.0", "state": "creating", "version": "12.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
null, "screenshots": [], "int_list": null, "name": "artifact-3"}

** Affects: glance
 Importance: Undecided
 Status: Invalid


** Tags: artifacts glance

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485475

Title:
  Glance returned 200 status code when we add element to nonexistent
  property

Status in Glance:
  Invalid

Bug description:
  ENVIRONMENT: devstack, Glance (master, 14.08.2015)

  STEPS TO REPRODUCE:
  We tried to add an element to a nonexistent property of an artifact
  by sending the request:

  curl -i -X PATCH \
    -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
    -H "Content-Type: application/json" \
    -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
    http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

  EXPECTED RESULT:
  status code 400 with an error description

  ACTUAL RESULT:
  HTTP/1.1 200 OK
  Content-Length: 517
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
  Date: Mon, 17 Aug 2015 06:55:46 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-14T13:42:13.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-14T14:10:17.00",
  "visibility": "public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142",
  "type_version": "2.0", "state": "creating", "version": "12.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "artifact-3"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485478] [NEW] Glance returned 200 status code when we add element to nonexistent property

2015-08-17 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 14.08.2015)

STEPS TO REPRODUCE:
We tried to add an element to a nonexistent property of an artifact by
sending the request:

curl -i -X PATCH \
  -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
  -H "Content-Type: application/json" \
  -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
  http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

EXPECTED RESULT:
status code 400 with an error description

ACTUAL RESULT:
HTTP/1.1 200 OK
Content-Length: 517
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
Date: Mon, 17 Aug 2015 06:55:46 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-14T13:42:13.00", "type_name":
"MyArtifact", "updated_at": "2015-08-14T14:10:17.00", "visibility":
"public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142", "type_version":
"2.0", "state": "creating", "version": "12.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
null, "screenshots": [], "int_list": null, "name": "artifact-3"}

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485478

Title:
  Glance returned 200 status code when we add element to nonexistent
  property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 14.08.2015)

  STEPS TO REPRODUCE:
  We tried to add an element to a nonexistent property of an artifact
  by sending the request:

  curl -i -X PATCH \
    -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" \
    -H "Content-Type: application/json" \
    -d '[{"op": "add", "path": "/banana", "value": "minion_Stuart"}]' \
    http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142

  EXPECTED RESULT:
  status code 400 with an error description

  ACTUAL RESULT:
  HTTP/1.1 200 OK
  Content-Length: 517
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-req-efd101f6-7dd4-4a97-95f6-d647a7e21cb7
  Date: Mon, 17 Aug 2015 06:55:46 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-14T13:42:13.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-14T14:10:17.00",
  "visibility": "public", "id": "4d1e26e5-87b7-49b4-99bd-0e6232a80142",
  "type_version": "2.0", "state": "creating", "version": "12.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "b230d6ea1098462bb98d993b9cf386c0", "image_file": null, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "artifact-3"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1485478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484343] Re: Multiple records for same user and resource in quota_usage table

2015-08-17 Thread wanghao
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484343

Title:
  Multiple records for same user and resource in quota_usage table

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Currently, the quota usage table doesn't contain UniqueConstraint
  policy. Although it has locks when writing data, but under extreme
  circumstance, for example: launching over 100 instances at the same
  time for a new tenant(user). In this condition,the quota usage table
  for this user is empty, and there might be a chance that multiple
  record are recorded for same resource and same user.

  Adding UniqueConstraint policy for this table can solve this problem.
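
  The effect of the proposed constraint can be sketched with sqlite3 as
  a stand-in backend; the column set below mirrors the quota usage
  table only loosely and this is not the actual migration:

```python
# Stand-in demonstration with sqlite3: a UNIQUE constraint on
# (project_id, user_id, resource) makes a second insert for the same
# user/resource fail instead of creating a duplicate usage row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE quota_usages (
        id INTEGER PRIMARY KEY,
        project_id TEXT,
        user_id TEXT,
        resource TEXT,
        in_use INTEGER,
        UNIQUE (project_id, user_id, resource)
    )
""")
conn.execute("INSERT INTO quota_usages (project_id, user_id, resource, in_use) "
             "VALUES ('p1', 'u1', 'instances', 1)")
try:
    # A racing second insert for the same user/resource is rejected.
    conn.execute("INSERT INTO quota_usages (project_id, user_id, resource, in_use) "
                 "VALUES ('p1', 'u1', 'instances', 1)")
    duplicated = True
except sqlite3.IntegrityError:
    duplicated = False
```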

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1484343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485450] [NEW] [Sahara] Can't save selected "Proxy gateway"

2015-08-17 Thread Evgeny Sikachev
Public bug reported:

ENVIRONMENT:
  ISO: 164

STEPS TO REPRODUCE:
1. Navigate to "Node group templates"
2. Click on "Edit template"
3. Select "Proxy gateway"
4. Click "Save"
5. Click "Edit template" again on the template you just edited

EXPECTED RESULT:
"Proxy gateway" selected

ACTUAL RESULT:
"Proxy gateway" unselected

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1485450

Title:
  [Sahara] Can't save selected "Proxy gateway"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  ENVIRONMENT:
ISO: 164

  STEPS TO REPRODUCE:
  1. Navigate to "Node group templates"
  2. Click on "Edit template"
  3. Select "Proxy gateway"
  4. Click "Save"
  5. Click "Edit template" again on the template you just edited

  EXPECTED RESULT:
  "Proxy gateway" selected

  ACTUAL RESULT:
  "Proxy gateway" unselected

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1485450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp