[Yahoo-eng-team] [Bug 1475091] Re: It's possible to create duplicate trusts

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/239114
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=59b09b50ff15df9975832dbfba42e0c984591e48
Submitter: Jenkins
Branch:master

commit 59b09b50ff15df9975832dbfba42e0c984591e48
Author: Kent Wang 
Date:   Fri Oct 23 05:58:13 2015 -0700

Add Trusts unique constraint to remove duplicates

Currently, multiple trusts can effectively exist with the same project,
trustor, trustee, expiry date and impersonation flag. The same combination
can be assigned to multiple trusts, whether the roles differ or not.

The patch fixes this issue by adding a unique constraint to the trusts
database model. If two requests create trusts with the same trustor,
trustee, project, expiry and impersonation, the second request raises an
exception reporting a conflict.

This helps identify specific trusts and improves the user experience.

Change-Id: I1a681b13cfbef40bf6c21271fb80966517fb1ec5
Closes-Bug: #1475091
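
For illustration only, a minimal SQLAlchemy sketch of what such a unique
constraint can look like; the table and column names below are assumptions
for the example, not the actual keystone model or migration:

import sqlalchemy as sql

metadata = sql.MetaData()

# Hypothetical trust table; keystone's real column names may differ.
trust = sql.Table(
    'trust', metadata,
    sql.Column('id', sql.String(64), primary_key=True),
    sql.Column('trustor_user_id', sql.String(64), nullable=False),
    sql.Column('trustee_user_id', sql.String(64), nullable=False),
    sql.Column('project_id', sql.String(64)),
    sql.Column('impersonation', sql.Boolean, nullable=False),
    sql.Column('expires_at', sql.DateTime),
    # The duplicate combination described above becomes a database-level
    # conflict instead of silently creating a second identical trust.
    sql.UniqueConstraint('trustor_user_id', 'trustee_user_id', 'project_id',
                         'impersonation', 'expires_at',
                         name='duplicate_trust_constraint'))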


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1475091

Title:
  It's possible to create duplicate trusts

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  A name field in the Keystone DB is needed to help identify trusts.

  Effectively, there can be multiple trusts for the same
  project/trustor/trustee combination, including the same expiry date and
  impersonation flag. The same combination can be assigned to multiple
  trusts, whether the roles differ or not.

  Having a name would help when implementing trust usage.

  A current use case is the Puppet Keystone module when creating the trust
  provider:

  When creating a resource, Puppet uses a name as the resource title, and
  that name must be unique in order to provide idempotency. The trust ID
  (in the Keystone DB) doesn't exist until the trust is created and
  therefore cannot be used as the title of a Puppet resource. Without a
  name, the Puppet provider has to construct one from the other fields,
  which doesn't guarantee uniqueness anyway. Worse, when fetching resources
  the provider has to fetch all the fields to identify the resource and
  take the first match if several are available.

  So far, most other Keystone database objects (tables) have a name, which
  Puppet has been able to use to identify resources.
  That is why it made more sense to file this request as a bug instead of a
  blueprint: the point is that a name has been missing from the start,
  rather than this being a request for an enhancement.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1475091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526175] [NEW] ha router schedule to dvr agent in compute node

2015-12-14 Thread zhang sheng
Public bug reported:

I used my company's environment to test the neutron DVR router.

At first, the environment used 2 network nodes to provide L3 HA routers,
with the following in neutron.conf:

l3_ha = True
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2

Then I changed neutron.conf to:

l3_ha = False
router_distributed = True
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2

and ran a DVR-mode l3-agent on the compute nodes. Now the strange thing
happened: all HA routers got bound to this compute node.
If I create a new HA router and use the "neutron l3-agent-list-hosting-router"
command to watch the binding:

root@controller:~# neutron l3-agent-list-hosting-router 73a5308f-dd1e-4c0e-8ccf-b9e4d2a82c5e
+--------------------------------------+----------+----------------+-------+----------+
| id                                   | host     | admin_state_up | alive | ha_state |
+--------------------------------------+----------+----------------+-------+----------+
| 0f3f65bd-9349-4f9a-af2c-7872a4fddd1f | network2 | True           | :-)   | standby  |
| b174f741-3a41-45ed-bae0-e00ef4c1b1f9 | network1 | True           | :-)   | standby  |
+--------------------------------------+----------+----------------+-------+----------+
root@controller:~# neutron l3-agent-list-hosting-router 73a5308f-dd1e-4c0e-8ccf-b9e4d2a82c5e
+--------------------------------------+----------+----------------+-------+----------+
| id                                   | host     | admin_state_up | alive | ha_state |
+--------------------------------------+----------+----------------+-------+----------+
| 95f0c274-95ec-44c4-a6e7-f7e6de4b6e25 | compute3 | True           | :-)   | standby  |
| 0f3f65bd-9349-4f9a-af2c-7872a4fddd1f | network2 | True           | :-)   | active   |
| b174f741-3a41-45ed-bae0-e00ef4c1b1f9 | network1 | True           | :-)   | standby  |
+--------------------------------------+----------+----------------+-------+----------+

The router first binds to network1 and network2, and then also binds to
compute3. My guess is that when the DVR-mode l3-agent starts sync_router,
neutron binds the HA router to compute3.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526175

Title:
  ha router schedule to dvr agent in compute node

Status in neutron:
  New

Bug description:
  I used my company's environment to test the neutron DVR router.

  At first, the environment used 2 network nodes to provide L3 HA routers,
  with the following in neutron.conf:

  l3_ha = True
  max_l3_agents_per_router = 3
  min_l3_agents_per_router = 2

  Then I changed neutron.conf to:

  l3_ha = False
  router_distributed = True
  max_l3_agents_per_router = 3
  min_l3_agents_per_router = 2

  and ran a DVR-mode l3-agent on the compute nodes. Now the strange thing
  happened: all HA routers got bound to this compute node.
  If I create a new HA router and use the "neutron l3-agent-list-hosting-router"
  command to watch the binding:

  root@controller:~# neutron l3-agent-list-hosting-router 73a5308f-dd1e-4c0e-8ccf-b9e4d2a82c5e
  +--------------------------------------+----------+----------------+-------+----------+
  | id                                   | host     | admin_state_up | alive | ha_state |
  +--------------------------------------+----------+----------------+-------+----------+
  | 0f3f65bd-9349-4f9a-af2c-7872a4fddd1f | network2 | True           | :-)   | standby  |
  | b174f741-3a41-45ed-bae0-e00ef4c1b1f9 | network1 | True           | :-)   | standby  |
  +--------------------------------------+----------+----------------+-------+----------+
  root@controller:~# neutron l3-agent-list-hosting-router 73a5308f-dd1e-4c0e-8ccf-b9e4d2a82c5e
  +--------------------------------------+----------+----------------+-------+----------+
  | id                                   | host     | admin_state_up | alive | ha_state |
  +--------------------------------------+----------+----------------+-------+----------+
  | 95f0c274-95ec-44c4-a6e7-f7e6de4b6e25 | compute3 | True           | :-)   | standby  |
  | 0f3f65bd-9349-4f9a-af2c-7872a4fddd1f | network2 | True           | :-)   | active   |
  | b174f741-3a41-45ed-bae0-e00ef4c1b1f9 | network1 | True           | :-)   | standby  |
  +--------------------------------------+----------+----------------+-------+----------+

  The router first binds to network1 and network2, and then also binds to
  compute3. My guess is that when the DVR-mode l3-agent starts sync_router,
  neutron binds the HA router to compute3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481666] Re: [LBaaS V2] neutron lbaas-healthmonitor-create/update needs to be validated

2015-12-14 Thread Reedip
Looks like this is already merged:
https://bugs.launchpad.net/neutron/+bug/1320111

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
 Assignee: Reedip (reedip-banerjee) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481666

Title:
  [LBaaS V2] neutron lbaas-healthmonitor-create/update needs to be
  validated

Status in neutron:
  Fix Released

Bug description:
  For the lbaasv2 command (neutron lbaas-healthmonitor-create), the
  "delay" option must be greater than or equal to "timeout".

  Since lbaasv2 does not perform this validation check, no error is raised.
  The (v1) lbaas command properly raises an exception.

  e.g.
  $ neutron lbaas-healthmonitor-create --delay 3 --max-retries 10 --timeout 5 --type HTTP --pool pool1
  Created a new healthmonitor:
  +----------------+------------------------------------------------+
  | Field          | Value                                          |
  +----------------+------------------------------------------------+
  | admin_state_up | True                                           |
  | delay          | 3                                              |
  | expected_codes | 200                                            |
  | http_method    | GET                                            |
  | id             | e6d5d998-cd50-4f97-9b41-3461fcb9c6fc           |
  | max_retries    | 10                                             |
  | pools          | {"id": "32f660f5-8e6c-4998-9f95-94f457ab858a"} |
  | tenant_id      | 06523097274b4bf2bae201f8a34c357f               |
  | timeout        | 5                                              |
  | type           | HTTP                                           |
  | url_path       | /                                              |
  +----------------+------------------------------------------------+
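
  For illustration, a minimal sketch (not the actual neutron-lbaas code path)
  of the missing check described above:

  def validate_health_monitor(delay, timeout):
      # delay is the interval between health probes; a probe must not be
      # allowed to run longer than that interval.
      if delay < timeout:
          raise ValueError('delay (%s) must be greater than or equal to '
                           'timeout (%s)' % (delay, timeout))

  validate_health_monitor(delay=3, timeout=5)  # raises ValueError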

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1481666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526162] [NEW] Add instance.save() when handling nova-compute failure during a soft reboot

2015-12-14 Thread Zhenyu Zheng
Public bug reported:

In patch https://review.openstack.org/#/c/219980 we added a few lines of
code to handle nova-compute failure during a soft reboot.

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n1059
(lines 1059 to 1067), but somehow we missed instance.save() at the end of this
logic, so the task state isn't saved and the logic doesn't work.
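
As a toy illustration (not Nova's actual code) of why the missing call
matters: assigning to a field only changes the object in memory, and nothing
reaches the database until save() runs.

class FakeInstance(object):
    """Stand-in for a Nova instance object, for illustration only."""
    def __init__(self):
        self.task_state = None
        self._db_row = {'task_state': None}

    def save(self):
        # In Nova this persists the dirty fields to the database.
        self._db_row['task_state'] = self.task_state


instance = FakeInstance()
instance.task_state = 'reboot_pending'
# Without instance.save(), the database row still holds the old task_state,
# so recovery logic that reads it later never sees the change.
instance.save()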

** Affects: nova
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu)

** Description changed:

  In patch https://review.openstack.org/#/c/219980 we added a few lines of
  code to handle nova-compute failure during a soft reboot.
+ 
+ 
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n1059 
to 1067
+ but somehow we missed instance.save() at the end of this logic, so the task 
state didn't
+ saved and the logic doesn't work.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526162

Title:
  Add instance.save() when handling nova-compute failure during a soft
  reboot

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In patch https://review.openstack.org/#/c/219980 we added a few lines
  of code to handle nova-compute failure during a soft reboot.

  
  http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n1059
  (lines 1059 to 1067), but somehow we missed instance.save() at the end of this
  logic, so the task state isn't saved and the logic doesn't work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524849] Re: Cannot use trusts with fernet tokens

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257478
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c885eeed341fd2ebca8d7c0bec0c51b00df2f28e
Submitter: Jenkins
Branch:master

commit c885eeed341fd2ebca8d7c0bec0c51b00df2f28e
Author: Boris Bobrov 
Date:   Mon Dec 14 19:42:43 2015 +0300

Verify that user is trustee only on issuing token

get_token_data is used to gather various data for token. One of the
checks it does is verifying that the authenticated user is a trustee.
Before Fernet, it was used during token issuing.

Impersonation in trusts substitutes information about user in token,
so instead of trustee, trustor is stored in token.

With Fernet tokens, get_token_data is used during token validation.
In case of impersonation, user_id, stored in Fernet token, is id of
the trustor, but the check described needs this id to be id of the
trustee.

Move the check to happen only on token issuing.

Change-Id: I7c02cc6a1dbfe4e28d390960ac85d4574759b1a8
Closes-Bug: 1524849
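
A hedged sketch of the idea (not keystone's actual code): with impersonation
the token carries the trustor's user id, so a "caller must be the trustee"
check is only meaningful at issuance time, when the authenticated caller is
known.

def check_trustee_on_issue(trust, authenticated_user_id):
    # Hypothetical helper: enforce the trustee check only when the token is
    # issued. At validation time the token may carry the trustor's id
    # (impersonation), so this check must not run there.
    if authenticated_user_id != trust['trustee_user_id']:
        raise ValueError('authenticated user is not the trustee of this trust')


trust = {'trustee_user_id': 'alice', 'trustor_user_id': 'bob',
         'impersonation': True}
check_trustee_on_issue(trust, 'alice')  # fine at issue time
# Validating the resulting token later sees user_id == 'bob' (the trustor),
# which is why the same check cannot be applied during validation.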


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1524849

Title:
  Cannot use trusts with fernet tokens

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Master, devstack (installed today). 
  1. Enable fernet tokens in Keystone
  2. Add the following lib to glance/common/ folder:
  http://paste.openstack.org/show/481480/
  3. Replace upload method in glance/api/v2/image_data.py with the following:
  http://paste.openstack.org/show/481489/
  NOTE: this is just example code to demonstrate that fernet tokens don't
  work well with trusts.
  4. Restart glance
  5. Try to upload any image.
  You will get the following error when deleting the trust:
  http://paste.openstack.org/show/481493/
  When you try to upload a big image that takes more than an hour (or if you
  reduce the token expiration), you will get the following:
  http://paste.openstack.org/show/481492/
  Apparently, the refreshed token is rejected by keystonemiddleware.

  I faced this issue when implementing trusts for Glance, but it seems that
  Heat and other services have the same trouble.
  UUID tokens work as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1524849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526150] [NEW] Use imageutils from oslo_utils

2015-12-14 Thread Abhishek Kekane
Public bug reported:

The imageutils module was synced to oslo.utils (with unit tests) in version
3.1. So instead of using imageutils from the openstack.common package, use it
from oslo_utils.
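
For illustration, the change amounts to swapping the import (assuming
oslo.utils >= 3.1 is available in requirements):

# Old location, nova's copied-in incubator code:
#   from nova.openstack.common import imageutils
# New location:
from oslo_utils import imageutils

# Example use: parse `qemu-img info` output into a structured object.
info = imageutils.QemuImgInfo()
print(info.file_format)  # None when given no output to parse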

** Affects: nova
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526150

Title:
  Use imageutils from oslo_utils

Status in OpenStack Compute (nova):
  New

Bug description:
  The imageutils module was synced to oslo.utils (with unit tests) in
  version 3.1. So instead of using imageutils from the openstack.common
  package, use it from oslo_utils.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257253
Committed: 
https://git.openstack.org/cgit/openstack/glance_store/commit/?id=64c746df383004c130e256661c3472e44d85aada
Submitter: Jenkins
Branch:master

commit 64c746df383004c130e256661c3472e44d85aada
Author: Shuquan Huang 
Date:   Mon Dec 14 18:05:26 2015 +0800

Replace assertEqual(None, *) with assertIsNone in tests

Replace assertEqual(None, *) with assertIsNone in tests to get
clearer messages in case of failure.

Change-Id: If651a433db3e573c3b0b86b2a9f87ab84e65b96b
Closes-bug: #1280522


** Changed in: glance-store
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to get
  clearer messages in case of failure.
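
  For illustration, a minimal sketch of the difference:

  import unittest

  class ExampleTest(unittest.TestCase):
      def test_lookup_returns_none(self):
          result = None
          # Old style; on failure the message is a generic "None != <value>":
          #   self.assertEqual(None, result)
          # Preferred style, with a clearer failure message:
          self.assertIsNone(result)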

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500567] Re: port binding host_id does not update when removing openvswitch agent

2015-12-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500567

Title:
  port binding host_id does not update when removing openvswitch agent

Status in neutron:
  Expired

Bug description:
  SPECS:
  Openstack Juno
  neutron version 2.3.9
  nova version 2.20.0
  OS: CentOS 6.5 Final
  kernel: 2.6.32-504.1.3.el6.mos61.x86_64 (Mirantis Fuel 6.1 installed 
compute+ceph node)

  
  SCENARIO:
  I had a compute node that was also running a neutron-openvswitch-agent.
  This was 'node-12'.
  Before node-12's primary disk died, there was an instance hosted on the
  node, which was in the state 'SHUTDOWN'.
  I created node-15, which also runs the neutron-openvswitch-agent with
  nova-compute. I did not migrate the instance before performing a neutron
  agent-delete on node-12, so now there is metadata that looks like this:

  [root@node-14 ~]# neutron port-show c209538b-ecc1-4414-9f97-e0f6a5d08ecc
  +-----------------------+---------+
  | Field                 | Value   |
  +-----------------------+---------+
  | admin_state_up        | True    |
  | allowed_address_pairs |         |
  | binding:host_id       | node-12 |
  
  ACTION:
  The node-12 neutron agent is deleted using the command `neutron
  agent-delete 6bcadbe2-7631-41f5-9124-6fe75016217a`.

  EXPECTED:
  All neutron ports bound to that agent should be updated to use an
  alternative binding host_id, preferably the host currently running the VM.
  In my scenario, this would be node-15, NOT node-12.

  ACTUAL:
  The neutron ports maintained the same binding:host_id, which was node-12.


  
  ADDITIONAL INFORMATION:

  I was able to update the value using the following request:

  curl -X PUT -d '{"port":{"binding:host_id": "node-15.domain.com"}}' -H
  "X-Auth_token:f3f1c03239b246a8a7ffa9ca0eb323bf" -H "Content-type:
  application/json"
  http://10.10.30.2:9696/v2.0/ports/f98fe798-d522-4b6c-b084-45094fdc5052.json

  However, I'm not sure if there are modifications to the openvswitch
  agent on node-15 that also need to be performed.

  Also, since node-12 died before I could migrate the instances, and I
  attempted to power them on before I realized they needed migration, I was
  forced to update the instances table in the database and specify node-15
  as the new host.

  > update instances set task_state = NULL where task_state = 'powering-on';
  > update instances set host = 'node-15.domain.com' where host = 'node-12.domain.com';
  > update instances set node = 'node-15.domain.com' where node = 'node-12.domain.com';
  > update instances set launched_on = 'node-15.domain.com' where launched_on = 'node-12.domain.com';

  In my case, the workaround is to kick off a 'migrate', in which case the
  binding:host_id is updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507651] Re: MidoNet Neutron Plugin upgrade from kilo stable 2015.1.0 to kilo unstable 2015.1.1.2.0-1~rc0 (MNv5.0) not supported

2015-12-14 Thread YAMAMOTO Takashi
I added neutron as an affected project because I think it's better to fix
this in neutron.
See https://review.openstack.org/#/c/245657/ for discussion.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507651

Title:
  MidoNet Neutron Plugin upgrade from kilo stable 2015.1.0 to kilo
  unstable 2015.1.1.2.0-1~rc0 (MNv5.0) not supported

Status in networking-midonet:
  Fix Released
Status in neutron:
  New

Bug description:
  Newly supported features in the latest unstable version of the kilo
  plugin (2015.1.1.2.0-1~rc0), such as port_security, cause a backwards
  incompatibility with the stable version of the kilo plugin (2015.1.0).

  E.g. neutron-server logs:

  2015-10-19 11:23:23.722 29190 ERROR neutron.api.v2.resource [req-007bd588-78a5-4cdd-a893-7522c1820edc ] index failed
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in resource
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 319, in index
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     return self._items(request, True, parent_id)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 249, in _items
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     obj_list = obj_getter(request.context, **kwargs)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 1970, in get_ports
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     items = [self._make_port_dict(c, fields) for c in query]
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 936, in _make_port_dict
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     attributes.PORTS, res, port)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/common_db_mixin.py", line 162, in _apply_dict_extend_functions
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     func(*args)
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/portsecurity_db.py", line 31, in _extend_port_security_dict
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource     psec_value = db_data['port_security'][psec.PORTSECURITY]
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource TypeError: 'NoneType' object has no attribute '__getitem__'
  2015-10-19 11:23:23.722 29190 TRACE neutron.api.v2.resource
  2015-10-19 11:23:24.283 29190 ERROR oslo_messaging.rpc.dispatcher [req-21c014b0-c418-4ebe-822f-3789fc680af6 ] Exception during message handling: 'NoneType' object has no attribute '__getitem__'
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in get_active_networks_info
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     networks = self._get_active_networks(context, **kwargs)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 63, in _get_active_networks
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher     context, host)
  2015-10-19 11:23:24.283 29190 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py", lin

[Yahoo-eng-team] [Bug 1484745] Re: DB migration juno-> kilo fails: Can't create table nsxv_internal_networks

2015-12-14 Thread Nguyen Truong Son
It's my fault to set the schema to latin1, alter schema to utf8 make it
works.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484745

Title:
  DB migration juno-> kilo fails: Can't create table
  nsxv_internal_networks

Status in neutron:
  Invalid

Bug description:
  I get the following error when upgrading my Juno DB to Kilo:

  neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
      sys.exit(main())
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 238, in main
      CONF.command.func(config, CONF.command.name)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 106, in do_upgrade
      do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 72, in do_alembic_command
      getattr(alembic_command, cmd)(config, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 165, in upgrade
      script.run_env()
    File "/usr/lib/python2.7/dist-packages/alembic/script.py", line 382, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/usr/lib/python2.7/dist-packages/alembic/util.py", line 241, in load_python_file
      module = load_module_py(module_id, path)
    File "/usr/lib/python2.7/dist-packages/alembic/compat.py", line 79, in load_module_py
      mod = imp.load_source(module_id, path, fp)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 109, in <module>
      run_migrations_online()
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 100, in run_migrations_online
      context.run_migrations()
    File "<string>", line 7, in run_migrations
    File "/usr/lib/python2.7/dist-packages/alembic/environment.py", line 742, in run_migrations
      self.get_context().run_migrations(**kw)
    File "/usr/lib/python2.7/dist-packages/alembic/migration.py", line 305, in run_migrations
      step.migration_fn(**kw)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/4dbe243cd84d_nsxv.py", line 65, in upgrade
      sa.PrimaryKeyConstraint('network_purpose'))
    File "<string>", line 7, in create_table
    File "/usr/lib/python2.7/dist-packages/alembic/operations.py", line 936, in create_table
      self.impl.create_table(table)
    File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 182, in create_table
      self._exec(schema.CreateTable(table))
    File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 106, in _exec
      return conn.execute(construct, *multiparams, **params)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 729, in execute
      return meth(self, multiparams, params)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 69, in _execute_on_connection
      return connection._execute_ddl(self, multiparams, params)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 783, in _execute_ddl
      compiled
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
      context)
    File "/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/compat/handle_error.py", line 261, in _handle_dbapi_exception
      e, statement, parameters, cursor, context)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1155, in _handle_dbapi_exception
      util.raise_from_cause(newraise, exc_info)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
      reraise(type(exception), exception, tb=exc_tb)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
      context)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 436, in do_execute
      cursor.execute(statement, parameters)
    File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
      self.errorhandler(self, exc, value)
    File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
      raise errorclass, errorvalue
  sqlalchemy.exc.OperationalError: (OperationalError) (1005, "Can't create table 'neutron_rctest.nsxv_internal_networks' (errno: 150)") "\nCREATE TABLE nsxv_internal_net

[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252210
Committed: 
https://git.openstack.org/cgit/openstack/python-swiftclient/commit/?id=6bb97044c22b241c6b1e6d3e35df14131ca2547c
Submitter: Jenkins
Branch:master

commit 6bb97044c22b241c6b1e6d3e35df14131ca2547c
Author: shu-mutou 
Date:   Wed Dec 2 15:31:04 2015 +0900

Delete python bytecode before every test run

Because python creates pyc|pyo files and __pycache__
directories during tox runs, certain changes in the tree,
like deletes of files, or switching branches, can create
spurious errors.

Change-Id: Ibaac514521bab11bbf552e0310d1203230c0d984
Closes-Bug: #1368661


** Changed in: python-swiftclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.cache:
  Invalid
Status in oslo.concurrency:
  Invalid
Status in oslo.log:
  Invalid
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  In Progress
Status in python-cueclient:
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Committed
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  In Progress
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  In Progress
Status in OpenStack Object Storage (swift):
  New
Status in Trove:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  Because python creates pyc files during tox runs, certain changes in
  the tree, like deletes of files, or switching branches, can create
  spurious errors. This can be suppressed by PYTHONDONTWRITEBYTECODE=1
  in the tox.ini.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526140] [NEW] Should not use mutable default arguments in function definitions

2015-12-14 Thread Wang Bo
Public bug reported:

Refer to: http://docs.python-guide.org/en/latest/writing/gotchas/. 
We should not use mutable default arguments in function definitions.

So we need to remove the default arguments "[]" and "{}" from function
definitions. "[]" has been removed in patch
https://review.openstack.org/#/c/256931/2. We also need to fix the code
incorrectly using "{}".
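
As a short illustration of the gotcha (standard Python behaviour, not
Horizon-specific code):

def append_bad(item, bucket=[]):       # default list is created once, at def time
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):    # the usual idiom
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2] -- state leaks between calls
print(append_good(1), append_good(2))  # [1] [2]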

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

** Description changed:

- Refer to: http://docs.python-guide.org/en/latest/writing/gotchas/. we
- should not use mutable default arguments in function definitions.
+ Refer to: http://docs.python-guide.org/en/latest/writing/gotchas/. 
+ We should not use mutable default arguments in function definitions.
  
  So we need remove the default arguments "[]" and "{}" in function
  definitions. "[]" has been removed in patch:
- https://review.openstack.org/#/c/256931/2.  We need fix the left "{}"
- usage.
+ https://review.openstack.org/#/c/256931/2.  We also need fix the code
+ incorrectly using "{}".

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526140

Title:
  Should not use mutable default arguments in function definitions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Refer to: http://docs.python-guide.org/en/latest/writing/gotchas/. 
  We should not use mutable default arguments in function definitions.

  So we need to remove the default arguments "[]" and "{}" from function
  definitions. "[]" has been removed in patch
  https://review.openstack.org/#/c/256931/2. We also need to fix the code
  incorrectly using "{}".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1526140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523664] Re: Token operations fail when fernet key repository isn't writeable

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/256736
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=0aaa3ab1710c3bd9ca7800cc2156a483bd463a11
Submitter: Jenkins
Branch:master

commit 0aaa3ab1710c3bd9ca7800cc2156a483bd463a11
Author: Ron De Rose 
Date:   Fri Dec 11 20:29:09 2015 +

Changed the key repo validation to allow read only

Fernet token operations would fail if the key repository did not
have write access, even though it only needs read access.
Added logic to the validation to only check for read or read/write
access based on what is required.

Change-Id: I1ac8c3bd549055d5a13e0f5785dede42d710cf9d
Closes-Bug: 1523664
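
A minimal sketch of the kind of check described, using a plain os.access()
test; this is an assumption-laden example, not the actual keystone utility:

import os

def validate_key_repository(key_repo, requires_write=False):
    # keystone-manage needs write access to bootstrap/rotate keys; the main
    # keystone process only needs to read them.
    required = os.R_OK | os.X_OK
    if requires_write:
        required |= os.W_OK
    return os.access(key_repo, required)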


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1523664

Title:
  Token operations fail when fernet key repository isn't writeable

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When using fernet tokens, I'm unable to get a token if the
  key_repository isn't writeable [0]. The main keystone process is only
  required to read keys from the key repository. The keystone-manage
  process must have write access to the key repository in order to
  bootstrap keys.

  Keystone doesn't rely on write access in order to create tokens. The
  check for keystone shouldn't be dependent on it having write access,
  since it doesn't need it [1].

  The write permissions should be kept when called from keystone-manage,
  but not when called from keystone.

  mfisch and clayton from Time Warner Cable brought this to my attention
  and I was able to recreate.

  [0] http://cdn.pasteraw.com/nng0up76dgy5b3naw0hw4bdabdkin84
  [1] 
https://github.com/openstack/keystone/blob/56d3d76304a88baa3ff90e94e6bbd6d8d28e7dcf/keystone/token/providers/fernet/utils.py#L34-L36

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1523664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526138] [NEW] xenserver driver lacks of linux bridge qbrXXX

2015-12-14 Thread huan
Public bug reported:

1. Nova latest master branch, should be Mitaka with next release

2. XenServer as a compute driver in OpenStack lacks the Linux bridge
(qbrXXX) when using neutron networking, and thus it cannot support neutron
security groups either.

** Affects: nova
 Importance: Undecided
 Assignee: huan (huan-xie)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => huan (huan-xie)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526138

Title:
  xenserver driver lacks of linux bridge qbrXXX

Status in OpenStack Compute (nova):
  New

Bug description:
  1. Nova latest master branch, should be Mitaka with next release

  2. XenServer as a compute driver in OpenStack lacks the Linux bridge
  (qbrXXX) when using neutron networking, and thus it cannot support
  neutron security groups either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357751] Re: nova.tests.network.test_manager.AllocateTestCase should use mock

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255876
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f92264dab59e53fa27d068bbd0f9fa4927867e14
Submitter: Jenkins
Branch:master

commit f92264dab59e53fa27d068bbd0f9fa4927867e14
Author: Bharath Thiruveedula 
Date:   Thu Dec 10 18:42:37 2015 +0530

Remove start_service calls from the test case

Currently in nova.tests.unit.network.test_manager.AllocateTestCase,
start_service has been used to start the nova compute and network services.
But these calls take most of the time while running the test cases. In this
patch the start_service method calls are removed to make the test cases run
faster.

Change-Id: I908dc6007d66c482254f07c035e948daff2319f1
Closes-Bug: #1357751


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357751

Title:
  nova.tests.network.test_manager.AllocateTestCase should use mock

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Unit tests should not need to start services. Instead, they should
  simply mock out the specific calls out of the unit of code that is
  being tested.

  nova.tests.network.test_manager.AllocateTestCase calls
  self.start_service() for the conductor, network, and compute service,
  and it does so unnecessarily. This results in long test run times and
  tests that are affected by side effects.

  Remove the calls to self.start_service() and replace with proper use
  of mock.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486882] Re: ml2 port cross backends extension

2015-12-14 Thread Armando Migliaccio
I read the spec and the use case description, and unless I'm missing
something I don't think that extending the port resource with tunnelling
information is a viable solution. Ultimately a tunnel IP is a property of a
compute host, or of a subset of them. Each backend can choose how to expose
it to allow the wiring, and for that we don't need to spill implementation
details all the way up to the API.

Please feel free to elaborate your use case further; until then, this is
provisionally rejected.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486882

Title:
  ml2 port cross backends extension

Status in neutron:
  Won't Fix

Bug description:
  In some deployments using neutron, there is more than one ML2 backend,
  perhaps two: one is openvswitch and the other is a mechanism driver that
  manages a lot of TOR switches. Ports in the same network may be served by
  different ML2 backends. For the openvswitch backend, the tunneling IP of
  a port is the same as the ovs-agent host's IP. But for the other kind of
  backend there is no l2-agent, so the tunneling IP of a port cannot be
  obtained from the host configuration. So I think we need to add a new ML2
  port attribute to record this tunnel connection info for ports. Maybe we
  can name it "binding:tunnel_connection".

  Another benefit of using this extension:
  In the current implementation, for the openvswitch backend, we get the
  tunneling IP of a port this way: port -> binding:host -> agent -> agent
  configurations -> tunneling_ip. But with this extension, we could get the
  tunneling IP in a simpler way: port -> binding:tunnel_connection ->
  tunneling_ip.
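
  Illustrative only -- what the proposed attribute might look like on a port
  dict, versus the indirection used today. The names come from the report and
  are not an implemented neutron API:

  port_today = {'id': 'port-1', 'binding:host_id': 'compute-1'}
  # today: port -> binding:host_id -> agent -> agent configurations -> tunneling_ip

  port_proposed = {'id': 'port-1',
                   'binding:tunnel_connection': {'tunneling_ip': '192.0.2.10'}}
  print(port_proposed['binding:tunnel_connection']['tunneling_ip'])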

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526117] [NEW] _can_fallocate should throw a warning instead of error

2015-12-14 Thread yuntongjin
Public bug reported:

_can_fallocate in nova/virt/libvirt/imagebackend.py
should throw a warning message when the backend image doesn't support
fallocate. Currently, it throws an error message like:
LOG.error(_LE('Unable to preallocate image at path: '
  '%(path)s'), {'path': self.path})
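
A minimal sketch of the suggested change (the message text is taken from the
report; the helper name is made up for the example):

import logging

LOG = logging.getLogger(__name__)

def report_cannot_fallocate(path):
    # Warning, not error: the instance can still be created, just without
    # preallocation.
    LOG.warning('Unable to preallocate image at path: %(path)s',
                {'path': path})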

** Affects: nova
 Importance: Undecided
 Assignee: yuntongjin (yuntongjin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => yuntongjin (yuntongjin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526117

Title:
  _can_fallocate should throw a warning instead of error

Status in OpenStack Compute (nova):
  New

Bug description:
  _can_fallocate in nova/virt/libvirt/imagebackend.py
  should throw a warning message when the backend image doesn't support
  fallocate. Currently, it throws an error message like:
  LOG.error(_LE('Unable to preallocate image at path: '
'%(path)s'), {'path': self.path})

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526108] Re: Octavia devstack install fails behind proxy

2015-12-14 Thread Henry Gessau
** Also affects: octavia
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526108

Title:
  Octavia devstack install fails behind proxy

Status in octavia:
  New

Bug description:
  The octavia devstack/plugin.sh script does a sudo pip install without
  passing the user environment, meaning settings like http_proxy/https_proxy
  are not passed. This keeps the devstack from working behind a proxy when
  those settings are needed for pip installs but are set only for the user.

  function octavia_install {

      setup_develop $OCTAVIA_DIR
      if ! [ "$DISABLE_AMP_IMAGE_BUILD" == 'True' ]; then
          install_package qemu kpartx
          git_clone https://git.openstack.org/openstack/diskimage-builder.git $DEST/diskimage-builder master
          git_clone https://git.openstack.org/openstack/tripleo-image-elements.git $DEST/tripleo-image-elements master
          sudo pip install -r $DEST/diskimage-builder/requirements.txt
      fi

  The user could explicitly set http_proxy/https_proxy for root as well to
  avoid the issue. But with user-only proxy settings it takes a long time to
  reach this error, because the failure only shows up when the octavia
  diskimage build runs under sudo.

  Other scripts, such as devstack/install_pip.sh, do pass the user
  environment to sudo with the -E flag. The suggested fix is to pass the
  '-E' flag in this situation as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1526108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1495664] Re: public base URL is returned in the links even though request is coming from admin URL

2015-12-14 Thread Guang Yee
This bug can be addressed outside of Keystone by passing the appropriate
X-Forwarded-* headers from the proxy or LB.


** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1495664

Title:
  public base URL is returned in the links even though request is coming
  from admin URL

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  The public base URL is returned in the links even though the request is
  coming from the admin URL. Set both admin_endpoint and public_endpoint in
  keystone.conf and notice that public_endpoint is always used as the base
  URL in the links, i.e.

  $ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' http://localhost:5000/v3/policies | python -mjson.tool
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "https://public:5000/v3/policies"
      },
      "policies": []
  }

  $ curl -k -s -H 'X-Auth-Token: d5363c1fe9524972b89192242087' http://localhost:35357/v3/policies | python -mjson.tool
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "https://public:5000/v3/policies"
      },
      "policies": []
  }

  This is related to https://bugs.launchpad.net/keystone/+bug/1381961

  See

  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L419

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1495664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526108] [NEW] Octavia devstack install fails behind proxy

2015-12-14 Thread James Arendt
Public bug reported:

The octavia devstack/plugin.sh script does a sudo pip install without
passing the user environment, meaning settings like http_proxy/https_proxy
are not passed. This keeps the devstack from working behind a proxy when
those settings are needed for pip installs but are set only for the user.

function octavia_install {

    setup_develop $OCTAVIA_DIR
    if ! [ "$DISABLE_AMP_IMAGE_BUILD" == 'True' ]; then
        install_package qemu kpartx
        git_clone https://git.openstack.org/openstack/diskimage-builder.git $DEST/diskimage-builder master
        git_clone https://git.openstack.org/openstack/tripleo-image-elements.git $DEST/tripleo-image-elements master
        sudo pip install -r $DEST/diskimage-builder/requirements.txt
    fi

The user could explicitly set http_proxy/https_proxy for root as well to
avoid the issue. But with user-only proxy settings it takes a long time to
reach this error, because the failure only shows up when the octavia
diskimage build runs under sudo.

Other scripts, such as devstack/install_pip.sh, do pass the user environment
to sudo with the -E flag. The suggested fix is to pass the '-E' flag in this
situation as well.

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526108

Title:
  Octavia devstack install fails behind proxy

Status in neutron:
  New

Bug description:
  The octavia devstack/plugin.sh script does a sudo pip install without
  passing the user environment, meaning settings like http_proxy/https_proxy
  are not passed. This keeps the devstack from working behind a proxy when
  those settings are needed for pip installs but are set only for the user.

  function octavia_install {

      setup_develop $OCTAVIA_DIR
      if ! [ "$DISABLE_AMP_IMAGE_BUILD" == 'True' ]; then
          install_package qemu kpartx
          git_clone https://git.openstack.org/openstack/diskimage-builder.git $DEST/diskimage-builder master
          git_clone https://git.openstack.org/openstack/tripleo-image-elements.git $DEST/tripleo-image-elements master
          sudo pip install -r $DEST/diskimage-builder/requirements.txt
      fi

  The user could explicitly set http_proxy/https_proxy for root as well to
  avoid the issue. But with user-only proxy settings it takes a long time to
  reach this error, because the failure only shows up when the octavia
  diskimage build runs under sudo.

  Other scripts, such as devstack/install_pip.sh, do pass the user
  environment to sudo with the -E flag. The suggested fix is to pass the
  '-E' flag in this situation as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462694] Re: python keystone v3 client RoleAssignmentManager needs to pass include_subtree

2015-12-14 Thread Steve Martinelli
See https://review.openstack.org/#/c/188184/ for a patch.

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

** Changed in: python-keystoneclient
   Importance: Undecided => Low

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Dan Nguyen (daniel-a-nguyen)

** Changed in: python-keystoneclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1462694

Title:
  python keystone v3 client RoleAssignmentManager  needs to  pass
  include_subtree

Status in OpenStack Identity (keystone):
  Invalid
Status in python-keystoneclient:
  In Progress

Bug description:
  In order for a Domain Admin to successfully list the role assignments
  in the v3 API, we need to pass a new request parameter to the API.
  Horizon depends on this for domain support.

  The call looks like this

   GET /v3/role_assignments?scope.domain.id=id&include_subtree

  and the results will return all the project role_assignments for the
  domain id.
  These can be further filtered by project id in the client code.

  This is related to https://bugs.launchpad.net/keystone/+bug/1437407

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1462694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526096] [NEW] lbaas devstack plugin should start neutron-server with neutron_lbaas.conf

2015-12-14 Thread Brandon Logan
Public bug reported:

The neutron-lbaas devstack plugin sets up neutron_lbaas.conf but for
non-default values to be read, neutron-server needs to be started with
--config-file /etc/neutron/neutron.conf --config-file
/etc/neutron/neutron_lbaas.conf (or wherever those config files live).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526096

Title:
  lbaas devstack plugin should start neutron-server with
  neutron_lbaas.conf

Status in neutron:
  New

Bug description:
  The neutron-lbaas devstack plugin sets up neutron_lbaas.conf but for
  non-default values to be read, neutron-server needs to be started with
  --config-file /etc/neutron/neutron.conf --config-file
  /etc/neutron/neutron_lbaas.conf (or wherever those config files live).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526086] [NEW] Unable to create service chain instance with GBP client

2015-12-14 Thread Gary Marchiny
Public bug reported:

When attempting to create a service chain instance using the GBP client,
I get the following error message:

ubuntu@foo$ gbp servicechain-instance-create --service-chain-spec 
svc_chain_spec --provider-ptg app_ptg --consumer-ptg user_ptg testInstance
Unrecognized attribute(s) 'provider_ptg, servicechain_spec, consumer_ptg'

To reproduce, install the devstack kilo release (using these instructions:
https://github.com/group-policy/gbp-devstack). Next, use Heat to create a
test stack of resources that includes policy groups (see the attached Heat
templates). Finally, attempt to run the command mentioned above to create a
service chain instance.

** Affects: group-based-policy-client
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526086

Title:
  Unable to create service chain instance with GBP client

Status in Group Based Policy Client:
  New

Bug description:
  When attempting to create a service chain instance using the GBP
  client, I get the following error message:

  ubuntu@foo$ gbp servicechain-instance-create --service-chain-spec 
svc_chain_spec --provider-ptg app_ptg --consumer-ptg user_ptg testInstance
  Unrecognized attribute(s) 'provider_ptg, servicechain_spec, consumer_ptg'

  To reproduce, install the devstack kilo release (using these
  instructions: https://github.com/group-policy/gbp-devstack). Next, use
  Heat to create a test stack of resources that includes policy groups (see
  the attached Heat templates). Finally, attempt to run the command
  mentioned above to create a service chain instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/group-based-policy-client/+bug/1526086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526086] Re: Unable to create service chain instance with GBP client

2015-12-14 Thread Gary Marchiny
** Attachment added: "Heat template for firewall"
   
https://bugs.launchpad.net/neutron/+bug/1526086/+attachment/4534783/+files/fw.template

** Project changed: neutron => group-based-policy-client

** Description changed:

  When attempting to create a service chain instance using the GBP client,
  I get the following error message:
  
  ubuntu@foo$ gbp servicechain-instance-create --service-chain-spec 
svc_chain_spec --provider-ptg app_ptg --consumer-ptg user_ptg testInstance
  Unrecognized attribute(s) 'provider_ptg, servicechain_spec, consumer_ptg'
  
  To reproduce, install devstack kilo release (using these instructions
  https://github.com/group-policy/gbp-devstack). Next, use Heat to create
- the stack of resources that includes groups (see attached Heat
- template). Finally, attempt to run the command mentioned above to create
- a service chain instance.
+ the a test stack of resources that includes policy groups (see attached
+ Heat templates). Finally, attempt to run the command mentioned above to
+ create a service chain instance.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526086

Title:
  Unable to create service chain instance with GBP client

Status in Group Based Policy Client:
  New

Bug description:
  When attempting to create a service chain instance using the GBP
  client, I get the following error message:

  ubuntu@foo$ gbp servicechain-instance-create --service-chain-spec 
svc_chain_spec --provider-ptg app_ptg --consumer-ptg user_ptg testInstance
  Unrecognized attribute(s) 'provider_ptg, servicechain_spec, consumer_ptg'

  To reproduce, install devstack kilo release (using these instructions
  https://github.com/group-policy/gbp-devstack). Next, use Heat to
  create a test stack of resources that includes policy groups (see
  attached Heat templates). Finally, attempt to run the command
  mentioned above to create a service chain instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/group-based-policy-client/+bug/1526086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526084] [NEW] Wrong uuid passed by disable_isolated_metadata_proxy

2015-12-14 Thread Shih-Hao Li
Public bug reported:

In DhcpAgent, when enable_isolated_metadata_proxy() spawns a metadata
proxy agent for a network, it will pass router_id instead of network_id
if metadata network is enabled and a router port is connected to this
network.

Later, MetadataDriver will register this uuid (i.e. router_id) with
the monitor for the new metadata proxy process.

But when disable_isolated_metadata_proxy() destroys a metadata proxy
agent for a network, it always passes network_id as the uuid. Thus
MetadataDriver cannot find the matching process, so the corresponding
metadata proxy agent cannot be destroyed.
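
A minimal sketch of the idea behind a fix, assuming the uuid is resolved by a
shared helper used by both the enable and the disable path (the helper name,
the constants and the attribute access below are hypothetical, not the actual
neutron code):

    # Hypothetical helper: derive the proxy uuid the same way on enable and
    # disable, so the external process monitor can always find the process.
    ROUTER_INTERFACE_OWNERS = ('network:router_interface',
                               'network:router_interface_distributed')

    def metadata_proxy_uuid(network, metadata_network_enabled):
        if metadata_network_enabled:
            router_ports = [p for p in network.ports
                            if p.device_owner in ROUTER_INTERFACE_OWNERS]
            if router_ports:
                return router_ports[0].device_id   # the router_id
        return network.id                          # otherwise the network_id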

** Affects: neutron
 Importance: Undecided
 Assignee: Shih-Hao Li (shihli)
 Status: In Progress

** Description changed:

- In DhcpAgent, when enable_isolated_metadata_proxy() spawns a metadata proxy 
agent
- for a network, it will pass router_id instead of network_id if metadata 
network is enabled
- and a router port is connected to this network. Later, MatadataDriver will 
register this uuid
- (i.e. router_id) with monitor for the new metadata proxy process.
+ In DhcpAgent, when enable_isolated_metadata_proxy() spawns a metadata
+ proxy agent for a network, it will pass router_id instead of network_id
+ if metadata network is enabled and a router port is connected to this
+ network. Later, MatadataDriver will register this uuid (i.e. router_id)
+ with monitor for the new metadata proxy process.
  
- But when disable_isolated_metadata_proxy() destroys  a metadata proxy agent 
for a network,
- it always passes network_id as the uuid.  Thus MatadataDriver can not find 
the matching process.
- So the corresponding metadata proxy agent can not be destroyed.
+ But when disable_isolated_metadata_proxy() destroys  a metadata proxy
+ agent for a network, it always passes network_id as the uuid.  Thus
+ MatadataDriver can not find the matching process.So the corresponding
+ metadata proxy agent can not be destroyed.

** Summary changed:

- Fix uuid passing in disable_isolated_metadata_proxy
+ Wrong uuid passed by disable_isolated_metadata_proxy

** Description changed:

  In DhcpAgent, when enable_isolated_metadata_proxy() spawns a metadata
  proxy agent for a network, it will pass router_id instead of network_id
  if metadata network is enabled and a router port is connected to this
- network. Later, MatadataDriver will register this uuid (i.e. router_id)
- with monitor for the new metadata proxy process.
+ network.
+ 
+ Later, MatadataDriver will register this uuid (i.e. router_id) with
+ monitor for the new metadata proxy process.
  
  But when disable_isolated_metadata_proxy() destroys  a metadata proxy
  agent for a network, it always passes network_id as the uuid.  Thus
- MatadataDriver can not find the matching process.So the corresponding
+ MatadataDriver can not find the matching process. So the corresponding
  metadata proxy agent can not be destroyed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526084

Title:
  Wrong uuid passed by disable_isolated_metadata_proxy

Status in neutron:
  In Progress

Bug description:
  In DhcpAgent, when enable_isolated_metadata_proxy() spawns a metadata
  proxy agent for a network, it will pass router_id instead of
  network_id if metadata network is enabled and a router port is
  connected to this network.

  Later, MetadataDriver will register this uuid (i.e. router_id) with
  the monitor for the new metadata proxy process.

  But when disable_isolated_metadata_proxy() destroys a metadata proxy
  agent for a network, it always passes network_id as the uuid. Thus
  MetadataDriver cannot find the matching process, so the corresponding
  metadata proxy agent cannot be destroyed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526087] [NEW] Can edit user name, email to illegal values

2015-12-14 Thread Rani Fields
Public bug reported:

Under Identity > Users, you can edit usernames and emails to illegal
values (string too long, invalid characters/format, etc). The test
string for both email and username update is
"abcdefghijklmnopqrstuvwxyz!@#$%^&*()_+1234567890-=[]\{}|;':",./<>?
baduser2".

This behavior is not in line with user creation's validation. When you
attempt to create a user with the test string as a username or email,
you get an error. This validation present during user creation does not
appear to be active when editing the user's name or email.

Furthermore, when you set the user's name to the test string, you will
be unable to log on using that username due to a name length issue. The
test string's length is 75 characters; the horizon log-on maximum is 64.
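
A minimal Django-forms sketch of the kind of shared validation the edit form
could reuse from the create form; the field names and rules here are
illustrative assumptions, not Horizon's actual code:

    from django import forms
    from django.core.validators import RegexValidator

    class UpdateUserForm(forms.Form):
        # Hypothetical limits mirroring the create form and the 64-character
        # log-on maximum mentioned above.
        name = forms.CharField(
            max_length=64,
            validators=[RegexValidator(r'^[\w.@+-]+$',
                                       'Enter a valid user name.')])
        email = forms.EmailField(required=False)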

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526087

Title:
  Can edit user name, email to illegal values

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Under Identity > Users, you can edit usernames and emails to illegal
  values (string too long, invalid characters/format, etc). The test
  string for both email and username update is
  "abcdefghijklmnopqrstuvwxyz!@#$%^&*()_+1234567890-=[]\{}|;':",./<>?
  baduser2".

  This behavior is not in line with user creation's validation. When you
  attempt to create a user with the test string as a username or email,
  you get an error. This validation present during user creation does
  not appear to be active when editing the user's name or email.

  Furthermore, when you set the user's name to the test string, you will
  be unable to log on using that username due to a name length issue.
  The test string's length is 75 characters; the horizon log-on maximum
  is 64.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1526087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526054] [NEW] Angular delete action icons and alignment wrong

2015-12-14 Thread Travis Tripp
Public bug reported:


Angular Table actions are using a different icon for delete than the django 
actions. In addition, the spacing and font is different.

http://pasteboard.co/2c1hmge.png

This is in:
https://github.com/openstack/horizon/blob/master/horizon/static/framework/widgets
/action-list/actions-delete-selected.template.html


Angular row delete actions have an icon even though their django
counterparts do not.

On the Django table, there is not an icon for row actions. I did a
screen grab below (note that I messed with the code to force it to only
have delete actions).

http://pasteboard.co/2c1hmge.png

If you look at the rendered HTML on the Django table, there does not
appear to be any span element containing the icon.

 Delete
Image 

This is in:
https://github.com/openstack/horizon/blob/master/horizon/static/framework/widgets
/action-list/actions-delete.template.html

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1526054

Title:
  Angular delete action icons and alignment wrong

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Angular Table actions are using a different icon for delete than the django 
actions. In addition, the spacing and font is different.

  http://pasteboard.co/2c1hmge.png

  This is in:
  
https://github.com/openstack/horizon/blob/master/horizon/static/framework/widgets
  /action-list/actions-delete-selected.template.html


  Angular row delete actions have an icon even though their django
  counterparts do not.

  On the Django table, there is not an icon for row actions. I did a
  screen grab below (note that I messed with the code to force it to
  only have delete actions).

  http://pasteboard.co/2c1hmge.png

  If you look at the rendered HTML on the Django table, there does not
  appear to be any span element containing the icon.

   Delete
  Image 

  This is in:
  
https://github.com/openstack/horizon/blob/master/horizon/static/framework/widgets
  /action-list/actions-delete.template.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1526054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525282] Re: wrap_exception decorator does not grab arguments properly for decorated methods

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/256118
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1b757c328d43d8154282794fddb8822ea265b1cf
Submitter: Jenkins
Branch:master

commit 1b757c328d43d8154282794fddb8822ea265b1cf
Author: Andrew Laski 
Date:   Thu Dec 10 17:34:09 2015 -0500

Fix wrap_exception to get all arguments for payload

The wrap_exception decorator that attempts to send a notification when
exceptions occur was not sending all the arguments it was intending to.
It relies on getcallargs to get the arguments and argument names for the
called method but if the method has another decorator on it getcallargs
pulls information for the decorator rather than the decorated method.
This pulls the decorated method with get_wrapped_function and then calls
getcallargs.

get_wrapped_function was moved to safeutils because utils.py can't be
imported by exception.py without a circular import.

A few tests were updated to include the id on the instance object used.
This was done because serialization of the object includes the
instance.name field which assumes that id is set to populate the
CONF.instance_name_template.  When id is not set it triggers a lazy load
which fails in the test environment.

Change-Id: I87d8691a2aae6f3555177364f3c40a490a6f7591
Closes-bug: 1525282


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525282

Title:
  wrap_exception decorator does not grab arguments properly for
  decorated methods

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The wrap_exception decorator, which pulls the wrapped method arguments
  and sends a notification if there is an exception raised from the
  method, is not pulling the full list of arguments.  This is because of
  a combination of relying on safe_utils.getcallargs which doesn't pull
  arguments when the called method uses *args or **kwargs and not using
  get_wrapped_function to get the call args for the decorated method.
  What is currently happening is getcallargs is passed a decorated
  method and pulls the argument list for the decorator, and most
  decorators are defined with *args and **kwargs.  Instead
  get_wrapped_function should be called first and the result should be
  passed to getcallargs.
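
  As a self-contained illustration of the mechanism (plain Python, not the
  nova code): inspecting the decorated function only exposes the decorator's
  generic *args/**kwargs, while unwrapping it first recovers the real argument
  names, which is roughly what calling get_wrapped_function before getcallargs
  achieves.

      import functools
      import inspect

      def noop_decorator(f):
          @functools.wraps(f)
          def wrapper(*args, **kwargs):
              return f(*args, **kwargs)
          return wrapper

      @noop_decorator
      def resize(self, context, instance, flavor=None):
          pass

      print(inspect.getcallargs(resize, 'self', 'ctxt', 'inst'))
      # {'args': ('self', 'ctxt', 'inst'), 'kwargs': {}}

      # inspect.unwrap (Python 3.4+) follows __wrapped__ set by functools.wraps.
      print(inspect.getcallargs(inspect.unwrap(resize), 'self', 'ctxt', 'inst'))
      # {'self': 'self', 'context': 'ctxt', 'instance': 'inst', 'flavor': None}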

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1525282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524990] Re: safe_utils.getcallargs does not match inspect.getcallargs

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/255572
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2910d75b28afd909af3b4ac392729ac3d5e64b65
Submitter: Jenkins
Branch:master

commit 2910d75b28afd909af3b4ac392729ac3d5e64b65
Author: Andrew Laski 
Date:   Wed Dec 9 17:12:29 2015 -0500

Fix use of safeutils.getcallargs

The getcallargs method was written under the assumption that self was
not passed in as an argument even if the function being introspected
took self as an argument.  Since safeutils.getcallargs is modeled on
inspect.getcallargs which does require self to be passed in the wrong
interface was being provided.

The initial user of getcallargs was therefore calling it incorrectly and
this usage was copied to other places.  However this led to strange
failures when someone attempted to call the method correctly.  So this
patch fixes callers of getcallargs, and removes some code from the
implementation that is not necessary once the usage is fixed.

Change-Id: I86eb59a2280961b809c1b4680012fab9566d60db
Closes-Bug: 1524990


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524990

Title:
  safe_utils.getcallargs does not match inspect.getcallargs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  safeutils.getcallargs was written when python2.6 was supported and did
  not have inspect.getcallargs.  It was intended to be a simplified
  version that could be replaced when python2.6 support was dropped and
  inspect.getcallargs was ubiquitous.  However the interface that
  safe_utils.getcallargs provides did not match inspect.getcallargs
  around the handling of the self parameter needing to be passed in.  It
  should be brought inline with inspect.getcallargs so that it can be
  dropped.
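
  A small illustration of the inspect.getcallargs interface the bug asks
  safe_utils to match: self has to be passed in explicitly, like any other
  argument (the class below is made up for the example):

      import inspect

      class Server(object):
          def resize(self, flavor, clean_shutdown=True):
              pass

      srv = Server()
      args = inspect.getcallargs(Server.resize, srv, 'm1.small')
      print(args['flavor'], args['clean_shutdown'])   # m1.small True
      print(args['self'] is srv)                      # True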

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525881] Re: unnecessary L3 rpcs

2015-12-14 Thread YAMAMOTO Takashi
** Description changed:

- our plugin issues rpcs for l3-agent unnecessarily.
+ networking-midonet plugin issues RPCs for l3-agent unnecessarily.

** Changed in: networking-midonet
   Status: New => In Progress

** Changed in: networking-midonet
Milestone: None => 2.0.0

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Description changed:

  networking-midonet plugin issues RPCs for l3-agent unnecessarily.
+ 
+ related to that, neutron has a few RPC assumptions which needs to be
+ fixed.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525881

Title:
  unnecessary L3 rpcs

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress

Bug description:
  networking-midonet plugin issues RPCs for l3-agent unnecessarily.

  Related to that, neutron has a few RPC assumptions which need to be
  fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1525881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518431] Re: Glance failed to upload image to swift storage

2015-12-14 Thread Kairat Kushaev
It took some time to debug this issue for Glance.
It turned out that both Glance and Swift support chunked requests.
Unfortunately, chunked uploading is not supported by mod_fastcgi, which is used
by RadosGW. It is very typical for Apache to respond with a 411 error because
some CGI (or WSGI) frameworks always require Content-Length to be specified.
A request with Transfer-Encoding: chunked is part of the HTTP spec, so
glance_store prepares a correct request here.
So I would recommend deploying RadosGW under a different CGI module (for
example mod_proxy_fcgi) that supports chunked requests.
I will mark this as Invalid for Glance; please re-open the bug if you don't
agree.
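
As a hedged illustration of the difference (using the requests library with a
made-up helper, URL and token): a body of unknown length is sent with
Transfer-Encoding: chunked, which is exactly what a 411-returning frontend
rejects, while a plain seekable file lets the client send Content-Length.

    import requests

    def put_image(url, token, image_file, chunked=True):
        headers = {'X-Auth-Token': token,
                   'Content-Type': 'application/octet-stream'}
        if chunked:
            # A generator has no known length, so requests switches to
            # Transfer-Encoding: chunked.
            body = iter(lambda: image_file.read(64 * 1024), b'')
            return requests.put(url, data=body, headers=headers)
        # With a real file object, requests can determine the size and
        # send a Content-Length header instead of chunking.
        image_file.seek(0)
        return requests.put(url, data=image_file, headers=headers)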

** Changed in: glance
 Assignee: (unassigned) => Kairat Kushaev (kkushaev)

** Changed in: glance
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1518431

Title:
  Glance failed to upload image to swift storage

Status in Glance:
  Invalid

Bug description:
  When glance is configured with the swift backend and the swift API is
  provided via RadosGW, glance is unable to upload an image.

  Command:
  glance --debug image-create --name trusty_ext4 --disk-format raw 
--container-format bare --file trusty-server-cloudimg-amd64.img --visibility 
public --progress
  Logs:
  http://paste.openstack.org/show/479621/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1518431/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1520510] Re: inconsistent column ordering in system Information panel

2015-12-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/251228
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0a6d9ce8272b6d2b5ca6e4a085af671f306a0acf
Submitter: Jenkins
Branch:master

commit 0a6d9ce8272b6d2b5ca6e4a085af671f306a0acf
Author: liyingjun 
Date:   Sat Nov 28 01:24:42 2015 +0800

Change column order for Orchestration Services table

The `Last Updated` and `State` column are not consistent with other
tables in system information, change order to make it consistent.

Change-Id: If21aed35afe9dc56754f4b1e7bbd57bbda9369f5
Closes-bug: #1520510


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1520510

Title:
  inconsistent column ordering in system Information panel

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While clicking System Information in the Horizon Dashboard, there are some
tabs like Network Agents and Orchestration Services.
  The Last Updated and Status columns are inconsistent in the
Orchestration Services tab.
  The last column should be Last Updated and not Status, because in all other
tabs you have first Status and then the Last Updated column.

  Our version is 2015.1.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1520510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370207] Re: race condition between nova scheduler and nova compute

2015-12-14 Thread Nikola Đipanov
This seems to be by design, i.e. the scheduler can get out of sync, and we
have the claim-and-retry mechanism in place so the request for vm3 would
fail and trigger a reschedule.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370207

Title:
  race condition between nova scheduler and nova compute

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This is for nova 2014.1.2.

  Here, the nova DB is the shared resource between nova-scheduler and nova-
  compute. Nova-scheduler checks DB to see if hv node can meet the
  provision requirement, nova-compute is the actual process to modify DB
  to reduce the free_ram_mb.

  For example, current available RAM on hv is 56G, with 
ram_allocation_ration=1.0. Within a minute, 3 vm provision requests are coming 
to scheduler, each asking for 24G RAM.
   
   t1: scheduler gets a request for vm1, assign vm1 to hv
   t2: scheduler gets a request for vm2, assign vm2 to hv
   t3: vm1 is created, nova-compute updates nova DB with RAM=32G
   t4: scheduler gets a request for vm3, assign vm3 to hv
   t5: vm2 is created, nova-compute updates nova DB with RAM=8G
   t6: vm3 is created, nova-compute updates nova DB with RAM=-16G

  In the end, we have a negative RAM with ram_allocation_ratio=1.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525918] [NEW] Instance NG: Images can disappear from Instance Source table

2015-12-14 Thread Itxaka Serrano
*** This bug is a duplicate of bug 1489619 ***
https://bugs.launchpad.net/bugs/1489619

Public bug reported:

How to reproduce:

 - Enable the instance launch NG
 - Have at least 1 image available
 - Click on the Instance Launch angular button
 - Go to Source
 - Add an image to the allocated table by clicking on +
 - Change "Select Boot Source" to any other source type (Volume for example)
 - Change "Select Boot Source" to "Image"

What is expected:

The image list has been reset to the original values (Nothing selected,
every image in the Available table)

What actually happens:

The image that was selected has disappeared from both tables.

** Affects: horizon
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1489619
   Selected item disappears in ng launch instance after changing source

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1525918

Title:
  Instance NG: Images can disappear from Instance Source table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:

   - Enable the instance launch NG
   - Have at least 1 image available
   - Click on the Instance Launch angular button
   - Go to Source
   - Add an image to the allocated table by clicking on +
   - Change "Select Boot Source" to any other source type (Volume for example)
   - Change "Select Boot Source" to "Image"

  What is expected:

  The image list has been reset to the original values (Nothing
  selected, every image in the Available table)

  What actually happens:

  The image that was selected has disappeared from both tables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1525918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521464] Re: create an instance with ephemeral disk or swap disk error

2015-12-14 Thread Alexis Lee
Sorry, newbie bug triager here. After double-checking with markus_z, he
recommends setting incomplete and asking for DEBUG level logs.

** Changed in: nova
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521464

Title:
  create an instance with ephemeral disk or swap disk error

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  I boot an instance with an ephemeral disk (or swap disk) in kilo, but it
returns an HTTP request error:
  eg:
  nova boot --flavor 2 --image 5905bd7e-a87f-4856-8401-b8eb7211c84d --nic 
net-id=12ace164-d996-4261-9228-23ca0680f7a8 --ephemeral size=5,format=ext3 
test_vm1

  ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for
  the instance and image/block device mapping combination is not valid.
  (HTTP 400) (Request-ID: req-b571662f-e554-49a7-979f-763f34b4b162)

  I think an instance with an ephemeral disk should be able to boot from
  the image instead of failing with an invalid boot sequence.

  the log in:
  http://paste.openstack.org/show/480452/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525916] [NEW] launch instance NG available table shows wrong number

2015-12-14 Thread Itxaka Serrano
*** This bug is a duplicate of bug 1489618 ***
https://bugs.launchpad.net/bugs/1489618

Public bug reported:

Enable new angular instance launch
Click on the angular launch instance button
Go to source
Change the source to any other type which is not image

What is expected:
The badge next to the available table should refresh to show the number of
items available for the selected source

What actually happens:
The badge always shows the number of items from the image table

** Affects: horizon
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1489618
   Available items count in ng launch instance does not update when changing 
source

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1525916

Title:
  launch instance NG available table shows wrong number

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Enable new angular instance launch
  Click on the angular launch instance button
  Go to source
  Change the source to any other type which is not image

  What is expected:
  The badge next to the available table should refresh to show the number of
items available for the selected source

  What actually happens:
  The badge always shows the number of items from the image table

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1525916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525913] [NEW] vmwareapi get vcenter cluster

2015-12-14 Thread linbing
Public bug reported:


In liberty nova/virt/vmwareapi/vm_util.py:

    def get_cluster_ref_by_name(session, cluster_name):
        """Get reference to the vCenter cluster with the specified name."""
        all_clusters = get_all_cluster_mors(session)
        for cluster in all_clusters:
            if (hasattr(cluster, 'propSet') and
                    cluster.propSet[0].val == cluster_name):
                return cluster.obj

When all_clusters is None (i.e. get_all_cluster_mors() returned None), this
code raises TypeError: 'NoneType' object is not iterable, and nova-compute
won't come up.
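
A minimal sketch of a defensive fix, assuming get_all_cluster_mors() may
legitimately return None; it mirrors the snippet above and is not necessarily
the actual nova fix:

    def get_cluster_ref_by_name(session, cluster_name):
        """Get reference to the vCenter cluster with the specified name."""
        all_clusters = get_all_cluster_mors(session) or []   # guard against None
        for cluster in all_clusters:
            if (hasattr(cluster, 'propSet') and
                    cluster.propSet[0].val == cluster_name):
                return cluster.obj
        return None   # the caller still has to handle "cluster not found"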

** Affects: nova
 Importance: Undecided
 Assignee: linbing (hawkerous)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => linbing (hawkerous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525913

Title:
  vmwareapi get vcenter cluster

Status in OpenStack Compute (nova):
  New

Bug description:
  
  In liberty nova/virt/vmwareapi/vm_util.py:

      def get_cluster_ref_by_name(session, cluster_name):
          """Get reference to the vCenter cluster with the specified name."""
          all_clusters = get_all_cluster_mors(session)
          for cluster in all_clusters:
              if (hasattr(cluster, 'propSet') and
                      cluster.propSet[0].val == cluster_name):
                  return cluster.obj

  When all_clusters is None (i.e. get_all_cluster_mors() returned None), this
  code raises TypeError: 'NoneType' object is not iterable, and nova-compute
  won't come up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1525913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488913] Re: Deprecation warnings (removal in Django 1.8) in Data Processing

2015-12-14 Thread Vitaly Gridnev
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488913

Title:
  Deprecation warnings (removal in Django 1.8) in Data Processing

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Noticed below message in logs in Data Processing panels:

  WARNING:py.warnings:RemovedInDjango18Warning: In Django 1.8, widget
  attribute placeholder=True will be rendered as 'placeholder'. To
  preserve current behavior, use the string 'True' instead of the
  boolean value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525901] [NEW] Agents report as started before neutron recognizes as active

2015-12-14 Thread Brent Eagles
Public bug reported:

In HA, there is a potential race condition between the openvswitch agent
and other agents that "own", depend on or manipulate ports. As the
neutron server resumes on a failover it will not immediately be aware of
openvswitch agents that have also been activated on failover and act as
though there are no active openvswitch agents (this is an example, it
most likely affects other L2 agents). If an agent such as the L3 agent
starts and begins resync before the neutron server is aware of the
active openvswitch agent, ports for the routers on that agent will be
marked as "binding_failed". Currently this is a "terminal" state for the
port as neutron does not attempt to rebind failed bindings on the same
host.

Unfortunately, the neutron agents do not provide even a best-effort
deterministic indication to the outside service manager (systemd,
pacemaker, etc...) that it has fully initialized and the neutron server
should be aware that it is active. Agents should follow the same pattern
as wsgi based services and notify systemd after it can be reasonably
assumed that the neutron server should be aware that it is alive. That
way service startup order logic or constraints can properly start an
agent that is dependent on other agents *after* neutron should be aware
that the required agents are active.
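
A minimal sketch of the sd_notify mechanism the report alludes to, assuming the
agent runs under a systemd unit of Type=notify; where exactly in agent startup
to call it is the real design question and is not answered here:

    import os
    import socket

    def sd_notify_ready():
        """Send READY=1 to systemd via $NOTIFY_SOCKET, if present."""
        addr = os.environ.get('NOTIFY_SOCKET')
        if not addr:
            return False                    # not started with Type=notify
        if addr.startswith('@'):            # abstract socket namespace
            addr = '\0' + addr[1:]
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(b'READY=1', addr)
        finally:
            sock.close()
        return True

oslo.service already ships a systemd notification helper, so agents would most
likely reuse that rather than open the socket by hand.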

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525901

Title:
  Agents report as started before neutron recognizes as active

Status in neutron:
  New

Bug description:
  In HA, there is a potential race condition between the openvswitch
  agent and other agents that "own", depend on or manipulate ports. As
  the neutron server resumes on a failover it will not immediately be
  aware of openvswitch agents that have also been activated on failover
  and act as though there are no active openvswitch agents (this is an
  example, it most likely affects other L2 agents). If an agent such as
  the L3 agent starts and begins resync before the neutron server is
  aware of the active openvswitch agent, ports for the routers on that
  agent will be marked as "binding_failed". Currently this is a
  "terminal" state for the port as neutron does not attempt to rebind
  failed bindings on the same host.

  Unfortunately, the neutron agents do not provide even a best-effort
  deterministic indication to the outside service manager (systemd,
  pacemaker, etc...) that it has fully initialized and the neutron
  server should be aware that it is active. Agents should follow the
  same pattern as wsgi based services and notify systemd after it can be
  reasonably assumed that the neutron server should be aware that it is
  alive. That way service startup order logic or constraints can
  properly start an agent that is dependent on other agents *after*
  neutron should be aware that the required agents are active.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525903] [NEW] should not use mutable default arguments

2015-12-14 Thread javeme
Public bug reported:

We should not use mutable default arguments in function definitions due
to the "Common Gotchas"[1].

So, we should remove the default argument "[]" where the function is defined,
such as in this function:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/utils/metering.py#L178

[1]: http://docs.python-guide.org/en/latest/writing/gotchas/
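
A short, self-contained illustration of the gotcha and the usual None-default
fix (the function names are made up for the example):

    def broken_query(meters=[]):     # the default list is created only once
        meters.append('cpu_util')
        return meters

    def fixed_query(meters=None):    # default to None and build a fresh list
        if meters is None:
            meters = []
        meters.append('cpu_util')
        return meters

    print(broken_query())   # ['cpu_util']
    print(broken_query())   # ['cpu_util', 'cpu_util'] -- state leaked between calls
    print(fixed_query())    # ['cpu_util']
    print(fixed_query())    # ['cpu_util']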

** Affects: horizon
 Importance: Undecided
 Assignee: javeme (javaloveme)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1525903

Title:
  should not use mutable default arguments

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We should not use mutable default arguments in function definitions
  due to the "Common Gotchas"[1].

  So, we should remove the default argument "[]" where the function is defined,
such as in this function:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/utils/metering.py#L178

  [1]: http://docs.python-guide.org/en/latest/writing/gotchas/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1525903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525101] Re: floating ip info is not updated correctly when remove a fix ip which is associated to a floating ip

2015-12-14 Thread Bo Chi
Changed this bug to nova; the floating ip should be disassociated when removing
a fixed_ip from an instance.
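
A hedged sketch of that cleanup using python-neutronclient; the helper name is
made up, and 'neutron' is assumed to be an authenticated
neutronclient.v2_0.client.Client instance:

    def disassociate_floating_ips(neutron, port_id, removed_fixed_ip):
        # Find floating IPs still pointing at the removed fixed IP and
        # clear their port association.
        fips = neutron.list_floatingips(port_id=port_id)['floatingips']
        for fip in fips:
            if fip['fixed_ip_address'] == removed_fixed_ip:
                neutron.update_floatingip(
                    fip['id'], {'floatingip': {'port_id': None}})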

** Project changed: neutron => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525101

Title:
  floating ip info is not updated correctly when remove a fix ip which
  is associated to a floating ip

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  [Summary]
  floating ip info is not updated correctly when removing a fixed ip which is
associated with a floating ip

  [Topo]
  devstack all-in-one node

  [Description and expect result]
  floating ip info should be updated correctly when removing a fixed ip which
is associated with a floating ip.

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) launch 1 instance:
  root@45-59:/opt/stack/devstack# nova list
  +--------------------------------------+------+--------+------------+-------------+---------------+
  | ID                                   | Name | Status | Task State | Power State | Networks      |
  +--------------------------------------+------+--------+------------+-------------+---------------+
  | 4608feb6-c825-46ae-8dcf-3d8839e51865 | inst | ACTIVE | -          | Running     | net2=2.0.0.30 |
  +--------------------------------------+------+--------+------------+-------------+---------------+

  2) associate a floating ip to it:
  root@45-59:/opt/stack/devstack# nova floating-ip-associate --fixed-address 
2.0.0.30 inst 172.168.0.10
  root@45-59:/opt/stack/devstack# nova list
  +--------------------------------------+------+--------+------------+-------------+-----------------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks                    |
  +--------------------------------------+------+--------+------------+-------------+-----------------------------+
  | 4608feb6-c825-46ae-8dcf-3d8839e51865 | inst | ACTIVE | -          | Running     | net2=2.0.0.30, 172.168.0.10 |
  +--------------------------------------+------+--------+------------+-------------+-----------------------------+

  3) remove the fixed ip of the instance; the fixed ip and floating ip info are
removed from the "nova list" output:
  root@45-59:/opt/stack/devstack# nova remove-fixed-ip inst 2.0.0.30
  root@45-59:/opt/stack/devstack# nova list
  +--------------------------------------+------+--------+------------+-------------+----------+
  | ID                                   | Name | Status | Task State | Power State | Networks |
  +--------------------------------------+------+--------+------------+-------------+----------+
  | 4608feb6-c825-46ae-8dcf-3d8839e51865 | inst | ACTIVE | -          | Running     |          |
  +--------------------------------------+------+--------+------------+-------------+----------+

  4) but if you show the floating ip info via the command below, the fixed ip
info still exists:  ISSUE
  root@45-59:/opt/stack/devstack# neutron floatingip-show  
46af70eb-e62b-471c-8694-0dc14060c372
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | fixed_ip_address    | 2.0.0.30                       ISSUE |
  | floating_ip_address | 172.168.0.10                         |
  | floating_network_id | c2592b59-f621-479c-8eaf-9b23f41c64d4 |
  | id                  | 46af70eb-e62b-471c-8694-0dc14060c372 |
  | port_id             | 8c57f3ea-7cbf-4ad6-98d9-8c7a0d743bbb |
  | router_id           | 37b26f1a-6086-4d64-bf8d-bd2ba27e5fee |
  | status              | ACTIVE                               |
  | tenant_id           | f75256da799642e0ab597a7533918714     |
  +---------------------+--------------------------------------+
  root@45-59:/opt/stack/devstack#

  5) the issue results in another problem: the network info of the
  instance in the dashboard is incorrect.

  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1525101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525895] [NEW] ML2 OpenvSwitch Agent GRE/VXLAN tunnel code does not support IPv6 addresses as tunnel endpoints

2015-12-14 Thread Frode Nordahl
Public bug reported:

OpenvSwitch currently does not support IPv6 addresses as endpoints for
GRE or VXLAN tunnels. Consequently, the Neutron OpenvSwitch driver and
agent are written with support for IPv4-only endpoint addresses in mind.
However, there is ongoing work in OpenvSwitch to allow VXLAN tunnels
with IPv6 addresses as endpoints. Running a datacenter with support for
both IPv4 and IPv6 in the core is currently a necessity, but as soon as
OpenvSwitch does support IPv6 endpoint addresses we can do without IPv4
in the core altogether.

Supporting both IPv4 and IPv6 in a datacenter core fabric is an
administrative burden, and it would be a great help for operators to be
able to run with an IPv6-only core network.

The purpose of this bug report is to start the work to enable Neutron to
make use of OpenvSwitch tunnels with IPv6 addresses as endpoints.

It will take some time before
 a) the code is merged and mature within OpenvSwitch
 b) the distributions catch up

But hey, let's start now and be ready when it lands!


Some of the issues at hand for Neutron to support this is:
- Allow IPv6 address in configuration for local_ip

- Sanity check and assertion when IPv6 address is configured and
installed OVS version does not support IPv6 endpoints for selected
tunnel protocols

- Possibly have separate local_ip settings per tunnel type? (if for
example VXLAN gets support before GRE)

- Or separate local_ipv6 setting and use IPv6 when supported/available?

- Interface names are typically limited to 16 characters on Linux - how to
generate interface names based on 128-bit destination IPv6 addresses in 16
characters without collisions within a single installation? (a small hashing
sketch follows after this list)
 Suggestions:
 - vxlan-<8 byte hex representation of CRC32 checksum>
  - Pro: Compatible with current port/interface naming regime
  - Con: High probability for collisions (but may be good enough?)
 - vxlan-<10 byte hash>
  - Con: Non-standard hash-length
  - Con: Still high probability for collisions
 - <16 byte representation of IPv6 address>
  - Pro: less probability for collisions
  - Con: Still probability for collissions (we loose information when cramming 
128 bits into 16 bytes ascii representation)
  - Con: Incompatible with current port/interface naming regime
  - Con: Information about what tunnel type is lost from port/interface name

- Implement sanity checks for local_ip and remote_ip being same IP
version

- Migration strategies:
 - Is it possible/feasible to support a installation where some nodes have IPv4 
local_ip and some nodes have IPv6 local_ip?
 - Do operators have to do a clean cut?

- Discussion/comments very welcome!
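
A small hashing sketch for the interface-name question above; the prefix,
digest length and use of SHA-1 are illustrative choices only, not an agreed
proposal:

    import hashlib
    import ipaddress

    def tunnel_port_name(tunnel_type, remote_ip, digest_chars=8):
        """Deterministic per-endpoint name such as 'vxlan-1a2b3c4d'.

        Hashing the packed address works the same way for IPv4 and IPv6;
        a longer digest lowers the collision probability at the cost of a
        longer name.
        """
        packed = ipaddress.ip_address(remote_ip).packed
        digest = hashlib.sha1(packed).hexdigest()[:digest_chars]
        return '%s-%s' % (tunnel_type, digest)

    print(tunnel_port_name('vxlan', '2001:db8::1'))
    print(tunnel_port_name('vxlan', '192.0.2.1'))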

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525895

Title:
  ML2 OpenvSwitch Agent GRE/VXLAN tunnel code does not support IPv6
  addresses as tunnel endpoints

Status in neutron:
  New

Bug description:
  OpenvSwitch currently does not support IPv6 addresses as endpoints for
  GRE or VXLAN tunnels. Consequently, the Neutron OpenvSwitch driver and
  agent are written with support for IPv4-only endpoint addresses in
  mind. However, there is ongoing work in OpenvSwitch to allow VXLAN
  tunnels with IPv6 addresses as endpoints. Running a datacenter with
  support for both IPv4 and IPv6 in the core is currently a necessity,
  but as soon as OpenvSwitch does support IPv6 endpoint addresses we can
  do without IPv4 in the core altogether.

  Supporting both IPv4 and IPv6 in a datacenter core fabric is an
  administrative burden, and it would be a great help for operators to be
  able to run with an IPv6-only core network.

  The purpose of this bug report is to start the work to enable Neutron
  to make use of OpenvSwitch tunnels with IPv6 addresses as endpoints.

  It will take some time before
   a) the code is merged and mature within OpenvSwitch
   b) the distributions catch up

  But hey, let's start now and be ready when it lands!

  
  Some of the issues at hand for Neutron to support this is:
  - Allow IPv6 address in configuration for local_ip

  - Sanity check and assertion when IPv6 address is configured and
  installed OVS version does not support IPv6 endpoints for selected
  tunnel protocols

  - Possibly have separate local_ip settings per tunnel type? (if for
  example VXLAN gets support before GRE)

  - Or separate local_ipv6 setting and use IPv6 when
  supported/available?

  - Interface names are typically limited to 16 characters on Linux - How to 
generate interface names based on 128 bit destination IPv6 addresses in 16 
characters without collisions within a single installation?
   Suggestions:
   - vxlan-<8 byte hex representation of CRC32 checksum>
- Pro: Compatible with current port/interface naming regime
- Con: High probability for collisions (but may be good enough?)
   - vxlan-<10 byte hash>
- Con: Non

[Yahoo-eng-team] [Bug 1301359] Re: While creating database instance size paramater should not be mandotory

2015-12-14 Thread Matthias Runge
** Changed in: horizon
 Assignee: Harshada Mangesh Kakad (harshada-kakad) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
   Importance: Medium => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1301359

Title:
  While creating database instance size paramater should not be
  mandotory

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  While creating a database instance, the size parameter depends on whether
  trove_volume_support is set or not.

  If it is set to true, size is required; if it is set to false, size is not
  required.

  So in the dashboard it should not be a mandatory parameter; it should
  be an optional parameter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1301359/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521464] Re: create an instance with ephemeral disk or swap disk error

2015-12-14 Thread Alexis Lee
Marking this invalid because it sounds like a feature to me. You may
want to create a backlog spec, see http://specs.openstack.org/openstack
/nova-specs/specs/backlog/index.html

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1521464

Title:
  create an instance with ephemeral disk or swap disk error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I boot an instance with an ephemeral disk (or swap disk) in kilo, but it
returns an HTTP request error:
  eg:
  nova boot --flavor 2 --image 5905bd7e-a87f-4856-8401-b8eb7211c84d --nic 
net-id=12ace164-d996-4261-9228-23ca0680f7a8 --ephemeral size=5,format=ext3 
test_vm1

  ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for
  the instance and image/block device mapping combination is not valid.
  (HTTP 400) (Request-ID: req-b571662f-e554-49a7-979f-763f34b4b162)

  I think an instance with an ephemeral disk should be able to boot from
  the image instead of failing with an invalid boot sequence.

  the log in:
  http://paste.openstack.org/show/480452/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1521464/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523727] Re: Cinder cannot create volume from RAW glance image

2015-12-14 Thread Erno Kuvaja
Moving this to Cinder for evaluation. Glance just stores the bits; I
don't expect it has anything to do with Cinder being unable to create a
volume from them, especially if a different format works.

** Project changed: glance => cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1523727

Title:
  Cinder cannot create volume from RAW glance image

Status in Cinder:
  New

Bug description:
  Make a RAW image from some files:

  virt-make-fs --type=ext4 --format=raw --size=+500M /tmp/tmpb3abEh 
/tmp/binary-9bHVxp.raw
  glance -v -d  image-create --name my-binary-img-test --disk-format raw 
--container-format bare --file /tmp/binary-9bHVxp.raw
  cinder  create --display-name my-test4 --image-id  7

  This used to work fine on Kilo: 2015.1.0. On Kilo 2015.1.1 the volume
  gets stuck in "Downloading"

  cinder list --all-tenants
  +--------------------------------------+----------------------------------+-------------+--------------------+------+-------------+----------+-------------+
  | ID                                   | Tenant ID                        | Status      | Display Name       | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+----------------------------------+-------------+--------------------+------+-------------+----------+-------------+
  | 3b3828ed-6e70-4e6e-bdc1-db7a1ca591a9 | 9514f3eeb98643609f3fd16a59a78e1f | downloading | my-binary-img-test | 7    | None        | false    |             |
  +--------------------------------------+----------------------------------+-------------+--------------------+------+-------------+----------+-------------+

  
  This is affecting several of our testbeds. A workaround is to use a QCOW2
image instead of RAW. This is not a real solution, as some setups have
Ceph-backed storage, which does not play well with the QCOW2 format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1523727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525439] Re: Glance V2 API is not backwards compatible and breaks Cinder solidfire driver

2015-12-14 Thread Erno Kuvaja
Thanks for documenting the issue Ed.

As John pointed out, this refers back to a stable release and is solved in
Liberty Cinder, so I am marking the Glance bug as Won't Fix just so we don't
keep this hanging open.

** Changed in: glance
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1525439

Title:
  Glance V2 API is not backwards compatible and breaks Cinder solidfire
  driver

Status in Cinder:
  Triaged
Status in Glance:
  Won't Fix
Status in puppet-cinder:
  New

Bug description:
  In stable/kilo

  The Glance API V2 change of the image-metadata is_public flag to
  visibility = public breaks the SolidFire (and maybe other, e.g. NetApp?)
  drivers that depend on the is_public flag. Specifically, this breaks the
  ability to efficiently handle images by caching them in the SolidFire
  cluster.

  Changing the API back to V1 through the cinder.conf file then breaks
  Ceph, which depends on V2 and the image-metadata direct_url and
  locations fields to determine if it can clone an image to a volume. So
  this breaks Ceph's ability to efficiently handle images.

  This version mismatch does not allow for SolidFire and Ceph to both be
  used efficiently in the same OpenStack cloud.
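
  A hedged compatibility sketch of the kind of check a driver could use so it
  works against either API version (illustrative only, not the SolidFire or
  Ceph code):

      def image_is_public(image_meta):
          # Glance v2 exposes 'visibility'; v1 exposes the 'is_public' flag.
          if 'visibility' in image_meta:
              return image_meta['visibility'] == 'public'
          return bool(image_meta.get('is_public', False))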

  NOTE: openstack/puppet-cinder defaults to glance-api-version = 2, which
  allows Ceph efficiency to work but not SolidFire (and others).

  Mainly opening this bug to document the problem; since no changes are
  allowed to Kilo, there is probably no way to fix this.

  Code locations:

  cinder/cinder/image/glance.py line 250-256
  cinder/cinder/volume/drivers/rbd.py line 827
  cinder/cinder/volume/drivers/solidfire.py line 647
  puppet-cinder/manifests/glance.pp line 59

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1525439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394941] Re: Update port's fixed-ip breaks the existed floating-ip

2015-12-14 Thread YAMAMOTO Takashi
I confirmed that the following entries are not updated when I changed
the port's IP address from 10.0.0.4 to some other address.
I think that is what the submitter meant.

-A neutron-l3-agent-OUTPUT -d 172.24.4.4/32 -j DNAT --to-destination 10.0.0.4
-A neutron-l3-agent-PREROUTING -d 172.24.4.4/32 -j DNAT --to-destination 
10.0.0.4
-A neutron-l3-agent-float-snat -s 10.0.0.4/32 -j SNAT --to-source 172.24.4.4


** Changed in: neutron
   Status: Expired => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394941

Title:
  Update port's fixed-ip breaks the existed floating-ip

Status in neutron:
  Incomplete

Bug description:
  Use port-update to change the fixed-ip of a given port, but it fails with
  the existing floating-ip setup.

  Although the floating-ip is managed by the l3-agent, neutron should deal
  with it internally because the change will break the network connection.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525856] [NEW] func test test_reprocess_port_when_ovs_restarts fails from time to time

2015-12-14 Thread Rossella Sblendido
Public bug reported:

The functional test test_reprocess_port_when_ovs_restarts
fails sporadically with

Traceback (most recent call last):
  File "neutron/agent/linux/polling.py", line 56, in stop
self._monitor.stop()
  File "neutron/agent/linux/async_process.py", line 131, in stop
raise AsyncProcessException(_('Process is not running.'))
neutron.agent.linux.async_process.AsyncProcessException: Process is not running.

see for example http://logs.openstack.org/98/202098/30/check/gate-
neutron-dsvm-functional/8b49362/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests

** Tags added: functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525856

Title:
  func test test_reprocess_port_when_ovs_restarts fails from time to
  time

Status in neutron:
  New

Bug description:
  The functional test test_reprocess_port_when_ovs_restarts
  fails sporadically with

  Traceback (most recent call last):
File "neutron/agent/linux/polling.py", line 56, in stop
  self._monitor.stop()
File "neutron/agent/linux/async_process.py", line 131, in stop
  raise AsyncProcessException(_('Process is not running.'))
  neutron.agent.linux.async_process.AsyncProcessException: Process is not 
running.

  see for example http://logs.openstack.org/98/202098/30/check/gate-
  neutron-dsvm-functional/8b49362/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484745] Re: DB migration juno-> kilo fails: Can't create table nsxv_internal_networks

2015-12-14 Thread Nguyen Truong Son
** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484745

Title:
  DB migration juno-> kilo fails: Can't create table
  nsxv_internal_networks

Status in neutron:
  New

Bug description:
  I get the following error when upgrading my Juno DB to Kilo:

  neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
238, in main
  CONF.command.func(config, CONF.command.name)
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
106, in do_upgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
72, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 165, in 
upgrade
  script.run_env()
File "/usr/lib/python2.7/dist-packages/alembic/script.py", line 382, in 
run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/dist-packages/alembic/util.py", line 241, in 
load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/dist-packages/alembic/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 109, in 
  run_migrations_online()
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 100, in run_migrations_online
  context.run_migrations()
File "", line 7, in run_migrations
File "/usr/lib/python2.7/dist-packages/alembic/environment.py", line 742, 
in run_migrations
  self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/dist-packages/alembic/migration.py", line 305, in 
run_migrations
  step.migration_fn(**kw)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/4dbe243cd84d_nsxv.py",
 line 65, in upgrade
  sa.PrimaryKeyConstraint('network_purpose'))
File "", line 7, in create_table
File "/usr/lib/python2.7/dist-packages/alembic/operations.py", line 936, in 
create_table
  self.impl.create_table(table)
File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 182, in 
create_table
  self._exec(schema.CreateTable(table))
File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 106, in 
_exec
  return conn.execute(construct, *multiparams, **params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
729, in execute
  return meth(self, multiparams, params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 69, in 
_execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
783, in _execute_ddl
  compiled
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
958, in _execute_context
  context)
File 
"/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/compat/handle_error.py", 
line 261, in _handle_dbapi_exception
  e, statement, parameters, cursor, context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1155, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
951, in _execute_context
  context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
436, in do_execute
  cursor.execute(statement, parameters)
File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in 
execute
  self.errorhandler(self, exc, value)
File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
  raise errorclass, errorvalue
  sqlalchemy.exc.OperationalError: (OperationalError) (1005, "Can't create 
table 'neutron_rctest.nsxv_internal_networks' (errno: 150)") "\nCREATE TABLE 
nsxv_internal_networks (\n\tnetwork_purpose ENUM('inter_edge_net') NOT NULL, 
\n\tnetwork_id VARCHAR(

[Yahoo-eng-team] [Bug 1280522] Fix proposed to ironic (master)

2015-12-14 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/257256

** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  In Progress
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.
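
  A minimal illustration of the change described above; the test case and
  the dictionary lookup are made up for the example:

  import unittest


  class ExampleTest(unittest.TestCase):
      def test_missing_key_returns_none(self):
          result = {}.get('missing')
          # Before: self.assertEqual(None, result) fails with the generic
          # message "None != <value>".
          # After: assertIsNone states the intent and yields a clearer
          # failure message.
          self.assertIsNone(result)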

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2015-12-14 Thread Shuquan Huang
** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
 Assignee: (unassigned) => Shuquan Huang (shuquan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  In Progress
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373153] Re: Unit tests for ML2 drivers use incorrect base class

2015-12-14 Thread Cedric Brandily
https://review.openstack.org/125894 together with the plugin split fixes the bug.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373153

Title:
  Unit tests for ML2 drivers  use incorrect base class

Status in neutron:
  Fix Released

Bug description:
  In the unit tests for several ML2 mechanism drivers (listed below), the
  NeutronDbPluginV2TestCase class is used instead of Ml2PluginV2TestCase,
  which is derived from it:

  Unit tests for ML2 mechanism drivers in neutron/tests/unit/ml2:
 drivers/cisco/nexus/test_cisco_mech.py
 drivers/brocade/test_brocade_mechanism_driver.py
 test_mechanism_fslsdn.py 
 test_mechanism_ncs.py
 test_mechanism_odl.py

  In other cases, such as the tests in drivers/test_bigswitch_mech.py, some
  unit tests from the corresponding monolithic driver are reused and
  therefore the Ml2PluginV2TestCase class is not used.

  This prevents the specialization needed for testing ML2-specific
  implementations.
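
  A minimal sketch of the intended base-class change; the module path follows
  the neutron/tests/unit/ml2 tree layout listed above, and the mechanism
  driver names are illustrative:

  from neutron.tests.unit.ml2 import test_ml2_plugin


  class ExampleMechanismDriverTestCase(test_ml2_plugin.Ml2PluginV2TestCase):
      # Ml2PluginV2TestCase sets up the ML2 core plugin, so ML2-specific
      # behaviour (port binding, mechanism driver calls) can be exercised,
      # which NeutronDbPluginV2TestCase alone does not provide.
      _mechanism_drivers = ['logger', 'example']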

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2015-12-14 Thread Shuquan Huang
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Sahara:
  Fix Released
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525824] [NEW] RFE: enable or disable port's promiscuous mode

2015-12-14 Thread shihanzhang
Public bug reported:

Currently the VM's VNIC backing a Neutron port is in promiscuous mode, which
can hurt application performance when there is a lot of traffic. Some
hypervisors, such as Huawei FusionSphere or VMware, can set the VNIC's
promiscuous mode, so this proposal adds a QoS rule to control a port's
promiscuous mode.
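
Purely hypothetical sketch of what such a rule could look like on the QoS
API, modelled on the existing bandwidth-limit rule; no rule type or endpoint
of this kind exists in Neutron today:

proposed_rule_body = {
    'promiscuous_mode_rule': {
        # False would turn promiscuous mode off for ports bound to the policy.
        'enabled': False,
    },
}
# The rule would hang off a QoS policy, e.g. (hypothetical endpoint):
#   POST /v2.0/qos/policies/{policy_id}/promiscuous_mode_rules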

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

** Tags added: rfe

** Summary changed:

- enable or disable port's promiscuous mode
+ RFE: enable or disable port's promiscuous mode

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525824

Title:
  RFE: enable or disable port's promiscuous mode

Status in neutron:
  New

Bug description:
  Currently the VM's VNIC backing a Neutron port is in promiscuous mode,
  which can hurt application performance when there is a lot of traffic.
  Some hypervisors, such as Huawei FusionSphere or VMware, can set the
  VNIC's promiscuous mode, so this proposal adds a QoS rule to control a
  port's promiscuous mode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525819] [NEW] nova image-list returns 500 error

2015-12-14 Thread recital
Public bug reported:

I am installing the Liberty release on CentOS 7 with one controller and one
compute node. nova service-list and nova endpoints work fine, but nova
image-list returns the following error.

ERROR(ClientException) : Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: ...)

glance image-list works fine.
Please help me out.

The nova-api.log shows the following:

2015-12-14 16:29:56.071 9986 INFO nova.osapi_compute.wsgi.server 
[req-496b5df0-e28f-486b-8386-96f704433daf 504f2af1b9044a3e97bc69e80daf5809 
8de4c51458804564b79932efa364524b - - -] 192.168.219.11 "GET /v2/ HTTP/1.1" 
status: 200 len: 572 time: 0.0285130
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions 
[req-e65f8242-8d2c-43f1-af66-362f3161b655 504f2af1b9044a3e97bc69e80daf5809 
8de4c51458804564b79932efa364524b - - -] Unexpected exception in API method
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/images.py", line 
145, in detail
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions 
**page_params)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/api.py", line 68, in get_all
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 284, in detail
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions for image 
in images:
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 254, in list
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions for image 
in paginate(params, return_request_id):
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 238, in 
paginate
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 63, in _list
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions resp, body 
= self.client.get(url)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280, in get
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272, in 
_request
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in 
_handle_response
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: 500 Internal Server Error: The server has either erred 
or is incapable of performing the requested operation. (HTTP 500)
2015-12-14 16:30:01.676 9986 ERROR nova.api.openstack.extensions 
2015-12-14 16:30:01.685 9986 INFO nova.api.openstack.wsgi 
[req-e65f8242-8d2c-43f1-af66-362f3161b655 504f2af1b9044a3e97bc69e80daf5809 
8de4c51458804564b79932efa364524b - - -] HTTP exception thrown: Unexpected API 
Error. Please report this at http://bugs.launchpad.net/nova/ and attach the 
Nova API log if possible.

2015-12-14 16:30:01.687 9986 INFO nova.osapi_compute.wsgi.server 
[req-e65f8242-8d2c-43f1-af66-362f3161b655 504f2af1b9044a3e97bc69e80daf5809 
8de4c51458804564b79932efa364524b - - -] 192.168.219.11 "GET 
/v2/8de4c51458804564b79932efa364524b/images/detail HTTP/1.1" status: 500 len: 
445 time: 5.4028652
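
The traceback shows the 500 being returned to nova by the glance v1 client
call, even though running glance image-list directly works. A minimal sketch
that issues the same v1 listing directly, to help narrow down whether glance
itself or the nova-to-glance configuration is at fault; the endpoint URL and
token are placeholders:

from glanceclient.v1 import client

glance = client.Client('http://controller:9292', token='ADMIN_TOKEN')
for image in glance.images.list():
    print('%s %s' % (image.id, image.name))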

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525819

Title:
  nova image-list returns 500 error

Status in OpenStack Compute (nova):
  New

Bug description:
  I am installing the Liberty release on CentOS 7 with one controller and one
  comp

[Yahoo-eng-team] [Bug 1525796] Re: After OVS-DPDK package installation host responds very slowly

2015-12-14 Thread Eran Kuris
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525796

Title:
  After OVS-DPDK package installation host responds very slowly

Status in neutron:
  Invalid

Bug description:
  Description of problem:
  Install OVS-DPDK packages 
  dpdk-tools-2.1.0-5.el7.x86_64
  openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
  dpdk-2.1.0-5.el7.x86_64

  After that, every command or action takes too long; the host responds
  very slowly.

  Version-Release number of selected component (if applicable):
  dpdk-tools-2.1.0-5.el7.x86_64
  openvswitch-dpdk-2.4.0-0.10346.git97bab959.2.el7.x86_64
  dpdk-2.1.0-5.el7.x86_64
  [root@puma06 ~]# rpm -qa |grep neutron
  python-neutronclient-3.1.0-1.el7ost.noarch
  python-neutron-7.0.0-5.el7ost.noarch
  openstack-neutron-common-7.0.0-5.el7ost.noarch
  openstack-neutron-7.0.0-5.el7ost.noarch
  openstack-neutron-ml2-7.0.0-5.el7ost.noarch
  openstack-neutron-openvswitch-7.0.0-5.el7ost.noarch

  
  How reproducible:
  always 
  Steps to Reproduce:
  1. Install an OSP-8 environment with Packstack (all-in-one).
  2. Install the OVS-DPDK packages.
  3. The host now responds very slowly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525796/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525809] [NEW] networking-midonet 2015.1.2 stable/kilo release request

2015-12-14 Thread YAMAMOTO Takashi
Public bug reported:

please push 2015.1.2 tag on the tip of stable/kilo.
namely, commit 69c7c61e1cf9934bfdaac6774e2f02a2aa5d5472  .

** Affects: networking-midonet
 Importance: Medium
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Tags added: release-subproject

** Changed in: networking-midonet
   Importance: Undecided => Medium

** Changed in: networking-midonet
Milestone: None => 2015.1.2

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Description changed:

  please push 2015.1.2 tag on the tip of stable/kilo.
- namely, commit e867117f26d861260d2f2a3c7e2fa35362ae171a  .
+ namely, commit 69c7c61e1cf9934bfdaac6774e2f02a2aa5d5472  .

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525809

Title:
   networking-midonet 2015.1.2 stable/kilo release request

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  please push 2015.1.2 tag on the tip of stable/kilo.
  namely, commit 69c7c61e1cf9934bfdaac6774e2f02a2aa5d5472  .

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1525809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525789] Re: test for report(ignore it)

2015-12-14 Thread liuhui
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525789

Title:
  test for report(ignore it)

Status in neutron:
  Invalid

Bug description:
  just test for submitting to gerrit

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp