[Yahoo-eng-team] [Bug 1573388] [NEW] AvailabilityZonePluginBase isn't decorated with @six.add_metaclass

2016-04-21 Thread John Perkins
Public bug reported:

AvailabilityZonePluginBase isn't decorated with @six.add_metaclass,
allowing for child classes that do not properly handle
get_availability_zones and validate_availability_zones.
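For context, a minimal sketch of what the missing decorator enforces (the method names match the report; the signatures are illustrative). `@six.add_metaclass(abc.ABCMeta)` is the Python-2-compatible spelling of the Python 3 `metaclass=` syntax used here:

```python
import abc

# Python 3 spelling; on the affected code the fix is to apply
# @six.add_metaclass(abc.ABCMeta) to the base class.
class AvailabilityZonePluginBase(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def get_availability_zones(self, context, filters=None):
        """Return the list of availability zones."""

    @abc.abstractmethod
    def validate_availability_zones(self, context, resource_type, az_list):
        """Verify that the requested availability zones exist."""

# Without ABCMeta, this subclass could be instantiated and would only fail
# later with AttributeError; with it, Python refuses up front:
class IncompletePlugin(AvailabilityZonePluginBase):
    pass

try:
    IncompletePlugin()
except TypeError as exc:
    print("rejected: %s" % exc)
```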

** Affects: neutron
 Importance: Undecided
 Assignee: John Perkins (john-d-perkins)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573388

Title:
  AvailabilityZonePluginBase isn't decorated with @six.add_metaclass

Status in neutron:
  In Progress

Bug description:
  AvailabilityZonePluginBase isn't decorated with @six.add_metaclass,
  allowing for child classes that do not properly handle
  get_availability_zones and validate_availability_zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459179] Re: User heat has no access to domain default when using Keystone v3 with multi-domain-driver

2016-04-21 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1459179

Title:
  User heat has no access to domain default when using Keystone v3 with
  multi-domain-driver

Status in OpenStack Identity (keystone):
  Expired

Bug description:
  When using Keystone v3 with the multi-domain driver in Juno on CentOS, I
  can't deploy a heat stack, because the heat user has no access to the
  default domain, which runs on SQL

  default -> SQL -> service user and heat
  dom -> LDAP -> AD user
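That layout maps onto keystone's domain-specific driver mechanism roughly as follows (file paths, the driver class path, and the AD URL are illustrative, not taken from the report):

```ini
# /etc/keystone/keystone.conf -- enable per-domain identity backends
[identity]
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.dom.conf -- LDAP only for the "dom" domain;
# the "default" domain keeps the SQL driver from the main config.
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ad.example.com
```

With this split, service users such as heat live in the SQL-backed default domain, and the bug concerns that user failing to authenticate against it.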

   /var/log/heat/heat.log 
  2015-05-27 11:38:42.502 13632 DEBUG heat.engine.stack_lock [-] Engine 
651cdcf1-49cb-4ca4-9436-35ff538666ed acquired lock on stack 
22a20e5a-901b-436c-9c8c-e603bc79015b acquire 
/usr/lib/python2.7/site-packages/heat/engine/stack_lock.py:72
  2015-05-27 11:38:42.503 13632 DEBUG keystoneclient.auth.identity.v3 [-] 
Making authentication request to http://172.16.89.1:5000/v3/auth/tokens 
get_auth_ref 
/usr/lib/python2.7/site-packages/keystoneclient/auth/identity/v3.py:117
  2015-05-27 11:38:42.504 13632 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 172.16.89.1
  2015-05-27 11:38:42.579 13632 DEBUG urllib3.connectionpool [-] "POST 
/v3/auth/tokens HTTP/1.1" 401 181 _make_request 
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py:357
  2015-05-27 11:38:42.580 13632 DEBUG keystoneclient.session [-] Request 
returned failure status: 401 request 
/usr/lib/python2.7/site-packages/keystoneclient/session.py:345
  2015-05-27 11:38:42.580 13632 DEBUG keystoneclient.v3.client [-] 
Authorization failed. get_raw_token_from_identity_service 
/usr/lib/python2.7/site-packages/keystoneclient/v3/client.py:267

   /var/log/keystone/keystone.log 
  2015-05-27 11:38:42.265 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
  2015-05-27 11:38:42.265 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
  2015-05-27 11:38:42.265 8847 DEBUG keystone.middleware.core [-] RBAC: 
auth_context: {'is_delegated_auth': False, 'access_token_id': None, 'user_id': 
u'86396c4533a044a1ab106ccaeb7e883d', 'roles': [u'heat_stack_owner', u'admin'], 
'trustee_$
  2015-05-27 11:38:42.266 8847 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.7/site-packages/keystone/common/wsgi.py:191
  2015-05-27 11:38:42.267 8847 DEBUG keystone.common.controller [-] RBAC: 
Authorizing identity:validate_token() _build_policy_check_credentials 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:55
  2015-05-27 11:38:42.267 8847 DEBUG keystone.common.controller [-] RBAC: using 
auth context from the request environment _build_policy_check_credentials 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:60
  2015-05-27 11:38:42.270 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
  2015-05-27 11:38:42.270 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
  2015-05-27 11:38:42.270 8847 DEBUG keystone.policy.backends.rules [-] enforce 
identity:validate_token: {'is_delegated_auth': False, 'access_token_id': None, 
'user_id': u'86396c4533a044a1ab106ccaeb7e883d', 'roles': [u'heat_stack_owner', 
u$
  2015-05-27 11:38:42.270 8847 DEBUG keystone.common.controller [-] RBAC: 
Authorization granted inner 
/usr/lib/python2.7/site-packages/keystone/common/controller.py:155
  2015-05-27 11:38:42.273 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
acquired for: os-revoke-events acquire 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:380
  2015-05-27 11:38:42.273 8847 DEBUG keystone.common.kvs.core [-] KVS lock 
released for: os-revoke-events release 
/usr/lib/python2.7/site-packages/keystone/common/kvs/core.py:399
  2015-05-27 11:38:42.274 8847 INFO eventlet.wsgi.server [-] 172.16.89.1 - - 
[27/May/2015 11:38:42] "GET /v3/auth/tokens HTTP/1.1" 200 7887 0.012976
  2015-05-27 11:38:42.343 8849 DEBUG keystone.middleware.core [-] Auth token 
not in the request header. Will not build auth context. process_request 
/usr/lib/python2.7/site-packages/keystone/middleware/core.py:270
  2015-05-27 11:38:42.345 8849 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /usr/lib/python2.7/site-packages/keystone/common/wsgi.py:191
  2015-05-27 11:38:42.441 8849 INFO eventlet.wsgi.server [-] 172.16.89.1 - - 
[27/May/2015 11:38:42] "POST /v3/auth/tokens HTTP/1.1" 201 7902 0.097828
  2015-05-27 11:38:42.450 8852 DEBUG 

[Yahoo-eng-team] [Bug 1572264] Re: xenapi xmlrpclib marshalling error

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307984
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=fd9bedb7a907dece1b4114be782e6baa89182f42
Submitter: Jenkins
Branch: master

commit fd9bedb7a907dece1b4114be782e6baa89182f42
Author: Brian Elliott 
Date:   Tue Apr 19 18:33:03 2016 +

xenapi: Fix xmlrpclib marshalling error

Fix an error that occurs when trying to marshal an
oslo_i18n._message.Message object via xmlrpclib.

xmlrpclib is not designed to marshal custom objects.  Since
nova_version is only used for this one purpose, don't internationalize
it.

Closes-Bug: 1572264

Change-Id: I04c59993125834fc50abd0c5b6dc3fd0269b7243


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572264

Title:
  xenapi xmlrpclib marshalling error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seeing an error on the xenapi compute driver where one of the
  arguments passed in is an i18n string and can't be marshalled via
  xmlrpclib when talking to xenapi:

  http://paste.openstack.org/show/494702/

  My understanding is custom objects are not supported per:

  "ServerProxy instance methods take Python basic types and objects as
  arguments and return Python basic types and classes."

  https://docs.python.org/2.7/library/xmlrpclib.html#module-xmlrpclib
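The failure is easy to reproduce with a plain `str` subclass standing in for `oslo_i18n._message.Message` (Python 3 module name shown; on Python 2 it is `xmlrpclib`):

```python
import xmlrpc.client  # `xmlrpclib` on Python 2

class Message(str):
    """Stand-in for oslo_i18n._message.Message, which subclasses str."""

wrapped = Message("13.0.0")

# The marshaller dispatches on the exact type, so a str subclass is refused:
try:
    xmlrpc.client.dumps((wrapped,))
except TypeError as exc:
    print("marshalling failed:", exc)

# The committed fix amounts to sending a plain, non-internationalized string:
payload = xmlrpc.client.dumps((str(wrapped),))
```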

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572264/+subscriptions



[Yahoo-eng-team] [Bug 1573344] [NEW] Cannot add API extension without tenant_id in the resource

2016-04-21 Thread Brandon Logan
Public bug reported:

Sometimes an API extension that defines a new resource without a
tenant_id is wanted.  Currently, the validation code assumes tenant_id
is supposed to be on every resource, so even if the extension does not
define tenant_id as an attribute in the resource's body, the validation
layer will throw an error saying it was passed in.
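A toy model of the mismatch (all names here are illustrative, not neutron's actual validation code): the framework injects `tenant_id` into the request body before checking the body against the extension's attribute map, so a resource that deliberately omits `tenant_id` can never validate.

```python
# Attribute map for a hypothetical extension resource with no tenant_id:
RESOURCE_ATTRIBUTES = {'name': {'allow_post': True}}

def validate_body(body, attributes):
    """Reject any attribute not declared by the extension."""
    unknown = set(body) - set(attributes)
    if unknown:
        raise ValueError("Unrecognized attribute(s) %s" % sorted(unknown))

request_body = {'name': 'widget-1'}
# The validation layer adds tenant_id unconditionally before validating:
request_body['tenant_id'] = 'abc123'

try:
    validate_body(request_body, RESOURCE_ATTRIBUTES)
except ValueError as exc:
    print(exc)  # fails even though the caller never supplied tenant_id
```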

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573344

Title:
  Cannot add API extension without tenant_id in the resource

Status in neutron:
  New

Bug description:
  Sometimes an API extension that defines a new resource without a
  tenant_id is wanted.  Currently, the validation code assumes tenant_id
  is supposed to be on every resource, so even if the extension does not
  define tenant_id as an attribute in the resource's body, the
  validation layer will throw an error saying it was passed in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573344/+subscriptions



[Yahoo-eng-team] [Bug 1573337] [NEW] Neutron-LBaaS v2 in Mitaka: loadbalancer_dbv2.get_loadbalancers() fails with NoReferencedTableError

2016-04-21 Thread Jiahao liang
Public bug reported:

I was calling get_loadbalancers() function from "/opt/stack/neutron-
lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py" in my driver
to get context of loadbalancers.

My environment is devstack on an Ubuntu 14.04 machine with all
stable/mitaka repos. This problem didn't occur when I worked with
Liberty.

The error messages are listed below:

2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/db/loadbalancer/loadbalancer_dbv2.py", 
line 267, in get_loadbalancers
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher for lb_db 
in lb_dbs]
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/data_models.py", 
line 96, in from_sqlalchemy_model
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
calling_classes=calling_classes + [cls]))
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/data_models.py", 
line 96, in from_sqlalchemy_model
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
calling_classes=calling_classes + [cls]))
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/data_models.py", 
line 84, in from_sqlalchemy_model
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher attr = 
getattr(sa_model, attr_name)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
237, in
__get__
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher return 
self.impl.get(instance_state(instance), dict_)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
583, in get
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher value = 
self.callable_(state, passive)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py", line 
532, in _load_for_state
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher return 
self._emit_lazyload(session, state, ident_key, passive)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"<string>", line 1, in <lambda>
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py", line 
602, in _emit_lazyload
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher result = 
q.all()
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2588, in 
all
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher return 
list(self)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2732, in 
__iter__
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher context = 
self._compile_context()
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 3196, in 
_compile_context
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
entity.setup_context(self, context)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 3565, in 
setup_context
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
polymorphic_discriminator=self._polymorphic_discriminator)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 255, 
in _setup_entity_query
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher **kw
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/interfaces.py", line 
505, in setup
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
strat.setup_query(context, entity, path, loader, adapter, **kwargs)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/strategies.py", line 
1156, in setup_query
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher 
chained_from_outerjoin=chained_from_outerjoin)
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 255, 
in _setup_entity_query
2016-04-20 18:06:09.351 16077 ERROR oslo_messaging.rpc.dispatcher **kw
2016-04-20 18:06:09.351 16077 ERROR 

[Yahoo-eng-team] [Bug 1573320] [NEW] Neutron-LBaaS v2: Pool still gets created after ListenerPoolProtocolMismatch

2016-04-21 Thread Franklin Naval
Public bug reported:

1.  Create a Load Balancer.
2.  Create a Listener with TCP 
3.  Create a Pool with HTTP
4.  Observe the following error:  {"NeutronError": {"message": "Listener 
protocol TCP and pool protocol HTTP are not compatible.", "type": 
"ListenerPoolProtocolMismatch", "detail": ""}}
5.  List Pools.

Result:   Pool still gets created after the error.

Expected:  Pool should not get created if the listener and the pool
protocols are not compatible.

Note:  This bug occurs on any mismatch of protocols between listener and
pool:  TCP vs HTTP, TCP vs HTTPS, TCP vs TERMINATED_HTTPS, HTTP vs TCP,
etc.
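A sketch of the suspected ordering bug (illustrative, not the actual neutron-lbaas code, and with a simplified compatibility rule): if the protocol check runs after the pool row is persisted, the client sees the 409 but the row survives.

```python
class ListenerPoolProtocolMismatch(Exception):
    """Stands in for the NeutronError returned as a 409."""

pools_table = []  # stands in for the database

def create_pool_buggy(listener_protocol, pool):
    pools_table.append(pool)                    # pool is persisted first...
    if pool['protocol'] != listener_protocol:   # simplified compatibility rule
        raise ListenerPoolProtocolMismatch()    # ...then the error is raised

try:
    create_pool_buggy('TCP', {'name': 'pool-1', 'protocol': 'HTTP'})
except ListenerPoolProtocolMismatch:
    pass

print(len(pools_table))  # 1 -- the mismatched pool is still there
```

Validating before touching the database (or rolling the insert back in the error path) would make the 409 and the stored state agree.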

Log (note the creation of pool "95300d77-084a-4015-804f-698d960b8050"
after the 409 error):

2016-04-21 22:15:04.370 17603 INFO tempest.lib.common.rest_client 
[req-411d6275-9341-4c53-ac13-3a879ae95ed1 ] Request (TestProtocols:
test_create_listener_and_pool_with_protocol): 409 POST 
http://127.0.0.1:9696/v2.0/lbaas/pools 0.339s
2016-04-21 22:15:04.370 17603 DEBUG tempest.lib.common.rest_client 
[req-411d6275-9341-4c53-ac13-3a879ae95ed1 ] Request - Headers: {'X
-Auth-Token': '', 'Accept': 'application/json', 'Content-Type': 
'application/json'}
Body: {"pool": {"name": "pool-1818041961", "protocol": "HTTP", 
"admin_state_up": true, "description": "pool_description-15184
40568", "session_persistence": null, "listener_id": 
"7e4a7347-4113-4de4-9af9-092fc532bf23", "loadbalancer_id": 
"9c65fe8b-cc24-4e5b-8d
ae-bcfb22c0163f", "lb_algorithm": "ROUND_ROBIN"}}
Response - Headers: {'content-length': '151', 'x-openstack-request-id': 
'req-411d6275-9341-4c53-ac13-3a879ae95ed1', 'date': 'Thu,
 21 Apr 2016 22:15:04 GMT', 'content-type': 'application/json', 'connection': 
'close', 'status': '409'}
Body: {"NeutronError": {"message": "Listener protocol TCP and pool 
protocol HTTP are not compatible.", "type": "ListenerPoolProtocolMismatch", 
"detail": ""}} _log_request_full 
/opt/stack/neutron-lbaas/.tox/apiv2/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:414
2016-04-21 22:15:04.590 17603 INFO tempest.lib.common.rest_client 
[req-97add959-c5f8-4389-8f4a-41708b59cc92 ] Request (TestProtocols:tearDown): 
200 GET 
http://127.0.0.1:9696/v2.0/lbaas/loadbalancers/9c65fe8b-cc24-4e5b-8dae-bcfb22c0163f
 0.188s
2016-04-21 22:15:04.590 17603 DEBUG tempest.lib.common.rest_client 
[req-97add959-c5f8-4389-8f4a-41708b59cc92 ] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
Body: None
Response - Headers: {'content-length': '517', 'x-openstack-request-id': 
'req-97add959-c5f8-4389-8f4a-41708b59cc92', 'date': 'Thu, 21 Apr 2016 22:15:04 
GMT', 'content-location': 
'http://127.0.0.1:9696/v2.0/lbaas/loadbalancers/9c65fe8b-cc24-4e5b-8dae-bcfb22c0163f',
 'content-type': 'application/json', 'connection': 'close', 'status': '200'}
Body: {"loadbalancer": {"description": "", "admin_state_up": true, 
"tenant_id": "59c6e9a660b94e97b0f2a4919974c220", "provisioning_status": 
"ACTIVE", "vip_subnet_id": "732198a3-0a70-478a-a4dc-a9cee967f423", "listeners": 
[{"id": "7e4a7347-4113-4de4-9af9-092fc532bf23"}], "vip_address": "10.100.0.2", 
"vip_port_id": "27c8f167-a10d-4d6f-b7fd-12e4f0373e5e", "provider": "octavia", 
"pools": [{"id": "95300d77-084a-4015-804f-698d960b8050"}], "id": 
"9c65fe8b-cc24-4e5b-8dae-bcfb22c0163f", "operating_status": "ONLINE", "name": 
""}} _log_request_full 
/opt/stack/neutron-lbaas/.tox/apiv2/local/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:414

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaasv2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573320

Title:
  Neutron-LBaaS v2:  Pool still gets created after
  ListenerPoolProtocolMismatch

Status in neutron:
  New

Bug description:
  1.  Create a Load Balancer.
  2.  Create a Listener with TCP 
  3.  Create a Pool with HTTP
  4.  Observe the following error:  {"NeutronError": {"message": "Listener 
protocol TCP and pool protocol HTTP are not compatible.", "type": 
"ListenerPoolProtocolMismatch", "detail": ""}}
  5.  List Pools.

  Result:   Pool still gets created after the error.

  Expected:  Pool should not get created if the listener and the pool
  protocols are not compatible.

  Note:  This bug occurs on any mismatch of protocols between listener
  and pool:  TCP vs HTTP, TCP vs HTTPS, TCP vs TERMINATED_HTTPS, HTTP vs
  TCP, etc.

  Log (note the creation of pool "95300d77-084a-4015-804f-698d960b8050"
  after the 409 error):

  2016-04-21 22:15:04.370 17603 INFO tempest.lib.common.rest_client 
[req-411d6275-9341-4c53-ac13-3a879ae95ed1 ] Request (TestProtocols:
  test_create_listener_and_pool_with_protocol): 409 POST 
http://127.0.0.1:9696/v2.0/lbaas/pools 0.339s
  2016-04-21 22:15:04.370 17603 DEBUG tempest.lib.common.rest_client 
[req-411d6275-9341-4c53-ac13-3a879ae95ed1 ] Request - Headers: {'X

[Yahoo-eng-team] [Bug 1558772] Re: Magic-Search shouldn't exist inside of table structure

2016-04-21 Thread Travis Tripp
Once this gets into horizon, the change will affect searchlight, and
searchlight-ui must be updated.

** Also affects: searchlight
   Importance: Undecided
   Status: New

** Changed in: searchlight
   Importance: Undecided => Critical

** Changed in: searchlight
   Status: New => In Progress

** Changed in: horizon
Milestone: next => newton-1

** Changed in: searchlight
Milestone: None => newton-1

** Changed in: searchlight
   Importance: Critical => High

** Changed in: searchlight
 Assignee: (unassigned) => Matt Borland (palecrow)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1558772

Title:
  Magic-Search shouldn't exist inside of table structure

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  Currently, the way the Angular Magic-Search directive works, it
  requires being placed in the context of a smart-table.  This is not
  ideal and causes trouble with formatting.

  A good solution would allow the search bar directive to be placed
  outside of the table structure in the markup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1558772/+subscriptions



[Yahoo-eng-team] [Bug 1573288] [NEW] over time, horizon's admin -> overview page becomes very slow ....

2016-04-21 Thread Kambiz Aghaiepour
Public bug reported:

I've noticed that when logging into the admin account after a bunch of
activity against the RDO installation, it takes a very long time (many
minutes) before horizon loads (I think the issue is the overview admin
page which is also the main landing page for logging in).

The list includes overall activity including deleted projects.  If you
orchestrate lots of testing against the installation using "rally" you
will see lots of projects get created and later deleted.  As such I have
an overview page which lists at the bottom:

"Displaying 2035 items"

Is it possible to do something about the Overview page either by
displaying only the first 20 items, or changing the type of information
being displayed?  Logging into admin is very painful currently.  Non-admin
accounts log in quickly.


Version-Release number of selected component (if applicable):

Liberty

How reproducible:

Always.

Steps to Reproduce:

Run rally against openstack in an endless loop.  After a few days (or
hours depending on what you do and how you do it) you will find horizon
getting slower and slower.

Originally reported against RDO here:
https://bugzilla.redhat.com/show_bug.cgi?id=1329414

though this is likely a general issue.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1573288

Title:
   over time, horizon's admin -> overview page becomes very slow 

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I've noticed that when logging into the admin account after a bunch of
  activity against the RDO installation, it takes a very long time (many
  minutes) before horizon loads (I think the issue is the overview admin
  page which is also the main landing page for logging in).

  The list includes overall activity including deleted projects.  If you
  orchestrate lots of testing against the installation using "rally" you
  will see lots of projects get created and later deleted.  As such I
  have an overview page which lists at the bottom:

  "Displaying 2035 items"

  Is it possible to do something about the Overview page either by
  displaying only the first 20 items, or changing the type of
  information being displayed?  Logging into admin is very painful
  currently.  Non-admin accounts log in quickly.

  
  Version-Release number of selected component (if applicable):

  Liberty

  How reproducible:

  Always.

  Steps to Reproduce:

  Run rally against openstack in an endless loop.  After a few days (or
  hours depending on what you do and how you do it) you will find
  horizon getting slower and slower.

  Originally reported against RDO here:
  https://bugzilla.redhat.com/show_bug.cgi?id=1329414

  though this is likely a general issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1573288/+subscriptions



[Yahoo-eng-team] [Bug 1573197] Re: [RFE] Neutron API enhancement for visibility into multi-segmented networks

2016-04-21 Thread Doug Wiegley
Isn't this a dup of this spec, which is in progress?
https://review.openstack.org/#/c/225384/22/specs/newton/routed-networks.rst

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573197

Title:
  [RFE] Neutron API enhancement for visibility into multi-segmented
  networks

Status in neutron:
  Invalid

Bug description:
  Neutron networks are, by default, assumed to be single segmented L2
  domains, represented by a single segmentation ID (e.g VLAN ID).
  Current neutron API (neutron net-show) works well with this model.
  However, with the introduction of HPB, this assumption is not true
  anymore. Networks are now multi-segmented. A given network could have
  anywhere from 3 to N number of segments depending upon the
  breadth/size of the data center topology. This will be true with the
  implementation of routed networks as well.

  In general, the segments, in multi-segmented networks, will be
  dynamically created.  As mentioned earlier, the number of these
  segments will grow and shrink dynamically representing the breadth of
  data center topology. Therefore, at the very least, admins would like
  to have visibility into these segments - e.g. which segmentation
  type/id is consumed in which segment of the network.

  Vendors and operators are forced to come up with their own hacks to get
  such visibility. This RFE proposes that we enhance the neutron API to
  address this visibility issue in a vendor/implementation-agnostic way -
  either by enhancing "neutron net-show" or by introducing additional
  commands such as "neutron net-segments-list" / "neutron net-segment-show".

  This capability is needed for Neutron-Manila integration as well.
  Manila requires visibility into the segmentation IDs used in specific
  segments of a network. Please see Manila use case here -
  https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-
  support

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573197/+subscriptions



[Yahoo-eng-team] [Bug 1516031] Re: Use of MD5 in OpenStack Glance image signature (CVE-2015-8234)

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/308466
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=5ab63107b69e381f04bfa4aa9143e229ac2a9857
Submitter: Jenkins
Branch: master

commit 5ab63107b69e381f04bfa4aa9143e229ac2a9857
Author: Dane Fichter 
Date:   Tue Apr 19 01:27:02 2016 -0400

Remove deprecated "sign-the-hash" approach

This change removes the "sign-the-hash" signature
verification code in the signature_utils module and
the ImageProxy class. This code was deprecated in
Mitaka and scheduled for removal in Newton.

Change-Id: I8862f6c94538dd818c7360ba287e14c1264ff20f
Closes-Bug: #1516031


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1516031

Title:
  Use of MD5 in OpenStack Glance image signature (CVE-2015-8234)

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  This has been reported by Daniel P. Berrange:
  "
  In the OpenStack Liberty release, the Glance project added support for image 
signature verification.

  http://specs.openstack.org/openstack/glance-specs/specs/liberty/image-
  signing-and-verification-support.html

  The verification code was added in the following git commit

  
https://github.com/openstack/glance/commit/484ef1b40b738c87adb203bba6107ddb4b04ff6e

  
  Unfortunately the design of this signature verification method is flawed by 
design.

  The generalized approach to creating signatures of content is to apply
  a hash to the content and then encrypt it in some manner. Consider
  that the signature is defined to use hash=sha256 and cipher=rsa we can
  describe the signature computation as

  signature = rsa(sha256(content))

  In the case of verifying a disk image, the content we care about
  verifying is the complete disk image file. Unfortunately, the glance
  specification chose *not* to compute the signature against the disk
  image file. Glance already had an MD5 checksum calculated for the disk
  image file, so they instead chose to compute the signature against the
  MD5 checksum instead. ie glance is running

  signature = rsa(sha256(md5(disk-image-content)))

  This degrades the security of the system to that of the weakest hash,
  which is obviously MD5 here.

  The code where glance verifies the signature is in the
  glance/locations.py, the 'set_data' method where is does

   result = signature_utils.verify_signature(
   self.context, checksum, self.image.extra_properties)
   if result:
   LOG.info(_LI("Successfully verified signature for image %s"),
   self.image.image_id)

  The 'checksum' variable is populate by the glance_store driver, but it
  is hardcoded to always be md5 in all current glance storage backends:

   $ git grep hashlib glance_store/_drivers/ | grep checksum
   glance_store/_drivers/filesystem.py: checksum = hashlib.md5()
   glance_store/_drivers/rbd.py: checksum = hashlib.md5()
   glance_store/_drivers/s3.py: checksum = hashlib.md5()
   glance_store/_drivers/s3.py: checksum = hashlib.md5()
   glance_store/_drivers/sheepdog.py: checksum = hashlib.md5()
   glance_store/_drivers/swift/store.py: checksum =
   hashlib.md5()
   glance_store/_drivers/vmware_datastore.py: self.checksum =
   hashlib.md5()

  
  Since we will soon be shipping OpenStack Liberty release, we need to at least 
give a security notice to alert our customers to the fact that the signature 
verification is cryptographically weak/broken. IMHO, it quite likely deserves a 
CVE though

  NB, this is public knowledge as I first became aware of this flawed
  design in comments / discussion on a public specification proposed to
  implement the same approach in the Nova project.

  My suggested way to fix this is to simply abandon the current impl and
  re-do it such that it directly computes the signature against  the
  disk image, and does not use the existing md5 checksum in any way.

  Regards,
  Daniel
  "

  Mailing list thread for Nova impl: 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079348.html
  Nova Spec: https://review.openstack.org/#/c/188874/
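The weakness quoted above is mechanical to sketch (hashlib only; the RSA step is elided since it does not affect the hash-chain argument):

```python
import hashlib

def flawed_signed_value(image_bytes):
    """What Liberty-era Glance effectively signs: sha256 of the md5 checksum."""
    md5_checksum = hashlib.md5(image_bytes).hexdigest()
    return hashlib.sha256(md5_checksum.encode()).hexdigest()

def fixed_signed_value(image_bytes):
    """What the fix signs: a digest computed over the image bytes themselves."""
    return hashlib.sha256(image_bytes).hexdigest()

# Any two images with colliding MD5 checksums -- practical to construct
# since 2004 -- carry identical signatures under the flawed scheme, so the
# whole construction is only as strong as MD5.
```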

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1516031/+subscriptions



[Yahoo-eng-team] [Bug 1570596] Re: Neutron-LBaaS v2: 500 error on creating 2 listeners simultaneously

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306591
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=63f4bac2247e45ec66ba8fde04692abacbe1f72c
Submitter: Jenkins
Branch: master

commit 63f4bac2247e45ec66ba8fde04692abacbe1f72c
Author: Michael Johnson 
Date:   Fri Apr 15 18:49:44 2016 +

Set HTTP status code to 409 for LBs in PENDING*

When users attempt to change a LBaaS resource while the load
balancer is in a PENDING-* state they currently get an error
with a 500 HTTP status code.  This patch changes the status code
to be 409 (conflict) which is consistent with other "I'm busy"
errors.

Change-Id: I6fc0c966e72dde956bd481b71a5cea5ba6d10c55
Closes-Bug: #1570596
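The mapping the commit describes can be sketched as follows (a stand-in
illustration, not the merged neutron-lbaas code; the exception class and
state names here mirror the log output above but are assumptions):

```python
import http.client

PENDING_STATES = {"PENDING_CREATE", "PENDING_UPDATE", "PENDING_DELETE"}


class StateInvalid(Exception):
    # Hypothetical stand-in for the LBaaS "resource is busy" fault.
    def __init__(self, state):
        super().__init__("Invalid state %s of loadbalancer resource" % state)
        self.state = state


def http_status_for(exc):
    # Translate "I'm busy" faults to 409 Conflict; anything unexpected
    # keeps the generic 500 Internal Server Error.
    if isinstance(exc, StateInvalid) and exc.state in PENDING_STATES:
        return int(http.client.CONFLICT)            # 409
    return int(http.client.INTERNAL_SERVER_ERROR)   # 500
```

A second listener created while the load balancer is still PENDING_UPDATE
would then surface as 409 instead of 500.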


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570596

Title:
  Neutron-LBaaS v2: 500 error on creating 2 listeners simultaneously

Status in neutron:
  Fix Released

Bug description:
  1. Create LB
  2. Create 2 Listener's simultaneously

  Result:  500 thrown
  Expected: 409 conflict with an informative error message

  
  Log: http://paste.openstack.org/show/494129/

  2016-04-14 20:50:00,177 10041 INFO [tempest.lib.common.rest_client] 
Request (ListenersTestAdmin:test_create_two_listener_simultaneous): 201 POST 
http://127.0.0.1:9696/v2.0/lbaas/listeners 0.312s
  2016-04-14 20:50:00,177 10041 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"listener": {"tenant_id": 
"deffb4d7c0584e89a8ec99551565713c", "loadbalancer_id": 
"7e0cbab9-e7d3-4ddf-86a5-86362b9fc271", "protocol": "HTTP", "protocol_port": 
9080}}
  Response - Headers: {'content-type': 'application/json', 
'x-openstack-request-id': 'req-b9ec3d6d-6159-4102-bb28-874e7b3943bb', 'date': 
'Thu, 14 Apr 2016 20:50:00 GMT', 'content-length': '384', 'status': '201', 
'connection': 'close'}
  Body: {"listener": {"protocol_port": 9080, "protocol": "HTTP", 
"description": "", "default_tls_container_ref": null, "admin_state_up": true, 
"loadbalancers": [{"id": "7e0cbab9-e7d3-4ddf-86a5-86362b9fc271"}], "tenant_id": 
"deffb4d7c0584e89a8ec99551565713c", "sni_container_refs": [], 
"connection_limit": -1, "default_pool_id": null, "id": 
"6b9dfdc4-2299-45a7-a832-e6ce7531bc69", "name": ""}}
  2016-04-14 20:50:00,211 10041 INFO [tempest.lib.common.rest_client] 
Request (ListenersTestAdmin:test_create_two_listener_simultaneous): 500 POST 
http://127.0.0.1:9696/v2.0/lbaas/listeners 0.033s
  2016-04-14 20:50:00,212 10041 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"listener": {"tenant_id": 
"deffb4d7c0584e89a8ec99551565713c", "loadbalancer_id": 
"7e0cbab9-e7d3-4ddf-86a5-86362b9fc271", "protocol": "HTTP", "protocol_port": 
9081}}
  Response - Headers: {'content-type': 'application/json', 
'x-openstack-request-id': 'req-5d101dbb-42d4-4a26-86e0-df1dc97067a3', 'date': 
'Thu, 14 Apr 2016 20:50:00 GMT', 'content-length': '161', 'status': '500', 
'connection': 'close'}
  Body: {"NeutronError": {"message": "Invalid state PENDING_UPDATE 
of loadbalancer resource 7e0cbab9-e7d3-4ddf-86a5-86362b9fc271", "type": 
"StateInvalid", "detail": ""}}
  2016-04-14 20:50:00,213 10041 INFO 
[neutron_lbaas.tests.tempest.v2.api.test_listeners_admin:ListenersTestAdmin] 
Finished: test_create_two_listener_simultaneous

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570596/+subscriptions



[Yahoo-eng-team] [Bug 1566656] Re: neutron purge operation should not delete dhcp port

2016-04-21 Thread Armando Migliaccio
If you are about to purge a resource, tenants have lost access to the
cloud.

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Tags removed: released-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566656

Title:
  neutron purge operation should not delete dhcp port

Status in neutron:
  Won't Fix

Bug description:
  Now neutronclient can use purge to delete tenant resources. It will
  check whether a resource is used by other tenants. I found that purge
  will delete the DHCP port, checking only that the resource's tenant_id
  is the specified one. This causes an issue like the following:

  There are 2 tenants, A and B, each with an admin user.
  1. In tenant A, I create a network but no subnet. No DHCP port is allocated
yet.
  2. Now I switch to tenant B. As the tenant B user has the admin role, I can
see the network created by the tenant A user, so I create a subnet on that
network. This creates a DHCP port owned by tenant A (DHCP port creation is
based on the network's tenant). The tenant B user can then add interfaces /
create VM ports in this subnet.
  3. Another tenant C with an admin user runs neutron purge with tenant A's
id. The purge checks tenant A's resources in the system; it will not delete
the network, but it does delete the DHCP port. The DHCP port is later
recreated, still owned by tenant A.

  If purge can verify that the network cannot be deleted, it should not
  delete the DHCP port either: while the DHCP port is being recreated, new
  port creation in the subnet cannot get an IP address, and since the port
  is recreated in the end anyway, deleting it is pointless and risky.
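  The behaviour the reporter asks for amounts to a filter like the sketch
  below (illustrative only, not the actual neutronclient purge code; the
  device-owner constant matches neutron's conventions):

```python
DEVICE_OWNER_DHCP = "network:dhcp"


def ports_to_purge(ports, tenant_id):
    # Select the tenant's ports but leave DHCP ports alone: the DHCP agent
    # recreates them anyway, and while they are gone, new ports in the
    # subnet cannot get an IP address.
    return [p for p in ports
            if p["tenant_id"] == tenant_id
            and p.get("device_owner") != DEVICE_OWNER_DHCP]
```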

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566656/+subscriptions



[Yahoo-eng-team] [Bug 1572795] Re: There are some verbose files in Create Network

2016-04-21 Thread Rob Cresswell
Muddled patches. Patch here: https://review.openstack.org/#/c/307099/

** Changed in: horizon
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572795

Title:
  There are some verbose files in Create Network

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  With https://review.openstack.org/#/c/298508/, Create Network can now also
use the common HTML templates, so the files previously used by that page can
be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572795/+subscriptions



[Yahoo-eng-team] [Bug 1573197] [NEW] [RFE] Neutron API enhancement for visibility into multi-segmented networks

2016-04-21 Thread Sukhdev Kapur
Public bug reported:

Neutron networks are, by default, assumed to be single segmented L2
domains, represented by a single segmentation ID (e.g VLAN ID). Current
neutron API (neutron net-show) works well with this model. However, with
the introduction of HPB, this assumption is not true anymore. Networks
are now multi-segmented. A given network could have anywhere from 3 to N
number of segments depending upon the breadth/size of the data center
topology. This will be true with the implementation of routed networks
as well.

In general, the segments, in multi-segmented networks, will be
dynamically created.  As mentioned earlier, the number of these segments
will grow and shrink dynamically representing the breadth of data center
topology. Therefore, at the very least, admins would like to have
visibility into these segments - e.g. which segmentation type/id is
consumed in which segment of the network.

Vendors and operators are forced to come up with their own hacks to get such
visibility. This RFE proposes that we enhance the neutron API to address this
visibility issue in a vendor/implementation-agnostic way - by either enhancing
"neutron net-show" or by introducing additional commands such as "neutron
net-segments-list" / "neutron net-segment-show".

This capability is needed for Neutron-Manila integration as well. Manila
requires visibility into the segmentation IDs used in specific segments
of a network. Please see Manila use case here -
https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-support
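The visibility the RFE asks for could look like the sketch below. The field
names and the "net-segment-list"-style response shape are purely illustrative
assumptions, not an API that exists:

```python
# Hypothetical shape of a "neutron net-segment-list" style response.
segments = [
    {"id": "seg-1", "network_id": "net-a",
     "network_type": "vxlan", "segmentation_id": 5001},
    {"id": "seg-2", "network_id": "net-a",
     "network_type": "vlan", "physical_network": "physnet1",
     "segmentation_id": 212},
]


def segments_for_network(segments, network_id):
    # What an admin (or Manila) wants to see: which segmentation type/id
    # is consumed in each segment of a given multi-segmented network.
    return [(s["network_type"], s.get("segmentation_id"))
            for s in segments if s["network_id"] == network_id]
```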

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573197

Title:
  [RFE] Neutron API enhancement for visibility into multi-segmented
  networks

Status in neutron:
  New

Bug description:
  Neutron networks are, by default, assumed to be single segmented L2
  domains, represented by a single segmentation ID (e.g VLAN ID).
  Current neutron API (neutron net-show) works well with this model.
  However, with the introduction of HPB, this assumption is not true
  anymore. Networks are now multi-segmented. A given network could have
  anywhere from 3 to N number of segments depending upon the
  breadth/size of the data center topology. This will be true with the
  implementation of routed networks as well.

  In general, the segments, in multi-segmented networks, will be
  dynamically created.  As mentioned earlier, the number of these
  segments will grow and shrink dynamically representing the breadth of
  data center topology. Therefore, at the very least, admins would like
  to have visibility into these segments - e.g. which segmentation
  type/id is consumed in which segment of the network.

  Vendors and operators are forced to come up with their own hacks to get such
visibility. This RFE proposes that we enhance the neutron API to address this
visibility issue in a vendor/implementation-agnostic way - by either enhancing
"neutron net-show" or by introducing additional commands such as "neutron
net-segments-list" / "neutron net-segment-show".

  This capability is needed for Neutron-Manila integration as well.
  Manila requires visibility into the segmentation IDs used in specific
  segments of a network. Please see Manila use case here -
  https://wiki.openstack.org/wiki/Manila/design/manila-newton-hpb-
  support

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573197/+subscriptions



[Yahoo-eng-team] [Bug 1573165] [NEW] Cannot open network namespace when booting up instances

2016-04-21 Thread Yufen Kuo
Public bug reported:

# neutron --version
3.1.0

When booting up an instance using the nova boot command, I get ERROR status on
the instance
# nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 98f341b8-19e4-4f14-930f-8c27bc0dc6f9 | test1 | ERROR  | -          | NOSTATE     | public=10.40.18.12 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
# neutron  net-list
+--------------------------------------+--------+----------------------------------------------------+
| id                                   | name   | subnets                                            |
+--------------------------------------+--------+----------------------------------------------------+
| 5e8aabe0-9561-4f78-82b9-1ca0a350e533 | public | f020c35c-0a8b-495a-bd7d-e9d8de4188de 10.40.16.0/22 |
+--------------------------------------+--------+----------------------------------------------------+
# neutron net-show 5e8aabe0-9561-4f78-82b9-1ca0a350e533
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 5e8aabe0-9561-4f78-82b9-1ca0a350e533 |
| mtu   | 0|
| name  | public   |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | public   |
| provider:segmentation_id  |  |
| router:external   | False|
| shared| True |
| status| ACTIVE   |
| subnets   | f020c35c-0a8b-495a-bd7d-e9d8de4188de |
| tenant_id | d78b9cabab66472b937d364ebfce3986 |
+---+--+

in the nova-conductor.log file on controller
2016-04-20 16:58:22.132 28114 ERROR nova.scheduler.utils 
[req-93298eea-2308-44ed-bb75-68f83c55dc10 8da5babc58424fffaf8a23bbc0276739 
d78b9cabab66472b937d364ebfce3986 - - -] [instance: 
98f341b8-19e4-4f14-930f-8c27bc0dc6f9] Error from last host:  
(node: [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in 
_do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2057, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
98f341b8-19e4-4f14-930f-8c27bc0dc6f9 was re-scheduled: Cannot setup network: 
Unexpected error while running command.\nCommand: sudo nova-rootwrap 
/etc/nova/rootwrap.conf ip netns exec 
70856f7f66e6f3354be5b1ce212c0c299e0b652ea0a7100e1595b1bbac5ea91c ip link set lo 
up\nExit code: 1\nStdout: u\'\'\nStderr: u\'Cannot open network
  namespace "70856f7f66e6f3354be5b1ce212c0c299e0b652ea0a7100e1595b1bbac5ea91c": 
Permission denied\\n\'\n']


on Compute host:
# ip netns
d8b5b2e5f869a35357eb79487f9c3e2e55c2378af2484f218e6152a43517dc39
70856f7f66e6f3354be5b1ce212c0c299e0b652ea0a7100e1595b1bbac5ea91c
f66c01f7d829162c137dd9803c035e635cfc6cdc079da01ae94e856170340d4b
# ls -l
total 0
lrwxrwxrwx. 1 root root 18 Apr 20 17:58 
70856f7f66e6f3354be5b1ce212c0c299e0b652ea0a7100e1595b1bbac5ea91c -> 
/proc/33051/ns/net
lrwxrwxrwx. 1 root root 18 Apr 20 17:58 
d8b5b2e5f869a35357eb79487f9c3e2e55c2378af2484f218e6152a43517dc39 -> 
/proc/33130/ns/net
lrwxrwxrwx. 1 root root 18 Apr 20 17:58 
f66c01f7d829162c137dd9803c035e635cfc6cdc079da01ae94e856170340d4b -> 
/proc/32852/ns/net

# ls -l /proc/33051/ns/net
ls: cannot access /proc/33051/ns/net: No such file or directory

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573165

Title:
  Cannot open network namespace when booting up  instances

Status in neutron:
  New

Bug description:
  # neutron --version
  3.1.0

  When booting up an instance using the nova boot command, I get ERROR status
on the instance
  # nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | 

[Yahoo-eng-team] [Bug 1573142] [NEW] proxied neutron network information is incorrect

2016-04-21 Thread Monty Taylor
Public bug reported:

The information in the addresses dict for a server with a neutron
floating ip is misleading. Here is an example:

 'addresses': {u'openstackci-network1': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                          u'OS-EXT-IPS:type': u'fixed',
                                          u'addr': u'10.0.1.24',
                                          u'version': 4},
                                         {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                          u'OS-EXT-IPS:type': u'floating',
                                          u'addr': u'172.99.106.118',
                                          u'version': 4}]},

This is incorrect. The neutron networks on this cloud are:

| 405abfcc-77dc-49b2-a271-139619ac9b26 | openstackjenkins-network1 | a47910bc-f649-45db-98ec-e2421c413f4e 10.0.1.0/24 |
| 7004a83a-13d3-4dcd-8cf5-52af1ace4cae | GATEWAY_NET               | cf785ee0-6cc9-4712-be3d-0bf6c86cf455              |


The floating IP does not come from openstackci-network1 - it comes from 
GATEWAY_NET. But in the nova addresses dict, the floating IP is showing up as a 
member of the list of addresses from openstackci-network1. It should really 
look like this:

 'addresses': {u'openstackci-network1': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                          u'OS-EXT-IPS:type': u'fixed',
                                          u'addr': u'10.0.1.24',
                                          u'version': 4}],
               u'GATEWAY_NET': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                 u'OS-EXT-IPS:type': u'floating',
                                 u'addr': u'172.99.106.118',
                                 u'version': 4}]},
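The regrouping the reporter expects can be sketched like this (a hypothetical
helper, not nova code; in a real fix nova would look up each floating IP's
external network from neutron, here the caller supplies that mapping):

```python
def regroup_floating_ips(addresses, floating_ip_networks):
    # Move "floating" entries out of the fixed network's address list and
    # into the external network they were actually allocated from.
    # `floating_ip_networks` maps floating IP -> external network name.
    regrouped = {}
    for net, entries in addresses.items():
        for entry in entries:
            if entry.get("OS-EXT-IPS:type") == "floating":
                target = floating_ip_networks.get(entry["addr"], net)
            else:
                target = net
            regrouped.setdefault(target, []).append(entry)
    return regrouped
```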

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1573142

Title:
  proxied neutron network information is incorrect

Status in OpenStack Compute (nova):
  New

Bug description:
  The information in the addresses dict for a server with a neutron
  floating ip is misleading. Here is an example:

   'addresses': {u'openstackci-network1': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                            u'OS-EXT-IPS:type': u'fixed',
                                            u'addr': u'10.0.1.24',
                                            u'version': 4},
                                           {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                            u'OS-EXT-IPS:type': u'floating',
                                            u'addr': u'172.99.106.118',
                                            u'version': 4}]},

  This is incorrect. The neutron networks on this cloud are:

  | 405abfcc-77dc-49b2-a271-139619ac9b26 | openstackjenkins-network1 | a47910bc-f649-45db-98ec-e2421c413f4e 10.0.1.0/24 |
  | 7004a83a-13d3-4dcd-8cf5-52af1ace4cae | GATEWAY_NET               | cf785ee0-6cc9-4712-be3d-0bf6c86cf455              |

  
  The floating IP does not come from openstackci-network1 - it comes from 
GATEWAY_NET. But in the nova addresses dict, the floating IP is showing up as a 
member of the list of addresses from openstackci-network1. It should really 
look like this:

   'addresses': {u'openstackci-network1': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                            u'OS-EXT-IPS:type': u'fixed',
                                            u'addr': u'10.0.1.24',
                                            u'version': 4}],
                 u'GATEWAY_NET': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:d7:d1:c2',
                                   u'OS-EXT-IPS:type': u'floating',
                                   u'addr': u'172.99.106.118',
                                   u'version': 4}]},

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1573142/+subscriptions



[Yahoo-eng-team] [Bug 1572867] Re: DEVICE_OWNER_PREFIXES not be defined in anywhere

2016-04-21 Thread Doug Wiegley
It's defined in neutron_lib.constants, and by deprecation link via
neutron.common.constants.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572867

Title:
  DEVICE_OWNER_PREFIXES not  be defined in anywhere

Status in neutron:
  Invalid

Bug description:
  In neutron/objects/qos/rule.py, the constant DEVICE_OWNER_PREFIXES is not
  defined anywhere.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572867/+subscriptions



[Yahoo-eng-team] [Bug 1570748] Re: Bug: resize instance after edit flavor with horizon

2016-04-21 Thread Matt Riedemann
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/mitaka
   Importance: Undecided => Medium

** Changed in: nova/liberty
   Importance: Undecided => Medium

** Changed in: nova/kilo
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570748

Title:
  Bug: resize instance after edit flavor with horizon

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in OpenStack Compute (nova) liberty series:
  New
Status in OpenStack Compute (nova) mitaka series:
  In Progress
Status in tempest:
  In Progress

Bug description:
  An error occurs when resizing an instance after editing its flavor with
  horizon (and also after deleting the flavor used by the instance).

  Reproduce steps:

  1. create flavor A
  2. boot an instance using flavor A
  3. edit the flavor with horizon (or delete flavor A)
  -> the result is the same whether you edit or delete the flavor, because
editing a flavor means deleting and recreating it
  4. resize or migrate the instance
  5. an error occurs

  Log : 
  nova-compute.log
 File "/opt/openstack/src/nova/nova/conductor/manager.py", line 422, in 
_object_dispatch
   return getattr(target, method)(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/base.py", line 163, in wrapper
   result = fn(cls, context, *args, **kwargs)

 File "/opt/openstack/src/nova/nova/objects/flavor.py", line 132, in 
get_by_id
   db_flavor = db.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/api.py", line 1479, in flavor_get
   return IMPL.flavor_get(context, id)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 233, in 
wrapper
   return f(*args, **kwargs)

 File "/opt/openstack/src/nova/nova/db/sqlalchemy/api.py", line 4732, in 
flavor_get
   raise exception.FlavorNotFound(flavor_id=id)

   FlavorNotFound: Flavor 7 could not be found.

  
  This error occurs because of the code below:
  /opt/openstack/src/nova/nova/compute/manager.py

  def resize_instance(self, context, instance, image,
  reservations, migration, instance_type,
  clean_shutdown=True):
  
  if (not instance_type or
  not isinstance(instance_type, objects.Flavor)):
  instance_type = objects.Flavor.get_by_id(
  context, migration['new_instance_type_id'])
  

  I think the deleted flavor should still be usable when resizing the instance.
  I tested this in stable/kilo, but I think stable/liberty and stable/mitaka
have the same bug because the relevant source code is unchanged.

  thanks.
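  The direction the reporter suggests can be sketched as preferring a flavor
snapshot saved with the instance/migration over a live DB lookup. All names
below are illustrative stand-ins, not actual nova code:

```python
class FlavorNotFound(Exception):
    pass


def flavor_for_resize(db_flavors, migration, instance_extra):
    # Prefer a flavor snapshot stored with the instance over a live DB
    # lookup, so a resize still works after the flavor was deleted
    # (editing a flavor in horizon means delete + recreate).
    flavor_id = migration["new_instance_type_id"]
    if flavor_id in db_flavors:
        return db_flavors[flavor_id]
    snapshot = instance_extra.get("flavor")
    if snapshot is not None and snapshot["id"] == flavor_id:
        return snapshot
    raise FlavorNotFound("Flavor %s could not be found." % flavor_id)
```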

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1570748/+subscriptions



[Yahoo-eng-team] [Bug 1571628] Re: os-server-groups description is missing

2016-04-21 Thread Atsushi SAKAI
** Project changed: openstack-api-site => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1571628

Title:
  os-server-groups description is missing

Status in OpenStack Compute (nova):
  New

Bug description:
  Description of os-server-groups is missing from the Compute API
  reference [1].

  [1]: http://developer.openstack.org/api-ref-compute-v2.1.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1571628/+subscriptions



[Yahoo-eng-team] [Bug 1572013] Re: missing parameter explaination in "Servers" section of v2.1 compute api

2016-04-21 Thread Atsushi SAKAI
** Project changed: openstack-api-site => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572013

Title:
  missing parameter explaination in "Servers" section of v2.1 compute
  api

Status in OpenStack Compute (nova):
  New

Bug description:
  URL:  http://developer.openstack.org/api-ref-compute-v2.1.html
  I think the request parameters listed for "GET /v2.1/{tenant_id}/servers" in
the "Servers" section are not complete. When I want to get all servers of all
tenants, there should be "?all_tenants=true" in the URL, as I read in the
python-novaclient source code, and it actually works after testing; but there
is no description of "all_tenants" in the "Request parameters" list in the API
documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572013/+subscriptions



[Yahoo-eng-team] [Bug 1572013] [NEW] missing parameter explaination in "Servers" section of v2.1 compute api

2016-04-21 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

URL:  http://developer.openstack.org/api-ref-compute-v2.1.html
I think the request parameters listed for "GET /v2.1/{tenant_id}/servers" in
the "Servers" section are not complete. When I want to get all servers of all
tenants, there should be "?all_tenants=true" in the URL, as I read in the
python-novaclient source code, and it actually works after testing; but there
is no description of "all_tenants" in the "Request parameters" list in the API
documentation.

** Affects: nova
 Importance: Undecided
 Assignee: Sharat Sharma (sharat-sharma)
 Status: New


** Tags: servers-api-doc
-- 
missing parameter explaination in "Servers" section of v2.1 compute api 
https://bugs.launchpad.net/bugs/1572013
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1571628] [NEW] os-server-groups description is missing

2016-04-21 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Description of os-server-groups is missing from the Compute API
reference [1].

[1]: http://developer.openstack.org/api-ref-compute-v2.1.html

** Affects: nova
 Importance: Undecided
 Assignee: Sharat Sharma (sharat-sharma)
 Status: New


** Tags: nova
-- 
os-server-groups description is missing
https://bugs.launchpad.net/bugs/1571628
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1566820] Re: More meaningful message should be displayed when a conflict error occurred in ng-swift

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/303350
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2327e9c1e1f2188f4229656791d589686fc66325
Submitter: Jenkins
Branch: master

commit 2327e9c1e1f2188f4229656791d589686fc66325
Author: Kenji Ishii 
Date:   Fri Apr 8 10:33:09 2016 +

Improve error message of when Conflict error occur in ng-swift

When we create a container, folder or object, error will occur if the
specified name is already exist. And when we delete, if specified
container or folder have a content, error will occur.
At the moment, error message is always same in all cases.
This patch will improve the message of when Conflict exception occur.

Change-Id: I825ef09badd1b10bb6fdab8f223bd6dfed28f3a4
Closes-bug: #1566820


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1566820

Title:
  More meaningful message should be displayed when a conflict error
  occurred in ng-swift

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When a user creates or deletes a folder or object, a Conflict (or Already
Exists) error may occur. However, at the moment, the message displayed to
users doesn't convey this; it just displays "Unable to create/delete xxx".
When we know the cause of an error, we should display a message that
includes it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1566820/+subscriptions



[Yahoo-eng-team] [Bug 1573092] [NEW] neutron and python-neutronclient should allow for 32bit ASN

2016-04-21 Thread Dr. Jens Rosenboom
Public bug reported:

Currently there is a hardcoded limit in Neutron that only allows 16-bit ASNs
to be used in the configuration of BGP speakers and peers, i.e. the range
[1..65535]. But with https://tools.ietf.org/html/rfc6793 it is possible for
BGP implementations to allow 32-bit numbers, and in fact some RIRs have
already run out of 16-bit ASNs and are only handing out new ASNs above 65535.

So although the ryu-based reference implementation does not support
this, there may be other agents e.g. based on ExaBGP that will support
32bit ASNs being used, and it doesn't seem sensible that Neutron should
prevent this upfront.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1573092

Title:
  neutron and python-neutronclient should allow for 32bit ASN

Status in neutron:
  New

Bug description:
  Currently there is a hardcoded limit in Neutron that only allows 16-bit
  ASNs to be used in the configuration of BGP speakers and peers, i.e. the
  range [1..65535]. But with https://tools.ietf.org/html/rfc6793 it is
  possible for BGP implementations to allow 32-bit numbers, and in fact
  some RIRs have already run out of 16-bit ASNs and are only handing out
  new ASNs above 65535.

  So although the ryu-based reference implementation does not support
  this, there may be other agents e.g. based on ExaBGP that will support
  32bit ASNs being used, and it doesn't seem sensible that Neutron
  should prevent this upfront.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1573092/+subscriptions



[Yahoo-eng-team] [Bug 1573073] [NEW] When router has no ports _process_updated_router fails because the namespace does not exist

2016-04-21 Thread Saverio Proto
Public bug reported:

Happens in Kilo. Cannot test on other releases.

Steps to reproduce: create a router and don't add any port.
The command 'neutron router-port-list' should return an empty set.
As soon as you create the router, the namespace qrouter- is present on the
network node, but after a while it is purged.
As soon as the namespace is gone, check the log file vpn-agent.log for the
stacktrace.

When a router has no ports, the namespace is deleted from the network
node. However, this breaks the router updates and the file vpn-agent.log
is flooded with these traces:


2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Traceback 
(most recent call last):
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info return 
func(*args, **kwargs)
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 628, 
in process
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
self._process_internal_ports()
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 404, 
in _process_internal_ports
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
existing_devices = self._get_existing_devices()
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 328, 
in _get_existing_devices
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info ip_devs = 
ip_wrapper.get_devices(exclude_loopback=True)
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 102, in 
get_devices
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info 
log_fail_as_error=self.log_fail_as_error
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 137, in 
execute
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info raise 
RuntimeError(m)
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info RuntimeError:
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762', 'find', 
'/sys/class/net', '-maxdepth', '1', '-type', 'l', '-printf', '%f ']
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Exit code: 1
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdin:
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stdout:
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info Stderr: Cannot 
open network namespace "qrouter-8fc0f640-35bb-4d0b-bbbd-80c22be0e762": No such 
file or directory
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
2016-04-21 16:22:17.771 23382 TRACE neutron.agent.l3.router_info
2016-04-21 16:22:17.774 23382 ERROR neutron.agent.l3.agent [-] Failed to 
process compatible router '8fc0f640-35bb-4d0b-bbbd-80c22be0e762'
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 467, in 
_process_router_update
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 414, in 
_process_router_if_compatible
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent 
self._process_updated_router(router)
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 428, in 
_process_updated_router
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent ri.process(self)
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 346, in call
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent self.logger(e)
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 343, in call
2016-04-21 16:22:17.774 23382 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
2016-04-21 16:22:17.774 
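One way to avoid the trace would be to check whether the namespace still
exists before running commands inside it. A rough, hypothetical sketch,
not the actual Neutron fix (the /var/run/netns path is where `ip netns
add` registers namespaces on most distributions, and the helper names
are made up):

```python
import os

NETNS_DIR = "/var/run/netns"

def namespace_exists(name):
    # `ip netns add` creates an entry under /var/run/netns; once the
    # namespace is purged, the entry is gone.
    return os.path.exists(os.path.join(NETNS_DIR, name))

def get_existing_devices(ns_name, ip_wrapper=None):
    if not namespace_exists(ns_name):
        # Namespace already deleted: report no devices instead of
        # letting `ip netns exec` fail with "No such file or directory".
        return []
    return ip_wrapper.get_devices(exclude_loopback=True)

# A namespace that was never created is simply reported as empty.
assert get_existing_devices("qrouter-00000000-dead-beef-0000-000000000000") == []
```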

[Yahoo-eng-team] [Bug 1566676] Re: PD: Optimize get_ports query by filtering on subnet

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302062
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8693d21ef34f4b3a4273862894838d47e924b63f
Submitter: Jenkins
Branch:master

commit 8693d21ef34f4b3a4273862894838d47e924b63f
Author: venkata anil 
Date:   Wed Apr 6 07:08:09 2016 +

Optimize get_ports query by filtering on subnet

Current code gets all ports from DB and then for each port it checks
if it has a PD subnet. With this, we get ports which are not needed
and again we need to check for subnet in these ports.

We can optimize this by querying the DB by filtering on subnet.

Closes-bug: #1566676
Change-Id: I2451f9a2c6a64ce27a607b193900caa05742273d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566676

Title:
  PD: Optimize get_ports query by filtering on subnet

Status in neutron:
  Fix Released

Bug description:
  Current code for prefix delegation gets all ports from DB and then for
  each port it checks if it has a PD subnet. With this, we get
  unnecessary ports (ports without PD subnet) and again we need to check
  for subnet in these ports.

  We can optimize this by querying the DB by filtering on subnet.
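  The change can be illustrated with a toy query (plain SQLite standing
  in for Neutron's database; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, subnet_id TEXT)")
conn.executemany("INSERT INTO ports (subnet_id) VALUES (?)",
                 [("pd-subnet",), ("other",), ("pd-subnet",)])

# Unoptimized pattern: fetch every port, then test the subnet in Python.
all_ports = conn.execute("SELECT id, subnet_id FROM ports").fetchall()
slow = [p for p in all_ports if p[1] == "pd-subnet"]

# Optimized pattern: push the subnet filter into the query itself, so
# rows without a PD subnet are never transferred at all.
fast = conn.execute(
    "SELECT id, subnet_id FROM ports WHERE subnet_id = ?", ("pd-subnet",)
).fetchall()

assert [p[0] for p in slow] == [p[0] for p in fast]
```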

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566676/+subscriptions



[Yahoo-eng-team] [Bug 1552786] Re: VMware: Port Group and Port ID not explicit from port binding

2016-04-21 Thread Matt Riedemann
** Changed in: nova
   Importance: Wishlist => Low

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/mitaka
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/mitaka
   Status: New => In Progress

** Changed in: nova/mitaka
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552786

Title:
  VMware: Port Group and Port ID not explicit from port binding

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  In Progress

Bug description:
  Various Neutron core plugins and/or ML2 mechanism drivers that support
  VMware vCenter as a Nova compute backend have different ways to map
  Neutron resources to vCenter constructs. The vCenter VIF driver code
  in Nova currently assumes a particular mapping. The Neutron plugin or
  driver should be able to use the port's binding:vif_details attribute
  to explicitly specify the vCenter port key and port group to be used
  for the VIF.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1552786/+subscriptions



[Yahoo-eng-team] [Bug 1573049] [NEW] documentation glanceclient images list v1

2016-04-21 Thread Jorge Cardoso
Public bug reported:


The file glanceclient/v1/images.py has the following documentation:

def list(self, **kwargs):
"""Get a list of images.
:param page_size: number of items to request in each paginated request
:param limit: maximum number of images to return
:param marker: begin returning images that appear later in the image
   list than that represented by this image id
:param filters: dict of direct comparison filters that mimics the
structure of an image object
:param owner: If provided, only images with this owner (tenant id)
  will be listed. An empty string ('') matches ownerless
  images.
:param return_request_id: If an empty list is provided, populate this
  list with the request ID value from the header
  x-openstack-request-id
:rtype: list of :class:`Image`
"""

Nonetheless, the parameter actually read from **kwargs is
'return_req_id', not the documented 'return_request_id' (see line 246).
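A toy illustration of the mismatch (a stand-in, not glanceclient code):
a caller following the docstring passes the documented keyword and never
receives the request ID, while the name the code actually checks works:

```python
# Toy stand-in for the v1 client's behaviour: the kwarg inspected is
# 'return_req_id', not the documented 'return_request_id'.
def list_images(**kwargs):
    req_ids = kwargs.get("return_req_id")
    if req_ids is not None:
        # The real client would append the x-openstack-request-id header.
        req_ids.append("req-example-id")
    return []

documented, actual = [], []
list_images(return_request_id=documented)  # per the docstring: silently ignored
list_images(return_req_id=actual)          # the name the code checks
assert documented == []
assert actual == ["req-example-id"]
```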

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1573049

Title:
  documentation glanceclient images list v1

Status in Glance:
  New

Bug description:
  
  The file glanceclient/v1/images.py has the following documentation:

  def list(self, **kwargs):
  """Get a list of images.
  :param page_size: number of items to request in each paginated request
  :param limit: maximum number of images to return
  :param marker: begin returning images that appear later in the image
 list than that represented by this image id
  :param filters: dict of direct comparison filters that mimics the
  structure of an image object
  :param owner: If provided, only images with this owner (tenant id)
will be listed. An empty string ('') matches ownerless
images.
  :param return_request_id: If an empty list is provided, populate this
list with the request ID value from the header
x-openstack-request-id
  :rtype: list of :class:`Image`
  """

  Nonetheless, the parameter actually read from **kwargs is
  'return_req_id', not the documented 'return_request_id' (see line 246).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1573049/+subscriptions



[Yahoo-eng-team] [Bug 1543316] Re: Curvature network topology: Deactivate Open Console from topology when instance does not on running state

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/278311
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b5673ecd47f44d6a929f35603b91cebb432461fd
Submitter: Jenkins
Branch:master

commit b5673ecd47f44d6a929f35603b91cebb432461fd
Author: Itxaka 
Date:   Wed Feb 10 12:11:10 2016 +0100

Net topology: Show console link only when useful

Right now we are showing the "open console" link on the
network topology screen for instances on each instance,
even if they are on a status that we know its not
going to allow the console to connect.

This patch makes it so the "open console" link will
only appear if the instance object has console data.

It also changes the order of getting the instance
data in the django part as to make less calls if we
dont need the console link.

Provides a list with known statuses that wont connect
to the console used to get the console data or not.

Change-Id: I536fee5186ac933b92a7dc01a2a5b82a6db0ae4c
Closes-Bug: #1543316


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543316

Title:
   Curvature network topology: Deactivate Open Console from topology
  when instance does not on running state

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  ===
  Deactivate Open Console from network topology when the instance is not
  in a running state
   
  Version-Release number of selected component (if applicable):
  =
  python-django-horizon-8.0.0-10.el7ost.noarch
  openstack-dashboard-8.0.0-10.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Launch an instance
  2. Pause or Suspend an instance
  3. Browse to: Project --> Network --> Network Topology
  4. Click on instance

  Actual results:
  ===
  Open Console option is displayed and active

  Expected results:
  =
  Open Console option should not be displayed, or should be inactive
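  The behaviour the fix aims for can be sketched as a simple status gate
  (the status names and URL shape are illustrative, not Horizon's actual
  code):

```python
# Illustrative gate: only build an "open console" link for instances in
# a state where a console can actually connect.
NO_CONSOLE_STATUSES = {"PAUSED", "SUSPENDED", "SHUTOFF", "SHELVED"}

def console_link(instance):
    if instance["status"].upper() in NO_CONSOLE_STATUSES:
        return None  # hide the link entirely
    return "/project/instances/%s/console" % instance["id"]

assert console_link({"id": "i-1", "status": "ACTIVE"}) is not None
assert console_link({"id": "i-1", "status": "PAUSED"}) is None
```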

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543316/+subscriptions



[Yahoo-eng-team] [Bug 1464239] Re: mount: special device /dev/sdb does not exist

2016-04-21 Thread Steven Hardy
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464239

Title:
  mount: special device /dev/sdb does not exist

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  Fix Released

Bug description:
  As of today it looks like all jobs fail due to a missing Ephemeral
  partition:

  mount: special device /dev/sdb does not exist

  
  

  This Nova commit looks suspicious: 7f8128f87f5a2fa93c857295fb7e4163986eda25
  "Add the swap and ephemeral BDMs if needed"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464239/+subscriptions



[Yahoo-eng-team] [Bug 1531881] Re: AttributeError: 'module' object has no attribute 'dump_as_bytes'

2016-04-21 Thread Steven Hardy
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531881

Title:
  AttributeError: 'module' object has no attribute 'dump_as_bytes'

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in tripleo:
  Fix Released

Bug description:
  Seeing the following traceback from nova-compute when trying to launch
  instances in tripleo-ci for stable/liberty (using the ironic driver):

  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] Traceback (most recent call last):
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2155, in 
_build_resources
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] yield resources
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] block_device_info=block_device_info)
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 802, in 
spawn
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] files=injected_files)
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 716, in 
_generate_configdrive
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] "error: %s"), e, instance=instance)
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in __exit__
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] six.reraise(self.type_, self.value, 
self.tb)
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/virt/ironic/driver.py", line 711, in 
_generate_configdrive
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] with 
configdrive.ConfigDriveBuilder(instance_md=i_meta) as cdb:
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/virt/configdrive.py", line 72, in 
__init__
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] 
self.add_instance_metadata(instance_md)
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/virt/configdrive.py", line 93, in 
add_instance_metadata
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] for (path, data) in 
instance_md.metadata_for_config_drive():
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]   File 
"/usr/lib/python2.7/site-packages/nova/api/metadata/base.py", line 465, in 
metadata_for_config_drive
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] yield (filepath, 
jsonutils.dump_as_bytes(data['meta-data']))
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] AttributeError: 'module' object has no 
attribute 'dump_as_bytes'
  2016-01-07 13:32:27.691 19349 ERROR nova.compute.manager [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79]
  2016-01-07 13:32:27.693 19349 INFO nova.compute.manager 
[req-6b73f4c5-c031-496e-b2f0-a5380d3ca7ba 285d1c33eca8410e9ed03bbe3de03d15 
9448d5b54ff84bd6a8a04b1083eb920f - - -] [instance: 
5a7c299b-f6b6-48d8-a20e-36e72c7bed79] Termi

  
  I believe it's caused by this commit:
  https://review.openstack.org/#/c/246792/

  which I've submitted a revert for:
  https://review.openstack.org/#/c/264793/

  The failed tripleo-ci job:
  
http://logs.openstack.org/46/254946/4/check-tripleo/gate-tripleo-ci-f22-nonha/1363b32/

  from this patch:
  https://review.openstack.org/#/c/254946/

  The version of oslo.serialization in use on the job is 

[Yahoo-eng-team] [Bug 1561856] Re: Request Mitaka release for networking-ofagent

2016-04-21 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561856

Title:
  Request Mitaka release for networking-ofagent

Status in networking-ofagent:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please release stable/mitaka branch of networking-ofagent.
  This will be the last release of ofagent.

  commit id: bf23655bfbde95535fc9c519d11087545983d29b
  tag: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ofagent/+bug/1561856/+subscriptions



[Yahoo-eng-team] [Bug 1567181] Re: Request release for networking-fujitsu for stable/mitaka

2016-04-21 Thread Ihar Hrachyshka
As a result of stadium discussion, networking-fujitsu is no longer a
part of neutron project, which is reflected in
https://review.openstack.org/#/c/303026/3/reference/projects.yaml merged
yesterday. From now on, all release requests are handled by the team
managing the networking-fujitsu project.

Note that since start of Newton, all release requests for all projects
follow the model as described in: http://lists.openstack.org/pipermail
/openstack-dev/2016-March/090737.html

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1567181

Title:
  Request release for networking-fujitsu for stable/mitaka

Status in networking-fujitsu:
  New
Status in neutron:
  Won't Fix

Bug description:
  Please release stable/mitaka branch of networking-fujitsu.

  tag: 2.0.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-fujitsu/+bug/1567181/+subscriptions



[Yahoo-eng-team] [Bug 1572999] [NEW] separated logs for failed integration tests

2016-04-21 Thread Sergei Chipiga
Public bug reported:

We should have separated logs for failed integration tests to make them
more readable and extensible.

** Affects: horizon
 Importance: Undecided
 Assignee: Sergei Chipiga (schipiga)
 Status: In Progress

** Description changed:

- We should have separated logs for fallen integration tests to make them
+ We should have separated logs for failed integration tests to make them
  more readable and extensible.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572999

Title:
  separated logs for failed integration tests

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We should have separated logs for failed integration tests to make
  them more readable and extensible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572999/+subscriptions



[Yahoo-eng-team] [Bug 1572543] Re: Request stable/liberty release for openstack/networking-hyperv

2016-04-21 Thread Ihar Hrachyshka
As a result of stadium discussion, networking-hyperv is no longer a part
of neutron project, which is reflected in
https://review.openstack.org/#/c/303026/3/reference/projects.yaml merged
yesterday. From now on, all release requests are handled by the team
managing the networking-hyperv project.

Note that since start of Newton, all release requests for all projects
follow the model as described in: http://lists.openstack.org/pipermail
/openstack-dev/2016-March/090737.html

** Also affects: networking-hyperv
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Confirmed => Won't Fix

** Changed in: neutron
 Assignee: Ihar Hrachyshka (ihar-hrachyshka) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572543

Title:
  Request stable/liberty release for openstack/networking-hyperv

Status in networking-hyperv:
  New
Status in neutron:
  Won't Fix

Bug description:
  Please release the new version for stable/liberty branch of
  networking-hyperv.

  commit id: 13401e80e3360b9f25797879c7ade7b768ca034f
  tag: 1.0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-hyperv/+bug/1572543/+subscriptions



[Yahoo-eng-team] [Bug 1572862] Re: update volume multiattach to true

2016-04-21 Thread haobing1
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572862

Title:
  update volume multiattach  to true

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I use "cinder create" to create a volume, the default multiattach
  is false. But when I then try to attach this volume to multiple
  instances, it fails. It would be useful to have a command to update
  the volume's multiattach flag to true, so the volume can be attached
  to multiple instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572862/+subscriptions



[Yahoo-eng-team] [Bug 1572967] [NEW] Liberty: POST with Content_Type: text/plain results to 400

2016-04-21 Thread Ghanshyam Mann
Public bug reported:

With the patch below, a POST request with Content-Type: text/plain now
succeeds:

-  I2b5b08f164e0d45c55d5e2685b3e2a8641843fba
In that cleanup, the deserializers that used to reject POST requests with
Content-Type: text/plain (returning 400) were removed, making such
requests succeed. But stable branches before Mitaka still return the 400
error.

This should be backported to have consistent behavior.

Master/Mitaka-

openstack@openstack-VirtualBox:~/devstack$ curl -g -i -X POST 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: text/plain" -H "Accept: 
application/json" -H "X-OpenStack-Nova-API-Version: 2.25" -H "X-Auth-Token: 
b29d4fe2f7b44ad0bd451e13bd31f585" -d '{"server": {"min_count": 1, "flavorRef": 
"42", "name": "test", "imageRef": "b8f25318-3fa3-4b1a-8e41-c224dace04cc", 
"max_count": 1}}'
HTTP/1.1 202 Accepted
Content-Length: 446
Location: 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers/fcd6eb18-d247-4a09-87b2-0aa5f4140810
Content-Type: application/json
X-Openstack-Nova-Api-Version: 2.25
Vary: X-OpenStack-Nova-API-Version
X-Compute-Request-Id: req-860d65ff-f6cf-4522-9e2a-ff046b59ac88
Date: Thu, 21 Apr 2016 10:55:14 GMT


Liberty-

curl -g -i -X POST 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: text/plain" -H "Accept: 
application/json"  -H "X-Auth-Token: b29d4fe2f7b44ad0bd451e13bd31f585" -d 
'{"server": {"min_count": 1, "flavorRef": "42", "name": "test", "imageRef": 
"b8f25318-3fa3-4b1a-8e41-c224dace04cc", "max_count": 1}}'
HTTP/1.1 400 Bad Request
X-Openstack-Nova-Api-Version: 2.1
Vary: X-OpenStack-Nova-API-Version
Content-Length: 68
Content-Type: application/json; charset=UTF-8
X-Compute-Request-Id: req-ff32a873-10bd-49ce-810d-b87c2c34de61
Date: Thu, 21 Apr 2016 11:06:35 GMT

{"badRequest": {"message": "Unsupported Content-Type", "code":
400}}openstack@openstack-VirtualBox:~/devstack

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572967

Title:
  Liberty: POST with Content_Type:  text/plain results to 400

Status in OpenStack Compute (nova):
  New

Bug description:
  With the patch below, a POST request with Content-Type: text/plain now
  succeeds:

  -  I2b5b08f164e0d45c55d5e2685b3e2a8641843fba
  In that cleanup, the deserializers that used to reject POST requests
  with Content-Type: text/plain (returning 400) were removed, making
  such requests succeed. But stable branches before Mitaka still return
  the 400 error.

  This should be backported to have consistent behavior.

  Master/Mitaka-

  openstack@openstack-VirtualBox:~/devstack$ curl -g -i -X POST 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: text/plain" -H "Accept: 
application/json" -H "X-OpenStack-Nova-API-Version: 2.25" -H "X-Auth-Token: 
b29d4fe2f7b44ad0bd451e13bd31f585" -d '{"server": {"min_count": 1, "flavorRef": 
"42", "name": "test", "imageRef": "b8f25318-3fa3-4b1a-8e41-c224dace04cc", 
"max_count": 1}}'
  HTTP/1.1 202 Accepted
  Content-Length: 446
  Location: 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers/fcd6eb18-d247-4a09-87b2-0aa5f4140810
  Content-Type: application/json
  X-Openstack-Nova-Api-Version: 2.25
  Vary: X-OpenStack-Nova-API-Version
  X-Compute-Request-Id: req-860d65ff-f6cf-4522-9e2a-ff046b59ac88
  Date: Thu, 21 Apr 2016 10:55:14 GMT


  Liberty-

  curl -g -i -X POST 
http://192.168.11.122:8774/v2.1/498175dbe9184abcbc003da2d248250b/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: text/plain" -H "Accept: 
application/json"  -H "X-Auth-Token: b29d4fe2f7b44ad0bd451e13bd31f585" -d 
'{"server": {"min_count": 1, "flavorRef": "42", "name": "test", "imageRef": 
"b8f25318-3fa3-4b1a-8e41-c224dace04cc", "max_count": 1}}'
  HTTP/1.1 400 Bad Request
  X-Openstack-Nova-Api-Version: 2.1
  Vary: X-OpenStack-Nova-API-Version
  Content-Length: 68
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-ff32a873-10bd-49ce-810d-b87c2c34de61
  Date: Thu, 21 Apr 2016 11:06:35 GMT

  {"badRequest": {"message": "Unsupported Content-Type", "code":
  400}}openstack@openstack-VirtualBox:~/devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572967/+subscriptions



[Yahoo-eng-team] [Bug 1572925] [NEW] nova cloudpipe-create fails Unexpected API Error and HTTP 500 error

2016-04-21 Thread Sagar
Public bug reported:

OpenStack Liberty (Ubuntu 14.04)
While executing nova cloudpipe-create it fails with the error:

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-14a4fba1-b70f-4b64-b183-c9a1e29cd3cb)

I am using a 4-node OpenStack setup. Everything is working fine except
this.

#nova --debug cloudpipe-create 32ef54e0dbbc4b709642a31ac97dabcf

DEBUG (session:198) REQ: curl -g -i -X GET http://D446-Controller:35357/v3 -H 
"Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO (connectionpool:205) Starting new HTTP connection (1): D446-Controller
DEBUG (connectionpool:385) "GET /v3 HTTP/1.1" 200 255
DEBUG (session:215) RESP: [200] Content-Length: 255 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: 
Keep-Alive Date: Thu, 21 Apr 2016 09:35:53 GMT x-openstack-request-id: 
req-70f7268f-8f1b-4657-96e7-2cb2253e59fa Content-Type: application/json 
X-Distribution: Ubuntu 
RESP BODY: {"version": {"status": "stable", "updated": "2015-03-30T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": 
[{"href": "http://D446-Controller:35357/v3/", "rel": "self"}]}}

DEBUG (base:188) Making authentication request to 
http://D446-Controller:35357/v3/auth/tokens
DEBUG (connectionpool:385) "POST /v3/auth/tokens HTTP/1.1" 201 6658
DEBUG (session:198) REQ: curl -g -i -X GET http://D446-Controller:8774/v2/ -H 
"User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}2d9f498f6fee652894df44e52c48386b0a55a41f"
INFO (connectionpool:205) Starting new HTTP connection (1): D446-Controller
DEBUG (connectionpool:385) "GET /v2/ HTTP/1.1" 200 380
DEBUG (session:215) RESP: [200] Date: Thu, 21 Apr 2016 09:35:53 GMT Connection: 
keep-alive Content-Type: application/json Content-Length: 380 
X-Compute-Request-Id: req-6e4268b4-9f95-4188-85d5-8b4458f6102d 
RESP BODY: {"version": {"status": "SUPPORTED", "updated": 
"2011-01-21T11:33:21Z", "links": [{"href": "http://D446-Controller:8774/v2/", 
"rel": "self"}, {"href": "http://docs.openstack.org/", "type": "text/html", 
"rel": "describedby"}], "min_version": "", "version": "", "media-types": 
[{"base": "application/json", "type": 
"application/vnd.openstack.compute+json;version=2"}], "id": "v2.0"}}

DEBUG (session:198) REQ: curl -g -i -X POST 
http://D446-Controller:8774/v2/32ef54e0dbbc4b709642a31ac97dabcf/os-cloudpipe -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}2d9f498f6fee652894df44e52c48386b0a55a41f" -d '{"cloudpipe": 
{"project_id": "32ef54e0dbbc4b709642a31ac97dabcf"}}'
DEBUG (connectionpool:385) "POST 
/v2/32ef54e0dbbc4b709642a31ac97dabcf/os-cloudpipe HTTP/1.1" 500 202
DEBUG (session:215) RESP: [500] Date: Thu, 21 Apr 2016 09:35:53 GMT Connection: 
keep-alive Content-Type: application/json; charset=UTF-8 Content-Length: 202 
X-Compute-Request-Id: req-098216b1-556e-4a6d-a9d8-be591373fda3 
RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}

DEBUG (shell:905) Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-098216b1-556e-4a6d-a9d8-be591373fda3)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 903, in main
OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 830, in main
args.func(self.cs, args)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/shell.py", line 540, in 
do_cloudpipe_create
cs.cloudpipe.create(args.project)
  File "/usr/lib/python2.7/dist-packages/novaclient/v2/cloudpipe.py", line 42, 
in create
return_raw=True)
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 169, in 
_create
_resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 176, 
in post
return self.request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 93, in 
request
raise exceptions.from_response(resp, body, url, method)
ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-098216b1-556e-4a6d-a9d8-be591373fda3)
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-098216b1-556e-4a6d-a9d8-be591373fda3)





NOVA-API.LOG

2016-04-21 14:51:41.420 12199 INFO nova.osapi_compute.wsgi.server 
[req-e6070882-a033-43b4-bfbc-12b5c8c178d0 

[Yahoo-eng-team] [Bug 1572795] Re: There are some verbose files in Create Network

2016-04-21 Thread Rob Cresswell
Patch here: https://review.openstack.org/#/c/298508/

** Changed in: horizon
   Status: In Progress => Fix Released

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => newton-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1572795

Title:
  There are some verbose files in Create Network

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  By https://review.openstack.org/#/c/298508/,
  Create Network can now also use the common HTML templates.
  Therefore, the files used only by that page can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1572795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567393] Re: remove menu item "disable user" for admin

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/306214
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e390773260593bd4119b3784acb961d758c12f40
Submitter: Jenkins
Branch:master

commit e390773260593bd4119b3784acb961d758c12f40
Author: Ryan Evans 
Date:   Fri Apr 15 02:07:46 2016 +

Removed "Disable user" from dropdown menu for self

Previously the "Disable user" option was disabled for the
logged in user, it was displayed but not clickable. This has
been changed to no longer display at all.

Change-Id: I3114e1191915717638ec57ccf444872fb2a39dd2
Closes-Bug: 1567393


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1567393

Title:
  remove menu item "disable user" for admin

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Environment:
  - upstream

  Steps:
  - Login as admin
  - Go to "Identity" -> "Users"
  - Click dropdown actions menu for admin

  Expected result:
  - Only "Change Password" is present

  Actual result:
  - There is a "Disable user" item, but it is disabled. In any case, it is not 
possible to disable the admin user, so it is better to remove the item in 
order not to confuse users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1567393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386543] Re: FWaaS - New blocking rules have no effect on existing traffic

2016-04-21 Thread Ha Van Tu
*** This bug is a duplicate of bug 1474279 ***
https://bugs.launchpad.net/bugs/1474279

** This bug has been marked a duplicate of bug 1474279
   FWaaS let connection opened if delete allow rule, because of conntrack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386543

Title:
  FWaaS - New blocking rules have no effect on existing traffic

Status in neutron:
  New

Bug description:
  When building a firewall with a rule to block specific traffic,
  existing traffic is not blocked.

  For example:

  Running a ping to an instance and then building a firewall with a rule to 
block ICMP to that instance has no effect while the ping command is still 
running.
  Exiting the command and then pinging the instance again shows the 
desired result - i.e. the traffic is blocked.

  This is also the case for SSH.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572867] [NEW] DEVICE_OWNER_PREFIXES is not defined anywhere

2016-04-21 Thread QunyingRan
Public bug reported:

In neutron\objects\qos\rule.py, the constant DEVICE_OWNER_PREFIXES is
not defined anywhere.
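
A minimal, self-contained sketch of the failure mode (illustrative only, not
neutron's actual code): a constant that is used but never defined or imported
lets the module import cleanly, and only raises a NameError when the code
path actually runs — which is how such a bug can survive import and partial
test runs.

```python
# Illustrative sketch: the module references DEVICE_OWNER_PREFIXES,
# which is never defined or imported anywhere.

def port_is_reserved(device_owner):
    # NameError is raised here, at call time; the module itself
    # imports without error.
    return any(device_owner.startswith(p) for p in DEVICE_OWNER_PREFIXES)

try:
    port_is_reserved("network:router_interface")
    error = None
except NameError as exc:
    error = str(exc)

print(error)  # name 'DEVICE_OWNER_PREFIXES' is not defined
```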

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572867

Title:
  DEVICE_OWNER_PREFIXES is not defined anywhere

Status in neutron:
  New

Bug description:
  In neutron\objects\qos\rule.py, the constant DEVICE_OWNER_PREFIXES is
  not defined anywhere.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1572867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572862] [NEW] update volume multiattach to true

2016-04-21 Thread haobing1
Public bug reported:

When I use "cinder create" to create a volume, the default multiattach
is false. But if I then want to attach this volume to multiple
instances, it fails. It would be useful to have a command to update a
volume's multiattach flag to True.
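
As far as I can tell, multiattach can only be requested when the volume is
created, not toggled afterwards. A hedged sketch of the create-time request
(the --allow-multiattach flag is assumed from the python-cinderclient of this
era — please verify against your client version):

```shell
# Create a 10 GB volume with multiattach requested up front; there is
# currently no command to flip the flag on an existing volume.
cinder create 10 --name shared-vol --allow-multiattach
```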

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1572862

Title:
  update volume multiattach to true

Status in OpenStack Compute (nova):
  New

Bug description:
  When I use "cinder create" to create a volume, the default multiattach
  is false. But if I then want to attach this volume to multiple
  instances, it fails. It would be useful to have a command to update a
  volume's multiattach flag to True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1572862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572509] Re: Nova boot fails when freed SRIOV port is used for booting

2016-04-21 Thread Preethi Dsilva
*** This bug is a duplicate of bug 1572593 ***
https://bugs.launchpad.net/bugs/1572593

** Project changed: neutron => nova-project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in Nova:
  Incomplete

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ==
  1. Create an SRIOV port
  2. Boot a VM --> boot is successful and the VM gets an IP
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF)
  4. Using the port created in step 1, boot a new VM.

  VM fails to boot with following error
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 
7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance 
a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name 
attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected :
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of controller and compute node with Mellanox card
  enabled for SRIOV. Ubuntu 14.04 qcow2 image is used for tenant VM boot

  Tested the above with Mitaka code.
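
  The actual fix landed on the nova side under bug 1572593, but as an
  interim, hedged workaround sketch (assuming the neutron DNS integration
  extension and its port-update --dns-name flag are available in your
  deployment), the stale dns_name could be cleared before reusing the port:

```shell
# Hypothetical workaround: clear the dns_name left over from the deleted
# instance so the port no longer conflicts with the new VM's hostname.
neutron port-update 7219a612-014e-452e-b79a-19c87cc33db4 --dns-name ''
```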

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-project/+bug/1572509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327035] Re: error occurs when a Swift pseudo-folder includes "="

2016-04-21 Thread Sharat Sharma
*** This bug is a duplicate of bug 1347734 ***
https://bugs.launchpad.net/bugs/1347734

** This bug has been marked a duplicate of bug 1347734
   The container dashboard does not handle unicode url correctly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1327035

Title:
  error occurs when a Swift pseudo-folder includes "="

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a Swift pseudo-folder name includes "=" (a naming pattern often
  used for Hive partitions), the Horizon link encodes "=" as "%253D".
  For example, if my pseudo-folder is named "A=B" and I want to enter
  "A=B", I actually enter "A%253DB" through Horizon.
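
  The "%253D" seen here is "=" URL-encoded twice ("=" becomes "%3D", then
  the "%" itself becomes "%25"), which points at the path being quoted a
  second time somewhere. A quick sketch with the standard library:

```python
from urllib.parse import quote, unquote

folder = "A=B"
encoded_once = quote(folder, safe="")         # what a correct link needs
encoded_twice = quote(encoded_once, safe="")  # what the report describes

print(encoded_once)                     # A%3DB
print(encoded_twice)                    # A%253DB
print(unquote(unquote(encoded_twice)))  # decoding twice recovers A=B
```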

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1327035/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572509] Re: Nova boot fails when freed SRIOV port is used for booting

2016-04-21 Thread Elena Ezhova
*** This bug is a duplicate of bug 1572593 ***
https://bugs.launchpad.net/bugs/1572593

Please see the link to the bug in my previous comment; that bug was
already moved to nova and has a fix on review.

** This bug has been marked a duplicate of bug 1572593
   Impossible attach detached port to another instance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1572509

Title:
  Nova boot fails when freed SRIOV port is used for booting

Status in Nova:
  Incomplete

Bug description:
  Nova boot fails when freed SRIOV port is used for booting

  steps to reproduce:
  ==
  1. Create an SRIOV port
  2. Boot a VM --> boot is successful and the VM gets an IP
  3. Now delete the VM using nova delete -- successful (the MAC is released from the VF)
  4. Using the port created in step 1, boot a new VM.

  VM fails to boot with following error
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9] hostname=instance.hostname)
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9] PortNotUsableDNS: Port 
7219a612-014e-452e-b79a-19c87cc33db4 not usable for instance 
a47344fb-3254-4409-9c23-55e3cde693d9. Value vm4 assigned to dns_name 
attribute does not match instance's hostname vmtest4
  2016-04-20 10:55:05.005 TRACE nova.compute.manager [instance: 
a47344fb-3254-4409-9c23-55e3cde693d9]

  Expected :
  =
  As the port is unbound in step 3, we should be able to bind it in step 4.

  The setup consists of controller and compute node with Mellanox card
  enabled for SRIOV. Ubuntu 14.04 qcow2 image is used for tenant VM boot

  Tested the above with Mitaka code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova-project/+bug/1572509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474279] Re: FWaaS let connection opened if delete allow rule, because of conntrack

2016-04-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300960
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=fadfe86516de7982c86de4dd1a0d275d0a6c84f7
Submitter: Jenkins
Branch:master

commit fadfe86516de7982c86de4dd1a0d275d0a6c84f7
Author: Ha Van Tu 
Date:   Mon Apr 4 14:03:12 2016 +0700

Fix "Not applying Firewall rules immediately" problem

This patch removes the conntrack entries of the established
connection when the firewall updates its rules.

Change-Id: I8d149d3cb0c8cbca2211446b082fcfcda93e2b19
Closes-Bug: #1474279


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474279

Title:
  FWaaS let connection opened if delete allow rule, because of conntrack

Status in neutron:
  Fix Released

Bug description:
  Hi,

  I've faced a problem with the FWaaS plugin in Neutron (Juno).
  The firewall works, but when I delete a rule from the policy, the
  connection still works because of conntrack... (I tried with ping
  and ssh.)
  It's okay for the connection to be kept alive if it's really alive (an
  active SSH session, for example), but if I delete the ICMP rule, stop
  pinging, and then restart pinging, the ping still works...

  If I go to my neutron server and run a conntrack -F command in the
  relevant qrouter namespace, the firewall starts working based on the
  valid rules...

  Is there any way to configure the conntrack cleanup when the FWaaS
  configuration is modified by the user?

  If not, can somebody tell me where to make changes in the code to run
  that command in the proper namespace after the iptables rule generation?

  
  Regards,
   Peter
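
  The conntrack -F workaround mentioned in the report can be run on the
  network node against the relevant router namespace; a sketch (the
  namespace placeholder and the targeted -D form are illustrative):

```shell
# Flush all conntrack entries in the router's namespace so updated
# firewall rules apply to already-established flows too.
ip netns exec qrouter-<router-uuid> conntrack -F

# More surgical alternative: delete only the ICMP entries.
ip netns exec qrouter-<router-uuid> conntrack -D -p icmp
```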

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp