[Yahoo-eng-team] [Bug 1938030] [NEW] neutron deployed with httpd does not work with ovn mech driver

2021-07-26 Thread Rabi Mishra
*** This bug is a duplicate of bug 1912359 ***
https://bugs.launchpad.net/bugs/1912359

Public bug reported:

Deploying neutron with httpd and the OVN mechanism driver fails with a
500 error and the traceback below [1] when creating networks.

'openstack network agent list'  does not list any controller agents or
metadata agents.

It looks like the OVN driver's pre_fork_initialize/post_fork_initialize
hooks, which initialize the OVN DB connections, are not triggered.
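The hook wiring can be sketched in plain Python. All names below are hypothetical stand-ins for the neutron_lib callback registry and the OVN IDL connections; this is an illustration of the failure mode, not the actual driver code:

```python
# Sketch (hypothetical names) of the event-driven initialization the OVN
# mechanism driver relies on: the OVN DB connections are only set up when
# the server publishes its worker/fork events. Under httpd/mod_wsgi those
# events are never published, so the connections stay unset and API calls
# that need them fail with a 500.

_subscribers = {}

def subscribe(event, callback):
    _subscribers.setdefault(event, []).append(callback)

def publish(event):
    for callback in _subscribers.get(event, []):
        callback()

class OvnDriverSketch:
    def __init__(self):
        self.nb_idl = None  # stand-in for the OVN northbound IDL connection
        subscribe("post_fork", self.post_fork_initialize)

    def post_fork_initialize(self):
        self.nb_idl = "tcp:172.16.13.9:6641"

driver = OvnDriverSketch()
publish("post_fork")  # the eventlet-based neutron-server does this after forking workers
assert driver.nb_idl is not None  # connection initialized

wsgi_driver = OvnDriverSketch()
# Under httpd/mod_wsgi no equivalent publish happens, so the
# connection is never initialized:
assert wsgi_driver.nb_idl is None
```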

neutron.conf
-

auth_strategy=keystone
core_plugin=ml2
host=oc0-controller-0.mydomain.tld
dns_domain=openstacklocal
dhcp_agent_notification=True
allow_overlapping_ips=True
global_physnet_mtu=1500
vlan_transparent=False
service_plugins=qos,ovn-router,trunk,segments,port_forwarding,log
l3_ha=False
max_l3_agents_per_router=3
api_workers=2
rpc_workers=1
router_scheduler_driver=neutron.scheduler.l3_agent_scheduler.ChanceScheduler
router_distributed=False
enable_dvr=False
allow_automatic_l3agent_failover=True

ml2_conf.ini

[ml2]
type_drivers=geneve,vxlan,vlan,flat
tenant_network_types=geneve,vlan
mechanism_drivers=ovn
path_mtu=0
extension_drivers=qos,port_security,dns
overlay_ip_version=4

[ml2_type_geneve]
max_header_size=38
vni_ranges=1:65536

[ml2_type_vxlan]
vxlan_group=224.0.0.1
vni_ranges=1:65536

[ml2_type_vlan]
network_vlan_ranges=datacentre:1:1000

[ml2_type_flat]
flat_networks=datacentre

[ovn]
ovn_nb_connection=tcp:172.16.13.9:6641
ovn_sb_connection=tcp:172.16.13.9:6642
ovsdb_connection_timeout=180
neutron_sync_mode=log
ovn_metadata_enabled=True
additional_worker_classes_with_ovn_idl=[MaintenanceWorker, RpcWorker]
enable_distributed_floating_ip=True
dns_servers=
ovn_emit_need_to_frag=False


[1]

2021-07-26 10:52:26.530 19 DEBUG neutron.api.v2.base 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Request body: {'network': 
{'name': 'test_net', 'admin_state_up': True}} prepare_request_body 
/usr/lib/python3.6/site-packages/neutron/api/v2/base.py:729
2021-07-26 10:52:26.532 19 INFO neutron.quota 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Loaded quota_driver: 
.
2021-07-26 10:52:26.560 19 DEBUG neutron.pecan_wsgi.hooks.quota_enforcement 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Made reservation on behalf 
of 20ad786fb60d459a9ac43bea8623d8b3 for: {'network': 1} before 
/usr/lib/python3.6/site-packages/neutron/pecan_wsgi/hooks/quota_enforcement.py:55
2021-07-26 10:52:26.615 19 DEBUG neutron.policy 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Loaded default policies 
from ['neutron'] under neutron.policies entry points register_rules 
/usr/lib/python3.6/site-packages/neutron/policy.py:75
2021-07-26 10:52:26.671 19 DEBUG neutron_lib.callbacks.manager 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler--9223372036846591125']
 for network, before_create _notify_loop 
/usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
2021-07-26 10:52:26.732 19 DEBUG neutron.plugins.ml2.drivers.helpers 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] geneve segment allocate 
from pool success with {'geneve_vni': 32248}  
allocate_partially_specified_segment 
/usr/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/helpers.py:155
2021-07-26 10:52:26.747 19 DEBUG neutron_lib.callbacks.manager 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks 
['neutron.services.segments.db._add_segment_host_mapping_for_segment--9223363271446231241',
 
'neutron.plugins.ml2.plugin.Ml2Plugin._handle_segment_change--9223372036848473802']
 for segment, precommit_create _notify_loop 
/usr/lib/python3.6/site-packages/neutron_lib/callbacks/manager.py:193
2021-07-26 10:52:26.747 19 INFO neutron.db.segments_db 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Added segment 
71185329-c337-40d3-b4b6-42797d1d76d7 of type geneve for network 
8313f893-f5ca-4dfd-8a3a-b87a305b10ef
2021-07-26 10:52:26.813 19 DEBUG neutron_lib.callbacks.manager 
[req-67d890d1-889e-40c7-a380-99396f88cd29 1f65b02b06844d59aa18031903301a79 
20ad786fb60d459a9ac43bea8623d8b3 - default default] Notify callbacks 
['neutron.services.qos.qos_plugin.QoSPlugin._validate_create_network_callback--9223372036854650062',
 
'neutron.services.auto_allocate.db._ensure_external_network_default_value_callback-8765406609216']
 for network, precommit_create _noti

[Yahoo-eng-team] [Bug 1935847] [NEW] [RFE] Basic Authentication Support for Standalone Neutron

2021-07-12 Thread Rabi Mishra
Public bug reported:

There are a number of use cases where users would like to run standalone
neutron (at times along with other services, such as Ironic for
baremetal provisioning) but still need some basic authentication for
users accessing the neutron APIs.

Though it is probably possible to deploy neutron behind a web server and
configure that web server for basic authentication, deploying and
configuring a web server can be significant overhead for small
standalone deployments.

Also, projects like TripleO still do not deploy neutron with
httpd+mod_wsgi due to issues encountered earlier. The current proposal
for a lightweight TripleO undercloud running standalone neutron with
basic authentication would benefit from this feature.

It's possible to implement a simple basic auth middleware that is
non-invasive and provides the desired feature for standalone neutron.
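Such a middleware could be as small as the following plain-WSGI sketch. The class and names are hypothetical, not the eventual neutron implementation, and real credentials would come from an htpasswd-style file rather than a dict:

```python
import base64

class BasicAuthMiddleware:
    """Minimal sketch of a non-invasive WSGI basic-auth middleware.

    Hypothetical illustration only; it assumes a {'user': 'password'}
    mapping instead of a proper credential store."""

    def __init__(self, app, users):
        self.app = app
        self.users = users

    def __call__(self, environ, start_response):
        header = environ.get('HTTP_AUTHORIZATION', '')
        if header.startswith('Basic '):
            try:
                user, _, password = base64.b64decode(
                    header[6:]).decode('utf-8').partition(':')
            except (ValueError, UnicodeDecodeError):
                user = password = None
            if user and self.users.get(user) == password:
                # Valid credentials: hand off to the wrapped application.
                return self.app(environ, start_response)
        # Missing or bad credentials: challenge the client.
        start_response('401 Unauthorized',
                       [('WWW-Authenticate', 'Basic realm="neutron"')])
        return [b'Authentication required']

def demo_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

app = BasicAuthMiddleware(demo_app, {'admin': 'secret'})
```

Being ordinary WSGI, such a middleware could be inserted into the paste pipeline without touching the API code, which is what makes it non-invasive.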

** Affects: neutron
 Importance: Undecided
 Status: In Progress


** Tags: rfe

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1935847

Title:
  [RFE] Basic Authentication Support for Standalone Neutron

Status in neutron:
  In Progress

Bug description:
  There are a number of use cases where users would like to run
  standalone neutron (at times along with other services, such as Ironic
  for baremetal provisioning) but still need some basic authentication
  for users accessing the neutron APIs.

  Though it is probably possible to deploy neutron behind a web server
  and configure that web server for basic authentication, deploying and
  configuring a web server can be significant overhead for small
  standalone deployments.

  Also, projects like TripleO still do not deploy neutron with
  httpd+mod_wsgi due to issues encountered earlier. The current proposal
  for a lightweight TripleO undercloud running standalone neutron with
  basic authentication would benefit from this feature.

  It's possible to implement a simple basic auth middleware that is
  non-invasive and provides the desired feature for standalone neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1935847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1934039] [NEW] Neutron with noauth authentication strategy needs fake 'project_id' in request body

2021-06-29 Thread Rabi Mishra
Public bug reported:

Neutron can be deployed standalone without keystone using
auth_strategy=noauth. However, because policy enforcement requires
resources [1] to have a tenant_id/project_id in the request body or an
'X_PROJECT_ID' header, one cannot create resources without providing a
fake project_id in POST requests.

Neutron should remove the need for requests to carry a fake project_id.
Also, neutron does not currently support the 'http_basic' auth_strategy,
which would be a good addition.

[1] https://github.com/openstack/neutron-
lib/blob/master/neutron_lib/api/definitions/port.py#L90
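The enforcement described above can be illustrated with a plain-Python sketch. The helper name and logic here are hypothetical simplifications; the real check is spread across neutron-lib's API definitions and the policy engine:

```python
def extract_project_id(body, headers, resource='network'):
    """Hypothetical sketch: resolve the project id used for policy
    enforcement from the resource body, falling back to the
    X_PROJECT_ID header."""
    attrs = body.get(resource, {})
    project = (attrs.get('project_id')
               or attrs.get('tenant_id')
               or headers.get('X_PROJECT_ID'))
    if not project:
        # With auth_strategy=noauth there is no keystone context to
        # fill this in, so the request is rejected.
        raise ValueError('project_id is required')
    return project

# The workaround today: callers invent a project id themselves.
body = {'network': {'name': 'net1', 'project_id': 'fake-project'}}
assert extract_project_id(body, {}) == 'fake-project'
```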

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934039

Title:
  Neutron with noauth authentication strategy needs fake 'project_id' in
  request body

Status in neutron:
  New

Bug description:
  Neutron can be deployed standalone without keystone using
  auth_strategy=noauth. However, because policy enforcement requires
  resources [1] to have a tenant_id/project_id in the request body or an
  'X_PROJECT_ID' header, one cannot create resources without providing
  a fake project_id in POST requests.

  Neutron should remove the need for requests to carry a fake project_id.
  Also, neutron does not currently support the 'http_basic' auth_strategy,
  which would be a good addition.

  [1] https://github.com/openstack/neutron-
  lib/blob/master/neutron_lib/api/definitions/port.py#L90

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934039/+subscriptions



[Yahoo-eng-team] [Bug 1840291] [NEW] keystone does not retry on DbDeadlock [HTTP 500] for delete_credential_for_user

2019-08-15 Thread Rabi Mishra
ne 269, in 
_execute_on_connection
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi return 
connection._execute_clauseelement(self, multiparams, params)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in 
_execute_clauseelement
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi compiled_sql, 
distilled_params
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in 
_execute_context
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi context)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409, in 
_handle_dbapi_exception
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
util.raise_from_cause(newraise, exc_info)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in 
raise_from_cause
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
reraise(type(exception), exception, tb=exc_tb, cause=cause)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in 
_execute_context
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi context)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in 
do_execute
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
cursor.execute(statement, parameters)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi result = 
self._query(query)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi conn.query(q)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in query
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in 
_read_query_result
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi result.read()
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in read
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi first_packet = 
self.connection._read_packet()
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1014, in 
_read_packet
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
packet.check_error()
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in 
check_error
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi 
err.raise_mysql_exception(self._data)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi raise 
errorclass(errno, errval)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi DBDeadlock: 
(pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting 
transaction') [SQL: u'DELETE FROM credential WHERE credential.user_id = 
%(user_id_1)s'] [parameters: {u'user_id_1': 
u'd7830b696f8b49ce86770ba7b97b64fc'}] (Background on this error at: 
http://sqlalche.me/e/2j85)
2019-08-14 03:34:15.264 199385 ERROR keystone.common.wsgi

** Affects: keystone
 Importance: Undecided
 Assignee: Rabi Mishra (rabi)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Rabi Mishra (rabi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1840291

Title:
   keystone does not retry on DbDeadlock [HTTP 500] for
  delete_credential_for_user

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  traceback:

  We do have this retry for the identity backend via 
  
https://github.com/openstack/keystone/commit/e439476c1e434587122053a5c02c9ee4908e8b7c,
 but not for credential bac
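The identity-backend fix referenced above applies a retry-on-deadlock decorator; the same pattern could cover the credential backend. The sketch below uses plain-Python stand-ins for the oslo.db machinery (oslo.db ships the real `wrap_db_retry` decorator and `DBDeadlock` exception):

```python
import functools
import time

class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def wrap_db_retry(max_retries=3, interval=0.0):
    """Sketch of the retry-on-deadlock pattern: rerun the DB operation
    when it deadlocks, up to max_retries times."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries:
                        raise  # retries exhausted: surface the deadlock
                    time.sleep(interval)
        return wrapper
    return decorator

attempts = {'n': 0}

@wrap_db_retry(max_retries=3)
def delete_credential_for_user(user_id):
    attempts['n'] += 1
    if attempts['n'] < 3:
        # Simulate 'Lock wait timeout exceeded; try restarting transaction'
        raise DBDeadlock("Lock wait timeout exceeded")
    return 'deleted'

assert delete_credential_for_user('d7830b696f8b49ce86770ba7b97b64fc') == 'deleted'
```

With the decorator in place the transient deadlock is retried instead of bubbling up as an HTTP 500.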

[Yahoo-eng-team] [Bug 1731395] Re: test_server_cfn_init and test_server_signal_userdata_format_software_config failing frequently

2017-11-09 Thread Rabi Mishra
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1731395

Title:
  test_server_cfn_init and
  test_server_signal_userdata_format_software_config failing frequently

Status in OpenStack Heat:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Traceback:

  2017-11-10 03:07:31.357527 | primary | 2017-11-10 03:07:31.357 | 
heat_integrationtests.scenario.test_server_signal.ServerSignalIntegrationTest.test_server_signal_userdata_format_software_config
  2017-11-10 03:07:31.359064 | primary | 2017-11-10 03:07:31.358 | 

  2017-11-10 03:07:31.360609 | primary | 2017-11-10 03:07:31.360 |
  2017-11-10 03:07:31.362409 | primary | 2017-11-10 03:07:31.361 | Captured 
traceback:
  2017-11-10 03:07:31.364077 | primary | 2017-11-10 03:07:31.363 | 
~~~
  2017-11-10 03:07:31.365498 | primary | 2017-11-10 03:07:31.365 | 
Traceback (most recent call last):
  2017-11-10 03:07:31.366965 | primary | 2017-11-10 03:07:31.366 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 387, in 
_stack_delete
  2017-11-10 03:07:31.368528 | primary | 2017-11-10 03:07:31.368 | 
success_on_not_found=True)
  2017-11-10 03:07:31.370051 | primary | 2017-11-10 03:07:31.369 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 368, in 
_wait_for_stack_status
  2017-11-10 03:07:31.371436 | primary | 2017-11-10 03:07:31.371 | 
fail_regexp):
  2017-11-10 03:07:31.372777 | primary | 2017-11-10 03:07:31.372 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 332, in 
_verify_status
  2017-11-10 03:07:31.374123 | primary | 2017-11-10 03:07:31.373 | 
stack_status_reason=stack.stack_status_reason)
  2017-11-10 03:07:31.375699 | primary | 2017-11-10 03:07:31.375 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
ServerSignalIntegrationTest-1700906107 is in DELETE_FAILED status due to 
'Resource DELETE failed: ResourceInError: resources.server: Went to status 
ERROR due to "Server Se-gnalIntegrationTest-1700906107-server-l5mj3wi5kfci 
delete failed: (404) Port id 387d58a6-7a65-4514-9695-18bfcb2302f5 could not be 
found."'

  Noticed at:
  
http://logs.openstack.org/40/509140/22/check/legacy-heat-dsvm-functional-orig-mysql-lbaasv2/b0e5f36/job-output.txt.gz

  http://logs.openstack.org/98/509098/19/check/heat-functional-convg-
  mysql-lbaasv2-amqp1/df22003/job-output.txt.gz

  From neutron logs:
  
http://logs.openstack.org/40/509140/22/check/legacy-heat-dsvm-functional-orig-mysql-lbaasv2/b0e5f36/logs/screen-q-svc.txt.gz#_Nov_10_03_12_34_409860

  Probably it's https://review.openstack.org/#/c/505613/

  Testing the revert with https://review.openstack.org/518834

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1731395/+subscriptions



[Yahoo-eng-team] [Bug 1723856] Re: lbaasv2 tests fail with error

2017-10-16 Thread Rabi Mishra
So reverting 4f627b4e8dfe699944a196fe90e0642cced6278f fixes the lbaas
issue and hence the heat gate.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: heat
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723856

Title:
  lbaasv2 tests fail with error

Status in OpenStack Heat:
  New
Status in neutron:
  New

Bug description:
  Noticed at:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-lbaasv2/dcd512d/job-output.txt.gz

  
  lbaasv2 agent log:

  http://logs.openstack.org/52/511752/3/check/legacy-heat-dsvm-
  functional-convg-mysql-
  lbaasv2/dcd512d/logs/screen-q-lbaasv2.txt.gz?#_Oct_16_02_26_51_171646

  
  May be due to https://review.openstack.org/#/c/505701/

  traceback:

  2017-10-16 02:45:43.838922 | primary | 2017-10-16 02:45:43.838 | 
==
  2017-10-16 02:45:43.840365 | primary | 2017-10-16 02:45:43.840 | Failed 2 
tests - output below:
  2017-10-16 02:45:43.842320 | primary | 2017-10-16 02:45:43.841 | 
==
  2017-10-16 02:45:43.843926 | primary | 2017-10-16 02:45:43.843 |
  2017-10-16 02:45:43.845738 | primary | 2017-10-16 02:45:43.845 | 
heat_integrationtests.functional.test_lbaasv2.LoadBalancerv2Test.test_create_update_loadbalancer
  2017-10-16 02:45:43.847384 | primary | 2017-10-16 02:45:43.846 | 

  2017-10-16 02:45:43.848836 | primary | 2017-10-16 02:45:43.848 |
  2017-10-16 02:45:43.850193 | primary | 2017-10-16 02:45:43.849 | Captured 
traceback:
  2017-10-16 02:45:43.851909 | primary | 2017-10-16 02:45:43.851 | 
~~~
  2017-10-16 02:45:43.853340 | primary | 2017-10-16 02:45:43.852 | 
Traceback (most recent call last):
  2017-10-16 02:45:43.855053 | primary | 2017-10-16 02:45:43.854 |   File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_lbaasv2.py", line 
109, in test_create_update_loadbalancer
  2017-10-16 02:45:43.856727 | primary | 2017-10-16 02:45:43.856 | 
parameters=parameters)
  2017-10-16 02:45:43.858396 | primary | 2017-10-16 02:45:43.857 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 437, in 
update_stack
  2017-10-16 02:45:43.859969 | primary | 2017-10-16 02:45:43.859 | 
self._wait_for_stack_status(**kwargs)
  2017-10-16 02:45:43.861455 | primary | 2017-10-16 02:45:43.861 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 368, in 
_wait_for_stack_status
  2017-10-16 02:45:43.862957 | primary | 2017-10-16 02:45:43.862 | 
fail_regexp):
  2017-10-16 02:45:43.864506 | primary | 2017-10-16 02:45:43.864 |   File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 327, in 
_verify_status
  2017-10-16 02:45:43.866142 | primary | 2017-10-16 02:45:43.865 | 
stack_status_reason=stack.stack_status_reason)
  2017-10-16 02:45:43.867842 | primary | 2017-10-16 02:45:43.867 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
LoadBalancerv2Test-1022777367/f0a78a75-c1ed-4921-a7f7-c4028f3f60c3 is in 
UPDATE_FAILED status due to 'Resource UPDATE failed: ResourceInError: 
resources.loadbalancer: Went to status ERROR due to "Unknown"'
  2017-10-16 02:45:43.869183 | primary | 2017-10-16 02:45:43.868 |
  2017-10-16 02:45:43.870571 | primary | 2017-10-16 02:45:43.870 |
  2017-10-16 02:45:43.872501 | primary | 2017-10-16 02:45:43.872 | 
heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.874213 | primary | 2017-10-16 02:45:43.873 | 

  2017-10-16 02:45:43.875784 | primary | 2017-10-16 02:45:43.875 |
  2017-10-16 02:45:43.877352 | primary | 2017-10-16 02:45:43.876 | Captured 
traceback:
  2017-10-16 02:45:43.878767 | primary | 2017-10-16 02:45:43.878 | 
~~~
  2017-10-16 02:45:43.880302 | primary | 2017-10-16 02:45:43.879 | 
Traceback (most recent call last):
  2017-10-16 02:45:43.881941 | primary | 2017-10-16 02:45:43.881 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 97, in test_autoscaling_loadbalancer_neutron
  2017-10-16 02:45:43.883543 | primary | 2017-10-16 02:45:43.883 | 
self.check_num_responses(lb_url, 1)
  2017-10-16 02:45:43.884968 | primary | 2017-10-16 02:45:43.884 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 51, in check_num_responses
  2017-10-16 02:45:43.886354 | primary | 2017-10-16 02:45:43.885 | 
self.assertEqual(expected_num, len(resp))
  2017-10-16 02:45:43.887791 | primary | 2017-10-16 02:45:43.887 

[Yahoo-eng-team] [Bug 1716721] Re: "OS::Neutron::Subnet" ignore gateway_ip when subnetpool used

2017-09-12 Thread Rabi Mishra
I think heat is sending 'gateway_ip': None in the request, but neutron
is ignoring it.

2017-09-13 05:22:44.089 DEBUG neutron.api.v2.base 
[req-c7b87873-d640-4ea6-ae80-34843c321d0a demo demo] Request body: {'subnet': 
{'subnetpool_id': 'e59a988e-0076-436b-b147-e78bb9e86e77', 'dns_nameserve
rs': [], 'name': 'test_stack-test_subnet1-zrfaogklhwls', 'network_id': 
'cd2cd3be-5d0a-448e-8e8c-73371f2cbce3', 'enable_dhcp': True, 'gateway_ip': 
None, 'ip_version': 4}} from (pid=13385) prepare_reque
st_body /opt/stack/neutron/neutron/api/v2/base.py:695


** Project changed: heat => neutron

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716721

Title:
  "OS::Neutron::Subnet" ignore gateway_ip when subnetpool used

Status in OpenStack Heat:
  New
Status in neutron:
  New

Bug description:
  heat version - 8.0.4

  Step to reproduce:
  1. template:
  heat_template_version: 2017-02-24

  description: test template

  resources:
test_subnetpool:
  type: OS::Neutron::SubnetPool
  properties:
default_prefixlen: 25
max_prefixlen: 32
min_prefixlen: 22
prefixes:
  - "192.168.0.0/16"
test_net1:
  type: OS::Neutron::Net
test_subnet1:
  type: OS::Neutron::Subnet
  properties:
network: { get_resource: test_net1 }
ip_version: 4
subnetpool: { get_resource: test_subnetpool }
gateway_ip: null

  2. create stack
  3. the created subnet has gateway IP 192.168.0.1, but a disabled gateway was expected

  Because the gateway_ip property is ignored when a subnetpool is
  present, there is no way to create a subnet without a gateway from a
  subnetpool.
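The crux is distinguishing "gateway_ip omitted" from "gateway_ip explicitly null" when a subnetpool supplies defaults. A plain-Python sketch of the desired semantics (the helper name and sentinel are hypothetical, not neutron's actual code):

```python
# Sentinel distinguishing "caller did not pass gateway_ip" from an
# explicit None (which should mean: no gateway at all).
UNSET = object()

def resolve_gateway(requested=UNSET, pool_default="192.168.0.1"):
    """Hypothetical sketch of the expected behavior: an explicit None
    disables the gateway instead of falling back to the subnetpool's
    default."""
    if requested is UNSET:
        return pool_default   # property omitted: use the pool default
    return requested          # explicit value, including None (disabled)

assert resolve_gateway() == "192.168.0.1"   # omitted -> pool default
assert resolve_gateway(requested=None) is None  # explicit null -> disabled
```

The bug report suggests neutron currently collapses both cases to the pool default, making the explicit null (as heat sends it) a no-op.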

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1716721/+subscriptions



[Yahoo-eng-team] [Bug 1703856] Re: 502 Bad gateway error on image-create

2017-07-13 Thread Rabi Mishra
I think we're encountering this issue intermittently in our gate.

http://logs.openstack.org/20/480820/10/gate/gate-heat-dsvm-functional-
convg-mysql-lbaasv2-ubuntu-
xenial/69db418/console.html#_2017-07-13_08_01_38_111555

27 failures in last 7 days.

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22The%20proxy%20server%20received%20an%20invalid%3A%20response%20from%20an%20upstream%20server%5C%22%20AND%20project%3A%20%5C%22openstack%2Fheat%5C%22


** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1703856

Title:
  502 Bad gateway error on image-create

Status in Glance:
  Confirmed
Status in heat:
  New

Bug description:
  
  The glance code that I am using is from the upstream master branch
(Pike); I pulled down the latest code this morning and can still
reproduce this problem.

  Up until about 2 weeks ago, I was able to upload my database image
  into glance using this command:

  glance image-create --name 'Db 12.1.0.2' --file
  Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-
  format qcow2

  However, now it fails as follows:

   glance --debug  image-create --name 'Db 12.1.0.2' --file
  Oracle12201DBRAC_x86_64-xvdb.qcow2 --container-format bare --disk-
  format qcow2

  DEBUG:keystoneauth.session:REQ: curl -g -i -X GET 
http://172.16.35.10/identity -H "Accept: application/json" -H "User-Agent: 
glance keystoneauth1/2.21.0 python-requests/2.18.1 CPython/2.7.12"
  DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.35.10
  DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "GET /identity HTTP/1.1" 
300 606
  DEBUG:keystoneauth.session:RESP: [300] Date: Wed, 12 Jul 2017 14:26:39 GMT 
Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token Content-Type: 
application/json Content-Length: 606 Connection: close 
  RESP BODY: {"versions": {"values": [{"status": "stable", "updated": 
"2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.8", "links": 
[{"href": "http://172.16.35.10/identity/v3/";, "rel": "self"}]}, {"status": 
"deprecated", "updated": "2016-08-04T00:00:00Z", "media-types": [{"base": 
"application/json", "type": "application/vnd.openstack.identity-v2.0+json"}], 
"id": "v2.0", "links": [{"href": "http://172.16.35.10/identity/v2.0/";, "rel": 
"self"}, {"href": "https://docs.openstack.org/";, "type": "text/html", "rel": 
"describedby"}]}]}}

  DEBUG:keystoneauth.identity.v3.base:Making authentication request to 
http://172.16.35.10/identity/v3/auth/tokens
  DEBUG:urllib3.connectionpool:Resetting dropped connection: 172.16.35.10
  DEBUG:urllib3.connectionpool:http://172.16.35.10:80 "POST 
/identity/v3/auth/tokens HTTP/1.1" 201 4893
  DEBUG:keystoneauth.identity.v3.base:{"token": {"is_domain": false, "methods": 
["password"], "roles": [{"id": "325205c52aba4b31801e2d71ec95483b", "name": 
"admin"}], "expires_at": "2017-07-12T15:26:40.00Z", "project": {"domain": 
{"id": "default", "name": "Default"}, "id": "4aa1233111e140b2a1e4ba170881f092", 
"name": "demo"}, "catalog": [{"endpoints": [{"url": 
"http://172.16.35.10/image";, "interface": "public", "region": "RegionOne", 
"region_id": "RegionOne", "id": "0d10d85bc3ae4e13a49ed344fcf6f737"}], "type": 
"image", "id": "01c2acd1845d4dd28c5b69351fa0dbf3", "name": "glance"}, 
{"endpoints": [{"url": 
"http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"0fbba7f276e44921ba112edd1e157561"}, {"url": 
"http://172.16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"72abdff47e2940f09db32720b709d01f"}, {"url": "http://172
 .16.35.10:8004/v1/4aa1233111e140b2a1e4ba170881f092", "interface": "admin", 
"region": "RegionOne", "region_id": "RegionOne", "id": 
"d2789811c71342d69d69e45c09268ebc"}], "type": "orchestration", "id": 
"343101b65cba48afafb5b70fcbae5c3d", "name": "heat"}, {"endpoints": [{"url": 
"http://172.16.35.10/compute/v2/4aa1233111e140b2a1e4ba170881f092";, "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d7fe183ce05d46d986c7ec7600b583a5"}], "type": "compute_legacy", "id": 
"3d75e8b88ed14f95b162b5398acfde82", "name": "nova_legacy"}, {"endpoints": 
[{"url": "http://172.16.35.10:8082";, "interface": "admin", "region": 
"RegionOne", "region_id": "RegionOne", "id": 
"65e5e92c5646468583f033cfb05ae0cb"}, {"url": "http://172.16.35.10:8082";, 
"interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": 
"8cbae4cbce354314aa5f2b5e5c4e4592"}, {"url": "http://172.16.35.10:8082";, 
"interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": 
"d761a53278654a
 c690fb56b42752c1a4"}], "type": "application-cat

[Yahoo-eng-team] [Bug 1698355] Re: py35 dsvm job failing with RemoteDisconnected error

2017-07-03 Thread Rabi Mishra
** Changed in: heat
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698355

Title:
  py35 dsvm job failing with RemoteDisconnected error

Status in heat:
  Invalid
Status in neutron:
  Fix Released
Status in oslo.serialization:
  Confirmed

Bug description:
  traceback:

  2017-06-16 10:24:47.339195 | 2017-06-16 10:24:47.338 | 
  2017-06-16 10:24:47.340517 | 2017-06-16 10:24:47.340 | 
heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
  2017-06-16 10:24:47.342125 | 2017-06-16 10:24:47.341 | 

  2017-06-16 10:24:47.343471 | 2017-06-16 10:24:47.343 | 
  2017-06-16 10:24:47.344919 | 2017-06-16 10:24:47.344 | Captured traceback:
  2017-06-16 10:24:47.346272 | 2017-06-16 10:24:47.346 | ~~~
  2017-06-16 10:24:47.347614 | 2017-06-16 10:24:47.347 | b'Traceback (most 
recent call last):'
  2017-06-16 10:24:47.348873 | 2017-06-16 10:24:47.348 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 376, in 
_stack_delete'
  2017-06-16 10:24:47.350049 | 2017-06-16 10:24:47.349 | b'
success_on_not_found=True)'
  2017-06-16 10:24:47.351627 | 2017-06-16 10:24:47.351 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 357, in 
_wait_for_stack_status'
  2017-06-16 10:24:47.352791 | 2017-06-16 10:24:47.352 | b'
fail_regexp):'
  2017-06-16 10:24:47.353977 | 2017-06-16 10:24:47.353 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 321, in 
_verify_status'
  2017-06-16 10:24:47.355411 | 2017-06-16 10:24:47.355 | b'
stack_status_reason=stack.stack_status_reason)'
  2017-06-16 10:24:47.356920 | 2017-06-16 10:24:47.356 | 
b"heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerv2Test-1133164547 is in DELETE_FAILED status due to 
'Resource DELETE failed: ConnectFailure: resources.sec_group: Unable to 
establish connection to 
http://10.1.43.45:9696/v2.0/security-group-rules/8d33f0cf-d473-455a-8fe2-978c64af5e0d:
 ('Connection aborted.', RemoteDisconnected('Remote end closed connection 
without response',))'"
  2017-06-16 10:24:47.358227 | 2017-06-16 10:24:47.357 | b''

  http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
  convg-mysql-lbaasv2-py35-ubuntu-
  xenial/e07f32f/console.html#_2017-06-16_10_24_47_356920

  
  heat engine log:

  http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
  convg-mysql-lbaasv2-py35-ubuntu-
  xenial/e07f32f/logs/screen-h-eng.txt.gz?level=INFO#_Jun_16_10_24_22_312023

  
  In the same job nova is failing to connect to neutron with the same error

  
  
http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial/e07f32f/logs/screen-n-api.txt.gz?level=ERROR#_Jun_16_10_24_22_633655

  
  It seems to be happening mostly for security-group and port/floating-IP
operations.

  
  Not sure if this is a neutronclient/urllib3 issue (I see a new openstacksdk
release[1]) or something specific to changes merged recently to neutron[2].

  
  [1] 
https://github.com/openstack/requirements/commit/1b30d517efd442867888359e4619d822f13a3cf2

  [2] https://review.openstack.org/#/q/topic:bp/push-notifications

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1698355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1698355] [NEW] py35 dsvm job failing with RemoteDisconnected error

2017-06-16 Thread Rabi Mishra
Public bug reported:

traceback:

2017-06-16 10:24:47.339195 | 2017-06-16 10:24:47.338 | 
2017-06-16 10:24:47.340517 | 2017-06-16 10:24:47.340 | 
heat_integrationtests.scenario.test_autoscaling_lbv2.AutoscalingLoadBalancerv2Test.test_autoscaling_loadbalancer_neutron
2017-06-16 10:24:47.342125 | 2017-06-16 10:24:47.341 | 

2017-06-16 10:24:47.343471 | 2017-06-16 10:24:47.343 | 
2017-06-16 10:24:47.344919 | 2017-06-16 10:24:47.344 | Captured traceback:
2017-06-16 10:24:47.346272 | 2017-06-16 10:24:47.346 | ~~~
2017-06-16 10:24:47.347614 | 2017-06-16 10:24:47.347 | b'Traceback (most 
recent call last):'
2017-06-16 10:24:47.348873 | 2017-06-16 10:24:47.348 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 376, in 
_stack_delete'
2017-06-16 10:24:47.350049 | 2017-06-16 10:24:47.349 | b'
success_on_not_found=True)'
2017-06-16 10:24:47.351627 | 2017-06-16 10:24:47.351 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 357, in 
_wait_for_stack_status'
2017-06-16 10:24:47.352791 | 2017-06-16 10:24:47.352 | b'fail_regexp):'
2017-06-16 10:24:47.353977 | 2017-06-16 10:24:47.353 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 321, in 
_verify_status'
2017-06-16 10:24:47.355411 | 2017-06-16 10:24:47.355 | b'
stack_status_reason=stack.stack_status_reason)'
2017-06-16 10:24:47.356920 | 2017-06-16 10:24:47.356 | 
b"heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerv2Test-1133164547 is in DELETE_FAILED status due to 
'Resource DELETE failed: ConnectFailure: resources.sec_group: Unable to 
establish connection to 
http://10.1.43.45:9696/v2.0/security-group-rules/8d33f0cf-d473-455a-8fe2-978c64af5e0d:
 ('Connection aborted.', RemoteDisconnected('Remote end closed connection 
without response',))'"
2017-06-16 10:24:47.358227 | 2017-06-16 10:24:47.357 | b''

http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
convg-mysql-lbaasv2-py35-ubuntu-
xenial/e07f32f/console.html#_2017-06-16_10_24_47_356920


heat engine log:

http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-
convg-mysql-lbaasv2-py35-ubuntu-
xenial/e07f32f/logs/screen-h-eng.txt.gz?level=INFO#_Jun_16_10_24_22_312023


In the same job nova is failing to connect to neutron with the same error


http://logs.openstack.org/65/473765/1/check/gate-heat-dsvm-functional-convg-mysql-lbaasv2-py35-ubuntu-xenial/e07f32f/logs/screen-n-api.txt.gz?level=ERROR#_Jun_16_10_24_22_633655


It seems to be happening mostly for security-group and port/floating-IP operations.


Not sure if this is a neutronclient/urllib3 issue (I see a new openstacksdk
release[1]) or something specific to changes merged recently to neutron[2].


[1] 
https://github.com/openstack/requirements/commit/1b30d517efd442867888359e4619d822f13a3cf2

[2] https://review.openstack.org/#/q/topic:bp/push-notifications

** Affects: heat
 Importance: Critical
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Changed in: heat
   Importance: Undecided => Critical

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1698355

Title:
  py35 dsvm job failing with RemoteDisconnected error

Status in heat:
  New
Status in neutron:
  New


[Yahoo-eng-team] [Bug 1585858] Re: InterfaceDetachFailed: resources.server: Failed to detach interface

2017-06-05 Thread Rabi Mishra
This is happening very frequently now. Every time with a different
libvirt error.

Like the one below:

http://logs.openstack.org/93/462293/3/gate/gate-heat-dsvm-functional-
convg-mysql-lbaasv2-non-apache-ubuntu-
xenial/ea8c446/logs/screen-n-cpu.txt.gz?level=ERROR


Jun 06 05:25:27.340500 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall [-] Dynamic interval looping 
call 'oslo_service.loopingcall._func' failed: libvirtError: internal error: End 
of file from qemu monitor
Jun 06 05:25:27.340614 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall Traceback (most recent call 
last):
Jun 06 05:25:27.340696 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137, 
in _run_loop
Jun 06 05:25:27.340773 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall result = 
func(*self.args, **self.kw)
Jun 06 05:25:27.340850 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 394, 
in _func
Jun 06 05:25:27.340927 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall result = f(*args, 
**kwargs)
Jun 06 05:25:27.341001 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 446, in 
_do_wait_and_retry_detach
Jun 06 05:25:27.341091 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall 
_try_detach_device(config, persistent=False, live=live)
Jun 06 05:25:27.341172 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 427, in 
_try_detach_device
Jun 06 05:25:27.341257 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall 
device=alternative_device_name)
Jun 06 05:25:27.341347 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Jun 06 05:25:27.341424 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall self.force_reraise()
Jun 06 05:25:27.341509 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Jun 06 05:25:27.341602 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall six.reraise(self.type_, 
self.value, self.tb)
Jun 06 05:25:27.341688 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 407, in 
_try_detach_device
Jun 06 05:25:27.341766 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall self.detach_device(conf, 
persistent=persistent, live=live)
Jun 06 05:25:27.341843 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 471, in detach_device
Jun 06 05:25:27.341930 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall 
self._domain.detachDeviceFlags(device_xml, flags=flags)
Jun 06 05:25:27.342391 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
Jun 06 05:25:27.342483 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall result = 
proxy_call(self._autowrap, f, *args, **kwargs)
Jun 06 05:25:27.342570 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
Jun 06 05:25:27.342661 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall rv = execute(f, *args, 
**kwargs)
Jun 06 05:25:27.342738 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
Jun 06 05:25:27.342814 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall six.reraise(c, e, tb)
Jun 06 05:25:27.342897 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova-compute[4359]: ERROR oslo.service.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
Jun 06 05:25:27.342985 ubuntu-xenial-osic-cloud1-s3500-9151671 
nova

[Yahoo-eng-team] [Bug 1694371] Re: test_stack_snapshot_restore failing intermittently

2017-05-30 Thread Rabi Mishra
I can also see it's timing out waiting for the vif plugging callback in the
logs, i.e. "Timeout waiting for vif plugging callback for instance
9456073b-a8d6-44ff-b86a-620fbc68a484". I have only seen these failures for
jobs in the osic cloud. Is it just a coincidence, or something to do with the
infra?

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1694371

Title:
  test_stack_snapshot_restore failing intermittently

Status in heat:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  It seems like the server is going to ERROR state during rebuild.

  traceback:

  2017-05-28 04:34:37.767757 | 2017-05-28 04:34:37.767 | Captured traceback:
  2017-05-28 04:34:37.768828 | 2017-05-28 04:34:37.768 | ~~~
  2017-05-28 04:34:37.770099 | 2017-05-28 04:34:37.769 | b'Traceback (most 
recent call last):'
  2017-05-28 04:34:37.771108 | 2017-05-28 04:34:37.770 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_snapshot_restore.py",
 line 74, in test_stack_snapshot_restore'
  2017-05-28 04:34:37.772364 | 2017-05-28 04:34:37.772 | b'
self.stack_restore(stack_identifier, snapshot_id)'
  2017-05-28 04:34:37.773407 | 2017-05-28 04:34:37.773 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 626, in 
stack_restore'
  2017-05-28 04:34:37.774541 | 2017-05-28 04:34:37.774 | b'
self._wait_for_stack_status(stack_id, wait_for_status)'
  2017-05-28 04:34:37.775568 | 2017-05-28 04:34:37.775 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 357, in 
_wait_for_stack_status'
  2017-05-28 04:34:37.776642 | 2017-05-28 04:34:37.776 | b'
fail_regexp):'
  2017-05-28 04:34:37.778354 | 2017-05-28 04:34:37.778 | b'  File 
"/opt/stack/new/heat/heat_integrationtests/common/test.py", line 321, in 
_verify_status'
  2017-05-28 04:34:37.779448 | 2017-05-28 04:34:37.779 | b'
stack_status_reason=stack.stack_status_reason)'
  2017-05-28 04:34:37.780644 | 2017-05-28 04:34:37.780 | 
b"heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
StackSnapshotRestoreTest-1374582671/7fb8f800-1545-4e34-a6fa-3e2adbf4443a is in 
RESTORE_FAILED status due to 'Error: resources.my_server: Rebuilding server 
failed, status 'ERROR''"
  2017-05-28 04:34:37.782119 | 2017-05-28 04:34:37.781 | b''

  
  Noticed at:

  http://logs.openstack.org/16/462216/16/check/gate-heat-dsvm-
  functional-convg-mysql-lbaasv2-py35-ubuntu-
  xenial/17c2da9/console.html#_2017-05-28_04_34_37_753094

  
  Looks like a nova issue from the below traceback.

  http://logs.openstack.org/16/462216/16/check/gate-heat-dsvm-
  functional-convg-mysql-lbaasv2-py35-ubuntu-
  xenial/17c2da9/logs/screen-n-cpu.txt.gz?level=ERROR#_May_28_04_14_49_044455

  
  May 28 04:14:49.042877 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6758, in 
_error_out_instance_on_exception
  May 28 04:14:49.042955 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282] yield
  May 28 04:14:49.043027 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2814, in rebuild_instance
  May 28 04:14:49.043100 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282] bdms, recreate, on_shared_storage, 
preserve_ephemeral)
  May 28 04:14:49.043197 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2855, in 
_do_rebuild_instance_with_claim
  May 28 04:14:49.043299 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282] self._do_rebuild_instance(*args, 
**kwargs)
  May 28 04:14:49.043384 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2977, in 
_do_rebuild_instance
  May 28 04:14:49.043458 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282] self._rebuild_default_impl(**kwargs)
  May 28 04:14:49.043534 ubuntu-xenial-osic-cloud1-s3700-9024798 
nova-compute[26709]: ERROR nova.compute.manager [instance: 
45105d34-b970-4ced-968c-a1c4ead5b282]   File 
"/opt/stack/n

[Yahoo-eng-team] [Bug 1691885] Re: Updating Nova::Server with Neutron::Port resource fails

2017-05-23 Thread Rabi Mishra
Looks to me like a neutron issue. Neutron is looking up floatingips for
the port/fixed_ip combination[1], though it is not clear why. The floatingips
extension also appears to be unavailable, as this message suggests: "No
controller found for: floatingips - returning response code 404"

[1] 2017-05-22 23:15:41.822 12962 INFO neutron.wsgi [req-
e54946bc-b617-4505-b963-d65ff8b65483 b429b29882af4f5b95787f646e860ebc
0fbac869a6d144f7abc7ae92788c9eb5 - default default] 192.168.24.1 "GET
/v2.0/floatingips.json?fixed_ip_address=192.168.24.18&port_id=69e79425-38a8-4782-ab29-ed6f7c5ec981
HTTP/1.1" status: 404  len: 285 time: 0.0071950
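Decoding the logged request line shows exactly which filters neutron applied in that lookup; a small stdlib sketch:

```python
# Decode the floating-ip lookup neutron logged for the port, to see the
# query filters it applied (fixed_ip_address + port_id).
from urllib.parse import urlsplit, parse_qs

logged = ("/v2.0/floatingips.json"
          "?fixed_ip_address=192.168.24.18"
          "&port_id=69e79425-38a8-4782-ab29-ed6f7c5ec981")

# parse_qs returns each parameter as a list of values.
filters = parse_qs(urlsplit(logged).query)
print(filters["fixed_ip_address"])  # ['192.168.24.18']
print(filters["port_id"])           # ['69e79425-38a8-4782-ab29-ed6f7c5ec981']
```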

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691885

Title:
  Updating Nova::Server with Neutron::Port resource fails

Status in heat:
  New
Status in neutron:
  New

Bug description:
  A Nova::Server resource that was created with an implicit port cannot
  be updated.

  If I first create the following resource:
  # template1.yaml
  resources:
my_ironic_instance:
  type: OS::Nova::Server
  properties:
key_name: default
image: overcloud-full
flavor: baremetal
networks:
  - network: ctlplane
ip_address: "192.168.24.10"

  And then try to run a stack update with a different ip_address:
  # template2.yaml
  resources:
my_ironic_instance:
  type: OS::Nova::Server
  properties:
key_name: default
image: overcloud-full
flavor: baremetal
networks:
  - network: ctlplane
ip_address: "192.168.24.20"

  This fails with the following error:
  RetryError: resources.my_ironic_instance: RetryError[]

  I also tried assigning an external IP to the Nova::Server created in the 
template1.yaml, but that gave me the same error.
  # template3.yaml
  resources:
instance_port:
  type: OS::Neutron::Port
  properties:
network: ctlplane
fixed_ips:
  - subnet: "ctlplane-subnet"
ip_address: "192.168.24.20"

my_ironic_instance:
  type: OS::Nova::Server
  properties:
key_name: default
image: overcloud-full
flavor: baremetal
networks:
  - network: ctlplane
port: {get_resource: instance_port}

  However, if I first create the Nova::Server resource with an external
  port specified (as in template3.yaml above), then I can update the
  port to a different IP address and Ironic/Neutron does the right thing
  (at least since the recent attach/detach VIF in Ironic code has
  merged). So it appears that you can update a port if the port was
  created externally, but not if the port was created as part of the
  Nova::Server resource.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1691885/+subscriptions



[Yahoo-eng-team] [Bug 1632054] Re: Heat engine doesn't detect lbaas listener failures

2016-10-10 Thread Rabi Mishra
As mentioned in the mail thread referenced in the bug and confirmed by the
lbaas team, this would probably need changes in lbaas first:

- The lbaas api should expose provisioning_status for all top-level objects
(e.g. listener).
- The suggested status api ('show-load-balancer-status-tree'), which could
probably be used in the meantime, has bugs. Therefore we should probably wait
till this is implemented properly in lbaas.
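The distinction being drawn — checking every object's own provisioning_status rather than only the root load balancer's — can be sketched against a status tree of the general shape 'show-load-balancer-status-tree' is meant to return. Everything below (the tree layout and find_errors) is a hypothetical illustration, not the real lbaas API response:

```python
# Walk a (hypothetical) lbaas status tree and collect every object whose
# provisioning_status is ERROR -- the check that loadbalancer-only polling
# misses.
def find_errors(node, path="loadbalancer"):
    errors = []
    if node.get("provisioning_status") == "ERROR":
        errors.append(path)
    # Recurse into child collections; names are illustrative.
    for child_kind in ("listeners", "pools", "members", "healthmonitors"):
        for i, child in enumerate(node.get(child_kind, [])):
            errors.extend(
                find_errors(child, "%s/%s[%d]" % (path, child_kind, i)))
    return errors


tree = {
    "provisioning_status": "ACTIVE",      # root LB looks healthy...
    "listeners": [
        {"provisioning_status": "ERROR",  # ...but a listener failed
         "pools": [{"provisioning_status": "ACTIVE"}]},
    ],
}
print(find_errors(tree))  # ['loadbalancer/listeners[0]']
```

With a check like this, heat would flag the failed listener instead of marking the stack CREATE_COMPLETE.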

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1632054

Title:
  Heat engine doesn't detect lbaas listener failures

Status in heat:
  New
Status in neutron:
  New

Bug description:
  Please refer to the mail-list for comments from other developers,
  https://openstack.nimeyo.com/97427/openstack-neutron-octavia-doesnt-
  detect-listener-failures

  I am trying to use heat to launch lb resources with Octavia as backend. The
  template I used is from
  
https://github.com/openstack/heat-templates/blob/master/hot/lbaasv2/lb_group.yaml
  .

  Following are a few observations:

  1. Even though the Listener was created with ERROR status, heat will still
  go ahead and mark it Creation Complete: the heat code only checks whether
  the root Loadbalancer status changes from PENDING_UPDATE to ACTIVE, and the
  Loadbalancer status will change to ACTIVE regardless of the Listener's
  status.

  2. As the heat engine wouldn't know about the Listener's creation failure,
  it will continue to create Pool/Member/Healthmonitor resources on top of a
  Listener which actually doesn't exist. This causes a few undefined
  behaviors. As a result, those LBaaS resources in ERROR state cannot be
  cleaned up with either the normal neutron api or the heat api.

  3. The bug is introduced here:
  https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/lbaas/listener.py#L188.
  It only checks the provisioning status of the root loadbalancer.
  However the listener itself has its own provisioning status which may
  go into ERROR.

  4. The same scenario applies not only to the listener but also to the pool,
  member, healthmonitor, etc., basically every lbaas resource except the
  loadbalancer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1632054/+subscriptions



[Yahoo-eng-team] [Bug 1629726] Re: recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs

2016-10-03 Thread Rabi Mishra
** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629726

Title:
  recompiled pycparser 2.14 breaks Cinder db sync and Nova UTs

Status in Cinder:
  Confirmed
Status in heat:
  Confirmed
Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/76/380876/1/check/gate-grenade-dsvm-ubuntu-
  xenial/3d5e102/logs/grenade.sh.txt.gz#_2016-10-02_23_32_34_069

  2016-10-02 23:32:34.069 | + lib/cinder:init_cinder:421   :   
/usr/local/bin/cinder-manage --config-file /etc/cinder/cinder.conf db sync
  2016-10-02 23:32:34.691 | Traceback (most recent call last):
  2016-10-02 23:32:34.691 |   File "/usr/local/bin/cinder-manage", line 6, in 

  2016-10-02 23:32:34.691 | from cinder.cmd.manage import main
  2016-10-02 23:32:34.691 |   File 
"/opt/stack/old/cinder/cinder/cmd/manage.py", line 77, in 
  2016-10-02 23:32:34.691 | from cinder import db
  2016-10-02 23:32:34.691 |   File 
"/opt/stack/old/cinder/cinder/db/__init__.py", line 20, in 
  2016-10-02 23:32:34.691 | from cinder.db.api import *  # noqa
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/db/api.py", 
line 43, in 
  2016-10-02 23:32:34.691 | from cinder.api import common
  2016-10-02 23:32:34.691 |   File 
"/opt/stack/old/cinder/cinder/api/common.py", line 30, in 
  2016-10-02 23:32:34.691 | from cinder import utils
  2016-10-02 23:32:34.691 |   File "/opt/stack/old/cinder/cinder/utils.py", 
line 40, in 
  2016-10-02 23:32:34.691 | from os_brick import encryptors
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/__init__.py", line 
16, in 
  2016-10-02 23:32:34.691 | from os_brick.encryptors import nop
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/nop.py", line 16, 
in 
  2016-10-02 23:32:34.691 | from os_brick.encryptors import base
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/os_brick/encryptors/base.py", line 19, 
in 
  2016-10-02 23:32:34.691 | from os_brick import executor
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/os_brick/executor.py", line 21, in 

  2016-10-02 23:32:34.691 | from os_brick.privileged import rootwrap as 
priv_rootwrap
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/os_brick/privileged/__init__.py", line 
13, in 
  2016-10-02 23:32:34.691 | from oslo_privsep import capabilities as c
  2016-10-02 23:32:34.691 |   File 
"/usr/local/lib/python2.7/dist-packages/oslo_privsep/capabilities.py", line 73, 
in 
  2016-10-02 23:32:34.691 | ffi.cdef(CDEF)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 105, in cdef
  2016-10-02 23:32:34.692 | self._cdef(csource, override=override, 
packed=packed)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/api.py", line 119, in _cdef
  2016-10-02 23:32:34.692 | self._parser.parse(csource, override=override, 
**options)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 299, in parse
  2016-10-02 23:32:34.692 | self._internal_parse(csource)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 304, in 
_internal_parse
  2016-10-02 23:32:34.692 | ast, macros, csource = self._parse(csource)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 260, in _parse
  2016-10-02 23:32:34.692 | ast = _get_parser().parse(csource)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/cffi/cparser.py", line 40, in 
_get_parser
  2016-10-02 23:32:34.692 | _parser_cache = pycparser.CParser()
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/pycparser/c_parser.py", line 87, in 
__init__
  2016-10-02 23:32:34.692 | outputdir=taboutputdir)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/pycparser/c_lexer.py", line 66, in build
  2016-10-02 23:32:34.692 | self.lexer = lex.lex(object=self, **kwargs)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.py", line 911, in lex
  2016-10-02 23:32:34.692 | lexobj.readtab(lextab, ldict)
  2016-10-02 23:32:34.692 |   File 
"/usr/local/lib/python2.7/dist-packages/pycparser/ply/lex.py", line 233, in 
readtab
  2016-10-02 23:32:34.692 | titem.append((re.compile(pat, 
lextab._lexreflags | re.VERBOSE), _names_to_funcs(func_name, fdict)))
  2016-10-02 23:32:34.692 |   File "/usr/lib/python2.7/re.py", line 194, in 
compile
  2016-10-02 23:32:34.692 | return _co

[Yahoo-eng-team] [Bug 1629830] Re: pycparser-2.14 wheel is wrongly built and raises AssertionError

2016-10-03 Thread Rabi Mishra
*** This bug is a duplicate of bug 1629726 ***
https://bugs.launchpad.net/bugs/1629726

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1629830

Title:
  pycparser-2.14 wheel is wrongly built and raises AssertionError

Status in heat:
  New
Status in Ironic:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Most of our gate jobs are impacted (Nova UTs, Tempest tests, Grenade
  checks) by a new respin of pycparser that raises an AssertionError
  when imported.

  Note that not only Nova but a lot of OpenStack projects are impacted:
  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%20\%22AssertionError:%20sorry,%20but%20this%20version%20only%20supports%20100%20named%20groups\%22&from=24h

  The issue has been logged upstream
  https://github.com/eliben/pycparser/issues/147 but we somehow need to
  downgrade pycparser first in order to stabilize our gates.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1629830/+subscriptions



[Yahoo-eng-team] [Bug 1603860] Re: Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

2016-09-28 Thread Rabi Mishra
** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603860

Title:
  Could not load
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Status in heat:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  It seems the recent change[1] has broken the heat gate.

  The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  could not be loaded (it is used by heat), as you can see in the log[2].

  Heat tests fail with the following error, as they can't reach the lb
  url.

  ---
  2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
  2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
  2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most 
recent call last):
  2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 95, in test_autoscaling_loadbalancer_neutron
  2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 | 
self.check_num_responses(lb_url, 1)
  2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 49, in check_num_responses
  2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 | 
self.assertEqual(expected_num, len(resp))
  2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 | 
self.assertThat(observed, matcher, message)
  2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 | raise 
mismatch_error
  2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | 
testtools.matchers._impl.MismatchError: 1 != 0
  2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 | 
  

  
  [1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
  [2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1603860/+subscriptions



[Yahoo-eng-team] [Bug 1628377] [NEW] test_stack_update_replace_with_ip_rollback failure

2016-09-27 Thread Rabi Mishra
Public bug reported:

The heat integration test test_stack_update_replace_with_ip_rollback failed
with the error below. Though there are no previous occurrences of this error,
I can see db errors in the neutron logs[1].

http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-
convg-mysql-lbaasv2/8b93a55/console.html

2016-09-28 04:06:13.600724 | 2016-09-28 04:06:13.600 | Captured traceback:
2016-09-28 04:06:13.601965 | 2016-09-28 04:06:13.601 | ~~~
2016-09-28 04:06:13.603681 | 2016-09-28 04:06:13.603 | Traceback (most 
recent call last):
2016-09-28 04:06:13.605168 | 2016-09-28 04:06:13.604 |   File 
"/opt/stack/new/heat/heat_integrationtests/functional/test_create_update_neutron_port.py",
 line 119, in test_stack_update_replace_with_ip_rollback
2016-09-28 04:06:13.607511 | 2016-09-28 04:06:13.606 | 
self.assertEqual(_id, new_id)
2016-09-28 04:06:13.608660 | 2016-09-28 04:06:13.608 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 411, in 
assertEqual
2016-09-28 04:06:13.611349 | 2016-09-28 04:06:13.611 | 
self.assertThat(observed, matcher, message)
2016-09-28 04:06:13.613880 | 2016-09-28 04:06:13.613 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 498, in 
assertThat
2016-09-28 04:06:13.616079 | 2016-09-28 04:06:13.615 | raise 
mismatch_error
2016-09-28 04:06:13.617343 | 2016-09-28 04:06:13.617 | 
testtools.matchers._impl.MismatchError: !=:
2016-09-28 04:06:13.619135 | 2016-09-28 04:06:13.618 | reference = 
u'04c2e178-5c96-4f22-9072-faea92fa6560'
2016-09-28 04:06:13.620321 | 2016-09-28 04:06:13.620 | actual= 
u'0255770f-e6a5-45de-b604-10c06f12d42c'

[1]

http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-
convg-mysql-
lbaasv2/8b93a55/logs/screen-q-svc.txt.gz#_2016-09-28_04_06_01_173

2016-09-28 04:06:01.173 4912 DEBUG neutron.callbacks.manager 
[req-f70ab80e-2870-4bbe-aad0-1841203355d8 demo -] Notify callbacks 
[('neutron.db.l3_db._notify_routers_callback-8748586454487', ), 
('neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI._native_event_send_dhcp_notification--9223372036829987228',
 >), 
('neutron.db.l3_dvrscheduler_db._notify_port_delete-8748585805241', )] for port, after_delete _notify_loop 
/opt/stack/new/neutron/neutron/callbacks/manager.py:142
2016-09-28 04:06:01.173 4914 DEBUG neutron.db.api 
[req-dfcdfd65-63b9-45cd-913c-43f6287e2e37 - -] Retry wrapper got retriable 
exception: Traceback (most recent call last):
  File "/opt/stack/new/neutron/neutron/db/api.py", line 119, in wrapped
return f(*dup_args, **dup_kwargs)
  File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1733, in 
update_port_status
context.session.flush()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2019, in flush
self._flush(objects)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2137, in _flush
transaction.rollback(_capture_exception=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", 
line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2101, in _flush
flush_context.execute()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 373, in execute
rec.execute(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 532, in execute
uow
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", 
line 170, in save_obj
mapper, table, update)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", 
line 728, in _emit_update_statements
(table.description, len(records), rows))
StaleDataError: UPDATE statement on table 'standardattributes' expected to 
update 1 row(s); 0 were matched.
 wrapped /opt/stack/new/neutron/neutron/db/api.py:124
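The "Retry wrapper got retriable exception" line above refers to neutron's DB retry decorator, which re-runs an API operation when a concurrent transaction invalidates it (here, a port-status UPDATE racing with a port delete). A deliberately simplified sketch of that pattern, with all names illustrative rather than neutron's actual code:

```python
import itertools


class StaleDataError(Exception):
    """Stand-in for sqlalchemy.orm.exc.StaleDataError."""


def retry_db_errors(max_retries=10):
    # Re-invoke the wrapped function whenever a retriable DB error is
    # raised, up to max_retries attempts, then re-raise.
    def decorator(f):
        def wrapped(*args, **kwargs):
            for attempt in itertools.count(1):
                try:
                    return f(*args, **kwargs)
                except StaleDataError:
                    if attempt >= max_retries:
                        raise
        return wrapped
    return decorator


calls = {'n': 0}


@retry_db_errors()
def update_port_status():
    # Fails twice with StaleDataError (as when a concurrent delete
    # removed the row mid-UPDATE), then succeeds on the third try.
    calls['n'] += 1
    if calls['n'] < 3:
        raise StaleDataError("UPDATE matched 0 rows")
    return 'ACTIVE'
```

In the gate failure above the retry succeeded at the DB layer, but the port had already been replaced, which is why the test saw a different port ID.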

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: heat => neutron

** Description changed:

- test_stack_update_replace_with_ip_rollback failed with below error.
- Though there no previous occurrences of this error, I can see db errors
- in neutron logs[1].
+ The heat integration test test_stack_update_replace_with_ip_rollback failed
+ with the error below. Though there are no previous occurrences of this
+ error, I can see DB errors in the neutron logs[1].
  
- 
- 
http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-convg-mysql-lbaasv2/8b93a55/console.html
+ http://logs.openstack.org/39/377439/2/gate/gate-heat-dsvm-functional-
+ convg-mysql-lbaasv2/8b93a55/console.html
  
  2016-09-28 04:06:13.600724 | 2016-09-28 04:06:13.600 | Captured traceback:
  2016-09-28 04:06:13.601965 | 2016-09-28 04:06:13.601 | ~~~
  2016-09-28 04:06:13.603681 | 2016-09-28 04:06:13.603 | Traceback (most 
recent call last):
  2016-09-28 04:06:13.605168 | 

[Yahoo-eng-team] [Bug 1624976] Re: dsvm jobs broken as neutron fails to start

2016-09-18 Thread Rabi Mishra
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rabi Mishra (rabi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624976

Title:
  dsvm jobs broken as neutron fails to start

Status in heat:
  New
Status in neutron:
  In Progress

Bug description:
  I can see the below error in the neutron logs.

  2016-09-18 23:45:48.183 4787 DEBUG neutron.db.servicetype_db [-] Adding 
provider configuration for service VPN add_provider_configuration 
/opt/stack/new/neutron/neutron/db/servicetype_db.py:52
  2016-09-18 23:45:48.183 4787 ERROR neutron.services.service_base [-] No 
providers specified for 'VPN' service, exiting

  http://logs.openstack.org/27/369827/4/check/gate-heat-dsvm-functional-
  orig-mysql-
  lbaasv2/7bdf656/logs/screen-q-svc.txt.gz#_2016-09-18_23_45_48_183

  Not sure what neutron/vpnaas change has resulted in this failure.

  As we're now using the vpnaas devstack plugin for jobs, we could
  probably stop setting the default service_provider in
  pre_test_hook.sh.
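  For reference, the "No providers specified for 'VPN' service" startup
  error goes away once a provider is declared in neutron's configuration.
  An illustrative entry (the exact driver path depends on the vpnaas
  release in use):

  ```ini
  [service_providers]
  service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
  ```

  The vpnaas devstack plugin is expected to write an equivalent entry
  itself, which is why duplicating it in pre_test_hook.sh is likely
  unnecessary.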

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1624976/+subscriptions



[Yahoo-eng-team] [Bug 1616094] Re: Required attribute 'lb_method' not specified when creating a LBaaSv2

2016-08-29 Thread Rabi Mishra
** Changed in: heat
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1616094

Title:
  Required attribute 'lb_method' not specified when creating a LBaaSv2

Status in heat:
  Invalid
Status in neutron:
  Incomplete
Status in python-neutronclient:
  In Progress

Bug description:
  When creating a LBaaS v2 loadbalancer, listener and pool, I get:

  - s n i p -
  2016-08-23 14:04:32 [pool]: CREATE_FAILED  BadRequest: resources.pool: Failed 
to parse request. Required attribute 'lb_method' not specified
  - s n i p -

  The test stack:

  - s n i p -
  heat_template_version: 2015-04-30
  description: Loadbalancer template

  resources:
    lbaas:
  type: OS::Neutron::LBaaS::LoadBalancer
  properties:
    name: lbaas-test
    description: lbaas-test
    vip_subnet: subnet-97

    listener:
  type: OS::Neutron::LBaaS::Listener
  properties:
    name: listener-test
    description: listener-test
    loadbalancer: { get_resource: lbaas }
    protocol: TCP
    protocol_port: 666

    pool:
  type: OS::Neutron::LBaaS::Pool
  properties:
    name: hapool-test
    description: hapool-test
    listener: { get_resource: listener }
    protocol: TCP
    lb_algorithm: LEAST_CONNECTIONS
  - s n i p -

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1616094/+subscriptions



[Yahoo-eng-team] [Bug 1603859] [NEW] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

2016-07-17 Thread Rabi Mishra
Public bug reported:

It seems the recent change[1] has broken the heat gate.

The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver could
not be loaded (used by heat), as you can see in the log[2].

Heat tests fail with the following error, as they can't reach the LB URL.

---
2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most 
recent call last):
2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 95, in test_autoscaling_loadbalancer_neutron
2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 | 
self.check_num_responses(lb_url, 1)
2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 49, in check_num_responses
2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 | 
self.assertEqual(expected_num, len(resp))
2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 | 
self.assertThat(observed, matcher, message)
2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 | raise 
mismatch_error
2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | 
testtools.matchers._impl.MismatchError: 1 != 0
2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 | 



[1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
[2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603859

Title:
  Could not load
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Status in neutron:
  New

Bug description:
  It seems the recent change[1] has broken the heat gate.

  The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  could not be loaded (used by heat), as you can see in the log[2].

  Heat tests fail with the following error, as they can't reach the LB
  URL.

  ---
  2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
  2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
  2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most 
recent call last):
  2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 95, in test_autoscaling_loadbalancer_neutron
  2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 | 
self.check_num_responses(lb_url, 1)
  2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 49, in check_num_responses
  2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 | 
self.assertEqual(expected_num, len(resp))
  2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 | 
self.assertThat(observed, matcher, message)
  2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 | raise 
mismatch_error
  2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | 
testtools.matchers._impl.MismatchError: 1 != 0
  2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 | 
  

  
  [1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
  [2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603859/+subscriptions


[Yahoo-eng-team] [Bug 1603860] [NEW] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

2016-07-17 Thread Rabi Mishra
Public bug reported:

It seems the recent change[1] has broken the heat gate.

The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver could
not be loaded (used by heat), as you can see in the log[2].

Heat tests fail with the following error, as they can't reach the LB URL.

---
2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most 
recent call last):
2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 95, in test_autoscaling_loadbalancer_neutron
2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 | 
self.check_num_responses(lb_url, 1)
2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 49, in check_num_responses
2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 | 
self.assertEqual(expected_num, len(resp))
2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 | 
self.assertThat(observed, matcher, message)
2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 | raise 
mismatch_error
2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | 
testtools.matchers._impl.MismatchError: 1 != 0
2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 | 



[1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
[2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603860

Title:
  Could not load
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Status in heat:
  New
Status in neutron:
  New

Bug description:
  It seems the recent change[1] has broken the heat gate.

  The neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  could not be loaded (used by heat), as you can see in the log[2].

  Heat tests fail with the following error, as they can't reach the LB
  URL.

  ---
  2016-07-18 03:37:24.960600 | 2016-07-18 03:37:24.960 | Captured traceback:
  2016-07-18 03:37:24.962940 | 2016-07-18 03:37:24.962 | ~~~
  2016-07-18 03:37:24.964775 | 2016-07-18 03:37:24.964 | Traceback (most 
recent call last):
  2016-07-18 03:37:24.966464 | 2016-07-18 03:37:24.966 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 95, in test_autoscaling_loadbalancer_neutron
  2016-07-18 03:37:24.967935 | 2016-07-18 03:37:24.967 | 
self.check_num_responses(lb_url, 1)
  2016-07-18 03:37:24.971719 | 2016-07-18 03:37:24.970 |   File 
"/opt/stack/new/heat/heat_integrationtests/scenario/test_autoscaling_lbv2.py", 
line 49, in check_num_responses
  2016-07-18 03:37:24.973581 | 2016-07-18 03:37:24.973 | 
self.assertEqual(expected_num, len(resp))
  2016-07-18 03:37:24.975288 | 2016-07-18 03:37:24.974 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  2016-07-18 03:37:24.977807 | 2016-07-18 03:37:24.977 | 
self.assertThat(observed, matcher, message)
  2016-07-18 03:37:24.979352 | 2016-07-18 03:37:24.979 |   File 
"/opt/stack/new/heat/.tox/integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  2016-07-18 03:37:24.980967 | 2016-07-18 03:37:24.980 | raise 
mismatch_error
  2016-07-18 03:37:24.983051 | 2016-07-18 03:37:24.982 | 
testtools.matchers._impl.MismatchError: 1 != 0
  2016-07-18 03:37:24.984806 | 2016-07-18 03:37:24.984 | 
  

  
  [1] 
https://github.com/openstack/neutron-lbaas/commit/56795d73094832b58b4804007ed31b5e896f59fc
  [2] 
http://logs.openstack.org/56/343356/1/check/gate-heat-dsvm-functional-orig-mysql-lbaasv2/d1b8aca/logs/screen-q-lbaasv2.txt.gz#_2016-07-18_03_18_51_864

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1603860/+subscriptions


[Yahoo-eng-team] [Bug 1595819] Re: functional.test_autoscaling.AutoScalingSignalTest failure

2016-06-23 Thread Rabi Mishra
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1595819

Title:
  functional.test_autoscaling.AutoScalingSignalTest failure

Status in heat:
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  It seems the gate is broken, with AutoScalingSignalTest failing.
  I suspect this is due to the recent keystone change
  https://review.openstack.org/#/c/314284/

  keystone error traceback:

  2016-06-24 03:34:02.273 8966 ERROR oslo_db.sqlalchemy.exc_filters
  [req-99ad876c-6f1f-4742-8dcf-52d461fa0c9f
  16a1dbfadccd44a3813a0e097a61ac25 - a08006a9c7c84f5ca7afaed59ac8e7a5
  a08006a9c7c84f5ca7afaed59ac8e7a5 -] DBAPIError exception wrapped from
  (pymysql.err.IntegrityError) (1048, u"Column 'password' cannot be
  null") [SQL: u'INSERT INTO password (local_user_id, password,
  created_at, expires_at) VALUES (%(local_user_id)s, %(password)s,
  %(created_at)s, %(expires_at)s)'] [parameters: {'local_user_id': 36,
  'password': None, 'created_at': datetime.datetime(2016, 6, 24, 3, 34,
  2, 269022), 'expires_at': None}]

  
  heat engine traceback:

  http://logs.openstack.org/76/333676/1/check/gate-heat-dsvm-functional-
  orig-mysql-
  lbaasv2/516afaa/logs/screen-h-eng.txt.gz#_2016-06-24_03_34_02_280

  b2c8-e27e2a774982]
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource Traceback (most 
recent call last):
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/resource.py", line 716, in _action_recorder
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource yield
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/resource.py", line 796, in _do_action
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/scheduler.py", line 312, in wrapper
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource step = 
next(subtask)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/resource.py", line 759, in action_handler_task
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource handler_data = 
handler(*args)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/resources/stack_user.py", line 110, in 
handle_suspend
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource user_id=user_id, 
project_id=self.stack.stack_user_project_id)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/opt/stack/new/heat/heat/engine/clients/os/keystone/heat_keystoneclient.py", 
line 528, in disable_stack_domain_user
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource 
self.domain_admin_client.users.update(user=user_id, enabled=False)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/debtcollector/renames.py", line 43, in 
decorator
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource return 
wrapped(*args, **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/positional/__init__.py", line 101, in 
inner
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource return 
wrapped(*args, **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/v3/users.py", line 209, 
in update
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource log=False)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 227, in 
_update
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 190, in 
patch
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource return 
self.request(url, 'PATCH', **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 335, in 
request
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 103, in 
request
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource return 
self.session.request(url, method, **kwargs)
  2016-06-24 03:34:02.280 4863 ERROR heat.engine.resource   File 
"/usr/local/lib/python2.7/dist-packages/positional/__init__.py", line 101, in 
inner
  2016-06-24 03:34:02.280 4863

[Yahoo-eng-team] [Bug 1567507] [NEW] neutron-lbaas broken with neutron change

2016-04-07 Thread Rabi Mishra
Public bug reported:

It seems the recent change
https://github.com/openstack/neutron/commit/34a328fe12950c339b8259451262470c627f2f00
has broken neutron-lbaas.

Hence all dependent projects are broken with the below error in q-lbaas.

2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-0a3a7771-0f1e-4424-9b96-0b7613cc1c82 demo -] Create vip 
7c347fc8-c282-4231-aa1c-e23a0d180abb failed on device driver haproxy_ns
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 227, in create_vip
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 344, in _refresh_device
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 337, in deploy_instance
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in create
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
logical_config['vip']['address'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 247, in _plug
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.plugin_rpc.plug_vip_port(port['id'])
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_api.py",
 line 58, in plug_vip_port
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager host=self.host)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=self.retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 470, in send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager retry=retry)
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 461, in _send
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager raise result
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager AttributeError: 'str' 
object has no attribute 'strftime'
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2016-04-07 13:47:56.319 28677 ERROR 
neutron_lbaas.servi
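The final AttributeError above is a plain type mismatch: a timestamp that
used to arrive as a datetime object now arrives over RPC already
serialized to a string, and the agent-side code still calls strftime on
it. A minimal illustration (not neutron code; the function name is
hypothetical):

```python
import datetime


def agent_heartbeat(timestamp):
    # The lbaas agent-side code expected a datetime object here.
    return timestamp.strftime('%Y-%m-%d %H:%M:%S')


now = datetime.datetime(2016, 4, 7, 13, 47, 56)
assert agent_heartbeat(now) == '2016-04-07 13:47:56'

# After the neutron change, the value arrives already serialized to an
# ISO-format string, and the same call fails as in the traceback above:
try:
    agent_heartbeat(now.isoformat())
    raise SystemExit("expected AttributeError")
except AttributeError as exc:
    assert 'strftime' in str(exc)
```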

[Yahoo-eng-team] [Bug 1512937] Re: CREATE_FAILED status due to 'Resource CREATE failed: NotFound: resources.pool: No eligible backend for pool

2015-11-04 Thread Rabi Mishra
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512937

Title:
  CREATE_FAILED status due to 'Resource CREATE failed: NotFound:
  resources.pool: No eligible backend for pool

Status in heat:
  Fix Committed
Status in neutron:
  Fix Committed

Bug description:
  LB scenario tests seem to be failing with the following error.

  2015-11-03 20:55:53.906 | 2015-11-03 20:55:53.901 | 
heat_integrationtests.scenario.test_autoscaling_lb.AutoscalingLoadBalancerTest.test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.908 | 2015-11-03 20:55:53.902 | 

  2015-11-03 20:55:53.910 | 2015-11-03 20:55:53.905 | 
  2015-11-03 20:55:53.912 | 2015-11-03 20:55:53.906 | Captured traceback:
  2015-11-03 20:55:53.914 | 2015-11-03 20:55:53.908 | ~~~
  2015-11-03 20:55:53.915 | 2015-11-03 20:55:53.910 | Traceback (most 
recent call last):
  2015-11-03 20:55:53.918 | 2015-11-03 20:55:53.913 |   File 
"heat_integrationtests/scenario/test_autoscaling_lb.py", line 96, in 
test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.919 | 2015-11-03 20:55:53.914 | environment=env
  2015-11-03 20:55:53.921 | 2015-11-03 20:55:53.916 |   File 
"heat_integrationtests/scenario/scenario_base.py", line 56, in launch_stack
  2015-11-03 20:55:53.923 | 2015-11-03 20:55:53.918 | 
expected_status=expected_status
  2015-11-03 20:55:53.925 | 2015-11-03 20:55:53.920 |   File 
"heat_integrationtests/common/test.py", line 503, in stack_create
  2015-11-03 20:55:53.927 | 2015-11-03 20:55:53.922 | 
self._wait_for_stack_status(**kwargs)
  2015-11-03 20:55:53.929 | 2015-11-03 20:55:53.923 |   File 
"heat_integrationtests/common/test.py", line 321, in _wait_for_stack_status
  2015-11-03 20:55:53.931 | 2015-11-03 20:55:53.925 | fail_regexp):
  2015-11-03 20:55:53.933 | 2015-11-03 20:55:53.927 |   File 
"heat_integrationtests/common/test.py", line 288, in _verify_status
  2015-11-03 20:55:53.934 | 2015-11-03 20:55:53.929 | 
stack_status_reason=stack.stack_status_reason)
  2015-11-03 20:55:53.936 | 2015-11-03 20:55:53.930 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerTest-448494246/077130e4-429c-44fb-887a-2ac5c0d7a9b2 is 
in CREATE_FAILED status due to 'Resource CREATE failed: NotFound: 
resources.pool: No eligible backend for pool 
9de892e7-ef89-4082-8c1e-3fbba0eea7f6'

  The lbaas service seems to be exiting with an error.

   CRITICAL neutron [req-d0fcef08-11a3-4688-9e8c-80535c6d1da2 None None]
  ValueError: Empty module name

  http://logs.openstack.org/09/232709/7/check/gate-heat-dsvm-functional-
  orig-mysql/6ebcd1f/logs/screen-q-lbaas.txt.gz
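  The "ValueError: Empty module name" crash is what a dotted-path driver
  loader produces when it is handed a bare class name (or an empty
  string) instead of "package.module.Class". A minimal sketch of such a
  loader, in the spirit of oslo's importutils (names illustrative, not
  neutron's actual code):

  ```python
  def load_driver(path):
      # Split "package.module.Class" into module and class, import the
      # module, then look the class up on it.
      module_name, _, class_name = path.rpartition('.')
      module = __import__(module_name, fromlist=[class_name])
      return getattr(module, class_name)

  # A fully qualified dotted path works:
  cls = load_driver('collections.OrderedDict')

  # A bare class name has no module component, so __import__('') raises
  # the "Empty module name" ValueError seen in the q-lbaas log:
  try:
      load_driver('HaproxyNSDriver')
      raise SystemExit("expected ValueError")
  except ValueError as exc:
      assert 'Empty module name' in str(exc)
  ```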

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1512937/+subscriptions



[Yahoo-eng-team] [Bug 1512937] Re: CREATE_FAILED status due to 'Resource CREATE failed: NotFound: resources.pool: No eligible backend for pool

2015-11-03 Thread Rabi Mishra
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512937

Title:
  CREATE_FAILED status due to 'Resource CREATE failed: NotFound:
  resources.pool: No eligible backend for pool

Status in heat:
  New
Status in neutron:
  New

Bug description:
  LB scenario tests seem to be failing with the following error.

  2015-11-03 20:55:53.906 | 2015-11-03 20:55:53.901 | 
heat_integrationtests.scenario.test_autoscaling_lb.AutoscalingLoadBalancerTest.test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.908 | 2015-11-03 20:55:53.902 | 

  2015-11-03 20:55:53.910 | 2015-11-03 20:55:53.905 | 
  2015-11-03 20:55:53.912 | 2015-11-03 20:55:53.906 | Captured traceback:
  2015-11-03 20:55:53.914 | 2015-11-03 20:55:53.908 | ~~~
  2015-11-03 20:55:53.915 | 2015-11-03 20:55:53.910 | Traceback (most 
recent call last):
  2015-11-03 20:55:53.918 | 2015-11-03 20:55:53.913 |   File 
"heat_integrationtests/scenario/test_autoscaling_lb.py", line 96, in 
test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.919 | 2015-11-03 20:55:53.914 | environment=env
  2015-11-03 20:55:53.921 | 2015-11-03 20:55:53.916 |   File 
"heat_integrationtests/scenario/scenario_base.py", line 56, in launch_stack
  2015-11-03 20:55:53.923 | 2015-11-03 20:55:53.918 | 
expected_status=expected_status
  2015-11-03 20:55:53.925 | 2015-11-03 20:55:53.920 |   File 
"heat_integrationtests/common/test.py", line 503, in stack_create
  2015-11-03 20:55:53.927 | 2015-11-03 20:55:53.922 | 
self._wait_for_stack_status(**kwargs)
  2015-11-03 20:55:53.929 | 2015-11-03 20:55:53.923 |   File 
"heat_integrationtests/common/test.py", line 321, in _wait_for_stack_status
  2015-11-03 20:55:53.931 | 2015-11-03 20:55:53.925 | fail_regexp):
  2015-11-03 20:55:53.933 | 2015-11-03 20:55:53.927 |   File 
"heat_integrationtests/common/test.py", line 288, in _verify_status
  2015-11-03 20:55:53.934 | 2015-11-03 20:55:53.929 | 
stack_status_reason=stack.stack_status_reason)
  2015-11-03 20:55:53.936 | 2015-11-03 20:55:53.930 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerTest-448494246/077130e4-429c-44fb-887a-2ac5c0d7a9b2 is 
in CREATE_FAILED status due to 'Resource CREATE failed: NotFound: 
resources.pool: No eligible backend for pool 
9de892e7-ef89-4082-8c1e-3fbba0eea7f6'

  lbaas service seems to be exiting with an error.

   CRITICAL neutron [req-d0fcef08-11a3-4688-9e8c-80535c6d1da2 None None]
  ValueError: Empty module name

  http://logs.openstack.org/09/232709/7/check/gate-heat-dsvm-functional-
  orig-mysql/6ebcd1f/logs/screen-q-lbaas.txt.gz
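  The "ValueError: Empty module name" above is what Python raises when an
  import is attempted with an empty string. A minimal reproduction sketch
  (that a blank driver/service_provider entry was the trigger here is an
  assumption, not confirmed by the log excerpt):

```python
# Minimal reproduction of the error class seen in the q-lbaas log above.
# Assumption: some configured import path (e.g. a driver option) was an
# empty string; the log excerpt does not confirm which option it was.
import importlib

driver_path = ""  # hypothetical blank driver/service_provider value
try:
    importlib.import_module(driver_path)
except ValueError as exc:
    print("ValueError:", exc)  # ValueError: Empty module name
```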

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1512937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490429] [NEW] glance image-show just returns 'id'

2015-08-31 Thread Rabi Mishra
Public bug reported:

glance -d image-show 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
curl -g -i -X HEAD -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}936d7b4cf9f2f0a3793e0ebb446a58ecd3d577aa' -H 'Content-Type: 
application/octet-stream' 
http://192.168.1.51:9292/v1/images/31e0d3a0-c29d-49bc-bc71-ee8a3f11c693

HTTP/1.1 200 OK
Content-Length: 0
X-Image-Meta-Id: 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
X-Image-Meta-Deleted: False
X-Image-Meta-Checksum: ee1eca47dc88f4879d8a229cc70a07c6
X-Image-Meta-Status: active
X-Image-Meta-Container_format: bare
X-Image-Meta-Protected: False
X-Image-Meta-Min_disk: 0
X-Image-Meta-Min_ram: 0
X-Image-Meta-Created_at: 2015-08-31T07:57:41.00
X-Image-Meta-Size: 13287936
Connection: keep-alive
Etag: ee1eca47dc88f4879d8a229cc70a07c6
X-Image-Meta-Is_public: True
Date: Mon, 31 Aug 2015 07:59:49 GMT
X-Image-Meta-Owner: 7cadb48541814309be95e0a977517b49
X-Image-Meta-Updated_at: 2015-08-31T07:57:41.00
Content-Type: text/html; charset=UTF-8
X-Openstack-Request-Id: req-e6434a2b-a014-4ae2-bcc3-07377afbc5a5
X-Image-Meta-Disk_format: qcow2
X-Image-Meta-Name: cirros-0.3.4

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 667, in 
main
args.func(client, args)
  File "/usr/lib/python2.7/site-packages/glanceclient/v1/shell.py", line 142, 
in do_image_show
image_id = utils.find_resource(gc.images, args.image).id
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 491, in __getattr__
self.get()
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 509, in get
new = self.manager.get(self.id)
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
raise AttributeError(k)
AttributeError: id
id


I don't see any error in g-reg.log or g-api.log.
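The traceback points at the lazy-resource pattern in
glanceclient/openstack/common/apiclient/base.py: a missing attribute
triggers a re-fetch, but the re-fetch itself needs self.id, so when the
fetched info has no 'id' key the lookup fails on itself. A simplified
sketch of that mechanism (class and method names here are hypothetical,
condensed from the traceback, not the real apiclient code):

```python
# Simplified sketch of the __getattr__/get() interplay shown in the
# traceback above. When _info lacks 'id', re-fetching by id recurses into
# __getattr__ and surfaces the bare "AttributeError: id".
class LazyResource:
    def __init__(self, manager, info):
        self.manager = manager
        self._info = dict(info)
        self._loaded = False

    def __getattr__(self, k):
        if k in self._info:
            return self._info[k]
        if not self._loaded:
            self._loaded = True
            # Re-fetch by id; evaluating self.id re-enters __getattr__,
            # and with no 'id' in _info it raises AttributeError('id').
            self._info = self.manager.get(self.id)._info
            return getattr(self, k)
        raise AttributeError(k)

class Manager:
    def get(self, image_id):
        raise AssertionError("never reached in this failure mode")

r = LazyResource(Manager(), {"name": "cirros-0.3.4"})  # note: no 'id' key
try:
    r.id
except AttributeError as exc:
    print("AttributeError:", exc)  # AttributeError: id
```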

** Affects: python-glanceclient
 Importance: Undecided
 Status: New

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490429

Title:
  glance image-show just returns 'id'

Status in python-glanceclient:
  New

Bug description:
  glance -d image-show 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
  curl -g -i -X HEAD -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}936d7b4cf9f2f0a3793e0ebb446a58ecd3d577aa' -H 'Content-Type: 
application/octet-stream' 
http://192.168.1.51:9292/v1/images/31e0d3a0-c29d-49bc-bc71-ee8a3f11c693

  HTTP/1.1 200 OK
  Content-Length: 0
  X-Image-Meta-Id: 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
  X-Image-Meta-Deleted: False
  X-Image-Meta-Checksum: ee1eca47dc88f4879d8a229cc70a07c6
  X-Image-Meta-Status: active
  X-Image-Meta-Container_format: bare
  X-Image-Meta-Protected: False
  X-Image-Meta-Min_disk: 0
  X-Image-Meta-Min_ram: 0
  X-Image-Meta-Created_at: 2015-08-31T07:57:41.00
  X-Image-Meta-Size: 13287936
  Connection: keep-alive
  Etag: ee1eca47dc88f4879d8a229cc70a07c6
  X-Image-Meta-Is_public: True
  Date: Mon, 31 Aug 2015 07:59:49 GMT
  X-Image-Meta-Owner: 7cadb48541814309be95e0a977517b49
  X-Image-Meta-Updated_at: 2015-08-31T07:57:41.00
  Content-Type: text/html; charset=UTF-8
  X-Openstack-Request-Id: req-e6434a2b-a014-4ae2-bcc3-07377afbc5a5
  X-Image-Meta-Disk_format: qcow2
  X-Image-Meta-Name: cirros-0.3.4

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 667, in 
main
  args.func(client, args)
File "/usr/lib/python2.7/site-packages/glanceclient/v1/shell.py", line 142, 
in do_image_show
  image_id = utils.find_resource(gc.images, args.image).id
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 491, in __getattr__
  self.get()
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 509, in get
  new = self.manager.get(self.id)
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: id
  id

  
  I don't see any error in g-reg.log or g-api.log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1490429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464825] [NEW] alembic migration script for vpnaas error

2015-06-12 Thread Rabi Mishra
te
self.errorhandler(self, exc, value)
  File "/usr/lib64/python2.7/site-packages/MySQLdb/connections.py", line 36, in 
defaulterrorhandler
    raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1832, "Cannot change 
column 'ipsec_site_conn_id': used in a foreign key constraint 
'cisco_csr_identifier_map_ibfk_1'") 'ALTER TABLE cisco_csr_identifier_map 
MODIFY ipsec_site_conn_id VARCHAR(36) NULL' ()

** Affects: neutron
 Importance: Undecided
 Assignee: Rabi Mishra (rabi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rabi Mishra (rabi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464825

Title:
  alembic migration script for vpnaas error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  The 5689aa52_fix_identifier_map_fk.py alembic migration script fails with 
the error 'Cannot change column 'ipsec_site_conn_id': used in a foreign key 
constraint 'cisco_csr_identifier_map_ibfk_1''.


  + /usr/bin/neutron-db-manage --service vpnaas --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini 
upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade  -> start_neutron_vpnaas, start 
neutron-vpnaas chain
  INFO  [alembic.migration] Running upgrade start_neutron_vpnaas -> 
3ea02b2a773e, add_index_tenant_id
  INFO  [alembic.migration] Running upgrade 3ea02b2a773e -> kilo, kilo
  INFO  [alembic.migration] Running upgrade kilo -> 5689aa52, fix 
identifier map fk
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in <module>
  sys.exit(main())
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
  CONF.command.func(config, CONF.command.name)
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 106, in 
do_upgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 72, in 
do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 165, in 
upgrade
  script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script.py", line 390, in 
run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util.py", line 243, in 
load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 86, in <module>
  run_migrations_online()
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 77, in run_migrations_online
  context.run_migrations()
    File "<string>", line 7, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/environment.py", line 738, 
in run_migrations
  self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/migration.py", line 309, in 
run_migrations
  step.migration_fn(**kw)
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/versions/5689aa52_fix_identifier_map_fk.py",
 line 48, in upgrade
  existing_nullable=True)
    File "<string>", line 7, in alter_column
    File "<string>", line 1, in 
File "/usr/lib/python2.7/site-packages/alembic/util.py", line 388, in go
  return fn(*arg, **kw)
File "/usr/lib/python2.7/site-packages/alembic/operations.py", line 478, in 
alter_column
  existing_autoincrement=existing_autoincrement
File "/usr/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 65, in 
alter_column
  else existing_autoincrement
File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 122, in 
_exec
  return conn.execute(construct, *multiparams, **params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
841, in execute
  return meth(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, 
in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
8

[Yahoo-eng-team] [Bug 1438528] Re: gate-tempest-dsvm-neutron-src-python-heatclient fails for XStatic-Angular-Irdragndrop

2015-03-30 Thread Rabi Mishra
This is probably an issue with PyPI itself; the name is wrong in the
latest release.


** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1438528

Title:
  gate-tempest-dsvm-neutron-src-python-heatclient fails for XStatic-
  Angular-Irdragndrop

Status in OpenStack Dashboard (Horizon):
  New
Status in Tempest:
  New

Bug description:
  I think there is an issue with requirements.

  "XStatic-Angular-Irdragndrop" is not correct, it should be "XStatic-
  Angular-lrdragndrop"
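  The two package names differ only by a single lookalike glyph (capital
  "I" vs lowercase "l"), which is easy to miss in a requirements file. A
  quick check that pinpoints the difference:

```python
# Locate the single character where the bad and correct names diverge.
wrong = "XStatic-Angular-Irdragndrop"  # capital "I", as in requirements
right = "XStatic-Angular-lrdragndrop"  # lowercase "l", the real package
diff = [(i, a, b) for i, (a, b) in enumerate(zip(wrong, right)) if a != b]
print(diff)  # [(16, 'I', 'l')]
```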

  Log:

  2015-03-31 04:06:31.727 | Collecting XStatic>=1.0.0 (from 
horizon==2015.1.dev110)
  2015-03-31 04:06:31.743 |   Downloading 
http://pypi.region-b.geo-1.openstack.org/packages/source/X/XStatic/XStatic-1.0.1.tar.gz
  2015-03-31 04:06:32.070 | Collecting XStatic-Angular>=1.3.7 (from 
horizon==2015.1.dev110)
  2015-03-31 04:06:32.088 |   Downloading 
http://pypi.region-b.geo-1.openstack.org/packages/source/X/XStatic-Angular/XStatic-Angular-1.3.7.0.tar.gz
 (641kB)
  2015-03-31 04:06:32.591 | Collecting XStatic-Angular-Bootstrap>=0.11.0.2 
(from horizon==2015.1.dev110)
  2015-03-31 04:06:32.610 |   Downloading 
http://pypi.region-b.geo-1.openstack.org/packages/source/X/XStatic-Angular-Bootstrap/XStatic-Angular-Bootstrap-0.11.0.2.tar.gz
  2015-03-31 04:06:32.975 | Collecting XStatic-Angular-Irdragndrop>=1.0.2.1 
(from horizon==2015.1.dev110)
  2015-03-31 04:06:32.990 |   HTTP error 404 while getting 
http://pypi.region-b.geo-1.openstack.org/packages/source/X/XStatic-Angular-Irdragndrop/XStatic-Angular-Irdragndrop-1.0.2.1.tar.gz#md5=7f57941bb72f83fe01875152ddb24ce1
 (from 
http://pypi.region-b.geo-1.openstack.org/simple/xstatic-angular-irdragndrop/)
  2015-03-31 04:06:32.990 |   Could not install requirement 
XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110) because of 
error 404 Client Error: Not Found
  2015-03-31 04:06:33.323 |   Could not install requirement 
XStatic-Angular-Irdragndrop>=1.0.2.1 (from horizon==2015.1.dev110) because of 
HTTP error 404 Client Error: Not Found for URL 
http://pypi.region-b.geo-1.openstack.org/packages/source/X/XStatic-Angular-Irdragndrop/XStatic-Angular-Irdragndrop-1.0.2.1.tar.gz#md5=7f57941bb72f83fe01875152ddb24ce1
 (from 
http://pypi.region-b.geo-1.openstack.org/simple/xstatic-angular-irdragndrop/)
  2015-03-31 04:06:33.346 | + exit_trap
  2015-03-31 04:06:33.346 | + local r=1
  2015-03-31 04:06:33.346 | ++ jobs -p
  2015-03-31 04:06:33.347 | + jobs=
  2015-03-31 04:06:33.347 | + [[ -n '' ]]
  2015-03-31 04:06:33.347 | + kill_spinner
  2015-03-31 04:06:33.347 | + '[' '!' -z '' ']'
  2015-03-31 04:06:33.347 | + [[ 1 -ne 0 ]]
  2015-03-31 04:06:33.347 | + echo 'Error on exit'
  2015-03-31 04:06:33.347 | Error on exit

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1438528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365226] Re: Add security group to running instance with nexus monolithic plugin throws error

2015-03-17 Thread Rabi Mishra
** Changed in: neutron/icehouse
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365226

Title:
  Add security group to running instance with nexus monolithic plugin
  throws error

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Adding a new security group to an existing instance with the cisco
  nexus plugin (provider network) throws the following error.

  # nova add-secgroup   987efb45-1a6c-4a76-a26b-fe9cfdd8073e  test-security
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-8e0b85a2-7499-4420-8a68-3c68aa3ee1c9)

  Looking at the server.log points to an empty host list being passed.
  /var/log/neutron/server.log

  
  2014-09-02 20:10:22.116 52259 INFO neutron.wsgi 
[req-091df3c8-7bdb-42b5-801a-a26a650a451a None] (52259) accepted 
('172.21.9.134', 39434)

  2014-09-02 20:10:22.211 52259 ERROR 
neutron.plugins.cisco.models.virt_phy_sw_v2 [-] Unable to update port '' on 
Nexus switch
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 Traceback (most recent call last):
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 405, in u
  pdate_port
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 
self._invoke_nexus_for_net_create(context, *create_args)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 263, in _
  invoke_nexus_for_net_create
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 [network, attachment])
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 148, in _
  invoke_plugin_per_device
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 return func(*args, **kwargs)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py",
 line 79,
   in create_network
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 raise 
cisco_exc.NexusComputeHostNotConfigured(host=host)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 NexusComputeHostNotConfigured: 
Connection to None is not configured.
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2
  2014-09-02 20:10:22.256 52259 INFO neutron.api.v2.resource 
[req-8ebea742-09b5-416b-820f-69461c496319 None] update failed (client error): 
Connection to None is not configured.
  2014-09-02 20:10:22.257 52259 INFO neutron.wsgi 
[req-8ebea742-09b5-416b-820f-69461c496319 None] 172.21.9.134 - - [02/Sep/2014 
20:10:22] "PUT //v2.0/ports/c2e6b716-5c7d-4d23-ab78-ecd2a649469b.json HTTP/1.1" 
404 322 0.140213
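  A minimal sketch of the failure mode described above: the plugin
  resolves switch connection details keyed by the port's bound host, and a
  port update that carries no binding host makes that key None. (Names
  below are hypothetical; LookupError stands in for the plugin's
  NexusComputeHostNotConfigured exception.)

```python
# Hypothetical host -> switch map, standing in for the plugin's
# configured nexus switch connections.
switch_for_host = {"compute-1": "192.0.2.10"}

def nexus_connection(host):
    # When the updated port has no binding host, host is None and the
    # lookup fails with the same message seen in server.log above.
    if host not in switch_for_host:
        raise LookupError("Connection to %s is not configured." % host)
    return switch_for_host[host]

try:
    nexus_connection(None)  # port update arrived with no binding host
except LookupError as exc:
    print(exc)  # Connection to None is not configured.
```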

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368087] [NEW] implement provider network extension for ml2 nexus

2014-09-11 Thread Rabi Mishra
Public bug reported:

The current implementation of the ml2 nexus plugin does not support the
'provider network' extension, which is supported in the monolithic
plugin. Hence, migration of existing deployments is not possible without
this being implemented.

** Affects: neutron
 Importance: Undecided
 Assignee: Rabi Mishra (rabi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rabi Mishra (rabi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368087

Title:
  implement provider network extension for ml2 nexus

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The current implementation of the ml2 nexus plugin does not support the
  'provider network' extension, which is supported in the monolithic
  plugin. Hence, migration of existing deployments is not possible
  without this being implemented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365226] [NEW] Add security group to running instance with nexus monolithic plugin throws error

2014-09-03 Thread Rabi Mishra
Public bug reported:

Adding a new security group to an existing instance with the cisco nexus
plugin (provider network) throws the following error.

2014-09-02 20:10:22.116 52259 INFO neutron.wsgi [req-091df3c8-7bdb-42b5
-801a-a26a650a451a None] (52259) accepted ('172.21.9.134', 39434)

2014-09-02 20:10:22.211 52259 ERROR neutron.plugins.cisco.models.virt_phy_sw_v2 
[-] Unable to update port '' on Nexus switch
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
Traceback (most recent call last):
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 405, in u
pdate_port
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
self._invoke_nexus_for_net_create(context, *create_args)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 263, in _
invoke_nexus_for_net_create
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
[network, attachment])
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 148, in _
invoke_plugin_per_device
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
return func(*args, **kwargs)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py",
 line 79,
 in create_network
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
raise cisco_exc.NexusComputeHostNotConfigured(host=host)
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
NexusComputeHostNotConfigured: Connection to None is not configured.
2014-09-02 20:10:22.211 52259 TRACE neutron.plugins.cisco.models.virt_phy_sw_v2 
2014-09-02 20:10:22.256 52259 INFO neutron.api.v2.resource 
[req-8ebea742-09b5-416b-820f-69461c496319 None] update failed (client error): 
Connection to None is not configured.
2014-09-02 20:10:22.257 52259 INFO neutron.wsgi 
[req-8ebea742-09b5-416b-820f-69461c496319 None] 172.21.9.134 - - [02/Sep/2014 
20:10:22] "PUT //v2.0/ports/c2e6b716-5c7d-4d23-ab78-ecd2a649469b.json HTTP/1.1" 
404 322 0.140213

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365226

Title:
  Add security group to running instance with nexus monolithic plugin
  throws error

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Adding a new security group to an existing instance with the cisco
  nexus plugin (provider network) throws the following error.

  2014-09-02 20:10:22.116 52259 INFO neutron.wsgi [req-091df3c8-7bdb-
  42b5-801a-a26a650a451a None] (52259) accepted ('172.21.9.134', 39434)

  2014-09-02 20:10:22.211 52259 ERROR 
neutron.plugins.cisco.models.virt_phy_sw_v2 [-] Unable to update port '' on 
Nexus switch
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 Traceback (most recent call last):
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 405, in u
  pdate_port
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 
self._invoke_nexus_for_net_create(context, *create_args)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 263, in _
  invoke_nexus_for_net_create
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 [network, attachment])
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/models/virt_phy_sw_v2.py",
 line 148, in _
  invoke_plugin_per_device
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 return func(*args, **kwargs)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/cisco/nexus/cisco_nexus_plugin_v2.py",
 line 79,
   in create_network
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 raise 
cisco_exc.NexusComputeHostNotConfigured(host=host)
  2014-09-02 20:10:22.211 52259 TRACE 
neutron.plugins.cisco.models.virt_phy_sw_v2 NexusComputeHostNotConfigured: 
Connection to None is not configured.
  2014-09-02 20:10:22.211 52259 TRACE 
neutron

[Yahoo-eng-team] [Bug 1285504] [NEW] lbaas add vip should allow vip from a different subnet

2014-02-26 Thread Rabi Mishra
Public bug reported:

The current implementation of adding a 'vip' from the horizon dashboard
uses the pool 'subnet' by default and does not allow selecting the vip
from a different subnet. It has an optional parameter to select an
available ip from the pool subnet; otherwise it picks the next available
ip from the pool subnet.

Neutron does not have any restriction on where 'vip' ip  should be from.
It can be from any 'subnet' available.

The dashboard should optionally allow the user to select a subnet from
the available subnets.

** Affects: horizon
     Importance: Undecided
 Assignee: Rabi Mishra (ramishra)
 Status: In Progress

** Changed in: horizon
     Assignee: (unassigned) => Rabi Mishra (ramishra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1285504

Title:
  lbaas add vip should allow vip from a different subnet

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The current implementation of adding a 'vip' from the horizon dashboard
  uses the pool 'subnet' by default and does not allow selecting the vip
  from a different subnet. It has an optional parameter to select an
  available ip from the pool subnet; otherwise it picks the next
  available ip from the pool subnet.

  Neutron does not have any restriction on where 'vip' ip  should be
  from. It can be from any 'subnet' available.

  The dashboard should optionally allow the user to select a subnet from
  the available subnets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1285504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp