[Yahoo-eng-team] [Bug 1830535] [NEW] Invalid neutron metadata proxy configuration leads to confusing nova exception

2019-05-26 Thread Gary Kotton
Public bug reported:

The exception below is received with the following configuration:
1. The nova metadata service has:
[neutron]
metadata_proxy_shared_secret = novarocks

2. The neutron nsx_v section does not have metadata_shared_secret defined
(this means that the proxy does not inject the signature into the headers)
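The TypeError in the trace below comes from passing the missing (None) signature straight into the constant-time compare. A minimal sketch of the defensive check, loosely modeled on _validate_shared_secret in nova/api/metadata/handler.py (the function name, encoding and sha256 choice here are illustrative, not the exact nova code):

import hashlib
import hmac

def validate_shared_secret(shared_secret, instance_address, signature):
    # If the neutron proxy never injected the signature header,
    # `signature` arrives as None; reject it explicitly instead of
    # handing it to the constant-time compare, which is what raises
    # "TypeError: 'NoneType' does not have the buffer interface".
    if not signature:
        return False
    expected_signature = hmac.new(shared_secret.encode('utf-8'),
                                  instance_address.encode('utf-8'),
                                  hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected_signature, signature)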

Exception:
{"log":"\n","stream":"stdout","time":"2019-05-23T05:23:07.140817119Z"}
{"log":"2019-05-23 05:23:07.140 27 INFO nova.metadata.wsgi.server [-] 
172.165.0.171,151.168.64.0 \"GET /openstack/2013-10-17/meta_data.json 
HTTP/1.1\" status: 500 len: 139 time: 
0.0021250\n","stream":"stdout","time":"2019-05-23T05:23:07.140925898Z"}
{"log":"2019-05-23 05:23:11.479 21 INFO nova.metadata.wsgi.server [-] 
151.168.64.0 \"GET / HTTP/1.0\" status: 200 len: 234 time: 
0.0003791\n","stream":"stdout","time":"2019-05-23T05:23:11.479529255Z"}
{"log":"2019-05-23 05:23:17.148 23 INFO nova.metadata.wsgi.server [-] Traceback 
(most recent call 
last):\n","stream":"stdout","time":"2019-05-23T05:23:17.149314446Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/eventlet/wsgi.py\", line 547, 
in 
handle_one_response\n","stream":"stdout","time":"2019-05-23T05:23:17.149350181Z"}
{"log":" result = self.application(self.environ, 
start_response)\n","stream":"stdout","time":"2019-05-23T05:23:17.149354111Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/paste/urlmap.py\", line 216, 
in __call__\n","stream":"stdout","time":"2019-05-23T05:23:17.149356993Z"}
{"log":" return app(environ, 
start_response)\n","stream":"stdout","time":"2019-05-23T05:23:17.149359848Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/dec.py\", line 129, in 
__call__\n","stream":"stdout","time":"2019-05-23T05:23:17.149386987Z"}
{"log":" resp = self.call_func(req, *args, 
**kw)\n","stream":"stdout","time":"2019-05-23T05:23:17.149390526Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/dec.py\", line 193, in 
call_func\n","stream":"stdout","time":"2019-05-23T05:23:17.149393698Z"}
{"log":" return self.func(req, *args, 
**kwargs)\n","stream":"stdout","time":"2019-05-23T05:23:17.149396416Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/oslo_middleware/base.py\", 
line 130, in 
__call__\n","stream":"stdout","time":"2019-05-23T05:23:17.149399029Z"}
{"log":" response = 
req.get_response(self.application)\n","stream":"stdout","time":"2019-05-23T05:23:17.149401804Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/request.py\", line 1314, 
in send\n","stream":"stdout","time":"2019-05-23T05:23:17.149404384Z"}
{"log":" application, 
catch_exc_info=False)\n","stream":"stdout","time":"2019-05-23T05:23:17.149407042Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/request.py\", line 1278, 
in 
call_application\n","stream":"stdout","time":"2019-05-23T05:23:17.149409578Z"}
{"log":" app_iter = application(self.environ, 
start_response)\n","stream":"stdout","time":"2019-05-23T05:23:17.149412418Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/dec.py\", line 129, in 
__call__\n","stream":"stdout","time":"2019-05-23T05:23:17.149415333Z"}
{"log":" resp = self.call_func(req, *args, 
**kw)\n","stream":"stdout","time":"2019-05-23T05:23:17.149418371Z"}
{"log":" File \"/usr/lib/python2.7/site-packages/webob/dec.py\", line 193, in 
call_func\n","stream":"stdout","time":"2019-05-23T05:23:17.149420897Z"}
{"log":" return self.func(req, *args, 
**kwargs)\n","stream":"stdout","time":"2019-05-23T05:23:17.149423678Z"}
{"log":" File 
\"/usr/lib/python2.7/site-packages/nova/api/metadata/handler.py\", line 108, in 
__call__\n","stream":"stdout","time":"2019-05-23T05:23:17.14942617Z"}
{"log":" meta_data = 
self._handle_instance_id_request_from_lb(req)\n","stream":"stdout","time":"2019-05-23T05:23:17.149428942Z"}
{"log":" File 
\"/usr/lib/python2.7/site-packages/nova/api/metadata/handler.py\", line 259, in 
_handle_instance_id_request_from_lb\n","stream":"stdout","time":"2019-05-23T05:23:17.149431537Z"}
{"log":" 
instance_address)\n","stream":"stdout","time":"2019-05-23T05:23:17.149434403Z"}
{"log":" File 
\"/usr/lib/python2.7/site-packages/nova/api/metadata/handler.py\", line 284, in 
_validate_shared_secret\n","stream":"stdout","time":"2019-05-23T05:23:17.149436904Z"}
{"log":" if not secutils.constant_time_compare(expected_signature, 
signature):\n","stream":"stdout","time":"2019-05-23T05:23:17.14943964Z"}
{"log":"TypeError: 'NoneType' does not have the buffer 
interface\n","stream":"stdout","time":"2019-05-23T05:23:17.149442368Z"}
{"log":"\n","stream":"stdout","time":"2019-05-23T05:23:17.149444941Z"}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1830535

Title:
  Invalid neutron metadata proxy configuration leads to confusing nova
  exception

Status in OpenStack Compute (nova):
  New

Bug description:
  The exception below is 

[Yahoo-eng-team] [Bug 1815792] [NEW] Unable to delete instance when volume detached at the same time

2019-02-13 Thread Gary Kotton
8:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 
564, in destroy
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self._detach_instance_volumes(instance, block_device_info)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 
539, in _detach_instance_volumes
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
instance=instance)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self.force_reraise()
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
six.reraise(self.type_, self.value, self.tb)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 
528, in _detach_instance_volumes
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
disk.get('device_name'))
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 
470, in detach_volume
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c] 
self._volumeops.detach_volume(connection_info, instance)
201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 26114 
ERROR nova.compute.manager [instance: 4af36ad5-7c64-4e0b-8160-919842a6c39c]   
File "/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/volumeops.py", line 
663, in detach_volume

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815792

Title:
  Unable to delete instance when volume detached at the same time

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When using K8s above OpenStack there is a race condition where a
  persistent volume is deleted at the same time as an instance using
  that volume. This results in the instance going into an error state.
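  A hedged sketch of the tolerance the teardown path needs (illustrative
names, not the actual vmwareapi driver code):

from oslo_log import log as logging

from nova import exception
from nova.virt import driver as virt_driver

LOG = logging.getLogger(__name__)

def _detach_instance_volumes(self, instance, block_device_info):
    mapping = virt_driver.block_device_info_get_mapping(block_device_info)
    for disk in mapping:
        try:
            self.detach_volume(disk['connection_info'], instance,
                               disk.get('device_name'))
        except exception.DiskNotFound:
            # The persistent volume was deleted concurrently (e.g. by
            # Kubernetes reclaiming it), so there is nothing left to
            # detach; keep tearing the instance down instead of
            # letting the error push it into ERROR state.
            LOG.warning('Volume already detached for instance %s',
                        instance.uuid)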

  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c] Traceback (most recent call last):
  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2478, in 
do_terminate_instance
  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c] self._delete_instance(context, 
instance, bdms, quotas)
  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c]   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 154, in inner
  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c] rv = f(*args, **kwargs)
  201902131136/compute01/logs/nova/nova-compute.log:2019-02-13 08:29:45.432 
26114 ERROR nova.compute.manager [instance: 
4af36ad5-7c64-4e0b-8160-919842a6c39c]   File 
"/usr/lib/python2.7/dist-pac

[Yahoo-eng-team] [Bug 1815466] [NEW] Live migrate abort results in instance going into an error state

2019-02-11 Thread Gary Kotton
Public bug reported:

Only the libvirt driver supports live migration abort (and that may have
some caveats if the action results in an exception). If the call takes
place with a driver that does not support the operation then the following
stack trace appears:

nova live-migration-abort doesn't work.

It fails with a "NotImplementedError" exception.


Steps:
1) Do a live migration (make sure the vmkernel adaptors have vmotion enabled)
2) While the live migration from one cluster to another is in progress,
trigger the command nova live-migration-abort.
3) The nova instance goes into an ERROR state.
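A hedged sketch of the graceful handling this implies in the compute manager (the exception choice is illustrative; nova may prefer a different user-facing error):

def live_migration_abort(self, context, instance, migration_id):
    try:
        self.driver.live_migration_abort(instance)
    except NotImplementedError:
        # Only the libvirt driver implements abort; report a clean
        # "not supported" error instead of letting the
        # NotImplementedError escape and move the instance to ERROR.
        raise exception.MigrationError(
            reason='live migration abort is not supported by '
                   'this virt driver')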

nova-compute error logs:

2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
[req-8cca230a-2ef0-4af4-9172-2259a38496ba fbb20822241440e7acac310fdc238280 
bf2edf2a7ddc48a299f183b8da055c46 - d5b840b9e52c43d3ad9e208e22ad531f 
d5b840b9e52c43d3ad9e208e22ad531f] Exception during message handling: 
NotImplementedError
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in 
_process_incoming
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, 
in dispatch
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, 
in _do_dispatch
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server function_name, 
call_dict, binary)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 977, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 214, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
kwargs['instance'], e, sys.exc_info())
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.force_reraise()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 202, in 
decorated_function
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6304, in 
live_migration_abort
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
self.driver.live_migration_abort(instance)
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python2.7/dist-packages/nova/virt/driver.py", line 943, in 
live_migration_abort
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server raise 
NotImplementedError()
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server 
NotImplementedError
2019-02-05 18:28:59.040 31616 ERROR oslo_messaging.rpc.server

This will result in the instanc

[Yahoo-eng-team] [Bug 1811565] Re: Live migrations gives a cryptic message when instance is paused

2019-01-14 Thread Gary Kotton
This fix is https://review.openstack.org/#/c/630496/

** Project changed: neutron => nova

** Changed in: nova
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1811565

Title:
  Live migrations gives a cryptic message when instance is paused

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  root@utu1604template:~# nova list
  
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+
  | ID                                   | Name          | Status | Task State | Power State | Networks                  |
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+
  | 0a07b744-90ef-4250-adce-cd5adfc2925a | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.9              |
  | c1c5dbbc-ba4d-4c43-9fa4-b50284be6b44 | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.4              |
  | 05858218-80c3-4f6c-a6c0-e2094faa20a6 | FEDORA-3      | ACTIVE | -          | Running     | net1=1.0.0.5              |
  | fddae0ed-fdff-4962-9284-84303ad0cf5c | FEDORA-4      | ACTIVE | -          | Running     | net1=1.0.0.20             |
  | d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 | FEDORA-vsan-1 | ACTIVE | -          | Running     | net1=1.0.0.6              |
  | c4d9ea4e-7f5d-4ffa-a091-15cf817b6656 | FEDORA-vsan-2 | ACTIVE | -          | Running     | net1=1.0.0.21             |
  | 112df993-c9dd-4c6c-b0ae-c20e133bba07 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.8, 172.24.4.14 |
  | da3fb215-8665-4761-b197-076c80b7b2bd | vm1           | ACTIVE | -          | Running     | net1=1.0.0.3, 172.24.4.3  |
  | dbb473c9-248a-4f5e-ab9b-30c6526f4978 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.7, 172.24.4.10 |
  | 3882b496-1433-4cb7-a0c5-151a0c47f8bd | vm2           | ACTIVE | -          | Running     | net1=1.0.0.12             |
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+

  
  root@utu1604template:~# nova pause d5dcef89-c89d-4e75-b567-ef6ddf0d95e8

  
  root@utu1604template:~# nova service-list
  
  +----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+
  | Id | Binary           | Host                              | Zone     | Status  | State | Updated_at             | Disabled Reason |
  +----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+
  | 11 | nova-conductor   | nova-conductor-f7f8ccbf9-tw48t    | internal | enabled | up    | 2019-01-08T09:30:36.00 | -               |
  | 14 | nova-consoleauth | nova-consoleauth-6fc4cc74b5-hjlbc | internal | enabled | up    | 2019-01-08T09:30:40.00 | -               |
  | 17 | nova-scheduler   | nova-scheduler-65d75fb975-8qr25   | internal | enabled | up    | 2019-01-08T09:30:39.00 | -               |
  | 20 | nova-compute     | nova-compute-04                   | az1      | enabled | up    | 2019-01-08T09:30:38.00 | -               |
  | 23 | nova-compute     | nova-compute-02                   | az3      | enabled | up    | 2019-01-08T09:30:38.00 | -               |
  | 26 | nova-compute     | nova-compute-03                   | az2      | enabled | up    | 2019-01-08T09:30:40.00 | -               |
  | 29 | nova-compute     | nova-compute-01                   | az4      | enabled | up    | 2019-01-08T09:30:42.00 | -               |
  +----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+

  
  root@utu1604template:~# nova live-migration d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 nova-compute-02
  ERROR (Conflict): Cannot 'os-migrateLive' instance d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 while it is in power_state 7 (HTTP 409) (Request-ID: req-77e0da31-1629-48fb-aaa9-d7a5ff178759)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1811565/+subscriptions



[Yahoo-eng-team] [Bug 1811565] [NEW] Live migrations gives a cryptic message when instance is paused

2019-01-13 Thread Gary Kotton
Public bug reported:

root@utu1604template:~# nova list
+--------------------------------------+---------------+--------+------------+-------------+---------------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks                  |
+--------------------------------------+---------------+--------+------------+-------------+---------------------------+
| 0a07b744-90ef-4250-adce-cd5adfc2925a | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.9              |
| c1c5dbbc-ba4d-4c43-9fa4-b50284be6b44 | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.4              |
| 05858218-80c3-4f6c-a6c0-e2094faa20a6 | FEDORA-3      | ACTIVE | -          | Running     | net1=1.0.0.5              |
| fddae0ed-fdff-4962-9284-84303ad0cf5c | FEDORA-4      | ACTIVE | -          | Running     | net1=1.0.0.20             |
| d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 | FEDORA-vsan-1 | ACTIVE | -          | Running     | net1=1.0.0.6              |
| c4d9ea4e-7f5d-4ffa-a091-15cf817b6656 | FEDORA-vsan-2 | ACTIVE | -          | Running     | net1=1.0.0.21             |
| 112df993-c9dd-4c6c-b0ae-c20e133bba07 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.8, 172.24.4.14 |
| da3fb215-8665-4761-b197-076c80b7b2bd | vm1           | ACTIVE | -          | Running     | net1=1.0.0.3, 172.24.4.3  |
| dbb473c9-248a-4f5e-ab9b-30c6526f4978 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.7, 172.24.4.10 |
| 3882b496-1433-4cb7-a0c5-151a0c47f8bd | vm2           | ACTIVE | -          | Running     | net1=1.0.0.12             |
+--------------------------------------+---------------+--------+------------+-------------+---------------------------+


root@utu1604template:~# nova pause d5dcef89-c89d-4e75-b567-ef6ddf0d95e8


root@utu1604template:~# nova service-list
+----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+
| Id | Binary           | Host                              | Zone     | Status  | State | Updated_at             | Disabled Reason |
+----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+
| 11 | nova-conductor   | nova-conductor-f7f8ccbf9-tw48t    | internal | enabled | up    | 2019-01-08T09:30:36.00 | -               |
| 14 | nova-consoleauth | nova-consoleauth-6fc4cc74b5-hjlbc | internal | enabled | up    | 2019-01-08T09:30:40.00 | -               |
| 17 | nova-scheduler   | nova-scheduler-65d75fb975-8qr25   | internal | enabled | up    | 2019-01-08T09:30:39.00 | -               |
| 20 | nova-compute     | nova-compute-04                   | az1      | enabled | up    | 2019-01-08T09:30:38.00 | -               |
| 23 | nova-compute     | nova-compute-02                   | az3      | enabled | up    | 2019-01-08T09:30:38.00 | -               |
| 26 | nova-compute     | nova-compute-03                   | az2      | enabled | up    | 2019-01-08T09:30:40.00 | -               |
| 29 | nova-compute     | nova-compute-01                   | az4      | enabled | up    | 2019-01-08T09:30:42.00 | -               |
+----+------------------+-----------------------------------+----------+---------+-------+------------------------+-----------------+


root@utu1604template:~# nova live-migration d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 nova-compute-02
ERROR (Conflict): Cannot 'os-migrateLive' instance d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 while it is in power_state 7 (HTTP 409) (Request-ID: req-77e0da31-1629-48fb-aaa9-d7a5ff178759)

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811565

Title:
  Live migrations gives a cryptic message when instance is paused

Status in neutron:
  New

Bug description:
  root@utu1604template:~# nova list
  
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+
  | ID                                   | Name          | Status | Task State | Power State | Networks                  |
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+
  | 0a07b744-90ef-4250-adce-cd5adfc2925a | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.9              |
  | c1c5dbbc-ba4d-4c43-9fa4-b50284be6b44 | FEDORA-1      | ACTIVE | -          | Running     | net1=1.0.0.4              |
  | 05858218-80c3-4f6c-a6c0-e2094faa20a6 | FEDORA-3      | ACTIVE | -          | Running     | net1=1.0.0.5              |
  | fddae0ed-fdff-4962-9284-84303ad0cf5c | FEDORA-4      | ACTIVE | -          | Running     | net1=1.0.0.20             |
  | d5dcef89-c89d-4e75-b567-ef6ddf0d95e8 | FEDORA-vsan-1 | ACTIVE | -          | Running     | net1=1.0.0.6              |
  | c4d9ea4e-7f5d-4ffa-a091-15cf817b6656 | FEDORA-vsan-2 | ACTIVE | -          | Running     | net1=1.0.0.21             |
  | 112df993-c9dd-4c6c-b0ae-c20e133bba07 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.8, 172.24.4.14 |
  | da3fb215-8665-4761-b197-076c80b7b2bd | vm1           | ACTIVE | -          | Running     | net1=1.0.0.3, 172.24.4.3  |
  | dbb473c9-248a-4f5e-ab9b-30c6526f4978 | vm1           | ACTIVE | -          | Running     | net1=1.0.0.7, 172.24.4.10 |
  | 3882b496-1433-4cb7-a0c5-151a0c47f8bd | vm2           | ACTIVE | -          | Running     | net1=1.0.0.12             |
  +--------------------------------------+---------------+--------+------------+-------------+---------------------------+

  
  root@utu1604template:~# nova pause d5dcef89-c89d-4e75-b567-ef6ddf0d95e8

  
  root@utu1604template:~# nova service-list
  

[Yahoo-eng-team] [Bug 1810642] [NEW] Neutron service taking up all CPU resources

2019-01-06 Thread Gary Kotton
Public bug reported:

The neutron service has very high CPU utilization when we have the following 
scenario:
1. NSX neutron plugin is used
2. There are thousands of instances with puppet installed that constantly
poll the metadata service

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1810642

Title:
  Neutron service taking up all CPU resources

Status in OpenStack Compute (nova):
  New

Bug description:
  The neutron service has very high CPU utilization when we have the following 
scenario:
  1. NSX neutron plugin is used
  2. There are thousands of instances with puppet installed that constantly
poll the metadata service

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1810642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1806711] [NEW] Message timeout leads to inconsistent disk attachments

2018-12-04 Thread Gary Kotton
nstance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/oslo_vmware/api.py", line 449, in _poll_task
2018-12-03 08:09:13.528 5325 ERROR nova.compute.manager [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] raise task_ex
2018-12-03 08:09:13.528 5325 ERROR nova.compute.manager [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] VimFaultException: Failed to add disk 
'scsi0:3'.
2018-12-03 08:09:13.528 5325 ERROR nova.compute.manager [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] Faults: ['GenericVmConfigFault']
2018-12-03 08:09:13.528 5325 ERROR nova.compute.manager [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1806711

Title:
  Message timeout leads to inconsistent disk attachments

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  A message timeout causes a disk attachment to fail (which is legitimate). The
problem is that the disk is actually attached to the VM, yet the rollback does
not remove the disk from the VM; it only sets the cinder/nova status of the
volume to available.
  Subsequent attachments will fail.
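  A hedged sketch of the shape of the missing rollback (illustrative; the
real attach path in the vmwareapi driver is more involved):

import oslo_messaging

def attach_volume_with_rollback(self, connection_info, instance):
    try:
        self._volumeops.attach_volume(connection_info, instance)
    except oslo_messaging.MessagingTimeout:
        # The attach may have completed on the hypervisor even though
        # the RPC reply was lost, so roll back the backend attachment
        # too, not just the cinder/nova volume state.
        self._volumeops.detach_volume(connection_info, instance)
        raise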

  Original stack is:

  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] Traceback (most recent call last):
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 306, in 
attach
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] encryption=encryption)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 465, in 
detach_volume
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] if self._check_vio_import(instance):
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 458, in 
_check_vio_import
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] instance_meta = 
instance_metadata.InstanceMetadata(instance)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py", line 141, in 
__init__
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] self.mappings = 
_format_instance_mapping(ctxt, instance)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/api/metadata/base.py", line 700, in 
_format_instance_mapping
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] ctxt, instance.uuid)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 177, in 
wrapper
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] args, kwargs)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py", line 239, in 
object_class_action_versions
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] args=args, kwargs=kwargs)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 169, in 
call
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] retry=self.retry)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9]   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 97, in 
_send
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906-1385-4afb-9317-bbab4599dad9] timeout=timeout, retry=retry)
  2018-12-03 04:30:58.003 2442 ERROR nova.virt.block_device [instance: 
500f5906

[Yahoo-eng-team] [Bug 1796565] [NEW] When deleting an instance the message "Unable to connect to Neutron" appears

2018-10-07 Thread Gary Kotton
Public bug reported:

This is due to the fact that the exception caught below is far too broad.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L1719
In this specific case horizon is under load and neutron raises an exception
indicating that the port is not found.
So the end user complains that irrelevant messages are appearing.
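A hedged sketch of the proposed narrowing (neutronclient(request) stands in for horizon's client helper; the function and its use here are illustrative):

from neutronclient.common import exceptions as neutron_exc

def port_exists(request, port_id):
    try:
        neutronclient(request).show_port(port_id)
        return True
    except neutron_exc.PortNotFoundClient:
        # A port vanishing while an instance is being deleted is
        # expected churn, not a Neutron outage, so it should not be
        # reported as "Unable to connect to Neutron".
        return False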

** Affects: horizon
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1796565

Title:
  When deleting an instance the message "Unable to connect to Neutron"
  appears

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  This is due to the fact that the exception caught below is far too broad.
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L1719
  In this specific case horizon is under load and neutron raises an exception
indicating that the port is not found.
  So the end user complains that irrelevant messages are appearing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1796565/+subscriptions



[Yahoo-eng-team] [Bug 1777422] [NEW] Resource tracker periodic task taking a very long time

2018-06-18 Thread Gary Kotton
Public bug reported:

We have 250 instances on a compute node and the resource tracker
periodic task is taking a very long time:

2018-06-17 10:30:56.194 1658 DEBUG oslo_concurrency.lockutils [req-
fb2573f9-3862-45db-b546-7a00fdd9a871 - - - - -] Lock "compute_resources"
released by "nova.compute.resource_tracker._update_available_resource"
:: held 10.666s inner /usr/lib/python2.7/dist-
packages/oslo_concurrency/lockutils.py:288

This is due to the deepcopy. This copies the structure N times per
iteration, once for each instance. This is very costly.
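An illustrative sketch of the cost difference (not the actual resource tracker code; _account is a hypothetical stand-in for the per-instance bookkeeping):

import copy

def _account(snapshot, instance):
    # Hypothetical stand-in for the per-instance usage bookkeeping.
    snapshot.setdefault('instances', []).append(instance)

def update_usage_slow(resources, instances):
    # The pattern described above: one deepcopy of the whole
    # resources structure per instance, i.e. ~250 copies per run.
    for instance in instances:
        _account(copy.deepcopy(resources), instance)

def update_usage_fast(resources, instances):
    # One copy per periodic run is enough; the "compute_resources"
    # lock is then held for a fraction of the 10.666s in the log.
    snapshot = copy.deepcopy(resources)
    for instance in instances:
        _account(snapshot, instance)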

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777422

Title:
  Resource tracker periodic task taking a very long time

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We have 250 instances on a compute node and the resource tracker
  periodic task is taking a very long time:

  2018-06-17 10:30:56.194 1658 DEBUG oslo_concurrency.lockutils [req-
  fb2573f9-3862-45db-b546-7a00fdd9a871 - - - - -] Lock
  "compute_resources" released by
  "nova.compute.resource_tracker._update_available_resource" :: held
  10.666s inner /usr/lib/python2.7/dist-
  packages/oslo_concurrency/lockutils.py:288

  This is due to the deepcopy. This copies the structure N times per
  iteration, once for each instance. This is very costly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777422/+subscriptions



[Yahoo-eng-team] [Bug 1776621] [NEW] Scale: when periodic pool size is small and there is a lot of load the compute service goes down

2018-06-13 Thread Gary Kotton
heduler/rpcapi.py", line 158, in 
select_destinations
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager return 
cctxt.call(ctxt, 'select_destinations', **msg_args)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 174, in 
call
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager retry=self.retry)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 131, in 
_send
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager timeout=timeout, 
retry=retry)
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 
559, in send
2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager retry=retry)

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776621

Title:
  Scale: when periodic pool size is small and there is a lot of load the
  compute service goes down

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When the nova power sync pool is exhausted the compute service will go
  down. This results in scale and performance tests failing.
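  A hedged mitigation while a fix lands, assuming the deployment can tune
nova's sync_power_state_pool_size option (the value is illustrative and
deployment specific):

[DEFAULT]
# Enlarge the greenthread pool used by the _sync_power_states periodic
# task so heavy load cannot exhaust it and starve the service heartbeat.
sync_power_state_pool_size = 1000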

  2018-06-12 19:58:48.871 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
  2018-06-12 19:58:48.872 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
  2018-06-12 19:58:54.793 30126 WARNING oslo.messaging._drivers.impl_rabbit 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Unexpected error during 
heartbeart thread processing, retrying...: error: [Errno 104] Connection reset 
by peer
  2018-06-12 21:37:23.805 30126 DEBUG oslo_concurrency.lockutils 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Lock "compute_resources" 
released by "nova.compute.resource_tracker._update_available_resource" :: held 
6004.943s inner 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:288
  2018-06-12 21:37:23.807 30126 ERROR nova.compute.manager 
[req-196321bb-a11a-4e6e-a80a-544ecd093986 c3de6d9ec02c494d978330d8f1a64da1 
d37803befc35418981f1f0b6dceec696 - default default] Error updating resources 
for node domain-c7.fd3d2358-cc8d-4773-9fef-7a2713ac05ba.: MessagingTimeout: 
Timed out waiting for a reply to message ID 1eb4b1b40f0f4c66b0266608073717e8

  root@controller01:/var/log/nova# vi nova-conductor.log.1
  2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager 
[req-77b5e1d7-a4b7-468e-98af-dfdfbf2fad7f 1b5d8da24b39464cb6736d122ccc0665 
eb361d7bc9bd40059a2ce2848c985772 - default default] Failed to schedule 
instances: NoValidHost_Remote: No valid host was found. There are not enough 
hosts available.
  Traceback (most recent call last):

File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
226, in inner
  return func(*args, **kwargs)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 
153, in select_destinations
  allocation_request_version, return_alternates)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 93, in select_destinations
  allocation_request_version, return_alternates)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 245, in _schedule
  claimed_instance_uuids)

File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", 
line 282, in _ensure_sufficient_hosts
  raise exception.NoValidHost(reason=reason)

  NoValidHost: No valid host was found. There are not enough hosts available.
  2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager Traceback (most 
recent call last):
  2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 1118, in 
schedule_and_build_instances
  2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager instance_uuids, 
return_alternates=True)
  2018-06-12 20:48:10.161 6328 ERROR nova.conductor.manager File 
"/usr/lib/python2.7/dist-packages/nova/conductor/mana

[Yahoo-eng-team] [Bug 1771107] Re: unit test networking_bgpvpn.tests.unit.extensions.test_bgpvpn.BgpvpnExtensionTestCase.test_bgpvpn_list fails

2018-05-15 Thread Gary Kotton
** No longer affects: neutron

** Changed in: bgpvpn
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1771107

Title:
  unit test
  
networking_bgpvpn.tests.unit.extensions.test_bgpvpn.BgpvpnExtensionTestCase.test_bgpvpn_list
  fails

Status in networking-bgpvpn:
  Fix Committed

Bug description:
  A change in neutron results in a networking-bgpvpn unit test failure:

  ==
  ERROR: 
networking_bgpvpn.tests.unit.extensions.test_bgpvpn.BgpvpnExtensionTestCase.test_bgpvpn_list
  --
  Traceback (most recent call last):
File "/home/teom7365/prog/openstack/neutron/neutron/tests/base.py", line 
140, in func
  return f(self, *args, **kwargs)
File "networking_bgpvpn/tests/unit/extensions/test_bgpvpn.py", line 192, in 
test_bgpvpn_list
  _get_path(BGPVPN_URI, fmt=self.fmt))
File 
"/home/teom7365/prog/openstack/networking-bgpvpn/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py",
 line 331, in get
  expect_errors=expect_errors)
File 
"/home/teom7365/prog/openstack/networking-bgpvpn/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py",
 line 651, in do_request
  self._check_status(status, res)
File 
"/home/teom7365/prog/openstack/networking-bgpvpn/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py",
 line 683, in _check_status
  res)
  webtest.app.AppError: Bad response: 404 Not Found (not 200 OK or 3xx redirect 
for http://localhost/bgpvpn/bgpvpns.json)
  '{"NeutronError": {"message": "Extensions not found: [\'bgpvpn\'].", "type": 
"ExtensionsNotFound", "detail": ""}}'


  'git bisect' allows us to identify the neutron commit which introduces
  the issue:

 https://review.openstack.org/#/c/545490/

  
  At this point, I have not identified /why/ this commit causes this failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1771107/+subscriptions



[Yahoo-eng-team] [Bug 1770638] Re: While updating the address-scope with optional argument --share=true, the user is unable to update it with --share=false.

2018-05-14 Thread Gary Kotton
This is currently by design -
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/address_scope_db.py#n88
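A hedged sketch of what that check amounts to (illustrative; the linked address_scope_db.py is the authoritative code):

from neutron_lib import exceptions as n_exc

def _validate_shared_update(original_scope, updated_fields):
    # Once shared, an address scope may already be consumed by other
    # tenants, so a shared=True -> shared=False update is refused.
    if original_scope['shared'] and not updated_fields.get('shared', True):
        raise n_exc.InvalidInput(
            error_message="Shared address scope can't be unshared")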


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1770638

Title:
  While updating the address-scope with optional argument --share=true,
  the user is unable to update it with --share=false.

Status in neutron:
  Invalid

Bug description:
  Problem:
  While updating the address-scope with optional argument --share=true, the
user is unable to update it with --share=false.

  Steps to reproduce:

  1) Update the address scope using the command
  neutron address-scope-update --shared true 
0055ebc1-771f-4f98-9775-bafd8f563061
  2) Then try to update the address scope again 
  neutron address-scope-update --shared false 
0055ebc1-771f-4f98-9775-bafd8f563061
  output:
  Unable to update address scope 0055ebc1-771f-4f98-9775-bafd8f563061 : Shared 
address scope can't be unshared.
  Neutron server returns request_ids: 
['req-d98ce885-7e18-4b1b-9d7b-2c7538f80156']

  
  So, it should be possible to update the address scope back to shared=false.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1770638/+subscriptions



[Yahoo-eng-team] [Bug 1767058] [NEW] Rally test boot_and_delete_server_with_secgroups fails

2018-04-26 Thread Gary Kotton
Public bug reported:

When running rally scenarios the test fails with a PortNotFoundClient exception.
From the logs we see the following:

Failure:
---

GetResourceFailure: Failed to get the resource : The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-a9471015-e72e-42cf-acc3-8b361415d91e)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/rally/task/runner.py", line 71, 
in _run_scenario_once
getattr(scenario_inst, method_name)(**scenario_kwargs)
  File 
"/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/security_group.py",
 line 167, in run
self._delete_server(server)
  File 
"/usr/local/lib/python2.7/dist-packages/rally/plugins/openstack/scenarios/nova/utils.py",
 line 451, in _delete_server
check_interval=CONF.openstack.nova_server_delete_poll_interval
  File "/usr/local/lib/python2.7/dist-packages/rally/task/utils.py", line 214, 
in wait_for_status
resource = update_resource(resource)
  File "/usr/local/lib/python2.7/dist-packages/rally/task/utils.py", line 80, 
in _get_from_manager
raise exceptions.GetResourceFailure(resource=resource, err=e)
GetResourceFailure: Failed to get the resource : The server has either erred or is incapable of 
performing the requested operation. (HTTP 500) (Request-ID: 
req-a9471015-e72e-42cf-acc3-8b361415d91e)


Neutron:
---
port-creation:
2018-04-22 09:18:31.984 32139 DEBUG vmware_nsxlib.v3.client 
[req-7f2bece2-7bd3-4e25-a5be-9c363e77bd5b f8e81abca4094924831b9538389b8d5b 
c87b5a2bc4e649069e9cfa4bbc874663 - default default] REST call: POST 
api/v1/logical-ports. Headers: {'X-NSX-EUSER': 
u'f8e81abca4094924831b9538389b8d5b c87b5a2bc4e649069e9cfa4bbc874663 - default 
default', 'Content-Type': 'application/json', 'X-NSX-EREQID': 
'req-7f2bece2-7bd3-4e25-a5be-9c363e77bd5b', 'Accept': 'application/json'}. 
Body: {"address_bindings": [{"ip_address": "100.1.124.13", "mac_address": 
"fa:16:3e:d4:3a:8f"}], "admin_state": "UP", "attachment": null, "description": 
"", "display_name": "", "logical_switch_id": 
"dfc6fa69-6df7-44db-ba43-68152104103f", "switching_profile_ids": [{"key": 
"SpoofGuardSwitchingProfile", "value": 
"0bb5dd95-1e3d-464e-a9d5-bd040bd91d9d"}], "tags": [{"scope": 
"os-neutron-port-id", "tag": "1954b052-bc2c-48ba-aa24-e8c96fac1723"}, {"scope": 
"os-project-id", "tag": "c87b5a2bc4e649069e9cfa4bbc874663"}, {"scope": 
"os-project-name", "tag": "c_rally_41596539_tkcSTCuv"}, {"scope": 
"os-api-version", "tag": "12.0.0.dev8294691"}, {"scope": "os-security-group", 
"tag": "a4d155d5-285a-4e9a-8982-04f6a242d3ed"}, {"scope": "os-security-group", 
"tag": "OS-Default-Section"}]} _rest_call 
/usr/lib/python2.7/dist-packages/vmware_nsxlib/v3/client.py:201
.
.
2018-04-22 09:18:32.278 32139 INFO neutron.wsgi 
[req-7f2bece2-7bd3-4e25-a5be-9c363e77bd5b f8e81abca4094924831b9538389b8d5b 
c87b5a2bc4e649069e9cfa4bbc874663 - default default] 
10.127.108.239,10.127.108.237 "POST /v2.0/ports HTTP/1.1" status: 201  len: 983 
time: 2.8035491

port-deletion:

2018-04-22 09:18:55.552 32135 INFO neutron.wsgi [req-774309da-
6fc6-4521-aed2-37f2eb01b5b4 f8e81abca4094924831b9538389b8d5b
c87b5a2bc4e649069e9cfa4bbc874663 - default default]
10.127.108.239,10.127.108.237 "DELETE /v2.0/ports/1954b052-bc2c-48ba-
aa24-e8c96fac1723 HTTP/1.1" status: 204  len: 168 time: 2.4448841

Nova:

2018-04-22 09:18:55.539 19221 DEBUG oslo_concurrency.lockutils 
[req-824643af-4107-46a9-ba5a-950fcec27ba4 8d478ca1fa274601a50f8d2a38c5e70f 
69fad6d3240645b5854b52bca7e9b85e - default default] Lock 
"9ab78fdd-5798-4426-baf2-2706d0d69345" acqu
ired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 
0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:276
2018-04-22 09:18:55.541 19221 DEBUG oslo_concurrency.lockutils 
[req-824643af-4107-46a9-ba5a-950fcec27ba4 8d478ca1fa274601a50f8d2a38c5e70f 
69fad6d3240645b5854b52bca7e9b85e - default default] Lock 
"9ab78fdd-5798-4426-baf2-2706d0d69345" rele
ased by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 
0.002s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:288
2018-04-22 09:18:55.662 19221 DEBUG nova.policy 
[req-824643af-4107-46a9-ba5a-950fcec27ba4 8d478ca1fa274601a50f8d2a38c5e70f 
69fad6d3240645b5854b52bca7e9b85e - default default] Policy check for 
os_compute_api:os-extended-server-attributes f
ailed with credentials {'service_roles': [], 'user_id': 
u'8d478ca1fa274601a50f8d2a38c5e70f', 'roles': [u'_member_'], 'user_domain_id': 
u'default', 'service_project_id': None, 'service_user_id': None, 
'service_user_domain_id': None, 'servi
ce_project_domain_id': None, 'is_admin_project': False, 'is_admin': False, 
'project_id': u'69fad6d3240645b5854b52bca7e9b85e', 'project_domain_id': 
u'default'} authorize /usr/lib/python2.7/dist-packages/nova/policy.py:168
2018-04-22 09:18:55.774 19225 DEBUG neutronclient.v2_0.client 

[Yahoo-eng-team] [Bug 1762414] [NEW] Unable to create a FIP when L2GW is enabled

2018-04-09 Thread Gary Kotton
Public bug reported:

  File "/usr/share/openstack-dashboard/horizon/tables/base.py", line 1389, in 
_filter_action
[Thu Mar 29 09:11:07.992793 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
return action._allowed(request, datum) and row_matched
[Thu Mar 29 09:11:07.992800 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/share/openstack-dashboard/horizon/tables/actions.py", line 140, in 
_allowed
[Thu Mar 29 09:11:07.992806 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
return self.allowed(request, datum)
[Thu Mar 29 09:11:07.992813 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File 
"/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/floating_ips/tables.py",
 line 51, in allowed
[Thu Mar 29 09:11:07.992820 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
targets=('floatingip', ))
[Thu Mar 29 09:11:07.992826 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/share/openstack-dashboard/horizon/utils/memoized.py", line 95, in 
wrapped
[Thu Mar 29 09:11:07.992833 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
value = cache[key] = func(*args, **kwargs)
[Thu Mar 29 09:11:07.992840 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 
419, in tenant_quota_usages
[Thu Mar 29 09:11:07.992846 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
_get_tenant_network_usages(request, usages, disabled_quotas, tenant_id)
[Thu Mar 29 09:11:07.992853 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 
320, in _get_tenant_network_usages
[Thu Mar 29 09:11:07.992866 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
details = neutron.tenant_quota_detail_get(request, tenant_id)
[Thu Mar 29 09:11:07.992873 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", line 
1477, in tenant_quota_detail_get
[Thu Mar 29 09:11:07.992880 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
response = neutronclient(request).get('/quotas/%s/details' % tenant_id)
[Thu Mar 29 09:11:07.992886 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 354, 
in get
[Thu Mar 29 09:11:07.992893 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
headers=headers, params=params)
[Thu Mar 29 09:11:07.992899 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, 
in retry_request
[Thu Mar 29 09:11:07.992906 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
headers=headers, params=params)
[Thu Mar 29 09:11:07.992912 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 294, 
in do_request
[Thu Mar 29 09:11:07.992918 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
self._handle_fault_response(status_code, replybody, resp)
[Thu Mar 29 09:11:07.992925 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 269, 
in _handle_fault_response
[Thu Mar 29 09:11:07.992932 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
exception_handler_v20(status_code, error_body)
[Thu Mar 29 09:11:07.992938 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 93, 
in exception_handler_v20
[Thu Mar 29 09:11:07.992944 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
request_ids=request_ids)
[Thu Mar 29 09:11:07.992951 2018] [wsgi:error] [pid 22987:tid 140314318284544] 
Forbidden: User does not have admin privileges: Cannot GET resource for non 
admin tenant.

This is due to the following:
From Queens onwards we have an issue with horizon and L2GW. We are unable to
create a floating IP. This does not occur when using the CLI, only via horizon.
The error received is:
‘Error: User does not have admin privileges: Cannot GET resource for non admin
tenant. Neutron server returns request_ids:
['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']’
The root cause is:
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1762414

Title:
  Unable to create a FIP when L2GW is enabled

Status in neutron:
  New

Bug description:
File "/usr/share/openstack-dashboard/horizon/tables/base.py", line 1389, in 
_filter_action
  [Thu Mar 29 09:11:07.992793 2018] [wsgi:error] [pid 22987:tid 
140314318284544] return action._allowed(request, datum) and row_matched
  [Thu Mar 29 09:11:07.992800 2018] [wsgi:error] [pid 22987:tid 
140314318284544] File 

[Yahoo-eng-team] [Bug 1759234] [NEW] Stable queens migrations not tagged

2018-03-27 Thread Gary Kotton
Public bug reported:

There is a missing tag with the stable queens migrations. This is
addressed with https://review.openstack.org/#/c/554038/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1759234

Title:
  Stable queens migrations not tagged

Status in neutron:
  New

Bug description:
  There is a missing tag with the stable queens migrations. This is
  addressed with https://review.openstack.org/#/c/554038/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1759234/+subscriptions



[Yahoo-eng-team] [Bug 1739071] [NEW] Floating IP assigned to a DHCP port leads to an exception if DHCP port is deleted

2017-12-19 Thread Gary Kotton
.7/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource raise 
errorclass(errno, errval)
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource DBReferenceError: 
(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent row: a 
foreign key constraint fails (`neutron`.`floatingips`, CONSTRAINT 
`floatingips_ibfk_1` FOREIGN KEY (`fixed_port_id`) REFERENCES `ports` (`id`))') 
[SQL: u'DELETE FROM ports WHERE ports.id = %(id)s'] [parameters: {'id': 
u'e7aa58b7-8210-4ac5-967e-8b210cfe4e87'}]
2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1739071

Title:
  Floating IP assigned to a DHCP port leads to an exception if DHCP port
  is deleted

Status in neutron:
  In Progress

Bug description:
  A DHCP port should not be allowed to have a floating IP assigned to
  it. Here is the trace for when this port is deleted.
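  Before the trace, a hedged sketch of the guard this implies (illustrative,
not the actual neutron L3 code):

from neutron_lib import constants
from neutron_lib import exceptions as n_exc

def _validate_fip_port(port):
    # Floating IPs should never target infrastructure ports such as
    # the DHCP agent's port; allowing it leaves a floatingips row
    # whose foreign key blocks the later port delete, as in the
    # DBReferenceError above.
    if port['device_owner'] == constants.DEVICE_OWNER_DHCP:
        raise n_exc.BadRequest(
            resource='floatingip',
            msg='Cannot associate a floating IP with a DHCP port')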

  root@loadbalancer01:~# neutron subnet-update  
2d370a99-f177-4b85-892e-56def086e046 --disable-dhcp
  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-f43b7e82-58eb-400e-a9ac-3341f3324e8c']

  Backtrace from neutron-server.log file:
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
[req-f43b7e82-58eb-400e-a9ac-3341f3324e8c ab32ddb0a8e54a6eb756a0a1d82f8345 
1cb4e51898b04cbabdacb0b84b6a7e7e - - -] update failed: No details.
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 93, in 
resource
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 617, in update
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 95, in wrapped
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
self.force_reraise()
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 91, in wrapped
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
self.force_reraise()
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 131, in wrapped
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
traceback.format_exc())
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource 
self.force_reraise()
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-12-17 05:45:58.568 3666 ERROR neutron.api.v2
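
A guard along these lines is what the report implies; this is a minimal
sketch assuming neutron_lib's constants and exceptions, with a hypothetical
helper name (not the actual fix):

    # Hypothetical validation helper: refuse to associate a floating IP
    # with a DHCP-owned port (illustrative, not the merged neutron patch).
    from neutron_lib import constants
    from neutron_lib import exceptions as n_exc

    def _validate_fip_target_port(port):
        if port.get('device_owner') == constants.DEVICE_OWNER_DHCP:
            raise n_exc.BadRequest(
                resource='floatingip',
                msg='A floating IP cannot be associated with a DHCP port')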

[Yahoo-eng-team] [Bug 1738983] [NEW] Dynamic routing: invalid exception inputs lead to test exceptions

2017-12-19 Thread Gary Kotton
Public bug reported:

Unit tests have exceptions:

Exception encountered during bgp_speaker rescheduling.
Traceback (most recent call last):
  File 
"/home/gkotton/neutron-dynamic-routing/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py",
 line 159, in reschedule_resources_from_down_agents
reschedule_resource(context, binding_resource_id)
  File "neutron_dynamic_routing/db/bgp_dragentscheduler_db.py", line 174, in 
reschedule_bgp_speaker
failure_reason="no eligible dr agent found")
TypeError: __init__() got an unexpected keyword argument 'failure_reason'
BgpDrAgent 84dfa040-c1d4-40b1-8a16-0502ffccbb5b is down

The tests pass, but this will lead to extra failures in the real world.
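
The failure mode is generic: the caller passes a keyword that the exception
class does not accept, so the raise itself raises. A minimal sketch, with
illustrative names rather than the real dynamic-routing classes:

    # The exception accepts 'reason', but the caller passes 'failure_reason'.
    class BgpSpeakerRescheduleError(Exception):
        def __init__(self, reason):
            super().__init__('Failed to reschedule BGP speaker: %s' % reason)

    BgpSpeakerRescheduleError(reason='no eligible dr agent found')  # OK
    try:
        BgpSpeakerRescheduleError(failure_reason='no eligible dr agent found')
    except TypeError as exc:
        print(exc)  # __init__() got an unexpected keyword argument ...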

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738983

Title:
  Dynamic routing: invalid exception inputs lead to test exceptions

Status in neutron:
  In Progress

Bug description:
  Unit tests have exceptions:

  Exception encountered during bgp_speaker rescheduling.
  Traceback (most recent call last):
File 
"/home/gkotton/neutron-dynamic-routing/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py",
 line 159, in reschedule_resources_from_down_agents
  reschedule_resource(context, binding_resource_id)
File "neutron_dynamic_routing/db/bgp_dragentscheduler_db.py", line 174, in 
reschedule_bgp_speaker
  failure_reason="no eligible dr agent found")
  TypeError: __init__() got an unexpected keyword argument 'failure_reason'
  BgpDrAgent 84dfa040-c1d4-40b1-8a16-0502ffccbb5b is down

  The tests pass, but this will lead to extra failures in the real world.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1738983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1738612] [NEW] Floating IP update breaks decomposed plugins

2017-12-17 Thread Gary Kotton
Public bug reported:

Commit 088e317cd2dd8488feb29a4fa6600227d1810479 breaks dynamic routing:

ft1.14: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test__get_routes_by_router_with_fip_StringException:
 Traceback (most recent call last):
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", 
line 132, in func
return f(self, *args, **kwargs)
  File "neutron_dynamic_routing/tests/unit/db/test_bgp_db.py", line 780, in 
test__get_routes_by_router_with_fip
fip = self.l3plugin.create_floatingip(self.context, fip_data)
  File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", 
line 162, in wrapped
return method(*args, **kwargs)
  File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", 
line 92, in wrapped
setattr(e, '_RETRY_EXCEEDED', True)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", 
line 88, in wrapped
return f(*args, **kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 147, in wrapper
ectxt.value = e.inner_exc
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 135, in wrapper
return f(*args, **kwargs)
  File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", 
line 127, in wrapped
LOG.debug("Retry wrapper got retriable exception: %s", e)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", 
line 123, in wrapped
return f(*dup_args, **dup_kwargs)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/l3_dvr_db.py", 
line 995, in create_floatingip
context, floatingip, initial_status)
  File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/l3_db.py", line 
1290, in _create_floatingip
if fip['subnet_id']:
KeyError: 'subnet_id'
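
The decomposed plugin path builds a floating IP dict that never contains the
new 'subnet_id' key, so the direct indexing blows up. A minimal sketch of the
defensive pattern (illustrative, not the actual neutron fix):

    # Use .get() so out-of-tree callers that omit 'subnet_id' keep working.
    def _external_subnet_id(fip):
        return fip.get('subnet_id')  # None instead of KeyError when absent

    print(_external_subnet_id({'floating_network_id': 'net-1'}))  # None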

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738612

Title:
  Floating IP update breaks decomposed plugins

Status in neutron:
  In Progress

Bug description:
  Commit 088e317cd2dd8488feb29a4fa6600227d1810479 breaks dynamic
  routing:

  ft1.14: 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.Ml2BgpTests.test__get_routes_by_router_with_fip_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", 
line 132, in func
  return f(self, *args, **kwargs)
File "neutron_dynamic_routing/tests/unit/db/test_bgp_db.py", line 780, in 
test__get_routes_by_router_with_fip
  fip = self.l3plugin.create_floatingip(self.context, fip_data)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", line 
162, in wrapped
  return method(*args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/db/api.py", line 
92, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/py27/local

[Yahoo-eng-team] [Bug 1736946] [NEW] Conductor: fails to clean up networking resources

2017-12-07 Thread Gary Kotton
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    self._start()
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 489, in _start
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    engine_args, maker_args)
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 511, in _setup_for_connection
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    "No sql_connection parameter 
is established")
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00mCantStartEngineError: No 
sql_connection parameter is established
Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736946

Title:
  Conductor: fails to clean up networking resources

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  If libvirt fails to deploy an instance - for example, due to a
  problematic vif type being passed - the conductor will fail to clean up
  resources and fails with the exception below. This is due to the fact
  that the cell mapping was not invoked.

  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00mTraceback (most recent call last):
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    res = 
self.dispatcher.dispatch(message)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
220, in dispatch
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    return 
self._do_dispatch(endpoint, method, ctxt, args)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
190, in _do_dispatch
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    result = func(ctxt, **new_args)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/conductor/manager.py", line 559, in build_instances
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    
self._destroy_build_request(context, instance)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/conductor/manager.py", line 477, in _destroy_build_request
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    context, instance.uuid)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    result = fn(cls, context, 
*args, **kwargs)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/opt/stack/nova/nova/objects/build_request.py", line 176, in 
get_by_instance_uuid
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    db_req = 
cls._get_by_instance_uuid_from_db(context, instance_uuid)
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 983, in wrapper
  Dec  7 09:12:50 utu1604template nova-conductor[22761]: ERROR 
oslo_messaging.rpc.server #033[01;35m#033[00m    with 
self._transaction_scope(context
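
A sketch of the guard the cleanup path needs, assuming oslo.db's
CantStartEngineError and a stand-in for the BuildRequest lookup seen in the
trace (illustrative only, not the merged nova fix):

    from oslo_db.exception import CantStartEngineError

    def destroy_build_request_safely(lookup_build_request, context, uuid):
        try:
            build_request = lookup_build_request(context, uuid)
        except CantStartEngineError:
            # The API DB connection was never configured for this process;
            # there is nothing to clean up, so do not mask the original error.
            return
        build_request.destroy()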

[Yahoo-eng-team] [Bug 1736678] [NEW] Tags: server exception when using invalid tag inputs

2017-12-06 Thread Gary Kotton
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR
neutron.pecan_wsgi.hooks.translation File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*args, **kwargs)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/db/api.py", line 128, in wrapped
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation LOG.debug("Retry wrapper got retriable 
exception: %s", e)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation self.force_reraise()
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation six.reraise(self.type_, self.value, 
self.tb)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/db/api.py", line 124, in wrapped
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*dup_args, **dup_kwargs)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/pecan_wsgi/controllers/utils.py", line 76, in 
wrapped
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation return f(*args, **kwargs)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/pecan_wsgi/controllers/utils.py", line 398, in index
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation result = controller_method(*args, 
**uri_identifiers)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/extensions/tagging.py", line 136, in update_all
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation validate_tags(body)
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation File 
"/opt/stack/neutron/neutron/extensions/tagging.py", line 74, in validate_tags
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation if 'tags' not in body:
Dec 05 20:08:36 utu1604template neutron-server[19677]: ERROR 
neutron.pecan_wsgi.hooks.translation TypeError: argument of type 'NoneType' is 
not iterable
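
validate_tags does a membership test on a body that can be None. A minimal
sketch of the input check, using ValueError as a stand-in for the proper 400
response (illustrative, not the exact neutron patch):

    def validate_tags(body):
        # Reject a missing/None/non-dict body before any membership test.
        if not isinstance(body, dict) or 'tags' not in body:
            raise ValueError("Invalid tags body: a JSON object with a "
                             "'tags' list is required")
        if not isinstance(body['tags'], list):
            raise ValueError("'tags' must be a list")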

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736678

Title:
  Tags: server exception when using invalid tag inputs

Status in neutron:
  In Progress

Bug description:
  Dec 05 20:08:36 utu1604template devstack@keystone.service[11631]: DEBUG 
keystone.common.authorization [None req-8d8b3040-9dc4-4bf0-acfd-b1df1e929ca6 
None None] RBAC: Authorizing identity:validate_token() {{(pid=11634) 
_build_policy_check_credentials 
/opt/stack/keystone/keystone/common/authorization.py:137}}
  Dec 05 20:08:36 utu1604template devstack@keystone.service[11631]: DEBUG 
keystone.policy.backends.rules [None req-8d8b3040-9dc4-4bf0-acfd-b1df1e929ca6 
None None] enforce identity:validate_token: {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'5fc8eb4a128b426a8f0d64fb282c89ff', 
'roles': [u'service'], 'user_domain_id': u'default', 'consumer_id': None, 
'trustee_id': None, 'is_domain': False, 'is_admin_project': True, 'trustor_id': 
None, 'token': , 'project_id': 
u'63dfd8edb7a744689f5a4e1d72b168c0', 'trust_id': None, 'project_domain_id': 
u'default'} {{(pid=11634) enforce 
/opt/stack/keystone/keystone/policy/backends/rules.py:33}}
  Dec 05 20:08:36 utu1604template devstack@keystone.service[11631]: DEBUG 
keystone.common.authorization [None req-8d8b3040-9dc4-4bf0-acfd-b1df1e929ca6 
None None] RBAC: Authorization granted {{(pid=11634) check_policy 
/opt/stack/keystone/keystone/common/authorization.py:240}}
  Dec 05 20:08:36 utu1604template devstack@keystone.service[11631]: [pid: 
11634|app: 0|req: 26017/52064] 10.115.78.247 () {62 vars in 1404 bytes} [Tue 
Dec 5 20:08:36 2017] GET /identity/v3/auth/tokens => ge

[Yahoo-eng-team] [Bug 1703540] [NEW] Reschedule with libvirt exception leaves dangling neutron ports

2017-07-11 Thread Gary Kotton
Public bug reported:

When an instance fails to spawn, for example with the exception:

2017-07-11 04:39:56.942 ERROR nova.compute.manager 
[req-1e54a66a-6da5-4720-89cc-f65568dea131 ashok ashok] [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] Instance failed to spawn
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] Traceback (most recent call last):
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2124, in _build_resources
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] yield resources
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1930, in _build_and_run_instance
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] block_device_info=block_device_info)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2714, in spawn
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] destroy_disks_on_failure=True)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5130, in 
_create_domain_and_network
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] destroy_disks_on_failure)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] self.force_reraise()
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] six.reraise(self.type_, self.value, 
self.tb)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5102, in 
_create_domain_and_network
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] post_xml_callback=post_xml_callback)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5020, in _create_domain
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] guest.launch(pause=pause)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 145, in launch
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] self._encoded_xml, errors='ignore')
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] self.force_reraise()
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] six.reraise(self.type_, self.value, 
self.tb)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 140, in launch
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] return 
self._domain.createWithFlags(flags)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 186, in doit
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in 
proxy_call
2017-07-11 04:39:56.942 TRACE nova.compute.manager [instance: 
d37e6882-8c94-47dc-8c2f-c9052a25b95b] rv = execute(f, 
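
Per the title, the missing piece is that nothing deallocates the instance's
neutron ports on the failed host before the reschedule. A sketch of the
expected cleanup shape, with hypothetical names ('network_api' mimics nova's
neutron API object; this is not the actual fix):

    def build_and_run(instance, network_api, context, spawn):
        try:
            spawn(instance)
        except Exception:
            # Deallocate the neutron ports before rescheduling so they
            # do not dangle on the failed host.
            network_api.deallocate_for_instance(context, instance)
            raise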

[Yahoo-eng-team] [Bug 1696403] [NEW] Pep8 fails due to six 1.10.0

2017-06-07 Thread Gary Kotton
Public bug reported:

2017-06-07 09:29:49.968889 | Traceback (most recent call last):
2017-06-07 09:29:49.968921 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/bin/pylint", 
line 11, in <module>
2017-06-07 09:29:49.968933 | sys.exit(run_pylint())
2017-06-07 09:29:49.968974 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/__init__.py",
 line 23, in run_pylint
2017-06-07 09:29:49.968986 | Run(sys.argv[1:])
2017-06-07 09:29:49.969025 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 1332, in __init__
2017-06-07 09:29:49.969037 | linter.check(args)
2017-06-07 09:29:49.969074 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 747, in check
2017-06-07 09:29:49.969090 | self._do_check(files_or_modules)
2017-06-07 09:29:49.969129 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 869, in _do_check
2017-06-07 09:29:49.969152 | self.check_astroid_module(ast_node, walker, 
rawcheckers, tokencheckers)
2017-06-07 09:29:49.969193 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 946, in check_astroid_module
2017-06-07 09:29:49.969206 | walker.walk(ast_node)
2017-06-07 09:29:49.969245 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/utils.py",
 line 874, in walk
2017-06-07 09:29:49.969255 | self.walk(child)
2017-06-07 09:29:49.969294 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/utils.py",
 line 871, in walk
2017-06-07 09:29:49.969304 | cb(astroid)
2017-06-07 09:29:49.969349 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/checkers/imports.py",
 line 249, in visit_import
2017-06-07 09:29:49.969368 | importedmodnode = 
self.get_imported_module(node, name)
2017-06-07 09:29:49.969411 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/checkers/imports.py",
 line 288, in get_imported_module
2017-06-07 09:29:49.969428 | return importnode.do_import_module(modname)
2017-06-07 09:29:49.969469 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/astroid/mixins.py",
 line 119, in do_import_module
2017-06-07 09:29:49.969484 | relative_only=level and level >= 1)
2017-06-07 09:29:49.969525 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/astroid/scoped_nodes.py",
 line 415, in import_module
2017-06-07 09:29:49.969542 | return MANAGER.ast_from_module_name(modname)
2017-06-07 09:29:49.969584 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/astroid/manager.py",
 line 161, in ast_from_module_name
2017-06-07 09:29:49.969592 | raise e
2017-06-07 09:29:49.969620 | astroid.exceptions.AstroidImportError: Failed to 
import module six.moves.urllib.parse with error:
2017-06-07 09:29:49.969636 | No module named six.moves.urllib.parse.
2017-06-07 09:29:50.688358 | ERROR: InvocationError: '/bin/sh 
./tools/coding-checks.sh --pylint '
2017-06-07 09:29:50.688923 | ___ summary 

2017-06-07 09:29:50.688975 | ERROR:   pep8: commands failed
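
The module does exist at runtime; six creates six.moves.* lazily through a
meta-path importer, which is what astroid fails to resolve statically. A
quick runtime check (illustrative):

    # Proof that the import resolves outside of pylint/astroid.
    import six.moves.urllib.parse

    print(six.moves.urllib.parse.urlparse('http://example.com/path'))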

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- 2017-06-07 11:29:07.751893 | Traceback (most recent call last):
- 2017-06-07 11:29:07.751925 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/bin/pylint", 
line 11, in <module>
- 2017-06-07 11:29:07.751938 | sys.exit(run_pylint())
- 2017-06-07 11:29:07.751979 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/__init__.py",
 line 23, in run_pylint
- 2017-06-07 11:29:07.751992 | Run(sys.argv[1:])
- 2017-06-07 11:29:07.752033 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 1332, in __init__
- 2017-06-07 11:29:07.752046 | linter.check(args)
- 2017-06-07 11:29:07.752085 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 747, in check
- 2017-06-07 11:29:07.752103 | self._do_check(files_or_modules)
- 2017-06-07 11:29:07.752143 |   File 
"/home/jenkins/workspace/gate-neutron-pep8-ubuntu-xenial/.tox/pep8/local/lib/python2.7/site-packages/pylint/lint.py",
 line 869, in _do_check
- 

[Yahoo-eng-team] [Bug 1627106] Re: TimeoutException while executing tests adding bridge using OVSDB native

2017-02-27 Thread Gary Kotton
Moving to triaged as the issue has not been addressed.

** Changed in: neutron
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627106

Title:
  TimeoutException while executing tests adding bridge using OVSDB
  native

Status in neutron:
  Triaged

Bug description:
  http://logs.openstack.org/91/366291/12/check/gate-neutron-dsvm-
  functional-ubuntu-trusty/a23c816/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/base.py", line 125, in func
  return f(self, *args, **kwargs)
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 62, in 
test_post_commit_vswitchd_completed_no_failures
  self._add_br_and_test()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 56, in 
_add_br_and_test
  self._add_br()
File "neutron/tests/functional/agent/ovsdb/test_impl_idl.py", line 52, in 
_add_br
  tr.add(ovsdb.add_br(self.brname))
File "neutron/agent/ovsdb/api.py", line 76, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 72, in commit
  'timeout': self.timeout})
  neutron.agent.ovsdb.api.TimeoutException: Commands 
[AddBridgeCommand(name=test-br6925d8e2, datapath_type=None, may_exist=True)] 
exceeded timeout 10 seconds

  
  I suspect this one may hit us because we finally made the timeout work
  with Icd745514adc14730b9179fa7a6dd5c115f5e87a5.
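
For flaky environments, retrying the whole transaction is one blunt
workaround; a sketch assuming the TimeoutException and transaction API shown
in the trace (illustrative, not a proposed fix):

    from neutron.agent.ovsdb import api as ovsdb_api

    def add_br_with_retry(ovsdb, br_name, attempts=3):
        # Each commit attempt gets a fresh timeout window.
        for attempt in range(attempts):
            try:
                with ovsdb.transaction(check_error=True) as tr:
                    tr.add(ovsdb.add_br(br_name))
                return
            except ovsdb_api.TimeoutException:
                if attempt == attempts - 1:
                    raise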

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661181] [NEW] IPv6 subnet update creates an RPC client with each call

2017-02-02 Thread Gary Kotton
Public bug reported:

Commit fc7cae844cb783887b8a8eb4d9c3286116d740e6 instantiates the RPC client
class once per call. We should do this only once, as is done in l3_db.py.
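
A sketch of the once-only pattern being asked for, mirroring the way
l3_db.py caches its notifier; the class names are illustrative:

    class Ipv6RpcNotifier(object):  # stand-in for the real RPC client class
        pass

    class Ipv6SubnetMixin(object):
        _notifier = None  # class-level cache: one client per process

        @property
        def notifier(self):
            if Ipv6SubnetMixin._notifier is None:
                Ipv6SubnetMixin._notifier = Ipv6RpcNotifier()
            return Ipv6SubnetMixin._notifier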

** Affects: neutron
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1661181

Title:
  IPv6 subnet update creates an RPC client with each call

Status in neutron:
  In Progress

Bug description:
  Commit fc7cae844cb783887b8a8eb4d9c3286116d740e6 instantiates the RPC client
  class once per call. We should do this only once, as is done in l3_db.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1661181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656629] [NEW] Pagination exception from commit 175bfe048283ae930b60b1cea87874d1fa1d10d6

2017-01-15 Thread Gary Kotton
gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py",
 line 139, in wrapper
return f(*args, **kwargs)
  File "/tmp/openstack/neutron/neutron/db/api.py", line 128, in wrapped
traceback.format_exc())
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/tmp/openstack/neutron/neutron/db/api.py", line 123, in wrapped
return f(*dup_args, **dup_kwargs)
  File "/tmp/openstack/neutron/neutron/db/l3_db.py", line 1413, in 
get_floatingips
page_reverse=page_reverse)
  File "/tmp/openstack/neutron/neutron/db/common_db_mixin.py", line 253, in 
_get_collection
page_reverse=page_reverse)
  File "/tmp/openstack/neutron/neutron/db/common_db_mixin.py", line 243, in 
_get_collection_query
sort_dirs=sort_dirs)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/utils.py",
 line 235, in paginate_query
crit_attrs.append((model_attr > marker_values[i]))
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/operators.py",
 line 318, in __gt__
return self.operate(gt, other)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py",
 line 175, in operate
return op(self.comparator, *other, **kwargs)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/operators.py",
 line 318, in __gt__
return self.operate(gt, other)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/properties.py",
 line 269, in operate
return op(self.__clause_element__(), *other, **kwargs)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/operators.py",
 line 318, in __gt__
return self.operate(gt, other)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 742, in operate
return op(self.comparator, *other, **kwargs)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/operators.py",
 line 318, in __gt__
return self.operate(gt, other)
  File "<string>", line 1, in <lambda>
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/type_api.py",
 line 60, in operate
return o[0](self.expr, op, *(other + o[1:]), **kwargs)
  File 
"/home/jenkins/workspace/gate-vmware-nsx-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/default_comparator.py",
 line 53, in _boolean_compare
"Only '=', '!=', 'is_()', 'isnot()' operators can "
ArgumentError: Only '=', '!=', 'is_()', 'isnot()' operators can be used with 
None/True/False
}}}

Traceback (most recent call last):
  File "/tmp/openstack/neutron/neutron/tests/base.py", line 114, in func
return f(self, *args, **kwargs)
  File "/tmp/openstack/neutron/neutron/tests/base.py", line 114, in func
    return f(self, *args, **kwargs)
  File "/tmp/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", line 
2713, in test_floatingip_list_with_pagination
('floating_ip_address', 'asc'), 2, 2)
  File 
"/tmp/openstack/neutron/neutron/tests/unit/db/test_db_base_plugin_v2.py", line 
709, in _test_list_with_pagination
self.assertThat(len(res[resources]),
KeyError: 'floatingips'
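
The marker row carries a None column value, and SQLAlchemy refuses '>' or '<'
against None. A sketch of a None-safe marker comparison (illustrative, not
the actual oslo.db change):

    def marker_criterion(model_attr, marker_value, sort_dir='asc'):
        # model_attr is a SQLAlchemy ORM column attribute of the marker model.
        if marker_value is None:
            # Comparing against None raises ArgumentError; use IS (NOT) NULL.
            return (model_attr.isnot(None) if sort_dir == 'asc'
                    else model_attr.is_(None))
        return (model_attr > marker_value if sort_dir == 'asc'
                else model_attr < marker_value)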

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656629

Title:
  Pagination exception from commit
  175bfe048283ae930b60b1cea87874d1fa1d10d6

Status in neutron:
  In Progress

Bug description:
  Commit 175bfe048283ae930b60b1cea87874d1fa1d10d6 c

[Yahoo-eng-team] [Bug 1648046] [NEW] AFTER_INIT callback not invoked when api_workers == 0

2016-12-07 Thread Gary Kotton
Public bug reported:

After change 
https://github.com/openstack/neutron/commit/483c5982c020ff21ceecf1d575c2d8fad2937d6e
 the AFTER_INIT callback is not invoked when a user configures api_workers = 0.
This breaks plugins/applications that expect the AFTER_INIT callback to be
called.
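
For reference, this is the kind of hook that goes silent: a callback
registered for the process AFTER_INIT event (registry API roughly as of
Newton; sketch only):

    from neutron.callbacks import events, registry, resources

    def _bootstrap(resource, event, trigger, **kwargs):
        # One-time plugin bootstrap that must run after server init,
        # whether api_workers is 0 or N.
        pass

    registry.subscribe(_bootstrap, resources.PROCESS, events.AFTER_INIT)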

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: newton-backport-potential

** Tags added: newton-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648046

Title:
  AFTER_INIT callback not invoked when api_workers == 0

Status in neutron:
  In Progress

Bug description:
  After change 
https://github.com/openstack/neutron/commit/483c5982c020ff21ceecf1d575c2d8fad2937d6e
 the AFTER_INIT callback is not invoked when a user configures api_workers = 0.
  This breaks plugins/applications that expect the AFTER_INIT callback to be
  called.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644739] Re: Rally scenario leaves a lot of error messages

2016-11-30 Thread Gary Kotton
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644739

Title:
  Rally scenario leaves a lot of error messages

Status in neutron:
  Invalid

Bug description:
  After a sanity run on NSXT-CH + devstack-stable/newton, many error logs
  like the one below were found in the log file during subnet deletion,
  although all test cases finished successfully:

  2016-11-23 07:47:24.174 22310 ERROR
  neutron.ipam.drivers.neutrondb_ipam.driver [req-27ab17fd-e979-4bac-
  961d-eab3eb577ba1 c_rally_40934127_a0ehSH2m -] IPAM subnet referenced
  to Neutron subnet c4e49fbe-d34d-403e-8b7b-85f6a6950248 does not exist

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644739] [NEW] Rally scenario leaves a lot of error messages

2016-11-25 Thread Gary Kotton
Public bug reported:

After a sanity run on NSXT-CH + devstack-stable/newton, many error logs like
the one below were found in the log file during subnet deletion, although
all test cases finished successfully:

2016-11-23 07:47:24.174 22310 ERROR
neutron.ipam.drivers.neutrondb_ipam.driver [req-27ab17fd-e979-4bac-961d-
eab3eb577ba1 c_rally_40934127_a0ehSH2m -] IPAM subnet referenced to
Neutron subnet c4e49fbe-d34d-403e-8b7b-85f6a6950248 does not exist

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644739

Title:
  Rally scenario leaves a lot of error messages

Status in neutron:
  In Progress

Bug description:
  After a sanity run on NSXT-CH + devstack-stable/newton, many error logs
  like the one below were found in the log file during subnet deletion,
  although all test cases finished successfully:

  2016-11-23 07:47:24.174 22310 ERROR
  neutron.ipam.drivers.neutrondb_ipam.driver [req-27ab17fd-e979-4bac-
  961d-eab3eb577ba1 c_rally_40934127_a0ehSH2m -] IPAM subnet referenced
  to Neutron subnet c4e49fbe-d34d-403e-8b7b-85f6a6950248 does not exist

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630918] [NEW] stable/newton: resource tracker exception

2016-10-06 Thread Gary Kotton
Public bug reported:

2016-10-06 20:20:30.611 8718 WARNING nova.compute.monitors 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Excluding 
nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled 
monitors (CONF.compute_monitors).
2016-10-06 20:20:30.612 8718 INFO nova.compute.resource_tracker 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Auditing locally available 
compute resources for node kvm-comp1
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Error updating resources for 
node kvm-comp1.
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager Traceback (most recent 
call last):
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6410, in 
update_available_resource_for_node
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
rt.update_available_resource(context)
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 513, in 
update_available_resource
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5301, in 
get_available_resource
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager disk_over_committed 
= self._get_disk_over_committed_size_total()
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 6846, in 
_get_disk_over_committed_size_total
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager KeyError: 
'c7704e45-e60a-4169-a143-4668e47f0b4b'
2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager
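
A sketch of the defensive lookup the audit loop needs, assuming the locals
shown in the trace and a stand-in size function (illustrative only):

    def disk_over_committed_total(guests, local_instances, bdms, size_fn):
        # Skip libvirt guests with no matching instance/BDM record locally,
        # e.g. instances mid-migration or deleted from the DB view.
        total = 0
        for guest in guests:
            instance = local_instances.get(guest.uuid)
            if instance is None:
                continue  # avoids the KeyError above
            total += size_fn(instance, bdms.get(guest.uuid))
        return total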

** Affects: nova
 Importance: Critical
 Status: New

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630918

Title:
  stable/newton: resource tracker exception

Status in OpenStack Compute (nova):
  New

Bug description:
  2016-10-06 20:20:30.611 8718 WARNING nova.compute.monitors 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Excluding 
nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled 
monitors (CONF.compute_monitors).
  2016-10-06 20:20:30.612 8718 INFO nova.compute.resource_tracker 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Auditing locally available 
compute resources for node kvm-comp1
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
[req-6606069f-7dd1-41ad-997e-3b4b8ca276e4 - -] Error updating resources for 
node kvm-comp1.
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6410, in 
update_available_resource_for_node
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 513, in 
update_available_resource
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5301, in 
get_available_resource
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
disk_over_committed = self._get_disk_over_committed_size_total()
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 6846, in 
_get_disk_over_committed_size_total
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager KeyError: 
'c7704e45-e60a-4169-a143-4668e47f0b4b'
  2016-10-06 20:20:31.987 8718 ERROR nova.compute.manager

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1630918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1630912] [NEW] stable/newton: unable to spin up a kvm instance

2016-10-06 Thread Gary Kotton
Public bug reported:

Happens randomly:

2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager 
[req-50d2dc6d-072e-423e-9de0-096ba307411f admin demo] [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] Instance failed to spawn
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] Traceback (most recent call last):
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2078, in _build_resources
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] yield resources
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1920, in _build_and_run_instance
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] block_device_info=block_device_info)
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2571, in spawn
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] admin_pass=admin_password)
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2960, in _create_image
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] image_id=disk_images['kernel_id'])
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 218, in cache
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] *args, **kwargs)
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 504, in create_image
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] prepare_template(target=base, *args, 
**kwargs)
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
264, in inner
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] do_log=False, semaphores=semaphores, 
delay=delay):
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] return self.gen.next()
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
216, in lock
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] ext_lock.acquire(delay=delay)
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/fasteners/process_lock.py", line 151, 
in acquire
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] self._do_open()
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]   File 
"/usr/local/lib/python2.7/dist-packages/fasteners/process_lock.py", line 123, 
in _do_open
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] self.lockfile = open(self.path, 'a')
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] IOError: [Errno 13] Permission denied: 
'/opt/stack/data/nova/instances/locks/nova-273d5645757056cdd056de4cfe9f121b9eee6ae3'
2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4]

** Affects: nova
 Importance: Critical
 Status: New

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1630912

Title:
  stable/newton: unable to spin up a kvm instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Happens randomly:

  2016-10-06 20:22:18.100 1931 ERROR nova.compute.manager 
[req-50d2dc6d-072e-423e-9de0-096ba307411f admin demo] [instance: 
04709c94-0ba3-4312-ad1c-5739e5600bf4] Instance 

[Yahoo-eng-team] [Bug 1625140] [NEW] Deprecation warning for unused code

2016-09-19 Thread Gary Kotton
Public bug reported:

2016-09-18 03:20:37.403628 | Captured stderr:
2016-09-18 03:20:37.403670 | 
2016-09-18 03:20:37.403795 | neutron/tests/unit/api/test_api_common.py:33: 
DeprecationWarning: Using class 'NeutronController' (either directly or via 
inheritance) is deprecated in version 'newton' and will be removed in version 
'ocata'
2016-09-18 03:20:37.404051 |   self.controller = FakeController(None)
2016-09-18 03:20:37.404091 |

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1625140

Title:
  Deprecation warning for unused code

Status in neutron:
  In Progress

Bug description:
  2016-09-18 03:20:37.403628 | Captured stderr:
  2016-09-18 03:20:37.403670 | 
  2016-09-18 03:20:37.403795 | 
neutron/tests/unit/api/test_api_common.py:33: DeprecationWarning: Using class 
'NeutronController' (either directly or via inheritance) is deprecated in 
version 'newton' and will be removed in version 'ocata'
  2016-09-18 03:20:37.404051 |   self.controller = FakeController(None)
  2016-09-18 03:20:37.404091 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1625140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619554] Re: [KVM] VM console throwing error atkbd serio0: Unknown key pressed

2016-09-11 Thread Gary Kotton
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619554

Title:
  [KVM] VM console throwing error atkbd serio0: Unknown key pressed

Status in devstack:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  With the latest master code, the console of cirros images on KVM is
  throwing an error.

  root@runner:~/tools# glance image-show e63f91c4-3326-4a88-ad3a-aea819d64df9
  +------------------+--------------------------------------+
  | Property         | Value                                |
  +------------------+--------------------------------------+
  | checksum         | eb9139e4942121f22bbc2afc0400b2a4     |
  | container_format | ami                                  |
  | created_at       | 2016-08-31T09:29:20Z                 |
  | disk_format      | ami                                  |
  | hypervisor_type  | qemu                                 |
  | id               | e63f91c4-3326-4a88-ad3a-aea819d64df9 |
  | kernel_id        | e5f5bafd-a998-463a-9a29-03c9812ec948 |
  | min_disk         | 0                                    |
  | min_ram          | 0                                    |
  | name             | cirros-0.3.4-x86_64-uec              |
  | owner            | 1b64a0ec2d8e481abc938bd44197c40c     |
  | protected        | False                                |
  | ramdisk_id       | 643e5ddb-bd94-4ddb-9656-84987a2ef917 |
  | size             | 25165824                             |
  | status           | active                               |
  | tags             | []                                   |
  | updated_at       | 2016-09-02T06:38:01Z                 |
  | virtual_size     | None                                 |
  | visibility       | public                               |
  +------------------+--------------------------------------+
  root@runner:~/tools# glance image-list | grep 
e63f91c4-3326-4a88-ad3a-aea819d64df9
  | e63f91c4-3326-4a88-ad3a-aea819d64df9 | cirros-0.3.4-x86_64-uec  
  |
  root@runner:~/tools# 

  stack@controller:~/nsbu_cqe_openstack/devstack$ git log -2
  commit e6b7e7ff3f5c1b1afdae1c3f9c35754d11c0a6aa
  Author: Gary Kotton <gkot...@vmware.com>
  Date:   Sun Aug 14 06:55:42 2016 -0700

  Enable neutron to work in a multi node setup
  
  On the controller node where devstack is being run should create
  the neutron network. The compute node should not.
  
  The the case that we want to run a multi-node neutron setup we need
  to configure the following (in the case that a plugin does not
  have any agents running on the compute node):
  ENABLED_SERVICES=n-cpu,neutron
  
  In addition to this the code did not enable decomposed plugins to
  configure their nova configurations if necessary.
  
  This patch ensure that the multi-node support works.
  
  Change-Id: I8e80edd453a1106ca666d6c531b2433be631bce4
  Closes-bug: #1613069

  commit 79722563a67d941a808b02aeccb3c6d4f1af0c41
  Merge: 434035e 4d60175
  Author: Jenkins <jenk...@review.openstack.org>
  Date:   Tue Aug 30 19:52:15 2016 +

  Merge "Add support for placement API to devstack"
  stack@controller:~/nsbu_cqe_openstack/devstack$ 


  Console logs:

  
  [0.00] Initializing cgroup subsys cpuset
  [0.00] Initializing cgroup subsys cpu
  [0.00] Linux version 3.2.0-80-virtual (buildd@batsu) (gcc version 
4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #116-Ubuntu SMP Mon Mar 23 17:28:52 UTC 
2015 (Ubuntu 3.2.0-80.116-virtual 3.2.68)
  [0.00] Command line: root=/dev/vda console=tty0 console=ttyS0 
no_timer_check
  [0.00] KERNEL supported cpus:
  [0.00]   Intel GenuineIntel
  [0.00]   AMD AuthenticAMD
  [0.00]   Centaur CentaurHauls
  [0.00] BIOS-provided physical RAM map:
  [0.00]  BIOS-e820:  - 0009fc00 (usable)
  [0.00]  BIOS-e820: 0009fc00 - 000a (reserved)
  [0.00]  BIOS-e820: 000f - 0010 (reserved)
  [0.00]  BIOS-e820: 0010 - 1fffe000 (usable)
  [0.00]  BIOS-e820: 1fffe000 - 2000 (reserved)
  [0.00]  BIOS-e820: fffc - 0001 (reserved)
  [0.00] NX (Execute Disable) protection: active
  [0.00] SMBIOS 2.4 present.
  [0.00] No AGP bridge found
  [0.00] last_pfn = 0x1fffe max_arch_pfn = 0x4
  [0.00] x86 PAT enabled: cpu 0, old 0x7040600070406, new 
0x7010600070106
  [0.00] found SMP MP-table at [880f0b00] f0b00
  [0.00] init_memory_mapping: -1fffe000
  [0.00] RAMDISK: 1fc5e000 - 1fff
  [0.00] ACPI: RSDP 000

[Yahoo-eng-team] [Bug 1622294] Re: L2GW: deprecation warnings for model_base

2016-09-11 Thread Gary Kotton
https://review.openstack.org/368392

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

** Also affects: networking-l2gw
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622294

Title:
  L2GW: deprecation warnings for model_base

Status in networking-l2gw:
  New

Bug description:
  2016-09-10 19:41:19.693103 | /home/jenkins/workspace/gate-vmware-nsx-
  python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-
  packages/networking_l2gw/db/l2gateway/l2gateway_models.py:22:
  DeprecationWarning: neutron.db.model_base.BASEV2: moved to
  neutron_lib.db.model_base.BASEV2

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1622294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622294] [NEW] L2GW: deprecation warnings for model_base

2016-09-11 Thread Gary Kotton
Public bug reported:

2016-09-10 19:41:19.693103 | /home/jenkins/workspace/gate-vmware-nsx-
python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-
packages/networking_l2gw/db/l2gateway/l2gateway_models.py:22:
DeprecationWarning: neutron.db.model_base.BASEV2: moved to
neutron_lib.db.model_base.BASEV2
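
The mechanical fix in the l2gateway models is the import swap; a sketch with
an illustrative model:

    # before: from neutron.db import model_base
    from neutron_lib.db import model_base  # new home of BASEV2
    import sqlalchemy as sa

    class ExampleModel(model_base.BASEV2):  # not the real L2GW model
        __tablename__ = 'example_model'
        id = sa.Column(sa.String(36), primary_key=True)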

** Affects: networking-l2gw
 Importance: Undecided
 Status: New

** Changed in: neutron
Milestone: None => newton-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622294

Title:
  L2GW: deprecation warnings for model_base

Status in networking-l2gw:
  New

Bug description:
  2016-09-10 19:41:19.693103 | /home/jenkins/workspace/gate-vmware-nsx-
  python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-
  packages/networking_l2gw/db/l2gateway/l2gateway_models.py:22:
  DeprecationWarning: neutron.db.model_base.BASEV2: moved to
  neutron_lib.db.model_base.BASEV2

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1622294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620279] Re: Allow metadata agent to make calls to more than one nova_metadata_ip

2016-09-11 Thread Gary Kotton
I do not think that this is a bug. The nova metadata IP that is configured can 
be the IP address of a VIP fronting several nova-api services. 
We do not need to reinvent the wheel here

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620279

Title:
  Allow metadata agent to make calls to more than one nova_metadata_ip

Status in neutron:
  Won't Fix

Bug description:
  Currently the metadata agent config has an option to set the IP address of 
the nova metadata service (nova_metadata_ip).
  There can be more than one nova-api service in a cluster. In that case, if 
the configured nova metadata IP returns e.g. error 500, that error is passed 
back to the instance, even though another nova-api service may be working 
fine and would have returned proper metadata.

  So the proposition is to change the nova_metadata_ip string option to a
  list of IP addresses and to change the metadata agent so that it tries one
  of the configured Nova services. If the response from that Nova service is
  not 200, the agent tries the next one. If all Nova services fail, the agent
  returns the lowest error code received from Nova (for example, if
  nova-api-1 returned 500 and nova-api-2 returned 404, the agent returns 404
  to the VM).
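
  A minimal sketch of that fallback, assuming a hypothetical
  nova_metadata_ips list option (the requests library and all names here are
  illustrative, not the actual metadata agent code):

    import requests

    def forward_metadata_request(path, nova_metadata_ips, port=8775):
        # Try each configured nova-api endpoint in turn; on total
        # failure return the response with the lowest error code,
        # as the proposal describes.
        worst = None
        for ip in nova_metadata_ips:
            resp = requests.get('http://%s:%s%s' % (ip, port, path))
            if resp.status_code == 200:
                return resp
            if worst is None or resp.status_code < worst.status_code:
                worst = resp
        return worst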

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621043] [NEW] VMware-NSX: unable to get a DHCP address when using the simple-dvs plugin

2016-09-07 Thread Gary Kotton
Public bug reported:

The vmware-nsx repo has a number of different plugins. The DVS plugin enables 
one to use Neutron with a number of limitations (no security groups and no 
layer 3 support). If one wants to achieve this with the DVS plugin then there 
are many different examples that they can use - for example 
https://github.com/openstack/networking-vsphere/blob/master/networking_vsphere/drivers/dvs_driver.py
 (that requires the ML2 plugin).
In the vmware-nsx dvs case there are no agents running. An instance is unable 
to get a DHCP address as we are unable to plug the DHCP agent into the OVS. 
This cannot be done as the DeviceManager is hard coded and we need to make 
use of the network information to get the VLAN tag.
Enabling the plug and unplug operations of the DHCP agent to be overridden 
will enable the plugin to expose this and give people a chance to get neutron 
up and running in an existing vSphere env.
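
A sketch of the requested hook, under the assumption that the DHCP agent's
DeviceManager grows an overridable plug/unplug pair (all class and attribute
names here are illustrative, not the actual agent code):

    class DeviceManager(object):
        def plug(self, network, port, interface_name):
            # today's hard coded behaviour: attach to OVS
            print('plugging %s into OVS' % interface_name)

        def unplug(self, interface_name):
            print('unplugging %s' % interface_name)

    class DvsDeviceManager(DeviceManager):
        def plug(self, network, port, interface_name):
            # a DVS-aware override can use the network information to
            # pick the VLAN tag, as the report suggests
            vlan = network.get('provider:segmentation_id')
            print('plugging %s into DVS, vlan %s' % (interface_name, vlan))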

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: rfe

** Changed in: neutron
Milestone: None => newton-rc1

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621043

Title:
  VMware-NSX: unable to get a DHCP address when using the simple-dvs
  plugin

Status in neutron:
  In Progress

Bug description:
  The vmware-nsx repo has a number of different plugins. The DVS plugin 
enables one to use Neutron with a number of limitations (no security groups 
and no layer 3 support). If one wants to achieve this with the DVS plugin 
then there are many different examples that they can use - for example 
https://github.com/openstack/networking-vsphere/blob/master/networking_vsphere/drivers/dvs_driver.py
 (that requires the ML2 plugin).
  In the vmware-nsx dvs case there are no agents running. An instance is 
unable to get a DHCP address as we are unable to plug the DHCP agent into the 
OVS. This cannot be done as the DeviceManager is hard coded and we need to 
make use of the network information to get the VLAN tag.
  Enabling the plug and unplug operations of the DHCP agent to be overridden 
will enable the plugin to expose this and give people a chance to get neutron 
up and running in an existing vSphere env.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620516] [NEW] Dynamic routing: deprecated warnings for address scope

2016-09-06 Thread Gary Kotton
Public bug reported:

{0} 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.BgpTests.test_get_ipv4_tenant_subnet_routes_by_bgp_speaker_dvr_router
 [4.296522s] ... ok
2016-09-05 15:09:18.942215 | 
2016-09-05 15:09:18.942233 | Captured stderr:
2016-09-05 15:09:18.942250 | 
2016-09-05 15:09:18.942357 | neutron_dynamic_routing/db/bgp_db.py:454: 
DeprecationWarning: neutron.db.address_scope_db.AddressScope: moved to 
neutron.db.models.address_scope.AddressScope
2016-09-05 15:09:18.942395 |   address_scope = 
aliased(address_scope_db.AddressScope)

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: l3-bgp

** Tags added: l3-bgp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620516

Title:
  Dynamic routing: deprecated warnings for address scope

Status in neutron:
  In Progress

Bug description:
  {0} 
neutron_dynamic_routing.tests.unit.db.test_bgp_db.BgpTests.test_get_ipv4_tenant_subnet_routes_by_bgp_speaker_dvr_router
 [4.296522s] ... ok
  2016-09-05 15:09:18.942215 | 
  2016-09-05 15:09:18.942233 | Captured stderr:
  2016-09-05 15:09:18.942250 | 
  2016-09-05 15:09:18.942357 | neutron_dynamic_routing/db/bgp_db.py:454: 
DeprecationWarning: neutron.db.address_scope_db.AddressScope: moved to 
neutron.db.models.address_scope.AddressScope
  2016-09-05 15:09:18.942395 |   address_scope = 
aliased(address_scope_db.AddressScope)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620334] [NEW] Neutron dynamic routing: remove deprecated DB warnings

2016-09-05 Thread Gary Kotton
Public bug reported:

neutron_dynamic_routing/db/bgp_db.py:46: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  class BgpSpeakerPeerBinding(model_base.BASEV2):
neutron_dynamic_routing/db/bgp_db.py:64: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  class BgpSpeakerNetworkBinding(model_base.BASEV2):
neutron_dynamic_routing/db/bgp_db.py:84: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  class BgpSpeaker(model_base.BASEV2,
neutron_dynamic_routing/db/bgp_db.py:85: DeprecationWarning: 
neutron.db.model_base.HasId: moved to neutron_lib.db.model_base.HasId
  model_base.HasId,
neutron_dynamic_routing/db/bgp_db.py:86: DeprecationWarning: 
neutron.db.model_base.HasTenant: moved to neutron_lib.db.model_base.HasProject
  model_base.HasTenant):
neutron_dynamic_routing/db/bgp_db.py:107: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  class BgpPeer(model_base.BASEV2,
neutron_dynamic_routing/db/bgp_db.py:108: DeprecationWarning: 
neutron.db.model_base.HasId: moved to neutron_lib.db.model_base.HasId
  model_base.HasId,
neutron_dynamic_routing/db/bgp_db.py:109: DeprecationWarning: 
neutron.db.model_base.HasTenant: moved to neutron_lib.db.model_base.HasProject
  model_base.HasTenant):
neutron_dynamic_routing/db/bgp_dragentscheduler_db.py:47: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  class BgpSpeakerDrAgentBinding(model_base.BASEV2):
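
All of the warnings above have the same mechanical fix; a sketch of the
import change (the table name and column below are illustrative, and
neutron-lib must be installed):

    import sqlalchemy as sa
    # was: from neutron.db import model_base
    from neutron_lib.db import model_base

    class BgpSpeakerPeerBinding(model_base.BASEV2):
        __tablename__ = 'bgp_speaker_peer_bindings'
        bgp_speaker_id = sa.Column(sa.String(36), primary_key=True)

    # note: model_base.HasTenant on the neutron side becomes
    # model_base.HasProject in neutron-lib, per the warning text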

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620334

Title:
  Neutron dynamic routing: remove deprecated DB warnings

Status in neutron:
  In Progress

Bug description:
  neutron_dynamic_routing/db/bgp_db.py:46: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
class BgpSpeakerPeerBinding(model_base.BASEV2):
  neutron_dynamic_routing/db/bgp_db.py:64: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
class BgpSpeakerNetworkBinding(model_base.BASEV2):
  neutron_dynamic_routing/db/bgp_db.py:84: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
class BgpSpeaker(model_base.BASEV2,
  neutron_dynamic_routing/db/bgp_db.py:85: DeprecationWarning: 
neutron.db.model_base.HasId: moved to neutron_lib.db.model_base.HasId
model_base.HasId,
  neutron_dynamic_routing/db/bgp_db.py:86: DeprecationWarning: 
neutron.db.model_base.HasTenant: moved to neutron_lib.db.model_base.HasProject
model_base.HasTenant):
  neutron_dynamic_routing/db/bgp_db.py:107: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
class BgpPeer(model_base.BASEV2,
  neutron_dynamic_routing/db/bgp_db.py:108: DeprecationWarning: 
neutron.db.model_base.HasId: moved to neutron_lib.db.model_base.HasId
model_base.HasId,
  neutron_dynamic_routing/db/bgp_db.py:109: DeprecationWarning: 
neutron.db.model_base.HasTenant: moved to neutron_lib.db.model_base.HasProject
model_base.HasTenant):
  neutron_dynamic_routing/db/bgp_dragentscheduler_db.py:47: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
class BgpSpeakerDrAgentBinding(model_base.BASEV2):

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620303] [NEW] LBaaS: deprecation warnings for BASEV2

2016-09-05 Thread Gary Kotton
Public bug reported:

neutron_lbaas/services/loadbalancer/data_models.py:88: DeprecationWarning: 
neutron.db.model_base.BASEV2: moved to neutron_lib.db.model_base.BASEV2
  if isinstance(attr, model_base.BASEV2):

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620303

Title:
  LBaaS: deprecation warnings for BASEV2

Status in neutron:
  In Progress

Bug description:
  neutron_lbaas/services/loadbalancer/data_models.py:88: 
DeprecationWarning: neutron.db.model_base.BASEV2: moved to 
neutron_lib.db.model_base.BASEV2
if isinstance(attr, model_base.BASEV2):

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620220] [NEW] Deprecation warnings with QoS

2016-09-05 Thread Gary Kotton
Public bug reported:

neutron/db/qos/models.py:98: DeprecationWarning: neutron.db.models_v2.HasId: 
moved to neutron_lib.db.model_base.HasId
2016-09-05 00:58:56.305984 |   class QosMinimumBandwidthRule(models_v2.HasId, 
model_base.BASEV2):

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620220

Title:
  Deprecation warnings with QoS

Status in neutron:
  In Progress

Bug description:
  neutron/db/qos/models.py:98: DeprecationWarning: neutron.db.models_v2.HasId: 
moved to neutron_lib.db.model_base.HasId
  2016-09-05 00:58:56.305984 |   class QosMinimumBandwidthRule(models_v2.HasId, 
model_base.BASEV2):

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619535] [NEW] DHCP: release exception log has no useful information

2016-09-01 Thread Gary Kotton
Public bug reported:

Commit 2aa23de58f55f7b1001508326c5ac2627ba3a429 added a warning in the
event that a release fails. The warning carries no information that can
help anyone deal with it.
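
A sketch of the kind of context the warning could carry (the function and
variable names are illustrative, not the actual agent patch):

    import logging

    LOG = logging.getLogger(__name__)

    def log_release_failure(ip, mac, network_id):
        # include enough detail to actually chase the failed release
        LOG.warning("DHCP release failed for ip=%(ip)s mac=%(mac)s on "
                    "network %(net)s",
                    {'ip': ip, 'mac': mac, 'net': network_id})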

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619535

Title:
  DHCP: release exception log has no useful information

Status in neutron:
  In Progress

Bug description:
  Commit 2aa23de58f55f7b1001508326c5ac2627ba3a429 added a warning in
  the event that a release fails. The warning carries no information
  that can help anyone deal with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618881] [NEW] Exception raised has double full stop

2016-08-31 Thread Gary Kotton
Public bug reported:

An example:

Invalid input for operation: Exceeded maximum amount of fixed ips per
port..

Reason - neutron lib exception has a full stop:

class InvalidInput(BadRequest):
    """A bad request due to invalid input.

    A specialization of the BadRequest error indicating bad input was
    specified.

    :param error_message: Details on the operation that failed due to bad
                          input.
    """
    message = _("Invalid input for operation: %(error_message)s.")
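
One possible shape of a fix, sketched (this is illustrative, not the merged
patch): strip any trailing full stop from the interpolated message before it
meets the template's own full stop.

    def _strip_full_stop(error_message):
        return error_message.rstrip('.')

    msg = "Invalid input for operation: %(error_message)s." % {
        'error_message': _strip_full_stop(
            'Exceeded maximum amount of fixed ips per port.')}
    assert msg.endswith('port.') and not msg.endswith('port..')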

** Affects: neutron
     Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618881

Title:
  Exception raised has double full stop

Status in neutron:
  In Progress

Bug description:
  An example:

  Invalid input for operation: Exceeded maximum amount of fixed ips per
  port..

  Reason - neutron lib exception has a full stop:

  class InvalidInput(BadRequest):
      """A bad request due to invalid input.

      A specialization of the BadRequest error indicating bad input was
      specified.

      :param error_message: Details on the operation that failed due to bad
                            input.
      """
      message = _("Invalid input for operation: %(error_message)s.")

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618881/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617707] [NEW] Auto allocate: when router creation fails network and subnets are not cleaned up

2016-08-28 Thread Gary Kotton
Public bug reported:

When the router allocation fails, the auto allocated networks and subnets are 
left dangling.
An example of this is when the external network does not have a subnet created 
on it (this is a validation that is done in the vmware_nsx plugin - an edge 
appliance needs to be able to have an external interface on the external 
network).
Another reason may be that the tenant has exceeded his/her router quota.
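
A sketch of the missing cleanup shape (the callables are hypothetical
stand-ins for the plugin operations, not neutron's actual auto-allocation
code):

    def auto_allocate(create_network, create_subnets, create_router,
                      delete_network):
        network = create_network()
        subnets = create_subnets(network)
        try:
            return create_router(network, subnets)
        except Exception:
            # without this, the network and subnets are left dangling
            delete_network(network)  # the subnets are removed with it
            raise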

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617707

Title:
  Auto allocate: when router creation fails network and subnets are not
  cleaned up

Status in neutron:
  In Progress

Bug description:
  When the router allocation fails, the auto allocated networks and subnets 
are left dangling.
  An example of this is when the external network does not have a subnet 
created on it (this is a validation that is done in the vmware_nsx plugin - 
an edge appliance needs to be able to have an external interface on the 
external network).
  Another reason may be that the tenant has exceeded his/her router quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615498] [NEW] VMware: unable to launch an instance on a 'portgroup' provider network

2016-08-22 Thread Gary Kotton
Public bug reported:

The vmware_nsx NSX|V and DVS plugins enable an admin to create a provider
network that points to an existing portgroup.

One is unable to spin up an instance on these networks.

The trace is as follows:

2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] Traceback (most recent call last):
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2514, in 
_build_resources
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] yield resources
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2384, in 
_build_and_run_instance
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] block_device_info=block_device_info)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 429, in 
spawn
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] admin_password, network_info, 
block_device_info)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 872, in 
spawn
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] metadata)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 328, in 
build_virtual_machine
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] pci_devices)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 170, in 
get_vif_info
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] is_neutron, vif, pci_info))
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 138, in 
get_vif_dict
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] ref = get_network_ref(session, 
cluster, vif, is_neutron)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 127, in 
get_network_ref
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] network_ref = 
_get_neutron_network(session, network_id, cluster, vif)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 117, in 
_get_neutron_network
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] raise 
exception.NetworkNotFoundForBridge(bridge=network_id)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] NetworkNotFoundForBridge: Network could 
not be found for bridge 7bc687f6-aeff-4420-9663-03e75b7c28e3


This is because the validation for the network selection is done
according to the network UUID (which is used as the name of the port
group). In the case of importing an existing port group this actually
needs to be the name of the portgroup.

The NSX|V and DVS plugins enforce that the name of the network needs to
match the name of the existing port group.
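
A sketch of the lookup the report asks for (an illustrative helper, not the
nova/virt/vmwareapi/vif.py patch): match an existing portgroup by name as
well as by the network UUID.

    def find_portgroup(portgroups, network_id, network_name):
        # portgroups: list of dicts with a 'name' key
        for pg in portgroups:
            if pg['name'] in (network_id, network_name):
                return pg
        raise LookupError('no portgroup found for %s' % network_id)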

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615498

Title:
  VMware: unable to launch an instance on a 'portgroup' provider network

Status in OpenStack Compute (nova):
  New

Bug description:
  The vmware_nsx NSX|V and DVS plugins enable an admin to create a
  provider network that points to an existing portgroup.

  One is unable to spin up an instance on these networks.

  The trace is as follows:

  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1615403] [NEW] IPAM: intermittent subnet creation failure

2016-08-21 Thread Gary Kotton
Public bug reported:

2016-08-21 14:02:57.227 13823 ERROR neutron.db.db_base_plugin_v2 
[req-7eb3586f-e5c8-440c-9252-108612807472 tempest-BulkNetworkOpsTest-2010636197 
-] An exception occurred while creating the subnet:{'subnet': {'host_routes': 
, 'prefixlen': 
, 'name': '', 
'enable_dhcp': True, u'network_id': u'e19504d7-6e97-4405-89f6-3a21b190c52f', 
'tenant_id': u'654a05200b6f41eebbe3151e1c39e878', 'dns_nameservers': 
, 'ipv6_ra_mode': 
, 'allocation_pools': 
[IPRange('10.20.0.18', '10.20.0.30')], 'gateway_ip': '10.20.0.17', 
u'ip_version': 4, 'ipv6_address_mode': , u'cidr': '10.20.0.16/28', 'network:tenant_id': 
u'654a05200b6f41eebbe3151e1c39e878', 'subnetpool_id': 
, 'descri
 ption': ''}}
2016-08-21 14:02:57.245 13823 DEBUG neutron.db.api 
[req-7eb3586f-e5c8-440c-9252-108612807472 tempest-BulkNetworkOpsTest-2010636197 
-] Retry wrapper got retriable exception: Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/db/api.py", line 77, in wrapped
return f(*args, **kwargs)
  File "/opt/stack/neutron/neutron/api/v2/base.py", line 496, in _create
objs = do_create(body, bulk=True)
  File "/opt/stack/neutron/neutron/api/v2/base.py", line 492, in do_create
request.context, reservation.reservation_id)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/api/v2/base.py", line 485, in do_create
return obj_creator(request.context, **kwargs)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 442, in 
create_subnet_bulk
return self._create_bulk('subnet', context, subnets)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 333, in 
_create_bulk
{'resource': resource, 'item': item})
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 326, in 
_create_bulk
objects.append(obj_creator(context, item))
  File "/opt/stack/vmware-nsx/vmware_nsx/plugins/nsx_v3/plugin.py", line 1004, 
in create_subnet
context, subnet)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 723, in 
create_subnet
return self._create_subnet(context, subnet, subnetpool_id)
  File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 614, in 
_create_subnet
subnetpool_id)
  File "/opt/stack/neutron/neutron/db/ipam_pluggable_backend.py", line 486, in 
allocate_subnet
subnet_request.subnet_id)
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
220, in __exit__
self.force_reraise()
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/neutron/neutron/db/ipam_pluggable_backend.py", line 472, in 
allocate_subnet
subnet_request)
  File "/opt/stack/neutron/neutron/db/ipam_backend_mixin.py", line 500, in 
_save_subnet
context.session.flush()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2019, in flush
self._flush(objects)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2137, in _flush
transaction.rollback(_capture_exception=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", 
line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 
2101, in _flush
flush_context.execute()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 373, in execute
rec.execute(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", 
line 532, in execute
uow
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", 
line 174, in save_obj
mapper, table, insert)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", 
line 767, in _emit_insert_statements
execute(statement, multiparams)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
914, in execute
return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", 
line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1010, in _execute_clauseelement
compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1146, in _execute_context
context)
  File 

[Yahoo-eng-team] [Bug 1612353] Re: [master] instance launch failing on ESXi and KVM

2016-08-14 Thread Gary Kotton
*** This bug is a duplicate of bug 1613069 ***
https://bugs.launchpad.net/bugs/1613069

I think that this is a devstack issue. Please see
https://review.openstack.org/#/c/355244/

** Changed in: nova
   Importance: Undecided => Critical

** Also affects: devstack
   Importance: Undecided
   Status: New

** No longer affects: nova

** This bug has been marked a duplicate of bug 1613069
   Unable to spin up multinode nova compute with neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1612353

Title:
  [master] instance launch failing on ESXi and KVM

Status in devstack:
  New

Bug description:
  setup:

  1 controller
  2 network nodes(q-dhcp)
  1 ESXi nova compute
  3 KVM ubuntu compute

  Instance cirros launch is failing on both ESXi and KVM.

  vmware@cntr1:~$ nova service-list 
  
++--+---+--+-+---++-+
  | Id | Binary   | Host  | Zone | Status  | State | Updated_at 
| Disabled Reason |
  
++--+---+--+-+---++-+
  | 5  | nova-conductor   | cntr1 | internal | enabled | up| 
2016-08-11T16:36:49.00 | -   |
  | 7  | nova-compute | esx-comp1 | nova | enabled | up| 
2016-08-11T16:36:41.00 | -   |
  | 8  | nova-compute | kvm-comp3 | nova | enabled | up| 
2016-08-11T16:36:41.00 | -   |
  | 9  | nova-compute | kvm-comp2 | nova | enabled | up| 
2016-08-11T16:36:41.00 | -   |
  | 10 | nova-compute | kvm-comp1 | nova | enabled | up| 
2016-08-11T16:36:41.00 | -   |
  | 11 | nova-scheduler   | cntr1 | internal | enabled | up| 
2016-08-11T16:36:46.00 | -   |
  | 12 | nova-consoleauth | cntr1 | internal | enabled | up| 
2016-08-11T16:36:46.00 | -   |
  
++--+---+--+-+---++-+
  vmware@cntr1:~$ 
   
  n-cpu.log:2016-08-11 21:56:06.902 31615 DEBUG oslo_concurrency.lockutils 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] Lock 
"d7316f88-1cce-471c-90ac-3e8d9f405cbd" acquired by 
"nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
  n-cpu.log:2016-08-11 21:56:06.930 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Starting instance... 
_do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1749
  n-cpu.log:2016-08-11 21:56:07.035 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Attempting claim: memory 512 MB, disk 1 
GB, vcpus 1 CPU
  n-cpu.log:2016-08-11 21:56:07.035 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total memory: 128918 MB, used: 512.00 MB
  n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] memory limit: 193377.00 MB, free: 
192865.00 MB
  n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total disk: 226 GB, used: 0.00 GB
  n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] disk limit: 226.00 GB, free: 226.00 GB
  n-cpu.log:2016-08-11 21:56:07.036 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Total vcpu: 16 VCPU, used: 0.00 VCPU
  n-cpu.log:2016-08-11 21:56:07.037 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] vcpu limit not specified, defaulting to 
unlimited
  n-cpu.log:2016-08-11 21:56:07.037 31615 INFO nova.compute.claims 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Claim successful
  n-cpu.log:2016-08-11 21:56:07.278 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 
d7316f88-1cce-471c-90ac-3e8d9f405cbd] Start building networks asynchronously 
for instance. _build_resources /opt/stack/nova/nova/compute/manager.py:2016
  n-cpu.log:2016-08-11 21:56:07.400 31615 DEBUG nova.compute.manager 
[req-ba3c3657-1bde-4afa-821b-4abc0b5d514b admin demo] [instance: 

[Yahoo-eng-team] [Bug 1608224] [NEW] neutron lib warnings in ipv6_utils

2016-07-31 Thread Gary Kotton
Public bug reported:

Captured stderr:


/home/gkotton/networking-sfc/.tox/py27/src/neutron/neutron/common/ipv6_utils.py:70:
 DeprecationWarning: IPV6_SLAAC in version 'mitaka' and will be removed in 
version 'newton': moved to neutron_lib.constants
  modes = [constants.IPV6_SLAAC, constants.DHCPV6_STATELESS]

/home/gkotton/networking-sfc/.tox/py27/src/neutron/neutron/common/ipv6_utils.py:70:
 DeprecationWarning: DHCPV6_STATELESS in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
  modes = [constants.IPV6_SLAAC, constants.DHCPV6_STATELESS]
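
The fix is the usual one-line import move; sketched (assumes neutron-lib is
installed, and the old import path shown is the likely original):

    # was: from neutron.common import constants
    from neutron_lib import constants

    modes = [constants.IPV6_SLAAC, constants.DHCPV6_STATELESS]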

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608224

Title:
  neutron lib warnings in ipv6_utils

Status in neutron:
  In Progress

Bug description:
  Captured stderr:
  
  
/home/gkotton/networking-sfc/.tox/py27/src/neutron/neutron/common/ipv6_utils.py:70:
 DeprecationWarning: IPV6_SLAAC in version 'mitaka' and will be removed in 
version 'newton': moved to neutron_lib.constants
modes = [constants.IPV6_SLAAC, constants.DHCPV6_STATELESS]
  
/home/gkotton/networking-sfc/.tox/py27/src/neutron/neutron/common/ipv6_utils.py:70:
 DeprecationWarning: DHCPV6_STATELESS in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
modes = [constants.IPV6_SLAAC, constants.DHCPV6_STATELESS]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604438] [NEW] Neutron lib - migration tool fails without bc

2016-07-19 Thread Gary Kotton
Public bug reported:

gkotton@ubuntu:~/neutron-lib$ ./tools/migration_report.sh ../vmware-nsx/
You have 2517 total imports
You imported Neutron 477 times
You imported Neutron-Lib 108 times
./tools/migration_report.sh: line 30: bc: command not found
./tools/migration_report.sh: line 31: bc: command not found
./tools/migration_report.sh: line 33: [: : integer expression expected

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604438

Title:
  Neutron lib - migration tool fails without bc

Status in neutron:
  In Progress

Bug description:
  gkotton@ubuntu:~/neutron-lib$ ./tools/migration_report.sh ../vmware-nsx/
  You have 2517 total imports
  You imported Neutron 477 times
  You imported Neutron-Lib 108 times
  ./tools/migration_report.sh: line 30: bc: command not found
  ./tools/migration_report.sh: line 31: bc: command not found
  ./tools/migration_report.sh: line 33: [: : integer expression expected

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600697] Re: Using VMware NSXv driver, when update the port with specified address pair, got exception

2016-07-17 Thread Gary Kotton
** Project changed: neutron => vmware-nsx

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1600697

Title:
  Using VMware NSXv driver, when update the port with specified address
  pair, got exception

Status in vmware-nsx:
  New

Bug description:
  yangyubj@yangyubj-virtual-machine:~$ neutron port-update 
26d1a5e7-a745-4f6a-b965-bb33709a8a23 --allowed-address-pair 
ip_address=10.0.0.0/8
  Request 
https://10.155.20.92/api/4.0/services/spoofguard/spoofguardpolicy-9?action=approve
 is Bad, response 
  The value 10.0.0.0/8 for vm 
(01985390-cf9a-4cf7-bad3-2f164edc45d9) - Network adapter 1 is invalid. Valid 
value should be 
{2}220core-services
  Neutron server returns request_ids: 
['req-98cc6835-a2a2-43b3-bde4-b6aa47e313e1']

  
  The root cause is the NSXv can not support cidr 10.0.0.0/8

To manage notifications about this bug go to:
https://bugs.launchpad.net/vmware-nsx/+bug/1600697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599086] [NEW] Security groups: exception under load

2016-07-05 Thread Gary Kotton
resource 
reraise(type(exception), exception, tb=exc_tb, cause=cause)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource context)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
cursor.execute(statement, parameters)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 158, in 
execute
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource result = 
self._query(query)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 308, in _query
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource conn.query(q)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 820, in 
query
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self._affected_rows = 
self._read_query_result(unbuffered=unbuffered)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1002, in 
_read_query_result
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource result.read()
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1285, in 
read
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource first_packet = 
self.connection._read_packet()
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 966, in 
_read_packet
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource packet.check_error()
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 394, in 
check_error
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
err.raise_mysql_exception(self._data)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 
_check_mysql_exception(errinfo)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 112, in 
_check_mysql_exception
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource raise 
errorclass(errno, errorvalue)
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource DBError: 
(pymysql.err.IntegrityError) (1062, u"Duplicate entry '' for key 'PRIMARY'") 
[SQL: u'INSERT INTO default_security_group (tenant_id, security_group_id) 
VALUES (%(tenant_id)s, %(security_group_id)s)'] [parameters: {'tenant_id': '', 
'security_group_id': '7e39244d-922c-4e3a-b93e-db6125bcb7e6'}]
2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource
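
The usual pattern for this race, sketched (oslo.db's DBDuplicateEntry is the
translated form of the IntegrityError above; the helper names are
illustrative, not the merged patch):

    from oslo_db import exception as db_exc

    def ensure_default_security_group(context, tenant_id, create_default):
        try:
            return create_default(context, tenant_id)
        except db_exc.DBDuplicateEntry:
            # a concurrent request created the default group first;
            # under load this is expected, not an error
            return None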

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1599086

Title:
  Security groups: exception under load

Status in neutron:
  In Progress

Bug description:
  
  For one of the iteration, adding router interface failed showing the below DB 
error.

  2016-07-04 17:12:59.057 ERROR neutron.api.v2.resource 
[req-33bb4fd7-25a5-4460-82d0-ab5e5b8d574c 
ctx_rally_8204b9df57e44bcf9804a278c35bf2a4_user_0 
8204b9df57e44bcf9804a278c35bf2a4] add_router_interface failed
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource self.force_reraise()
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-04 17:12:59.057 TRACE neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1591878] [NEW] Unit test not using neutron-lib constants

2016-06-13 Thread Gary Kotton
Public bug reported:

{0}
neutron.tests.unit.tests.common.test_net_helpers.PortAllocationTestCase.test_get_free_namespace_port
[0.816689s] ... ok

Captured stderr:

neutron/tests/unit/tests/common/test_net_helpers.py:64: DeprecationWarning: 
PROTO_NAME_TCP in version 'mitaka' and will be removed in version 'newton': 
moved to neutron_lib.constants
  n_const.PROTO_NAME_TCP)

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591878

Title:
  Unit test not using neutron-lib constants

Status in neutron:
  In Progress

Bug description:
  {0}
  
neutron.tests.unit.tests.common.test_net_helpers.PortAllocationTestCase.test_get_free_namespace_port
  [0.816689s] ... ok

  Captured stderr:
  
  neutron/tests/unit/tests/common/test_net_helpers.py:64: 
DeprecationWarning: PROTO_NAME_TCP in version 'mitaka' and will be removed in 
version 'newton': moved to neutron_lib.constants
n_const.PROTO_NAME_TCP)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591111] [NEW] schedulers deprecated warning in unit tests

2016-06-10 Thread Gary Kotton
Public bug reported:

Captured stderr:


/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
  self.add_agent_status_check(self.remove_networks_from_down_agents)

{0} 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_router_add_interface_port_bad_tenant_returns_404
 [3.499239s] ... ok

Captured stderr:


/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
  self.add_agent_status_check(self.remove_networks_from_down_agents)

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591111

Title:
  schedulers deprecated warning in unit tests

Status in neutron:
  In Progress

Bug description:
  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)
  
  {0} 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_router_add_interface_port_bad_tenant_returns_404
 [3.499239s] ... ok

  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587021] [NEW] Log file full of info messages that segments are not supported

2016-05-30 Thread Gary Kotton
Public bug reported:

The following info messages are in the log file:

2016-05-30 11:27:46.320 INFO neutron.services.segments.db [req-
be57cc26-a3ae-4370-aed9-cdb4ed6ccabb None None] Core plug-in does not
implement 'check_segment_for_agent'. It is not possible to build a hosts
segments mapping

In the event that a core plugin does not support this, we should not be
posting this for each update sent by the agent
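
A log-once guard of the kind that would quiet this down, sketched (an
illustrative module, not the neutron.services.segments.db patch):

    import logging

    LOG = logging.getLogger(__name__)
    _REPORTED = False

    def report_no_segment_support():
        global _REPORTED
        if not _REPORTED:
            LOG.info("Core plug-in does not implement "
                     "'check_segment_for_agent'; a hosts/segments "
                     "mapping will not be built")
            _REPORTED = True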

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1587021

Title:
  Log file full of info messages that segments are not supported

Status in neutron:
  In Progress

Bug description:
  The following info messages are in the log file:

  2016-05-30 11:27:46.320 INFO neutron.services.segments.db [req-
  be57cc26-a3ae-4370-aed9-cdb4ed6ccabb None None] Core plug-in does not
  implement 'check_segment_for_agent'. It is not possible to build a
  hosts segments mapping

  In the event that a core plugin does not support this, we should not
  be posting this for each update sent by the agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1587021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586780] [NEW] ML2: warnings for PORT_STATUS_ACTIVE not from neutron-lib

2016-05-29 Thread Gary Kotton
Public bug reported:

 neutron/plugins/ml2/drivers/mech_agent.py:71: DeprecationWarning: 
PORT_STATUS_ACTIVE in version 'mitaka' and will be removed in version 'newton': 
moved to neutron_lib.constants
2016-05-28 23:32:20.396 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
2016-05-28 23:32:20.397 |

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Summary changed:

- ML2: warnings for PORT_STATUS_ACTIV not from neutron-lib
+ ML2: warnings for PORT_STATUS_ACTIVE not from neutron-lib

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586780

Title:
  ML2: warnings for PORT_STATUS_ACTIVE not from neutron-lib

Status in neutron:
  In Progress

Bug description:
   neutron/plugins/ml2/drivers/mech_agent.py:71: DeprecationWarning: 
PORT_STATUS_ACTIVE in version 'mitaka' and will be removed in version 'newton': 
moved to neutron_lib.constants
  2016-05-28 23:32:20.396 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
  2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
  2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
  2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
  2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
  2016-05-28 23:32:20.397 | neutron/plugins/ml2/drivers/mech_agent.py:71: 
DeprecationWarning: PORT_STATUS_ACTIVE in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
  2016-05-28 23:32:20.397 |   if not context.host or port['status'] == 
constants.PORT_STATUS_ACTIVE:
  2016-05-28 23:32:20.397 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586776] [NEW] RPC Unit tests: have future warnings about UUIDs

2016-05-29 Thread Gary Kotton
Public bug reported:

 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:329:
 FutureWarning: uuid is an invalid UUID. Using UUIDFields with invalid UUIDs is 
no longer supported, and will be removed in a future release. Please update 
your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See 
http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
2016-05-28 23:27:53.481 |   "for further details" % value, FutureWarning)
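
The straightforward fix in the tests, sketched: feed UUIDField values that
are real UUIDs (oslo.utils is already a neutron dependency; the variable
name is illustrative):

    from oslo_utils import uuidutils

    port_id = uuidutils.generate_uuid()  # use this instead of e.g. 'uuid'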

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586776

Title:
  RPC Unit tests: have future warnings about UUIDs

Status in neutron:
  In Progress

Bug description:
   
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_versionedobjects/fields.py:329:
 FutureWarning: uuid is an invalid UUID. Using UUIDFields with invalid UUIDs is 
no longer supported, and will be removed in a future release. Please update 
your code to input valid UUIDs or accept ValueErrors for invalid UUIDs. See 
http://docs.openstack.org/developer/oslo.versionedobjects/api/fields.html#oslo_versionedobjects.fields.UUIDField
 for further details
  2016-05-28 23:27:53.481 |   "for further details" % value, FutureWarning)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1586776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585123] [NEW] Remove warnings for

2016-05-24 Thread Gary Kotton
Public bug reported:

2016-05-24 08:48:14.108 | 
2016-05-24 08:48:14.108 | Captured stderr:
2016-05-24 08:48:14.108 | 
2016-05-24 08:48:14.108 | neutron/db/securitygroups_db.py:474: 
DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
2016-05-24 08:48:14.108 |   n_const.PROTO_NAME_IPV6_ICMP_LEGACY,
2016-05-24 08:48:14.108 | neutron/db/securitygroups_db.py:474: 
DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
2016-05-24 08:48:14.108 |   n_const.PROTO_NAME_IPV6_ICMP_LEGACY,
2016-05-24 08:48:14.108 |

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585123

Title:
  Remove warnings for

Status in neutron:
  In Progress

Bug description:
  2016-05-24 08:48:14.108 | 
  2016-05-24 08:48:14.108 | Captured stderr:
  2016-05-24 08:48:14.108 | 
  2016-05-24 08:48:14.108 | neutron/db/securitygroups_db.py:474: 
DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
  2016-05-24 08:48:14.108 |   n_const.PROTO_NAME_IPV6_ICMP_LEGACY,
  2016-05-24 08:48:14.108 | neutron/db/securitygroups_db.py:474: 
DeprecationWarning: PROTO_NAME_IPV6_ICMP_LEGACY in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
  2016-05-24 08:48:14.108 |   n_const.PROTO_NAME_IPV6_ICMP_LEGACY,
  2016-05-24 08:48:14.108 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585075] [NEW] OSProfiler break decomposed unit tests

2016-05-24 Thread Gary Kotton
Public bug reported:

running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./vmware_nsx/tests/unit}  --load-list 
/tmp/tmp.FZwlpEExbg/tmpMkIDeW
2016-05-24 05:46:28.751 | {1} 
vmware_nsx.tests.unit.dvs.test_utils.DvsUtilsTestCase.test_dvs_set [0.776316s] 
... ok
2016-05-24 05:46:28.805 | {5} 
vmware_nsx.tests.unit.dvs.test_plugin.NeutronSimpleDvsTest.test_create_and_delete_dvs_network_tag
 [1.197120s] ... FAILED
2016-05-24 05:46:28.805 | 
2016-05-24 05:46:28.806 | Captured traceback-1:
2016-05-24 05:46:28.806 | ~
2016-05-24 05:46:28.806 | Traceback (most recent call last):
2016-05-24 05:46:28.806 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 208, in setUp
2016-05-24 05:46:28.806 | raise SetupError(details)
2016-05-24 05:46:28.806 | fixtures.fixture.SetupError: {}
2016-05-24 05:46:28.806 | 
2016-05-24 05:46:28.806 | 
2016-05-24 05:46:28.806 | Captured traceback:
2016-05-24 05:46:28.806 | ~~~
2016-05-24 05:46:28.806 | Traceback (most recent call last):
2016-05-24 05:46:28.807 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 197, in setUp
2016-05-24 05:46:28.807 | self._setUp()
2016-05-24 05:46:28.807 |   File 
"/tmp/openstack/neutron/neutron/tests/unit/testlib_api.py", line 67, in _setUp
2016-05-24 05:46:28.807 | engine = db_api.get_engine()
2016-05-24 05:46:28.807 |   File 
"/tmp/openstack/neutron/neutron/db/api.py", line 73, in get_engine
2016-05-24 05:46:28.807 | facade = _create_facade_lazily()
2016-05-24 05:46:28.807 |   File 
"/tmp/openstack/neutron/neutron/db/api.py", line 63, in _create_facade_lazily
2016-05-24 05:46:28.807 | if cfg.CONF.profiler.enabled and 
cfg.CONF.profiler.trace_sqlalchemy:
2016-05-24 05:46:28.807 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_config/cfg.py",
 line 2183, in __getattr__
2016-05-24 05:46:28.807 | raise NoSuchOptError(name)
2016-05-24 05:46:28.807 | oslo_config.cfg.NoSuchOptError: no such option in 
group DEFAULT: profiler
2016-05-24 05:46:28.807 | 
2016-05-24 05:46:28.845 | {2} 
vmware_nsx.tests.unit.dvs.test_plugin.NeutronSimpleDvsTest.test_create_router_only_dvs_backend
 [0.965711s] ... FAILED
2016-05-24 05:46:28.845 | 
2016-05-24 05:46:28.845 | Captured traceback-1:
2016-05-24 05:46:28.845 | ~
2016-05-24 05:46:28.845 | Traceback (most recent call last):
2016-05-24 05:46:28.845 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 208, in setUp
2016-05-24 05:46:28.846 | raise SetupError(details)
2016-05-24 05:46:28.846 | fixtures.fixture.SetupError: {}

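For what it's worth, a hedged sketch of the kind of test-setup shim that
avoids the NoSuchOptError: register the [profiler] option group before
neutron.db.api reads it. osprofiler's opts helper is a real API; wiring it
into the vmware_nsx test base class this way is an assumption:

from oslo_config import cfg
from osprofiler import opts as profiler_opts

def register_profiler_opts(conf=cfg.CONF):
    # Registers profiler.enabled / profiler.trace_sqlalchemy so that
    # cfg.CONF.profiler.enabled no longer raises NoSuchOptError.
    profiler_opts.set_defaults(conf)
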
** Affects: neutron
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => High

** Summary changed:

- OSProfiler break deocmposed unit tests
+ OSProfiler break decomposed unit tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585075

Title:
  OSProfiler break decomposed unit tests

Status in neutron:
  In Progress

Bug description:
  running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./vmware_nsx/tests/unit}  --load-list 
/tmp/tmp.FZwlpEExbg/tmpMkIDeW
  2016-05-24 05:46:28.751 | {1} 
vmware_nsx.tests.unit.dvs.test_utils.DvsUtilsTestCase.test_dvs_set [0.776316s] 
... ok
  2016-05-24 05:46:28.805 | {5} 
vmware_nsx.tests.unit.dvs.test_plugin.NeutronSimpleDvsTest.test_create_and_delete_dvs_network_tag
 [1.197120s] ... FAILED
  2016-05-24 05:46:28.805 | 
  2016-05-24 05:46:28.806 | Captured traceback-1:
  2016-05-24 05:46:28.806 | ~
  2016-05-24 05:46:28.806 | Traceback (most recent call last):
  2016-05-24 05:46:28.806 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 208, in setUp
  2016-05-24 05:46:28.806 | raise SetupError(details)
  2016-05-24 05:46:28.806 | fixtures.fixture.SetupError: {}
  2016-05-24 05:46:28.806 | 
  2016-05-24 05:46:28.806 | 
  2016-05-24 05:46:28.806 | Captured traceback:
  2016-05-24 05:46:28.806 | ~~~
  2016-05-24 05:46:28.806 | Traceback (most recent call last):
  2016-05-24 05:46:28.807 |   File 
"/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 197, in s

[Yahoo-eng-team] [Bug 1582541] [NEW] Segments: attributes deprecation warning

2016-05-17 Thread Gary Kotton
Public bug reported:

{2}
neutron.tests.unit.extensions.test_segment.TestSegment.test_create_segment_no_phys_net
[0.313844s] ... ok

Captured stderr:

neutron/api/v2/attributes.py:428: DeprecationWarning: Function 
'neutron.api.v2.attributes.convert_to_int()' has moved to 
'neutron_lib.api.converters.convert_to_int()' in version 'mitaka' and will be 
removed in version 'ocata': moved to neutron_lib
  res_dict[attr] = attr_vals['convert_to'](res_dict[attr])

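A hedged sketch of the converter's new home, per the warning text (the call
signature is assumed unchanged):

# Before: neutron.api.v2.attributes.convert_to_int (deprecated)
from neutron_lib.api import converters

assert converters.convert_to_int('42') == 42
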
** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582541

Title:
  Segments: attributes deprecation warning

Status in neutron:
  In Progress

Bug description:
  {2}
  
neutron.tests.unit.extensions.test_segment.TestSegment.test_create_segment_no_phys_net
  [0.313844s] ... ok

  Captured stderr:
  
  neutron/api/v2/attributes.py:428: DeprecationWarning: Function 
'neutron.api.v2.attributes.convert_to_int()' has moved to 
'neutron_lib.api.converters.convert_to_int()' in version 'mitaka' and will be 
removed in version 'ocata': moved to neutron_lib
res_dict[attr] = attr_vals['convert_to'](res_dict[attr])

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582522] [NEW] Pecan: deprecation warnings with functional tests

2016-05-16 Thread Gary Kotton
Public bug reported:

Functional tests show:

2016-05-17 02:57:54.378 | 2016-05-17 02:57:54.326 | 
2016-05-17 02:58:05.690 | 2016-05-17 02:58:05.638 | 
neutron/pecan_wsgi/hooks/policy_enforcement.py:166: DeprecationWarning: 
BaseException.message has been deprecated as of Python 2.6
2016-05-17 02:58:05.692 | 2016-05-17 02:58:05.640 |   raise 
webob.exc.HTTPForbidden(e.message)
2016-05-17 02:58:05.693 | 2016-05-17 02:58:05.641 |

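The usual remedy is to stop touching BaseException.message and convert the
exception explicitly; a minimal sketch (six is used since this code ran on
Python 2; the surrounding hook is paraphrased, not quoted):

import six
import webob.exc

def deny(exc):
    # six.text_type(exc) replaces the deprecated exc.message attribute.
    raise webob.exc.HTTPForbidden(six.text_type(exc))
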
** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582522

Title:
  Pecan: deprecation warnings with functional tests

Status in neutron:
  In Progress

Bug description:
  Functional tests show:

  2016-05-17 02:57:54.378 | 2016-05-17 02:57:54.326 | 
  2016-05-17 02:58:05.690 | 2016-05-17 02:58:05.638 | 
neutron/pecan_wsgi/hooks/policy_enforcement.py:166: DeprecationWarning: 
BaseException.message has been deprecated as of Python 2.6
  2016-05-17 02:58:05.692 | 2016-05-17 02:58:05.640 |   raise 
webob.exc.HTTPForbidden(e.message)
  2016-05-17 02:58:05.693 | 2016-05-17 02:58:05.641 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581320] [NEW] deprecation warnings for ICMPV6_ALLOWED_TYPES

2016-05-12 Thread Gary Kotton
Public bug reported:

neutron/agent/linux/openvswitch_firewall/firewall.py:371: 
DeprecationWarning: ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
2016-05-12 16:12:52.348 |   for icmp_type in constants.ICMPV6_ALLOWED_TYPES:
2016-05-12 16:12:52.348 | 
neutron/agent/linux/openvswitch_firewall/firewall.py:550: DeprecationWarning: 
ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be removed in version 
'newton': moved to neutron_lib.constants
2016-05-12 16:12:52.348 |   for icmp_type in constants.ICMPV6_ALLOWED_TYPES:

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581320

Title:
  deprecation warnings for ICMPV6_ALLOWED_TYPES

Status in neutron:
  In Progress

Bug description:
  neutron/agent/linux/openvswitch_firewall/firewall.py:371: 
DeprecationWarning: ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be 
removed in version 'newton': moved to neutron_lib.constants
  2016-05-12 16:12:52.348 |   for icmp_type in 
constants.ICMPV6_ALLOWED_TYPES:
  2016-05-12 16:12:52.348 | 
neutron/agent/linux/openvswitch_firewall/firewall.py:550: DeprecationWarning: 
ICMPV6_ALLOWED_TYPES in version 'mitaka' and will be removed in version 
'newton': moved to neutron_lib.constants
  2016-05-12 16:12:52.348 |   for icmp_type in 
constants.ICMPV6_ALLOWED_TYPES:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579967] [NEW] segments extension has deprecated warnings

2016-05-09 Thread Gary Kotton
Public bug reported:

 neutron/extensions/segment.py:51: DeprecationWarning: ATTR_NOT_SPECIFIED in 
version 'mitaka' and will be removed in version 'newton': moved to 
neutron_lib.constants
2016-05-09 20:58:33.732 |   'default': attributes.ATTR_NOT_SPECIFIED,
2016-05-09 20:58:33.732 | neutron/extensions/segment.py:62: 
DeprecationWarning: ATTR_NOT_SPECIFIED in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
2016-05-09 20:58:33.732 |   'default': attributes.ATTR_NOT_SPECIFIED,

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579967

Title:
  segments extension has deprecated warnings

Status in neutron:
  In Progress

Bug description:
   neutron/extensions/segment.py:51: DeprecationWarning: ATTR_NOT_SPECIFIED in 
version 'mitaka' and will be removed in version 'newton': moved to 
neutron_lib.constants
  2016-05-09 20:58:33.732 |   'default': attributes.ATTR_NOT_SPECIFIED,
  2016-05-09 20:58:33.732 | neutron/extensions/segment.py:62: 
DeprecationWarning: ATTR_NOT_SPECIFIED in version 'mitaka' and will be removed 
in version 'newton': moved to neutron_lib.constants
  2016-05-09 20:58:33.732 |   'default': attributes.ATTR_NOT_SPECIFIED,

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571553] [NEW] When ovs_use_veth is set as False the DHCP port is shown as down

2016-04-18 Thread Gary Kotton
Public bug reported:

On the NSX manager the DHCP port is shown as down. The DHCP service
actually works, but the port is down because the port state is listed
as down:

_uuid   : 611a249b-8645-4040-9ade-65a0412ad0ba
admin_state : down
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status: []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid: []
cfm_remote_mpids: []
cfm_remote_opstate  : []
discovered_ips  : []
duplex  : []
error   : []
external_ids: {attached-mac="fa:16:3e:a7:1d:e5", 
iface-id="19a2e24a-6cbf-4af0-8508-1eeb8c632259", iface-status=active}
ifindex : 0
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current: []
link_resets : 0
link_speed  : []
link_state  : down
lldp: {}
mac : []
mac_in_use  : []
mtu : []
name: "tap19a2e24a-6c"
ofport  : 360
ofport_request  : []
options : {}
other_config: {}
statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, 
rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=648, 
tx_dropped=0, tx_errors=0, tx_packets=8}
status  : {driver_name=openvswitch}
type: internal

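dnsmasq keeps answering because it binds inside the namespace, so the symptom
is cosmetic but confusing. A hedged sketch of one way to flip the state, using
neutron's ip_lib wrapper to set the internal device administratively up (the
device name is taken from the record above, the namespace name is a
placeholder, and treating this as the fix is an assumption):

from neutron.agent.linux import ip_lib

def ensure_device_up(device_name, namespace=None):
    # Brings the link up so OVS stops reporting admin_state/link_state
    # as down for the internal DHCP port.
    device = ip_lib.IPDevice(device_name, namespace=namespace)
    device.link.set_up()

ensure_device_up('tap19a2e24a-6c', namespace='qdhcp-<network-id>')
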
** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571553

Title:
  When ovs_use_veth is set as False the DHCP port is shown as down

Status in neutron:
  In Progress

Bug description:
  On the NSX manager the DHCP port is shown as down. The DHCP service
  actually works, but the port is down because the port state is listed
  as down:

  _uuid   : 611a249b-8645-4040-9ade-65a0412ad0ba
  admin_state : down
  bfd : {}
  bfd_status  : {}
  cfm_fault   : []
  cfm_fault_status: []
  cfm_flap_count  : []
  cfm_health  : []
  cfm_mpid: []
  cfm_remote_mpids: []
  cfm_remote_opstate  : []
  discovered_ips  : []
  duplex  : []
  error   : []
  external_ids: {attached-mac="fa:16:3e:a7:1d:e5", 
iface-id="19a2e24a-6cbf-4af0-8508-1eeb8c632259", iface-status=active}
  ifindex : 0
  ingress_policing_burst: 0
  ingress_policing_rate: 0
  lacp_current: []
  link_resets : 0
  link_speed  : []
  link_state  : down
  lldp: {}
  mac : []
  mac_in_use  : []
  mtu : []
  name: "tap19a2e24a-6c"
  ofport  : 360
  ofport_request  : []
  options : {}
  other_config: {}
  statistics  : {collisions=0, rx_bytes=0, rx_crc_err=0, rx_dropped=0, 
rx_errors=0, rx_frame_err=0, rx_over_err=0, rx_packets=0, tx_bytes=648, 
tx_dropped=0, tx_errors=0, tx_packets=8}
  status  : {driver_name=openvswitch}
  type: internal

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478879] Re: Enable extra dhcp opt extension

2016-04-18 Thread Gary Kotton
** Changed in: vmware-nsx
   Status: New => Fix Released

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478879

Title:
  Enable extra dhcp opt extension

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  This extension can be supported without effort by the NSX-mh plugin
  and it should be added to supported extension aliases.

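Concretely, "adding it to the supported extension aliases" is a one-line
change in the plugin class; a hedged sketch with an illustrative class name
(not the actual NSX-mh plugin code, and the neighbouring aliases are examples):

class NsxMhPlugin(object):  # illustrative stand-in for the NSX-mh plugin
    # Neutron advertises extensions by alias; 'extra_dhcp_opt' is the
    # alias of the extra DHCP options extension.
    supported_extension_aliases = ['quotas',
                                   'security-group',
                                   'extra_dhcp_opt']
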
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483103] Re: Subnetpool does not follow the CIDR prefixes given at subnetpool creation

2016-04-18 Thread Gary Kotton
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483103

Title:
  Subnetpool does not follow the CIDR prefixes given at subnetpool
  creation

Status in neutron:
  Invalid

Bug description:
  I create a subnetpool like this:
  neutron subnetpool-create --pool-prefix 6.6.6.6/24 --pool-prefix 9.9.9.9/30 
--pool-prefix 8.8.8.8/20 test2
  Then I create a subnet from this subnetpool and the request fails:
  neutron subnet-create test --subnetpool test2
  Failed to allocate subnet: Insufficient prefix space to allocate subnet size 
/8

  But when one of the pool prefixes has a prefix length of 8, for example:
  neutron subnetpool-create --pool-prefix 6.6.6.6/8 --pool-prefix 9.9.9.9/30 
--pool-prefix 8.8.8.8/20 test2
  the subnet creation succeeds.

  The subnetpool's default_prefixlen is always 8, which pushes users into 
supplying CIDRs with a prefix length of 8. When a subnet is created from a 
subnetpool, I think the allocation should draw from one of the pool prefixes 
instead of being limited by the default prefixlen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1571503] [NEW] update_lease_expiration is not used by the DHCP agent

2016-04-18 Thread Gary Kotton
Public bug reported:

Not used since commit d9832282cf656b162c51afdefb830dacab72defe

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1571503

Title:
  update_lease_expiration is not used by the DHCP agent

Status in neutron:
  In Progress

Bug description:
  Not used since commit d9832282cf656b162c51afdefb830dacab72defe

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1571503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561118] [NEW] Agents: remove deprecated methods

2016-03-23 Thread Gary Kotton
Public bug reported:

Tracker for removing the deprecated methods in the agents directory. The 
commits where these were added are:
 - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27)

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1561118

Title:
  Agents: remove deprecated methods

Status in neutron:
  In Progress

Bug description:
  Tracker for removing the deprecated methods in the agents directory. The 
commits where these were added are:
   - pullup_route (commit 23b907bc6e87be153c13b1bf3e069467cdda0d27)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1561118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558461] [NEW] VMware NSX CI fails with quota issues

2016-03-19 Thread Gary Kotton
Public bug reported:

The patch https://review.openstack.org/#/c/255285/ breaks the CI. The
floating IP tests fail with:

"Details: {u'message': u"Quota exceeded for resources: ['port'].",
u'type': u'OverQuota', u'detail': u’’}"

This does not happen when using the patch before this as the HEAD.

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
Milestone: None => mitaka-rc1

** Summary changed:

- VMware NSX CI files with quota issues
+ VMware NSX CI fails with quota issues

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1558461

Title:
  VMware NSX CI fails with quota issues

Status in neutron:
  New

Bug description:
  The patch https://review.openstack.org/#/c/255285/ breaks the CI. The
  floating IP tests fail with:

  "Details: {u'message': u"Quota exceeded for resources: ['port'].",
  u'type': u'OverQuota', u'detail': u’’}"

  This does not happen when using the patch before this as the HEAD.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1558461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549288] [NEW] VMware: unable to launch an instance with NSX|V3 neutron plugin

2016-02-24 Thread Gary Kotton
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] idle = self.f(*self.args, **self.kw)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 95, in _func
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] result = f(*args, **kwargs)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 331, in 
_invoke_api
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] details=excep.details)
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] ManagedObjectNotFoundException: The 
object has already been deleted or has not been completely created
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] Cause: Server raised fault: 'The object 
has already been deleted or has not been completely created'
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] Faults: [ManagedObjectNotFound]
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] Details: {'obj': 'vm-35'}
2016-02-22 21:52:35.471 TRACE nova.virt.vmwareapi.vmops [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549288

Title:
  VMware: unable to launch an instance with NSX|V3 neutron plugin

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
  n-cpu log:
  2016-02-22 21:52:35.450 ERROR oslo_vmware.common.loopingcall [-] in fixed 
duration looping call
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall Traceback (most 
recent call last):
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", 
line 76, in _inner2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall  
   self.f(*self.args, **self.kw)
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 428, in 
_poll_task
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall raise 
task_ex2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall 
VimFaultException: Invalid configuration for device '0'.
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall Faults: 
['InvalidDeviceSpec']
  2016-02-22 21:52:35.450 TRACE oslo_vmware.common.loopingcall 2016-02-22 
21:52:35.451 ERROR nova.compute.manager 
[req-ce1da942-7b51-4a54-830b-cfdb43a12ac8 admin demo] [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] Instance fail
  ed to spawn
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] Traceback (most recent call 
last):2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2155, in _bu
  ild_resources
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] yield resources2016-02-22 
21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2009, in _bu
  ild_and_run_instance2016-02-22 21:52:35.451 TRACE nova.compute.manager 
[instance: fb488363-759e-4da5-a57e-01756bdf9cb0] 
block_device_info=block_device_info)
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 406, i
  n spawn
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] admin_password, network_info, 
block_device_info)
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 689, in spawn
  2016-02-22 21:52:35.451 TRACE nova.compute.manager [instance: 
fb488363-759e-4da5-a57e-01756bdf9cb0] metadata)2016-02-22 21:52:35.451 
TRACE nova.compute.manager [instance: fb488363-759e-4da5-a57e-01756bdf9cb0]   
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 303, in
   build_virtual_machine2016-02-22 21:52:35.451 TRACE nova.compute.manager 
[instance: fb488363-759e-4da5-a57

[Yahoo-eng-team] [Bug 1544548] [NEW] DHCP: no indication in API that DHCP service is not running

2016-02-11 Thread Gary Kotton
lf._run(['o'], ('show', self.name)))
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 315, in _run
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   return self._parent._run(options, self.COMMAND, args)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 87, in _run
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   log_fail_as_error=self.log_fail_as_error)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 103, in _execute
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   log_fail_as_error=log_fail_as_error)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 119, in execute
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   addl_env=addl_env)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 88, in 
create_process
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   stderr=subprocess.PIPE)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/opt/stack/neutron/neutron/common/utils.py", line 206, in 
subprocess_popen
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   close_fds=close_fds, env=env)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/usr/local/lib/python2.7/dist-packages/eventlet/green/subprocess.py", 
line 53, in __init__
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   subprocess_orig.Popen.__init__(self, args, 0, *argss, **kwds)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   errread, errwrite)
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
 File "/usr/lib/python2.7/subprocess.py", line 1231, in _execute_child
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m 
   self.pid = os.fork()
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent 
^[[01;35m^[[00mOSError: [Errno 12] Cannot allocate memory
^[[01;31m2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent ^[[01;35m^[[00m
2016-01-18 02:51:13.016 ^[[00;32mDEBUG oslo_concurrency.lockutils 
[^[[01;36mreq-f6d7a436-b9ff-45ca-9cfc-0f147b97effb 
^[[00;36mctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 
bbaa10b4eb2749b3a09b375682b6cb6e^[[00;32m] ^[[01;35m^[[00;32mLock "dhcp-agent" 
released by "neutron.agent.dhcp.agent.network_create_end" :: held 0.521s^[[00m 
^[[00;33mfrom (pid=26547) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282

** Affects: neutron
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544548

Title:
  DHCP: no indication in API that DHCP service is not running

Status in neutron:
  New

Bug description:
  Even if DHCP namespace creation fails at the network node due to some
  reason, neutron API still returns success to the user.

  2016-01-18 02:51:12.661 ^[[00;32mDEBUG neutron.agent.dhcp.agent [^[[01
  ;36mreq-f6d7a436-b9ff-45ca-9cfc-0f147b97effb
  ^[[00;36mctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0
  bbaa10b4eb2749b3a09b375682b6cb6e^[[00;32m] ^[[01;35m^[[00;32mCalling
  driver for network: 351d9017-6e92-4310-ae6d-cf1d0bce0b14 action:
  enable^[[00m ^[[00;33mfrom (pid=26547) call_driver
  /opt/stack/neutron/neutron/agent/dhcp/agent.py:104

  2016-01-18 02:51:12.662 ^[[00;32mDEBUG neutron.agent.linux.dhcp 
[^[[01;36mreq-f6d7a436-b9ff-45ca-9cfc-0f147b97effb 
^[[00;36mctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 
bbaa10b4eb2749b3a09b375682b6cb6e^[[00;32m] ^[[01;35m^[[00;32mDHCP port 
dhcpa382383f-19b6-5ca7-94ec-5ec1e62dc705-351d9017-6e92-4310-ae6d-cf1d0bce0b14 
on network 351d9017-6e92-4310-ae6d-cf1d0bce0b14 does not yet exist. Checking 
for a reserved port.^[[00m ^[[00;33mfrom (pid=26547) _setup_reserved_dhcp_port 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:1098^[[00m
  2016-01-18 02:51:12.663 ^[[00;32mDEBUG neutron.agent.linux.dhcp 
[^[[01;3

[Yahoo-eng-team] [Bug 1543012] [NEW] Routers: attaching a router to an external network without a subnet leads to exceptions

2016-02-08 Thread Gary Kotton
Public bug reported:

2016-01-29 06:45:03.920 18776 ERROR neutron.api.v2.resource 
[req-c2074082-a6d0-4e5a-8657-41fecb82dacc ] add_router_interface failed
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 207, in 
_handle_action
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py",
 line 1672, in add_router_interface
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, 
interface_info)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 723, in add_router_interface
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, 
router_db.admin_state_up)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 468, in _bind_router_on_available_edge
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
self._get_available_and_conflicting_ids(context, router_id))
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 273, in _get_available_and_conflicting_ids
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
gwp['fixed_ips'][0]['subnet_id'])
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource IndexError: list 
index out of range

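The traceback points at an unguarded gwp['fixed_ips'][0]; a hedged sketch of
the defensive lookup (the helper name is illustrative, not the shipped fix):

def gateway_subnet_id(gwp):
    # A gateway port on an external network without a subnet has an
    # empty fixed_ips list, which is what raised the IndexError above.
    fixed_ips = gwp.get('fixed_ips') or []
    return fixed_ips[0]['subnet_id'] if fixed_ips else None
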
** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543012

Title:
  Routers: attaching a router to an external network without a subnet
  leads to exceptions

Status in neutron:
  New

Bug description:
  2016-01-29 06:45:03.920 18776 ERROR neutron.api.v2.resource 
[req-c2074082-a6d0-4e5a-8657-41fecb82dacc ] add_router_interface failed
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in 
resource
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 207, in 
_handle_action
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource return 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py",
 line 1672, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, 
router_id, interface_info)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 723, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, 
router_id, router_db.admin_state_up)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 468, in _bind_router_on_available_edge
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
self._get_available_and_conflicting_ids(context, router_id))
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File 
"/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py",
 line 273, in _get_available_and_conflicting_ids
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource 
gwp['fixed_ips'][0]['subnet_id'])
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource IndexError: list 
index out of range

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1540960] [NEW] DHCP: when upgrading from icehouse DHCP breaks

2016-02-02 Thread Gary Kotton
Public bug reported:

When upgrading to J, K or any other version the DNS support breaks.
That is, the DHCP agent has an option for setting DNS servers via
dnsmasq_config_file=/etc/neutron/myconfig.ini, where myconfig contains
entries such as:
dhcp-option=5,2.4.5.6,2.4.5.7
dhcp-option=6,2.4.5.7,2.4.5.6

In the guest /etc/resolv.conf we see the DHCP address and not the
configured IPs.

The reason for this is
https://github.com/openstack/neutron/commit/3686d035ded94eadab6a3268e4b0f0cca11a22f8

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540960

Title:
  DHCP: when upgrading from icehouse DHCP breaks

Status in neutron:
  New

Bug description:
  When upgrading to J, K or any other version the DNS support breaks.
  That is, the DHCP agent has an option for setting DNS servers via
dnsmasq_config_file=/etc/neutron/myconfig.ini, where myconfig contains
  entries such as:
  dhcp-option=5,2.4.5.6,2.4.5.7
  dhcp-option=6,2.4.5.7,2.4.5.6

  In the guest /etc/resolv.conf we see the DHCP address and not the
  configured IPs.

  The reason for this is
  
https://github.com/openstack/neutron/commit/3686d035ded94eadab6a3268e4b0f0cca11a22f8

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536718] [NEW] VMware: unable to boot an instance

2016-01-21 Thread Gary Kotton
Public bug reported:

The commit
https://github.com/openstack/nova/commit/fbe31e461ac3f16edb795993558a2314b4c16b52
broke the driver

2016-01-17 02:44:57.103 ERROR nova.compute.manager 
[req-eecba3d3-7745-42c4-92f1-f20c6c208bc8 
tempest-AggregatesAdminTestJSON-1988995251 
tempest-AggregatesAdminTestJSON-1542172103] [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] Instance failed to spawn
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] Traceback (most recent call last):
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2184, in _build_resources
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] yield resources
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2030, in _build_and_run_instance
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] block_device_info=block_device_info)
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 382, in spawn
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] image_meta = 
objects.ImageMeta.from_dict(image_meta)
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/objects/image_meta.py", line 97, in from_dict
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] image_meta.get("properties", {}))
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] AttributeError: 'ImageMeta' object has no 
attribute 'get'
2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]

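A hedged sketch of the compatibility guard the traceback suggests: convert
only when the driver is still handed a plain dict (illustrative; the actual
patch may instead drop the conversion in the driver):

from nova import objects

def ensure_image_meta(image_meta):
    # After the cited commit the compute manager passes an ImageMeta
    # object; older code paths passed a dict. Convert only the latter.
    if isinstance(image_meta, dict):
        return objects.ImageMeta.from_dict(image_meta)
    return image_meta
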
** Affects: nova
     Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536718

Title:
  VMware: unable to boot an instance

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The commit
  
https://github.com/openstack/nova/commit/fbe31e461ac3f16edb795993558a2314b4c16b52
  broke the driver

  2016-01-17 02:44:57.103 ERROR nova.compute.manager 
[req-eecba3d3-7745-42c4-92f1-f20c6c208bc8 
tempest-AggregatesAdminTestJSON-1988995251 
tempest-AggregatesAdminTestJSON-1542172103] [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] Instance failed to spawn
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] Traceback (most recent call last):
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2184, in _build_resources
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] yield resources
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2030, in _build_and_run_instance
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] block_device_info=block_device_info)
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 382, in spawn
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] image_meta = 
objects.ImageMeta.from_dict(image_meta)
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]   File 
"/opt/stack/nova/nova/objects/image_meta.py", line 97, in from_dict
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] image_meta.get("properties", {}))
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78] AttributeError: 'ImageMeta' object has no 
attribute 'get'
  2016-01-17 02:44:57.103 3120 ERROR nova.compute.manager [instance: 
10a7107a-01c8-4691-8a5a-e306451d6d78]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1536718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1532750] [NEW] VMware: unable to spin up instance as network not created on host

2016-01-11 Thread Gary Kotton
Public bug reported:

When using Neutron there are edge cases when the network created by
Neutron has not yet been created on the actual host. This results in the
VM creation failing as the network is still to be created on the host:

2016-01-08 20:56:29.486 ^[[00;32mDEBUG oslo_vmware.exceptions 
[^[[00;36m-^[[00;32m] ^[[01;35m^[[00;32mFault InvalidDeviceSpec not 
matched.^[[00m ^[[00;33mfrom (pid=28979) get_fault_class 
/usr/local/lib/python2.7/dist-packages/oslo_vmware/exceptions.py:295^[[00m
2016-01-08 20:56:29.486 ^[[01;31mERROR oslo_vmware.common.loopingcall 
[^[[00;36m-^[[01;31m] ^[[01;35m^[[01;31min fixed duration looping call^[[00m
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mTraceback (most recent call last):
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", 
line 76, in _inner
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mself.f(*self.args, **self.kw)
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 428, in 
_poll_task
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mraise task_ex
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mVimFaultException: Invalid configuration for device '0'.
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mFaults: ['InvalidDeviceSpec']
^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m
 
Adding a retry will successfully address this - giving the actual host time to 
create the network

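A hedged sketch of such a retry loop around the reconfigure task (the attempt
count, delay and the callable are illustrative; oslo_vmware's
VimFaultException does carry a fault_list):

import time

from oslo_vmware import exceptions as vexc

def reconfigure_with_retry(reconfigure, max_attempts=4, delay=2):
    # Retries while the Neutron-created network is still propagating
    # to the host, which surfaces as an InvalidDeviceSpec fault.
    for attempt in range(1, max_attempts + 1):
        try:
            return reconfigure()
        except vexc.VimFaultException as exc:
            if ('InvalidDeviceSpec' not in (exc.fault_list or [])
                    or attempt == max_attempts):
                raise
            time.sleep(delay)
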
** Affects: nova
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: liberty-backport-potential vmware

** Changed in: nova
   Importance: Undecided => High

** Tags added: liberty-backport-potential vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532750

Title:
  VMware: unable to spin up instance as network not created on host

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When using Neutron there are edge cases when the network created by
  Neutron has not yet been created on the actual host. This results in
  the VM creation failing as the network is still to be created on the
  host:

  2016-01-08 20:56:29.486 ^[[00;32mDEBUG oslo_vmware.exceptions 
[^[[00;36m-^[[00;32m] ^[[01;35m^[[00;32mFault InvalidDeviceSpec not 
matched.^[[00m ^[[00;33mfrom (pid=28979) get_fault_class 
/usr/local/lib/python2.7/dist-packages/oslo_vmware/exceptions.py:295^[[00m
  2016-01-08 20:56:29.486 ^[[01;31mERROR oslo_vmware.common.loopingcall 
[^[[00;36m-^[[01;31m] ^[[01;35m^[[01;31min fixed duration looping call^[[00m
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mTraceback (most recent call last):
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", 
line 76, in _inner
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mself.f(*self.args, **self.kw)
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m  File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 428, in 
_poll_task
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mraise task_ex
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mVimFaultException: Invalid configuration for device '0'.
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00mFaults: ['InvalidDeviceSpec']
  ^[[01;31m2016-01-08 20:56:29.486 TRACE oslo_vmware.common.loopingcall 
^[[01;35m^[[00m
   
  Adding a retry will successfully address this - giving the actual host time 
to create the network

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530650] [NEW] Neutron: Unable to create ICMP custom rules when type/code is 0

2016-01-03 Thread Gary Kotton
Public bug reported:

The horizon validation of ICMP types and codes is wrong: values between
0 and 255 should be accepted, but 0 is rejected.

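The classic pitfall here is a truthiness test that treats 0 as "not set"; a
hedged sketch of the check as it should read (illustrative, not horizon's
actual form code):

def icmp_value_ok(value):
    # Wrong: 'if not value' rejects 0, a valid ICMP type/code
    # (e.g. echo-reply). Compare against the range explicitly.
    return value is not None and 0 <= value <= 255

assert icmp_value_ok(0)
assert not icmp_value_ok(256)
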
** Affects: horizon
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1530650

Title:
  Neutron: Unable to create ICMP custom rules when type/code is 0

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The horizon validation of ICMP types and codes is wrong: values
  between 0 and 255 should be accepted, but 0 is rejected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1530650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523503] [NEW] Scheduling does not work with a hybrid cloud

2015-12-07 Thread Gary Kotton
Public bug reported:

The glance image metadata 'hypervisor_type' and 'hypervisor_version_requires' 
are not honored. The reason is that these have been replaced by img_hv_type 
and img_hv_requested_version, so the scheduler will not take them into 
account. This breaks scheduling in a hybrid cloud.

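A hedged sketch of the key translation at issue (the legacy and current names
are the ones cited above; applying such a shim is an assumption, not nova's
behaviour):

LEGACY_TO_CURRENT = {
    'hypervisor_type': 'img_hv_type',
    'hypervisor_version_requires': 'img_hv_requested_version',
}

def translate_image_props(props):
    # Rewrites legacy scheduler keys to the names the scheduler's
    # image-properties filtering actually consults.
    return {LEGACY_TO_CURRENT.get(k, k): v for k, v in props.items()}
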
** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: liberty-backport-potential

** Tags added: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523503

Title:
  Scheduling does not work with a hybrid cloud

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The glance image metadata 'hypervisor_type' and 'hypervisor_version_requires' 
are not honored. The reason is that these have been replaced by img_hv_type 
and img_hv_requested_version, so the scheduler will not take them into 
account. This breaks scheduling in a hybrid cloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522054] [NEW] hacking does not catch LOG.info(_('who lets the dogs out'))

2015-12-02 Thread Gary Kotton
Public bug reported:

It should be:
LOG.info(_LI('who who who who'))

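A hedged sketch of a hacking-style check that would catch the pattern (the
regex and error code are illustrative, not neutron's actual rule):

import re

_LOG_INFO_MARKER = re.compile(r"LOG\.info\(\s*_\(")

def check_log_info_hints(logical_line):
    # Flags LOG.info(_('...')); info-level logs should use the _LI
    # translation marker instead of the bare _() one.
    if _LOG_INFO_MARKER.search(logical_line):
        yield (0, "N5xx: LOG.info requires the _LI translation marker")
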
** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Gary Kotton (garyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522054

Title:
  hacking does not catch LOG.info(_('who lets the dogs out'))

Status in neutron:
  In Progress

Bug description:
  It should be:
  LOG.info(_LI('who who who who'))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521620] [NEW] Neutron-lbaas gate broken due to pep8

2015-12-01 Thread Gary Kotton
x import 
ip_lib" (wrong-import-order)
2015-12-01 11:40:28.624 | C: 28, 0: external import "from 
oslo_utils import excutils" comes before "from neutron.agent.linux import 
ip_lib" (wrong-import-order)
2015-12-01 11:40:28.624 | * Module 
neutron_lbaas.services.loadbalancer.drivers.haproxy.jinja_cfg
2015-12-01 11:40:28.624 | C: 22, 0: external import "from 
oslo_config import cfg" comes before "from neutron.common import utils as 
n_utils" (wrong-import-order)
2015-12-01 11:40:28.624 | * Module 
neutron_lbaas.services.loadbalancer.drivers.haproxy.cfg
2015-12-01 11:40:28.624 | C: 18, 0: external import "from six 
import moves" comes before "from neutron.common import utils as n_utils" 
(wrong-import-order)
2015-12-01 11:40:28.624 | * Module 
neutron_lbaas.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver
2015-12-01 11:40:28.624 | C: 16, 0: external import "from oslo_log 
import log as logging" comes before "from neutron.i18n import _LW" 
(wrong-import-order)
2015-12-01 11:40:28.624 | * Module 
neutron_lbaas.services.loadbalancer.drivers.common.agent_driver_base
2015-12-01 11:40:28.624 | C: 26, 0: external import "from 
oslo_config import cfg" comes before "from neutron.common import constants as 
q_const" (wrong-import-order)
2015-12-01 11:40:28.625 | C: 27, 0: external import "from oslo_log 
import log as logging" comes before "from neutron.common import constants as 
q_const" (wrong-import-order)
2015-12-01 11:40:28.625 | C: 28, 0: external import "import 
oslo_messaging" comes before "from neutron.common import constants as 
q_const" (wrong-import-order)
2015-12-01 11:40:28.625 | C: 29, 0: external import "from 
oslo_utils import importutils" comes before "from neutron.common import 
constants as q_const" (wrong-import-order)
2015-12-01 11:40:29.816 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-neutron-lbaas-pep8/.tox/pep8/bin/pylint 
--rcfile=.pylintrc --output-format=colorized neutron_lbaas'
2015-12-01 11:40:29.816 | ___ summary _

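The complaint boils down to import grouping; a hedged sketch of the ordering
pylint's wrong-import-order wants (an illustrative module, using names from
the log above):

# Third-party imports first...
from oslo_config import cfg
from oslo_log import log as logging
import six

# ...and first-party (neutron) imports after the external ones.
from neutron.db import agents_db
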
** Affects: neutron
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521620

Title:
  Neutron-lbaas gate broken due to pep8

Status in neutron:
  In Progress

Bug description:
  2015-12-01 11:39:24.491 | pep8 runtests: PYTHONHASHSEED='3236704285'
  2015-12-01 11:39:24.491 | pep8 runtests: commands[0] | flake8
  2015-12-01 11:39:24.492 |   /home/jenkins/workspace/gate-neutron-lbaas-pep8$ 
/home/jenkins/workspace/gate-neutron-lbaas-pep8/.tox/pep8/bin/flake8 
  2015-12-01 11:39:27.194 | pep8 runtests: commands[1] | pylint 
--rcfile=.pylintrc --output-format=colorized neutron_lbaas
  2015-12-01 11:39:27.194 |   /home/jenkins/workspace/gate-neutron-lbaas-pep8$ 
/home/jenkins/workspace/gate-neutron-lbaas-pep8/.tox/pep8/bin/pylint 
--rcfile=.pylintrc --output-format=colorized neutron_lbaas 
  2015-12-01 11:39:27.513 | Warning: option ignore-iface-methods is obsolete 
and it is slated for removal in Pylint 1.6.
  2015-12-01 11:39:46.623 | * Module 
neutron_lbaas.agent_scheduler
  2015-12-01 11:39:46.623 | C: 22, 0: external import "from 
oslo_log import log as logging" comes before "from neutron.db import 
agents_db" (wrong-import-order)
  2015-12-01 11:39:46.623 | C: 23, 0: external import "import six" 
comes before "from neutron.db import agents_db" (wrong-import-order)
  2015-12-01 11:39:46.624 | C: 24, 0: external import "import 
sqlalchemy as sa" comes before "from neutron.db import agents_db" 
(wrong-import-order)
  2015-12-01 11:39:46.624 | C: 25, 0: external import "from 
sqlalchemy import orm" comes before "from neutron.db import agents_db" 
(wrong-import-order)
  2015-12-01 11:39:46.624 | C: 26, 0: external import "from 
sqlalchemy.orm import joinedload" comes before "from neutron.db import 
agents_db" (wrong-import-order)
  2015-12-01 11:39:46.624 | * Module 
neutron_lbaas.db.loadbalancer.loadbalancer_dbv2
  2015-12-01 11:39:46.624 | C: 25, 0: external import "from oslo_db 
import exception" comes before "from neutron.api.v2 import attributes"[0

[Yahoo-eng-team] [Bug 1519888] [NEW] Neutron gate failure for test_restart_l3_agent_on_sighup

2015-11-25 Thread Gary Kotton
Public bug reported:

ft1.85:_ 
neutron.tests.functional.agent.test_l3_agent.TestL3AgentRestart.test_restart_l3_agent_on_sighupStringException:
 Empty attachments:
  pythonlogging:''
  pythonlogging:'neutron.api.extensions'
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/functional/agent/test_l3_agent.py", line 37, in 
test_restart_l3_agent_on_sighup
workers=1)
  File "neutron/tests/functional/test_server.py", line 158, in 
_test_restart_service_on_sighup
'size': expected_size}))
  File "neutron/agent/linux/utils.py", line 326, in wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
hub.switch()
  File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
return self.greenlet.switch()
RuntimeError: Timed out waiting for file 
/tmp/tmpfLMsUh/tmpbsDvqt/test_server.tmp to be created and its size become 
equal to 10.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519888

Title:
  Neutron gate failure for test_restart_l3_agent_on_sighup

Status in neutron:
  New

Bug description:
  ft1.85:_ 
neutron.tests.functional.agent.test_l3_agent.TestL3AgentRestart.test_restart_l3_agent_on_sighupStringException:
 Empty attachments:
pythonlogging:''
pythonlogging:'neutron.api.extensions'
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_l3_agent.py", line 37, in 
test_restart_l3_agent_on_sighup
  workers=1)
File "neutron/tests/functional/test_server.py", line 158, in 
_test_restart_service_on_sighup
  'size': expected_size}))
File "neutron/agent/linux/utils.py", line 326, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
  hub.switch()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  return self.greenlet.switch()
  RuntimeError: Timed out waiting for file 
/tmp/tmpfLMsUh/tmpbsDvqt/test_server.tmp to be created and its size become 
equal to 10.
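
  For reference, a minimal sketch of the polling helper this test relies on
  (the real implementation lives in neutron/agent/linux/utils.py; the exact
  signature here is an assumption):

      import eventlet

      def wait_until_true(predicate, timeout=60, sleep=1):
          # Poll until predicate() is truthy, yielding to other
          # greenthreads between attempts; if the timeout elapses first,
          # raise the RuntimeError seen in the traceback above.
          with eventlet.Timeout(timeout, RuntimeError('Timed out waiting')):
              while not predicate():
                  eventlet.sleep(sleep)

  The gate failure means the predicate (the restarted workers recreating
  the test file with the expected size) never became true within the
  timeout.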

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1519888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486565] Re: Network/Image names allow terminal escape sequence

2015-11-25 Thread Gary Kotton
nova boot --image e2f1e48a-b9f5-41f2-8b9e-d0833c945ef7 --flavor 1 --nic
net-id=30d01de4-328d-4cd4-9ec0-1e6cba1cb3f4 $(echo -e
"\E[37mhidden\x1b[f")

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486565

Title:
  Network/Image names allow terminal escape sequence

Status in Glance:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This allows a malicious user to create a network whose name will mess
  with the administrator's terminal when they list networks.

  Steps to reproduce:

  As a user: neutron net-create $(echo -e "\E[37mhidden\x1b[f")

  As an admin: neutron net-list
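
  A minimal client-side mitigation sketch (my illustration, not the
  project's fix): strip control characters from names before echoing them
  to a terminal.

      import re

      # ESC and the other C0 control characters (tab/newline excepted)
      # are what let a name rewrite or hide terminal output.
      _CONTROL_CHARS = re.compile(r'[\x00-\x08\x0b-\x1f\x7f]')

      def sanitize_for_terminal(name):
          return _CONTROL_CHARS.sub('?', name)

      print(sanitize_for_terminal('\x1b[37mhidden\x1b[f'))
      # -> ?[37mhidden?[f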

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1486565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317515] Re: flavor-access-add doesn't validate the tenant id

2015-11-22 Thread Gary Kotton
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317515

Title:
  flavor-access-add doesn't validate the tenant id

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I can use a random string to represent the tenant when calling flavor-
  access-add, and it will be shown in the flavor-access-list output even
  though it has no meaning and won't work. This causes confusion for
  users, who use the command to add tenants by name and then wonder why
  they can't access the new flavour (e.g. bug 1083602, bug 1315479).

  Steps to reproduce:

  1. Create a private flavour

  $ nova flavor-create --is-public false abcdef auto 1 1 1
  +--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID                                   | Name   | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+
  | f349c9a1-ce16-4511-9da0-eeced2f1baa4 | abcdef | 1         | 1    | 0         |      | 1     | 1.0         | False     |
  +--------------------------------------+--------+-----------+------+-----------+------+-------+-------------+-----------+

  2. Give access to a tenant by name

  $ nova flavor-access-add f349c9a1-ce16-4511-9da0-eeced2f1baa4 demo

  3. It looks like it was added correctly, but if I do 'nova flavor-
  list'  with a user from the demo tenant it will not show the flavour.

  $ nova flavor-access-list --flavor f349c9a1-ce16-4511-9da0-eeced2f1baa4
  +--------------------------------------+-----------+
  | Flavor_ID                            | Tenant_ID |
  +--------------------------------------+-----------+
  | f349c9a1-ce16-4511-9da0-eeced2f1baa4 | demo      |
  +--------------------------------------+-----------+

  The name doesn't need to exist at all, I can successfully add random
  strings:

  $ nova flavor-access-add f349c9a1-ce16-4511-9da0-eeced2f1baa4 this-tenant-does-not-exist
  +--------------------------------------+----------------------------+
  | Flavor_ID                            | Tenant_ID                  |
  +--------------------------------------+----------------------------+
  | f349c9a1-ce16-4511-9da0-eeced2f1baa4 | demo                       |
  | f349c9a1-ce16-4511-9da0-eeced2f1baa4 | this-tenant-does-not-exist |
  +--------------------------------------+----------------------------+

  I think we shouldn't allow invalid IDs when running "nova flavor-
  access-add".

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1317515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432960] Re: OVSVAPP:VM fails to boot due to ERROR:got an unexpected keyword argument 'flavor'

2015-11-19 Thread Gary Kotton
This is not a neutron bug and is nova related. It may be due to the fact
that the OVSVAPP has nova code out of tree...

** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432960

Title:
  OVSVAPP:VM fails to boot due to ERROR:got an unexpected keyword
  argument 'flavor'

Status in OpenStack Compute (nova):
  New

Bug description:
  setup:
  
  Created a devstack containing a controller and a compute proxy. Local.conf
  details are placed in the link provided below.
  Brought the ovsvapp up.
  On nova boot the VM fails to get hosted due to the error "got an unexpected
  keyword argument 'flavor'" in screen-n-cpu.log.

  All the logs are placed at
  http://15.126.220.115/23177/156283/062a382ff7ec8e18255ecf3cb870138d4d606ce1/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515866] Re: two lbaas agent instances are run for gate-neutron-lbaasv1-dsvm-api

2015-11-18 Thread Gary Kotton
I do not think that this is relevant anymore as the V1 is deprecated.

** Tags added: lbaas

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515866

Title:
  two lbaas agent instances are run for gate-neutron-lbaasv1-dsvm-api

Status in neutron:
  Won't Fix

Bug description:
  on gate, two instances of lbaas agents are executed.
  one by neutron-legacy, another by neutron-lbaas devstack plugin.

  for example:

  http://logs.openstack.org/34/243934/4/check/gate-neutron-lbaasv1-dsvm-
  api/3b43406/logs/devstacklog.txt.gz#_2015-11-12_22_53_07_383

  http://logs.openstack.org/34/243934/4/check/gate-neutron-lbaasv1-dsvm-
  api/3b43406/logs/devstacklog.txt.gz#_2015-11-12_22_55_57_529

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515866/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515508] Re: Kilo:No command to configure LB session persistence in openstack CLI.

2015-11-18 Thread Gary Kotton
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
   Importance: Undecided => Low

** Changed in: python-neutronclient
   Status: New => Confirmed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515508

Title:
  Kilo:No command to configure LB session persistence in openstack CLI.

Status in python-neutronclient:
  Confirmed

Bug description:
  The OpenStack env is Kilo on Ubuntu 14.04; I can't configure load-balancer
  session persistence via the OpenStack CLI. Is there any way to configure it
  via the CLI?

  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=HTTP_COOKIE
  name 'HTTP_COOKIE' is not defined

  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=http_cookie
  name 'http_cookie' is not defined

  
  [root@nsj1 ~]# neutron lb-vip-update 748b4c56-7a3b-41d4-b591-ecd9014730ed 
--session_persistence type=app_cookie
  name 'app_cookie' is not defined


  [root@nsj1 ~]# neutron lb-vip-create
  usage: neutron lb-vip-create [-h] [-f {shell,table,value}] [-c COLUMN]
   [--max-width ] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--address ADDRESS]
   [--admin-state-down]
   [--connection-limit CONNECTION_LIMIT]
   [--description DESCRIPTION] --name NAME
   --protocol-port PROTOCOL_PORT --protocol
   {TCP,HTTP,HTTPS} --subnet-id SUBNET
   POOL
  neutron lb-vip-create: error: too few arguments

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1515508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516791] Re: LBaaS v2 doc for show loadbalancer has incorrect status

2015-11-18 Thread Gary Kotton
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1516791

Title:
  LBaaS v2 doc for show loadbalancer has incorrect status

Status in openstack-manuals:
  New

Bug description:
  http://developer.openstack.org/api-ref-
  networking-v2-ext.html#showLoadBalancerv2 displays the output of a
  show loadbalancer call. One of the keys shown is 'status'.

  Based on the code (https://github.com/openstack/neutron-
  lbaas/blob/master/neutron_lbaas/services/loadbalancer/data_models.py#L499)
  I would expect to see 2 statuses here, a provisioning_status and an
  operating_status.

  This appears to be an error in the docs, since a Loadbalancer object
  would contain both of these attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-manuals/+bug/1516791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515670] Re: VPNaaS: Modify neutron-client to allow Horizon to detect multiple local subnet feature

2015-11-18 Thread Gary Kotton
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Changed in: python-neutronclient
   Importance: Undecided => High

** Changed in: python-neutronclient
 Assignee: (unassigned) => Akihiro Motoki (amotoki)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515670

Title:
  VPNaaS: Modify neutron-client to allow Horizon to detect multiple
  local subnet feature

Status in python-neutronclient:
  New

Bug description:
  In the review of 231133, Akihiro mentioned follow-up work for the
  neutron client, so that Horizon can detect whether or not the new
  multiple local subnet feature, with endpoint groups, is available.

  Placeholder for that work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1515670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480813] Re: How can tenant vm instance access management network

2015-11-17 Thread Gary Kotton
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1480813

Title:
  How can tenant vm instance access management network

Status in neutron:
  Invalid

Bug description:
  Hi team,

  Now we use monasca as a monitoring solution for our public cloud
  product, but we have encountered a network issue: we want to set up the
  agent program in the user's VM instance, and the monasca agent needs to
  access keystone for authentication. We thought of two methods:

  1. Create a virtual NIC on the VM instance and access keystone through
  the management network.
  2. Go through the L3 router to the external network and expose the
  keystone API on the external network.

  Both of these methods seem to have security issues.

  I want to know which solution is preferred and how to avoid the
  security issues. Thank you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1480813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489678] Re: return inappropriate message when run net-list -F {fields} from neutronclient

2015-11-17 Thread Gary Kotton
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489678

Title:
  return inappropriate message when run net-list -F {fields} from
  neutronclient

Status in python-neutronclient:
  New

Bug description:
  version: 2.6.0

  Run neutron net-list -F ${a field not contained in the resource}; it
  returns some inappropriate output, while the expected behavior is to
  return nothing.

  normal:
  neutron net-list
  +--------------------------------------+---------+----------------------------------------------------------+
  | id                                   | name    | subnets                                                  |
  +--------------------------------------+---------+----------------------------------------------------------+
  | d6424edb-103f-4aa8-8098-86199630bb1c | public  | 309abcb3-da67-4948-9b26-f8788057d685 172.24.4.0/24       |
  |                                      |         | a69a8e17-21ff-4170-a6ff-6eb59c7932af 2001:db8::/64       |
  | 38653292-71cb-4dd4-b55f-750224ce484a | private | 9d334a61-dbb8-4271-9a06-8f1dd5a933c1 10.0.0.0/24         |
  |                                      |         | aed807fb-4167-4388-bdfe-2b9c0843e0cd fdca:d8ff:39ca::/64 |
  +--------------------------------------+---------+----------------------------------------------------------+

  
  repro:
  run neutron net-list -F abc
  it returned:
  ++
  ||
  ++

  ++
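
  A sketch of the expected behavior (an illustration, not the
  neutronclient code; the field list is hypothetical): validate requested
  fields against the resource's known attributes and emit nothing when
  none match.

      KNOWN_FIELDS = ('id', 'name', 'subnets', 'status', 'shared')

      def select_fields(networks, requested):
          # Keep only fields the resource actually has; an empty
          # selection should produce no output rather than an empty
          # bordered table.
          fields = [f for f in requested if f in KNOWN_FIELDS]
          if not fields:
              return []
          return [{f: net.get(f) for f in fields} for net in networks]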

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1489678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515926] [NEW] Nova gate fails: IPv6 subnet is configured for automatic address

2015-11-13 Thread Gary Kotton
Public bug reported:

2015-11-13 07:19:53.080 ERROR oslo_messaging.rpc.dispatcher 
[req-c1a7e5c5-bdac-4ce9-8099-18965908e11b None None] Exception during message 
handling: Invalid input for operation: IPv6 address 2003::f816:3eff:fef9:ebe0 
can not be directly assigned to a port on subnet 
14b253af-925c-45e9-9523-76a649ee37c4 since the subnet is configured for 
automatic addresses.
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 147, in wrapper
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
ectxt.value = e.inner_exc
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
__exit__
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 226, in 
update_dhcp_port
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher return 
self._port_action(plugin, context, port, 'update_port')
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 88, in 
_port_action
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher return 
plugin.update_port(context, port['id'], port)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1154, in 
update_port
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher port)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 1221, in 
update_port
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher new_port, 
new_mac)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py", line 221, in 
update_port_with_ips
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
original['mac_address'], db_port['device_owner'])
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py", line 322, in 
_update_ips_for_port
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
changes.add, device_owner)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/new/neutron/neutron/db/ipam_non_pluggable_backend.py", line 264, in 
_test_fixed_ips_for_port
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher raise 
n_exc.InvalidInput(error_message=msg)
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher InvalidInput: 
Invalid input for operation: IPv6 address 2003::f816:3eff:fef9:ebe0 can not be 
directly assigned to a port on subnet 14b253af-925c-45e9-9523-76a649ee37c4 
since the subnet is configured for automatic addresses.
2015-11-13 07:19:53.080 27535 ERROR oslo_messaging.rpc.dispatcher 
2015-11-13 07:19:53.083 ERROR oslo_messaging._drivers.common 
[req-c1a7e5c5-bdac-4ce9-8099-18965908e11b None None] Returning exception 
Invalid input for operation: IPv6 address 2003::f816:3eff:fef9:ebe0 can not be 
directly assigned to a port on subnet 14b253af-925c-45e9-9523-76a649ee37c4 
since the subnet is configured for automatic addresses. to caller
2015-11-13 07:19:53.083 ERROR oslo_messaging._drivers.common 
[req-c1a7e5c5-bdac-4ce9-8099-18965908e11b None None] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in 

[Yahoo-eng-team] [Bug 1499269] Re: cannot attach direct type port (sr-iov) to existing instance

2015-11-10 Thread Gary Kotton
This looks like a nova issue and not a neutron one.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499269

Title:
  cannot attach direct type port (sr-iov) to existing instance

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Whenever I try to attach a direct port to an existing instance It
  fails:

  #neutron port-create Management --binding:vnic_type direct
  Created a new port:
  +-----------------------+--------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                |
  +-----------------------+--------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                 |
  | allowed_address_pairs |                                                                                      |
  | binding:host_id       |                                                                                      |
  | binding:profile       | {}                                                                                   |
  | binding:vif_details   | {}                                                                                   |
  | binding:vif_type      | unbound                                                                              |
  | binding:vnic_type     | direct                                                                               |
  | device_id             |                                                                                      |
  | device_owner          |                                                                                      |
  | fixed_ips             | {"subnet_id": "6c82ff4c-124e-469a-8444-1446cc5d979f", "ip_address": "10.92.29.123"} |
  | id                    | ce455654-4eb5-4b89-b868-b426381951c8                                                 |
  | mac_address           | fa:16:3e:6b:15:e8                                                                    |
  | name                  |                                                                                      |
  | network_id            | 5764ca50-1f30-4daa-8c86-a21fed9a679c                                                 |
  | security_groups       | 5d2faf7b-2d32-49a8-978e-a91f57ece17d                                                 |
  | status                | DOWN                                                                                 |
  | tenant_id             | d5ecb0eea96f4996b565fd983a768b11                                                     |
  +-----------------------+--------------------------------------------------------------------------------------+

  # nova interface-attach --port-id ce455654-4eb5-4b89-b868-b426381951c8 
voicisc4srv1
  ERROR (ClientException): Failed to attach interface (HTTP 500) (Request-ID: 
req-11516bf4-7ab7-414c-a4ee-63e44aaf00a5)

  nova-compute.log:

  0a14aeb82013e5a1631de80 d5ecb0eea96f4996b565fd983a768b11 - - -] Exception 
during message handling: Failed to attach network adapter device to 
056d455a-314d-4853-839e-70229a56dfcd
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6632, in 
attach_interface
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher 
port_id, requested_ip)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 443, in 
decorated_function
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-09-24 10:41:35.434 43535 TRACE oslo_messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1496893] Re: Unable to modify "IP Allocation pool" for public subnet from Horizon but can do it from CLI

2015-11-10 Thread Gary Kotton
** Also affects: horizon
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496893

Title:
  Unable to modify "IP Allocation pool" for public subnet from Horizon
  but can do it from CLI

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Running RedHat OSP7 with Kilo installed on my nodes.

  The topology consists of:
  1. Controller + Network
  2. Compute1
  3. Compute2

  The CLI shows this for the public subnet :

  [root@g07-controller-1 ~(keystone_admin)]# neutron subnet-list
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | id                                   | name | cidr           | allocation_pools                                 |
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | fefa2447-f5db-4ff6-afa9-a669385cfda4 |      | 10.30.118.0/27 | {"start": "10.30.118.10", "end": "10.30.118.20"} |
  +--------------------------------------+------+----------------+--------------------------------------------------+
  [root@g07-controller-1 ~(keystone_admin)]# 

  
  Now from the CLI I am able to change the pool :

  [root@g07-controller-1 ~(keystone_admin)]# neutron subnet-update 
--allocation-pool start=10.30.118.10,end=10.30.118.25 
fefa2447-f5db-4ff6-afa9-a669385cfda4 
  Updated subnet: fefa2447-f5db-4ff6-afa9-a669385cfda4
  [root@g07-controller-1 ~(keystone_admin)]# neutron subnet-list
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | id                                   | name | cidr           | allocation_pools                                 |
  +--------------------------------------+------+----------------+--------------------------------------------------+
  | fefa2447-f5db-4ff6-afa9-a669385cfda4 |      | 10.30.118.0/27 | {"start": "10.30.118.10", "end": "10.30.118.25"} |
  +--------------------------------------+------+----------------+--------------------------------------------------+

  
  When I tried to do the same thing from Horizon, I was able to make the
  change and also got a success message, but when I checked again the
  pool size had not changed.


  Some logs are :

  ==> /var/log/httpd/horizon_access.log <==
  10.131.78.38 - - [17/Sep/2015:07:48:12 -0700] "GET 
/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail HTTP/1.1" 
200 4415 
"http://10.30.118.4/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail;
 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/600.5.17 (KHTML, 
like Gecko) Version/8.0.5 Safari/600.5.17"
  10.131.78.38 - - [17/Sep/2015:07:48:12 -0700] "GET 
/dashboard/i18n/js/horizon/ HTTP/1.1" 200 2372 
"http://10.30.118.4/dashboard/admin/networks/d0047805-804b-4a03-942d-b27590d6aef4/detail;
 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_3) AppleWebKit/600.5.17 (KHTML, 
like Gecko) Version/8.0.5 Safari/600.5.17"

  ==> /var/log/neutron/server.log <==
  2015-09-17 07:48:12.409 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60599)
  2015-09-17 07:48:12.411 26650 INFO neutron.wsgi 
[req-66292bc0-ac2c-48eb-972f-fe44673775fc ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/extensions.json HTTP/1.1" 200 4806 0.002139
  2015-09-17 07:48:12.414 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60600)
  2015-09-17 07:48:12.421 26650 INFO neutron.wsgi 
[req-d1473f09-183d-4fd0-8739-0d5a87b7a935 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET 
/v2.0/subnets.json?network_id=d0047805-804b-4a03-942d-b27590d6aef4 HTTP/1.1" 
200 670 0.007126
  2015-09-17 07:48:12.423 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60601)
  2015-09-17 07:48:12.435 26650 INFO neutron.wsgi 
[req-2fe1133f-d998-41fd-88e4-f90cf97d13f7 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/ports.json?network_id=d0047805-804b-4a03-942d-b27590d6aef4 
HTTP/1.1" 200 1444 0.011954
  2015-09-17 07:48:12.437 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60602)
  2015-09-17 07:48:12.442 26650 INFO neutron.wsgi 
[req-4e76244a-35b6-46b5-84e4-f845a322f7d8 ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET 
/v2.0/networks/d0047805-804b-4a03-942d-b27590d6aef4/dhcp-agents.json HTTP/1.1" 
200 731 0.005297
  2015-09-17 07:48:12.444 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60603)
  2015-09-17 07:48:12.455 26650 INFO neutron.wsgi 
[req-c89b4555-df61-4c98-9d4f-00a26b4db7fd ] 10.30.118.4 - - [17/Sep/2015 
07:48:12] "GET /v2.0/networks/d0047805-804b-4a03-942d-b27590d6aef4.json 
HTTP/1.1" 200 598 0.011052
  2015-09-17 07:48:12.456 26650 INFO neutron.wsgi [-] (26650) accepted 
('10.30.118.4', 60604)
  2015-09-17 07:48:12.465 26650 INFO neutron.wsgi 
[req-a4a36cb9-fb80-47de-a033-c9f7c25b6835 ] 10.30.118.4 - 

[Yahoo-eng-team] [Bug 1449901] Re: Vmware NSXv: missing functionality to control security-group rules logging using dfw API

2015-11-09 Thread Gary Kotton
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449901

Title:
  Vmware NSXv: missing functionality to control security-group rules
  logging using dfw API

Status in neutron:
  Won't Fix

Bug description:
  The NSXv distributed firewall exposes an API to control rule logging;
  we wish to utilize this API to give the cloud admin a way to control
  security-group rule logging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506465] [NEW] VMware: unable to deploy debian image with liberty

2015-10-15 Thread Gary Kotton
ack/nova/nova/objects/fields.py", line 272, in coerce
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions return 
super(SCSIModel, self).coerce(obj, attr, value)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/fields.py", line 
284, in coerce
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions raise 
ValueError(msg)
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions ValueError: Field 
value  is invalid
2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions 


The reason for this is that the default adapter type is set as ""
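
A sketch of the shape of a guard for this (my assumption, not the merged
patch): treat an empty vmware_adaptertype image property as unset before
field coercion, with 'lsiLogic' assumed here as the fallback default.

    def normalize_adapter_type(value, default='lsiLogic'):
        # An image registered with vmware_adaptertype "" should fall back
        # to a sane default instead of failing coerce() with ValueError.
        return value if value else default

    assert normalize_adapter_type('') == 'lsiLogic'
    assert normalize_adapter_type('busLogic') == 'busLogic'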

** Affects: nova
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: liberty-backport-potential vmware

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova
   Importance: Undecided => Critical

** Tags added: vmware

** Tags added: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506465

Title:
  VMware: unable to deploy debian image with liberty

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  A debian image loaded in and used in Kilo is unable to be used in
  liberty.

  nicira@htb-1n-eng-dhcp8:~$ nova image-show debian-2.6.32-i686
  +-----------------------------+--------------------------------------+
  | Property                    | Value                                |
  +-----------------------------+--------------------------------------+
  | OS-EXT-IMG-SIZE:size        | 1073741824                           |
  | created                     | 2015-10-15T11:19:33Z                 |
  | id                          | 34d8aa13-fbad-44ad-90c9-4d259abb0a2a |
  | metadata hw_vif_model       | e1000                                |
  | metadata vmware_adaptertype |                                      |
  | metadata vmware_disktype    | preallocated                         |
  | minDisk                     | 0                                    |
  | minRam                      | 0                                    |
  | name                        | debian-2.6.32-i686                   |
  | progress                    | 100                                  |
  | status                      | ACTIVE                               |
  | updated                     | 2015-10-15T11:19:49Z                 |
  +-----------------------------+--------------------------------------+

  The trace from the API is:

  2015-10-15 11:25:48.118 ERROR nova.api.openstack.extensions 
[req-ca854e8c-68fe-47f6-9ce0-c2a5083b417d demo demo] Unexpected exception in 
API method
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 602, in create
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions 
**create_kwargs)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/hooks.py", line 149, in inner
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions rv = f(*args, 
**kwargs)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1562, in create
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 1162, in _create_instance
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 921, in 
_validate_and_build_base_options
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.extensions image_meta = 
objects.ImageMeta.from_dict(boot_meta)
  2015-10-15 11:25:48.118 TRACE nova.api.openstack.exte

[Yahoo-eng-team] [Bug 1505318] [NEW] VMware: fail to attach config drive when cluster is in a folder

2015-10-12 Thread Gary Kotton
Public bug reported:

2015-10-12 10:36:17.470 INFO nova.virt.vmwareapi.vmops 
[req-85b5ece9-2f07-4c6d-ba63-307e9941b191 demo demo] [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] Using config drive for instance
/usr/local/lib/python2.7/dist-packages/urllib3/util/ssl_.py:100: 
InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
2015-10-12 10:36:17.819 ERROR oslo_vmware.rw_handles 
[req-85b5ece9-2f07-4c6d-ba63-307e9941b191 demo demo] Error occurred while 
writing data to 
https://10.161.64.148:443/folder/bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f/configdrive.iso?dsName=vdnetSharedStorage=os-datacenter.
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles Traceback (most recent 
call last):
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/rw_handles.py", line 276, 
in write
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles 
self._file_handle.send(data)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/lib/python2.7/httplib.py", line 798, in send
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles self.sock.sendall(data)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 168, in 
sendall
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles v = 
self.send(data_to_send)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 143, in 
send
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles super(GreenSSLSocket, 
self).send, data, flags)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/green/ssl.py", line 110, in 
_call_trampolining
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles return func(*a, **kw)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles   File 
"/usr/lib/python2.7/ssl.py", line 198, in send
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles v = 
self._sslobj.write(data)
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles error: [Errno 32] Broken 
pipe
2015-10-12 10:36:17.819 TRACE oslo_vmware.rw_handles 
2015-10-12 10:36:17.833 ERROR nova.virt.vmwareapi.vmops 
[req-85b5ece9-2f07-4c6d-ba63-307e9941b191 demo demo] [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] Creating config drive failed with error: 
Error occurred while writing data to 
https://10.161.64.148:443/folder/bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f/configdrive.iso?dsName=vdnetSharedStorage=os-datacenter.
Cause: [Errno 32] Broken pipe
2015-10-12 10:36:17.834 ERROR nova.compute.manager 
[req-85b5ece9-2f07-4c6d-ba63-307e9941b191 demo demo] [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] Instance failed to spawn
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] Traceback (most recent call last):
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2172, in _build_resources
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] yield resources
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2019, in _build_and_run_instance
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] block_device_info=block_device_info)
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 390, in spawn
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] admin_password, network_info, 
block_device_info)
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 759, in spawn
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] injected_files, admin_password, 
network_info)
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 242, in 
_configure_config_drive
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f] cookies)
2015-10-12 10:36:17.834 TRACE nova.compute.manager [instance: 
bfa0cfd0-ddb7-4be5-a06d-2bc38ef9990f]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 813, in 
_create_config_drive
2015-10-12 10:36:17.834 TRACE nova.compute.manager 

[Yahoo-eng-team] [Bug 1504876] [NEW] VMware: unable to soft reboot a VM when there is a soft lockup

2015-10-10 Thread Gary Kotton
Public bug reported:

In the event that there is a kernel exception in the guest and a soft
reboot is invoked, then the instance cannot be rebooted.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504876

Title:
  VMware: unable to soft reboot a VM when there is a soft lockup

Status in OpenStack Compute (nova):
  New

Bug description:
  In the event that there is a kernel exception in the guest and a soft
  reboot is invoked, then the instance cannot be rebooted.
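
  For illustration, the usual shape of a workaround (an assumption, not
  the committed fix) is to fall back to a hard reset when the guest tools
  cannot service the soft reboot; the helper names below are hypothetical:

      def request_guest_reboot(instance):
          print('RebootGuest on %s' % instance)   # stand-in for the vSphere call

      def power_reset(instance):
          print('ResetVM_Task on %s' % instance)  # stand-in for the vSphere call

      def reboot(instance, reboot_type, tools_responsive):
          # During a guest kernel soft lockup VMware Tools stops
          # responding, so a SOFT reboot (which relies on the tools) can
          # never complete; downgrade it to a hard reset.
          if reboot_type == 'SOFT' and not tools_responsive:
              reboot_type = 'HARD'
          if reboot_type == 'SOFT':
              request_guest_reboot(instance)
          else:
              power_reset(instance)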

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498370] [NEW] DHCP agent: interface unplug leads to exception

2015-09-22 Thread Gary Kotton
Public bug reported:

2015-09-22 01:23:42.612 ERROR neutron.agent.dhcp.agent [-] Unable to disable 
dhcp for c543db4d-e077-488f-b58c-5805f63f86b6.
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 221, in disable
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self._destroy_namespace_and_port()
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 226, in 
_destroy_namespace_and_port
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self.device_manager.destroy(self.network, self.interface_name)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1223, in destroy
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self.driver.unplug(device_name, namespace=network.namespace)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 358, in unplug
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent tap_name = 
self._get_tap_name(device_name, prefix)
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 299, in 
_get_tap_name
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent dev_name = 
dev_name.replace(prefix or self.DEV_NAME_PREFIX,
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent AttributeError: 
'NoneType' object has no attribute 'replace'
2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
2015-09-22 01:23:42.616 INFO neutron.agent.dhcp.agent [-] Synchronizing state 
complete

The reason is that the device name is None.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498370

Title:
  DHCP agent: interface unplug leads to exception

Status in neutron:
  New

Bug description:
  2015-09-22 01:23:42.612 ERROR neutron.agent.dhcp.agent [-] Unable to disable 
dhcp for c543db4d-e077-488f-b58c-5805f63f86b6.
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent Traceback (most recent 
call last):
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent getattr(driver, 
action)(**action_kwargs)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 221, in disable
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self._destroy_namespace_and_port()
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 226, in 
_destroy_namespace_and_port
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self.device_manager.destroy(self.network, self.interface_name)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1223, in destroy
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
self.driver.unplug(device_name, namespace=network.namespace)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 358, in unplug
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent tap_name = 
self._get_tap_name(device_name, prefix)
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 299, in 
_get_tap_name
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent dev_name = 
dev_name.replace(prefix or self.DEV_NAME_PREFIX,
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent AttributeError: 
'NoneType' object has no attribute 'replace'
  2015-09-22 01:23:42.612 TRACE neutron.agent.dhcp.agent 
  2015-09-22 01:23:42.616 INFO neutron.agent.dhcp.agent [-] Synchronizing state 
complete

  The reason is that the device name is None.
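
  A defensive sketch of the fix implied here (my illustration, not
  necessarily the merged patch): skip the unplug when no interface name
  was recorded.

      def safe_unplug(driver, network, device_name):
          # If the interface was never plugged, device_name is None and
          # passing it along crashes in _get_tap_name with the
          # AttributeError shown above; there is nothing to unplug.
          if device_name is None:
              return
          driver.unplug(device_name, namespace=network.namespace)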

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498370/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491256] [NEW] VMware: nova log files overflow

2015-09-02 Thread Gary Kotton
Public bug reported:

The commit
https://review.openstack.org/#q,bcc002809894f39c84dca5c46034468ed0469c2b,n,z
removed the suds logging level. This causes the log files to overflow with
data. An example is http://208.91.1.172/logs/178652/3/1437935064/ where the
nova compute log file is 56M compared to all other files that are a few K.
This makes debugging and troubleshooting terribly difficult. This also has an
upgrade impact for people going to Liberty from any other version, and that
is a degradation.

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491256

Title:
  VMware: nova log files overflow

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The commit
  https://review.openstack.org/#q,bcc002809894f39c84dca5c46034468ed0469c2b,n,z
  removed the suds logging level. This causes the log files to overflow
  with data. An example is http://208.91.1.172/logs/178652/3/1437935064/
  where the nova compute log file is 56M compared to all other files that
  are a few K. This makes debugging and troubleshooting terribly
  difficult. This also has an upgrade impact for people going to Liberty
  from any other version, and that is a degradation.
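
  A minimal sketch of restoring the quieter behavior (plain standard-library
  logging; whether nova's fix does exactly this is an assumption):

      import logging

      # suds logs every SOAP request/response at DEBUG; raising its
      # logger to INFO keeps the compute log readable without touching
      # nova's own log level.
      logging.getLogger('suds').setLevel(logging.INFO)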

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1491256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490257] [NEW] VMware: nova ignores cinder volume mappings

2015-08-30 Thread Gary Kotton
Public bug reported:

When booting from a volume, the Nova driver ignores the cinder
'storage_policy' and moves the volume to the datastore that nova
selected. Nova should use the cinder datastore.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490257

Title:
  VMware: nova ignores cinder volume mappings

Status in OpenStack Compute (nova):
  New

Bug description:
  When booting from a volume, the Nova driver ignores the cinder
  'storage_policy' and moves the volume to the datastore that nova
  selected. Nova should use the cinder datastore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475837] [NEW] RBAC network support breaks decomposed plugins

2015-07-18 Thread Gary Kotton
)
  File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 500 != 201

** Affects: neutron
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475837

Title:
  RBAC network support breaks decomposed plugins

Status in neutron:
  In Progress

Bug description:
  ft1.1586: 
vmware_nsx.neutron.tests.unit.vmware.test_nsx_v_plugin.TestPortsV2.test_delete_port_public_network_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2015-07-17 19:35:37,537 INFO [neutron.manager] Loading core plugin: 
vmware_nsx.neutron.plugins.vmware.plugin.NsxVPlugin
  2015-07-17 19:35:37,744 INFO 
[vmware_nsx.neutron.plugins.vmware.plugins.managers] Configured router type 
driver names: ['distributed', 'exclusive', 'shared']
  2015-07-17 19:35:37,745 INFO 
[vmware_nsx.neutron.plugins.vmware.plugins.managers] Loaded type driver names: 
['exclusive', 'distributed', 'shared']
  2015-07-17 19:35:37,745 INFO 
[vmware_nsx.neutron.plugins.vmware.plugins.managers] Registered types: 
['exclusive', 'distributed', 'shared']
  2015-07-17 19:35:37,745 INFO 
[vmware_nsx.neutron.plugins.vmware.plugins.managers] Tenant router_types: 
['shared', 'distributed', 'exclusive']
  2015-07-17 19:35:37,745 INFO [neutron.manager] Service L3_ROUTER_NAT is 
supported by the core plugin
  2015-07-17 19:35:37,756 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
  2015-07-17 19:35:37,761 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
  2015-07-17 19:35:37,762  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
  2015-07-17 19:35:37,763  WARNING [neutron.quota] subnet is already registered.
  2015-07-17 19:35:37,764  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
  2015-07-17 19:35:37,765  WARNING [neutron.quota] subnetpool is already 
registered.
  2015-07-17 19:35:37,765  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
  2015-07-17 19:35:37,767  WARNING [neutron.quota] network is already 
registered.
  2015-07-17 19:35:37,767  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
  2015-07-17 19:35:37,768  WARNING [neutron.quota] port is already registered.
  2015-07-17 19:35:37,889 INFO 
[vmware_nsx.neutron.plugins.vmware.plugins.nsx_v] Added {'sg_id': '0', 
'vnic_id': '1'}(sg_id)s member to NSX security group 1
  2015-07-17 19:35:37,934 ERROR [neutron.api.v2.resource] create failed
  Traceback (most recent call last):
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/src/neutron/neutron/api/v2/resource.py,
 line 83, in resource
  result = method(request=request, **args)
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py,
 line 146, in wrapper
  ectxt.value = e.inner_exc
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py,
 line 119, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py,
 line 136, in wrapper
  return f(*args, **kwargs)
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/src/neutron/neutron/api/v2/base.py,
 line 412, in create
  item[self._resource])
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/src/neutron/neutron/api/v2/base.py,
 line 690, in _validate_network_tenant_ownership
  resource_item['network_id'])
File vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py, line 733, in 
get_network
  net_result = self._make_network_dict(network)
File 
/home/jenkins/workspace/gate-vmware-nsx-python27/.tox/py27/src/neutron/neutron/db/db_base_plugin_common.py,
 line 228, in _make_network_dict
  entry.target_tenant in ('*', context.tenant_id)):
  AttributeError: 'NoneType' object has no attribute 'tenant_id'
  }}}

  pythonlogging:'neutron.api.extensions': {{{
  2015-07-17 19:35:37,756 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
  2015-07-17 19:35:37,761 ERROR

[Yahoo-eng-team] [Bug 1471751] [NEW] VMware: unable to delete VM with attached volumes

2015-07-06 Thread Gary Kotton
Public bug reported:

When the ESX host running the VMs was deleted or removed, the following
exception occurred when the instance was deleted via OpenStack:

2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher 
ManagedObjectNotFound: The object has already been deleted or has not been 
completely created
2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Cause: Server 
raised fault: 'The object has already been deleted or has not been completely 
created'
2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Faults: 
[ManagedObjectNotFound]
2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Details: 
{'obj': 'vm-2073'}
2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher
2015-07-02 21:33:00.653 21297 ERROR oslo.messaging._drivers.common [-] 
Returning exception The object has already been deleted or has not been 
completely created
Cause: Server raised fault: 'The object has already been deleted or has not 
been completely created'
Faults: [ManagedObjectNotFound]
Details: {'obj': 'vm-2073'} to caller

This prevents the instance from being deleted in OpenStack.

** Affects: nova
 Importance: High
 Assignee: Gary Kotton (garyk)
 Status: New


** Tags: vmware

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1471751

Title:
  VMware: unable to delete VM with attached volumes

Status in OpenStack Compute (Nova):
  New

Bug description:
  When the ESX host running the VMs was deleted or removed, the following
  exception occurred when the instance was deleted via OpenStack:

  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher 
ManagedObjectNotFound: The object has already been deleted or has not been 
completely created
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Cause: 
Server raised fault: 'The object has already been deleted or has not been 
completely created'
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Faults: 
[ManagedObjectNotFound]
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher Details: 
{'obj': 'vm-2073'}
  2015-07-02 21:33:00.638 21297 TRACE oslo.messaging.rpc.dispatcher
  2015-07-02 21:33:00.653 21297 ERROR oslo.messaging._drivers.common [-] 
Returning exception The object has already been deleted or has not been 
completely created
  Cause: Server raised fault: 'The object has already been deleted or has not 
been completely created'
  Faults: [ManagedObjectNotFound]
  Details: {'obj': 'vm-2073'} to caller

  This prevents the instance from being deleted in OpenStack.
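
  A sketch of the defensive handling implied (the exception class comes
  from oslo.vmware; the surrounding wiring is my assumption):

      from oslo_vmware import exceptions as vexc

      def destroy_backing_vm(destroy_call, session, vm_ref):
          # If vSphere reports the backing VM as already gone, treat the
          # delete as a success so the OpenStack instance record can
          # still be cleaned up.
          try:
              destroy_call(session, vm_ref)
          except vexc.ManagedObjectNotFoundException:
              pass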

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1471751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240653] Re: VMware: move file name handling to utility methods

2015-06-24 Thread Gary Kotton
This has been addressed in the oslo vmware library.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240653

Title:
  VMware: move file name handling to utility methods

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  It's tempting to use os.path to create proper file names in nova-
  compute. However, for the vmware driver this breaks if nova-compute is
  run on Windows. We should use utility methods to create the
  filename/path so that people don't need to know not to use os.path.

  Another option is to use the posixpath module.
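
  For illustration, the standard-library way to get POSIX-style joins on
  any host OS (a sketch of the suggestion above; the helper name is
  hypothetical):

      import posixpath

      # Datastore paths always use '/' separators, even when nova-compute
      # runs on Windows, so build them with posixpath instead of os.path.
      def build_datastore_path(datastore, *parts):
          return '[%s] %s' % (datastore, posixpath.join(*parts))

      print(build_datastore_path('ds1', 'instance-0001', 'disk.vmdk'))
      # -> [ds1] instance-0001/disk.vmdk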

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

