[Yahoo-eng-team] [Bug 1449363] [NEW] OVS-agent: "invalid IP address" in arp spoofing protection

2015-04-27 Thread YAMAMOTO Takashi
Public bug reported:

The ARP spoofing protection code tries to install flows with arp_spa set to
an IPv6 address, and ovs-ofctl correctly complains.

2015-04-26 00:17:36.844 ERROR neutron.agent.linux.utils [req-f516905e-77b4-4975-
8b8d-5b3669cdda0d None None] 
Command: ['ovs-ofctl', 'add-flows', 'br-int', '-']
Exit code: 1
Stdin: 
hard_timeout=0,idle_timeout=0,priority=2,arp,arp_spa=2003::3,arp_op=0x2,table=24,in_port=197,actions=normal
Stdout:
Stderr: ovs-ofctl: -:1: 2003::3: invalid IP address

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW52YWxpZCBJUCBhZGRyZXNzXCIgYW5kIGZpbGVuYW1lOiBcInEtYWd0LmxvZy5nelwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDMwMTk4NDczMjM3fQ==
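
A minimal sketch of the kind of filtering the agent needs before building
arp_spa matches (netaddr is assumed to be available; the helper name is
hypothetical):

```python
# Sketch only: arp_spa can only match IPv4 addresses, so IPv6 entries from a
# port's fixed IPs / allowed-address-pairs must be skipped (or handled by a
# separate IPv6 ND protection path).
import netaddr

def ipv4_addresses_only(addresses):
    """Yield only the IPv4 entries; e.g. '2003::3' is dropped."""
    for addr in addresses:
        if netaddr.IPNetwork(addr).version == 4:
            yield addr
```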

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449363

Title:
  OVS-agent: "invalid IP address" in arp spoofing protection

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The ARP spoofing protection code tries to install flows with arp_spa set
  to an IPv6 address, and ovs-ofctl correctly complains.

  2015-04-26 00:17:36.844 ERROR neutron.agent.linux.utils 
[req-f516905e-77b4-4975-
  8b8d-5b3669cdda0d None None] 
  Command: ['ovs-ofctl', 'add-flows', 'br-int', '-']
  Exit code: 1
  Stdin: 
hard_timeout=0,idle_timeout=0,priority=2,arp,arp_spa=2003::3,arp_op=0x2,table=24,in_port=197,actions=normal
  Stdout:
  Stderr: ovs-ofctl: -:1: 2003::3: invalid IP address

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW52YWxpZCBJUCBhZGRyZXNzXCIgYW5kIGZpbGVuYW1lOiBcInEtYWd0LmxvZy5nelwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDMwMTk4NDczMjM3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425198] Re: Failure to resize a volume-based instance on Icehouse

2015-04-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425198

Title:
  Failure to resize a volume-based instance on Icehouse

Status in OpenStack Compute (Nova):
  Expired

Bug description:
  I have an RDO Icehouse deployment. All instances are volume-based. I am
  trying to resize a VM that has 2 attached volumes. The resize request is
  accepted fine, but then nothing happens. No disk is actually stored on a
  nova host.

  In the logs I see the following:

  ==> conductor.log <==
  2015-02-24 21:16:33.341 34256 ERROR glanceclient.common.http [-] Request 
returned failure status.
  2015-02-24 21:16:33.342 34256 WARNING nova.compute.utils 
[req-0d4f95e1-2f7b-47ce-a4a4-0baf98452b06 fa8e24d6ab78409ba9a3e3484bdf01d8 
276980a87a4d4da096c0d987d85a408c] [instance: 
1a38991a-ac99-49c6-ba50-8e3129393f31] Can't access image : Image  could not be 
found.
  2015-02-24 21:16:33.351 34256 INFO oslo.messaging._drivers.impl_qpid [-] 
Connected to AMQP server on tst-ctrl01.gcloud.tst:5672

  ==> scheduler.log <==
  2015-02-24 21:16:33.378 34390 WARNING nova.scheduler.host_manager 
[req-0d4f95e1-2f7b-47ce-a4a4-0baf98452b06 fa8e24d6ab78409ba9a3e3484bdf01d8 
276980a87a4d4da096c0d987d85a408c] Host has more disk space than database 
expected (3gb > -213gb)
  2015-02-24 21:16:33.379 34390 WARNING nova.scheduler.host_manager 
[req-0d4f95e1-2f7b-47ce-a4a4-0baf98452b06 fa8e24d6ab78409ba9a3e3484bdf01d8 
276980a87a4d4da096c0d987d85a408c] Host has more disk space than database 
expected (24gb > -353gb)

  
  Any hints?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449344] [NEW] When a VM's security group is empty, packets are still blocked by the security group

2015-04-27 Thread haoliang
Public bug reported:

1.1 Under the test tenant, create network net1 and subnet subnet1 with network 
address 192.168.1.0/24; keep the other settings at their defaults.
1.2 Create router R1, attach R1's internal interface to subnet1 and set an 
external network for R1.
1.3 Create VM1-1 on subnet1 with an empty security group and the guest 
firewall disabled.
1.4 VM1-1 fails to ping the subnet1 gateway 192.168.1.1.

Capturing on the tap.xxx interface of the Linux bridge connected to VM1-1, we 
can see the ICMP echo requests going from VM1-1 to 192.168.1.1.
Capturing on qvb.xxx, we see no packets at all. Therefore the packets are 
dropped by the security group, even though VM1-1's security group is empty.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449344

Title:
  When a VM's security group is empty, packets are still blocked by the
  security group

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1.1 Under the test tenant, create network net1 and subnet subnet1 with
  network address 192.168.1.0/24; keep the other settings at their defaults.
  1.2 Create router R1, attach R1's internal interface to subnet1 and set an
  external network for R1.
  1.3 Create VM1-1 on subnet1 with an empty security group and the guest
  firewall disabled.
  1.4 VM1-1 fails to ping the subnet1 gateway 192.168.1.1.

  Capturing on the tap.xxx interface of the Linux bridge connected to VM1-1,
  we can see the ICMP echo requests going from VM1-1 to 192.168.1.1.
  Capturing on qvb.xxx, we see no packets at all. Therefore the packets are
  dropped by the security group, even though VM1-1's security group is empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449318] [NEW] power_state does not take effect when runcmd errors

2015-04-27 Thread Laurence Rowe
Public bug reported:

When the runcmd errors the power-state-change does not take effect and
the instance is not powered off.


AMI ID: ubuntu-vivid-15.04-amd64-server-20150422 (ami-2d10241d)

Instance launched on EC2 using awscli:

$ aws --region us-west-2 ec2 run-instances --image-id ami-2d10241d
--instance-type t2.medium --security-groups ssh-http-https --user-data
file://fail.cfg

Minimal fail.cfg cloud config:
```
#cloud-config
power_state:
  mode: poweroff

runcmd:
- set -e
- python3 -c "raise Exception"
```

Longer fail.cfg used for retrieving logs:
```
#cloud-config
output:
  all: '| tee -a /var/log/cloud-init-output.log'

power_state:
  mode: poweroff

bootcmd:
- cloud-init-per once ssh-users-ca echo "TrustedUserCAKeys 
/etc/ssh/users_ca.pub" >> /etc/ssh/sshd_config

runcmd:
- set -e
- python3 -c "raise Exception"

write_files:
- path: /etc/ssh/users_ca.pub
  content: 
```

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.log and cloud-init-output.log"
   
https://bugs.launchpad.net/bugs/1449318/+attachment/4386159/+files/cloud-init.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1449318

Title:
  power_state does not take effect when runcmd errors

Status in Init scripts for use on cloud images:
  New

Bug description:
  When the runcmd errors the power-state-change does not take effect and
  the instance is not powered off.

  
  AMI ID: ubuntu-vivid-15.04-amd64-server-20150422 (ami-2d10241d)

  Instance launched on EC2 using awscli:

  $ aws --region us-west-2 ec2 run-instances --image-id ami-2d10241d
  --instance-type t2.medium --security-groups ssh-http-https --user-data
  file://fail.cfg

  Minimal fail.cfg cloud config:
  ```
  #cloud-config
  power_state:
mode: poweroff

  runcmd:
  - set -e
  - python3 -c "raise Exception"
  ```

  Longer fail.cfg used for retrieving logs:
  ```
  #cloud-config
  output:
all: '| tee -a /var/log/cloud-init-output.log'

  power_state:
mode: poweroff

  bootcmd:
  - cloud-init-per once ssh-users-ca echo "TrustedUserCAKeys 
/etc/ssh/users_ca.pub" >> /etc/ssh/sshd_config

  runcmd:
  - set -e
  - python3 -c "raise Exception"

  write_files:
  - path: /etc/ssh/users_ca.pub
content: 
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1449318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449286] [NEW] lb's operating_status is not in DISABLED state when a user creates a loadbalancer with the admin_state_up field set to 'False'

2015-04-27 Thread Madhusudhan Kandadai
Public bug reported:

When I create a loadbalancer with the following body, 'operating_status'
still shows 'ONLINE' when it should be 'DISABLED'. I also believe
'provisioning_status' should be shown as 'OFFLINE', but the response below
shows 'ACTIVE'.

Steps to reproduce:

POST http://:9696/v2.0/lbaas/loadbalancers with the required
headers:

Body:

{
    "loadbalancer": {
        "vip_subnet_id": "",
        "admin_state_up": false
    }
}

Response:

{
  "loadbalancers": [
{
  "description": "",
  "admin_state_up": false,
  "tenant_id": "aad7bae2df174c1291bf994a8b8fac89",
  "provisioning_status": "ACTIVE",
  "listeners": [],
  "vip_address": "10.0.0.5",
  "vip_port_id": "59104203-e503-4d67-93ff-70a8df3c53c4",
  "provider": "haproxy",
  "vip_subnet_id": "94672fbb-0f7e-4c54-a538-a9826bd616d1",
  "id": "e1976562-b1f6-45cd-8e32-5b961f80fa24",
  "operating_status": "ONLINE",
  "name": ""
}
  ]
}
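
A minimal sketch of the check I would expect at create time (the constants
mirror the API strings above; this is illustrative, not the plugin's actual
code):

```python
ONLINE = 'ONLINE'
DISABLED = 'DISABLED'

def initial_operating_status(loadbalancer_body):
    """Derive operating_status from admin_state_up instead of defaulting to ONLINE."""
    admin_up = loadbalancer_body.get('admin_state_up', True)
    return ONLINE if admin_up else DISABLED
```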

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449286

Title:
  lb's operating_status is not in DISABLED state when a user creates a
  loadbalancer with the admin_state_up field set to 'False'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When I create a loadbalancer with the following body, 'operating_status'
  still shows 'ONLINE' when it should be 'DISABLED'. I also believe
  'provisioning_status' should be shown as 'OFFLINE', but the response below
  shows 'ACTIVE'.

  Steps to reproduce:

  POST http://:9696/v2.0/lbaas/loadbalancers with the required
  headers:

  Body:

  {
      "loadbalancer": {
          "vip_subnet_id": "",
          "admin_state_up": false
      }
  }

  Response:

  {
"loadbalancers": [
  {
"description": "",
"admin_state_up": false,
"tenant_id": "aad7bae2df174c1291bf994a8b8fac89",
"provisioning_status": "ACTIVE",
"listeners": [],
"vip_address": "10.0.0.5",
"vip_port_id": "59104203-e503-4d67-93ff-70a8df3c53c4",
"provider": "haproxy",
"vip_subnet_id": "94672fbb-0f7e-4c54-a538-a9826bd616d1",
"id": "e1976562-b1f6-45cd-8e32-5b961f80fa24",
"operating_status": "ONLINE",
"name": ""
  }
]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449286/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449263] [NEW] The native OVSDB Connection class should allow users to pass in their own Idl instance

2015-04-27 Thread Terry Wilson
Public bug reported:

The OVS library now allows registering a notification hook by
subclassing the Idl class and defining a notify() function. To be able
to use this in Neutron (and networking-ovn), it must be possible for the
Connection object to use a subclassed Idl. It currently is hardcoded to
instantiate its own idl from ovs.db.idl.Idl.

networking-ovn needs to pass in a subclassed Idl to be able to notify
neutron when a port is successfully wired. Neutron could use this to
avoid having to spawn ovsdb-client monitor when using the native OVSDB
driver.
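
A minimal sketch of the intended usage, assuming the ovs python library; the
Connection call commented at the end shows the proposed interface, not
something that exists today:

```python
from ovs.db import idl

class NotifyingIdl(idl.Idl):
    """Idl subclass whose notify() hook runs on every OVSDB update."""

    def notify(self, event, row, updates=None):
        # e.g. tell Neutron that a port was wired when Interface rows change
        print(event, getattr(row, 'name', row))

# Hypothetical, once Connection accepts a pre-built Idl (or an Idl class):
# conn = Connection(connection_string, timeout, schema_name,
#                   idl_cls=NotifyingIdl)
```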

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449263

Title:
  The native OVSDB Connection class should allow users to pass in their
  own Idl instance

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The OVS library now allows registering a notification hook by
  subclassing the Idl class and defining a notify() function. To be able
  to use this in Neutron (and networking-ovn), it must be possible for
  the Connection object to use a subclassed Idl. It currently is
  hardcoded to instantiate its own idl from ovs.db.idl.Idl.

  networking-ovn needs to pass in a subclassed Idl to be able to notify
  neutron when a port is successfully wired. Neutron could use this to
  avoid having to spawn ovsdb-client monitor when using the native OVSDB
  driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449221] [NEW] Nova volume-detach lacks '--force' command for cleanup

2015-04-27 Thread Scott DAngelo
Public bug reported:

Cinder volumes can get stuck in an 'attaching' or 'detaching' state, and they
need to be cleaned up or they become unusable. This cleanup is not possible
today because python-novaclient's 'nova volume-detach' lacks a '--force'
option.
Nova will need to call Cinder's force_detach. Cinder already has a
force_detach API that should be called so that the storage driver can
terminate_connection and detach the volume from the backend. The Nova
BlockDeviceMapping table can still have an entry indicating that a volume is
attached. Changes to Nova are needed so that a force_detach removes that
entry when a volume gets stuck in 'attaching' or 'detaching'.
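
A rough sketch of the Cinder call Nova would need to make; the endpoint,
token handling and (empty) action payload are illustrative, and newer Cinder
releases expect attachment/connector details in the body:

```python
import requests

def force_detach(cinder_endpoint, token, volume_id):
    """POST the os-force_detach volume action directly to Cinder."""
    url = "%s/volumes/%s/action" % (cinder_endpoint.rstrip('/'), volume_id)
    resp = requests.post(url,
                         json={"os-force_detach": {}},
                         headers={"X-Auth-Token": token})
    resp.raise_for_status()
```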

** Affects: nova
 Importance: Undecided
 Assignee: Scott DAngelo (scott-dangelo)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Scott DAngelo (scott-dangelo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449221

Title:
  Nova volume-detach lacks '--force' command for cleanup

Status in OpenStack Compute (Nova):
  New

Bug description:
  Cinder volumes can get stuck in an 'attaching' or 'detaching' state, and
  they need to be cleaned up or they become unusable. This cleanup is not
  possible today because python-novaclient's 'nova volume-detach' lacks a
  '--force' option.
  Nova will need to call Cinder's force_detach. Cinder already has a
  force_detach API that should be called so that the storage driver can
  terminate_connection and detach the volume from the backend. The Nova
  BlockDeviceMapping table can still have an entry indicating that a volume
  is attached. Changes to Nova are needed so that a force_detach removes
  that entry when a volume gets stuck in 'attaching' or 'detaching'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431652] Re: os-volume_attachments return 500 error code instead of 404 if invalid volume is specified

2015-04-27 Thread Matthew Edmonds
** Project changed: nova => python-cinderclient

** Changed in: python-cinderclient
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431652

Title:
  os-volume_attachments return 500 error code instead of 404 if invalid
  volume is specified

Status in Python client library for Cinder:
  Confirmed

Bug description:
  If I do a DELETE of os-volume_attachments with an invalid volume, a 500
  error code is returned instead of 404.

  The problem is at volume = self.volume_api.get(context, volume_id),
  where the NotFound exception is not handled. This problem is fixed in
  the v3 API.
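
  A minimal sketch of the missing handling (the exception and helper names
  here are placeholders, not the merged fix):

```python
from webob import exc

class VolumeNotFound(Exception):
    """Stand-in for nova.exception.VolumeNotFound."""

def get_volume_or_404(volume_api, context, volume_id):
    try:
        return volume_api.get(context, volume_id)
    except VolumeNotFound as e:
        # translate the NotFound into a client-visible 404 instead of a 500
        raise exc.HTTPNotFound(explanation=str(e))
```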

  2015-03-12 08:49:19.146 20273 INFO nova.osapi_compute.wsgi.server 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] 9.114.193.249,127.0.0.1 "DELETE 
/v2/dd069270f6634cafaf66777c4a2ee137/servers/e44ee780-0b57-4bcb-89ef-ab99e4d7d1a0/os-volume_attachments/volume-815308985
 HTTP/1.1" status: 500 len: 295 time: 0.6408780
  ...
  2015-03-12 08:49:18.969 20273 ERROR nova.api.openstack 
[req-001f6e6e-4726-4738-a3e7-74c5c7eaaac5 None] Caught error: Not Found (HTTP 
404) (Request-ID: req-8d133de9-430e-41ad-819a-3f9685deed29)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 748, 
in __call__
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token.py", line 684, 
in _call_app
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/volumes.py",
 line 398, in delete
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack volume = 
self.volume_api.get(context, volume_id)
  ...
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack item = 
cinder.cinderclient(context).volumes.get(volume_id)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 227, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._get("/volumes/%s" % volume_id, "volume")
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/cinderclient/base.py", line 149, in _get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack resp, body = 
self.api.client.get(url)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 88, in get
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self._cs_request(url, 'GET', **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 85, in 
_cs_request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
self.request(url, method, **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/cinderclient/client.py", line 80, in request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack return 
super(SessionClient, self).request(*args, **kwargs)
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 166, in 
request
  2015-03-12 08:49:18.969 20273 TRACE nova.api.openstack resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2015-03-12 08:49:18.96

[Yahoo-eng-team] [Bug 1447459] Re: stable/kilo fetches master translations

2015-04-27 Thread Rob Cresswell
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447459

Title:
  stable/kilo fetches master translations

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack I18n & L10n:
  New

Bug description:
  Stable/kilo fetches master (or latest) translations instead of the
  *-kilo resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1447459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449111] [NEW] Horizon doesn't accept changes / authentication issue

2015-04-27 Thread Justin Bowen
Public bug reported:

Completed a fresh install using the instructions [1], up to the point where
they said to use Launchpad, as we installed it on a single server to start
with. After the install I am able to access Juju and Horizon with no
problems. If I try to make a change in Horizon it seems to be accepted, but
not long after I start to see a list of errors along the lines of "unable to
retrieve x" and "unauthorized" while logged in with the main login that shows
up in openstack-status. If I log out, or if someone else tries to log in
after these messages appear, the login is unsuccessful at first and then an
authentication error is shown. I have seen this described on a few forums as
being related to the cinder user not being set up in MySQL, but I have not
been able to confirm that is the root cause yet.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: horizon openstack standalone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449111

Title:
  Horizon doesn't accept changes / authentication issue

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Completed a fresh install using the instructions [1], up to the point
  where they said to use Launchpad, as we installed it on a single server to
  start with. After the install I am able to access Juju and Horizon with no
  problems. If I try to make a change in Horizon it seems to be accepted,
  but not long after I start to see a list of errors along the lines of
  "unable to retrieve x" and "unauthorized" while logged in with the main
  login that shows up in openstack-status. If I log out, or if someone else
  tries to log in after these messages appear, the login is unsuccessful at
  first and then an authentication error is shown. I have seen this
  described on a few forums as being related to the cinder user not being
  set up in MySQL, but I have not been able to confirm that is the root
  cause yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449084] [NEW] Boot from volume does not boot from volume

2015-04-27 Thread Dane Fichter
Public bug reported:

Booting from volume does not actually boot from the volume; it boots
from a Glance image. Perform the following steps to test this:

Using the GUI steps:
1. In the "Volumes" tab, select "Create Volume". For "Volume Source", select an 
image (I use CirrOS). Click "Create Volume". 
2. On your host machine, open a terminal and overwrite the volume:
$ sudo dd if=/dev/zero of=/dev/stack-volumes-lvmdriver-1/volume-[ID OF VOLUME] bs=10M
3. In the "Instances" tab, select "Launch Instance". For "Instance Boot 
Source", select "Boot from volume". Be sure to select a flavor with enough 
storage to support the volume (if using CirrOS, pick m1.tiny). For "Volume", 
select the volume you created in step 1. Click "Launch".

Using the CLI:
1. Create the volume:
cinder create --image-id $(glance image-list | grep cirros-0.3.1-x86_64-uec[^-] 
| cut -d '|' -f 2 | xargs echo) --name sample-volume 1
2. Overwrite the volume:
$ sudo dd if=/dev/zero of=/dev/stack-volumes-lvmdriver-1/volume-[ID OF VOLUME] bs=10M
3. Boot the volume:
nova boot --flavor m1.tiny --boot-volume sample-volume instance

Expected result: The instance should not boot in either of these cases; the 
volumes are empty.
Actual result: The instance boots successfully in both of these cases. 

Additional test to show that the instance is actually being booted from
the Glance image:

Using the CLI:
1. Create the volume:
cinder create --image-id $(glance image-list | grep cirros-0.3.1-x86_64-uec[^-] 
| cut -d '|' -f 2 | xargs echo) --name sample-volume 1
2. Delete the Glance image:
glance image-list | grep cirros-0.3.1-x86_64-uec | cut -d '|' -f 2 | xargs 
glance image-delete
3. Attempt to boot the volume:
nova boot --flavor m1.tiny --boot-volume sample-volume instance

Expected result: This should succeed; we are attempting to boot from the volume.
Actual result: This fails.
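
One quick way to see what the guest really boots from is to inspect the
libvirt domain XML for the instance; this is only a diagnostic sketch (the
domain name and the ordering of disks are assumptions):

```python
import subprocess
import xml.etree.ElementTree as ET

def boot_disk_source(domain_name):
    """Return the <source> attributes of the first disk in the domain XML."""
    xml = subprocess.check_output(['virsh', 'dumpxml', domain_name])
    root = ET.fromstring(xml)
    disk = root.find('./devices/disk')
    source = disk.find('source') if disk is not None else None
    return dict(source.attrib) if source is not None else None
```

If the first disk points at a Glance-backed local file rather than the
attached Cinder volume, that would explain the behaviour described above.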

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449084

Title:
  Boot from volume does not boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Booting from volume does not actually boot from the volume; it boots
  from a Glance image. Perform the following steps to test this:

  Using the GUI steps:
  1. In the "Volumes" tab, select "Create Volume". For "Volume Source", select 
an image (I use CirrOS). Click "Create Volume". 
  2. On your host machine, open a terminal and overwrite the volume:
  $ sudo dd if=/dev/zero of=/dev/stack-volumes-lvmdriver-1/volume-[ID OF VOLUME] bs=10M
  3. In the "Instances" tab, select "Launch Instance". For "Instance Boot 
Source", select "Boot from volume". Be sure to select a flavor with enough 
storage to support the volume (if using CirrOS, pick m1.tiny). For "Volume", 
select the volume you created in step 1. Click "Launch".

  Using the CLI:
  1. Create the volume:
  cinder create --image-id $(glance image-list | grep 
cirros-0.3.1-x86_64-uec[^-] | cut -d '|' -f 2 | xargs echo) --name 
sample-volume 1
  2. Overwrite the volume:
  $ sudo dd if=/dev/zero of=/dev/stack-volumes-lvmdriver-1/volume-[ID OF VOLUME] bs=10M
  3. Boot the volume:
  nova boot --flavor m1.tiny --boot-volume sample-volume instance

  Expected result: The instance should not boot in either of these cases; the 
volumes are empty.
  Actual result: The instance boots successfully in both of these cases. 

  Additional test to show that the instance is actually being booted
  from the Glance image:

  Using the CLI:
  1. Create the volume:
  cinder create --image-id $(glance image-list | grep 
cirros-0.3.1-x86_64-uec[^-] | cut -d '|' -f 2 | xargs echo) --name 
sample-volume 1
  2. Delete the Glance image:
  glance image-list | grep cirros-0.3.1-x86_64-uec | cut -d '|' -f 2 | xargs 
glance image-delete
  3. Attempt to boot the volume:
  nova boot --flavor m1.tiny --boot-volume sample-volume instance

  Expected result: This should succeed; we are attempting to boot from the 
volume.
  Actual result: This fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1449084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447132] Re: nova-manage db migrate_flavor_data doesn't do instances not in instance_extra

2015-04-27 Thread John Garbutt
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
Milestone: None => kilo-rc3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447132

Title:
  nova-manage db migrate_flavor_data doesn't do instances not in
  instance_extra

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  nova-manage db migrate_flavor_data selects all of the instances by
  joining them to the instance_extra table and then checks which ones
  have flavor information in the metadata table or the extras table.
  However, if an instance isn't in instance_extra (for example, it
  hasn't been written to since the creation of the extras table) then it
  won't be migrated (even if it isn't deleted AND has flavor info in the
  metadata table).

  migrate_flavor_data should select all of the instances in the metadata
  table with flavor information and migrate those.
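
  A rough sketch of the selection change this implies (SQLAlchemy-style;
  the table and column names follow nova's schema but should be treated as
  assumptions here):

```python
import sqlalchemy as sa

def instances_needing_flavor_migration(instances, instance_system_metadata):
    """Select instance UUIDs that still carry flavor data in system metadata."""
    joined = instances.join(
        instance_system_metadata,
        instances.c.uuid == instance_system_metadata.c.instance_uuid)
    return (sa.select([instances.c.uuid])
              .select_from(joined)
              .where(sa.and_(instances.c.deleted == 0,
                             instance_system_metadata.c.key.like(
                                 'instance_type_%')))
              .distinct())
```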

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448075] Re: Recent compute RPC API version bump missed out on security group parts of the api

2015-04-27 Thread Thierry Carrez
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/kilo
   Importance: Undecided => Critical

** Changed in: nova/kilo
Milestone: None => kilo-rc3

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1448075

Title:
  Recent compute RPC API version bump missed out on security group parts
  of the api

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress

Bug description:
  Because the compute and security group client-side RPC APIs both share
  the same target, they need to be bumped together, as has been done
  previously in 6ac1a84614dc6611591cb1f1ec8cce737972d069 and
  6b238a5c9fcef0e62cefbaf3483645f51554667b.

  In fact, having two different client-side RPC APIs for the same target
  is of little value and, to avoid future mistakes, they should really be
  merged into one.

  The impact of this bug is that all security group related calls will
  start to fail in an upgrade scenario.
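
  Illustrative only (not nova's actual classes): with oslo.messaging, two
  client-side APIs that share one topic have to move their version caps
  together, otherwise calls from the un-bumped API are sent with a version
  the other side no longer expects during a rolling upgrade.

```python
import oslo_messaging as messaging

COMPUTE_TOPIC = 'compute'
RPC_API_VERSION = '4.0'   # bump this in one place for every API on the topic

compute_target = messaging.Target(topic=COMPUTE_TOPIC,
                                  version=RPC_API_VERSION)
secgroup_target = messaging.Target(topic=COMPUTE_TOPIC,
                                   version=RPC_API_VERSION)

assert compute_target.version == secgroup_target.version
```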

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1448075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448813] Re: radvd running as neutron user in Kilo, attached network dead

2015-04-27 Thread Thierry Carrez
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => kilo-rc3

** Changed in: neutron/kilo
   Status: New => In Progress

** Changed in: neutron/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448813

Title:
  radvd running as neutron user in Kilo, attached network dead

Status in App Catalog:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron kilo series:
  In Progress

Bug description:
  Kilo RC1 release, mirantis Debian Jessie build

  Linux Kernel 3.19.3, ML2 vlan networking

  radvd version 1:1.9.1-1.3

  Network with IPv6 ULA SLAAC, IPv6 GUA SLAAC, Ipv4 RFC1918 configured.

  Radvd does not start, neutron-l3-agent does not set up OVS vlan
  forwarding between network and compute node, IPv4 completely
  disconnected as well. Looks like complete L2 breakage.

  Need to get this one fixed before release of Kilo.

  Work around:

  chown root:neutron /usr/sbin/radvd
  chmod 2750 /usr/sbin/radvd

  radvd gives message about not being able to create an IPv6 ICMP port
  in neutron-l3-agent log, just like when run as an non-root user.

  Notice radvd is not being executed via rootwrap/sudo anymore; it is run
  like all the other ip route/ip address/ip netns information-gathering
  commands. Was executing it in a privileged fashion missed in the Neutron
  code refactor?

To manage notifications about this bug go to:
https://bugs.launchpad.net/app-catalog/+bug/1448813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441107] Re: Missing entry point for haproxy namespace driver

2015-04-27 Thread Thierry Carrez
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => kilo-rc3

** Changed in: neutron
Milestone: kilo-rc3 => None

** Changed in: neutron/kilo
   Status: New => In Progress

** Changed in: neutron/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441107

Title:
  Missing entry point for haproxy namespace driver

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  In Progress

Bug description:
  Using haproxy NSDriver for lbaas agent from neutron repo causes agent
  failure during start. The reason is that translation from neutron to
  neutron_lbaas in package path is missing in entry points.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447883] Re: Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-04-27 Thread Thierry Carrez
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron/kilo
Milestone: None => kilo-rc3

** Changed in: neutron/kilo
   Status: New => In Progress

** Changed in: neutron/kilo
   Importance: Undecided => Critical

** Tags removed: kilo-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron kilo series:
  In Progress
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Restrict netmask of CIDR to avoid DHCP resync  is not enough.
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent following case:

  [Condition]
- Plugin: ML2
- subnet with "enable_dhcp" is True

  [Operations]
  A. Specify "[]" (an empty list) for "allocation_pools" when creating or updating a subnet
  ---
  $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr": "192.168.200.0/24", "ip_version": 4, "network_id": "649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H "x-auth-token:$TOKEN" -H "content-type:application/json" http://127.0.0.1:9696/v2.0/subnets

  Then the dhcp-agent creates its own DHCP port, which reproduces the resync
  bug.

  B. Create ports and exhaust the allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
     gateway_ip: 192.168.1.1
     DHCP-port: 192.168.1.2
     allocation_pools: {"start": 192.168.1.2, "end": 192.168.1.254}
     The number of available ip_addresses is 252.

  2. Create non-DHCP ports and exhaust the ip_addresses in the allocation_pools.
     In this case, the user creates a port 252 times.
     The number of available ip_addresses is 0.

  3. The user deletes the DHCP port (192.168.1.2).
     The number of available ip_addresses is 1.

  4. The user creates a non-DHCP port.
     The number of available ip_addresses is 0 again.
     Then the dhcp-agent tries to create a DHCP port, which reproduces the resync bug.
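
  A minimal sketch of the extra guard case A argues for (the helper name is
  hypothetical): when enable_dhcp is True, reject allocation pools that
  leave no room for the DHCP port. Case B would additionally need the agent
  to tolerate pool exhaustion instead of resyncing forever.

```python
import netaddr

def validate_dhcp_capacity(allocation_pools, enable_dhcp):
    """Refuse pools that leave no address for the DHCP port itself."""
    if not enable_dhcp:
        return
    usable = sum(netaddr.IPRange(pool['start'], pool['end']).size
                 for pool in allocation_pools)
    if usable < 1:
        raise ValueError(
            "allocation_pools leave no address for the DHCP port")
```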

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447883] Re: Restrict netmask of CIDR to avoid DHCP resync is not enough

2015-04-27 Thread Jeremy Stanley
Based on precedent set in bug 1362651, it looks like the impact of this
isn't a sufficient denial of service to warrant a security advisory for
existing stable releases.

** Changed in: ossa
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447883

Title:
  Restrict netmask of CIDR to avoid DHCP resync is not enough

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Restrict netmask of CIDR to avoid DHCP resync  is not enough.
  https://bugs.launchpad.net/neutron/+bug/1443798

  I'd like to prevent following case:

  [Condition]
- Plugin: ML2
- subnet with "enable_dhcp" is True

  [Operations]
  A. Specify "[]" (an empty list) for "allocation_pools" when creating or updating a subnet
  ---
  $ curl -X POST -d '{"subnet": {"name": "test_subnet", "cidr": "192.168.200.0/24", "ip_version": 4, "network_id": "649c5531-338e-42b5-a2d1-4d49140deb02", "allocation_pools": []}}' -H "x-auth-token:$TOKEN" -H "content-type:application/json" http://127.0.0.1:9696/v2.0/subnets

  Then the dhcp-agent creates its own DHCP port, which reproduces the resync
  bug.

  B. Create ports and exhaust the allocation_pools
  ---
  1. Create a subnet with 192.168.1.0/24. The DHCP port has already been created.
     gateway_ip: 192.168.1.1
     DHCP-port: 192.168.1.2
     allocation_pools: {"start": 192.168.1.2, "end": 192.168.1.254}
     The number of available ip_addresses is 252.

  2. Create non-DHCP ports and exhaust the ip_addresses in the allocation_pools.
     In this case, the user creates a port 252 times.
     The number of available ip_addresses is 0.

  3. The user deletes the DHCP port (192.168.1.2).
     The number of available ip_addresses is 1.

  4. The user creates a non-DHCP port.
     The number of available ip_addresses is 0 again.
     Then the dhcp-agent tries to create a DHCP port, which reproduces the resync bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449049] [NEW] Floatingip status is empty for an HA router.

2015-04-27 Thread Sridhar Gaddam
Public bug reported:

When we associate a floatingip in an HA router setup, it is properly 
associated. 
However, when we check the status of the Floating ip, it is shown as empty.

$ neutron floatingip-show 2675730e-6e9f-438d-be8a-0d45c641cf7a
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.0.0.3                             |
| floating_ip_address | 172.24.4.3                           |
| floating_network_id | 074aa016-0da2-4000-8493-7511bc3e1789 |
| id                  | 2675730e-6e9f-438d-be8a-0d45c641cf7a |
| port_id             | 700c1977-98a8-4009-bdcc-bd4e8493b4bb |
| router_id           | c82c091e-d4e8-4e8c-9d29-8eb80ed16224 |
| status              |                                      |
| tenant_id           | 5bb32fcba8d141b7b5db9de07ef31d88     |
+---------------------+--------------------------------------+

** Affects: neutron
 Importance: Undecided
 Assignee: Sridhar Gaddam (sridhargaddam)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Sridhar Gaddam (sridhargaddam)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449049

Title:
  Floatingip status is empty for an HA router.

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When we associate a floatingip in an HA router setup, it is properly 
associated. 
  However, when we check the status of the Floating ip, it is shown as empty.

  $ neutron floatingip-show 2675730e-6e9f-438d-be8a-0d45c641cf7a
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | fixed_ip_address    | 10.0.0.3                             |
  | floating_ip_address | 172.24.4.3                           |
  | floating_network_id | 074aa016-0da2-4000-8493-7511bc3e1789 |
  | id                  | 2675730e-6e9f-438d-be8a-0d45c641cf7a |
  | port_id             | 700c1977-98a8-4009-bdcc-bd4e8493b4bb |
  | router_id           | c82c091e-d4e8-4e8c-9d29-8eb80ed16224 |
  | status              |                                      |
  | tenant_id           | 5bb32fcba8d141b7b5db9de07ef31d88     |
  +---------------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449048] [NEW] Cannot find device "vxlan-nnnn" when deleting VxLAN based networks

2015-04-27 Thread Richard Winters
Public bug reported:

Running a baseline multi-node test shows tracebacks in q-agt.log when
deleting the test networks.

Test Bed:
   - Kilo based
   - LinuxBridge with VxLAN
   - Multi-node - node1: controller/compute, node2: compute

Test details:
 - Create tenant/router - attach to public network
 - Create two tenant networks.
 - Create VM on each network on each compute node - i.e. 2 nets with 2 VMs each
 - Verify traffic to all VMs, VM to VM
 - Delete all VMs, networks, routers  
 - Check logs for Tracebacks.

Even though the traceback is raised, the vxlan-1006 interface still gets
deleted. Another traceback, shown further below, shows a similar problem
when trying to delete a bridge.
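
A minimal sketch of a guard that would avoid the failure in the log below
(ip_lib.device_exists is a real in-tree helper; the wrapper itself is
hypothetical):

```python
from neutron.agent.linux import ip_lib

def delete_vxlan_if_present(delete_vxlan, interface):
    """Run the agent's existing delete routine only if the device still exists.

    A concurrent network_delete may already have removed vxlan-NNNN, in which
    case 'ip link set ... down' fails with 'Cannot find device'.
    """
    if not ip_lib.device_exists(interface):
        return
    delete_vxlan(interface)
```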


2015-04-27 06:30:21.012 DEBUG 
neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent 
[req-0cf41562-c9f9-4153-a5e8-2800838dc2f6 None None] Deleting vxlan interface 
vxlan-1006 for vlan from (pid=16503) delete_vxlan 
/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_a
gent.py:500
2015-04-27 06:30:21.012 DEBUG neutron.agent.linux.utils 
[req-0cf41562-c9f9-4153-a5e8-2800838dc2f6 None None] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'vxlan-1006', 'down'] from (pid=16503) 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-04-27 06:30:21.021 DEBUG neutron.agent.linux.utils 
[req-0cf41562-c9f9-4153-a5e8-2800838dc2f6 None None] 
Command: ['ip', 'link', 'set', u'vxlan-1006', 'down']
Exit code: 0
Stdin: 
Stdout: 
Stderr:  from (pid=16503) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:134
2015-04-27 06:30:21.023 DEBUG neutron.agent.linux.utils 
[req-0cf41562-c9f9-4153-a5e8-2800838dc2f6 None None] Running command (rootwrap 
daemon): ['ip', 'link', 'delete', 'vxlan-1006'] from (pid=16503) 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-04-27 06:30:21.025 DEBUG neutron.agent.linux.utils 
[req-cf0ecf5c-b75b-4891-8962-303e9f8c3c96 admin 
c28923a14e0d4ee9badd971171e3567c] 
Command: ['ip', '-o', 'link', 'show', 'vxlan-1006']
Exit code: 0
Stdin: 
Stdout: 71: vxlan-1006:  mtu 1450 qdisc 
noqueue state UNKNOWN mode DEFAULT group default \link/ether 
26:6e:5b:e3:6d:81 brd ff:ff:ff:ff:ff:ff

Stderr:  from (pid=16503) execute 
/opt/stack/neutron/neutron/agent/linux/utils.py:134
2015-04-27 06:30:21.026 DEBUG 
neutron.plugins.linuxbridge.agent.linuxbridge_neutron_agent 
[req-cf0ecf5c-b75b-4891-8962-303e9f8c3c96 admin 
c28923a14e0d4ee9badd971171e3567c] Deleting vxlan interface vxlan-1006 for vlan 
from (pid=16503) delete_vxlan /opt/stack/neutron/neutron/plugins/linuxbridg
e/agent/linuxbridge_neutron_agent.py:500
2015-04-27 06:30:21.026 DEBUG neutron.agent.linux.utils 
[req-cf0ecf5c-b75b-4891-8962-303e9f8c3c96 admin 
c28923a14e0d4ee9badd971171e3567c] Running command (rootwrap daemon): ['ip', 
'link', 'set', 'vxlan-1006', 'down'] from (pid=16503) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-04-27 06:30:21.044 ERROR neutron.agent.linux.utils 
[req-cf0ecf5c-b75b-4891-8962-303e9f8c3c96 admin 
c28923a14e0d4ee9badd971171e3567c] 
Command: ['ip', 'link', 'set', u'vxlan-1006', 'down']
Exit code: 1
Stdin: 
Stdout: 
Stderr: Cannot find device "vxlan-1006"

2015-04-27 06:30:21.044 ERROR oslo_messaging.rpc.dispatcher 
[req-cf0ecf5c-b75b-4891-8962-303e9f8c3c96 admin 
c28923a14e0d4ee9badd971171e3567c] Exception during message handling: 
Command: ['ip', 'link', 'set', u'vxlan-1006', 'down']
Exit code: 1
Stdin: 
Stdout: 
Stderr: Cannot find device "vxlan-1006"

2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 653, in network_delete
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher 
self.agent.br_mgr.delete_vlan_bridge(bridge_name)
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 427, in delete_vlan_bridge
2015-04-27 06:30:21.044 TRACE oslo_messaging.rpc.dispatcher 
self.delete_vxlan(interface)
2015-04-27 06:30:21.044 TRACE os

[Yahoo-eng-team] [Bug 1448813] Re: radvd running as neutron user in Kilo, attached network dead

2015-04-27 Thread Kyle Mestery
** Tags added: kilo-rc-potential

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
Milestone: None => liberty-1

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448813

Title:
  radvd running as neutron user in Kilo, attached network dead

Status in App Catalog:
  New
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Kilo RC1 release, mirantis Debian Jessie build

  Linux Kernel 3.19.3, ML2 vlan networking

  radvd version 1:1.9.1-1.3

  Network with IPv6 ULA SLAAC, IPv6 GUA SLAAC, Ipv4 RFC1918 configured.

  Radvd does not start, neutron-l3-agent does not set up OVS vlan
  forwarding between network and compute node, IPv4 completely
  disconnected as well. Looks like complete L2 breakage.

  Need to get this one fixed before release of Kilo.

  Work around:

  chown root:neutron /usr/sbin/radvd
  chmod 2750 /usr/sbin/radvd

  radvd gives message about not being able to create an IPv6 ICMP port
  in neutron-l3-agent log, just like when run as an non-root user.

  Notice radvd is not being executed via rootwrap/sudo anymore; it is run
  like all the other ip route/ip address/ip netns information-gathering
  commands. Was executing it in a privileged fashion missed in the Neutron
  code refactor?

To manage notifications about this bug go to:
https://bugs.launchpad.net/app-catalog/+bug/1448813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449028] [NEW] NUMA tuning broken in select libvirt versions

2015-04-27 Thread Stephen Finucane
Public bug reported:

#1438226 reported that CPU pinning was broken in select versions of
libvirt. Further investigation has highlighted issues with NUMA tuning
in general on these versions. The same error messages seen when
configuring CPU pinning are seen when configuring NUMA tuning. The
results from testing, mostly duplicated from the aforementioned bug
report, are given below. Note that v1.2.10 is still being tested at this
time.

This is somewhat related to #1422775 ("nova libvirt driver assumes qemu
support for NUMA pinning").

---

# Testing Configuration

Testing was conducted in a container which provided a single-node,
Fedora 21-based (3.17.8-300.fc21.x86_64) OpenStack instance (built with
devstack). The yum-provided libvirt and its dependencies were removed
and libvirt and libvirt-python were built and installed from source.

# Results

The results are as follows (currently incomplete):

versions  status
--------  ------
1.2.9     ok
1.2.9.1   ok
1.2.9.2   fail
1.2.10    ???
1.2.11    ok
1.2.12    ok

v1.2.9.2 is broken by this (backported) patch:

https://www.redhat.com/archives/libvir-
list/2014-November/msg00275.html

This can be seen as commit

e226772 (qemu: fix domain startup failing with 'strict' mode in
numatune)

# Error logs

v1.2.9.2 produces the following exception:

Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2301, in 
_build_resources
yield resources
  File "/opt/stack/nova/nova/compute/manager.py", line 2171, in 
_build_and_run_instance
flavor=flavor)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2357, in spawn
block_device_info=block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4376, in 
_create_domain_and_network
power_on=power_on)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4307, in 
_create_domain
LOG.error(err)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 82, 
in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4297, in 
_create_domain
domain.createWithFlags(launch_flags)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in 
doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in 
proxy_call
rv = execute(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in 
execute
six.reraise(c, e, tb)
  File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in 
tworker
rv = meth(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1029, in 
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', 
dom=self)
libvirtError: Failed to create controller cpu for group: No such file or 
directory

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449028

Title:
  NUMA tuning broken in select libvirt versions

Status in OpenStack Compute (Nova):
  New

Bug description:
  #1438226 reported that CPU pinning was broken in select versions of
  libvirt. Further investigation has highlighted issues with NUMA tuning
  in general on these versions. The same error messages seen when
  configuring CPU pinning are seen when configuring NUMA tuning. The
  results from testing, mostly duplicated from the aforementioned bug
  report, are given below. Note that v1.2.10 is still being tested at
  this time.

  This is somewhat related to #1422775 ("nova libvirt driver assumes
  qemu support for NUMA pinning").

  ---

  # Testing Configuration

  Testing was conducted in a container which provided a single-node,
  Fedora 21-based (3.17.8-300.fc21.x86_64) OpenStack instance (built
  with devstack). The yum-provided libvirt and its dependencies were
  removed and libvirt and libvirt-python were built and installed from
  source.

  # Results

  The results are as follows (currently incomplete):

  versions  status
  --------  ------
  1.2.9     ok
  1.2.9.1   ok
  1.2.9.2   fail
  1.2.10    ???
  1.2.11    ok
  1.2.12    ok

  v1.2.9.2 is broken by this (backported) patch:

  https://www.redhat.com/archives/libvir-
  list/2014-November/msg00275.html

  This can be seen as commit

  e226772 (qemu: fix domain startup failing with 'strict' mode in
  numatune)

  # Error logs

  v1.2.9.2 produces the following exception:

  Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 2301, in 
_build_resources
  yield resources
    Fi

[Yahoo-eng-team] [Bug 1448991] [NEW] Introduction to "target.token.*" is needed in the RBAC part of configuration.rst

2015-04-27 Thread DWang
Public bug reported:

When limiting access to token-related operations, we may need references
like %(target.token.user.domain.id)s.

However, the possible values of "target.token.*" are missing, while
"target.user.*", "target.role.*" and so on are listed in
configuration.rst:

http://docs.openstack.org/developer/keystone/configuration.html
#keystone-api-protection-with-role-based-access-control-rbac

I think this work will supplement keystone's current documentation.
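
For context, an illustrative policy.json fragment using such a reference (the
rule names follow keystone's v3 sample policy, but treat the exact attribute
paths as assumptions until they are documented):

```
{
    "token_subject": "user_id:%(target.token.user_id)s",
    "identity:revoke_token": "rule:admin_required or rule:token_subject"
}
```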

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448991

Title:
  Introduction to "target.token.*" is needed in the RBAC part of
  configuration.rst

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When limiting access to token-related operations, we may need
  references like %(target.token.user.domain.id)s.

  However, the possible values of "target.token.*" are missing, while
  "target.user.*", "target.role.*" and so on are listed in
  configuration.rst:

  http://docs.openstack.org/developer/keystone/configuration.html
  #keystone-api-protection-with-role-based-access-control-rbac

  I think this work will supplement keystone's current documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432873] Re: Add FDB bridge entry fails if old entry not removed

2015-04-27 Thread Darren Birkett
For openstack-ansible, the bump to the latest juno release (which
includes the fix referenced in this bug) is here:

https://review.openstack.org/#/c/177388/

Once that merges, the work for openstack-ansible in this bug is
complete.

** Also affects: openstack-ansible/juno
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible/juno
Milestone: None => 10.1.5

** Changed in: openstack-ansible
Milestone: 10.1.5 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1432873

Title:
  Add FDB bridge entry fails if old entry not removed

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in Ansible playbooks for deploying OpenStack:
  Triaged
Status in openstack-ansible juno series:
  New

Bug description:
  Running on Ubuntu 14.04 with Linuxbridge agent and L2pop with vxlan
  networks.

  In situations where "remove_fdb_entries" messages are lost/never consumed, 
future "add_fdb_bridge_entry" attempts will fail with the following example 
error message:
  2015-03-16 21:10:08.520 30207 ERROR neutron.agent.linux.utils 
[req-390ab63a-9d3c-4d0e-b75b-200e9f5b97c6 None]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'bridge', 'fdb', 'add', 'fa:16:3e:a5:15:35', 
'dev', 'vxlan-15', 'dst', '172.30.100.60']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: File exists\n'

  In our case, instances were unable to communicate with their Neutron
  router because vxlan traffic was being forwarded to the wrong vxlan
  endpoint. This was corrected by either migrating the router to a new
  agent or by executing a "bridge fdb del" for the fdb entry
  corresponding with the Neutron router mac address. Once deleted, the
  LB agent added the appropriate fdb entry at the next polling event.

  If anything is unclear, please let me know.
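
  One way to make the add idempotent, sketched below; this is illustrative
  and not necessarily the change that was merged. "bridge fdb replace"
  succeeds whether or not a stale entry is still present:

```python
import subprocess

def add_fdb_bridge_entry(mac, dev, dst_ip):
    """Install (or overwrite) the FDB entry instead of failing on 'File exists'."""
    subprocess.check_call(
        ['bridge', 'fdb', 'replace', mac, 'dev', dev, 'dst', dst_ip])
```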

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1432873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp