[Yahoo-eng-team] [Bug 1541760] [NEW] nova-network create return wrong error number

2016-02-04 Thread jichenjc
Public bug reported:

jichen@devstack1:~$ nova network-list
+--------------------------------------+-------+-----------------+
| ID                                   | Label | Cidr            |
+--------------------------------------+-------+-----------------+
| e0b9ae95-5bd9-4eda-a41d-e8b3135b5e5b | ji4   | 192.168.59.0/24 |
+--------------------------------------+-------+-----------------+


jichen@devstack1:~$ nova tenant-network-create ji4 192.168.59.1/24
ERROR (ClientException): Create networks failed (HTTP 503) (Request-ID: 
req-ea32935d-4f28-4fdf-9cd2-3a191f9b4882)

we should return conflict (409) instead of 503
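Booting over an existing CIDR fails inside nova's network manager, and the fix the reporter is asking for is in the API layer: translate that specific failure into a 409 rather than letting it surface as a generic 503. A minimal sketch of the shape of such a fix, with illustrative names (not nova's actual exception or handler):

```python
import ipaddress

class CidrConflict(Exception):
    """Illustrative stand-in for the overlap error nova hits internally."""

def create_network(existing_cidrs, cidr):
    # The requested CIDR (192.168.59.1/24 normalizes to 192.168.59.0/24)
    # overlaps the existing ji4 network, so creation must fail.
    new = ipaddress.ip_network(cidr, strict=False)
    for existing in existing_cidrs:
        if new.overlaps(ipaddress.ip_network(existing)):
            raise CidrConflict("requested cidr %s overlaps %s" % (cidr, existing))
    return cidr  # actual creation elided

def api_status_for(existing_cidrs, cidr):
    # The API layer should map the conflict to 409, not a generic 503.
    try:
        create_network(existing_cidrs, cidr)
        return 200
    except CidrConflict:
        return 409

print(api_status_for(["192.168.59.0/24"], "192.168.59.1/24"))  # 409
```

With this mapping in place, the client would see HTTP 409 Conflict for the reproduce steps above instead of the 503.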

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1541760

Title:
  nova-network create return wrong error number

Status in OpenStack Compute (nova):
  New

Bug description:
  jichen@devstack1:~$ nova network-list
  +--------------------------------------+-------+-----------------+
  | ID                                   | Label | Cidr            |
  +--------------------------------------+-------+-----------------+
  | e0b9ae95-5bd9-4eda-a41d-e8b3135b5e5b | ji4   | 192.168.59.0/24 |
  +--------------------------------------+-------+-----------------+

  
  jichen@devstack1:~$ nova tenant-network-create ji4 192.168.59.1/24
  ERROR (ClientException): Create networks failed (HTTP 503) (Request-ID: 
req-ea32935d-4f28-4fdf-9cd2-3a191f9b4882)

  we should return conflict (409) instead of 503

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1541760/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541805] [NEW] nova boot gives an API error

2016-02-04 Thread Natanam Sethu
Public bug reported:

Issue description: I am installing Liberty on Ubuntu 14.04 LTS. When I
tried to launch an instance, I got this error.

snat@controller:~$ nova boot --flavor m1.small --image cirros --nic 
net-id=b9a485f1-3e77-4422-8ce8-26413a311450 --security-group default --key-name 
mykey public-instance
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-4d22b711-1423-45f4-b11e-d6f596ee2703)

This is definitely a bug.

uname -a

Linux controller 3.13.0-76-generic #120-Ubuntu SMP Mon Jan 18 15:59:10
UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Nova.conf

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
enabled_apis=ec2,osapi_compute,metadata
rpc_backend = rabbit
uth_strategy = keystone
my_ip = 10.0.0.11
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = 
nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
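Note that the [DEFAULT] section above contains "uth_strategy = keystone", almost certainly a typo for auth_strategy, which may well be related to the API error. A quick sanity-check sketch for spotting such near-miss option names in a nova.conf-style file (the EXPECTED set here is illustrative, not nova's full option list; strict=False tolerates the duplicate keys present in this file):

```python
import configparser
import difflib

# Illustrative subset of expected [DEFAULT] option names.
EXPECTED = {"auth_strategy", "enabled_apis", "rpc_backend", "my_ip", "verbose"}

def find_typos(conf_text):
    cfg = configparser.ConfigParser(strict=False)  # tolerate duplicate keys
    cfg.read_string(conf_text)
    suspects = {}
    for key in cfg.defaults():  # options in the [DEFAULT] section
        if key not in EXPECTED:
            close = difflib.get_close_matches(key, EXPECTED, n=1, cutoff=0.8)
            if close:
                suspects[key] = close[0]
    return suspects

sample = """[DEFAULT]
uth_strategy = keystone
my_ip = 10.0.0.11
"""
print(find_typos(sample))  # {'uth_strategy': 'auth_strategy'}
```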

snat@controller:~$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

So nova flavor-list works just fine, but when I try to boot it says the
flavors m1.small & m1.tiny do not exist.

I have run nova boot with --debug:

snat@controller:~$ nova --debug boot --flavor m1.small --image cirros --nic 
net-id=b9a485f1-3e77-4422-8ce8-26413a311450 --security-group default --key-name 
mykey public-instance
DEBUG (session:198) REQ: curl -g -i -X GET http://controller:5000/v3 -H 
"Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO (connectionpool:205) Starting new HTTP connection (1): controller
DEBUG (connectionpool:385) "GET /v3 HTTP/1.1" 200 249
DEBUG (session:215) RESP: [200] Content-Length: 249 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: 
Keep-Alive Date: Thu, 04 Feb 2016 09:22:59 GMT x-openstack-request-id: 
req-2f991c43-7ffa-4abe-9a1c-f1f28e614cb1 Content-Type: application/json 
X-Distribution: Ubuntu
RESP BODY: {"version": {"status": "stable", "updated": "2015-03-30T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": 
[{"href": "http://controller:5000/v3/", "rel": "self"}]}}

DEBUG (base:188) Making authentication request to 
http://controller:5000/v3/auth/tokens
DEBUG (connectionpool:385) "POST /v3/auth/tokens HTTP/1.1" 201 2799
DEBUG (session:198) REQ: curl -g -i -X GET http://controller:8774/v2/ -H 
"User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 
{SHA1}d1437835279b61c370c2ac34e6af5e3a86583436"
INFO (connectionpool:205) Starting new HTTP connection (1): controller
DEBUG (connectionpool:385) "GET /v2/ HTTP/1.1" 200 375
DEBUG (session:215) RESP: [200] Date: Thu, 04 Feb 2016 09:23:00 GMT Connection: 
keep-alive Content-Type: application/json Content-Length: 375 
X-Compute-Request-Id: req-708a6ab4-cb2a-4f3d-9d61-28468f6424ea
RESP BODY: {"version": {"status": "SUPPORTED", "updated": 
"2011-01-21T11:33:21Z", "links": [{"href": "http://controller:8774/v2/", "rel":
"self"}, {"href": "http://docs.openstack.org/", "type": "text/html", "rel":
"describedby"}], "min_version": "", "version": "", "media-types": [{"base": 
"application/json", "type": 

[Yahoo-eng-team] [Bug 1541788] [NEW] Force line break for detail table cells

2016-02-04 Thread Tatiana Ovchinnikova
Public bug reported:

Some IDs in the details view don't fit properly into the detail table
cells and are actually treated as one long unbreakable word (see pic).

** Affects: horizon
 Importance: Undecided
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541788

Title:
  Force line break for detail table cells

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Some IDs in the details view don't fit properly into the detail table
  cells and are actually treated as one long unbreakable word (see pic).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541788/+subscriptions



[Yahoo-eng-team] [Bug 1541802] [NEW] lbaasv2 namespace missing host routes from VIP subnet

2016-02-04 Thread Damien Churchill
Public bug reported:

When a lbaasv2 namespace is created, it only receives the default gateway
for that subnet; the additional host routes defined on the subnet are
ignored, which results in certain areas of a network being inaccessible.

# ip netns exec qlbaas-ae4b71ef-e874-46a1-a489-c2a6e186ffe3 ip r s
default via 192.168.31.254 dev tap9e9051cd-ff 
192.168.31.0/24 dev tap9e9051cd-ff  proto kernel  scope link  src 192.168.31.48

Version Info:

OpenStack: Liberty
Distro: Ubuntu 14.04.3

Not sure if any more information is required.
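The missing piece is that the namespace's routing table should carry the subnet's host_routes alongside the default route. A small sketch of what the expected table would contain, using the subnet dict shape from neutron's API (gateway_ip, host_routes); the field names are the real API ones but the function itself is illustrative:

```python
def namespace_routes(subnet):
    """Routes the lbaas namespace should end up with: the default route
    via the subnet gateway *plus* the subnet's host_routes, which this
    report observes are currently dropped."""
    routes = [("default", subnet["gateway_ip"])]
    for r in subnet.get("host_routes", []):
        routes.append((r["destination"], r["nexthop"]))
    return routes

subnet = {
    "gateway_ip": "192.168.31.254",
    "host_routes": [{"destination": "10.0.0.0/8", "nexthop": "192.168.31.1"}],
}
print(namespace_routes(subnet))
# [('default', '192.168.31.254'), ('10.0.0.0/8', '192.168.31.1')]
```

In the reproduce output above, only the first of these routes (plus the connected subnet) is present in the namespace.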

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541802

Title:
  lbaasv2 namespace missing host routes from VIP subnet

Status in neutron:
  New

Bug description:
  When a lbaasv2 namespace is created, it only receives the default
  gateway for that subnet; the additional host routes defined on the
  subnet are ignored, which results in certain areas of a network being
  inaccessible.

  # ip netns exec qlbaas-ae4b71ef-e874-46a1-a489-c2a6e186ffe3 ip r s
  default via 192.168.31.254 dev tap9e9051cd-ff 
  192.168.31.0/24 dev tap9e9051cd-ff  proto kernel  scope link  src 
192.168.31.48

  Version Info:

  OpenStack: Liberty
  Distro: Ubuntu 14.04.3

  Not sure if any more information is required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541802/+subscriptions



[Yahoo-eng-team] [Bug 1541774] [NEW] create response for networks and ports is missing extensions

2016-02-04 Thread Kevin Benton
Public bug reported:

When issuing a create port or a create network request, any attributes
loaded via the dictionary extension mechanisms are not included in the
response.
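The fix would be to run the same dict-extension hooks on a freshly created resource that list/show responses already get. A toy sketch of that shape; the hook signature (a function mutating the resource dict in place) and the example hook name are assumptions for illustration, not neutron's actual plumbing:

```python
def extend_create_response(resource, extend_funcs):
    """Apply the dict-extension hooks to a just-created resource so the
    create response carries extension attributes too (the gap reported
    here)."""
    for func in extend_funcs:
        func(resource)
    return resource

def add_port_security(port):
    # Hypothetical extension hook: adds its attribute to the port dict.
    port.setdefault("port_security_enabled", True)

port = {"id": "p1", "network_id": "n1"}
extend_create_response(port, [add_port_security])
print(port["port_security_enabled"])  # True
```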

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541774

Title:
  create response for networks and ports is missing extensions

Status in neutron:
  New

Bug description:
  When issuing a create port or a create network request, any attributes
  loaded via the dictionary extension mechanisms are not included in the
  response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541774/+subscriptions



[Yahoo-eng-team] [Bug 1542039] [NEW] nova should not reschedule an instance that has already been deleted

2016-02-04 Thread Chris Friesen
Public bug reported:

I'm investigating an issue where an instance with a large disk and an
attached cinder volume was booted in a stable/kilo OpenStack setup with
the diskFilter disabled.

The timeline looks like this:
(1) scheduler picks the initial compute node
(2) nova attempts to boot the instance on the first compute node; it runs
out of disk space and gets rescheduled
(3) scheduler picks another compute node
(4) user requests instance deletion
(5) user requests cinder volume deletion
(6) nova attempts to boot the instance on the second compute node; it runs
out of disk space and gets rescheduled
(7) scheduler picks a third compute node
(8) nova attempts to boot the instance on the third compute node; it runs
into problems due to the missing cinder volume


The issue I want to address in this bug is whether it makes sense to reschedule 
the instance when the instance has already been deleted.

Also, instance deletion sets the task_state to 'deleting' early on.  In
compute.manager.ComputeManager._do_build_and_run_instance(), if we
decide to reschedule then nova-compute will set the task_state to
'scheduling' and then save the instance, which I think could overwrite
the 'deleting' state in the DB.

So...would it make sense to have nova-compute put an
"expected_task_state" on the instance.save() call that sets the
'scheduling' task_state?
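The expected_task_state guard the reporter proposes can be modeled in miniature: a save() that refuses the update when the DB-side task_state has moved on (e.g. the user deleted the instance mid-reschedule). This is a toy sketch of the idea, not nova's actual Instance object:

```python
class UnexpectedTaskStateError(Exception):
    pass

class FakeInstance:
    """Toy model: save(expected_task_state=...) aborts when the stored
    task_state no longer matches what the caller assumed."""
    def __init__(self, db_task_state):
        self._db_task_state = db_task_state  # stands in for the DB row

    def save(self, task_state, expected_task_state=None):
        if (expected_task_state is not None
                and self._db_task_state not in expected_task_state):
            raise UnexpectedTaskStateError(
                "expected %s, found %s" % (expected_task_state,
                                           self._db_task_state))
        self._db_task_state = task_state

inst = FakeInstance(db_task_state="deleting")
try:
    # The reschedule path would do roughly this:
    inst.save("scheduling", expected_task_state=(None, "spawning"))
except UnexpectedTaskStateError:
    print("reschedule aborted")  # deletion wins, as desired
```

With such a guard, the 'deleting' state set by the delete request would not be silently overwritten with 'scheduling'.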

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: compute

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542039

Title:
  nova should not reschedule an instance that has already been deleted

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm investigating an issue where an instance with a large disk and an
  attached cinder volume was booted in a stable/kilo OpenStack setup
  with the diskFilter disabled.

  The timeline looks like this:
  (1) scheduler picks the initial compute node
  (2) nova attempts to boot the instance on the first compute node; it runs
  out of disk space and gets rescheduled
  (3) scheduler picks another compute node
  (4) user requests instance deletion
  (5) user requests cinder volume deletion
  (6) nova attempts to boot the instance on the second compute node; it
  runs out of disk space and gets rescheduled
  (7) scheduler picks a third compute node
  (8) nova attempts to boot the instance on the third compute node; it runs
  into problems due to the missing cinder volume

  
  The issue I want to address in this bug is whether it makes sense to 
reschedule the instance when the instance has already been deleted.

  Also, instance deletion sets the task_state to 'deleting' early on.
  In compute.manager.ComputeManager._do_build_and_run_instance(), if we
  decide to reschedule then nova-compute will set the task_state to
  'scheduling' and then save the instance, which I think could overwrite
  the 'deleting' state in the DB.

  So...would it make sense to have nova-compute put an
  "expected_task_state" on the instance.save() call that sets the
  'scheduling' task_state?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542039/+subscriptions



[Yahoo-eng-team] [Bug 1521581] Re: v2 - "readOnly" key should be used in schemas

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267972
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=4fc9d7df5e1dc1f4d430871857317fa7cced7e68
Submitter: Jenkins
Branch:master

commit 4fc9d7df5e1dc1f4d430871857317fa7cced7e68
Author: zwei 
Date:   Fri Jan 15 14:52:14 2016 +0800

v2 - "readOnly" key should be used in schemas

If it has a value of boolean true,
this keyword indicates that the instance property SHOULD NOT be changed,
and attempts by a user agent to modify the value of this property are 
expected to be rejected by a server.
The value of this keyword MUST be a boolean.
The default value is false.

Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

Closes-Bug: #1521581
Change-Id: I279fba4099667d193609a31259057b897380d6f0


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521581

Title:
  v2 - "readOnly" key should be used in schemas

Status in Glance:
  Fix Released

Bug description:
  Currently, the way object properties are labelled read-only is through
  the description, like so:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "description": "Status of the image (READ-ONLY)"
  }

  
  This is not the recommended way to indicate read-only status. The "readOnly" 
property should be used instead:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "readOnly": true,
  "description": "Status of the image"
  }

  
  Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15
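The migration from the description marker to the keyword can be sketched mechanically; the "(READ-ONLY)" marker string matches the example above, but the helper itself is illustrative, not glance's actual code:

```python
def use_read_only_keyword(prop):
    """Replace the legacy '(READ-ONLY)' marker embedded in a property's
    description with the JSON-Schema 'readOnly' keyword from the
    referenced hyper-schema draft."""
    desc = prop.get("description", "")
    if "(READ-ONLY)" in desc:
        prop = dict(prop, readOnly=True)
        prop["description"] = desc.replace("(READ-ONLY)", "").strip()
    return prop

status = {"type": "string", "description": "Status of the image (READ-ONLY)"}
print(use_read_only_keyword(status))
# {'type': 'string', 'description': 'Status of the image', 'readOnly': True}
```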

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521581/+subscriptions



[Yahoo-eng-team] [Bug 1511574] Re: [RFE] Support cleanup of all resources associated with a given tenant

2016-02-04 Thread Armando Migliaccio
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
 Assignee: John Davidge (john-davidge) => (unassigned)

** Changed in: python-neutronclient
 Assignee: (unassigned) => John Davidge (john-davidge)

** Changed in: python-neutronclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511574

Title:
  [RFE] Support cleanup of all resources associated with a given tenant

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Confirmed

Bug description:
  In the ops painpoints session
  (https://etherpad.openstack.org/p/mitaka-neutron-next-ops-painpoints) a
  problem was identified where removing a tenant can leave behind stray
  routers, ports, etc.

  It was suggested that a simple 'neutron purge ' command or
  similar would simplify the process of cleaning up these stray
  resources by removing everything associated with the given tenant.

  The expectation is that this command would be admin-only, and neutron
  should not be responsible for deciding whether the action is 'safe'.
  It should work regardless of whether the given tenant is active or
  not.

  This suggestion was very popular with the operators in the room. The
  consensus was that this would save a lot of time and effort where
  currently these resources have to be discovered and then removed one
  by one.
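The proposed purge amounts to listing a tenant's resources and deleting them in dependency order. A sketch of that loop; the method names mirror python-neutronclient's Client API, but real code would cover more resource types (floating IPs, security groups, ...) and handle per-resource failures:

```python
def purge_tenant(client, tenant_id):
    """Admin-only purge sketch: remove a tenant's ports, routers, and
    networks. Error handling and the full resource list are omitted."""
    for port in client.list_ports(tenant_id=tenant_id)["ports"]:
        client.delete_port(port["id"])
    for router in client.list_routers(tenant_id=tenant_id)["routers"]:
        client.delete_router(router["id"])
    for net in client.list_networks(tenant_id=tenant_id)["networks"]:
        client.delete_network(net["id"])
```

In practice ports attached to routers need detaching first; the ordering above is only a first approximation of "everything associated with the given tenant".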

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511574/+subscriptions



[Yahoo-eng-team] [Bug 1542032] [NEW] IP reassembly issue on the Linux bridges in Openstack

2016-02-04 Thread Claude LeFrancois
Public bug reported:

Hi,

Sorry for text diagram. It does not look very well on this screen.
Please, copy paste in a decent fixed width text editor.

Thanks,

Claude.


Title: IP reassembly issue on the Linux bridges in Openstack


Summary: When the security groups and the Neutron firewall are active in
Openstack, each and every VM virtual network interface (VNIC) is
isolated in a Linux bridge and IP reassembly must be performed in order
to allow firewall inspection of the traffic. The reassembled traffic
sometimes exceeds the capacity of the physical interfaces and the traffic
is not forwarded properly.

Linux bridge diagram:
---------------------

   VM                                 OVS
 [ TAP ]--[ QBR bridge ]--[ QVB ]--[ QVO ] ... [ P ]--[ BR-FW-ADMIN bridge ]--[ PHY ]

Introduction:
-------------

In Openstack, the virtual machine (VM) uses the OpenvSwitch (OVS) for
networking purposes. This is not a mandatory setup but this is a common
setup in Openstack.

When the Neutron firewall and the security groups are active, each VM
VNIC, also called a tap interface, is connected to a Linux bridge. This
is the QBR bridge. The QVB interface enables the network communication
with OVS. The QVB interface interacts with the QVO interface in OVS.

Security analysis is performed on the Linux bridge. In order to perform
adequate traffic inspection, the fragmented traffic has to be re-
assembled. The traffic is then forwarded according to Maximum Transmit
Unit (MTU) of the interfaces in the bridge.

The MTU values on all the interfaces are set to 65000 bytes. This is
where a part of the problem experienced with NFV applications is
observed.

Analysis:
---------

As a real life example, the NFV application uses NFS between VMs. NFS is
a well known feature in Unix environments. This feature provides network
file systems. This is the equivalent of a network drive in the Windows
world.

NFS is known to produce large frames. In this example, VM1
(169.254.4.242) sends a large NFS write instruction to VM2. The
example below shows a 5 KB packet. The traffic is fragmented into several
packets as instructed by the VM1 VNIC. This is the desired behavior.

root@node-11:~# tcpdump -e -n -i tap3e79842d-eb host 169.254.1.13

23:46:48.938255 00:80:37:0e:0f:12 > 00:80:37:0e:0b:12, ethertype IPv4 (0x0800), 
length 1514: 169.254.4.242.3015988240 > 169.254.1.13.2049: 1472 write fh 
Unknown/01000601B1198A1CB3CC4E1EA3AB0B26017B0AD653620700D59B28C7 4863 
(4863) bytes @ 229376
23:46:48.938271 00:80:37:0e:0f:12 > 00:80:37:0e:0b:12, ethertype IPv4 (0x0800), 
length 1514: 169.254.4.242 > 169.254.1.13: ip-proto-17
23:46:48.938279 00:80:37:0e:0f:12 > 00:80:37:0e:0b:12, ethertype IPv4 (0x0800), 
length 1514: 169.254.4.242 > 169.254.1.13: ip-proto-17
23:46:48.938287 00:80:37:0e:0f:12 > 00:80:37:0e:0b:12, ethertype IPv4 (0x0800), 
length 590: 169.254.4.242 > 169.254.1.13: ip-proto-17

The same packet is found on the QVB interface in one large frame.

root@node-11:~# tcpdump -e -n -i qvb3e79842d-eb host 169.254.1.13

23:46:48.938322 00:80:37:0e:0f:12 > 00:80:37:0e:0b:12, ethertype IPv4
(0x0800), length 5030: 169.254.4.242.3015988240 > 169.254.1.13.2049:
4988 write fh
Unknown/01000601B1198A1CB3CC4E1EA3AB0B26017B0AD653620700D59B28C7
4863 (4863) bytes @ 229376

Such large packets cannot cross physical interfaces without being
fragmented again if jumbo frames support is not active in the network.
Even with jumbo frames, the NFS frame size can easily cross the 9K
barrier. NFS frame sizes up to 32 KB can be observed with NFS over UDP.

For some reasons, this traffic does not seem to be transmitted properly
between compute hosts in Openstack.

Further investigations have revealed the large frames are leaving the
OVS internal bridge (br-int) in direction of the private bridge (br-prv)
using a patch interface in OVS. Once the traffic has reached this point,
it uses the "P" interface (i.e.: p_51a2-0) to reach another Linux
bridge (br-fw-admin) where the physical interface is connected to. The
"P" interface has its MTU set to 65000 and the the physical interface as
long as the Linux bridge are set to 1500. A tcpdump analysis reveals the
large frames are reaching the "P" interface and the Linux bridge.
However, the traffic is not observed on the physical interface. The
traffic does not use the DF bit.
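The arithmetic behind the captures above is straightforward IPv4 fragmentation (DF not set), which can be sketched as:

```python
def ipv4_fragments(l4_bytes, mtu, ip_header=20):
    """Number of on-the-wire IPv4 fragments a datagram of l4_bytes
    (L4 header + payload) needs at a given MTU when DF is not set.
    Per-fragment data must be a multiple of 8 bytes except in the last
    fragment."""
    per_frag = (mtu - ip_header) // 8 * 8  # 1480 for MTU 1500
    return -(-l4_bytes // per_frag)        # ceiling division

# The reassembled NFS/UDP datagram seen on qvb (5030-byte frame =
# 14 ethernet + 20 IP + 4996 bytes of UDP header and data) back over a
# 1500-MTU link:
print(ipv4_fragments(4996, 1500))  # 4 -- matching the four frames on the tap
```

This is why the 5030-byte reassembled frame must be re-fragmented before it can leave a 1500-MTU physical interface, and why its silent disappearance there breaks the traffic.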

This is the reason why the VNF application works fine when all the VMs
are located on the same compute host while the NFS application does not
work properly 

[Yahoo-eng-team] [Bug 1542176] [NEW] 'dict' object has no attribute 'container_format'\n"]

2016-02-04 Thread Balaji
Public bug reported:

While creating a docker instance I get the error below, and the instance
tries to build on qemu instead.


2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager 
[req-0a99ac05-6fe8-4164-b875-48be3f24dd98 8a4e98a2addf47b6af8897af05518c19 
4bbddf0552544090a0d90962b850d952 - - -] [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Instance failed to spawn
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Traceback (most recent call last):
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2155, in 
_build_resources
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] yield resources
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] block_device_info=block_device_info)
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
508, in spawn
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] image_name = 
self._get_image_name(context, instance, image_meta)
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
371, in _get_image_name
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] fmt = image.container_format
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] AttributeError: 'dict' object has no 
attribute 'container_format'
2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]
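The traceback shows the novadocker driver doing attribute access (image.container_format) on an image_meta that is a plain dict in this deployment. A defensive accessor that tolerates both shapes would avoid the AttributeError; this is a sketch of the workaround, not the novadocker project's actual fix:

```python
def container_format_of(image_meta):
    """Handle both the dict-style image_meta seen in this traceback and
    the object-style (attribute access) form newer code paths pass."""
    if isinstance(image_meta, dict):
        return image_meta.get("container_format")
    return image_meta.container_format

print(container_format_of({"container_format": "docker"}))  # docker
```

When container_format is not "docker", nova falls back to another virt driver, which matches the "instance tries to build on qemu" symptom.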

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542176

Title:
  'dict' object has no attribute 'container_format'\n"]

Status in OpenStack Compute (nova):
  New

Bug description:
  While creating a docker instance I get the error below, and the
  instance tries to build on qemu instead.

  
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager 
[req-0a99ac05-6fe8-4164-b875-48be3f24dd98 8a4e98a2addf47b6af8897af05518c19 
4bbddf0552544090a0d90962b850d952 - - -] [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Instance failed to spawn
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Traceback (most recent call last):
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2155, in 
_build_resources
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] yield resources
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] block_device_info=block_device_info)
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
508, in spawn
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] image_name = 
self._get_image_name(context, instance, image_meta)
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
371, in _get_image_name
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] fmt = image.container_format
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] AttributeError: 'dict' object has no 
attribute 'container_format'
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542176/+subscriptions


[Yahoo-eng-team] [Bug 1537842] Re: Instance Metadata should only show metadata definitions with properties_target: metadata

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272305
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e55240d882baf0ec8c19deaef9b9d0ba60b59100
Submitter: Jenkins
Branch:master

commit e55240d882baf0ec8c19deaef9b9d0ba60b59100
Author: Justin Pomeroy 
Date:   Mon Jan 25 16:20:07 2016 -0600

Support properties_target when fetching namespaces

This allows specifying a properties target when fetching metadata
definitions namespaces from glance, and updates the instance
metadata widget to show only "metadata" properties (as opposed to
"scheduler_hints") when creating an instance or updating the
metadata.

Closes-Bug: #1537842
Change-Id: I64dd279139eca2cbd0c0a6e808ade4cbcba8df95


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1537842

Title:
  Instance Metadata should only show metadata definitions with
  properties_target: metadata

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The ability to add metadata to an instance leveraging these
  definitions at launch time was recently added to horizon. [0] It was
  also added as an action on the instances table [1].

  [0] https://review.openstack.org/209680
  [1] https://review.openstack.org/243624

   In a follow-up discussion, somebody asked about using the metadata
definitions to also choose nova scheduler hints at launch time and how to
make sure that the two weren't confused. This raised our awareness that we
don't have properties_target set to "metadata" (rather than
"scheduler_hints") for OS::Nova::Instance on the following software
namespace files:
  The metadata definitions in the Glance Metadata Definitions catalog allow
each namespace to be associated with a resource type in OpenStack. Some
types of resources have more than one type of properties, so the
namespaces allow this to be specified using a properties_target attribute.

  We have now updated the existing namespaces in Glance for
  OS::Nova::Instance to have a properties target  set to "metadata" [3].

  [3] https://review.openstack.org/#/c/271100

  
  So, the NG launch instance metadata step and the project update metadata
action should be updated to only show namespaces that have the
properties_target of "metadata" set.
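The filtering that the launch-instance step needs can be sketched over the namespace shape the glance metadefs API returns (resource_type_associations with a properties_target); the dict layout here is loosely modeled on that API and the sample namespaces are illustrative:

```python
def metadata_only(namespaces):
    """Keep only namespaces whose OS::Nova::Instance association targets
    'metadata' rather than 'scheduler_hints'."""
    keep = []
    for ns in namespaces:
        for assoc in ns.get("resource_type_associations", []):
            if (assoc.get("name") == "OS::Nova::Instance"
                    and assoc.get("properties_target") == "metadata"):
                keep.append(ns)
                break
    return keep

namespaces = [
    {"namespace": "CIM::ProcessorAllocationSettingData",
     "resource_type_associations": [
         {"name": "OS::Nova::Instance", "properties_target": "metadata"}]},
    {"namespace": "OS::Nova::Scheduler",
     "resource_type_associations": [
         {"name": "OS::Nova::Instance",
          "properties_target": "scheduler_hints"}]},
]
print([ns["namespace"] for ns in metadata_only(namespaces)])
# ['CIM::ProcessorAllocationSettingData']
```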

  Please note that there will be an additional bug opened on glance to
  change these namespaces from OS::Glance::Instance to OS::Nova::Server
  to align with the heat resource type.

  
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server

  ** It should be noted that updating namespaces is already possible
  with glance-manage. E.g.

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs 
-h
  usage: glance-manage db_load_metadefs [-h]
[path] [merge] [prefer_new] [overwrite]

  positional arguments:
path
merge
prefer_new
overwrite

  So, you just have to call:

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs
  etc/metadefs true true

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1537842/+subscriptions



[Yahoo-eng-team] [Bug 1507950] Re: The metadata_proxy for a network will never be deleted even if it is not needed.

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/237618
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dc0c7b5588409fe64d7680e94f50279ab9ec4043
Submitter: Jenkins
Branch:master

commit dc0c7b5588409fe64d7680e94f50279ab9ec4043
Author: Hong Hui Xiao 
Date:   Tue Oct 20 05:31:46 2015 -0400

Delete metadata_proxy for network if it is not needed

Currently, once the metadata_proxy process is created for a network,
it will never be eliminated unless the network is deleted. Even if
the user disables metadata for the network and restarts the dhcp agent,
the metadata proxy for the network will still be there. This wastes
resources on the neutron host. This patch lets the dhcp-agent
delete useless metadata_proxy processes at startup.

Additional functional tests are added for the related scenario.

Change-Id: Id867b211fe7c01a11ba73a5ebc275c595933becf
Closes-Bug: #1507950


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507950

Title:
  The metadata_proxy for a network will never be deleted even if it is
  not needed.

Status in neutron:
  Fix Released

Bug description:
  Found this issue in a large-scale test. Steps to reproduce:
  (1) I have about 1000 networks and initially set enable_isolated_metadata=True.
  (2) But then I find there are too many metadata_proxy processes, and I want to 
disable them. So I set enable_isolated_metadata=False.
  (3) restart dhcp-agent
  (4) The metadata_proxy processes are still there.

  To eliminate the metadata_proxy process for networks, it looks like
  that I can delete the networks or kill the metadata_proxy process
  manually(or just restart the host).  And, obviously, I want to keep
  the networks.

  neutron-dhcp-agent should try to kill the useless metadata_proxy to
  keep the neutron host clean and with less burden.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507950/+subscriptions



[Yahoo-eng-team] [Bug 1541218] Re: make tests use test copy of policy.json

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275541
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=52f507c319e05bbdfbd5e28d92fdced7900ebf7d
Submitter: Jenkins
Branch:master

commit 52f507c319e05bbdfbd5e28d92fdced7900ebf7d
Author: Dave Chen 
Date:   Wed Feb 3 15:02:40 2016 +0800

Reinitialize the policy engine where it is needed

Policy engine should be reinitialized in the testcases where
policy enforcement is needed so that the `policy.json` from
the code base is readable.

Previously, it's only reinitialized for V3 restful testcases,
but the V2 APIs such as create credential also need to read
policy file.

Bunches of testcases will fail if run testcases separately.
$ python -m unittest keystone.tests.unit.test_v2

...
Ran 122 tests in 18.954s
FAILED (errors=73, skipped=3)

V2 restful testcases could pass and escape detection just
because they are run together with the V3 restful testcases,
where the `policy.json` is already loaded from the code base and
won't be loaded again.

Change-Id: I0cbc13f0902db66de0d673c64ec81a56861a2bc3
Closes-Bug: #1541218


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541218

Title:
  make tests use test copy of policy.json

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  How to reproduce:

  1. Remove `/etc/keystone/policy.json` if it exists (this file is only
  created once the env is set up); this makes the test engine search for
  `policy.json` in the code base, for example loading the policy file
  from `/opt/stack/keystone/etc/policy.json`

  2. run the v2 testcases separately,
  $ python -m unittest keystone.tests.unit.test_v2

  or simply run only one testcase.
  $ python -m unittest 
keystone.tests.unit.test_credential.TestCredentialEc2.test_ec2_list_credentials

  3. You will hit the following exceptions.

  enforce identity:validate_token: {'is_delegated_auth': False, 
'access_token_id': None, 'user_id': u'180af26c59e9460f81652569d27fc439', 
'roles': ['Service'], 'user_domain_id': 'default', 'trustee_id': None, 
'trustor_id': None, 'consumer_id': None, 'token': , 'project_id': 'service', 'trust_id': None, 
'project_domain_id': 'default'}
  Failed to find some config files: policy.json
  Traceback (most recent call last):
    File "keystone/common/wsgi.py", line 247, in __call__
  result = method(context, **params)
    File "/usr/local/lib/python2.7/dist-packages/oslo_log/versionutils.py", 
line 165, in wrapped
  return func_or_cls(*args, **kwargs)
    File "keystone/common/controller.py", line 179, in inner
  utils.flatten_dict(policy_dict))
    File "keystone/policy/backends/rules.py", line 77, in enforce
  enforce(credentials, action, target)
    File "keystone/policy/backends/rules.py", line 69, in enforce
  return _ENFORCER.enforce(action, target, credentials, **extra)
    File "/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py", line 
540, in enforce
  self.load_rules()
    File "/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py", line 
443, in load_rules
  self.policy_path = self._get_policy_path(self.policy_file)
    File "/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py", line 
513, in _get_policy_path
  raise cfg.ConfigFilesNotFoundError((path,))
  ConfigFilesNotFoundError: Failed to find some config files: policy.json
  }}}

  Traceback (most recent call last):
    File "keystone/tests/unit/test_v2.py", line 186, in 
test_validate_token_service_role
  token=token)
    File "keystone/tests/unit/rest.py", line 208, in admin_request
  return self._request(app=self.admin_app, **kwargs)
    File "keystone/tests/unit/rest.py", line 197, in _request
  response = self.restful_request(**kwargs)
    File "keystone/tests/unit/rest.py", line 182, in restful_request
  **kwargs)
    File "keystone/tests/unit/rest.py", line 90, in request
  **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 567, in 
request
  expect_errors=expect_errors,
    File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 632, in 
do_request
  self._check_status(status, res)
    File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 664, in 
_check_status
  res)
  webtest.app.AppError: Bad response: 500 Internal Server Error (not 200 OK or 
3xx redirect for http://localhost/v2.0/tokens/3c69de14762f42ac89852eb1f3c7eab5)
  '{"error": {"message": "An unexpected error prevented the server from 
fulfilling your request.", "code": 500, "title": "Internal Server Error"}}'

  Ran 122 tests in 18.954s

  FAILED (errors=73, skipped=3)

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1541201] Re: deprecate pki related commands

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276052
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=0f306111fb65c69a947fbf48989ebbae681a0c10
Submitter: Jenkins
Branch:master

commit 0f306111fb65c69a947fbf48989ebbae681a0c10
Author: Steve Martinelli 
Date:   Thu Feb 4 01:09:16 2016 -0500

deprecate pki_setup from keystone-manage

with PKI deprecated, we should also deprecate this command

bp: deprecated-as-of-mitaka
Closes-Bug: 1541201
Change-Id: If0600fc52084d1bb2acaadb05d858e4b69ff48eb


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541201

Title:
  deprecate pki related commands

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  with PKI deprecated, we should deprecate pki_setup

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541201/+subscriptions



[Yahoo-eng-team] [Bug 1533859] Re: There should be a DB API test that ensures no new tables have soft-delete columns

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275912
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0d48617e5744acba94b90f8f6844c9964715691f
Submitter: Jenkins
Branch:master

commit 0d48617e5744acba94b90f8f6844c9964715691f
Author: Diana Clarke 
Date:   Wed Feb 3 15:18:45 2016 -0500

Test that new tables don't use soft deletes

Soft deletes were deprecated in Mitaka. Whitelist the existing
tables that use soft deletes, and add a test to make sure no new
ones are added.

Change-Id: Ibdf0f0e9944a8d3e71ef7411d14f0054ed17e7b6
Closes-Bug: #1533859


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533859

Title:
  There should be a DB API test that ensures no new tables have soft-
  delete columns

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In mitaka we approved a spec to no longer have the SoftDeleteMixin in
  the data model so new tables don't implicitly inherit from that and
  get the deleted and deleted_at columns:

  http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved
  /no-more-soft-delete.html

  We don't have anything enforcing that policy though, except code
  review, which has failed a few times.

  We should have a db api unit test which basically has a whitelist of
  which tables already have those columns and then we check the models
  against that, and if any new tables are introduced in the model which
  have the deleted or deleted_at columns, they'd fail the test.
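  The guard described above can be sketched as a plain unit-style check; a
  minimal illustration, assuming table metadata is available as a
  name-to-columns mapping (the whitelist contents and helper name are
  hypothetical, not nova's actual test):

```python
# Columns that indicate soft-delete support (per the SoftDeleteMixin).
SOFT_DELETE_COLUMNS = {"deleted", "deleted_at"}

# Tables grandfathered in before Mitaka; illustrative entries only.
SOFT_DELETE_WHITELIST = {"instances", "networks"}


def tables_violating_policy(table_columns):
    """Return non-whitelisted tables that still carry soft-delete columns.

    table_columns maps table name -> iterable of column names, e.g. as
    derived from iterating a SQLAlchemy MetaData object.
    """
    return sorted(
        name
        for name, columns in table_columns.items()
        if name not in SOFT_DELETE_WHITELIST
        and SOFT_DELETE_COLUMNS & set(columns)
    )


# A new table sneaking in a 'deleted' column would fail the test:
models = {
    "instances": ["id", "deleted", "deleted_at"],  # whitelisted, OK
    "new_feature": ["id", "deleted"],              # violation
}
print(tables_violating_policy(models))  # ['new_feature']
```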

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1533859/+subscriptions



[Yahoo-eng-team] [Bug 1542108] [NEW] MTU concerns for the Linux bridge agent

2016-02-04 Thread Matt Kassawara
Public bug reported:

I ran some experiments with the Linux bridge agent [1] to determine the
source of MTU problems and offer a potential solution. The environment
for these experiments contains the following items:

1) A physical (underlying) network supporting MTU of 1500 or 9000 bytes.
2) One controller node running the neutron server, Linux bridge agent, L3 
agent, DHCP agent, and metadata agent.
3) One compute node running the Linux bridge agent.
4) A neutron provider/public network.
5) A neutron self-service/private network.
6) A neutron router between the provider and self-service networks.
7) The self-service network uses the VXLAN protocol with IPv4 endpoints which 
adds 50 bytes of overhead.
8) An instance on the self-service network with a floating IP address from an 
allocation pool on the provider network.

Background:

1. For tunnel interfaces, Linux automatically subtracts protocol
overhead from the parent interface MTU. For example, if eth0 has a 1500
MTU, a VXLAN interface using it as a parent device has a 1450 MTU.

2. For bridge devices, Linux assumes a 1500 MTU and changes the MTU to
the lowest MTU of any port on the bridge. For example, a bridge without
ports has a 1500 MTU. If eth0 has a 9000 MTU and you add it as a port on
the bridge, the bridge changes to a 9000 MTU. If eth1 has a 1500 MTU and
you add it as a port on the bridge, the bridge changes to a 1500 MTU.

3. Only devices that operate at layer-3 can participate in path MTU
discovery (PMTUD). Therefore, a change of MTU in a layer-2 device such
as a bridge or veth pair causes that device to discard packets larger
than the smallest MTU.

Observations:

1.  For any physical network MTU, instances must use an MTU value that
accounts for overlay protocol overhead. Neutron currently offers a way
to provide a correct value via DHCP. However, it only addresses packets
outbound from instances. The next two items address packets inbound to
instances.

2. For any physical network MTU, each end of the veth pair between the
self-service network router interface (qr) in the router namespace
(qrouter) and the self-service network bridge on the controller node
(qbr) contains a different MTU. The qr end has a 1500 MTU, the default
value, and the qbr end has a 1450 MTU because the bridge contains a
VXLAN interface with a 1450 MTU. Thus, the veth pair discards packets
with a payload larger than 1450 bytes.

3. For a physical network MTU larger than 1500, each end of the veth
pair between the provider network router gateway (qg) in the router
namespace (qrouter) and the provider network bridge on the controller
node (qbr) contains a different MTU. The qg end has a 1500 MTU, the
default value, and the qbr end inherits the larger MTU of physical
network interface. Thus, the veth pair discards packets with a payload
larger than 1500 bytes.

Potential solution:

As per background item (3), MTU disparities must occur in a device that
operates at layer-3. For example, a router namespace that contains
interfaces with IP addresses. We can accomplish this task in neutron by
always using the same MTU on both ends of a veth pair. In observation
item (2), both ends of the veth pair should use 1450, the self-service
network MTU. In observation item (3), both ends of the veth pair should
use 9000, the provider network MTU. If a packet from the provider
network to the instance has a payload larger than 1450 bytes, the router
can send an ICMP message to the source telling it to use a 1450 MTU.

[1] http://lists.openstack.org/pipermail/openstack-
dev/2016-January/084241.html
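The arithmetic behind these numbers can be made explicit; a small sketch
(the helper name is illustrative), using the 50-byte VXLAN-over-IPv4
overhead from item (7):

```python
# VXLAN-over-IPv4 overhead: each inner frame gains an outer Ethernet (14),
# IPv4 (20), UDP (8) and VXLAN (8) header -- 50 bytes total.
VXLAN_IPV4_OVERHEAD = 14 + 20 + 8 + 8


def overlay_mtu(physical_mtu, overhead=VXLAN_IPV4_OVERHEAD):
    """MTU that instances (and both ends of the qr/qbr veth pair on the
    self-service side) should use on top of the given physical MTU."""
    return physical_mtu - overhead


print(overlay_mtu(1500))  # 1450 -- matches the qbr value in observation (2)
print(overlay_mtu(9000))  # 8950
```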

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542108

Title:
  MTU concerns for the Linux bridge agent

Status in neutron:
  New

Bug description:
  I ran some experiments with the Linux bridge agent [1] to determine
  the source of MTU problems and offer a potential solution. The
  environment for these experiments contains the following items:

  1) A physical (underlying) network supporting MTU of 1500 or 9000 bytes.
  2) One controller node running the neutron server, Linux bridge agent, L3 
agent, DHCP agent, and metadata agent.
  3) One compute node running the Linux bridge agent.
  4) A neutron provider/public network.
  5) A neutron self-service/private network.
  6) A neutron router between the provider and self-service networks.
  7) The self-service network uses the VXLAN protocol with IPv4 endpoints which 
adds 50 bytes of overhead.
  8) An instance on the self-service network with a floating IP address from an 
allocation pool on the provider network.

  Background:

  1. For tunnel interfaces, Linux automatically subtracts protocol
  overhead from the parent interface MTU. For example, if eth0 has a
  1500 MTU, a VXLAN interface using it as a parent device has a 1450
  MTU.

  

[Yahoo-eng-team] [Bug 1535201] Re: networking-midonet icehouse and juno branch retirement

2016-02-04 Thread YAMAMOTO Takashi
** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535201

Title:
  networking-midonet icehouse and juno branch retirement

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  this bug tracks the status of retirement of icehouse and juno branch
  for networking-midonet project.

  see http://lists.openstack.org/pipermail/openstack-
  announce/2015-December/000869.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1535201/+subscriptions



[Yahoo-eng-team] [Bug 1542113] [NEW] iplib logging tracebacks

2016-02-04 Thread Kevin Benton
Public bug reported:

Visible in agent logs during normal operations.

2015-12-04 18:32:22.418 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Exit code: 0 execute 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:142
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
msg = self.format(record)
  File "/usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py", line 117, 
in format
return logging.StreamHandler.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
return fmt.format(record)
  File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 
256, in format
return logging.Formatter.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
ValueError: unsupported format character 'a' (0x61) at index 90
Logged from file linuxbridge_neutron_agent.py, line 449
2015-12-04 18:32:22.418 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Setting admin_state_up to 
True for port ea6bf437-0711-46c5-9502-53c4188da67d _ensure_port_admin_state 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:937
2015-12-04 18:32:22.419 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'tapea6bf437-07', 'up'] execute_rootwrap_daemon
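
A minimal reproduction of this class of failure (the message below is
hypothetical, not the actual line 449 of linuxbridge_neutron_agent.py):
logging interpolates msg % args lazily inside Handler.emit(), so a literal
'%' in the message becomes a bogus conversion specifier, and logging
reports the ValueError to stderr instead of emitting the record -- which is
why the agent keeps running and only the log is noisy.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("demo")

# Buggy: '%q' is not a valid conversion specifier, so formatting the
# record raises ValueError inside the handler (traceback on stderr,
# no log record is emitted, the program continues).
log.debug("state %q for port %s", "tap-1")

# Fix: escape literal percent signs, or keep specifiers and args in sync.
log.debug("state %%q for port %s", "tap-1")  # logs: state %q for port tap-1
```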

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542113

Title:
  iplib logging tracebacks

Status in neutron:
  New

Bug description:
  Visible in agent logs during normal operations.

  2015-12-04 18:32:22.418 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Exit code: 0 execute 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:142
  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
  msg = self.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py", line 
117, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
  return fmt.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 
256, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  ValueError: unsupported format character 'a' (0x61) at index 90
  Logged from file linuxbridge_neutron_agent.py, line 449
  2015-12-04 18:32:22.418 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Setting admin_state_up to 
True for port ea6bf437-0711-46c5-9502-53c4188da67d _ensure_port_admin_state 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:937
  2015-12-04 18:32:22.419 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'tapea6bf437-07', 'up'] execute_rootwrap_daemon

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542113/+subscriptions



[Yahoo-eng-team] [Bug 1542113] Re: iplib logging tracebacks

2016-02-04 Thread Kevin Benton
already fixed in https://review.openstack.org/#/c/254166/

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: Kevin Benton (kevinbenton) => (unassigned)

** Changed in: neutron
 Assignee: (unassigned) => Andreas Scheuring (andreas-scheuring)

** Changed in: neutron
   Importance: Undecided => Low

** Summary changed:

- iplib logging tracebacks
+ linuxbridge logging tracebacks

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542113

Title:
  linuxbridge logging tracebacks

Status in neutron:
  Fix Committed

Bug description:
  Visible in agent logs during normal operations.

  2015-12-04 18:32:22.418 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Exit code: 0 execute 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:142
  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
  msg = self.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py", line 
117, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
  return fmt.format(record)
File "/usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py", line 
256, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  ValueError: unsupported format character 'a' (0x61) at index 90
  Logged from file linuxbridge_neutron_agent.py, line 449
  2015-12-04 18:32:22.418 DEBUG 
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Setting admin_state_up to 
True for port ea6bf437-0711-46c5-9502-53c4188da67d _ensure_port_admin_state 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:937
  2015-12-04 18:32:22.419 DEBUG neutron.agent.linux.utils 
[req-d13b0a54-2efb-4577-83b1-6d44ea35b21b None None] Running command (rootwrap 
daemon): ['ip', 'link', 'set', 'tapea6bf437-07', 'up'] execute_rootwrap_daemon

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542113/+subscriptions



[Yahoo-eng-team] [Bug 1541879] [NEW] Neutron devstack gate fails to install keystone due to fresh testtools

2016-02-04 Thread Ihar Hrachyshka
Public bug reported:

Today testtools 2.0.0 was released, and now the gate fails as in:

http://logs.openstack.org/43/272643/1/check/gate-tempest-dsvm-neutron-
full/85bf432/logs/devstacklog.txt.gz

2016-02-04 14:17:24.168 | Traceback (most recent call last):
2016-02-04 14:17:24.168 |   File "/usr/local/bin/keystone-manage", line 4, in 
<module>
2016-02-04 14:17:24.169 | 
__import__('pkg_resources').require('keystone==2015.1.4.dev2')
2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3141, 
in <module>
2016-02-04 14:17:24.169 | @_call_aside
2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3127, 
in _call_aside
2016-02-04 14:17:24.169 | f(*args, **kwargs)
2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3154, 
in _initialize_master_working_set
2016-02-04 14:17:24.170 | working_set = WorkingSet._build_master()
2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 642, 
in _build_master
2016-02-04 14:17:24.170 | return cls._build_from_requirements(__requires__)
2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 655, 
in _build_from_requirements
2016-02-04 14:17:24.170 | dists = ws.resolve(reqs, Environment())
2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 833, 
in resolve
2016-02-04 14:17:24.170 | raise VersionConflict(dist, 
req).with_context(dependent_req)
2016-02-04 14:17:24.170 | pkg_resources.ContextualVersionConflict: (fixtures 
1.2.0 (/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('fixtures>=1.3.0'), set(['testtools']))
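
The conflict can be reproduced in isolation with pkg_resources' version
matching (a sketch of the version check only, not the gate fix -- the
likely remedy is upgrading fixtures to satisfy the new constraint):

```python
from pkg_resources import Requirement

# testtools 2.0.0 declares fixtures>=1.3.0; the node has fixtures 1.2.0
# installed, so WorkingSet.resolve() raises ContextualVersionConflict.
req = Requirement.parse("fixtures>=1.3.0")

print("1.2.0" in req)  # False -- the installed version conflicts
print("1.3.0" in req)  # True  -- an upgraded fixtures would satisfy it
```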

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541879

Title:
  Neutron devstack gate fails to install keystone due to fresh testtools

Status in neutron:
  New

Bug description:
  Today testtools 2.0.0 was released, and now the gate fails as in:

  http://logs.openstack.org/43/272643/1/check/gate-tempest-dsvm-neutron-
  full/85bf432/logs/devstacklog.txt.gz

  2016-02-04 14:17:24.168 | Traceback (most recent call last):
  2016-02-04 14:17:24.168 |   File "/usr/local/bin/keystone-manage", line 4, in 
<module>
  2016-02-04 14:17:24.169 | 
__import__('pkg_resources').require('keystone==2015.1.4.dev2')
  2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3141, 
in <module>
  2016-02-04 14:17:24.169 | @_call_aside
  2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3127, 
in _call_aside
  2016-02-04 14:17:24.169 | f(*args, **kwargs)
  2016-02-04 14:17:24.169 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3154, 
in _initialize_master_working_set
  2016-02-04 14:17:24.170 | working_set = WorkingSet._build_master()
  2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 642, 
in _build_master
  2016-02-04 14:17:24.170 | return 
cls._build_from_requirements(__requires__)
  2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 655, 
in _build_from_requirements
  2016-02-04 14:17:24.170 | dists = ws.resolve(reqs, Environment())
  2016-02-04 14:17:24.170 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 833, 
in resolve
  2016-02-04 14:17:24.170 | raise VersionConflict(dist, 
req).with_context(dependent_req)
  2016-02-04 14:17:24.170 | pkg_resources.ContextualVersionConflict: (fixtures 
1.2.0 (/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('fixtures>=1.3.0'), set(['testtools']))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541879/+subscriptions



[Yahoo-eng-team] [Bug 1541876] [NEW] Version 2.50.1 of Selenium breaks integration tests

2016-02-04 Thread Timur Sufiev
Public bug reported:

The usual stack trace is below; the issue happens consistently for the same
tests/table/button combination, but not for every commit (there seems to be
some correlation with the testing node environment, since nodes in NodePool
may be built from different images).

2016-02-04 02:30:27.503 | 2016-02-04 02:30:27.457 | Screenshot: 
{{{/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/integration_tests_screenshots/test_create_delete_user_2016.02.04-022512.png}}}
2016-02-04 02:30:27.524 | 2016-02-04 02:30:27.492 | 
2016-02-04 02:30:27.551 | 2016-02-04 02:30:27.519 | Traceback (most recent call 
last):
2016-02-04 02:30:27.574 | 2016-02-04 02:30:27.550 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_user_create_delete.py",
 line 26, in test_create_delete_user
2016-02-04 02:30:27.579 | 2016-02-04 02:30:27.557 | project='admin', 
role='admin')
2016-02-04 02:30:27.594 | 2016-02-04 02:30:27.572 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/identity/userspage.py",
 line 52, in create_user
2016-02-04 02:30:27.617 | 2016-02-04 02:30:27.593 | create_user_form = 
self.users_table.create_user()
2016-02-04 02:30:27.631 | 2016-02-04 02:30:27.608 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/tables.py",
 line 162, in wrapper
2016-02-04 02:30:27.672 | 2016-02-04 02:30:27.638 | return method(table, 
action_element)
2016-02-04 02:30:27.695 | 2016-02-04 02:30:27.653 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/identity/userspage.py",
 line 25, in create_user
2016-02-04 02:30:27.696 | 2016-02-04 02:30:27.656 | create_button.click()
2016-02-04 02:30:27.696 | 2016-02-04 02:30:27.673 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 75, in click
2016-02-04 02:30:27.703 | 2016-02-04 02:30:27.676 | 
self._execute(Command.CLICK_ELEMENT)
2016-02-04 02:30:27.715 | 2016-02-04 02:30:27.689 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/webdriver.py",
 line 107, in _execute
2016-02-04 02:30:27.734 | 2016-02-04 02:30:27.712 | params)
2016-02-04 02:30:27.787 | 2016-02-04 02:30:27.722 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 469, in _execute
2016-02-04 02:30:27.794 | 2016-02-04 02:30:27.772 | return 
self._parent.execute(command, params)
2016-02-04 02:30:27.803 | 2016-02-04 02:30:27.781 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 201, in execute
2016-02-04 02:30:27.813 | 2016-02-04 02:30:27.789 | 
self.error_handler.check_response(response)
2016-02-04 02:30:27.816 | 2016-02-04 02:30:27.794 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py",
 line 193, in check_response
2016-02-04 02:30:27.820 | 2016-02-04 02:30:27.797 | raise 
exception_class(message, screen, stacktrace)
2016-02-04 02:30:27.829 | 2016-02-04 02:30:27.804 | 
selenium.common.exceptions.WebDriverException: Message: Element is not 
clickable at point (944, 0.98333740234375). Other element would receive the 
click: 
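
Until the affected Selenium version is pinned or patched, a common
workaround for "not clickable at point" failures is to scroll the element
into view and retry the click. The sketch below is hypothetical --
`safe_click` and the retry policy are not part of Horizon's actual test
framework, and a real implementation would catch
selenium.common.exceptions.WebDriverException rather than the bare
Exception used here to keep the sketch dependency-free:

```python
def safe_click(driver, element, retries=1):
    """Click; on failure, scroll the element into view and retry.

    In real code the except clause would catch WebDriverException from
    selenium.common.exceptions (an assumption here, to stay import-free).
    """
    for attempt in range(retries + 1):
        try:
            element.click()
            return
        except Exception:
            if attempt == retries:
                raise
            # Align the element with the top of the viewport so no fixed
            # header overlaps the click point.
            driver.execute_script(
                "arguments[0].scrollIntoView(true);", element)
```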

** Affects: horizon
 Importance: Critical
 Assignee: Timur Sufiev (tsufiev-x)
 Status: New

** Changed in: horizon
   Importance: Undecided => Critical

** Changed in: horizon
 Assignee: (unassigned) => Timur Sufiev (tsufiev-x)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541876

Title:
  Version 2.50.1 of Selenium breaks integration tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Usual stacktrace is below, the issue happens consistently for the same
  tests/table/button combination, but does not always happen for every
  commit (seems to be some correlation to testing node environment,
  since nodes in NodePool may be built from different images).
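  For reference, a common generic mitigation for this failure mode is to
  scroll the offending element into view and retry the click. A minimal
  sketch (not Horizon's actual fix; `driver` and `element` follow the
  selenium-webdriver API, and the single-retry policy is an assumption):

```python
# Hedged sketch of a mitigation for Selenium's "Element is not clickable
# at point (...): Other element would receive the click" error: scroll
# the target into view and retry once.  Works with any object exposing
# the WebDriver-style execute_script()/click() methods.
def safe_click(driver, element, retries=1):
    for attempt in range(retries + 1):
        try:
            element.click()
            return
        except Exception:
            if attempt == retries:
                raise
            # Align the element with the top of the viewport so a fixed
            # header/footer no longer overlaps the click point.
            driver.execute_script(
                "arguments[0].scrollIntoView(true);", element)
```

  With a real WebDriver this would be called in place of a bare
  `element.click()` inside the page-object helpers.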

  2016-02-04 02:30:27.503 | 2016-02-04 02:30:27.457 | Screenshot: 
{{{/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/integration_tests_screenshots/test_create_delete_user_2016.02.04-022512.png}}}
  2016-02-04 02:30:27.524 | 2016-02-04 02:30:27.492 | 
  2016-02-04 02:30:27.551 | 2016-02-04 02:30:27.519 | Traceback (most recent 
call last):
  2016-02-04 02:30:27.574 | 2016-02-04 02:30:27.550 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_user_create_delete.py",
 line 26, in test_create_delete_user
  2016-02-04 02:30:27.579 | 2016-02-04 02:30:27.557 | project='admin', 
role='admin')
  2016-02-04 02:30:27.594 | 2016-02-04 02:30:27.572 |   File 

[Yahoo-eng-team] [Bug 1517292] Re: boot instance with ephemeral disk failed in kilo

2016-02-04 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1517292

Title:
  boot instance with ephemeral disk failed in kilo

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Boot an instance with ephemeral disk in kilo, but return http request error:
  eg:
  nova boot --flavor 2 --image 5905bd7e-a87f-4856-8401-b8eb7211c84d --nic 
net-id=12ace164-d996-4261-9228-23ca0680f7a8 --ephemeral size=5,format=ext3 
test_vm1
  ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the 
instance and image/block device mapping combination is not valid. (HTTP 400) 
(Request-ID: req-b571662f-e554-49a7-979f-763f34b4b162)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1517292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535246] Re: Openstack should be OpenStack on license headers

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/268982
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dcf6ffe18540b2f9b331dedec0e0a7a7f237b130
Submitter: Jenkins
Branch: master

commit dcf6ffe18540b2f9b331dedec0e0a7a7f237b130
Author: Emma Foley 
Date:   Mon Jan 18 10:19:17 2016 +

Fixes typos Openstack -> OpenStack

Occurances of Openstack (incorrect capitalization) are replaced with
OpenStack

Change-Id: I7f33060a2dd430cdd49aebf9420e3cd54d21c72c
Closes-Bug: #1535246


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535246

Title:
  Openstack should be OpenStack on license headers

Status in neutron:
  Fix Released

Bug description:
  There are some occurrences of Openstack which should be OpenStack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541805] Re: nova boot gives an API error

2016-02-04 Thread Augustina Ragwitz
Thanks for taking the time to file a bug! Unfortunately this looks like a
configuration issue. Try searching the web for the error I pasted above
for articles that may offer solutions. You can also try our support
channels like the #openstack channel on irc.freenode.org or the mailing
list. If you followed specific documentation and discover that it is
inaccurate, please reopen this issue by changing the status to "New" and
making a comment to that effect. We'll reassign it to the openstack-
manuals team.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1541805

Title:
  nova boot gives an API error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  issue description: I am installing Liberty on Ubuntu 14.04 LTS; when I
  tried to launch an instance I got this error.

  snat@controller:~$ nova boot --flavor m1.small --image cirros --nic net-id=b9a485f1-3e77-4422-8ce8-26413a311450 --security-group default --key-name mykey public-instance
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-4d22b711-1423-45f4-b11e-d6f596ee2703)

  This is definitely a bug.

  uname -a

  Linux controller 3.13.0-76-generic #120-Ubuntu SMP Mon Jan 18 15:59:10
  UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

  Nova.conf

  [DEFAULT]
  dhcpbridge_flagfile=/etc/nova/nova.conf
  dhcpbridge=/usr/bin/nova-dhcpbridge
  logdir=/var/log/nova
  state_path=/var/lib/nova
  lock_path=/var/lock/nova
  force_dhcp_release=True
  libvirt_use_virtio_for_bridges=True
  verbose=True
  ec2_private_dns_show_ip=True
  api_paste_config=/etc/nova/api-paste.ini
  enabled_apis=ec2,osapi_compute,metadata
  rpc_backend = rabbit
  auth_strategy = keystone
  my_ip = 10.0.0.11
  network_api_class = nova.network.neutronv2.api.API
  security_group_api = neutron
  linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
  firewall_driver = nova.virt.firewall.NoopFirewallDriver
  enabled_apis=osapi_compute,metadata
  verbose = True

  [oslo_messaging_rabbit]
  rabbit_host = controller
  rabbit_userid = openstack
  rabbit_password = RABBIT_PASS
  [database]
  connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
  [keystone_authtoken]
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  auth_plugin = password
  project_domain_id = default
  user_domain_id = default
  project_name = service
  username = nova
  password = nova
  [vnc]
  vncserver_listen = $my_ip
  vncserver_proxyclient_address = $my_ip
  [glance]
  host = controller
  [oslo_concurrency]
  lock_path = /var/lib/nova/tmp

  snat@controller:~$ nova flavor-list
  
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  | 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
  | 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
  | 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
  | 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
  +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

  So nova flavor-list works just fine, yet when I try to boot it says the
  flavors m1.small and m1.tiny do not exist.

  I have run nova boot with --debug:

  snat@controller:~$ nova --debug boot --flavor m1.small --image cirros --nic net-id=b9a485f1-3e77-4422-8ce8-26413a311450 --security-group default --key-name mykey public-instance
  DEBUG (session:198) REQ: curl -g -i -X GET http://controller:5000/v3 -H 
"Accept: application/json" -H "User-Agent: python-keystoneclient"
  INFO (connectionpool:205) Starting new HTTP connection (1): controller
  DEBUG (connectionpool:385) "GET /v3 HTTP/1.1" 200 249
  DEBUG (session:215) RESP: [200] Content-Length: 249 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: 
Keep-Alive Date: Thu, 04 Feb 2016 09:22:59 GMT x-openstack-request-id: 
req-2f991c43-7ffa-4abe-9a1c-f1f28e614cb1 Content-Type: application/json 
X-Distribution: Ubuntu
  RESP BODY: {"version": {"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://controller:5000/v3/", "rel": "self"}]}}

  DEBUG (base:188) Making authentication request to 
http://controller:5000/v3/auth/tokens
  

[Yahoo-eng-team] [Bug 1542008] Re: Deleting IdP doesn't show the name of the records to be deleted

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276449
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4e1e82221f7d9334d434e6afe195d184fdc4ebab
Submitter: Jenkins
Branch: master

commit 4e1e82221f7d9334d434e6afe195d184fdc4ebab
Author: Balaji Narayanan 
Date:   Thu Feb 4 13:00:06 2016 -0800

Override get_object_display() for IdP table

This allow the IdP identifier to be shown on the delete
message prompt.

Change-Id: I94673a67efc6364bdc23ec03f7174ba291f3
Closes-bug: #1542008


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1542008

Title:
  Deleting IdP doesn't show the name of the records to be deleted

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:

  When deleting an Identity Provider, the delete prompt does not show
  the IdP name.

  The table class missed to override the get_object_display(self, datum)
  method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1542008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429576] Re: region field in 'new_endpoint_ref' is never effective.

2016-02-04 Thread Dave Chen
Since the code base for those test cases has been updated dramatically,
the issue no longer applies to the current code base, so it is marked Won't Fix.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1429576

Title:
  region field in 'new_endpoint_ref' is never effective.

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  we use 'region' instead of 'region_id' in the endpoint reference used for testing.
  https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/keystone/tests/unit/test_catalog.py#L158

  But we are trying to get 'region_id' from the reference data, so this field and
  any tests that depend on 'region_id' never take effect.
  https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/keystone/catalog/backends/sql.py#L317

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1429576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527202] Re: get_all function should return a empty object instead of list

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/258962
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=abea8b15d78ffda4fd796b09bf650295832e32a6
Submitter: Jenkins
Branch: master

commit abea8b15d78ffda4fd796b09bf650295832e32a6
Author: jichenjc 
Date:   Thu Dec 17 19:31:41 2015 +0800

Return empty object list instead []

When some error occurs, compute layer should return
ObjectList instead of [] to the API layer.

Change-Id: Ic33fb891e43c3348a79957169545dc509c56c341
Closes-Bug: 1527202


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527202

Title:
  get_all function should return a empty object instead of list

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  jichen@devstack1:~$ curl -g -i -X GET http://192.168.122.239:8774/v2.1/d1c5aa58af6c426492c642eb649017be/servers/detail?metadata={'a':'b'} -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.6" -H "X-Auth-Token: 2fbccf9e89444b309bc2c0fb31afdbbd"
  HTTP/1.1 500 Internal Server Error
  X-Openstack-Nova-Api-Version: 2.6
  Vary: X-OpenStack-Nova-API-Version
  Content-Length: 198
  Content-Type: application/json; charset=UTF-8
  X-Compute-Request-Id: req-b14b6637-bb19-4a7c-b60a-92526d29f966
  Date: Wed, 16 Dec 2015 16:42:35 GMT

  {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}
  jichen@devstack1:~$



  2015-12-16 11:42:35.973 DEBUG nova.compute.api 
[req-b14b6637-bb19-4a7c-b60a-92526d29f966 admin demo] Searching by: {'deleted': 
False, 'project_id': u'd1c5aa58af6c426492c642eb649017be', u'metadata': 
u'{a:b}'} from (pid=5597) get_all /opt/stack/nova/nova/compute/api.py:2055
  2015-12-16 11:42:35.973 ERROR nova.api.openstack.extensions 
[req-b14b6637-bb19-4a7c-b60a-92526d29f966 admin demo] Unexpected exception in 
API method
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 280, in detail
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=True)
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 412, in 
_get_servers
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions 
instance_list.fill_faults()
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions AttributeError: 
'list' object has no attribute 'fill_faults'
  2015-12-16 11:42:35.973 TRACE nova.api.openstack.extensions
  2015-12-16 11:42:35.974 INFO nova.api.openstack.wsgi 
[req-b14b6637-bb19-4a7c-b60a-92526d29f966 admin demo] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
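  The fix's idea can be sketched as follows; `InstanceList` here is a
  stand-in for `nova.objects.InstanceList`, not the real class:

```python
# Minimal sketch: the API layer calls methods such as fill_faults() on
# the result of compute's get_all(), so error paths must return an
# empty *object list*, never a plain [].
class InstanceList(list):
    def fill_faults(self):
        # The real method looks up faults per instance; a no-op is
        # enough to show why a typed empty list keeps the API working.
        return self

def get_all(found_instances=None):
    # Error/empty path: return an empty object, not a bare [].
    return InstanceList(found_instances or [])

servers = get_all()        # simulated "no matches" result
servers.fill_faults()      # works; a bare [] would raise AttributeError
```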
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527202/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483322] Re: python-memcached get_multi has much faster than get when get multiple value

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274468
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=205fb7c8b34e521bdc14b5c3698d1597753b27d4
Submitter: Jenkins
Branch: master

commit 205fb7c8b34e521bdc14b5c3698d1597753b27d4
Author: Davanum Srinivas 
Date:   Fri Jan 29 12:50:58 2016 -0500

Switch to oslo.cache lib

Common memorycache was replaced by analogous tool
from oslo.cache lib. In-memory cache was replaced
by oslo.cache.dict backend. Memcached was replaced
by dogpile.cache.memcached backend.

Implements blueprint oslo-for-mitaka

Closes-Bug: #1483322
Co-Authored-By: Sergey Nikitin 
Co-Authored-By: Pavel Kholkin 

Change-Id: I371f7a68e6a6c1c4cd101f61b9ad96c15187a80e


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483322

Title:
  python-memcached get_multi has much faster than get when get multiple
  value

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.cache:
  Fix Released

Bug description:
  Nova uses memcached via python-memcached's get function.

  When multiple items are retrieved, it uses a "for .. in .." loop;
  in this case get_multi has better performance.

  In my case, here is the test result:

  get 2.3020670414
  get_multi 0.0353858470917
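  A hedged illustration of why the batched call wins (a fake client
  standing in for the python-memcached interface; the round-trip counter
  models per-request network latency):

```python
# A get() loop pays one round trip per key, while get_multi() batches
# all keys into a single round trip.  FakeClient mimics the
# python-memcached get/get_multi signatures.
class FakeClient:
    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def get(self, key):
        self.round_trips += 1
        return self.data.get(key)

    def get_multi(self, keys):
        self.round_trips += 1
        return {k: self.data[k] for k in keys if k in self.data}

client = FakeClient({"a": 1, "b": 2, "c": 3})
looped = {k: client.get(k) for k in ("a", "b", "c")}   # 3 round trips
batched = client.get_multi(("a", "b", "c"))            # 1 round trip
```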

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540779] Re: DVR router should not allow manually removed from an agent in 'dvr' mode

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/263145
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=af62088fb54c675917c07c2f94973075f24c440a
Submitter: Jenkins
Branch:master

commit af62088fb54c675917c07c2f94973075f24c440a
Author: lzklibj 
Date:   Sat Jan 2 12:44:30 2016 +0800

Fix remove_router_from_l3_agent for 'dvr' mode agent

It's possible to run command remove_router_from_l3_agent to remove
a DVR router from an agent in 'dvr' mode. But the implicit *binding*
between DVR router and agent in 'dvr' mode should come and go as
dvr serviced port on host come and go, not manually managed.

Closes-Bug: #1540779
Change-Id: Ied6c88c85ced7b956fad3473ede4688020a357a4


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540779

Title:
  DVR router should not allow manually removed from an agent in 'dvr'
  mode

Status in neutron:
  Fix Released

Bug description:
  Per bp/improve-dvr-l3-agent-binding, the command "neutron
l3-agent-list-hosting-router ROUTER" no longer shows bindings for DVR routers on
agents in 'dvr' mode.
  It is good to hide the implicit *binding* between a DVR router and an agent in
'dvr' mode, because DVR routers should come and go as DVR-serviced ports on the
host come and go; they are not meant to be manually managed.
  But it is still possible to run "neutron l3-agent-router-remove AGENT ROUTER"
to remove a DVR router from an agent in 'dvr' mode. This deletes the DVR router
namespace and breaks L3 networking on that node.
  We should add a check when removing a router from an agent in 'dvr' mode and
forbid the operation.
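  The requested guard might look roughly like this (names such as
  `RouterNotRemovable` and the flat argument signature are illustrative,
  not neutron's actual code):

```python
# Sketch of the check the report asks for: reject manual removal of a
# DVR router binding from an agent running in 'dvr' mode, since that
# binding is managed implicitly by the presence of DVR-serviced ports.
class RouterNotRemovable(Exception):
    pass

def remove_router_from_l3_agent(agent_mode, router_is_distributed):
    if router_is_distributed and agent_mode == "dvr":
        raise RouterNotRemovable(
            "DVR router bindings on 'dvr'-mode agents come and go with "
            "DVR-serviced ports and cannot be removed manually")
    return "unbound"
```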

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525915] Re: [OSSA 2016-006] Normal user can change image status if show_multiple_locations has been set to true (CVE-2016-0757)

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275737
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=6179e1e98808548f1c12a2b66784cac3c1e5ac0f
Submitter: Jenkins
Branch: master

commit 6179e1e98808548f1c12a2b66784cac3c1e5ac0f
Author: Erno Kuvaja 
Date:   Tue Jan 19 13:37:05 2016 +

Prevent user to remove last location of the image

If the last location of the image is removed, image transitions back to
queued. This allows user to upload new data into the existing image
record. By preventing removal of the last location we prevent the image
transition back to queued.

This change also prevents doing the same operation via replacing the
locations with empty list.

SecurityImpact
DocImpact
APIImpact

Change-Id: Ieb03aaba887492819f9c58aa67f7acfcea81720e
Closes-Bug: #1525915


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1525915

Title:
  [OSSA 2016-006] Normal user can change image status if
  show_multiple_locations has been set to true (CVE-2016-0757)

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New
Status in Glance liberty series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Committed

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  --

  A user (non-admin) can set an image back to the queued state by deleting
  its location(s) when the "show_multiple_locations" config parameter
  has been set to true.

  This breaks the immutability promise Glance makes, in a similar way as
  described in OSSA 2015-019, as the image gets transitioned from active
  to queued and new image data can be uploaded.
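  The idea behind the committed fix, as a sketch (function and exception
  names are illustrative, not Glance's internals):

```python
# Refuse to drop the last location so an 'active' image can never
# transition back to 'queued' and accept replacement data.
class Forbidden(Exception):
    pass

def remove_location(image_status, locations, index):
    if image_status == "active" and len(locations) <= 1:
        raise Forbidden("cannot remove the last location of an active image")
    return locations.pop(index)

locs = ["file:///store/a", "file:///store/b"]
remove_location("active", locs, 0)   # allowed: one location remains
```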

  ubuntu@devstack-02:~/devstack$ glance image-show f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  +------------------+--------------------------------------------------------------------+
  | Property         | Value                                                              |
  +------------------+--------------------------------------------------------------------+
  | checksum         | eb9139e4942121f22bbc2afc0400b2a4                                   |
  | container_format | ami                                                                |
  | created_at       | 2015-12-14T09:58:54Z                                               |
  | disk_format      | ami                                                                |
  | id               | f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc                               |
  | locations        | [{"url": "file:///opt/stack/data/glance/images/f4bb4c9e-71ba-      |
  |                  | 4a8c-b70a-640dbe37b3bc", "metadata": {}}]                          |
  | min_disk         | 0                                                                  |
  | min_ram          | 0                                                                  |
  | name             | cirros-test                                                        |
  | owner            | ab69274aa31a4fba8bf559af2b0b98bd                                   |
  | protected        | False                                                              |
  | size             | 25165824                                                           |
  | status           | active                                                             |
  | tags             | []                                                                 |
  | updated_at       | 2015-12-14T09:58:54Z                                               |
  | virtual_size     | None                                                               |
  | visibility       | private                                                            |
  +------------------+--------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1541928] [NEW] Adopt oslo.versionedobjects for core resources (ports, networks, subnets, ...)

2016-02-04 Thread Ihar Hrachyshka
Public bug reported:

Starting Mitaka, we started adoption of oslo.versionedobjects library
for managing database access to models for core resources (ports,
networks, subnets, ...) This bug will serve as a catch all bug for
patches related to the effort.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: oslo rfe

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Confirmed

** Tags added: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541928

Title:
  Adopt oslo.versionedobjects for core resources (ports, networks,
  subnets, ...)

Status in neutron:
  Confirmed

Bug description:
  Starting Mitaka, we started adoption of oslo.versionedobjects library
  for managing database access to models for core resources (ports,
  networks, subnets, ...) This bug will serve as a catch all bug for
  patches related to the effort.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539197] Re: Integration tests fail with 'TypeError: string indices must be integers'

2016-02-04 Thread Timur Sufiev
** Changed in: horizon
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539197

Title:
  Integration tests fail with 'TypeError: string indices must be
  integers'

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  New release of Selenium is under suspicion:
  https://github.com/SeleniumHQ/selenium/issues/1497

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541895] [NEW] [RFE] [IPAM] Make IPAM driver a per-subnet pool option

2016-02-04 Thread John Belamaric
Public bug reported:

Currently, the ipam_driver is an installation-wide configuration option.
However, the design and intent was to make this an option that can be
changed on a per-subnetpool basis.

So, you could have one subnetpool that gets subnets and IPs from, say,
Infoblox, but other subnetpools that get it from the default reference
driver.

You would specify a "default_ipam_driver" for operations not associated
with a specific subnetpool, but also be able to associate different
drivers (and possibly different configuration options of the same
driver), with different subnetpools.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541895

Title:
  [RFE] [IPAM] Make IPAM driver a per-subnet pool option

Status in neutron:
  New

Bug description:
  Currently, the ipam_driver is an installation-wide configuration
  option. However, the design and intent was to make this an option that
  can be changed on a per-subnetpool basis.

  So, you could have one subnetpool that gets subnets and IPs from, say,
  Infoblox, but other subnetpools that get it from the default reference
  driver.

  You would specify a "default_ipam_driver" for operations not associated
  with a specific subnetpool, but also be able to associate different
  drivers (and possibly different configuration options of the same
  driver), with different subnetpools.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336317] Re: List of images is not sorted in any useful manner

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/130844
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f8e595b0fa4d88bf764e13e7f871e0a0f05a8fde
Submitter: Jenkins
Branch: master

commit f8e595b0fa4d88bf764e13e7f871e0a0f05a8fde
Author: Timur Sufiev 
Date:   Fri Oct 24 20:14:46 2014 +0400

Sort images list in ascending alphabetical order

Move most of the pagination-logic to `api.glance.image_list_detailed`,
thus making code in Admin/Project->Images->get_data() less confusing
(and remove hard-coded 'asc'|'desc' values).

Also prepare to get images both from glanceclient.v1 and
glanceclient.v2 (which doesn't set `is_public` attr on images using
`visibility` attr instead).

Change-Id: Ibe6d3dd1e94a1d1fbf95382599a5f53c3559ce5a
Closes-Bug: #1534670
Closes-Bug: #1336317


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1336317

Title:
  List of images is not sorted in any useful manner

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Various places in the Horizon UI display a list of images.

  None of these places seems to sort the list in any kind of useful
  order. If it is sorted at all, it must be sorted on some
  property which is invisible in the UI, thus giving the impression it
  is unsorted.

  When a provider has uploaded images for very many different OSes (100+),
  this makes it really tedious to find the one you actually want to look
  at.

  I see this problem at

  /project/images/
  /admin/images/images/

  In addition, at /project/instances/launch the image list appears to be
  sorted on name, but case-sensitively, so 'Fedora' comes before
  'archlinux'. IMHO it should be case-insensitive so people's choice of
  upper/lowercase doesn't mix up the sorting.
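  The case-sensitivity complaint comes down to plain ASCII ordering,
  which places every uppercase letter before any lowercase one. A short
  illustration of the idea behind a fix (not Horizon's exact code):

```python
# Sorting with key=str.lower gives the case-insensitive order users
# expect, instead of grouping all capitalized names first.
names = ["Fedora", "archlinux", "Ubuntu", "cirros"]
case_sensitive = sorted(names)
case_insensitive = sorted(names, key=str.lower)
```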

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1336317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534670] Re: Prev button in Project->Images table redirects to the wrong page

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/130844
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f8e595b0fa4d88bf764e13e7f871e0a0f05a8fde
Submitter: Jenkins
Branch:master

commit f8e595b0fa4d88bf764e13e7f871e0a0f05a8fde
Author: Timur Sufiev 
Date:   Fri Oct 24 20:14:46 2014 +0400

Sort images list in ascending alphabetical order

Move most of the pagination-logic to `api.glance.image_list_detailed`,
thus making code in Admin/Project->Images->get_data() less confusing
(and remove hard-coded 'asc'|'desc' values).

Also prepare to get images both from glanceclient.v1 and
glanceclient.v2 (which doesn't set `is_public` attr on images using
`visibility` attr instead).

Change-Id: Ibe6d3dd1e94a1d1fbf95382599a5f53c3559ce5a
Closes-Bug: #1534670
Closes-Bug: #1336317


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534670

Title:
  Prev button in Project->Images table redirects to the wrong page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce: with the default 3 devstack images (plain cirros image,
  cirros kernel and cirros ramdisk), set Items per Page to 1, then go to the
  last page and from there click the 'Prev' link. Notice that it redirects
  not to the second page, but to the first one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382440] Re: Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device

2016-02-04 Thread Eric Harney
** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: cinder/kilo
   Importance: Undecided
   Status: New

** Changed in: cinder/kilo
Milestone: None => 2015.1.3

** Changed in: cinder/kilo
   Status: New => Fix Released

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1382440

Title:
  Detaching multipath volume doesn't work properly when using different
  targets with same portal for each multipath device

Status in Cinder:
  Invalid
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  Overview:
  On Icehouse (2014.1.2) with "iscsi_use_multipath=true", detaching an iSCSI
  multipath volume doesn't work properly. When different targets (IQNs)
  associated with the same portal are used for each multipath device, all of
  the targets will be deleted via disconnect_volume().

  This problem is not yet fixed in upstream. However, the attached patch
  fixes this problem.

  Steps to Reproduce:

  We can easily reproduce this issue without any special storage
  system in the following Steps:

1. configure "iscsi_use_multipath=True" in nova.conf on compute node.
2. configure "volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver"
   in cinder.conf on cinder node.
2. create an instance.
3. create 3 volumes and attach them to the instance.
4. detach one of these volumes.
5. check "multipath -ll" and "iscsiadm --mode session".

  Detail:

  This problem was introduced with the following patch which modified
  attaching and detaching volume operations for different targets
  associated with different portals for the same multipath device.

commit 429ac4dedd617f8c1f7c88dd8ece6b7d2f2accd0
Author: Xing Yang 
Date:   Date: Mon Jan 6 17:27:28 2014 -0500

  Fixed a problem in iSCSI multipath

  We found out that:

  > # Do a discovery to find all targets.
  > # Targets for multiple paths for the same multipath device
  > # may not be the same.
  > out = self._run_iscsiadm_bare(['-m',
  >   'discovery',
  >   '-t',
  >   'sendtargets',
  >   '-p',
  >   iscsi_properties['target_portal']],
  >   check_exit_code=[0, 255])[0] \
  > or ""
  >
  > ips_iqns = self._get_target_portals_from_iscsiadm_output(out)
  ...
  > # If no other multipath device attached has the same iqn
  > # as the current device
  > if not in_use:
  > # disconnect if no other multipath devices with same iqn
  > self._disconnect_mpath(iscsi_properties, ips_iqns)
  > return
  > elif multipath_device not in devices:
  > # delete the devices associated w/ the unused multipath
  > self._delete_mpath(iscsi_properties, multipath_device, ips_iqns)

  When different targets (IQNs) associated with the same portal are used for
  each multipath device, ips_iqns contains all targets on the compute node
  from the result of "iscsiadm -m discovery -t sendtargets -p ".
  Then _delete_mpath() deletes all of the targets in ips_iqns
  via /sys/block/sdX/device/delete.
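As a hedged illustration of the behaviour described above, the sketch below shows how restricting the discovered (ip, iqn) pairs to the IQN of the volume actually being detached avoids deleting the other volumes' targets. This is not nova's actual code; the helper name and data shapes are invented for illustration.

```python
# Hypothetical sketch of the fix described above: restrict the candidate
# targets to the IQN of the volume being detached, instead of every
# (ip, iqn) pair returned by sendtargets discovery.

def filter_targets_for_device(ips_iqns, device_iqn):
    """Keep only discovery entries whose IQN matches the detached device."""
    return [(ip, iqn) for ip, iqn in ips_iqns if iqn == device_iqn]

# Example: three volumes share one portal but have distinct IQNs.
discovered = [
    ("192.168.0.55:3260", "iqn.2010-10.org.openstack:volume-5c526ffa"),
    ("192.168.0.55:3260", "iqn.2010-10.org.openstack:volume-b4495e7e"),
    ("192.168.0.55:3260", "iqn.2010-10.org.openstack:volume-b2c01f6a"),
]
to_delete = filter_targets_for_device(
    discovered, "iqn.2010-10.org.openstack:volume-5c526ffa")
# Only the detached volume's target remains a deletion candidate;
# the sessions for the other two volumes are left alone.
```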

  For example, we create an instance and attach 3 volumes to the
  instance:

# iscsiadm --mode session
tcp: [17] 192.168.0.55:3260,1 iqn.2010-10.org.openstack:volume-5c526ffa-ba88-4fe2-a570-9e35c4880d12
tcp: [18] 192.168.0.55:3260,1 iqn.2010-10.org.openstack:volume-b4495e7e-b611-4406-8cce-4681ac1e36de
tcp: [19] 192.168.0.55:3260,1 iqn.2010-10.org.openstack:volume-b2c01f6a-5723-40e7-9f21-f6b728021b0e
# multipath -ll
330030001 dm-7 IET,VIRTUAL-DISK
size=4.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 23:0:0:1 sdd 8:48 active ready running
330010001 dm-5 IET,VIRTUAL-DISK
size=2.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 21:0:0:1 sdb 8:16 active ready running
330020001 dm-6 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
   `- 22:0:0:1 sdc 8:32 active ready running

  Then we detach one of these volumes:

# nova volume-detach 95f959cd-d180-4063-ae03-9d21dbd7cc50 5c526ffa-ba88-4fe2-a570-9e35c4880d12

  As a result of detaching the volume, the compute node still retains 3 iSCSI
  sessions and the 

[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-02-04 Thread Jesse Pretorius
** Changed in: openstack-ansible/trunk
Milestone: mitaka-2 => mitaka-3

** No longer affects: openstack-ansible/liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  In Progress
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using multiple api_workers, after a "nova live-migration" command:
  a) tunnel flows and tunnel ports are always removed from the old host;
  b) other hosts sometimes do not get the port-delete notification from the
  old host, so on those hosts the tunnel ports and flood flows (everything
  except the unicast flow for the port) for the old host still remain.
  The root cause and fix are explained in comments 12 and 13.

  According to the bug reporter, this bug can also be reproduced as follows.
  Setup: Neutron server HA (3 nodes).
  Hypervisor: ESX with OVSvApp.
  L2 pop is on the network node and off on OVSvApp.

  Condition:
  Enable L2 pop on the OVS agent, with api_workers = 10 on the controller.

  On the network node, the VXLAN tunnel to ESX2 is created, but the tunnel
  to ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the logs of servers and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
options: {df_default="true", in_key=flow, local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"}  <-- This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}
  Port "tap9515e5b3-ec"
  tag: 11
  Interface "tap9515e5b3-ec"
  type: internal
  ovs_version: "2.0.2"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443421/+subscriptions



[Yahoo-eng-team] [Bug 1542014] [NEW] [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

2016-02-04 Thread Aishwarya Thangappa
Public bug reported:

Currently, lbaas has no way to pass region and endpoint-type to barbican
client when accessing the barbican containers.

This becomes an issue in a cloud with multiple regions and endpoint
types, so we would like to have region and endpoint-type as parameters
when requesting a barbican client in barbican_acl.py.
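As a hedged illustration of the request above, the sketch below assembles keystoneauth-style keyword arguments from configuration, passing region and endpoint type only when they are set. The config keys and the assumption that the barbican client accepts `region_name`/`interface` kwargs are illustrative, not lbaas code.

```python
# Hypothetical sketch: build client kwargs from configuration, only adding
# region and endpoint type when configured, so single-region deployments
# keep their current behaviour.

def barbican_client_kwargs(conf):
    """Return keyword arguments for constructing a barbican client."""
    kwargs = {}
    if conf.get("region_name"):
        kwargs["region_name"] = conf["region_name"]
    if conf.get("endpoint_type"):
        # keystoneauth calls the endpoint type the "interface"
        kwargs["interface"] = conf["endpoint_type"]
    return kwargs

kw = barbican_client_kwargs({"region_name": "RegionTwo",
                             "endpoint_type": "internal"})
# kw would then be splatted into the client constructor.
```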

** Affects: neutron
 Importance: Undecided
 Assignee: Aishwarya Thangappa (aishu-ece)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Aishwarya Thangappa (aishu-ece)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542014

Title:
  [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

Status in neutron:
  New

Bug description:
  Currently, lbaas has no way to pass region and endpoint-type to
  barbican client when accessing the barbican containers.

  This becomes an issue in a cloud with multiple regions and endpoint
  types, so we would like to have region and endpoint-type as
  parameters when requesting a barbican client in barbican_acl.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542014/+subscriptions



[Yahoo-eng-team] [Bug 1542008] [NEW] Deleting IdP doesn't show the name of the records to be deleted

2016-02-04 Thread Lin Hua Cheng
Public bug reported:


When deleting an Identity Provider, the delete prompt does not show the
IdP name.

The table class missed to override the get_object_display(self, datum)
method.

** Affects: horizon
 Importance: Low
 Status: Confirmed

** Changed in: horizon
   Status: New => Confirmed

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1542008

Title:
  Deleting IdP doesn't show the name of the records to be deleted

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:

  When deleting an Identity Provider, the delete prompt does not show
  the IdP name.

  The table class missed to override the get_object_display(self, datum)
  method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1542008/+subscriptions



[Yahoo-eng-team] [Bug 1541738] Re: Rule on the tun bridge is not updated in time while migrating the vm

2016-02-04 Thread Sean M. Collins
OK. Marking this as confirmed and moving to wishlist, based on the fact
that this seems to be more about an enhancement to migration time. If
you have a patch feel free to submit it to gerrit and link to this bug
in your commit message.

** Changed in: neutron
   Status: Incomplete => Opinion

** Changed in: neutron
 Assignee: Sean M. Collins (scollins) => (unassigned)

** Changed in: neutron
   Status: Opinion => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541738

Title:
  Rule on the tun bridge is not updated in time while migrating the vm

Status in neutron:
  Confirmed

Bug description:
  ENV:neutron/master, vxlan

  After the vm live migration, we can observe that the vm is active
  using command "nova show". However, the vm network is not ready. When
  processing vm live migration, nova invokes neutron update_port. It
  only updates the host ID of the port attribute, but doesn't update the
  rules on the tun bridge. This means the output port in the rule below
  is not updated to the vxlan port, which should be connected to the
  host node that the vm is migrated to.

  ovs-ofctl dump-flows br-tun | grep 1ef
  cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24

  Due to the reason explained above, the time for VM migration is
  increased. By monitoring the rule status on the tun bridge and the
  network connectivity, we observed that connectivity is restored only
  after the rule on the tun bridge is updated.

  Therefore, the time for vm migration can be reduced by updating the
  rule immediately.
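The suggestion above can be sketched as follows: when update_port moves a port to a new host, the agent would immediately rebuild the table=20 unicast flow so it outputs to the vxlan port facing the new host. The helper below only formats the replacement flow string; it is an illustrative sketch, not neutron's agent code.

```python
# Illustrative-only sketch: format the replacement table=20 unicast flow
# for a migrated port, pointing its output action at the ofport of the
# vxlan tunnel that faces the VM's new host.

def updated_unicast_flow(mac, tun_id, new_ofport):
    """Return the flow spec text for the migrated port's MAC."""
    return ("table=20,priority=1,dl_dst=%s,actions="
            "load:0->NXM_OF_VLAN_TCI[],"
            "load:%#x->NXM_NX_TUN_ID[],output:%d"
            % (mac, tun_id, new_ofport))

# Rebuild the flow from the bug report so it outputs to a (hypothetical)
# new tunnel port 25 instead of the stale port 24.
flow = updated_unicast_flow("5a:c6:4f:34:61:06", 0x1ef, 25)
```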

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541738/+subscriptions



[Yahoo-eng-team] [Bug 1493422] [NEW] Remove partial fix of bug #1274034

2016-02-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

2 changes[1] were merged in order to enable the #1274034 fix, but the 2
remaining changes[2] have not been merged and an alternative solution
has been found, so the first 2 changes[1] introduced dead code.


[1] https://review.openstack.org/141130 
  https://review.openstack.org/157097 
[2] https://review.openstack.org/157634
  https://review.openstack.org/158491

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Remove partial fix of bug #1274034
https://bugs.launchpad.net/bugs/1493422
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1493422] Re: Remove partial fix of bug #1274034

2016-02-04 Thread Cedric Brandily
Solved by
https://review.openstack.org/#q,I61e38fc0d8cf8e79252aabc19a70240be57e4a32,n,z

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493422

Title:
  Remove partial fix of bug #1274034

Status in neutron:
  Fix Released

Bug description:
  2 changes[1] were merged in order to enable the #1274034 fix, but the
  2 remaining changes[2] have not been merged and an alternative
  solution has been found, so the first 2 changes[1] introduced dead
  code.

  
  [1] https://review.openstack.org/141130 
https://review.openstack.org/157097 
  [2] https://review.openstack.org/157634
https://review.openstack.org/158491

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493422/+subscriptions



[Yahoo-eng-team] [Bug 1493422] Re: Remove partial fix of bug #1274034

2016-02-04 Thread Cedric Brandily
Honestly, I don't understand why it ended up in OSSN: that wasn't
my intention.

** Project changed: ossn => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493422

Title:
  Remove partial fix of bug #1274034

Status in neutron:
  Fix Released

Bug description:
  2 changes[1] were merged in order to enable the #1274034 fix, but the
  2 remaining changes[2] have not been merged and an alternative
  solution has been found, so the first 2 changes[1] introduced dead
  code.

  
  [1] https://review.openstack.org/141130 
https://review.openstack.org/157097 
  [2] https://review.openstack.org/157634
https://review.openstack.org/158491

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493422/+subscriptions



[Yahoo-eng-team] [Bug 1542014] Re: [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

2016-02-04 Thread Nate Johnston
The fix linked to was proposed on January 22 and merged on January
26; either this bug or the linkage to that fix is spurious.

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542014

Title:
  [LBaaS V2] Missing region and endpoint parameters in barbican_acl.py

Status in neutron:
  Invalid

Bug description:
  Currently, lbaas has no way to pass region and endpoint-type to
  barbican client when accessing the barbican containers.

  This becomes an issue in a cloud with multiple regions and endpoint
  types, so we would like to have region and endpoint-type as
  parameters when requesting a barbican client in barbican_acl.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542014/+subscriptions



[Yahoo-eng-team] [Bug 1541487] Re: glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api hangs when run with testtools

2016-02-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275815
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=57321d5a1adab07798ad554a922f631c1cd99ce1
Submitter: Jenkins
Branch:master

commit 57321d5a1adab07798ad554a922f631c1cd99ce1
Author: Victor Stinner 
Date:   Wed Feb 3 17:35:16 2016 +0100

Fix _wait_on_task_execution()

Attempt to fix a race condition in test_all_task_api of
glance.tests.integration.v2.test_tasks_api.TestTasksApi.
_wait_on_task_execution() must use eventlet.sleep() instead of time.sleep()
to give control to the pending server task, instead of blocking the whole
process.

Note: The time module is not monkey-patched, so time.sleep() really hangs
the current thread for the specified duration. For an unknown reason, the
test passes in most cases, but always fails with testtools.

Change-Id: I785a7cf0d556ad72c443946adac3b4f5f361edd8
Closes-Bug: #1541487


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1541487

Title:
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api
  hangs when run with testtools

Status in Glance:
  Fix Released

Bug description:
  When glance.tests.integration.v2.test_tasks_api is run directly with
  testtools, the test fails:

  "python -u -m testtools.run
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

  For an unknown reason, the test passes when run with testr:

  "testr run
  glance.tests.integration.v2.test_tasks_api.TestTasksApi.test_all_task_api"

  It looks like the _wait_on_task_execution() method of
  glance/tests/integration/v2/test_tasks_api.py is not reliable. The
  method uses time.sleep() to give time to the "server" to execute a
  task run in the background. Problem: in practice, the "server" is in the
  same process as the client; eventlet is used to schedule tasks of
  the server. time.sleep() really blocks the whole process, including
  the server which is supposed to run the task.

  Sorry, I'm unable to explain why the test passes with testr; eventlet,
  taskflow, etc. are too magic for my little brain :-)

  IMHO we must enable monkey-patch to run Glance unit and integration
  tests.

  Or at least, _wait_on_task_execution() must call eventlet.sleep(), not
  time.sleep().

  Note: time.sleep() is not monkey-patched when the test is run with
  testtools or testr, the test runner doesn't change that.
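The wait described above boils down to a polling loop, which can be sketched with an injectable sleep primitive: under eventlet the caller must pass eventlet.sleep (which yields to other green threads, letting the server task run) rather than time.sleep (which blocks the whole process). This is an illustrative sketch, not Glance's actual helper.

```python
# Illustrative sketch of the _wait_on_task_execution() polling pattern.
# The sleep primitive is a parameter: pass eventlet.sleep in a green-thread
# process so the pending server task gets a chance to run between polls.

def wait_on_task(fetch_status, sleep, interval=0.5, max_wait=15.0):
    """Poll fetch_status() until it reports success, or give up."""
    waited = 0.0
    while waited < max_wait:
        if fetch_status() == "success":
            return True
        sleep(interval)  # must cooperate with the scheduler under eventlet
        waited += interval
    return False

# Simulated server task that finishes on the third poll.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "success" if calls["n"] >= 3 else "processing"

done = wait_on_task(fake_status, sleep=lambda s: None)
```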

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1541487/+subscriptions



[Yahoo-eng-team] [Bug 1542024] [NEW] keystoneauth1.access.service_catalog.ServiceCatalog is missing factory method

2016-02-04 Thread Eric Larese
Public bug reported:

The file keystoneauth1.access.service_catalog.ServiceCatalog is missing
a factory() method equivalent to the factory() method provided by
keystoneclient.service_catalog.ServiceCatalog.factory().  This method
allows for creation of a ServiceCatalog object without knowing the
specific API version in advance.
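A minimal sketch of the kind of version dispatch such a factory() performs is shown below. The key checks are illustrative assumptions about the v2/v3 token body shapes, not keystoneauth's or keystoneclient's actual implementation.

```python
# Hypothetical sketch: pick a service-catalog version from the shape of a
# token body, the way a factory() method lets callers avoid hard-coding
# the API version. Key names here are assumptions for illustration.

def catalog_version(resource_dict):
    """Guess which service-catalog format a token body carries."""
    if "serviceCatalog" in resource_dict.get("access", {}):
        return "v2"
    if "catalog" in resource_dict.get("token", {}):
        return "v3"
    raise ValueError("unrecognized service catalog format")

v2_body = {"access": {"serviceCatalog": []}}
v3_body = {"token": {"catalog": []}}
```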

** Affects: keystoneauth
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542024

Title:
  keystoneauth1.access.service_catalog.ServiceCatalog is missing factory
  method

Status in keystoneauth:
  New

Bug description:
  The file keystoneauth1.access.service_catalog.ServiceCatalog is
  missing a factory() method equivalent to the factory() method provided
  by keystoneclient.service_catalog.ServiceCatalog.factory().  This
  method allows for creation of a ServiceCatalog object without knowing
  the specific API version in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1542024/+subscriptions



[Yahoo-eng-team] [Bug 1542024] Re: keystoneauth1.access.service_catalog.ServiceCatalog is missing factory method

2016-02-04 Thread Matthew Edmonds
looks like this was removed by
https://github.com/openstack/keystoneauth/commit/473b70566a88ce84967654e5fc2dd87e04538fb9

The assumption there is that nobody would ever go to the ServiceCatalog
directly, but unfortunately nova does. This issue was found when we were
looking at
https://github.com/openstack/nova/blob/f19ddc4c507dfc64e4d7f930caefab5a5e1680b8/nova/context.py#L50
and trying to see how to update that not to be specific to keystone v2.
Maybe you have an alternative suggestion on how to fix nova, Jamie?

** Project changed: keystone => keystoneauth

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542024

Title:
  keystoneauth1.access.service_catalog.ServiceCatalog is missing factory
  method

Status in keystoneauth:
  New

Bug description:
  The file keystoneauth1.access.service_catalog.ServiceCatalog is
  missing a factory() method equivalent to the factory() method provided
  by keystoneclient.service_catalog.ServiceCatalog.factory().  This
  method allows for creation of a ServiceCatalog object without knowing
  the specific API version in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1542024/+subscriptions



[Yahoo-eng-team] [Bug 1541714] Re: DVR routers are not created on a compute node that runs agent in 'dvr' mode

2016-02-04 Thread Swaminathan Vasudevan
It was an invalid user configuration.

The "dvr" node was not configured with the right agent mode, and so
this issue was seen.

Please ignore this bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541714

Title:
  DVR routers are not created on a compute node that runs agent in 'dvr'
  mode

Status in neutron:
  Invalid

Bug description:
  DVR routers are not created on a compute node that is running L3 agent
  in "dvr" mode.

  This might have been introduced by the latest patch that changed the 
scheduling behavior.
  https://review.openstack.org/#/c/254837/

  Steps to reproduce:

  1. Stack up two nodes: a dvr_snat node and a dvr node.
  2. Create a Network
  3. Create a Subnet
  4. Create a Router
  5. Add Subnet to the Router
  6. Create a VM on the "dvr_snat" node.
  Everything works fine here. We can see the router-namespace, snat-namespace 
and the dhcp-namespace.

  7. Now create a VM and force it to be created on the second node (dvr node):
 - nova boot --flavor xyz --image abc --net net-id yyy-id --availability-zone nova:dvr-node myinstance2

  The instance is created on the second node,
  but the router namespace is missing on that node.

  The router is scheduled to the dvr-snat node, but not to the compute
  node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541714/+subscriptions



[Yahoo-eng-team] [Bug 1332133] Re: Description is mandatory parameter when creating Security Group

2016-02-04 Thread Lin Hua Cheng
fixed here for horizon:
https://github.com/openstack/horizon/commit/d114dfe00967fa5ecb24122692c327c058ef4e23#diff-1b208a86527f69c9e048ceab631bfed1

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332133

Title:
  Description is mandatory parameter when creating Security Group

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce:
  1. Create security group.

  Actual result:
  Description is a mandatory parameter when creating a Security Group.

  Expected result:
  Description should not be a mandatory parameter when creating a Security Group.

  Explanation:
  1. Description is not mandatory information.
  2. Inconsistency with other OpenStack items (no other item in OpenStack
  requires a mandatory Description).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1332133/+subscriptions



[Yahoo-eng-team] [Bug 1292175] Re: Display N1K policy profile information in instance

2016-02-04 Thread Lin Hua Cheng
closing horizon, since n1k code moved to
https://github.com/openstack/horizon-cisco-ui

** Also affects: horizon-cisco-ui
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
   Status: Invalid => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1292175

Title:
  Display N1K policy profile information in instance

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in horizon-cisco-ui:
  New

Bug description:
  When an N1K profile is associated with an instance, currently the N1K
  profile information is not displayed in the instance detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1292175/+subscriptions



[Yahoo-eng-team] [Bug 1541738] [NEW] Rule on the tun bridge is not updated in time while migrating the vm

2016-02-04 Thread jingting
Public bug reported:

ENV:neutron/master, vxlan

After the vm live migration, we can observe that the vm is active using
command "nova show". However, the vm network is not ready. When
processing vm live migration, nova invokes neutron update_port. It only
updates the host ID of the port attribute, but doesn't update the rules
on the tun bridge. This means the output port in the rule below is not
updated to the vxlan port, which should be connected to the host node
that the vm is migrated to.

ovs-ofctl dump-flows br-tun | grep 1ef
cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24

Due to the reason explained above, the time for VM migration is
increased. By monitoring the rule status on the tun bridge and the
network connectivity, we observed that connectivity is restored only
after the rule on the tun bridge is updated.

Therefore, the time for vm migration can be reduced by updating the rule
immediately.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541738

Title:
  Rule on the tun bridge is not updated in time while migrating the vm

Status in neutron:
  New

Bug description:
  ENV:neutron/master, vxlan

  After the vm live migration, we can observe that the vm is active
  using command "nova show". However, the vm network is not ready. When
  processing vm live migration, nova invokes neutron update_port. It
  only updates the host ID of the port attribute, but doesn't update the
  rules on the tun bridge. This means the output port in the rule below
  is not updated to the vxlan port, which should be connected to the
  host node that the vm is migrated to.

  ovs-ofctl dump-flows br-tun | grep 1ef
  cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24

  Due to the reason explained above, the time for VM migration is
  increased. By monitoring the rule status on the tun bridge and the
  network connectivity, we observed that connectivity is restored only
  after the rule on the tun bridge is updated.

  Therefore, the time for vm migration can be reduced by updating the
  rule immediately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541738/+subscriptions



[Yahoo-eng-team] [Bug 1541742] [NEW] fullstack tests break when tearing down database

2016-02-04 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/41/265041/7/check/gate-neutron-dsvm-fullstack/8ac64cd/testr_results.html.gz

Late runs fail with the same errors:

Traceback (most recent call last):
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/fixture.py", line 125, in cleanUp
    return self._cleanups(raise_errors=raise_first)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/callmany.py", line 88, in __call__
    reraise(error[0], error[1], error[2])
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/fixtures/callmany.py", line 82, in __call__
    cleanup(*args, **kwargs)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py", line 797, in tearDownResources
    resource[1].finishedWith(getattr(test, resource[0]), result)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py", line 509, in finishedWith
    self._clean_all(resource, result)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/testresources/__init__.py", line 478, in _clean_all
    self.clean(resource)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py", line 127, in clean
    resource.database.engine)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py", line 263, in drop_all_objects
    self.impl.drop_all_objects(engine)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/provision.py", line 415, in drop_all_objects
    conn.execute(schema.DropConstraint(fkc))
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 968, in _execute_ddl
    compiled
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 200, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/cursors.py", line 146, in execute
    result = self._query(query)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/cursors.py", line 296, in _query
    conn.query(q)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py", line 819, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py", line 1001, in _read_query_result
    result.read()
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py", line 1285, in read
    first_packet = self.connection._read_packet()
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py", line 945, in _read_packet
    packet_header = self._read_bytes(4)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/connections.py", line 971, in _read_bytes
    data = self._rfile.read(num_bytes)
  File "/opt/stack/new/neutron/.tox/dsvm-fullstack-constraints/local/lib/python2.7/site-packages/pymysql/_socketio.py", line 59, in readinto
    return self._sock.recv_into(b)
  File 
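The failure happens while oslo.db's drop_all_objects is removing
foreign-key constraints before dropping tables. The ordering matters: a
referenced table cannot be torn down while rows in another table still
point at it, which is why the cleanup drops the constraints first. A
minimal, self-contained illustration of that ordering problem using the
stdlib sqlite3 module (SQLite enforces the violation at DROP TABLE time
rather than via DROP CONSTRAINT; table names here are made up):

```python
import sqlite3

# Two tables linked by a foreign key: ports.network_id -> networks.id.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, "
             "network_id INTEGER REFERENCES networks(id))")
conn.execute("INSERT INTO networks VALUES (1)")
conn.execute("INSERT INTO ports VALUES (1, 1)")

def drop_parent_first(c):
    """Try to drop the referenced table while child rows still exist."""
    try:
        c.execute("DROP TABLE networks")
        return "dropped"
    except sqlite3.IntegrityError:
        return "blocked by foreign key"

result = drop_parent_first(conn)
print(result)

# Respecting the dependency order (or removing the constraint first,
# as oslo.db does) lets the teardown complete.
conn.execute("DROP TABLE ports")
conn.execute("DROP TABLE networks")
```

The traceback above dies deeper still, inside pymysql's socket read, so
the constraint drop itself appears to hang or lose its connection rather
than fail cleanly.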

[Yahoo-eng-team] [Bug 1519269] Re: Release request for networking-fujitsu for stable/liberty

2016-02-04 Thread Yushiro FURUKAWA
** Changed in: networking-fujitsu
 Assignee: (unassigned) => Yushiro FURUKAWA (y-furukawa-2)

** Changed in: networking-fujitsu
   Status: New => Fix Released

** Changed in: networking-fujitsu
Milestone: None => 1.0.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519269

Title:
  Release request for networking-fujitsu for stable/liberty

Status in networking-fujitsu:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Branch name:

 stable/liberty

  Tags name:

 1.0.0

  
  The Liberty release of networking-fujitsu

* Mechanism driver for FUJITSU Converged Fabric Switch

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-fujitsu/+bug/1519269/+subscriptions
