[Yahoo-eng-team] [Bug 1674946] Re: cloud-init fails with "Unknown network_data link type: dvs"

2017-04-05 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-90-g61eb03fe-
0ubuntu1

---
cloud-init (0.7.9-90-g61eb03fe-0ubuntu1) zesty; urgency=medium

  * New upstream snapshot.
- OpenStack: add 'dvs' to the list of physical link types.
  (LP: #1674946)

 -- Scott Moser   Mon, 03 Apr 2017 11:10:38 -0400

** Changed in: cloud-init (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1674946

Title:
  cloud-init fails with "Unknown network_data link type: dvs"

Status in cloud-init:
  Fix Committed
Status in OpenStack Compute (nova):
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact]
  When a config drive provides network_data.json on OpenStack running on
  ESXi, cloud-init fails to configure networking.

  Console log and /var/log/cloud-init.log will show:
   ValueError: Unknown network_data link type: dvs

  This would also occur when the type of the network device, as declared
  to cloud-init, was 'hw_veb', 'hyperv', or 'vhostuser'.
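The failing check lives in convert_net_json() in cloudinit/sources/helpers/openstack.py (per the traceback later in this report). A minimal sketch of the fix's shape, assuming a whitelist constant named KNOWN_PHYSICAL_TYPES (the exact contents in cloud-init may differ):

```python
# Hedged sketch of the link-type validation in cloud-init's
# convert_net_json(); the tuple contents are an assumption based on
# this bug report (the fix adds 'dvs' to the accepted physical types).
KNOWN_PHYSICAL_TYPES = (
    None, 'bridge', 'dvs', 'ethernet', 'hw_veb', 'hyperv',
    'phy', 'tap', 'vhostuser', 'vif',
)

def check_link_type(link):
    """Raise ValueError for unrecognized network_data link types."""
    if link['type'] not in KNOWN_PHYSICAL_TYPES:
        raise ValueError(
            'Unknown network_data link type: %s' % link['type'])
```

With 'dvs' in the whitelist, the ESXi/DVS case no longer aborts the init-local stage.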

  [Test Case]
  Launch an instance with a config drive on an affected cloud (for
  example, OpenStack on ESXi, which reports the 'dvs' link type).

  [Regression Potential]
  Low to none. cloud-init is relaxing its requirements and now accepts
  link types that it previously rejected as invalid.

  This is very similar to the change in bug 1642679.
  An upstream OpenStack merge proposal to stop this from recurring is at
  https://review.openstack.org/#/c/400883/
  === End SRU Template ===

  When booting an OpenStack instance, cloud-init fails with:

  [   33.307325] cloud-init[445]: Cloud-init v. 0.7.9 running 'init-local' at Mon, 20 Mar 2017 14:42:58 +. Up 31.06 seconds.
  [   33.368434] cloud-init[445]: 2017-03-20 14:43:00,779 - util.py[WARNING]: failed stage init-local
  [   33.449886] cloud-init[445]: failed run of stage init-local
  [   33.490863] cloud-init[445]: 

  [   33.542214] cloud-init[445]: Traceback (most recent call last):
  [   33.585204] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 513, in status_wrapper
  [   33.654579] cloud-init[445]: ret = functor(name, args)
  [   33.696372] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 269, in main_init
  [   33.755593] cloud-init[445]: init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL))
  [   33.809124] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 622, in apply_network_config
  [   33.847161] cloud-init[445]: netcfg, src = self._find_networking_config()
  [   33.876562] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 609, in _find_networking_config
  [   33.916335] cloud-init[445]: if self.datasource and hasattr(self.datasource, 'network_config'):
  [   33.956207] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py", line 147, in network_config
  [   34.008213] cloud-init[445]: self.network_json, known_macs=self.known_macs)
  [   34.049714] cloud-init[445]:   File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/openstack.py", line 627, in convert_net_json
  [   34.104226] cloud-init[445]: 'Unknown network_data link type: %s' % link['type'])
  [   34.144219] cloud-init[445]: ValueError: Unknown network_data link type: dvs
  [   34.175934] cloud-init[445]: 


  I am using Neutron with the Simple DVS plugin.

  Related bugs:
   * bug 1674946: cloud-init fails with "Unknown network_data link type: dvs"
   * bug 1642679: OpenStack network_config.json implementation fails on Hyper-V compute nodes

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1674946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680311] [NEW] Vmware NSX Set Router Gateway error

2017-04-05 Thread YuYang
Public bug reported:

I use OpenStack Newton with NSX 6.2. When I set a router's gateway, I
get the error below:


2017-04-06 11:39:22.198 21 ERROR vmware_nsx.plugins.nsx_v.plugin [req-49f17b50-4dfa-4481-9ab8-58183317c012 1f33b40283b24adcaaf727ed11424523 4e8be975a6924817bdfb95f9cd69ab06 - - -] Failed to update gw_info {u'network_id': u'5f1a565d-ce51-4485-8ef4-626f59be9a15'} on router f4a4cd9c-4203-4af6-83d9-c6356057ab6b
2017-04-06 11:39:23.979 21 WARNING vmware_nsx.plugins.nsx_v.vshield.edge_utils [req-49f17b50-4dfa-4481-9ab8-58183317c012 1f33b40283b24adcaaf727ed11424523 4e8be975a6924817bdfb95f9cd69ab06 - - -] Failed to find edge_id to delete dhcp binding for port 1156334a-ac9a-45e3-9f3a-667ad73ef7c9
2017-04-06 11:39:41.917 19 INFO neutron.wsgi [req-f19d9053-541d-43a0-b13a-29ba179da647 33bf0ad9495e421e9619ec4df4d33a20 e7618c96038a431e8a2534386949872c - - -] 192.168.217.64,192.168.217.61 - - [06/Apr/2017 11:39:41] "GET /v2.0/ports.json?tenant_id=232c80474d534e20bc3406809da17398_id=6de3a6b2-7edd-498a-ba3a-56d427980abb HTTP/1.1" 200 1130 0.115649
2017-04-06 11:39:42.104 19 INFO neutron.wsgi [req-73e29279-04d7-4e10-a141-7cbb7664d39d 33bf0ad9495e421e9619ec4df4d33a20 e7618c96038a431e8a2534386949872c - - -] 192.168.217.64,192.168.217.61 - - [06/Apr/2017 11:39:42] "GET /v2.0/networks.json?id=509d63f0-430b-4c78-9736-f560abbd856e HTTP/1.1" 200 723 0.180216
2017-04-06 11:39:42.158 19 INFO neutron.wsgi [req-961f9fed-82ff-48a3-8d4b-ac6321150566 33bf0ad9495e421e9619ec4df4d33a20 e7618c96038a431e8a2534386949872c - - -] 192.168.217.64,192.168.217.61 - - [06/Apr/2017 11:39:42] "GET /v2.0/floatingips.json?fixed_ip_address=10.0.77.8_id=1bc8b7dd-d090-496e-a3e2-5af3413249a3 HTTP/1.1" 200 193 0.047324
2017-04-06 11:39:42.357 19 INFO neutron.wsgi [req-599ac427-9b77-4f5e-a767-bb9656e3915a 33bf0ad9495e421e9619ec4df4d33a20 e7618c96038a431e8a2534386949872c - - -] 192.168.217.64,192.168.217.61 - - [06/Apr/2017 11:39:42] "GET /v2.0/subnets.json?id=42e160ec-305c-4199-b83f-ed0ef20927be HTTP/1.1" 200 852 0.189339
2017-04-06 11:39:42.860 19 INFO neutron.wsgi [req-d3ffe38e-c74a-4bfe-80bb-e4b9b067b91f 33bf0ad9495e421e9619ec4df4d33a20 e7618c96038a431e8a2534386949872c - - -] 192.168.217.64,192.168.217.61 - - [06/Apr/2017 11:39:42] "GET /v2.0/ports.json?network_id=509d63f0-430b-4c78-9736-f560abbd856e_owner=network%3Adhcp HTTP/1.1" 200 1008 0.493123
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource [req-49f17b50-4dfa-4481-9ab8-58183317c012 1f33b40283b24adcaaf727ed11424523 4e8be975a6924817bdfb95f9cd69ab06 - - -] update failed: No details.
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource Traceback (most recent call last):
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 79, in resource
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource result = method(request=request, **args)
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/api/v2/base.py", line 604, in update
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource return self._update(request, id, body, **kwargs)
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/db/api.py", line 88, in wrapped
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource setattr(e, '_RETRY_EXCEEDED', True)
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource self.force_reraise()
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/db/api.py", line 84, in wrapped
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource return f(*args, **kwargs)
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource ectxt.value = e.inner_exc
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource self.force_reraise()
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-04-06 11:39:43.172 21 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1677391] Re: Glance does not accept 'Range' in requests.

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/441501
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=423f3401741a64f0ead66bcb8cb0cd8cafb02cf8
Submitter: Jenkins
Branch:master

commit 423f3401741a64f0ead66bcb8cb0cd8cafb02cf8
Author: Dharini Chandrasekar 
Date:   Fri Mar 3 22:38:34 2017 +

Accept Range requests and set appropriate response

Currently glance v2 API incorrectly accepts ‘Content-Range’ header
for random image access and does not set response headers.
As per rfc7233, ‘Range’ requests should be accepted and ‘Content-Range’
must be returned in the response headers.

This patch enables Glance v2 API to accept the more appropriate ‘Range’
requests and sets ‘Content-Range’ response header.

For backward compatibility with pre-Pike Glance clients, the incorrect
'Content-Range' header will be accepted silently in perpetuity.
Thus this patch contains tests for 'Content-Range' in requests to
prevent regressions.

DocImpact
Implements lite-spec I5bdadde682a0c50836bd95e2a6651d6e7e18f172
Closes-Bug: #1677391

Co-Authored-By: Hemanth Makkapati 

Change-Id: Ib7ebc792c32995751744be3f36cbc9a0c1eead2a


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1677391

Title:
  Glance does not accept 'Range' in requests.

Status in Glance:
  Fix Released

Bug description:
  Glance currently supports partial download requests if provided with a 
Content-Range header in the GET request. [1]
  However, Glance needs to add support for 'Range' requests and serve partial 
image download requests accordingly as per rfc7233. [2]

  [1] 
https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L287-L336
  [2] https://tools.ietf.org/html/rfc7233#section-3.1
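For illustration, a minimal sketch (not Glance's actual implementation) of the RFC 7233 behavior being requested: parse a 'Range: bytes=start-end' request header and build the 'Content-Range' header for a 206 Partial Content response.

```python
# Hedged sketch of RFC 7233 byte-range handling: parse_range returns
# the (start, end) offsets to serve, or None to serve the full body.
import re

def parse_range(header, size):
    """Parse a 'bytes=start-end' Range value against a body of `size` bytes."""
    m = re.fullmatch(r'bytes=(\d*)-(\d*)', header or '')
    if not m or (not m.group(1) and not m.group(2)):
        return None                              # absent or malformed
    if m.group(1):
        start = int(m.group(1))
        end = int(m.group(2)) if m.group(2) else size - 1
    else:
        start = max(size - int(m.group(2)), 0)   # suffix range: last N bytes
        end = size - 1
    if start > end or start >= size:
        return None                              # unsatisfiable range
    return start, min(end, size - 1)

def content_range(start, end, size):
    """Header value for the 206 response, e.g. 'bytes 0-99/1000'."""
    return 'bytes %d-%d/%d' % (start, end, size)
```

A real implementation would also return 416 Range Not Satisfiable for unsatisfiable ranges; the sketch collapses that case into "serve the whole image" for brevity.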

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1677391/+subscriptions



[Yahoo-eng-team] [Bug 1680310] [NEW] Accept Range requests and set appropriate response

2017-04-05 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/441501
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 423f3401741a64f0ead66bcb8cb0cd8cafb02cf8
Author: Dharini Chandrasekar 
Date:   Fri Mar 3 22:38:34 2017 +

Accept Range requests and set appropriate response

Currently glance v2 API incorrectly accepts ‘Content-Range’ header
for random image access and does not set response headers.
As per rfc7233, ‘Range’ requests should be accepted and ‘Content-Range’
must be returned in the response headers.

This patch enables Glance v2 API to accept the more appropriate ‘Range’
requests and sets ‘Content-Range’ response header.

For backward compatibility with pre-Pike Glance clients, the incorrect
'Content-Range' header will be accepted silently in perpetuity.
Thus this patch contains tests for 'Content-Range' in requests to
prevent regressions.

DocImpact
Implements lite-spec I5bdadde682a0c50836bd95e2a6651d6e7e18f172
Closes-Bug: #1677391

Co-Authored-By: Hemanth Makkapati 

Change-Id: Ib7ebc792c32995751744be3f36cbc9a0c1eead2a

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: doc glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1680310

Title:
  Accept Range requests and set appropriate response

Status in Glance:
  New

Bug description:
  https://review.openstack.org/441501
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 423f3401741a64f0ead66bcb8cb0cd8cafb02cf8
  Author: Dharini Chandrasekar 
  Date:   Fri Mar 3 22:38:34 2017 +

  Accept Range requests and set appropriate response
  
  Currently glance v2 API incorrectly accepts ‘Content-Range’ header
  for random image access and does not set response headers.
  As per rfc7233, ‘Range’ requests should be accepted and ‘Content-Range’
  must be returned in the response headers.
  
  This patch enables Glance v2 API to accept the more appropriate ‘Range’
  requests and sets ‘Content-Range’ response header.
  
  For backward compatibility with pre-Pike Glance clients, the incorrect
  'Content-Range' header will be accepted silently in perpetuity.
  Thus this patch contains tests for 'Content-Range' in requests to
  prevent regressions.
  
  DocImpact
  Implements lite-spec I5bdadde682a0c50836bd95e2a6651d6e7e18f172
  Closes-Bug: #1677391
  
  Co-Authored-By: Hemanth Makkapati 
  
  Change-Id: Ib7ebc792c32995751744be3f36cbc9a0c1eead2a

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1680310/+subscriptions



[Yahoo-eng-team] [Bug 1680305] [NEW] remote securitygroup address pairs update

2017-04-05 Thread 刘成乾
Public bug reported:

1. create two security groups
sg-test-1:
  id   523ea2a0-8b73-4a9d-b122-68030418f9a6
  security_group_rules   egress, IPv4
 egress, IPv6
 ingress, IPv4, icmp, remote_group_id: 
56dd2c05-fd80-4f1d-a17f-f1be73a42a82

sg-test-2:
  id   56dd2c05-fd80-4f1d-a17f-f1be73a42a82
  security_group_rules   egress, IPv4
 egress, IPv6
2. create two vms with security group
 vm1(10.20.10.12)   port idb11b8dde-69cb-4a1e-bd9c-20db51748c52 
sg-test-1
 vm2(10.20.10.6)port idffcd8854-f4f6-4d66-84cd-ad29192ab778 
sg-test-2

3. in vm1's compute node

   #iptables -nvL  neutron-openvswi-ib11b8dde-6;
   
   0 0 RETURN icmp --  *  *   0.0.0.0/0  0.0.0.0/0  match-set NIPv456dd2c05-fd80-4f1d-a17f- src
   #ipset list NIPv456dd2c05-fd80-4f1d-a17f-
   Name: NIPv456dd2c05-fd80-4f1d-a17f-
   Type: hash:net
   Revision: 3
   Header: family inet hashsize 1024 maxelem 65536
   Size in memory: 19216
   References: 1
   Members:
   10.20.10.6

4. Update vm2's port
   #neutron port-update ffcd8854-f4f6-4d66-84cd-ad29192ab778 
--allowed-address-pairs type=dict list=true \
ip_address=10.20.10.66,mac_address=fa:16:3e:02:70:85

5. Run ipset list NIPv456dd2c05-fd80-4f1d-a17f- again; address
10.20.10.66 is not found

release used: ocata
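For reference, the expected behavior can be modeled with a small sketch (the function name is hypothetical, not neutron's actual agent code): the ipset backing a remote_group_id rule should hold the union of each member port's fixed IPs and its allowed-address-pairs, so the update in step 4 should have added 10.20.10.66 to the set.

```python
# Hedged model (not neutron's code) of what the remote-group ipset
# members should be after a port update: fixed IPs plus any
# allowed-address-pairs of every port in the security group.
def expected_ipset_members(ports):
    """Return the set of IPs the remote-group ipset should contain."""
    members = set()
    for port in ports:
        members.update(port.get('fixed_ips', []))
        members.update(pair['ip_address']
                       for pair in port.get('allowed_address_pairs', []))
    return members
```

Applied to vm2's port from this report, the set should be {'10.20.10.6', '10.20.10.66'}; the bug is that the agent never refreshes the ipset when only allowed-address-pairs change.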

** Affects: neutron
 Importance: Undecided
 Assignee: 刘成乾 (liuchengqian90)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => 刘成乾 (liuchengqian90)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680305

Title:
  remote securitygroup address pairs update

Status in neutron:
  New

Bug description:
  1. create two security groups
  sg-test-1:
id   523ea2a0-8b73-4a9d-b122-68030418f9a6
security_group_rules   egress, IPv4
   egress, IPv6
   ingress, IPv4, icmp, 
remote_group_id: 56dd2c05-fd80-4f1d-a17f-f1be73a42a82

  sg-test-2:
id   56dd2c05-fd80-4f1d-a17f-f1be73a42a82
security_group_rules   egress, IPv4
   egress, IPv6
  2. create two vms with security group
   vm1(10.20.10.12)   port idb11b8dde-69cb-4a1e-bd9c-20db51748c52 
sg-test-1
   vm2(10.20.10.6)port idffcd8854-f4f6-4d66-84cd-ad29192ab778 
sg-test-2

  3. in vm1's compute node

 #iptables -nvL  neutron-openvswi-ib11b8dde-6;
 
  0 0 RETURN icmp --  *  *   0.0.0.0/0  0.0.0.0/0  match-set NIPv456dd2c05-fd80-4f1d-a17f- src
 #ipset list NIPv456dd2c05-fd80-4f1d-a17f-
 Name: NIPv456dd2c05-fd80-4f1d-a17f-
 Type: hash:net
 Revision: 3
 Header: family inet hashsize 1024 maxelem 65536
 Size in memory: 19216
 References: 1
 Members:
 10.20.10.6

  4. Update vm2's port
 #neutron port-update ffcd8854-f4f6-4d66-84cd-ad29192ab778 
--allowed-address-pairs type=dict list=true \
  ip_address=10.20.10.66,mac_address=fa:16:3e:02:70:85

  5. Run ipset list NIPv456dd2c05-fd80-4f1d-a17f- again; address
  10.20.10.66 is not found

  release used: ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680305/+subscriptions



[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from code

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/447286
Committed: 
https://git.openstack.org/cgit/openstack/magnum/commit/?id=31ee6a32b1cb3791e2d2b41d695a1a6757fb42bf
Submitter: Jenkins
Branch:master

commit 31ee6a32b1cb3791e2d2b41d695a1a6757fb42bf
Author: ChangBo Guo(gcb) 
Date:   Sun Mar 19 20:39:17 2017 +0800

Remove old oslo.messaging transport aliases

Those are remnants from the oslo-incubator times. Also, oslo.messaging
deprecated [1] transport aliases since 5.2.0+ that is the minimal
version supported for stable/newton. The patch that bumped the minimal
version for Magnum landed 3 months+ ago, so we can proceed ripping
those aliases from the code base.

[1] I314cefa5fb1803fa7e21e3e34300e5ced31bba89

Closes-Bug: #1424728

Change-Id: I7c35f71f601f3191de4a97866d52db343e5735b1


** Changed in: magnum
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424728

Title:
  Remove old rpc alias(es) from code

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in grenade:
  New
Status in Ironic:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Confirmed
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  We have several TRANSPORT_ALIASES entries from way back (Essex, Havana)
  http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48

  We need a way to warn end users that they need to fix their nova.conf
  so these aliases can be removed in a later release (a full cycle later?).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1424728/+subscriptions



[Yahoo-eng-team] [Bug 1680289] [NEW] Keystone logs fernet token when token is invalid

2017-04-05 Thread Divya K Konoor
Public bug reported:

If an incorrect token is passed for keystone validation (verify token),
Keystone logs the token:
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/token_formatters.py#L94

As this is either an invalid or expired token and of no use to anyone,
logging it does not pose any vulnerability (unless an expired fernet
token can be used for anything). In any case, it might be better not to
log the entire token.
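A minimal sketch of the suggested mitigation (not keystone's actual code; the `redact` helper is made up for illustration): log only a short prefix of the token rather than the full value.

```python
# Hedged sketch: log a truncated, non-reusable prefix of an
# unrecognized fernet token instead of the whole token.
import logging

LOG = logging.getLogger(__name__)

def redact(token, keep=8):
    """Return only a short prefix of the token, safe for log files."""
    return token[:keep] + '...'

def log_unrecognized_token(token):
    # The full token never reaches the log, only the redacted prefix.
    LOG.warning('Could not recognize token: %s', redact(token))
```

Even if an expired token turned out to be reusable somewhere, an eight-character prefix in the logs would not be.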

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1680289

Title:
  Keystone logs fernet token when token is invalid

Status in OpenStack Identity (keystone):
  New

Bug description:
  If an incorrect token is passed for keystone validation (verify token), 
Keystone logs the token :
  
https://github.com/openstack/keystone/blob/master/keystone/token/providers/fernet/token_formatters.py#L94

  As this is either an invalid or expired token and of no use to anyone
  , logging this does not pose any vulnerability (unless an expired
  fernet token can be used for anything). In any case, it might be
  better to not log the entire token .

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1680289/+subscriptions



[Yahoo-eng-team] [Bug 1406776] Re: Trying to delete a grant with an invalid role ID causes unnecessary processing

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/450938
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=a1683df77f8b71de938b7aef577d05d883667ccd
Submitter: Jenkins
Branch:master

commit a1683df77f8b71de938b7aef577d05d883667ccd
Author: Anthony Washington 
Date:   Tue Mar 28 19:17:07 2017 +

Remove unnecessary processing when deleting grant.

When deleting a grant with an invalid role ID the
exception RoleNotFound will be thrown. However, before
that exception is thrown a lot of unnecessary work
is being done.

In order to improve that process, this patch checks that
the role exists before continuing to delete a grant.

Change-Id: I4f8a26a71096c7dd456195751562cba26c9853d7
Closes-Bug: 1406776


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1406776

Title:
  Trying to delete a grant with an invalid role ID causes unnecessary
  processing

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Trying to delete a grant with an invalid role ID will throw a
  RoleNotFound exception.  However, the check for this is buried in the
  driver... after the assignment manager has already carried out
  a bunch of processing (e.g. sending out revocations). Is this by design (e.g.
  let people clear up tokens for a role ID that somehow has been already
  deleted) or just an error?  Given that some processing for the
  revoking tokens also happens AFTER the driver call to delete the grant
  (which would abort on RoleNotFound), I'm kind of guessing the latter.
  Views?
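The shape of the merged fix, as a hedged sketch (the callback parameters are hypothetical stand-ins, not keystone's real signatures): validate the role ID up front so an invalid ID raises RoleNotFound before any revocation work is done.

```python
# Hedged sketch of the fix's ordering: cheap existence check first,
# expensive revocation work only for roles that actually exist.
class RoleNotFound(Exception):
    """Stand-in for keystone's RoleNotFound exception."""

def delete_grant(known_roles, role_id, revoke_tokens, delete_assignment):
    if role_id not in known_roles:
        raise RoleNotFound(role_id)      # bail out before any side effects
    revoke_tokens(role_id)               # revocation events, cache invalidation
    delete_assignment(role_id)           # driver-level delete
```

The key point is the ordering: previously the RoleNotFound check was buried in the driver, after the revocations had already been issued.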

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1406776/+subscriptions



[Yahoo-eng-team] [Bug 1635629] Re: neutron-keepalived-state-change logs are removed with router

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453805
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ebc30229c4eec9f7e025371a0a6a08b10f955041
Submitter: Jenkins
Branch:master

commit ebc30229c4eec9f7e025371a0a6a08b10f955041
Author: Ihar Hrachyshka 
Date:   Wed Apr 5 17:40:57 2017 +

Log messages for keepalived-state-change in syslog

We don't maintain a long standing log file for the helper process, so
any log messages that we issue from inside it just vanish after router
cleanup.
This patch enforces syslog logging for the daemon, revealing any errors
and other messages that may be triggered by it.

Closes-Bug: #1635629
Change-Id: I8712f5f3a34fc9f403dfba06e7a64bd9eb408c4a


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1635629

Title:
  neutron-keepalived-state-change logs are removed with router

Status in neutron:
  Fix Released

Bug description:
  By default, keepalived-state-change logs are stored at
  /var/lib/neutron/ha_confs//neutron-keepalived-state-
  change.log. The whole directory /var/lib/neutron/ha_confs/
  is used to store metadata and is removed once the router is removed, along
  with the logs. This makes debugging of tempest jobs a bit more
  difficult.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1635629/+subscriptions



[Yahoo-eng-team] [Bug 1680272] [NEW] several tempest tests failed in gate because of libvirt error

2017-04-05 Thread Inessa Vasilevskaya
Public bug reported:

Hope I got the target project right - I could not decide whether it
should be nova (as the error is with libvirt and n-cpu) or neutron (as
long as it was observed in the gate).

tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_pause_state
 [8.822468s] ... FAILED
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_multiple_nics_order
 [17.369368s] ... FAILED
tempest.api.compute.admin.test_volume_swap.TestVolumeSwap.test_volume_swap 
[410.785118s] ... FAILED

Full log - http://logs.openstack.org/55/453355/3/check/gate-tempest-
dsvm-neutron-linuxbridge-ubuntu-xenial/5fa5aea/logs/screen-n-cpu.txt.gz

Excerpt from n-cpu.log - http://paste.openstack.org/show/605578/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680272

Title:
  several tempest tests failed in gate because of libvirt error

Status in neutron:
  New

Bug description:
  Hope I got the target project right - I could not decide whether it
  should be nova (as the error is with libvirt and n-cpu) or neutron (as
  long as it was observed in the gate).

  
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_pause_state
 [8.822468s] ... FAILED
  
tempest.api.compute.servers.test_create_server.ServersTestJSON.test_verify_multiple_nics_order
 [17.369368s] ... FAILED
  tempest.api.compute.admin.test_volume_swap.TestVolumeSwap.test_volume_swap 
[410.785118s] ... FAILED

  Full log - http://logs.openstack.org/55/453355/3/check/gate-tempest-
  dsvm-neutron-linuxbridge-ubuntu-
  xenial/5fa5aea/logs/screen-n-cpu.txt.gz

  Excerpt from n-cpu.log - http://paste.openstack.org/show/605578/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680272/+subscriptions



[Yahoo-eng-team] [Bug 1680265] [NEW] Horizon ignores external theme templates

2017-04-05 Thread Gary W. Smith
Public bug reported:

When a custom theme resides outside of the openstack_dashboard directory
tree, horizon silently ignores all html template files contained within.
This bug was introduced by the fix for bug 1582298, which altered the
Theme Template loader.

** Affects: horizon
 Importance: Undecided
 Assignee: Gary W. Smith (gary-w-smith)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Gary W. Smith (gary-w-smith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1680265

Title:
  Horizon ignores external theme templates

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a custom theme resides outside of the openstack_dashboard
  directory tree, horizon silently ignores all html template files
  contained within.  This bug was introduced by the fix for bug 1582298,
  which altered the Theme Template loader.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1680265/+subscriptions



[Yahoo-eng-team] [Bug 1678326] Fix merged to nova (master)

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/452351
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=032937ce51c64e0f5292ca3149e3b244863fffca
Submitter: Jenkins
Branch:master

commit 032937ce51c64e0f5292ca3149e3b244863fffca
Author: Matt Riedemann 
Date:   Fri Mar 31 19:56:14 2017 -0400

Add regression test for quota decrement bug 1678326

This was spotted from someone validating the fix for
bug 1670627. They reported that even though they failed
to delete an instance in ERROR state that was in cell0,
the quota usage was decremented.

This is because we committed the quota reservation
to decrement the usage before actually attempting to destroy
the instance, rather than upon successful deletion.

The rollback after InstanceNotFound is a noop because of
how the Quotas.rollback method noops if the reservations
were already committed. That is in itself arguably a bug,
but not fixed here, especially since the counting quotas
work in Pike will remove all of the reservations commit and
rollback code.

Change-Id: I12d1fa1a10f9014863123ac9cc3c63ddfb48378e
Partial-Bug: #1678326


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1678326

Title:
  Quota is decremented during instance delete in cell0 even if the
  instance destroy fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  This is a follow-on from bug 1670627. As pointed out in comments 23
  and 25, in the local delete case where there is no host and the
  instance is in a cell, we're decrementing quota even if the
  instance.destroy() operation fails.

  We commit the usage decrement here:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L1878

  Attempt to destroy the instance here:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L1887

  And if the instance.destroy() fails, we rollback the usage decrement
  here:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L1891

  That rollback has no effect because once we commit a reservation, it's
  wiped out in the quotas object:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/objects/quotas.py#L105

  Attempting to rollback reservations on a quotas object that has
  already committed reservations is a noop:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/objects/quotas.py#L111
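  The commit/rollback asymmetry described above can be shown with a small
  sketch. This is not the actual nova code (the real class lives in
  nova/objects/quotas.py); the class and fields here are simplified
  stand-ins that just reproduce the noop behavior:

```python
# Illustrative sketch, not nova's Quotas object: once commit() clears the
# reservations, a later rollback() has nothing to undo and silently noops.
class Quotas(object):
    def __init__(self, reservations):
        self.reservations = reservations  # e.g. a list of reservation ids

    def commit(self):
        if not self.reservations:
            return  # noop
        # ... apply the usage change in the database ...
        self.reservations = None  # wiped out after commit

    def rollback(self):
        if not self.reservations:
            return  # noop -- this is the silent path the bug hits
        # ... undo the usage change ...
        self.reservations = None


quotas = Quotas(['res-1'])
quotas.commit()    # usage decremented, reservations cleared
quotas.rollback()  # does nothing, so the decrement sticks
```

  Because rollback() after commit() takes the early-return path, the usage
  decrement survives even though instance.destroy() failed.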

  --

  Unlike the 'normal' (pre-cellsv2) local delete case which does the
  commit after the instance is destroyed:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L2023

  And we rollback (but not commit) if instance.destroy() fails because
  the instance is already deleted:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L2028

  --

  The code in _delete() is wrong because it was copied from the code in
  _delete_while_booting() which is also wrong:

  
https://github.com/openstack/nova/blob/88bc8dc5ce32748452c9d3acda9f35e77fedb6ce/nova/compute/api.py#L1784

  So we have to fix both places.

  I suggest that we also change the noop behavior on the Quotas object
  such that if we attempt a reservation commit or rollback operation on
  a Quotas object that does not have any reservations, it is considered
  a programming error rather than a noop, because that's how this bug
  was introduced in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1678326/+subscriptions



[Yahoo-eng-team] [Bug 1680211] Re: InstanceNotFound during local delete in API does not short-circuit

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453840
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5a9cc2fb7af3e3a9db44646bbd23cfcfb16891f5
Submitter: Jenkins
Branch: master

commit 5a9cc2fb7af3e3a9db44646bbd23cfcfb16891f5
Author: Matt Riedemann 
Date:   Wed Apr 5 15:12:41 2017 -0400

Short-circuit local delete path for cells v2 and InstanceNotFound

When we're going down the local delete path for cells v2 in the API
and instance.destroy() fails with an InstanceNotFound error, we are
racing with a concurrent delete request and know that the instance
is already deleted, so we can just return rather than fall through to
the rest of the code in the _delete() method, like for BDMs and
console tokens.

Change-Id: I58690a25044d2804573451983323dde05be9e5d6
Closes-Bug: #1680211


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680211

Title:
  InstanceNotFound during local delete in API does not short-circuit

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  The cells v2-specific local delete path in the compute API is handling
  an InstanceNotFound when instance.destroy() fails, but doesn't return
  from the _delete() method:

  
https://github.com/openstack/nova/blob/caee532ff82d618fb30ace84a4cd0d7b1d1faa96/nova/compute/api.py#L1889

  It just continues to the rest of the delete code, which is pointless
  since we already know the instance is gone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680211/+subscriptions



[Yahoo-eng-team] [Bug 1680254] [NEW] VMware: Failed at boot instance from volume

2017-04-05 Thread Sihan Wang
Public bug reported:

Instance is failing with booting from volume

 [req-e1b1ac9c-3394-44c6-8453-b05f41f2aa16 admin admin]  Instance failed to 
spawn
  Traceback (most recent call last):
File "/opt/stack/nova/nova/compute/manager.py", line 2120, in 
_build_resources
  yield resources
File "/opt/stack/nova/nova/compute/manager.py", line 1925, in 
_build_and_run_instance
  block_device_info=block_device_info)
File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 324, in spawn
  admin_password, network_info, block_device_info)
File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 791, in spawn
  instance, vi.datastore.ref, adapter_type)
File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 606, in 
attach_root_volume
  self.attach_volume(connection_info, instance, adapter_type)
File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 383, in 
attach_volume
  self._attach_volume_vmdk(connection_info, instance, adapter_type)
File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 337, in 
_attach_volume_vmdk
  if state.lower() != 'poweredoff':
  AttributeError: 'int' object has no attribute 'lower'

Because all states in nova/compute/power_state.py are defined as integer
constants, state is an int, not a str, so the driver code needs to be
updated accordingly.
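The mismatch can be reproduced in isolation. The constants below mirror
nova/compute/power_state.py at the time (treat the exact values as
illustrative), and the fixed comparison is only a sketch of the kind of
change the vmwareapi driver needs:

```python
# nova/compute/power_state.py defines states as integer constants
# (values here are illustrative).
RUNNING = 1
SHUTDOWN = 4

state = SHUTDOWN  # what vmwareapi's volumeops receives: an int, not a str

# Buggy check from the traceback -- ints have no .lower():
try:
    state.lower() != 'poweredoff'
except AttributeError as err:
    print(err)  # 'int' object has no attribute 'lower'

# Sketch of a fixed check: compare against the integer constant instead.
if state != SHUTDOWN:
    raise RuntimeError('instance must be powered off to attach root volume')
```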

Command to reproduce in devstack (all repos at master):
nova boot --nic net-id= --flavor  --block-device 
source=image,id=,dest=volume,size=2,shutdown=preserve,bootindex=0 
mystanceFromVolume1

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680254

Title:
  VMware: Failed at boot instance from volume

Status in OpenStack Compute (nova):
  New

Bug description:
  Instance is failing with booting from volume

   [req-e1b1ac9c-3394-44c6-8453-b05f41f2aa16 admin admin]  Instance failed to 
spawn
    Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 2120, in 
_build_resources
    yield resources
  File "/opt/stack/nova/nova/compute/manager.py", line 1925, in 
_build_and_run_instance
    block_device_info=block_device_info)
  File "/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 324, in spawn
    admin_password, network_info, block_device_info)
  File "/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 791, in spawn
    instance, vi.datastore.ref, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 606, in 
attach_root_volume
    self.attach_volume(connection_info, instance, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 383, in 
attach_volume
    self._attach_volume_vmdk(connection_info, instance, adapter_type)
  File "/opt/stack/nova/nova/virt/vmwareapi/volumeops.py", line 337, in 
_attach_volume_vmdk
    if state.lower() != 'poweredoff':
    AttributeError: 'int' object has no attribute 'lower'

  Because all states in nova/compute/power_state.py are defined as integer
  constants, state is an int, not a str, so the driver code needs to be
  updated accordingly.

  Command to reproduce in devstack (all repos at master):
  nova boot --nic net-id= --flavor  --block-device 
source=image,id=,dest=volume,size=2,shutdown=preserve,bootindex=0 
mystanceFromVolume1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680254/+subscriptions



[Yahoo-eng-team] [Bug 1554412] Re: purge command fails due to foreign key constraint

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/299209
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=c9a21f655e63e4d0fc3a7a7b687c611900a91e69
Submitter: Jenkins
Branch: master

commit c9a21f655e63e4d0fc3a7a7b687c611900a91e69
Author: Ravi Shekhar Jethani 
Date:   Mon Mar 28 14:15:48 2016 +

Provide user friendly message for FK failure

'glance-manage db purge' command fails with DBReferenceError due
to FK constraint failure and exits with stack-trace on the command
prompt.

Made changes to give user-friendly error message to the user as
well as log appropriate error message in glance-manage logs
instead of stack-trace.

Co-author-by: Dinesh Bhor 
Change-Id: I52e56b69f1b78408018c837d71d75c6df3df9e71
Closes-Bug: #1554412
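A sketch of the approach: catch the foreign-key error and log a friendly
message instead of letting the stack trace escape to the command prompt.
DBReferenceError here is a local stand-in for oslo.db's exception class,
and the function names are hypothetical, not glance's actual API:

```python
# Simplified sketch; the real code lives in glance/db/sqlalchemy/api.py
# and glance/cmd/manage.py.
import logging

LOG = logging.getLogger('glance-manage')


class DBReferenceError(Exception):
    """Stand-in for oslo_db.exception.DBReferenceError."""


def purge_deleted_rows(age_in_days, max_rows):
    # Pretend the DELETE hit a foreign-key constraint.
    raise DBReferenceError('FK constraint fails on image_locations')


def purge_command(age_in_days, max_rows):
    try:
        purge_deleted_rows(age_in_days, max_rows)
    except DBReferenceError as e:
        # Log the details; show the user a friendly message, not a trace.
        LOG.error('Rows cannot be purged: %s', e)
        return 'Purge command failed, check glance-manage logs for details.'
    return 'Purge completed.'


print(purge_command(1, 1))
```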


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1554412

Title:
  purge command fails due to foreign key constraint

Status in Glance:
  Fix Released

Bug description:
  In our production environment we have somehow encountered an issue in the
  purge command. For example, the purge command fails if an image is deleted
  but the members entry for that image is not deleted.

  Steps to reproduce:

  1. Apply patch https://review.openstack.org/#/c/278870/7 (otherwise you will 
get subquery related error)
  2. Run purge command "glance-manage db purge 1 1"

  Purge command trace:

  2016-03-08 08:25:16.127 CRITICAL glance [req-7ce644e9-3c7d-
  4cd4-8377-77f7e1a689a5 None None] DBReferenceError:
  (pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
  row: a foreign key constraint fails (`glance`.`image_locations`,
  CONSTRAINT `image_locations_ibfk_1` FOREIGN KEY (`image_id`)
  REFERENCES `images` (`id`))') [SQL: u'DELETE FROM images WHERE
  images.id in (SELECT T1.id FROM (SELECT images.id \nFROM images
  \nWHERE images.deleted_at < %(deleted_at_1)s \n LIMIT %(param_1)s) as
  T1)'] [parameters: {u'deleted_at_1': datetime.datetime(2016, 3, 7, 8,
  25, 16, 92480), u'param_1': 1}]

  2016-03-08 08:25:16.127 TRACE glance Traceback (most recent call last):
  2016-03-08 08:25:16.127 TRACE glance   File "/usr/local/bin/glance-manage", 
line 10, in 
  2016-03-08 08:25:16.127 TRACE glance sys.exit(main())
  2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/cmd/manage.py", line 344, in main
  2016-03-08 08:25:16.127 TRACE glance return 
CONF.command.action_fn(*func_args, **func_kwargs)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/cmd/manage.py", line 165, in purge
  2016-03-08 08:25:16.127 TRACE glance db_api.purge_deleted_rows(ctx, 
age_in_days, max_rows)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/opt/stack/glance/glance/db/sqlalchemy/api.py", line 1327, in 
purge_deleted_rows
  2016-03-08 08:25:16.127 TRACE glance result = 
session.execute(delete_statement)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1034, 
in execute
  2016-03-08 08:25:16.127 TRACE glance bind, 
close_with_result=True).execute(clause, params or {})
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 914, 
in execute
  2016-03-08 08:25:16.127 TRACE glance return meth(self, multiparams, 
params)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 323, 
in _execute_on_connection
  2016-03-08 08:25:16.127 TRACE glance return 
connection._execute_clauseelement(self, multiparams, params)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1010, 
in _execute_clauseelement
  2016-03-08 08:25:16.127 TRACE glance compiled_sql, distilled_params
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1146, 
in _execute_context
  2016-03-08 08:25:16.127 TRACE glance context)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1337, 
in _handle_dbapi_exception
  2016-03-08 08:25:16.127 TRACE glance util.raise_from_cause(newraise, 
exc_info)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 200, 
in raise_from_cause
  2016-03-08 08:25:16.127 TRACE glance reraise(type(exception), exception, 
tb=exc_tb, cause=cause)
  2016-03-08 08:25:16.127 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2016-03-08 08:25:16.127 TRACE glance context)
  2016-03-08 08:25:16.127 TRACE glance   

[Yahoo-eng-team] [Bug 1670409] Re: Experimental E-M-C migrations broken

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/440875
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=15817166ab35212069517ba0002d8098988fd3bd
Submitter: Jenkins
Branch: master

commit 15817166ab35212069517ba0002d8098988fd3bd
Author: Alexander Bashmakov 
Date:   Thu Mar 2 23:40:16 2017 +

Fix experimental E-M-C migrations

Two closely related issues were discovered in the code which prevent
the experimental E-M-C (rolling upgrade) migration feature introduced
in Ocata from working: a malformed select statement in the ocata
community images data migration (missing function parentheses)
and an incorrect revision dependency in the DB migration (should be
'ocata_expand01' instead of 'expand'). Also added a test for the case
where the Glance database is new/empty, which illuminated this bug.

Closes-Bug: 1670409
Change-Id: I40ffaaa3fa2bae7a555a86d9f31422d79fb9bb19
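The revision-dependency half of the fix amounts to pointing the contract
revision at the right parent. The module below is an illustrative stand-in
for the alembic versions file, not its actual contents:

```python
# Sketch of an alembic revision module mirroring the layout of glance's
# alembic_migrations/versions/ files (body is a stand-in).
revision = 'ocata_contract01'
down_revision = 'ocata_expand01'   # was incorrectly 'expand'
branch_labels = ('contract',)


def upgrade():
    # The real migration drops the is_public column here.
    pass
```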


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1670409

Title:
  Experimental E-M-C migrations broken

Status in Glance:
  Fix Released

Bug description:
  The Glance experimental Expand-Migrate-Contract feature introduced in the
  Ocata release is currently broken due to a couple of bugs in the code:

  
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_images.py#L43
  - malformed select statement

  and

  
https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/alembic_migrations/versions/ocata_contract01_drop_is_public.py#L30
  - incorrect revision dependency

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1670409/+subscriptions



[Yahoo-eng-team] [Bug 1680240] [NEW] test_no_migrations_have_downgrade is broken

2017-04-05 Thread Matt Riedemann
Public bug reported:

We have a unit test to check if a database migration script has a
downgrade method, but it's broken since it didn't catch this:

https://review.openstack.org/#/c/453025/2/nova/db/sqlalchemy/migrate_repo/versions/359_add_service_uuid.py
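A working version of such a test only needs to import each migration module
and flag any that define a downgrade callable. The modules below are
hypothetical stand-ins built with types.ModuleType, not real nova migration
scripts:

```python
# Sketch: detect migration modules that still define downgrade().
import types

good = types.ModuleType('migration_good')
good.upgrade = lambda migrate_engine: None

bad = types.ModuleType('migration_bad')
bad.upgrade = lambda migrate_engine: None
bad.downgrade = lambda migrate_engine: None  # should be flagged


def has_downgrade(module):
    """Return True if the module defines a downgrade callable."""
    return callable(getattr(module, 'downgrade', None))


print(has_downgrade(bad))  # -> True
```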

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: db testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680240

Title:
  test_no_migrations_have_downgrade is broken

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We have a unit test to check if a database migration script has a
  downgrade method, but it's broken since it didn't catch this:

  
https://review.openstack.org/#/c/453025/2/nova/db/sqlalchemy/migrate_repo/versions/359_add_service_uuid.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680240/+subscriptions



[Yahoo-eng-team] [Bug 1679223] Re: tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail on centos7 nodes

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453220
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=7db528379ac8bd9a1f2101a73f78f882884825da
Submitter: Jenkins
Branch: master

commit 7db528379ac8bd9a1f2101a73f78f882884825da
Author: Markus Zoeller 
Date:   Tue Apr 4 17:23:09 2017 +0200

API: accept None as content-length in HTTP requests

The API defines PUT and POST as HTTP methods which need a request body.
That request body is allowed to be of zero length. What was missing is
that the "content-length" of the request may also be None.

The tempest test cases of servers.test_server_tags.ServerTagsTestJSON
revealed that a PUT with a non-existing body raises a BadRequest exception
because of the missing "content-length".

  [tempest.lib.common.rest_client]
  Request - Headers:
  {'Content-Type': 'application/json', 'Accept': 'application/json',
  'X-OpenStack-Nova-API-Version': '2.26', 'X-Auth-Token': ''}
  Body: None

  Response - Headers: {'status': '400', u'content-length': '66',
  u'server': 'Apache/2.4.6 (CentOS)
  OpenSSL/1.0.1e-fips mod_wsgi/3.4 Python/2.7.5',
  u'date': 'Mon, 03 Apr 2017 10:15:12 GMT',
  u'x-openstack-nova-api-version': '2.26',
  u'x-compute-request-id': 'req-1f8726fc-df20-4592-a214-cff3fa73c8e6',
  u'content-type': 'application/json; charset=UTF-8',
  content-location': 'https://ctrl:8774/v2.1/servers//tags/mytag',
  u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version',
  u'openstack-api-version': 'compute 2.26',
  u'connection': 'close'}
  Body: {"badRequest":
  {"message": "Malformed request body", "code": 400}
}

For some reason, this seems to occur only on centos7 test nodes, but not
on ubuntu xenial nodes. The root cause is still unclear to me.
I suspect the underlying "webob.Request" object which is used in Nova's
API, but I don't have proof for that.

This change checks for "content-length is None". The logic to determine the
request content was part of a very long method that was hard to test, so I
extracted those code paths into a new method, which made the unit tests much
easier.

Change I3236648f79f44d2758bb7ab0d64d58b0143f6bdb alters the tempest test
cases which revealed the missing handling of "content-length is None".

Change-Id: Id0b0ab5050a4ec15ab2a0d0dd67fcefe4b1ecb39
Closes-Bug: #1679223
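The essence of the check can be sketched as follows (simplified; the real
logic sits around webob.Request in nova's API layer, and request_has_body
is a hypothetical helper name, not nova's actual method):

```python
# A PUT/POST body is considered absent when content-length is 0 *or* None;
# treating None as a malformed request is what produced the 400 responses.
def request_has_body(content_length, body=b''):
    if content_length is None or content_length == 0:
        return False
    return len(body) > 0


# centos7 nodes surfaced requests where content-length was None rather
# than 0; both must mean "no body" instead of raising BadRequest.
print(request_has_body(None))  # -> False
```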


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679223

Title:
  tempest.api.compute.servers.test_server_tags.ServerTagsTestJSON fail
  on centos7 nodes

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  In Progress

Bug description:
  Description
  ===
  These tempest test cases fail on centos7 nodes:
  test_server_tags.ServerTagsTestJSON.test_create_delete_tag
  test_server_tags.ServerTagsTestJSON.test_delete_all_tags
  test_server_tags.ServerTagsTestJSON.test_update_all_tags
  test_server_tags.ServerTagsTestJSON.test_check_tag_existence

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?from=7d=message:%5C%22Malformed%20request%20body%5C%22

  
  Steps to reproduce
  ==
  See any test run of the 3rd party CI "IBM zKVM CI" after the last successful 
run from 2017-03-30T06:22:14:
  http://ci-watch.tintri.com/project?project=nova=7+days

  
  Expected result
  ===
  The server tagging functionality tests should work since they got introduced 
with 
https://github.com/openstack/tempest/commit/7c95befefb7db27296114f87cf49e8f2b8f43a59

  
  Actual result
  =

  As an example:

  [tempest.lib.common.rest_client] 
  Request (ServerTagsTestJSON:test_check_tag_existence): 
  400 PUT 
https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324

  [tempest.lib.common.rest_client] 
  Request - Headers: 
  {'Content-Type': 'application/json', 'Accept': 'application/json', 
  'X-OpenStack-Nova-API-Version': '2.26', 'X-Auth-Token': ''}
   Body: None

  Response - Headers: {'status': '400', u'content-length': '66', 
  u'server': 'Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips mod_wsgi/3.4 
Python/2.7.5',
  u'date': 'Mon, 03 Apr 2017 10:15:12 GMT', 
  u'x-openstack-nova-api-version': '2.26', 
  u'x-compute-request-id': 'req-1f8726fc-df20-4592-a214-cff3fa73c8e6', 
  u'content-type': 'application/json; charset=UTF-8', 
  content-location': 
'https://15.184.67.250:8774/v2.1/servers/b56af78e-d406-4858-9509-473863275223/tags/tempest-tag-151463324',
 
  u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 
  

[Yahoo-eng-team] [Bug 1680211] [NEW] InstanceNotFound during local delete in API does not short-circuit

2017-04-05 Thread Matt Riedemann
Public bug reported:

The cells v2-specific local delete path in the compute API is handling
an InstanceNotFound when instance.destroy() fails, but doesn't return
from the _delete() method:

https://github.com/openstack/nova/blob/caee532ff82d618fb30ace84a4cd0d7b1d1faa96/nova/compute/api.py#L1889

It just continues to the rest of the delete code, which is pointless
since we already know the instance is gone.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress

** Affects: nova/ocata
 Importance: Undecided
 Status: New


** Tags: api cellsv2

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680211

Title:
  InstanceNotFound during local delete in API does not short-circuit

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  The cells v2-specific local delete path in the compute API is handling
  an InstanceNotFound when instance.destroy() fails, but doesn't return
  from the _delete() method:

  
https://github.com/openstack/nova/blob/caee532ff82d618fb30ace84a4cd0d7b1d1faa96/nova/compute/api.py#L1889

  It just continues to the rest of the delete code, which is pointless
  since we already know the instance is gone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680211/+subscriptions



[Yahoo-eng-team] [Bug 1680136] Re: Stable newton gate is broken

2017-04-05 Thread Dariusz Smigiel
** Project changed: networking-midonet => neutron

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680136

Title:
  Stable newton gate is broken

Status in neutron:
  Confirmed

Bug description:
  Stable newton branch is busted.
  https://review.openstack.org/#/c/445326/

  Py27 and Py35 tests are passing for checks [1] but failing for gate
  [2]

  [1] 
http://logs.openstack.org/26/445326/2/check/gate-networking-midonet-python27-ubuntu-xenial/57c3b39/testr_results.html.gz
  [2] 
http://logs.openstack.org/26/445326/2/gate/gate-networking-midonet-python27-ubuntu-xenial/5ce12db/testr_results.html.gz

  
  ft21.80: 
midonet.neutron.tests.unit.test_midonet_plugin.TestMidonetL3NatDBIntTest.test_floatingip_list_with_pagination_reverse_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  WARNING [stevedore.named] Could not load 
midonet.neutron.plugin_v1.MidonetPluginV2
   WARNING [neutron.api.extensions] Extension address-scope not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension allowed-address-pairs not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension auto-allocated-topology not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension availability_zone not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension default-subnetpools not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension dns-integration not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension dvr not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension extraroute not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension flavors not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension ip_allocation not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension l2_adjacency not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension l3-ha not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension l3-flavors not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension metering not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension multi-provider not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension net-mtu not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension network_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension network-ip-availability not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension port-security not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension provider not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension qos not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension standard-attr-revisions not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension router_availability_zone not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension router-service-type not supported 
by any of loaded plugins
   WARNING [neutron.api.extensions] Extension segment not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension service-type not supported by any 
of loaded plugins
   WARNING [neutron.api.extensions] Extension subnet-service-types not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension tag not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension standard-attr-timestamp not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension trunk not supported by any of 
loaded plugins
   WARNING [neutron.api.extensions] Extension trunk-details not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension vlan-transparent not supported by 
any of loaded plugins
 ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
   WARNING [neutron.api.extensions] Extension agent-membership not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] Extension bgp-speaker-router-insertion not 
supported by any of loaded plugins
   WARNING [neutron.api.extensions] Extension gateway-device not supported by 
any of loaded plugins
   WARNING [neutron.api.extensions] 

[Yahoo-eng-team] [Bug 1680136] [NEW] Stable newton gate is broken

2017-04-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Stable newton branch is busted.
https://review.openstack.org/#/c/445326/

Py27 and Py35 tests are passing for checks [1] but failing for gate [2]

[1] 
http://logs.openstack.org/26/445326/2/check/gate-networking-midonet-python27-ubuntu-xenial/57c3b39/testr_results.html.gz
[2] 
http://logs.openstack.org/26/445326/2/gate/gate-networking-midonet-python27-ubuntu-xenial/5ce12db/testr_results.html.gz


ft21.80: 
midonet.neutron.tests.unit.test_midonet_plugin.TestMidonetL3NatDBIntTest.test_floatingip_list_with_pagination_reverse_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
WARNING [stevedore.named] Could not load 
midonet.neutron.plugin_v1.MidonetPluginV2
 WARNING [neutron.api.extensions] Extension address-scope not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension allowed-address-pairs not supported 
by any of loaded plugins
 WARNING [neutron.api.extensions] Extension auto-allocated-topology not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension availability_zone not supported by 
any of loaded plugins
 WARNING [neutron.api.extensions] Extension default-subnetpools not supported 
by any of loaded plugins
 WARNING [neutron.api.extensions] Extension dns-integration not supported by 
any of loaded plugins
 WARNING [neutron.api.extensions] Extension dvr not supported by any of loaded 
plugins
 WARNING [neutron.api.extensions] Extension extraroute not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension flavors not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension ip_allocation not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension l2_adjacency not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension l3-ha not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension l3-flavors not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension l3_agent_scheduler not supported by 
any of loaded plugins
 WARNING [neutron.api.extensions] Extension metering not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension multi-provider not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension net-mtu not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension network_availability_zone not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension network-ip-availability not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension port-security not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension provider not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension qos not supported by any of loaded 
plugins
 WARNING [neutron.api.extensions] Extension standard-attr-revisions not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension router_availability_zone not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension router-service-type not supported 
by any of loaded plugins
 WARNING [neutron.api.extensions] Extension segment not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension service-type not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension subnet-service-types not supported 
by any of loaded plugins
 WARNING [neutron.api.extensions] Extension tag not supported by any of loaded 
plugins
 WARNING [neutron.api.extensions] Extension standard-attr-timestamp not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension trunk not supported by any of 
loaded plugins
 WARNING [neutron.api.extensions] Extension trunk-details not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension vlan-transparent not supported by 
any of loaded plugins
   ERROR [neutron.api.extensions] Extension path 
'neutron/tests/unit/extensions' doesn't exist!
 WARNING [neutron.api.extensions] Extension agent-membership not supported by 
any of loaded plugins
 WARNING [neutron.api.extensions] Extension bgp-speaker-router-insertion not 
supported by any of loaded plugins
 WARNING [neutron.api.extensions] Extension gateway-device not supported by any 
of loaded plugins
 WARNING [neutron.api.extensions] Extension logging-resource not supported by 
any of loaded plugins
 WARNING [neutron.api.extensions] Extension router-interface-fip not supported 
by any of loaded plugins
 WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
 WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to schedule 
network 810e84a4-e990-4172-be50-7c004a2e18ab: no agents available; will retry 
on subsequent port and subnet creation events.
 WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
 WARNING 

[Yahoo-eng-team] [Bug 1680188] [NEW] test_router_interface_ops_bump_router failure in gate

2017-04-05 Thread Kevin Benton
Public bug reported:

Spotted in gate

ft730.7: 
neutron.tests.unit.services.revisions.test_revision_plugin.TestRevisionPlugin.test_router_interface_ops_bump_router_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron/tests/base.py", line 116, in func
return f(self, *args, **kwargs)
  File "neutron/tests/unit/services/revisions/test_revision_plugin.py", line 
162, in test_router_interface_ops_bump_router
router['revision_number'])
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 1233, in assertGreater
self.fail(self._formatMessage(msg, standardMsg))
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: 5 not greater than 5
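The contract this test asserts can be sketched with a toy model (class and method names here are illustrative, not neutron's actual revision plugin): a router interface operation is expected to leave the router's revision_number strictly greater than before, and the gate failure means that bump was not observed.

```python
# Toy model of the revision-bump contract the failing test asserts:
# an interface operation must leave the router's revision_number
# strictly greater than before. Names are illustrative only.
class Router:
    def __init__(self):
        self.revision_number = 5
        self.interfaces = []

    def add_interface(self, subnet_id):
        self.interfaces.append(subnet_id)
        self.revision_number += 1  # the bump the gate run did not observe

router = Router()
before = router.revision_number
router.add_interface("subnet-1")
# The gate failure "5 not greater than 5" means the real-world
# equivalent of this assertion did not hold.
assert router.revision_number > before
```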

** Affects: neutron
 Importance: Critical
 Assignee: Kevin Benton (kevinbenton)
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680188

Title:
  test_router_interface_ops_bump_router failure in gate

Status in neutron:
  Confirmed

Bug description:
  Spotted in gate

  ft730.7: 
neutron.tests.unit.services.revisions.test_revision_plugin.TestRevisionPlugin.test_router_interface_ops_bump_router_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/base.py", line 116, in func
  return f(self, *args, **kwargs)
File "neutron/tests/unit/services/revisions/test_revision_plugin.py", line 
162, in test_router_interface_ops_bump_router
  router['revision_number'])
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 1233, in assertGreater
  self.fail(self._formatMessage(msg, standardMsg))
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  raise self.failureException(msg)
  AssertionError: 5 not greater than 5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680183] [NEW] neutron-keepalived-state-change fails with "AssertionError: do not call blocking functions from the mainloop" in functional tests

2017-04-05 Thread Ihar Hrachyshka
Public bug reported:

17:39:17.802 6173 CRITICAL neutron [-] AssertionError: do not call blocking 
functions from the mainloop
17:39:17.802 6173 ERROR neutron Traceback (most recent call last):
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change", 
line 10, in <module>
17:39:17.802 6173 ERROR neutron sys.exit(main())
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/keepalived_state_change.py", line 19, in main
17:39:17.802 6173 ERROR neutron keepalived_state_change.main()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 157, in 
main
17:39:17.802 6173 ERROR neutron cfg.CONF.monitor_cidr).start()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/daemon.py", line 249, in start
17:39:17.802 6173 ERROR neutron self.run()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 70, in 
run
17:39:17.802 6173 ERROR neutron for iterable in self.monitor:
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/async_process.py", line 256, in 
_iter_queue
17:39:17.802 6173 ERROR neutron yield queue.get(block=block)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get
17:39:17.802 6173 ERROR neutron return waiter.wait()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait
17:39:17.802 6173 ERROR neutron return get_hub().switch()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
17:39:17.802 6173 ERROR neutron return self.greenlet.switch()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
17:39:17.802 6173 ERROR neutron self.wait(sleep_time)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
17:39:17.802 6173 ERROR neutron presult = self.do_poll(seconds)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
17:39:17.802 6173 ERROR neutron return self.poll.poll(seconds)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 134, in 
handle_sigterm
17:39:17.802 6173 ERROR neutron self._kill_monitor()
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", line 131, in 
_kill_monitor
17:39:17.802 6173 ERROR neutron run_as_root=True)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 221, in kill_process
17:39:17.802 6173 ERROR neutron execute(['kill', '-%d' % signal, pid], 
run_as_root=run_as_root)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 155, in execute
17:39:17.802 6173 ERROR neutron greenthread.sleep(0)
17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
17:39:17.802 6173 ERROR neutron assert hub.greenlet is not current, 'do not 
call blocking functions from the mainloop'
17:39:17.802 6173 ERROR neutron AssertionError: do not call blocking functions 
from the mainloop
17:39:17.802 6173 ERROR neutron

This is what I see when running fullstack l3ha tests, once I enable
syslog logging for the helper process.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680183

Title:
  neutron-keepalived-state-change fails with "AssertionError: do not
  call blocking functions from the mainloop" in functional tests

Status in neutron:
  New

Bug description:
  17:39:17.802 6173 CRITICAL neutron [-] AssertionError: do not call blocking 
functions from the mainloop
  17:39:17.802 6173 ERROR neutron Traceback (most recent call last):
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/.tox/dsvm-functional/bin/neutron-keepalived-state-change", 
  line 10, in <module>
  17:39:17.802 6173 ERROR neutron sys.exit(main())
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/keepalived_state_change.py", line 19, in main
  17:39:17.802 6173 ERROR neutron keepalived_state_change.main()
  17:39:17.802 6173 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/l3/keepalived_state_change.py", 

[Yahoo-eng-team] [Bug 1680161] [NEW] FWaaS - filtering doesn't work correctly

2017-04-05 Thread Yushiro FURUKAWA
Public bug reported:

In FWaaS v2, when a firewall policy's 'public' attribute is updated from True
to False, the following validation is expected to run:

  1. Get the firewall group objects which reference the firewall policy as
     'ingress_firewall_policy_id' or 'egress_firewall_policy_id'
  2. Verify whether each firewall group belongs to the same project or not

Currently, the filter is wrong and should be fixed. However, this bug does not
manifest because the following bug has not been fixed yet, so it is not urgent:

https://bugs.launchpad.net/neutron/+bug/1657190

  Before:

filters = {
'ingress_firewall_rule_id': [fwp_id],
'ingress_firewall_rule_id': [fwp_id]
}

  After:

filters = {
    'ingress_firewall_rule_id': [fwp_id],
    'egress_firewall_rule_id': [fwp_id]
}
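The "Before" filter above repeats the same dictionary key, which is why the mistake goes unnoticed: a Python dict literal with a duplicate key raises no error, the last value silently wins, and the resulting mapping never filters on the egress side at all. A minimal illustration (fwp_id is a placeholder value):

```python
# A dict literal with a repeated key is legal Python; the second entry
# silently overwrites the first, leaving only one 'ingress' filter and
# no egress filter. fwp_id is a placeholder value for illustration.
fwp_id = 'fwp-1'
filters = {
    'ingress_firewall_rule_id': [fwp_id],
    'ingress_firewall_rule_id': [fwp_id],  # duplicate key: overwrites above
}
assert list(filters) == ['ingress_firewall_rule_id']
```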

** Affects: neutron
 Importance: Undecided
 Assignee: Yushiro FURUKAWA (y-furukawa-2)
 Status: In Progress


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1680161

Title:
  FWaaS - filtering doesn't work correctly

Status in neutron:
  In Progress

Bug description:
  In FWaaS v2, when a firewall policy's 'public' attribute is updated from True
  to False, the following validation is expected to run:

    1. Get the firewall group objects which reference the firewall policy as
       'ingress_firewall_policy_id' or 'egress_firewall_policy_id'
    2. Verify whether each firewall group belongs to the same project or not

  Currently, the filter is wrong and should be fixed. However, this bug does
  not manifest because the following bug has not been fixed yet, so it is not
  urgent:

  https://bugs.launchpad.net/neutron/+bug/1657190

Before:

  filters = {
  'ingress_firewall_rule_id': [fwp_id],
  'ingress_firewall_rule_id': [fwp_id]
  }

After:

  filters = {
  'ingress_firewall_rule_id': [fwp_id],
  'egress_firewall_rule_id': [fwp_id]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1680161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612804] Re: test_shelve_instance fails with sshtimeout

2017-04-05 Thread Brian Haley
Closing as it has not been seen in a while.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612804

Title:
  test_shelve_instance fails with sshtimeout

Status in neutron:
  Invalid

Bug description:
  Same failure mode as bug 1522824; the root cause might be different.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1665698] Re: /etc/qemu-ifup not allowed by apparmor

2017-04-05 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:14.0.4-0ubuntu1.2

---
nova (2:14.0.4-0ubuntu1.2) yakkety; urgency=medium

  * d/p/libvirt-set-vlan-tag-for-macvtap.patch: Pick dependent patch
for fix for libvirt 1.3.1 compatibility (LP: #1665698).

nova (2:14.0.4-0ubuntu1.1) yakkety; urgency=medium

  * d/patches/libvirt-set-script-path.patch: Conditionally set
the script path for ethernet vif types (LP: #1665698).

nova (2:14.0.4-0ubuntu1) yakkety; urgency=medium

  * New upstream point release for OpenStack Newton (LP: #1664306).
  * d/control: Bump python-rfc3986 version.

 -- James Page   Wed, 22 Mar 2017 15:46:21 +

** Changed in: nova (Ubuntu Yakkety)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1665698

Title:
  /etc/qemu-ifup not allowed by apparmor

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in libvirt package in Ubuntu:
  Fix Released
Status in nova package in Ubuntu:
  Invalid
Status in libvirt source package in Yakkety:
  Fix Committed
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  SRU - Nova
  [Impact]
  OpenStack deployments using vif types `tap`, `ivs`, `iovisor`, `midonet`, and 
`vrouter` are unable to boot instances using libvirt 1.3.1 from Ubuntu 16.04 
(as used by the Newton Ubuntu Cloud Archive). Note that this impacts the nova 
package which is currently in yakkety-proposed/newton-proposed - the version in 
*-updates does not have this issue.

  [Test case]
  Using an OpenStack cloud deployed with one of the above SDNs, boot an
instance.
  The instance will fail to boot with a libvirt error.
  Note cloud must be deployed using the -proposed packages from the Newton UCA.

  [Regression Potential]
  Minimal - the patch restores the previous behaviour for older libvirt 
versions, ensuring compatibility with documented libvirt version baselines in 
OpenStack Nova.

  ---

  SRU - libvirt
  [Impact]

   * Please do note that this SRU statement is about the libvirt portion
     of it, this is a fix of essentially an API break from Xenial to
     Yakkety. This is independent of the OpenStack-context discussion
     about the change to drop specifying a path at all.

   * Before 9c17d665fdc5f (v1.3.2 which means 1.3.1 in Xenial for us) it
     was possible to have the following interface configuration:
       <interface type='ethernet'>
         <script path=''/>
       </interface>
     This resulted in -netdev tap,script=,.. Fortunately, qemu helped
     us to get away with this as it just ignored the empty script
     path. However, after the commit mentioned above it is libvirtd
     who executes the script, unfortunately without special-casing the
     empty script path.

   * The fix adds the special casing that qemu had into libvirt's
     handling of the interface definition.

  [Test Case]

   * That is tricky, as the path OpenStack uses to inject the XML does
     not appear to enforce XML validation as strictly as e.g. virsh.
     If normally adding a device like
     
   
   
     
     On at least xenial AND yakkety this is blocked by the XML validation.
     But if one tries to work around it with path='' this gives
     "-netdev tap,script="",id=hostnet1" on yakkety, and then the fix
     does not apply, as the path is '""' and not ''.
     So to add the above snippet you have to edit it in via
     --skip-validate, like:
     $ virsh edit --skip-validate zesty-on-x-test
     This on older libvirt gave: -netdev tap,script=,id=hostnet1
     Which qemu understood as nop. But newer libvirt refuses.

   * Error:
     error: Failed to start domain 
     error: Cannot find '' in path: No such file or directory

   * Expected:
     Starting the domain as-is without calling a script,
     but also without complaining about being empty.

  [Regression Potential]

   * Regression should be low because of:
     * The fix is upstream for a while now without follow on fix
     * We are essentially going back to how it was
     * There is no case like "I had '' set in my setup but now it is
       a no-op which makes me fail" because if one had '' it failed
       until now.
     * Fix is in zesty for a few days without new fallout being reported
     * also it passed several levels of testing (on the case and general
       regression testing)
   * Due to extra xml checks a device like path='' is not even definable.
     So only those who run --skip-validate or similar are affected in
     the first place.

  [Other Info]

   * n/a

  

  I have VMs failing to start with 2017-02-17 15:38:44.458 264015 ERROR
  nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1680130] [NEW] Attaching a volume w/ an invalid (too long) UUID results in HTTP 500.

2017-04-05 Thread Eric Harney
Public bug reported:

There is some validation here, but it doesn't seem to be complete:
$ nova volume-attach ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c 
--4f9d--6
ERROR (BadRequest): Invalid input for field/attribute volumeId. Value: 
--4f9d--6. u'--4f9d--6' is not a 'uuid' (HTTP 
400)

$ nova volume-attach ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c
--4f9d---

(note the double hyphen)

RemoteError: Remote error: DBDataError (pymysql.err.DataError) (1406,
u"Data too long for column 'volume_id' at row 1") [SQL: u'INSERT INTO
block_device_mapping (created_at, updated_at, deleted_at, deleted,
instance_uuid, source_type, destination_type, guest_format, device_type,
disk_bus, boot_index, device_name, delete_on_termination, snapshot_id,
volume_id, volume_size, image_id, no_device, connection_info, tag,
attachment_id) VALUES   ...

master @ bc31f5a Merge "Add description to policies in
servers_migrations.py"
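A sketch of the kind of up-front check that would turn the DBDataError into a clean HTTP 400 (illustrative only, not nova's actual validation code): parse the candidate with Python's stdlib uuid module and reject anything that does not round-trip to canonical form, before the value ever reaches the database.

```python
import uuid

def is_valid_uuid(value):
    """Accept only canonical 36-character UUID strings."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (TypeError, ValueError, AttributeError):
        return False

# The well-formed instance UUID from the report parses fine...
assert is_valid_uuid('ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c')
# ...while an over-long or malformed volume id is rejected up front,
# instead of surfacing as a DBDataError deep in the DB layer.
assert not is_valid_uuid('ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c-ff')
```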

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680130

Title:
  Attaching a volume w/ an invalid (too long) UUID results in HTTP 500.

Status in OpenStack Compute (nova):
  New

Bug description:
  There is some validation here, but it doesn't seem to be complete:
  $ nova volume-attach ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c 
--4f9d--6
  ERROR (BadRequest): Invalid input for field/attribute volumeId. Value: 
--4f9d--6. u'--4f9d--6' is not a 'uuid' (HTTP 
400)

  $ nova volume-attach ad1f16cc-d859-4f9d-a7b5-12d28d7ac53c
  --4f9d---

  (note the double hyphen)

  RemoteError: Remote error: DBDataError (pymysql.err.DataError) (1406,
  u"Data too long for column 'volume_id' at row 1") [SQL: u'INSERT INTO
  block_device_mapping (created_at, updated_at, deleted_at, deleted,
  instance_uuid, source_type, destination_type, guest_format,
  device_type, disk_bus, boot_index, device_name, delete_on_termination,
  snapshot_id, volume_id, volume_size, image_id, no_device,
  connection_info, tag, attachment_id) VALUES   ...

  master @ bc31f5a Merge "Add description to policies in
  servers_migrations.py"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680116] [NEW] ds-identify needs to support check_instance_id like function

2017-04-05 Thread Scott Moser
Public bug reported:

When ds-identify runs, it identifies the cloud platform based on available 
information.
For some datasources provided as an attached disk, the cloud provider may 
remove that disk at some time (or potentially, the user might destroy it).  In 
such cases (Azure and ConfigDrive) the system id read from SMBIOS is the same 
as the instance-id, so we can quickly check it locally.

cloud-init's datasource search code supports this via the 'check_instance_id' 
method in a datasource.
Basically, that function is called on the old datasource.  If it returns 
true, then cloud-init would not go looking for an attached disk that it would 
not find.

The ds-identify code does not support this at the moment.  The result is that 
if you did:
 * boot system with configdrive
 * dd if=/dev/zero of=/dev/disk/by-name/config-2
 * reboot

ds-identify would then not recognize this system as config drive even
though the datasource would recognize it was.
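The shape of the missing check can be sketched as follows. This is a hypothetical illustration, not cloud-init's or ds-identify's actual API: compare the instance-id cached from the previous boot with the current platform identity (e.g. the DMI system UUID) before probing for the config disk.

```python
# Hypothetical sketch of a check_instance_id-style test for ds-identify:
# if the cached instance-id still matches the platform identity, keep the
# previous datasource rather than re-probing for a (possibly removed)
# config disk. Function and argument names are illustrative.
def previous_instance_still_valid(cached_instance_id, system_uuid):
    if not cached_instance_id or not system_uuid:
        return False
    return cached_instance_id.strip().lower() == system_uuid.strip().lower()

# A match means the disk-probe can be skipped even if config-2 is gone.
assert previous_instance_still_valid("ABC-123\n", "abc-123")
```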

** Affects: cloud-init
 Importance: High
 Status: Triaged


** Tags: dsid

** Changed in: cloud-init
   Status: New => Triaged

** Changed in: cloud-init
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1680116

Title:
  ds-identify needs to support check_instance_id like function

Status in cloud-init:
  Triaged

Bug description:
  When ds-identify runs, it identifies the cloud platform based on available 
information.
  For some datasources provided as an attached disk, the cloud provider may 
remove that disk at some time (or potentially, the user might destroy it).  In 
such cases (Azure and ConfigDrive) the system id read from SMBIOS is the same 
as the instance-id, so we can quickly check it locally.

  cloud-init's datasource search code supports this via the 'check_instance_id' 
method in a datasource.
  Basically, that function is called on the old datasource.  If it returns 
true, then cloud-init would not go looking for an attached disk that it would 
not find.

  The ds-identify code does not support this at the moment.  The result is 
that if you did:
   * boot system with configdrive
   * dd if=/dev/zero of=/dev/disk/by-name/config-2
   * reboot

  ds-identify would then not recognize this system as config drive even
  though the datasource would recognize it was.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1680116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680040] Re: Not all GET should have a correspondent HEAD, and vice-versa

2017-04-05 Thread Morgan Fainberg
HEAD calls should still be supported. It may not make sense for some
things, but it can be useful (someone can check content length, which
should be identical, or headers, or any number of things that aren't
what you'd expect).

Simply put, it costs us *nothing* to support it.
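The content-length argument can be sketched in a few lines (WSGI-style pseudocode with illustrative names, not keystone's actual handlers): a HEAD handler can reuse the GET path and drop the body, so the headers, including Content-Length, stay identical, which is exactly the HEAD semantics RFC 7231 §4.3.2 describes.

```python
import json

# GET builds the full response; headers carry an accurate Content-Length.
def get_role_assignments():
    body = json.dumps({"role_assignments": []}).encode()
    headers = {"Content-Type": "application/json",
               "Content-Length": str(len(body))}
    return headers, body

# HEAD reuses GET and drops the body; headers stay identical, so a
# client can still inspect Content-Length without fetching anything.
def head_role_assignments():
    headers, _body = get_role_assignments()
    return headers, b""

g_headers, g_body = get_role_assignments()
h_headers, h_body = head_role_assignments()
assert g_headers == h_headers and h_body == b""
```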

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1680040

Title:
  Not all GET should have a correspondent HEAD, and vice-versa

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Most of keystone's APIs implement GET and HEAD for the same operation.

  This is okay when you are retrieving an entity or checking its
  existence, for example:

  GET /users/
  HEAD /users/

  However, there are some cases where having GET is obvious, but HEAD
  does not make any sense:

  GET /role_assignments
  HEAD /role_assignments

  GET /users/{user_id}/groups
  HEAD /users/{user_id}/groups

  The role assignments and groups will only be returned in those APIs;
  there is nothing to be checked, and you cannot even specify anything
  to be checked.

  And it happens the other way around too:

  HEAD /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
  GET /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}

  The endpoint-to-project association is not an entity by itself;
  performing a GET on it will not return anything, it will just check
  its existence, as does the HEAD request.

  This should be fixed; there is no need to maintain APIs that are not
  useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1680040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667086] Re: [OSSA-2017-003] XSS in federation mappings UI (CVE-2017-7400)

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/447064
Committed: 
https://git.openstack.org/cgit/openstack/ossa/commit/?id=d9fb681d40ed9b2ec535b3ffa49451edfd199167
Submitter: Jenkins
Branch:master

commit d9fb681d40ed9b2ec535b3ffa49451edfd199167
Author: Tristan Cacqueray 
Date:   Fri Mar 17 16:49:35 2017 +

Adds OSSA-2017-003 (CVE-2017-7400)

Change-Id: Iead38e4f72cfe54102612a07a4001862cb5fd32c
Closes-Bug: #1667086


** Changed in: ossa
   Status: In Progress => Fix Released

** CVE added: http://www.cve.mitre.org/cgi-
bin/cvename.cgi?name=2017-7400

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1667086

Title:
  [OSSA-2017-003] XSS in federation mappings UI (CVE-2017-7400)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Found in Mitaka

  Steps:
  - Setup federation in keystone and horizon
  - Launch and login to horizon as an admin
  - Click on the Federation->Mappings tab
  - Create or update a mapping with the following content that contains 
javascript

  [
  {
  "local": [
  {
  "domain": {
  "name": "Default"
  },
  "group": {
  "domain": {
  "name": "Default"
  },
  "name": "Federated Users"
  },
  "user": {
  "name": "{alert('test');}",
  "email": "{1}"
  },
  "groups": "{2}"
  }
  ],
  "remote": [
  {
  "type": "REMOTE_USER"
  },
  {
  "type": "MELLON_userEmail"
  },
  {
  "type": "MELLON_groups"
  }
  ]
  }
  ]

  Now whenever this Federation->Mapping page is shown, the javascript
  will execute.

  It appears other pages in horizon protect against such attacks (such
  as Users, Groups, etc).  So I'm guessing that the rendering of this
  page just needs to be escaped to ignore tags.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1667086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673411] Re: config-drive support is broken

2017-04-05 Thread James Page
** Changed in: nova-lxd (Ubuntu Xenial)
   Status: New => Invalid

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ocata
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/newton
   Status: New => Triaged

** Changed in: cloud-archive/ocata
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1673411

Title:
  config-drive support is broken

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Triaged
Status in cloud-init:
  Fix Committed
Status in nova-lxd:
  Fix Released
Status in nova-lxd newton series:
  Fix Committed
Status in nova-lxd ocata series:
  Fix Committed
Status in nova-lxd trunk series:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in nova-lxd package in Ubuntu:
  Triaged
Status in cloud-init source package in Xenial:
  Confirmed
Status in nova-lxd source package in Xenial:
  Invalid
Status in cloud-init source package in Yakkety:
  Confirmed
Status in nova-lxd source package in Yakkety:
  Triaged
Status in cloud-init source package in Zesty:
  Fix Released
Status in nova-lxd source package in Zesty:
  Triaged

Bug description:
  After reviewing https://review.openstack.org/#/c/445579/ and doing
  some testing, it would appear that the config-drive support in the
  nova-lxd driver is not functional.

  cloud-init ignores the data presented in /var/lib/cloud/data and reads
  from the network-accessible metadata service.

  To test this effectively you have to have a fully offline instance
  (i.e. no metadata service access).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1673411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from code

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453245
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=d94c40b1e0515117fbe7d04480640641f2a2edbd
Submitter: Jenkins
Branch:master

commit d94c40b1e0515117fbe7d04480640641f2a2edbd
Author: Thomas Bechtold 
Date:   Tue Apr 4 18:09:34 2017 +0200

Remove deprecated oslo.messaging aliases parameter

Those are remnants from the oslo-incubator times. Also, oslo.messaging
deprecated [1] transport aliases since 5.2.0+ that is the minimal
version supported for stable/newton.

[1] I314cefa5fb1803fa7e21e3e34300e5ced31bba89

Closes-Bug: #1424728
Change-Id: I50c4559ea2ebc8512a05ffad52e5f04b22743ff4


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424728

Title:
  Remove old rpc alias(es) from code

Status in Cinder:
  Fix Released
Status in Designate:
  In Progress
Status in grenade:
  New
Status in Ironic:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Confirmed
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  We have several TRANSPORT_ALIASES entries from way back (Essex, Havana)
  http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48

  We need a way to warn end users that they need to fix their nova.conf,
  so these can be removed in a later release (full cycle?).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1424728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1680040] [NEW] Not all GET should have a correspondent HEAD, and vice-versa

2017-04-05 Thread Samuel de Medeiros Queiroz
Public bug reported:

Most of keystone's APIs implement GET and HEAD for the same operation.

This is okay when you are retrieving an entity or checking its
existence, for example:

GET /users/
HEAD /users/

However, there are some cases where having GET is obvious, but HEAD does
not make any sense:

GET /role_assignments
HEAD /role_assignments

GET /users/{user_id}/groups
HEAD /users/{user_id}/groups

The role assignments and groups will only be returned by those APIs;
there is nothing to be checked, and you cannot even specify anything to
be checked.

And it happens the other way around too:

HEAD /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
GET /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}

The endpoint-to-project association is not an entity by itself;
performing a GET on it will not return anything, it will just check its
existence, as the HEAD request does.

This should be fixed; there is no need to maintain APIs that are not
useful.
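The asymmetry can be illustrated with a toy dispatcher (purely illustrative, not keystone's code): for a pure listing route, HEAD returns 200 whenever the route exists, so it checks nothing that GET does not already cover.

```python
# Toy routing table: a collection-listing resource, as in the report.
RESOURCES = {
    "/role_assignments": ["assignment-1", "assignment-2"],
}

def handle(method, path):
    """Return (status, body) for a request against the toy table."""
    if path not in RESOURCES:
        return 404, None
    if method == "GET":
        return 200, RESOURCES[path]
    if method == "HEAD":
        # Same status as GET but no body: for a listing this is always
        # 200 whenever the route exists, i.e. nothing is being "checked".
        return 200, None
    return 405, None
```

By contrast, HEAD is meaningful for existence-check resources such as the OS-EP-FILTER association, where the status code itself is the answer.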

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1680040

Title:
  Not all GET should have a correspondent HEAD, and vice-versa

Status in OpenStack Identity (keystone):
  New

Bug description:
  Most of keystone implements GET and HEAD for the same operation.

  This is okay when you are retrieving an entity or checking its
  existence, for example:

  GET /users/
  HEAD /users/

  However, there are some cases where having GET is obvious, but HEAD
  does not make any sense:

  GET /role_assignments
  HEAD /role_assignments

  GET /users/{user_id}/groups
  HEAD /users/{user_id}/groups

  The role assignments and groups will only be returned by those APIs;
  there is nothing to be checked, and you cannot even specify anything to
  be checked.

  And it happens the other way around too:

  HEAD /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}
  GET /OS-EP-FILTER/projects/{project_id}/endpoints/{endpoint_id}

  The endpoint-to-project association is not an entity by itself;
  performing a GET on it will not return anything, it will just check its
  existence, as the HEAD request does.

  This should be fixed; there is no need to maintain APIs that are not
  useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1680040/+subscriptions



[Yahoo-eng-team] [Bug 1680037] [NEW] JS error in path of angular flavor table.

2017-04-05 Thread wei.ying
Public bug reported:

Env: devstack master branch

Error info:

830e26e34b64.js:1336 Error: [$parse:syntax] Syntax Error: Token ':' is an 
unexpected token at column 17 of the expression [os-flavor-access:is_public] 
starting at [:is_public].
http://errors.angularjs.org/1.5.8/$parse/syntax?p0=%3A=is%20an%20unexpected%20token=17=os-flavor-access%3Ais_public=%3Ais_public
at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:699:8
at AST.throwError 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1378:114)
at AST.ast 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1357:889)
at ASTCompiler.compile 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1394:107)
at Parser.parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1439:543)
at $parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1442:179)
at Object.link 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:2377:834)
at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:768:296
at invokeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1161:81)
at nodeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1110:147)

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Description changed:

+ Env: devstack master branch
+ 
+ Error info:
+ 
  830e26e34b64.js:1336 Error: [$parse:syntax] Syntax Error: Token ':' is an 
unexpected token at column 17 of the expression [os-flavor-access:is_public] 
starting at [:is_public].
  
http://errors.angularjs.org/1.5.8/$parse/syntax?p0=%3A=is%20an%20unexpected%20token=17=os-flavor-access%3Ais_public=%3Ais_public
- at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:699:8
- at AST.throwError 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1378:114)
- at AST.ast 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1357:889)
- at ASTCompiler.compile 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1394:107)
- at Parser.parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1439:543)
- at $parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1442:179)
- at Object.link 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:2377:834)
- at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:768:296
- at invokeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1161:81)
- at nodeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1110:147)
+ at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:699:8
+ at AST.throwError 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1378:114)
+ at AST.ast 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1357:889)
+ at ASTCompiler.compile 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1394:107)
+ at Parser.parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1439:543)
+ at $parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1442:179)
+ at Object.link 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:2377:834)
+ at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:768:296
+ at invokeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1161:81)
+ at nodeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1110:147)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1680037

Title:
  JS error in path of angular flavor table.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: devstack master branch

  Error info:

  830e26e34b64.js:1336 Error: [$parse:syntax] Syntax Error: Token ':' is an 
unexpected token at column 17 of the expression [os-flavor-access:is_public] 
starting at [:is_public].
  
http://errors.angularjs.org/1.5.8/$parse/syntax?p0=%3A=is%20an%20unexpected%20token=17=os-flavor-access%3Ais_public=%3Ais_public
  at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:699:8
  at AST.throwError 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1378:114)
  at AST.ast 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1357:889)
  at ASTCompiler.compile 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1394:107)
  at Parser.parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1439:543)
  at $parse 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1442:179)
  at Object.link 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:2377:834)
  at http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:768:296
  at invokeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1161:81)
  at nodeLinkFn 
(http://192.168.122.210:8085/static/dashboard/js/830e26e34b64.js:1110:147)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1680037/+subscriptions

[Yahoo-eng-team] [Bug 1504641] Re: Listing volumes respects osapi_max_limit but does not provide a link to the next element

2017-04-05 Thread Ghanshyam Mann
** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504641

Title:
  Listing volumes respects osapi_max_limit but does not provide a link
  to the next element

Status in OpenStack Compute (nova):
  Won't Fix
Status in tempest:
  Won't Fix

Bug description:
  When GETting os-volumes, the returned list of volumes respects the
  osapi_max_limit configuration parameter but does not provide a link to
  the next element in the list. For example, with two volumes configured
  and osapi_max_limit set to 1, GETting volumes results in the
  following:

  
  {
  "volumes": [
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:04.00",
  "displayDescription": null,
  "displayName": null,
  "id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  }
  ]
  }

  
  Unsetting osapi_max_limit results in both volumes being listed:

  
  {
  "volumes": [
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:04.00",
  "displayDescription": null,
  "displayName": null,
  "id": "08792e26-204b-4bb9-8e9b-0e37331de51c",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  },
  {
  "attachments": [
  {}
  ],
  "availabilityZone": "nova",
  "createdAt": "2015-10-09T18:12:00.00",
  "displayDescription": null,
  "displayName": null,
  "id": "5cf46cd2-8914-4ffd-9037-abd53c55ca76",
  "metadata": {},
  "size": 1,
  "snapshotId": null,
  "status": "error",
  "volumeType": "lvmdriver-1"
  }
  ]
  }
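A minimal sketch of the missing behavior, assuming hypothetical `limit`/`marker` query parameters in the style of other OpenStack list APIs (this is not nova's implementation):

```python
def list_volumes(volumes, base_url, limit=None, marker=None):
    """Return a page of volumes plus a 'next' link when more remain."""
    start = 0
    if marker is not None:
        # Resume after the last item of the previous page.
        ids = [v["id"] for v in volumes]
        start = ids.index(marker) + 1
    page = volumes[start:start + limit] if limit else volumes[start:]
    result = {"volumes": page}
    if limit and start + limit < len(volumes):
        # This is the piece the report says is missing: a link that lets
        # the client fetch the next page.
        result["volumes_links"] = [{
            "rel": "next",
            "href": "%s?limit=%d&marker=%s" % (base_url, limit, page[-1]["id"]),
        }]
    return result
```

Without such a link, a client capped by osapi_max_limit has no in-band way to discover that more volumes exist.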

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1504641/+subscriptions



[Yahoo-eng-team] [Bug 1590049] Re: test_add_list_remove_router_on_l3_agent may fail on envs where there is >= two L3 agents and DVR enabled

2017-04-05 Thread Yaroslav Lobankov
Actually, it was fixed by https://review.openstack.org/#/c/370236/ a
long time ago. We added a flag that changes the test behavior depending
on its value.

** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590049

Title:
  test_add_list_remove_router_on_l3_agent may fail on envs where there
  is >= two L3 agents and DVR enabled

Status in neutron:
  Confirmed
Status in tempest:
  Fix Released

Bug description:
  Environment:
  OpenStack (Mitaka release), where there is more than one L3 agent and DVR 
enabled.

  When DVR is disabled, the resource_setup method just creates a router.
  When DVR is enabled, the resource_setup method creates a router and
  then adds an internal network interface to the created router. After
  we added the internal network interface to the router, the router is
  automatically hosted by some L3 agent. Then the test starts and tries
  to add the router to one of L3 agents and fails because the router is
  already hosted by another L3 agent.

  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'message': u'The router 8569d347-6db4-4e12-b033-60a867ba680f has 
been already hosted by the L3 Agent 6250d031-d455-4afe-b993-5a7719815746.', 
u'type': u'RouterHostedByL3Agent', u'detail': u''}

  The full log can be found here http://paste.openstack.org/show/508686/

  Since the Mitaka release, a DVR router can be bound to some L3 agent
  without adding an internal network interface to the created router.

  Liberty - http://paste.openstack.org/show/574456/
  Mitaka - http://paste.openstack.org/show/574455/

  In order to fix the test, we need to remove the code block where we
  add the internal network interface to the router, but it will not work
  with the Liberty release. So we can skip the test until Liberty is EOL
  and after that fix the test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590049/+subscriptions



[Yahoo-eng-team] [Bug 1679895] Re: There are some access_and_security urls need update

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/452421
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b1a907840431291007b8e3f2582486e97d86ff21
Submitter: Jenkins
Branch: master

commit b1a907840431291007b8e3f2582486e97d86ff21
Author: zhurong 
Date:   Sat Apr 1 17:12:04 2017 +0800

Update the access_and_security url

Since the blueprint reorganise-access-and-security moved
access_and_security into a separate panel, some URLs elsewhere
need to change too.

Closes-Bug: #1679895

Change-Id: I59560c479ad77d2452484b2138065a003451f376


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1679895

Title:
  There are some access_and_security urls need update

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  https://blueprints.launchpad.net/horizon/+spec/reorganise-access-and-
  security broke out the access and security tabs into separate panels.
  However, several links were not updated across Horizon, such as in:

  openstack_dashboard/dashboards/project/instances/workflows/create_instance.py
  
openstack_dashboard/dashboards/project/key_pairs/templates/key_pairs/create.html
  
openstack_dashboard/dashboards/project/key_pairs/templates/key_pairs/import.html
  
openstack_dashboard/dashboards/project/security_groups/templates/security_groups/create.html
  openstack_dashboard/dashboards/project/security_groups/views.py
  openstack_dashboard/dashboards/project/stacks/mappings.py
  openstack_dashboard/test/integration_tests/regions/menus.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1679895/+subscriptions



[Yahoo-eng-team] [Bug 1648129] Re: add ironic support to trunk extension

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/436756
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4a6d06550bd98b473bdcf5e58dcfb7f5ca9424ef
Submitter: Jenkins
Branch: master

commit 4a6d06550bd98b473bdcf5e58dcfb7f5ca9424ef
Author: Armando Migliaccio 
Date:   Tue Feb 21 18:43:38 2017 -0800

Inherit segmentation details for trunk subports if requested

This patch introduces support for requests where the user does
not know the segmentation details of a subport and by specifying
segmentation_type=inherit will let the trunk plugin infer these
details from the network to which the subport is connected to, thus
ignoring the segmentation_id in case it were to be specified.

This type of request is currently expected to have correct results
when the network segmentation type is 'vlan', and the network has
only one segment (provider-net extension use case).

DocImpact: Extend trunk documentation to include Ironic use case.

Closes-bug: #1648129

Depends-on: Ib510aade1716e6ca92940b85245eda7d0c84a070
Change-Id: I3be2638fddf3a9723dd852a3f9ea9f64eb1d0dd6
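The inheritance described in the commit message can be sketched as follows (illustrative only; field names are simplified and this is not neutron's actual code):

```python
def resolve_subport_segmentation(subport, network_segments):
    """Return (segmentation_type, segmentation_id) for a trunk subport.

    With segmentation_type=inherit, the details are inferred from the
    subport's network, and any caller-supplied segmentation_id is ignored.
    """
    if subport.get("segmentation_type") != "inherit":
        return subport["segmentation_type"], subport["segmentation_id"]
    # Per the commit message, correct results are expected only for a
    # network with exactly one 'vlan' segment (provider-net use case).
    if len(network_segments) != 1:
        raise ValueError("inherit requires exactly one network segment")
    seg = network_segments[0]
    if seg["network_type"] != "vlan":
        raise ValueError("inherit currently expects a 'vlan' segment")
    return seg["network_type"], seg["segmentation_id"]
```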


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648129

Title:
  add ironic support to trunk extension

Status in neutron:
  Fix Released

Bug description:
  During Newton we left out taking care of Ironic's needs. We should make
  sure this doesn't get lost. More details on:

  http://lists.openstack.org/pipermail/openstack-
  dev/2016-December/108530.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1648129/+subscriptions



[Yahoo-eng-team] [Bug 1679999] [NEW] Inherit segmentation details for trunk subports if requested

2017-04-05 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/436756
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 4a6d06550bd98b473bdcf5e58dcfb7f5ca9424ef
Author: Armando Migliaccio 
Date:   Tue Feb 21 18:43:38 2017 -0800

Inherit segmentation details for trunk subports if requested

This patch introduces support for requests where the user does
not know the segmentation details of a subport and by specifying
segmentation_type=inherit will let the trunk plugin infer these
details from the network to which the subport is connected, thus
ignoring the segmentation_id in case it were to be specified.

This type of request is currently expected to have correct results
when the network segmentation type is 'vlan', and the network has
only one segment (provider-net extension use case).

DocImpact: Extend trunk documentation to include Ironic use case.

Closes-bug: #1648129

Depends-on: Ib510aade1716e6ca92940b85245eda7d0c84a070
Change-Id: I3be2638fddf3a9723dd852a3f9ea9f64eb1d0dd6

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/167

Title:
  Inherit segmentation details for trunk subports if requested

Status in neutron:
  New

Bug description:
  https://review.openstack.org/436756
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4a6d06550bd98b473bdcf5e58dcfb7f5ca9424ef
  Author: Armando Migliaccio 
  Date:   Tue Feb 21 18:43:38 2017 -0800

  Inherit segmentation details for trunk subports if requested
  
  This patch introduces support for requests where the user does
  not know the segmentation details of a subport and by specifying
  segmentation_type=inherit will let the trunk plugin infer these
  details from the network to which the subport is connected, thus
  ignoring the segmentation_id in case it were to be specified.
  
  This type of request is currently expected to have correct results
  when the network segmentation type is 'vlan', and the network has
  only one segment (provider-net extension use case).
  
  DocImpact: Extend trunk documentation to include Ironic use case.
  
  Closes-bug: #1648129
  
  Depends-on: Ib510aade1716e6ca92940b85245eda7d0c84a070
  Change-Id: I3be2638fddf3a9723dd852a3f9ea9f64eb1d0dd6

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/167/+subscriptions



[Yahoo-eng-team] [Bug 1679974] Re: HTTP 400 Exception while move all existing metadata in path of update aggregate metadata.

2017-04-05 Thread wei.ying
** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1679974

Title:
  HTTP 400 Exception while move all existing metadata in path of update
  aggregate metadata.

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  While trying to update a specific aggregate's extra specs (removing all
  existing metadata), I am getting an HTTP 400 exception.

  Env: devstack master branch

  Exception info:
  [05/Apr/2017 08:12:41] "PATCH /api/nova/aggregates/2/extra-specs/ HTTP/1.1" 
400 174

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1679974/+subscriptions



[Yahoo-eng-team] [Bug 1679794] Re: test_show_router_attribute_with_timestamp can fail in certain environments

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453285
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9b31b388a83d6cd1746f5698c811943a08263015
Submitter: Jenkins
Branch: master

commit 9b31b388a83d6cd1746f5698c811943a08263015
Author: Brian Haley 
Date:   Tue Apr 4 14:10:12 2017 -0400

Fix tempest router timestamp test when HA enabled

When run in an HA or DVR configured environment,
the test_show_router_attribute_with_timestamp API
test can fail with an 'updated_at' timestamp mismatch.

The test should check if the timestamp is >= since
post-creation code could update the object.

Closes-bug: #1679794

Change-Id: I3c58af022d1699ab05ca964b6d957dae39cf1ccc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1679794

Title:
  test_show_router_attribute_with_timestamp can fail in certain
  environments

Status in neutron:
  Fix Released

Bug description:
  When running in an HA or DVR configured environment, the following
  tempest API test can fail with an 'updated_at' timestamp mismatch:

  
neutron.tests.tempest.api.test_timestamp.TestTimeStampWithL3.test_show_router_attribute_with_timestamp

  In the HA case, the problem is that the create_router() code in
  l3_hamode_db.py looks like this:

  router_dict = super(L3_HA_NAT_db_mixin,
                      self).create_router(context, router)
  if is_ha:
      try:
          router_db = self._get_router(context, router_dict['id'])

          self.schedule_router(context, router_dict['id'])

          router_dict['ha_vr_id'] = router_db.extra_attributes.ha_vr_id
          router_dict['status'] = self._update_router_db(
              context, router_dict['id'],
              {'status': n_const.ROUTER_STATUS_ACTIVE})['status']
          self._notify_router_updated(context, router_db.id,
                                      schedule_routers=False)

  Since a vr_id is allocated, the router DB entry is updated with that
  info right after it was created.  The original POST call is returning
  what is essentially "stale" data.  A subsequent GET will show a
  changed 'updated_at' field.

  You can see this if you create a router with --ha in a one-node
  devstack:

  (admin) $ openstack router create --ha --project
  ae5c9703507546f5801a14ddb124d47e router3

  $ openstack router show router3
  +-+--+
  | Field   | Value|
  +-+--+
  | admin_state_up  | UP   |
  | availability_zone_hints |  |
  | availability_zones  | nova |
  | created_at  | 2017-04-03T16:01:43Z |
  | description |  |
  | distributed | False|
  | external_gateway_info   | None |
  | flavor_id   | None |
  | ha  | False|
  | id  | 08ec51fb-188e-460a-bb01-61ddf160b77c |
  | name| router3  |
  | project_id  | ae5c9703507546f5801a14ddb124d47e |
  | revision_number | 5|
  | routes  |  |
  | status  | ACTIVE   |
  | updated_at  | 2017-04-03T16:01:44Z |
  +-+--+

  The 'updated_at' timestamp is +1s from 'created_at'.

  The test should check >= instead.  The same problem would happen with
  DVR.
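The suggested fix amounts to a tolerant comparison; a sketch (not the actual tempest code):

```python
from datetime import datetime

def check_timestamps(created_at, updated_at):
    """Tolerant check per the suggested fix: HA/DVR scheduling may bump
    updated_at right after creation, so require updated_at >= created_at
    rather than strict equality."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (datetime.strptime(updated_at, fmt)
            >= datetime.strptime(created_at, fmt))
```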

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1679794/+subscriptions



[Yahoo-eng-team] [Bug 1679974] [NEW] HTTP 400 Exception while move all existing metadata in path of update aggregate metadata.

2017-04-05 Thread wei.ying
Public bug reported:

While trying to update a specific aggregate's extra specs (removing all
existing metadata), I am getting an HTTP 400 exception.

Env: devstack master branch

Exception info:
[05/Apr/2017 08:12:41] "PATCH /api/nova/aggregates/2/extra-specs/ HTTP/1.1" 400 
174

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Description changed:

- while try to update a specific aggregate's extra specsc(delete all ), i
- am getting HTTP 400 Exception.
+ while try to update a specific aggregate's extra specsc(remove all
+ existing metadata), i am getting HTTP 400 Exception.
  
  Env: devstack master branch
  
  Exception info:
  [05/Apr/2017 08:12:41] "PATCH /api/nova/aggregates/2/extra-specs/ HTTP/1.1" 
400 174

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1679974

Title:
  HTTP 400 Exception while move all existing metadata in path of update
  aggregate metadata.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While trying to update a specific aggregate's extra specs (removing all
  existing metadata), I am getting an HTTP 400 exception.

  Env: devstack master branch

  Exception info:
  [05/Apr/2017 08:12:41] "PATCH /api/nova/aggregates/2/extra-specs/ HTTP/1.1" 
400 174

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1679974/+subscriptions



[Yahoo-eng-team] [Bug 1679976] [NEW] Quobyte driver may fail to raise an error upon error code 4 on volume mounting

2017-04-05 Thread Silvan Kaiser
Public bug reported:

The Quobyte libvirt volume driver currently accepts exit codes [0, 4] when
mounting a Quobyte volume
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/quobyte.py#L55).
However, exit code 4 might be returned both in an acceptable case (the volume
is already mounted) and in unacceptable cases (a specific bad state). As the
driver already checks for the existence of the mount when connecting a volume,
exit code 4 should no longer be accepted and should instead be treated as a
failure.
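A sketch of the proposed behavior (illustrative helper, not the driver's actual code): treat any non-zero exit code, including 4, as a failure, relying on the pre-existing is-mounted check instead.

```python
def mount_volume(run_mount, already_mounted):
    """run_mount() returns the mount helper's exit code.

    The already-mounted check replaces tolerating exit code 4, so any
    non-zero code from the helper is now an error.
    """
    if already_mounted:
        return  # nothing to do
    code = run_mount()
    if code != 0:
        raise RuntimeError("mount failed with exit code %d" % code)
```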

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: quobyte

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679976

Title:
  Quobyte driver may fail to raise an error upon error code 4 on volume
  mounting

Status in OpenStack Compute (nova):
  New

Bug description:
  The Quobyte libvirt volume driver currently accepts exit codes [0,4] when 
mounting a Quobyte volume 
(https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/quobyte.py#L55).
  However error code 4 might be thrown in acceptable (already mounted) and 
unacceptable cases (specific bad state). As the driver already checks for 
existence of the mount when connecting a volume the error code should be 
removed as acceptable and failed upon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1679976/+subscriptions



[Yahoo-eng-team] [Bug 1679941] [NEW] Data still exsits in instance_mappings when delete instance

2017-04-05 Thread Aaron DH
Public bug reported:

I have booted a new instance in devstack:
nova boot demo_vm4 --image 14c388ea-797d-4fc5-99a7-bb8452c53d22 --flavor c1

This adds a record to the DB table instance_mappings:
mysql> select * from instance_mappings where instance_uuid = 
'df4c6cc9-aee6-4a7d-bc03-7e75f8cddca5'\G;
*** 1. row ***
   created_at: 2017-04-05 06:53:59
   updated_at: 2017-04-05 06:54:00
   id: 39
instance_uuid: df4c6cc9-aee6-4a7d-bc03-7e75f8cddca5
  cell_id: 2
   project_id: bad99b731c09492cb5c9b9d876c5004c

Now I have deleted this instance:
nova delete demo_vm4

This DB record still exists.
Should we delete the row from instance_mappings when deleting the instance?
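The suggested cleanup can be sketched with plain dicts standing in for the two tables (hypothetical helper, not nova's code):

```python
def delete_instance(instance_uuid, instances, instance_mappings):
    """Delete an instance and, per the reporter's suggestion, drop its
    instance_mappings row at the same time.

    instances / instance_mappings are dicts keyed by instance UUID,
    standing in for the corresponding DB tables.
    """
    instances.pop(instance_uuid, None)
    # Proposed fix: remove the mapping alongside the instance record
    # instead of leaving it behind.
    instance_mappings.pop(instance_uuid, None)
```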

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- Found data in instance_mappings when delete instance
+ Data still exsits in instance_mappings when delete instance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1679941

Title:
  Data still exsits in instance_mappings when delete instance

Status in OpenStack Compute (nova):
  New

Bug description:
  I have booted a new instance in devstack:
  nova boot demo_vm4 --image 14c388ea-797d-4fc5-99a7-bb8452c53d22 --flavor c1

  This adds a record to the DB table instance_mappings:
  mysql> select * from instance_mappings where instance_uuid = 
'df4c6cc9-aee6-4a7d-bc03-7e75f8cddca5'\G;
  *** 1. row ***
 created_at: 2017-04-05 06:53:59
 updated_at: 2017-04-05 06:54:00
 id: 39
  instance_uuid: df4c6cc9-aee6-4a7d-bc03-7e75f8cddca5
cell_id: 2
 project_id: bad99b731c09492cb5c9b9d876c5004c

  Now I have deleted this instance:
  nova delete demo_vm4

  This DB record still exists.
  Should we delete the row from instance_mappings when deleting the instance?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1679941/+subscriptions



[Yahoo-eng-team] [Bug 1678475] Re: Apply QoS policy on network:router_gateway

2017-04-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/453053
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=449fc666801ce2cc74e36ebcec9734672d6409fc
Submitter: Jenkins
Branch: master

commit 449fc666801ce2cc74e36ebcec9734672d6409fc
Author: Maxime Guyot 
Date:   Tue Apr 4 10:02:07 2017 +0200

Update networking-guide/config-qos

Change-Id: I76a1cbcdba1c1cce80392723fbc48ce691f2e6c8
Closes-Bug: #1678475
Related-Bug: #1659265


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1678475

Title:
  Apply QoS policy on network:router_gateway

Status in neutron:
  Confirmed
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/425218
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2d1ee7add7c08ebbf8de7f9a0dc2aeb5344a4052
  Author: Maxime Guyot 
  Date:   Wed Mar 8 15:14:32 2017 +0100

  Apply QoS policy on network:router_gateway
  
  All router ports (internal and external) used to be excluded from QoS
  policies applied on network. This patch excludes only internal router
  ports from network QoS policies.
  This allows cloud administrators to set an egress QoS policy to a
  public/external network and have the QoS policy applied on all external
  router ports (DVR or not). To the tenant this is also egress traffic so
  no confusion compared to QoS policies applied to VM ports.
  
  DocImpact
  
  Update networking-guide/config-qos, User workflow section:
  - Replace "Network owned ports" with "Internal network owned ports"
  
  Change-Id: I2428c2466f41a022196576f4b14526752543da7a
  Closes-Bug: #1659265
  Related-Bug: #1486039
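The new filtering rule can be sketched as follows (illustrative; only the `network:router_interface` device owner is shown as internal, and the real list may be broader):

```python
# Device owners considered *internal* router ports, which remain excluded
# from network-wide QoS policies.  Gateway ports are no longer excluded.
INTERNAL_ROUTER_DEVICE_OWNERS = {"network:router_interface"}

def ports_subject_to_network_qos(ports):
    """Return the ports on a network that a network QoS policy applies to."""
    return [p for p in ports
            if p["device_owner"] not in INTERNAL_ROUTER_DEVICE_OWNERS]
```

This matches the commit's intent: an egress policy on an external network now reaches router gateway ports, while internal router ports stay exempt.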

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1678475/+subscriptions
