[Yahoo-eng-team] [Bug 1686540] Re: test_create_server_invalid_bdm_in_2nd_dict Failed

2017-04-26 Thread Saravana Ganeshan
** Also affects: newton
   Importance: Undecided
   Status: New

** No longer affects: newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686540

Title:
  test_create_server_invalid_bdm_in_2nd_dict Failed

Status in OpenStack Compute (nova):
  New

Bug description:
  When I run the test case test_create_server_invalid_bdm_in_2nd_dict in
  tempest, the test fails with the result below.

  OpenStack Version: Newton

  ==
  Failed 1 tests - output below:
  ==

  
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_invalid_bdm_in_2nd_dict[id-12146ac1-d7df-4928-ad25-b1f99e5286cd,negative]
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "tempest/test.py", line 163, in wrapper
  raise exc
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  

  
  Captured pythonlogging:
  ~~~
  2017-04-26 14:57:42,265 22886 INFO [tempest.lib.common.rest_client] 
Request (ServersNegativeTestJSON:setUp): 200 GET 
http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5
 0.138s
  2017-04-26 14:57:42,266 22886 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '200', u'content-length': '1676', 
'content-location': 
'http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5',
 u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', u'x-compute-request-id': 
'req-b24ea609-e7bb-4806-8679-98c32d75a780', u'content-type': 
'application/json', u'connection': 'close'}
  Body: {"server": {"OS-EXT-STS:task_state": null, "addresses": 
{"rally_verify_3842fe6d_39ORN5Gi": [{"OS-EXT-IPS-MAC:mac_addr": 
"fa:16:3e:30:8d:2a", "version": 4, "addr": "10.2.0.4", "OS-EXT-IPS:type": 
"fixed"}]}, "links": [{"href": 
"http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5";,
 "rel": "self"}, {"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5";,
 "rel": "bookmark"}], "image": {"id": "2897cc0b-1d3c-40b9-8587-447b8d3e0445", 
"links": [{"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/images/2897cc0b-1d3c-40b9-8587-447b8d3e0445";,
 "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", 
"OS-SRV-USG:launched_at": "2017-04-26T21:57:41.00", "flavor": {"id": 
"c742038a-4d78-4899-aa5f-269f502c8665", "links": [{"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/flavors/c742038a-4d78-4899-aa5f-269f502c8665";,
 "rel": "boo
 kmark"}]}, "id": "6239b0ff-6900-4af8-8e49-5c4e0199afa5", "security_groups": 
[{"name": "default"}], "user_id": "5104ec988e964669997b4f8a80914288", 
"OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 
0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", 
"metadata": {}, "status": "ACTIVE", "updated": "2017-04-26T21:57:41Z", 
"hostId": "e49a0d4ef3c82f4d32305bbfd64ae2131f549992e45b4c1cb9be0de7", 
"OS-SRV-USG:terminated_at": null, "key_name": null, "name": 
"tempest-ServersNegativeTestJSON-server-1443765423", "created": 
"2017-04-26T21:57:35Z", "tenant_id": "84fddff4dcc44eecbfa6e8dc824e291d", 
"os-extended-volumes:volumes_attached": [], "config_drive": ""}}
  2017-04-26 14:57:42,891 22886 INFO [tempest.lib.common.rest_client] 
Request (ServersNegativeTestJSON:test_create_server_invalid_bdm_in_2nd_dict): 
200 POST http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes 
0.620s
  2017-04-26 14:57:42,892 22886 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"volume": {"display_name": 
"tempest-ServersNegativeTestJSON-volume-1449183921", "size": 1}}
  Response - Headers: {'status': '200', u'content-length': '426', 
'content-location': 
'http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes', 
u'x-compute-request-id': 'req-e4dcf42c-7c6c-4339-807a-a12de5627b28', 
u'connection': 'close', u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', 
u'content-type': 'application/json', u'x-openstack-request-id': 
'req-e4dcf42c-7c6c-4339-807a-a12de5627b28'}
  Body: {"volume": {"status": "creating", "di

[Yahoo-eng-team] [Bug 1686584] Re: a few tempest tests are failing (fip related?)

2017-04-26 Thread YAMAMOTO Takashi
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686584

Title:
  a few tempest tests are failing (fip related?)

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress

Bug description:
  The following tests are failing for both v2 and ml2.

  test_router_interface_fip
  test_update_floatingip_bumps_revision

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1686584/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686356] Re: Cannot update quota healthmonitor with tenant_id

2017-04-26 Thread yangyide
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686356

Title:
  Cannot update quota healthmonitor with tenant_id

Status in neutron:
  Invalid

Bug description:
  Version is stable/mitaka.

  I updated the healthmonitor quota with a tenant_id via the command line,
  like this:

  neutron quota-update --healthmonitor 100 --tenant_id
  2edf976f0f274f22a44d18916d6123a6

  Afterwards, in the neutron quotas table, I found a record whose tenant_id
  field is 100 and whose limit field is 1.

  All other quotas seem to be handled correctly; only healthmonitor is
  affected.

  If I use neutron quota-update --health-monitor 100 instead, it says:
  Unrecognized attribute(s) 'health_monitor'

  Below is the terminal output.

  [root@node ~]# neutron quota-update  --healthmonitor 100 --tenant_id 
2edf976f0f274f22a44d18916d6123a6
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | healthmonitor       | 1     |
  | l7policy            | -1    |
  | listener            | -1    |
  | loadbalancer        | 10    |
  | network             | 10    |
  | pool                | 10    |
  | port                | 50    |
  | rbac_policy         | 10    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | subnetpool          | -1    |
  +---------------------+-------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686588] [NEW] Private images not listed when creating volume or instance from image

2017-04-26 Thread Oisin
Public bug reported:

In Horizon with Image API v2 enabled, in the "Create Volume", "Create
Instance" and "Rebuild Instance" dialogs, when selecting an image, only
public images are listed. Private images are missing from the list. This
only occurs when Horizon is configured to use the glance image API v2;
when v1 is used, the issue does not occur.

With /etc/openstack-dashboard/local_settings containing the following, it
works:

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
    "image": 1,
}

And the following doesn't:

OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "volume": 2,
    "image": 2,
}

Testing was done with Newton Horizon version 10.0.1.
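For context, a minimal sketch of one way the two API versions differ when
listing images (the endpoint and token below are placeholders, not values
from this report): image API v1 filters on the boolean 'is_public' property,
while v2 filters on the string 'visibility' field, so a v1-style filter
applied to a v2 listing does not select private images.

from glanceclient import Client

GLANCE_URL = 'http://controller:9292'   # placeholder endpoint
TOKEN = 'replace-with-a-real-token'     # placeholder token

glance_v1 = Client('1', endpoint=GLANCE_URL, token=TOKEN)
glance_v2 = Client('2', endpoint=GLANCE_URL, token=TOKEN)

# v1: non-public (private) images are selected with the boolean property
private_v1 = list(glance_v1.images.list(filters={'is_public': False}))

# v2: the equivalent listing uses the 'visibility' string filter instead
private_v2 = list(glance_v2.images.list(filters={'visibility': 'private'}))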

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686588

Title:
  Private images not listed when creating volume or instance from image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Horizon with Image API v2 enabled, in the "Create Volume", "Create
  Instance" and "Rebuild Instance" dialogs, when selecting an image,
  only public images are listed. Private images are missing from the
  list. This only occurs when Horizon is configured to use the glance
  image API v2; when v1 is used, the issue does not occur.

  With /etc/openstack-dashboard/local_settings containing the following,
  it works:

  OPENSTACK_API_VERSIONS = {
      "data-processing": 1.1,
      "identity": 3,
      "volume": 2,
      "image": 1,
  }

  And the following doesn't:

  OPENSTACK_API_VERSIONS = {
      "data-processing": 1.1,
      "identity": 3,
      "volume": 2,
      "image": 2,
  }

  Testing was done with Newton Horizon version 10.0.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1686588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686540] [NEW] test_create_server_invalid_bdm_in_2nd_dict Failed

2017-04-26 Thread Saravana Ganeshan
Public bug reported:

When I run the test case test_create_server_invalid_bdm_in_2nd_dict in
tempest, the test fails with the result below.

OpenStack Version: Newton

==
Failed 1 tests - output below:
==

tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_create_server_invalid_bdm_in_2nd_dict[id-12146ac1-d7df-4928-ad25-b1f99e5286cd,negative]
--

Captured traceback:
~~~
Traceback (most recent call last):
  File "tempest/test.py", line 163, in wrapper
raise exc
tempest.lib.exceptions.ServerFault: Got server fault
Details: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.



Captured pythonlogging:
~~~
2017-04-26 14:57:42,265 22886 INFO [tempest.lib.common.rest_client] 
Request (ServersNegativeTestJSON:setUp): 200 GET 
http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5
 0.138s
2017-04-26 14:57:42,266 22886 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '200', u'content-length': '1676', 
'content-location': 
'http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5',
 u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', u'x-compute-request-id': 
'req-b24ea609-e7bb-4806-8679-98c32d75a780', u'content-type': 
'application/json', u'connection': 'close'}
Body: {"server": {"OS-EXT-STS:task_state": null, "addresses": 
{"rally_verify_3842fe6d_39ORN5Gi": [{"OS-EXT-IPS-MAC:mac_addr": 
"fa:16:3e:30:8d:2a", "version": 4, "addr": "10.2.0.4", "OS-EXT-IPS:type": 
"fixed"}]}, "links": [{"href": 
"http://172.26.232.170:8774/v2/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5";,
 "rel": "self"}, {"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/servers/6239b0ff-6900-4af8-8e49-5c4e0199afa5";,
 "rel": "bookmark"}], "image": {"id": "2897cc0b-1d3c-40b9-8587-447b8d3e0445", 
"links": [{"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/images/2897cc0b-1d3c-40b9-8587-447b8d3e0445";,
 "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "active", 
"OS-SRV-USG:launched_at": "2017-04-26T21:57:41.00", "flavor": {"id": 
"c742038a-4d78-4899-aa5f-269f502c8665", "links": [{"href": 
"http://172.26.232.170:8774/84fddff4dcc44eecbfa6e8dc824e291d/flavors/c742038a-4d78-4899-aa5f-269f502c8665";,
 "rel": "bookm
 ark"}]}, "id": "6239b0ff-6900-4af8-8e49-5c4e0199afa5", "security_groups": 
[{"name": "default"}], "user_id": "5104ec988e964669997b4f8a80914288", 
"OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 
0, "OS-EXT-STS:power_state": 1, "OS-EXT-AZ:availability_zone": "nova", 
"metadata": {}, "status": "ACTIVE", "updated": "2017-04-26T21:57:41Z", 
"hostId": "e49a0d4ef3c82f4d32305bbfd64ae2131f549992e45b4c1cb9be0de7", 
"OS-SRV-USG:terminated_at": null, "key_name": null, "name": 
"tempest-ServersNegativeTestJSON-server-1443765423", "created": 
"2017-04-26T21:57:35Z", "tenant_id": "84fddff4dcc44eecbfa6e8dc824e291d", 
"os-extended-volumes:volumes_attached": [], "config_drive": ""}}
2017-04-26 14:57:42,891 22886 INFO [tempest.lib.common.rest_client] 
Request (ServersNegativeTestJSON:test_create_server_invalid_bdm_in_2nd_dict): 
200 POST http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes 
0.620s
2017-04-26 14:57:42,892 22886 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
Body: {"volume": {"display_name": 
"tempest-ServersNegativeTestJSON-volume-1449183921", "size": 1}}
Response - Headers: {'status': '200', u'content-length': '426', 
'content-location': 
'http://172.26.232.170:8776/v1/84fddff4dcc44eecbfa6e8dc824e291d/volumes', 
u'x-compute-request-id': 'req-e4dcf42c-7c6c-4339-807a-a12de5627b28', 
u'connection': 'close', u'date': 'Wed, 26 Apr 2017 21:57:42 GMT', 
u'content-type': 'application/json', u'x-openstack-request-id': 
'req-e4dcf42c-7c6c-4339-807a-a12de5627b28'}
Body: {"volume": {"status": "creating", "display_name": 
"tempest-ServersNegativeTestJSON-volume-1449183921", "attachments": [], 
"availability_zone": "nova", "bootable": "false", "encrypted": false, 
"created_at": "2017-04-26T21:57:42.726580", "multiattach": "false", 
"display_description": null, "volume_type": null, "snapshot_id": null, 
"source_volid": null, "metadata": {}, "id": 
"866051f0-1c55-43d3-8eda-3d57347bab06", "size": 1}}
2017-04-26 14:57:43,308 22886 INFO 

[Yahoo-eng-team] [Bug 1686538] [NEW] Runs on OpenStack, doesn't use OpenStack metadata, then complains

2017-04-26 Thread Florian Haas
Public bug reported:

Getting this on a Xenial image running 0.7.9-48-g1c795b9-0ubuntu:

**************************************************************************
# A new feature in cloud-init identified possible datasources for
# this system as:
#   ['OpenStack', 'None']
# However, the datasource used was: Ec2
#
# In the future, cloud-init will only attempt to use datasources that
# are identified or specifically configured.
# For more information see
#   https://bugs.launchpad.net/bugs/1669675
#
# If you are seeing this message, please file a bug against
# cloud-init at
#   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
# Make sure to include the cloud provider your instance is
# running on.
#
# After you have filed a bug, you can disable this warning by launching
# your instance with the cloud-config below, or putting that content
# into /etc/cloud/cloud.cfg.d/99-warnings.cfg
#
# #cloud-config
# warnings:
#   dsid_missing_source: off
**************************************************************************
**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to
# be running on Amazon EC2 or one of cloud-init's known platforms that
# provide a EC2 Metadata service. In the future, cloud-init may stop
# reading metadata from the EC2 Metadata Service unless the platform can
# be identified.
#
# If you are seeing this message, please file a bug against
# cloud-init at
#   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
# Make sure to include the cloud provider your instance is
# running on.
#
# For more information see
#   https://bugs.launchpad.net/bugs/1660385
#
# After you have filed a bug, you can disable this warning by
# launching your instance with the cloud-config below, or
# putting that content into
#   /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg
#
# #cloud-config
# datasource:
#  Ec2:
#   strict_id: false
**************************************************************************

Disable the warnings above by:
  touch /home/ubuntu/.cloud-warnings.skip
or
  touch /var/lib/cloud/instance/warnings/.skip

However:

# /usr/lib/cloud-init/ds-identify --force
# cat /run/cloud-init/ds-identify.log 

[up 1999.73s] ds-identify --force
policy loaded: mode=report report=false found=all maybe=all notfound=enabled
/etc/cloud/cloud.cfg.d/90_dpkg.cfg set datasource_list: [ NoCloud, ConfigDrive, 
OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, 
CloudSigma, SmartOS, Ec2, CloudStack, None ]
DMI_PRODUCT_NAME=OpenStack Nova
DMI_SYS_VENDOR=OpenStack Foundation
DMI_PRODUCT_SERIAL=2c3b31d8-a9e3-446d-9964-554eb4ffc183
DMI_PRODUCT_UUID=321B9885-C76D-4BB5-988D-0CBA6662859B
PID_1_PLATFORM=unavailable
FS_LABELS=cloudimg-rootfs
KERNEL_CMDLINE=BOOT_IMAGE=/boot/vmlinuz-4.4.0-70-generic 
root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
VIRT=kvm
UNAME_KERNEL_NAME=Linux
UNAME_KERNEL_RELEASE=4.4.0-70-generic
UNAME_KERNEL_VERSION=#91-Ubuntu SMP Wed Mar 22 12:47:43 UTC 2017
UNAME_MACHINE=x86_64
UNAME_NODENAME=mybox
UNAME_OPERATING_SYSTEM=GNU/Linux
DSN

[Yahoo-eng-team] [Bug 1680563] Re: Too many RC download buttons in API access page

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/454305
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=1f7e63910239b9ca5fdd482f165f0b8b1ebeae67
Submitter: Jenkins
Branch:master

commit 1f7e63910239b9ca5fdd482f165f0b8b1ebeae67
Author: Akihiro Motoki 
Date:   Thu Apr 6 18:54:18 2017 +

Move all RC download buttons under a single menu

In the API access page, we now see three or four
"Download OpenStack RC" buttons.
This commit packs them into a single menu.

To complete the goal, a new option called "table_actions_menu_label"
is newly introduced to DataTable.

Also the condition to disable an action dropdown menu is changed
so that the action dropdown is not disabled when "multi_select"
feature is not used in a corresponding table.
Otherwise, the dropdown menu is always disabled and there is
no way to enable it in a DataTable with multi_select=False.

Change-Id: I229c69b09c45bb20b3df48c9901d76b89fd27ee4
Closes-Bug: #1680563


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1680563

Title:
  Too many RC download buttons in API access page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the API access page, we now see three or four "Download OpenStack RC" 
buttons.
  It would be nice if they were packed into a single menu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1680563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684338] Re: tempest jobs failing with midonet-cluster complaining about keystone

2017-04-26 Thread Ihar Hrachyshka
It's not clear why it's a Neutron issue and not Midonet, so I changed
the component to networking-midonet for now. Feel free to move back or
add neutron to the list of affected projects if you have more
information that points to neutron.

** Project changed: neutron => networking-midonet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684338

Title:
  tempest jobs failing with midonet-cluster complaining about keystone

Status in networking-midonet:
  In Progress

Bug description:
  eg. http://logs.openstack.org/11/458011/1/check/gate-tempest-dsvm-
  networking-midonet-ml2-ubuntu-xenial/86d989d/logs/midonet-
  cluster.txt.gz

  2017.04.19 10:50:50.132 ERROR [rest-api-55] auth Login authorization error 
occurred for user null
  java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) 
~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
 ~[na:1.8.0_121]
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) 
~[na:1.8.0_121]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_121]
at java.net.Socket.connect(Socket.java:538) ~[na:1.8.0_121]
at sun.net.NetworkClient.doConnect(NetworkClient.java:180) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.(HttpClient.java:211) 
~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:308) ~[na:1.8.0_121]
at sun.net.www.http.HttpClient.New(HttpClient.java:326) ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966) 
~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
 ~[na:1.8.0_121]
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
 ~[na:1.8.0_121]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler$1$1.getOutputStream(URLConnectionClientHandler.java:238)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.commitStream(CommittingOutputStream.java:117)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.CommittingOutputStream.write(CommittingOutputStream.java:89)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter$LoggingOutputStream.write(LoggingFilter.java:110)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:1848)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1041)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:854) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:650) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.RequestWriter.writeRequestEntity(RequestWriter.java:300)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:217)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:153)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
... 39 common frames omitted
  Wrapped by: com.sun.jersey.api.client.ClientHandlerException: 
java.net.ConnectException: Connection refused (Connection refused)
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:155)
 ~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jersey.api.client.filter.LoggingFilter.handle(LoggingFilter.java:217) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.Client.handle(Client.java:652) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:682) 
~[midonet-cluster.jar:5.6-SNAPSHOT]
at 
com.sun.jer

[Yahoo-eng-team] [Bug 1669900] Re: ovs-vswitchd crashed in functional test with segmentation fault

2017-04-26 Thread Ihar Hrachyshka
We switched to UCA, which should deliver a newer openvswitch (2.5.2) to us.
Let's close the bug and monitor whether it happens again. If it does, let's
reopen.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1669900

Title:
  ovs-vswitchd crashed in functional test with segmentation fault

Status in neutron:
  Fix Released

Bug description:
  2017-03-03T18:39:35.095Z|00107|connmgr|INFO|test-br368b7744<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.144Z|00108|connmgr|INFO|br-tunb76d9d9d9<->unix: 9 
flow_mods in the last 0 s (9 adds)
  2017-03-03T18:39:35.148Z|00109|connmgr|INFO|br-tunb76d9d9d9<->unix: 1 
flow_mods in the last 0 s (1 adds)
  2017-03-03T18:39:35.255Z|3|daemon_unix(monitor)|WARN|2 crashes: pid 7753 
died, killed (Segmentation fault), waiting until 10 seconds since last restart
  2017-03-03T18:39:43.255Z|4|daemon_unix(monitor)|ERR|2 crashes: pid 7753 
died, killed (Segmentation fault), restarting
  2017-03-03T18:39:43.256Z|5|ovs_numa|INFO|Discovered 4 CPU cores on NUMA 
node 0
  2017-03-03T18:39:43.256Z|6|ovs_numa|INFO|Discovered 1 NUMA nodes and 4 
CPU cores
  2017-03-03T18:39:43.256Z|7|memory|INFO|8172 kB peak resident set size 
after 694.6 seconds
  
2017-03-03T18:39:43.256Z|8|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connecting...
  
2017-03-03T18:39:43.256Z|9|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
 connected

  
  
http://logs.openstack.org/73/441273/1/check/gate-neutron-dsvm-functional-ubuntu-xenial/82f5446/logs/openvswitch/ovs-vswitchd.txt.gz

  This triggered functional test failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1669900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686488] [NEW] glance image-download error

2017-04-26 Thread jiaopengju
Public bug reported:

When using the command ' glance image-download --file xxx.raw ' to
download an image, it fails with error code 500. The glance-api log shows
the following:

 File "/opt/stack/glance/glance/common/wsgi.py", line 794, in __call__
response = self.process_request(req)
  File "/opt/stack/glance/glance/api/middleware/cache.py", line 180, in 
process_request
return method(request, image_id, image_iterator, image_metadata)
  File "/opt/stack/glance/glance/api/middleware/cache.py", line 235, in 
_process_v2_request
self._verify_metadata(image_meta)
  File "/opt/stack/glance/glance/api/middleware/cache.py", line 75, in 
_verify_metadata
image_meta['size'] = self.cache.get_image_size(image_meta['id'])
TypeError: 'ImageTarget' object does not support item assignment

This should be fixed.
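For context, a self-contained illustration of why that TypeError occurs (the
class below is a simplified stand-in, not glance's actual ImageTarget): an
object that only implements __getitem__ supports dict-style reads but raises
TypeError on item assignment, which is what the cache middleware attempts
when it sets image_meta['size']. Copying the metadata into a plain dict
before mutating it is one possible workaround, not necessarily the eventual
fix.

class ImageTargetLike(object):
    """Simplified stand-in: read-only, dict-like access to image attributes."""

    def __init__(self, **attrs):
        self._attrs = attrs

    def __getitem__(self, key):
        return self._attrs[key]


image_meta = ImageTargetLike(id='abc', size=0)
print(image_meta['size'])              # dict-style reads work

try:
    image_meta['size'] = 12345         # no __setitem__ defined
except TypeError as exc:
    print(exc)                         # ... object does not support item assignment

mutable_meta = {'id': image_meta['id'], 'size': image_meta['size']}
mutable_meta['size'] = 12345           # copying to a plain dict sidesteps the error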

** Affects: glance
 Importance: Undecided
 Assignee: jiaopengju (pj-jiao)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => jiaopengju (pj-jiao)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1686488

Title:
  glance image-download error

Status in Glance:
  In Progress

Bug description:
  When using the command ' glance image-download --file xxx.raw ' to
  download an image, it fails with error code 500. The glance-api log shows
  the following:

   File "/opt/stack/glance/glance/common/wsgi.py", line 794, in __call__
  response = self.process_request(req)
File "/opt/stack/glance/glance/api/middleware/cache.py", line 180, in 
process_request
  return method(request, image_id, image_iterator, image_metadata)
File "/opt/stack/glance/glance/api/middleware/cache.py", line 235, in 
_process_v2_request
  self._verify_metadata(image_meta)
File "/opt/stack/glance/glance/api/middleware/cache.py", line 75, in 
_verify_metadata
  image_meta['size'] = self.cache.get_image_size(image_meta['id'])
  TypeError: 'ImageTarget' object does not support item assignment

  This should be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1686488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686485] Re: cc_ntp fails to work when deploying ubuntu-core

2017-04-26 Thread Blake Rouse
** Also affects: maas
   Importance: Undecided
   Status: New

** Changed in: maas
   Status: New => Triaged

** Changed in: maas
   Importance: Undecided => High

** Changed in: maas
Milestone: None => 2.2.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686485

Title:
  cc_ntp fails to work when deploying ubuntu-core

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  When deploying Ubuntu Core with MAAS I am seeing this error in
  /var/log/cloud-init.log:

  2017-04-26 18:11:45,172 - cc_apt_configure.py[DEBUG]: Nothing to do: No apt 
config and running on snappy
  2017-04-26 18:11:45,172 - handlers.py[DEBUG]: finish: 
modules-config/config-apt-configure: SUCCESS: config-apt-configure ran 
successfully
  2017-04-26 18:11:45,172 - stages.py[DEBUG]: Running module ntp () with frequency 
once-per-instance
  2017-04-26 18:11:45,172 - handlers.py[DEBUG]: start: 
modules-config/config-ntp: running config-ntp with frequency once-per-instance
  2017-04-26 18:11:45,173 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/mpcgqp/sem/config_ntp - wb: [420] 24 bytes
  2017-04-26 18:11:45,173 - helpers.py[DEBUG]: Running config-ntp using lock 
()
  2017-04-26 18:11:45,175 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/mpcgqp/sem/update_sources - wb: [420] 24 bytes
  2017-04-26 18:11:45,176 - helpers.py[DEBUG]: Running update-sources using 
lock ()
  2017-04-26 18:11:45,176 - util.py[DEBUG]: Running command ['apt-get', 
'--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'update'] with allowed return codes [0] (shell=False, capture=False)
  2017-04-26 18:11:45,186 - util.py[DEBUG]: apt-update [apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet update] took 
0.010 seconds
  2017-04-26 18:11:45,186 - util.py[DEBUG]: Running command ['apt-get', 
'--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'install', 'ntp'] with allowed return codes [0] (shell=False, capture=False)
  2017-04-26 18:11:45,191 - util.py[DEBUG]: apt-install [apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install ntp] 
took 0.005 seconds
  2017-04-26 18:11:45,193 - util.py[DEBUG]: Reading from 
/etc/cloud/templates/ntp.conf.ubuntu.tmpl (quiet=False)
  2017-04-26 18:11:45,193 - util.py[DEBUG]: Read 2509 bytes from 
/etc/cloud/templates/ntp.conf.ubuntu.tmpl
  2017-04-26 18:11:45,193 - templater.py[DEBUG]: Rendering content of 
'/etc/cloud/templates/ntp.conf.ubuntu.tmpl' using renderer jinja
  2017-04-26 18:11:45,197 - util.py[DEBUG]: Writing to /etc/ntp.conf - wb: 
[420] 2330 bytes
  2017-04-26 18:11:45,200 - handlers.py[DEBUG]: finish: 
modules-config/config-ntp: FAIL: running config-ntp with frequency 
once-per-instance
  2017-04-26 18:11:45,200 - util.py[WARNING]: Running module ntp () failed
  2017-04-26 18:11:45,202 - util.py[DEBUG]: Running module ntp () failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 787, in 
_run_modules
  freq=freq)
File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 54, in run
  return self._runners.run(name, functor, args, freq, clear_on_fail)
File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 187, in run
  results = functor(*args)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_ntp.py", line 80, 
in handle
  write_ntp_config_template(ntp_cfg, cloud)
File "/usr/lib/python3/dist-packages/cloudinit/config/cc_ntp.py", line 126, 
in write_ntp_config_template
  templater.render_to_file(template_fn, NTP_CONF, params)
File "/usr/lib/python3/dist-packages/cloudinit/templater.py", line 131, in 
render_to_file
  util.write_file(outfn, contents, mode=mode)
File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1711, in 
write_file
  with open(filename, omode) as fh:
  OSError: [Errno 30] Read-only file system: '/etc/ntp.conf'

  Note: This doesn't break deployment. Deployment still succeeds, except
  that ntp syncing is not set up to point to MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1686485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686485] [NEW] cc_ntp fails to work when deploying ubuntu-core

2017-04-26 Thread Blake Rouse
Public bug reported:

When deploying Ubuntu Core with MAAS I am seeing this error in
/var/log/cloud-init.log:

2017-04-26 18:11:45,172 - cc_apt_configure.py[DEBUG]: Nothing to do: No apt 
config and running on snappy
2017-04-26 18:11:45,172 - handlers.py[DEBUG]: finish: 
modules-config/config-apt-configure: SUCCESS: config-apt-configure ran 
successfully
2017-04-26 18:11:45,172 - stages.py[DEBUG]: Running module ntp () with frequency 
once-per-instance
2017-04-26 18:11:45,172 - handlers.py[DEBUG]: start: modules-config/config-ntp: 
running config-ntp with frequency once-per-instance
2017-04-26 18:11:45,173 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/mpcgqp/sem/config_ntp - wb: [420] 24 bytes
2017-04-26 18:11:45,173 - helpers.py[DEBUG]: Running config-ntp using lock 
()
2017-04-26 18:11:45,175 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/mpcgqp/sem/update_sources - wb: [420] 24 bytes
2017-04-26 18:11:45,176 - helpers.py[DEBUG]: Running update-sources using lock 
()
2017-04-26 18:11:45,176 - util.py[DEBUG]: Running command ['apt-get', 
'--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'update'] with allowed return codes [0] (shell=False, capture=False)
2017-04-26 18:11:45,186 - util.py[DEBUG]: apt-update [apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet update] took 
0.010 seconds
2017-04-26 18:11:45,186 - util.py[DEBUG]: Running command ['apt-get', 
'--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'install', 'ntp'] with allowed return codes [0] (shell=False, capture=False)
2017-04-26 18:11:45,191 - util.py[DEBUG]: apt-install [apt-get 
--option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install ntp] 
took 0.005 seconds
2017-04-26 18:11:45,193 - util.py[DEBUG]: Reading from 
/etc/cloud/templates/ntp.conf.ubuntu.tmpl (quiet=False)
2017-04-26 18:11:45,193 - util.py[DEBUG]: Read 2509 bytes from 
/etc/cloud/templates/ntp.conf.ubuntu.tmpl
2017-04-26 18:11:45,193 - templater.py[DEBUG]: Rendering content of 
'/etc/cloud/templates/ntp.conf.ubuntu.tmpl' using renderer jinja
2017-04-26 18:11:45,197 - util.py[DEBUG]: Writing to /etc/ntp.conf - wb: [420] 
2330 bytes
2017-04-26 18:11:45,200 - handlers.py[DEBUG]: finish: 
modules-config/config-ntp: FAIL: running config-ntp with frequency 
once-per-instance
2017-04-26 18:11:45,200 - util.py[WARNING]: Running module ntp () failed
2017-04-26 18:11:45,202 - util.py[DEBUG]: Running module ntp () failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 787, in 
_run_modules
freq=freq)
  File "/usr/lib/python3/dist-packages/cloudinit/cloud.py", line 54, in run
return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/lib/python3/dist-packages/cloudinit/helpers.py", line 187, in run
results = functor(*args)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_ntp.py", line 80, in 
handle
write_ntp_config_template(ntp_cfg, cloud)
  File "/usr/lib/python3/dist-packages/cloudinit/config/cc_ntp.py", line 126, 
in write_ntp_config_template
templater.render_to_file(template_fn, NTP_CONF, params)
  File "/usr/lib/python3/dist-packages/cloudinit/templater.py", line 131, in 
render_to_file
util.write_file(outfn, contents, mode=mode)
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1711, in 
write_file
with open(filename, omode) as fh:
OSError: [Errno 30] Read-only file system: '/etc/ntp.conf'

Note: This doesn't break deployment. Deployment still succeeds, except
that ntp syncing is not set up to point to MAAS.
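For illustration only, a minimal sketch of how the template write could be
guarded on such systems (the signature is simplified and this is not the
actual cloud-init fix; util.system_is_snappy() and templater.render_to_file()
are existing cloud-init helpers, the latter visible in the traceback above):

from cloudinit import templater, util

NTP_CONF = '/etc/ntp.conf'


def write_ntp_config_template(params, template_fn):
    # On Ubuntu Core (snappy) the root filesystem is read-only, so writing
    # /etc/ntp.conf raises OSError(EROFS); skip the write instead of failing
    # the whole config-ntp module.
    if util.system_is_snappy():
        return
    templater.render_to_file(template_fn, NTP_CONF, params)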

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686485

Title:
  cc_ntp fails to work when deploying ubuntu-core

Status in cloud-init:
  New

Bug description:
  When deploying Ubuntu Core with MAAS I am seeing this error in
  /var/log/cloud-init.log:

  2017-04-26 18:11:45,172 - cc_apt_configure.py[DEBUG]: Nothing to do: No apt 
config and running on snappy
  2017-04-26 18:11:45,172 - handlers.py[DEBUG]: finish: 
modules-config/config-apt-configure: SUCCESS: config-apt-configure ran 
successfully
  2017-04-26 18:11:45,172 - stages.py[DEBUG]: Running module ntp () with frequency 
once-per-instance
  2017-04-26 18:11:45,172 - handlers.py[DEBUG]: start: 
modules-config/config-ntp: running config-ntp with frequency once-per-instance
  2017-04-26 18:11:45,173 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/mpcgqp/sem/config_ntp - wb: [420] 24 bytes
  2017-04-26 18:11:45,173 - helpers.py[DEBUG]: Running config-ntp using lock 
()
  2017-04-26 18:11:45,175 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instan

[Yahoo-eng-team] [Bug 1661360] Re: InstanceNotFound due to missing osapi_compute service version when running nova-api under wsgi

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/457283
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d3c084f23448d1890bfda4a06de246f2be3c1279
Submitter: Jenkins
Branch:master

commit d3c084f23448d1890bfda4a06de246f2be3c1279
Author: Chris Dent 
Date:   Mon Apr 17 16:38:49 2017 +

Register osapi_compute when nova-api is wsgi

When the nova-api services starts from its own standalone binary it
registers itself in the services table. The original wsgi script in
nova/wsgi/nova-api.py did not, leading to the bug referenced below.

The new wsgi script at nova.api.openstack.compute.wsgi, modelled on
a similar thing used for the placement API, provides the necessary
service registration.

If a ServiceTooOld exception happens while trying to register the
service then a very simple (currently very stubby) application is
loaded instead of the compute api. This application returns a 500
and a message.

Some caveats/todos:

* wsgi apps managed under mod-wsgi (and presumably other containers)
  are not imported/compiled/run until the first request is made. In
  this case that means the service handling does not happen until
  that first request, somewhat defeating the purpose if the api is a
  bit idle.

Change-Id: I7c4acfaa6c50ac0e4d6de69eb62ec5bbad72ff85
Closes-Bug: #1661360
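For illustration, a generic, self-contained sketch of the pattern the commit
describes (every name below is a hypothetical stand-in, not nova's real
module layout): try to register the service record when the WSGI application
is built, and fall back to a tiny 500-returning application if registration
fails.

class ServiceTooOld(Exception):
    """Stand-in for the exception mentioned in the commit message."""


def register_osapi_compute_service():
    # Placeholder: the real code creates/refreshes the 'osapi_compute' row
    # in the services table, as the standalone nova-api binary does at start.
    pass


def error_application(exc):
    # Minimal WSGI app that reports the registration failure as a 500.
    def app(environ, start_response):
        start_response('500 Internal Server Error',
                       [('Content-Type', 'text/plain')])
        return [('service registration failed: %s' % exc).encode('utf-8')]
    return app


def compute_api_application(environ, start_response):
    # Placeholder for the real nova-api WSGI application.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'nova-api']


def init_application():
    try:
        register_osapi_compute_service()
    except ServiceTooOld as exc:
        return error_application(exc)
    return compute_api_application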


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  InstanceNotFound due to missing osapi_compute service version when
  running nova-api under wsgi

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  Running OpenStack services from master, when we try to run the tempest
  test
  tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others), it always fails with the message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log
  in http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest tries to get the status of the instance right after creating it,
  and the nova server returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
  2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
  api.txt.gz#_2017-02-02_12_58_10_879

  4. Then tempest starts cleaning up the environment, deleting the security
  group, etc...

  We are hitting this with nova from commit
  f40467b0eb2b58a369d24a0e832df1ace6c400c3





  
  Tempest then starts cleaning up the security group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628819] Re: OVS firewall can generate too many flows

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/333804
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=192bc5f1a878b6b0c2211f4421fafacb27c7
Submitter: Jenkins
Branch:master

commit 192bc5f1a878b6b0c2211f4421fafacb27c7
Author: IWAMOTO Toshihiro 
Date:   Fri Jun 24 17:20:36 2016 +0900

Use conjunction for security group rules with remote_group_id

Prior to this commit, the number of flows can be prohibitively large
in some cases.

Closes-bug: #1628819
Change-Id: I194e7f40db840d29af317ddc2e342a1409000151


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628819

Title:
  OVS firewall can generate too many flows

Status in neutron:
  Fix Released

Bug description:
  The firewall code generates O(n^2) flows when a security group rule uses a
  remote_group_id.
  See OVSFirewallDriver.create_rules_generator.

  This can be problematic when a large number of addresses are in a
  security group.
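
  For a rough illustration of the scaling problem and of how OVS conjunctive
  matches address it (the flow entries below are simplified and illustrative,
  not taken from the neutron agent): without conjunction, every combination
  of remote-group member address and rule match needs its own flow, so N
  addresses and M rule matches give N*M entries, while conjunction installs
  each dimension once and combines them via conj_id, giving roughly N + M
  entries.

    # without conjunction: one flow per (address, rule) combination
    ip,nw_src=10.0.0.1,tcp,tp_dst=80  actions=NORMAL
    ip,nw_src=10.0.0.2,tcp,tp_dst=80  actions=NORMAL
    ...

    # with conjunction: one flow per address, one per rule, one per conj_id
    ip,nw_src=10.0.0.1  actions=conjunction(10,1/2)
    ip,nw_src=10.0.0.2  actions=conjunction(10,1/2)
    tcp,tp_dst=80       actions=conjunction(10,2/2)
    conj_id=10,ip       actions=NORMAL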

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502028] Re: cannot attach a volume when using multiple ceph backends

2017-04-26 Thread melanie witt
As mentioned in comment 8, I think this got fixed by
https://review.openstack.org/#/c/389399 in Ocata (15.0.0.0b2).

As for a backport, the best way is to ask in #openstack-nova in IRC or
in a Nova meeting or on the openstack-dev mailing list.

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: nova
 Assignee: Kevin Zhao (kevin-zhao) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1502028

Title:
  cannot attach a volume when using multiple ceph backends

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. Exact version of Nova/OpenStack you are running: Kilo Stable

  2. Relevant log files:

  I'm testing attaching ceph RADOS block devices to VMs; however, I've hit
  an issue when the VM and the volume are on different ceph clusters.

  <--error message-->
  2015-09-24 11:32:31 13083 DEBUG nova.virt.libvirt.config 
[req-b9bbd744-cf75-477b-b6a6-ea5b72f6181f 9504f2c4fe6b4b34a1bb0330f2faba35 
0788824d5d1f46f2b014597ba8dc0585] Generated XML ('\n  \n  \n
\n\n\n  \n  
\n\n  \n  \n  
727c5319-1926-44ac-ba52-de55485faf2b\n\n',)  to_xml 
/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/nova/virt/libvirt/config.py:82
  2015-09-24 11:32:31 13083 ERROR nova.virt.libvirt.driver 
[req-b9bbd744-cf75-477b-b6a6-ea5b72f6181f 9504f2c4fe6b4b34a1bb0330f2faba35 
0788824d5d1f46f2b014597ba8dc0585] [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] Failed to attach volume at mountpoint: 
/dev/vdb
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] Traceback (most recent call last):
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 1092, in attach_volume
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] 
virt_dom.attachDeviceFlags(conf.to_xml(), flags)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/eventlet/tpool.py",
 line 183, in doit
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/eventlet/tpool.py",
 line 141, in proxy_call
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] rv = execute(f, *args, **kwargs)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/eventlet/tpool.py",
 line 122, in execute
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] six.reraise(c, e, tb)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/eventlet/tpool.py",
 line 80, in tworker
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] rv = meth(*args, **kwargs)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]   File 
"/opt/stack/venv/nova-20150831T151915Z/lib/python2.7/site-packages/libvirt.py", 
line 528, in attachDeviceFlags
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] libvirtError: internal error: unable to 
execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't 
find value 'drive-virtio-disk1'
  2015-09-24 11:32:31.923 13083 TRACE nova.virt.libvirt.driver [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165]
  2015-09-24 11:32:31 13083 ERROR nova.virt.block_device 
[req-b9bbd744-cf75-477b-b6a6-ea5b72f6181f 9504f2c4fe6b4b34a1bb0330f2faba35 
0788824d5d1f46f2b014597ba8dc0585] [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] Driver failed to attach volume 
727c5319-1926-44ac-ba52-de55485faf2b at /dev/vdb
  2015-09-24 11:32:31.926 13083 TRACE nova.virt.block_device [instance: 
3aa05494-88ef-44c3-a7ad-705437b5f165] Traceback (most recent call last):
  2015-09-24 11:32:31.926

[Yahoo-eng-team] [Bug 1367899] Re: cloud-init rsyslog config uses deprecated syntax

2017-04-26 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Low

** Also affects: cloud-init (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Zesty)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Medium => Low

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Low

** Changed in: cloud-init (Ubuntu Zesty)
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1367899

Title:
  cloud-init rsyslog config uses deprecated syntax

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init package in Debian:
  Fix Released

Bug description:
  The rsyslog config snippet /etc/rsyslog.d/21-cloudinit.conf ends with the line
  & ~

  As of Trusty (well, after Precise) this syntax is deprecated in the shipped 
rsyslog, resulting in a warning message at rsyslog startup, and should be 
replaced with
  & stop
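
  For illustration, the change amounts to replacing the discard shorthand on
  the snippet's final line (file content abbreviated and approximate; only
  the last line is the point here):

    # /etc/rsyslog.d/21-cloudinit.conf
    :syslogtag, isequal, "[CLOUDINIT]" /var/log/cloud-init.log
    & stop     # formerly "& ~", which newer rsyslog versions flag as deprecated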

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1367899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686449] [NEW] Fix create instances error when selected network has no subnets

2017-04-26 Thread wei.ying
Public bug reported:

Env: devstack master branch

Desc:

When a project has only one available network and that network has no
subnets, the network should not be offered for creating instances.

If this network is used, the API returns:

"Network f1a03328-60d0-4b0e-a4a0-d25ec0d185c4 requires a subnet in order
to boot instances on. (HTTP 400) (Request-ID: req-
e8a03012-6f03-4797-aaf0-0dfa9b434746)"

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686449

Title:
  Fix create instances error when selected network has no subnets

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: devstack master branch

  Desc:

  When a project has only one available network and that network has no
  subnets, the network should not be offered for creating instances.

  If this network is used, the API returns:

  "Network f1a03328-60d0-4b0e-a4a0-d25ec0d185c4 requires a subnet in
  order to boot instances on. (HTTP 400) (Request-ID: req-
  e8a03012-6f03-4797-aaf0-0dfa9b434746)"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1686449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643078] Re: Minimum disk size of an image should be prepopulated on create volume from image modal

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/400515
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=770858e5efa9c2d4bb55d423bb2d4efb1a9a5331
Submitter: Jenkins
Branch:master

commit 770858e5efa9c2d4bb55d423bb2d4efb1a9a5331
Author: Ying Zuo 
Date:   Mon Nov 21 20:51:09 2016 -0800

Pre-populate image size on create volume from image modal

Pre-populate the minimum disk size or the size of the image on
create volume from image modal.

Use the correct volume size unit GiB instead of GB.

Change-Id: I08b0276bec76ce900fd4399356f7f290835998e1
Closes-bug: #1643078


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1643078

Title:
  Minimum disk size of an image should be prepopulated on create volume
  from image modal

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:

  1. Create an image with minimum disk size set as higher than the
  actual image size.

  2. Click the Create Volume action in the action menu of the image
  created in step 1.

  3. Note that the actual size of the image is pre-populated on the Size
  input box, but the minimum disk size set on the image should be pre-
  populated instead.

  Also, the modal should show GiB instead of GB since Cinder uses GiB.
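
  A sketch of the sizing rule described above (not the Horizon source):
  pre-populate the volume size with the larger of the image's min_disk and
  its actual size rounded up to whole GiB.

    import math

    GIB = 1024 ** 3

    def default_volume_size_gib(image_size_bytes, min_disk_gib):
        return max(min_disk_gib, int(math.ceil(image_size_bytes / float(GIB))))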

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1643078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677621] Re: Port update exception on nova unshelve for instance with PCI devices

2017-04-26 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1677621

Title:
  Port update exception on nova unshelve for instance with PCI devices

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Description
  ===
  If an instance with PCI devices (SRIOV, or passthrough) is shelved, a port 
update exception will be seen and the instance will go into Error state when it 
is unshelved.

  The nova API exception message is similar to:

  "Unable to correlate PCI slot :0d:00.1"

  Steps to reproduce
  ==
  1. Launch an instance with SRIOV or PCI passthrough port bindings.

  2. nova shelve 

  -- wait for nova instance status SHELVED_OFFLOADED --

  3. nova unshelve 

  Expected result
  ===
  If there are resources available, the instance should be able to claim PCI 
devices and successfully (re)launch.

  Actual result
  =
  - Instance in error state
  - Exception in nova api logs.

  Environment
  ===
  1. Exact version of OpenStack you are running: Ocata, devstack

  2. Which hypervisor did you use? Libvirt + KVM

  2. Which storage type did you use? LVM

  3. Which networking type did you use? Neutron, OVS

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1677621/+subscriptions



[Yahoo-eng-team] [Bug 1683752] Re: Evacuate API loses the json-schema validation in 2.13

2017-04-26 Thread Matt Riedemann
** Tags added: api evacuate

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => In Progress

** Changed in: nova/ocata
 Assignee: (unassigned) => Alex Xu (xuhj)

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1683752

Title:
  Evacuate API loses the json-schema validation in 2.13

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress
Status in OpenStack Compute (nova) ocata series:
  In Progress

Bug description:
  Evacuate API loses the json-schema validation since the commit
  c01d16e81af6cd9453ffe7133bdc6a4c82e4f6d5

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/evacuate.py?id=c01d16e81af6cd9453ffe7133bdc6a4c82e4f6d5#n80

  @validation.schema(evacuate.evacuate, "2.1", "2.12")
  @validation.schema(evacuate.evacuate_v214, "2.14")
  def _evacuate(self, req, id, body):
  ...

  
  There is a gap between the two validation.schema decorators, so requests at
  microversion 2.13 are not validated.
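
  A minimal sketch (not nova code) of why stacking the two decorators leaves
  2.13 uncovered: the ranges 2.1-2.12 and 2.14+ simply do not include 2.13,
  so a 2.13 request body is never validated.

    def covered(version, ranges):
        # True if the microversion falls inside any (min, max) range.
        return any(lo <= version <= hi for lo, hi in ranges)

    ranges = [((2, 1), (2, 12)),    # validation.schema(evacuate, "2.1", "2.12")
              ((2, 14), (2, 99))]   # validation.schema(evacuate_v214, "2.14");
                                    # (2, 99) stands in for the open-ended bound

    print(covered((2, 12), ranges))  # True
    print(covered((2, 13), ranges))  # False -> request body is not validated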

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1683752/+subscriptions



[Yahoo-eng-team] [Bug 1686431] [NEW] Fix script error in django create instance form.

2017-04-26 Thread wei.ying
Public bug reported:

Env: devstack master branch

Reason: horizon.instances.js is not loaded in _scripts.html

Steps to reproduce:

1. Enable 'LAUNCH_INSTANCE_LEGACY_ENABLED = True' and 
'LAUNCH_INSTANCE_NG_ENABLED = False'
in openstack_dashboard/local/local_settings.py L:244 & L:245

2. Go to Project/Compute/Instances panel

3. Click the Launch Instance button; the following JS error appears in the browser console log.

JS error info:

VM22778:3 Uncaught TypeError: Cannot read property 'workflow_init' of undefined
at eval (eval at  (732ce617825a.js:48), :3:22)
at eval ()
at 732ce617825a.js:48
at Function.globalEval (732ce617825a.js:48)
at jQuery.fn.init.domManip (732ce617825a.js:412)
at jQuery.fn.init.append (732ce617825a.js:396)
at Object.horizon.modals.success (1fb30c7e6805.js:67)
at Object.success (1fb30c7e6805.js:84)
at fire (732ce617825a.js:208)
at Object.fireWith [as resolveWith] (732ce617825a.js:213)

** Affects: horizon
 Importance: Undecided
 Assignee: wei.ying (wei.yy)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => wei.ying (wei.yy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686431

Title:
  Fix script error in django create instance form.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Env: devstack master branch

  Reason: horizon.instances.js is not loaded in _scripts.html

  Steps to reproduce:

  1. Enable 'LAUNCH_INSTANCE_LEGACY_ENABLED = True' and 
'LAUNCH_INSTANCE_NG_ENABLED = False'
  in openstack_dashboard/local/local_settings.py L:244 & L:245

  2. Go to Project/Compute/Instances panel

  3. Click the Launch Instance button; the following JS error appears in
  the browser console log.

  JS error info:

  VM22778:3 Uncaught TypeError: Cannot read property 'workflow_init' of 
undefined
  at eval (eval at  (732ce617825a.js:48), :3:22)
  at eval ()
  at 732ce617825a.js:48
  at Function.globalEval (732ce617825a.js:48)
  at jQuery.fn.init.domManip (732ce617825a.js:412)
  at jQuery.fn.init.append (732ce617825a.js:396)
  at Object.horizon.modals.success (1fb30c7e6805.js:67)
  at Object.success (1fb30c7e6805.js:84)
  at fire (732ce617825a.js:208)
  at Object.fireWith [as resolveWith] (732ce617825a.js:213)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1686431/+subscriptions



[Yahoo-eng-team] [Bug 1617282] Re: functional gate failed with git clone timeout on fetching ovs from github

2017-04-26 Thread James Page
2.5.2 was released to Xenial updates on the 12th April.

Marking this bug as "Fix Released"

** Changed in: openvswitch (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617282

Title:
  functional gate failed with git clone timeout on fetching ovs from
  github

Status in neutron:
  Fix Released
Status in openvswitch package in Ubuntu:
  Fix Released

Bug description:
  http://logs.openstack.org/68/351368/23/check/gate-neutron-dsvm-
  functional/0d68031/console.html

  2016-08-25 10:06:34.915685 | fatal: unable to access 
'https://github.com/openvswitch/ovs.git/': Failed to connect to github.com port 
443: Connection timed out
  2016-08-25 10:06:34.920456 | + functions-common:git_timed:603   :   
[[ 128 -ne 124 ]]
  2016-08-25 10:06:34.921769 | + functions-common:git_timed:604   :   
die 604 'git call failed: [git clone' https://github.com/openvswitch/ovs.git 
'/opt/stack/new/ovs]'
  2016-08-25 10:06:34.922982 | + functions-common:die:186 :   
local exitcode=0
  2016-08-25 10:06:34.924373 | + functions-common:die:187 :   
set +o xtrace
  2016-08-25 10:06:34.924404 | [Call Trace]
  2016-08-25 10:06:34.924430 | 
/opt/stack/new/neutron/neutron/tests/contrib/gate_hook.sh:53:compile_ovs
  2016-08-25 10:06:34.924447 | 
/opt/stack/new/neutron/devstack/lib/ovs:57:git_timed
  2016-08-25 10:06:34.924463 | /opt/stack/new/devstack/functions-common:604:die
  2016-08-25 10:06:34.926689 | [ERROR] 
/opt/stack/new/devstack/functions-common:604 git call failed: [git clone 
https://github.com/openvswitch/ovs.git /opt/stack/new/ovs]

  I guess we should stop pulling OVS from github. Instead, we could use the
  Xenial platform, which already provides ovs == 2.5 from .deb packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1617282/+subscriptions



[Yahoo-eng-team] [Bug 1686405] [NEW] Wrong port status after some operations

2017-04-26 Thread Dong Jun
Public bug reported:

The environment is the OpenStack master branch, all-in-one devstack, with the
openvswitch agent and the dhcp agent.
After some operations, a nova port ends up with the wrong status. The issue can
be reproduced with the following steps:
1. Remove the OVS port from the OVS bridge with the ovs-vsctl del-port command;
the neutron port status changes to DOWN, which is correct. [1]
2. Update the name of the port; the port status changes to ACTIVE, which is
wrong. [2]

It looks like a problem with the DHCP provisioning block: when the port name is
updated, a DHCP provisioning block is added, and the DHCP agent then completes
this provisioning block and marks the port ACTIVE (see the sketch after the
output below).

[1] After step 1:
| admin_state_up    | True                                 |
| binding:host_id   | c4                                   |
| binding:vif_type  | ovs                                  |
| binding:vnic_type | normal                               |
| device_owner      | compute:nova                         |
| id                | cad2e6a0-5bda-4e4d-9232-ba7c06acf28e |
| status            | DOWN                                 |
| updated_at        | 2017-04-26T13:32:22Z                 |

[2] After step 2:
| admin_state_up    | True                                 |
| binding:host_id   | c4                                   |
| binding:vif_type  | ovs                                  |
| binding:vnic_type | normal                               |
| device_owner      | compute:nova                         |
| id                | cad2e6a0-5bda-4e4d-9232-ba7c06acf28e |
| name              | bbc                                  |
| status            | ACTIVE                               |
| updated_at        | 2017-04-26T13:34:39Z                 |
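
A conceptual sketch of the provisioning-block behaviour suspected above (this
is illustrative only, not neutron's actual code): the port goes ACTIVE once
every pending provisioning entity reports completion, so a DHCP block added by
a name-only update lets the DHCP agent flip the port back to ACTIVE even though
the underlying OVS port is gone.

  class Port(object):
      def __init__(self):
          self.status = "DOWN"
          self.pending = set()

      def add_provisioning_block(self, entity):
          self.pending.add(entity)

      def provisioning_complete(self, entity):
          self.pending.discard(entity)
          if not self.pending:
              # Status is derived from provisioning state, not from whether
              # the data-plane port actually exists.
              self.status = "ACTIVE"

  port = Port()
  port.add_provisioning_block("DHCP")   # added by the name-only port update
  port.provisioning_complete("DHCP")    # DHCP agent finishes its work
  print(port.status)                    # ACTIVE, although the port is down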

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686405

Title:
  Wrong port status after some operations

Status in neutron:
  New

Bug description:
  The environment is the OpenStack master branch, all-in-one devstack, with
  the openvswitch agent and the dhcp agent.
  After some operations, a nova port ends up with the wrong status. The issue
  can be reproduced with the following steps:
  1. Remove the OVS port from the OVS bridge with the ovs-vsctl del-port
  command; the neutron port status changes to DOWN, which is correct. [1]
  2. Update the name of the port; the port status changes to ACTIVE, which is
  wrong. [2]

  It looks like a problem with the DHCP provisioning block: when the port
  name is updated, a DHCP provisioning block is added, and the DHCP agent
  then completes this provisioning block and marks the port ACTIVE.

  [1] After step 1:
  | admin_state_up    | True                                 |
  | binding:host_id   | c4                                   |
  | binding:vif_type  | ovs                                  |
  | binding:vnic_type | normal                               |
  | device_owner      | compute:nova                         |
  | id                | cad2e6a0-5bda-4e4d-9232-ba7c06acf28e |
  | status            | DOWN                                 |
  | updated_at        | 2017-04-26T13:32:22Z                 |

  [2] After step 2:
  | admin_state_up    | True                                 |
  | binding:host_id   | c4                                   |

[Yahoo-eng-team] [Bug 1686328] Re: Check QoS policy during net/port binding, to avoid incompatible rules

2017-04-26 Thread Rodolfo Alonso
Implemented in https://review.openstack.org/#/c/426946/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686328

Title:
  Check QoS policy during net/port binding, to avoid incompatible rules

Status in neutron:
  Fix Released

Bug description:
  With https://bugs.launchpad.net/neutron/+bug/1686035, a user will be
  able to create a QoS policy with rules compatible in all loaded
  backends.

  During the port/net binding, this QoS policy (and rules) must be
  checked.

  Because now all QoS rules applied to a port/net will be totally
  compatible with this port/net, the check made in
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L98
  is not necessary.
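
  A hedged sketch of the binding-time check being asked for (backend names and
  rule types below are illustrative assumptions, not neutron's actual API):

    SUPPORTED_RULE_TYPES = {
        "openvswitch": {"bandwidth_limit", "dscp_marking"},
        "sriovnicswitch": {"bandwidth_limit"},
    }

    def check_policy_compatible(rule_types, backend):
        # Reject the binding if the policy carries a rule type the backend
        # cannot enforce.
        unsupported = set(rule_types) - SUPPORTED_RULE_TYPES.get(backend, set())
        if unsupported:
            raise ValueError("QoS rules %s not supported by %s"
                             % (sorted(unsupported), backend))

    check_policy_compatible({"bandwidth_limit"}, "sriovnicswitch")   # passes
    # check_policy_compatible({"dscp_marking"}, "sriovnicswitch")    # would raise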

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686328/+subscriptions



[Yahoo-eng-team] [Bug 1686380] [NEW] the resize of the instance does not change the memory quota usage for vram

2017-04-26 Thread 赵明俊
Public bug reported:

When creating an instance, the memory quota usage includes the video
RAM.

For an instance resize, the difference between the old flavor and the
new flavor is checked and used to update the quota usage, but if the old
flavor and the new flavor define a different video RAM in extra_specs,
the memory quota becomes incorrect.

If a flavor defines 512MB of RAM and 64MB of video RAM, creating an
instance takes up 576MB of memory quota; when resizing this instance
to a flavor that defines 1024MB of RAM but 0MB of video RAM, it takes up
1088MB of memory quota instead of 1024MB, and this 64MB of memory quota
usage is never released.

This bug is the same as https://launchpad.net/bugs/1681989 , but this one is
for resize and that one is for delete.
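
A worked version of the arithmetic above (illustrative only):

  old_ram, old_vram = 512, 64      # MB, flavor used at boot
  new_ram, new_vram = 1024, 0      # MB, flavor used for the resize

  usage_at_boot = old_ram + old_vram     # 576 MB charged against the quota
  delta = new_ram - old_ram              # resize only compares flavor RAM
  usage_after_resize = usage_at_boot + delta
  print(usage_after_resize)              # 1088 MB, expected 1024 MB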

** Affects: nova
 Importance: Undecided
 Assignee: 赵明俊 (falseuser)
 Status: New


** Tags: quotas

** Description changed:

+ When creating an instance, the memory quota is used to contain video
+ ram.
+ 
  For an instance resize, the difference between the old flavor and the
  new flavor will be check, and update the quota usage, but if the old
  flavor and the new flavor define a different video ram in extra_specs,
  it will cause the memory quota to be incorrect.
  
  If a flavor defines 512MB of RAM and 64MB of video RAM, creating an
  instance will take up 576MB of memory quota, when resize this instance
  to a flavor defined 1024MB of RAM but 0MB of video RAM, it will take up
  1088MB of memory quota, not 1024MB, and this 64MB memory quota usage
  will never be released.
  
  This bug as same as https://launchpad.net/bugs/1681989 , but this for
  resize and that for delete.

** Changed in: nova
 Assignee: (unassigned) => 赵明俊 (falseuser)

** Tags added: quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1686380

Title:
  the resize of the instance does not change the memory quota usage for
  vram

Status in OpenStack Compute (nova):
  New

Bug description:
  When creating an instance, the memory quota usage includes the video
  RAM.

  For an instance resize, the difference between the old flavor and the
  new flavor is checked and used to update the quota usage, but if the old
  flavor and the new flavor define a different video RAM in extra_specs,
  the memory quota becomes incorrect.

  If a flavor defines 512MB of RAM and 64MB of video RAM, creating an
  instance takes up 576MB of memory quota; when resizing this instance
  to a flavor that defines 1024MB of RAM but 0MB of video RAM, it takes
  up 1088MB of memory quota instead of 1024MB, and this 64MB of memory
  quota usage is never released.

  This bug is the same as https://launchpad.net/bugs/1681989 , but this one
  is for resize and that one is for delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1686380/+subscriptions



[Yahoo-eng-team] [Bug 1681843] Re: Nova-placement returns "ValueError: invalid literal for int() with base 10: ''"

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/455710
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d8be75b3b8d80598dd6375a8c029fe9906e00b9f
Submitter: Jenkins
Branch:master

commit d8be75b3b8d80598dd6375a8c029fe9906e00b9f
Author: Andy McCrae 
Date:   Tue Apr 11 14:19:44 2017 +0100

Allow CONTENT_LENGTH to be present but empty

The CONTENT_LENGTH environ can be present, but empty, which returns
None, and causes a ValueError when attempting to use .int().

This patch removes the setting of CONTENT_LENGTH to an integer, but
instead ensures that if CONTENT_LENGTH is not empty it is an integer, to
prevent a situation where a bogus "CONTENT_LENGTH" header is specified.

Additionally, as the CONTENT_TYPE environ can similarly be present but
empty, we should .get() it in a similar fashion to ensure it isn't
present but None when CONTENT_LENGTH is specified.

Change-Id: I66b6f9afbea8bf037997a59ba0b976f83c9825fb
Closes-Bug: #1681843


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1681843

Title:
  Nova-placement returns "ValueError: invalid literal for int() with
  base 10: ''"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Since
  
https://github.com/openstack/nova/commit/6dd047a3307a1056077608fd5bc2d1c3b3285338
  we're seeing errors for Nova-placement service in the OpenStack-
  Ansible gate jobs:

  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack 
[req-a9cb7079-b603-4c2c-9f99-8bc4293f9700 78e46f12329c4f71a7f6b97aa3a7eb57 
f7d6ca89f4fb446eaebef93f7f235a50 - default default] Caught error: invalid 
literal for int() with base 10: ''
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack Traceback (most recent 
call last):
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/nova/api/openstack/__init__.py",
 line 85, in __call__
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack return 
req.get_response(self.application)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/request.py", 
line 1316, in send
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/request.py", 
line 1280, in call_application
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/dec.py", line 
131, in __call__
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/dec.py", line 
196, in call_func
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/nova/api/openstack/placement/microversion.py",
 line 123, in __call__
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack response = 
req.get_response(self.application)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/request.py", 
line 1316, in send
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/webob/request.py", 
line 1280, in call_application
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack   File 
"/openstack/venvs/nova-master/lib/python2.7/site-packages/nova/api/openstack/placement/handler.py",
 line 182, in __call__
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack if 
int(environ.get('CONTENT_LENGTH', 0)):
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack ValueError: invalid 
literal for int() with base 10: ''
  2017-04-10 17:43:45.761 2477 ERROR nova.api.openstack 

  

  The error occurs because "CONTENT_LENGTH" is present but set to '', and
  the change attempts to convert that empty value to an int, which fails.

  We're using Nginx /w uWSGI and I believe a default uwsgi param is set
  to always send a CONTENT_LENGTH and CONTENT_TYPE header even when
  those are empty.
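
  A minimal sketch of the safer handling described in the fix above
  (illustrative only, not the exact placement code):

    def has_body(environ):
        # CONTENT_LENGTH may be missing, or present but set to '' (as uWSGI
        # behind Nginx can do); only a non-empty value is treated as a length.
        length = environ.get('CONTENT_LENGTH')
        return bool(length) and int(length) > 0

    print(has_body({'CONTENT_LENGTH': ''}))    # False, no ValueError
    print(has_body({'CONTENT_LENGTH': '42'}))  # True
    print(has_body({}))                        # False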

To manage notifications about this bug go to:
https://bugs.launch

[Yahoo-eng-team] [Bug 1685185] Re: disconnect_volume not called when rebase failures are encountered during swap_volume

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/458807
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=809065485c19fd535db6740bb21b867c41c008fe
Submitter: Jenkins
Branch:master

commit 809065485c19fd535db6740bb21b867c41c008fe
Author: Lee Yarwood 
Date:   Thu Apr 20 19:43:32 2017 +0100

libvirt: Always disconnect_volume after rebase failures

Previously failures when rebasing onto a new volume would leave the
volume connected to the compute host. For some volume backends such as
iSCSI the subsequent call to terminate_connection would then result in
leftover devices remaining on the host.

This change simply catches any error associated with the rebase and
ensures that disconnect_volume is called for the new volume prior to
terminate_connection finally being called.

Change-Id: I5997000a0ba6341c4987405cdc0760c3b471bd59
Closes-bug: #1685185
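
A hedged sketch of the pattern the commit describes (function names are
placeholders, not nova's exact signatures): if the rebase onto the new volume
fails, disconnect that volume from the host before re-raising, so the later
terminate_connection call does not leave stale devices behind.

  def swap_volume(rebase, disconnect_volume, new_connection_info):
      try:
          rebase()
      except Exception:
          # Roll back: detach the new volume from the host before the error
          # propagates and terminate_connection runs.
          disconnect_volume(new_connection_info)
          raise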


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1685185

Title:
  disconnect_volume not called when rebase failures are encountered
  during swap_volume

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  At present disconnect_volume is not called when rebase failures are
  encountered during swap_volume. This results in the new volume
  remaining connected to the compute host prior to terminate_connection
  then being called. This can in turn lead to left over devices
  remaining on the compute host for some volume backends such as
  LVM/iSCSI.

  Steps to reproduce
  ==

  Downstream, the easiest way to reproduce this is via
  https://bugzilla.redhat.com/show_bug.cgi?id=1401755 :

  # sudo setenforce 1
  # nova update-volume ${instance_uuid} \
   ${attached_NFS_volume_id} \
   ${unattached_iSCSI_volume_id}

  Upstream, I've been unable to get devstack to even work correctly with
  SELinux in enforcing mode so I've been unable to reproduce the
  rollback this way.

  Expected result
  ===

  New volume disconnected from compute host.

  Actual result
  =

  New volume remains connected to compute host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 N/A

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 NFS + LVM/iSCSI

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

  Example Newton trace :

  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher 
[req-f31f6110-e880-41e8-be1a-2c41ea7fd9ce 75fbc7a6db34480091d2a53e2e20b695 
62e53e5e804e49a9890928a5a4846f60 - - -] Exception during message handling: 
internal error: unable to execute QEMU command 'drive-mirror': Could not open 
'/dev/disk/by-id/dm-uuid-mpath-360014053aac4f90daef4d76baa773169': Permission 
denied
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 110, in wrapped
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher payload)
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher 
self.force_reraise()
  2017-04-12 09:37:53.744 3077 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-04-12 09:37:53.744 307

[Yahoo-eng-team] [Bug 1673411] Re: config-drive support is broken

2017-04-26 Thread James Page
Marking artful and zesty tasks as fix-released as 15.0.1 of nova-lxd
contains the required fixes for this new way of doing config-drive.

Yakkety is still pending acceptance of 14.2.2 which has the same config-
drive fixes.

** Changed in: nova-lxd (Ubuntu)
   Status: Fix Committed => Fix Released

** Changed in: nova-lxd (Ubuntu Zesty)
   Status: Fix Committed => Fix Released

** Changed in: cloud-archive/ocata
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1673411

Title:
  config-drive support is broken

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Triaged
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in cloud-init:
  Fix Committed
Status in nova-lxd:
  Fix Released
Status in nova-lxd newton series:
  Fix Committed
Status in nova-lxd ocata series:
  Fix Committed
Status in nova-lxd trunk series:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in nova-lxd package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in nova-lxd source package in Xenial:
  Invalid
Status in cloud-init source package in Yakkety:
  Fix Released
Status in nova-lxd source package in Yakkety:
  Triaged
Status in cloud-init source package in Zesty:
  Fix Released
Status in nova-lxd source package in Zesty:
  Fix Released

Bug description:
  === Begin cloud-init SRU Template ===
  [Impact]
  nova-lxd can provide data to instances in 2 ways:
   a.) metadata service
   b.) config drive

  The support for reading the config drive in cloud-init was never
  functional.  Nova-lxd has changed the way they're presenting the config
  drive to the guest.  Now they are doing so by populating a directory in
  the container /config-drive with the information.
  The change added to cloud-init was to extend support read config drive
  information from that directory.

  [Test Case]
  With a nova-lxd that contains the fix this can be fully tested
  by launching an instance with updated cloud-init and config drive
  attached.

  For cloud-init, the easiest way to demonstrate this is to
  create a lxc container and populate it with a '/config-drive'.

  lxc-proposed-snapshot is
    
https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bin/lxc-proposed-snapshot
  It publishes an image to lxd with proposed enabled and cloud-init upgraded.

  $ release=xenial
  $ ref=xenial-proposed
  $ name=$release-lp1673411
  $ lxc-proposed-snapshot --proposed --publish $release $ref
  $ lxc init $ref $name

  # lxc will create the 'NoCloud' seed, and the normal search
  # path looks there first, so remove it.

  $ lxc file pull $name/etc/cloud/cloud.cfg.d/90_dpkg.cfg - |
  sed 's/NoCloud, //' |
  lxc file push - $name/etc/cloud/cloud.cfg.d/90_dpkg.cfg

  ## populate a /config-drive with attached 'make-config-drive-dir'
  ## and push it to the container

  $ d=$(mktemp -d)
  $ make-config-drive-dir "$d" "$name"
  $ rm -Rf "$d"

  ## start it and look around
  $ lxc start $name
  $ sleep 10
  $ lxc exec $name cat /run/cloud-init/result.json
  {
   "v1": {
    "datasource": "DataSourceConfigDrive [net,ver=2][source=/config-drive]",
    "errors": []
   }
  }

  [Regression Potential]
  There is a potential false positive where a user had data in
  /config-drive and now that information is read as config drive data.

  That would require a directory tree like:
    /config-drive/openstack/2???-??-??/meta_data.json
  or
    /config-drive/openstack/latest/meta_data.json

  Which seems like a small likelihood of a non-contrived hit.

  [Other Info]
  Upstream commit:
   https://git.launchpad.net/cloud-init/commit/?id=443095f4d4b6fe

  === End cloud-init SRU Template ===

  After reviewing https://review.openstack.org/#/c/445579/ and doing
  some testing, it would appear that the config-drive support in the
  nova-lxd driver is not functional.

  cloud-init ignores the data presented in /var/lib/cloud/data and reads
  from the network accessible metadata-service.

  To test this effectively you have to have a fully offline instance
  (i.e. no metadata service access).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1673411/+subscriptions



[Yahoo-eng-team] [Bug 1684604] Re: Live migrate is enabled even there are no other hosts available.

2017-04-26 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/458457
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=9d9a2f6c1b42f9366fd99ad8aac8deab67b0a21a
Submitter: Jenkins
Branch:master

commit 9d9a2f6c1b42f9366fd99ad8aac8deab67b0a21a
Author: zhangdebo 
Date:   Thu Apr 20 04:14:06 2017 -0700

Check the target host before live migrating a instance

In the live migrate form, the selections "Automatically schedule
new host." and "No other hosts available" both use an empty value,
and this input is not required, that causes I can submit the
form even there are no other hosts available.

I think the value of selection "Automatically schedule new host."
should not be empty, and this input should be required.

Change-Id: I6a88ffa23087a0d845cf8a71a6359f3f4fbddbe0
Closes-Bug: #1684604


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1684604

Title:
  Live migrate is enabled even there are no other hosts  available.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the live migrate form, the selections "Automatically schedule new
  host." and "No other hosts available" use the same value, and this input
  is not required, so the form can be submitted even when there are no
  other hosts available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1684604/+subscriptions



[Yahoo-eng-team] [Bug 1686356] [NEW] Cannot update quota healthmonitor with tenant_id

2017-04-26 Thread yangyide
Public bug reported:

Version is stable/mitaka.

I updated the healthmonitor quota with a tenant_id via the command line.

Like this:  neutron quota-update  --healthmonitor 100 --tenant_id
2edf976f0f274f22a44d18916d6123a6

I found a record in the neutron quotas table whose tenant_id field is 100 and
whose limit field is 1.

It seems that the other quotas are OK; only the healthmonitor one is wrong.

Below is the terminal output.

[root@node ~]# neutron quota-update  --healthmonitor 100 --tenant_id 
2edf976f0f274f22a44d18916d6123a6
+-+---+
| Field   | Value |
+-+---+
| floatingip  | 50|
| healthmonitor   | 1 |
| l7policy| -1|
| listener| -1|
| loadbalancer| 10|
| network | 10|
| pool| 10|
| port| 50|
| rbac_policy | 10|
| router  | 10|
| security_group  | 10|
| security_group_rule | 100   |
| subnet  | 10|
| subnetpool  | -1|
+-+---+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686356

Title:
  Cannot update quota healthmonitor with tenant_id

Status in neutron:
  New

Bug description:
  Version is stable/mitaka.

  I updated the healthmonitor quota with a tenant_id via the command line.

  Like this:  neutron quota-update  --healthmonitor 100 --tenant_id
  2edf976f0f274f22a44d18916d6123a6

  I found a record in the neutron quotas table whose tenant_id field is 100
  and whose limit field is 1.

  It seems that the other quotas are OK; only the healthmonitor one is wrong.

  Below is the terminal output.

  [root@node ~]# neutron quota-update  --healthmonitor 100 --tenant_id 
2edf976f0f274f22a44d18916d6123a6
  +-+---+
  | Field   | Value |
  +-+---+
  | floatingip  | 50|
  | healthmonitor   | 1 |
  | l7policy| -1|
  | listener| -1|
  | loadbalancer| 10|
  | network | 10|
  | pool| 10|
  | port| 50|
  | rbac_policy | 10|
  | router  | 10|
  | security_group  | 10|
  | security_group_rule | 100   |
  | subnet  | 10|
  | subnetpool  | -1|
  +-+---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686356/+subscriptions



[Yahoo-eng-team] [Bug 1686328] [NEW] Check QoS policy during net/port binding, to avoid incompatible rules

2017-04-26 Thread Rodolfo Alonso
Public bug reported:

With https://bugs.launchpad.net/neutron/+bug/1686035, a user will be
able to create a QoS policy with rules compatible in all loaded
backends.

During the port/net binding, this QoS policy (and rules) must be
checked.

Because now all QoS rules applied to a port/net will be totally
compatible with this port/net, the check made in
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L98
is not necessary.

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New


** Tags: qos

** Tags added: qos

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686328

Title:
  Check QoS policy during net/port binding, to avoid incompatible rules

Status in neutron:
  New

Bug description:
  With https://bugs.launchpad.net/neutron/+bug/1686035, a user will be
  able to create a QoS policy with rules compatible in all loaded
  backends.

  During the port/net binding, this QoS policy (and rules) must be
  checked.

  Because now all QoS rules applied to a port/net will be totally
  compatible with this port/net, the check made in
  
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/extensions/qos.py#L98
  is not necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686328/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686315] [NEW] Loading all the time on Firefox 52

2017-04-26 Thread Jiang
Public bug reported:

OpenStack Ocata. First select a panel, such as
Project/Compute/Instances. Then select another panel, such as
Project/Compute/Images. Finally, go back to the previous page by clicking
Firefox's back button (on the top left); the page will show "Loading"
indefinitely.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1686315

Title:
  Loading all the time on Firefox 52

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  OpenStack Ocata. First select a panel, such as
  Project/Compute/Instances. Then select another panel, such as
  Project/Compute/Images. Finally, go back to the previous page by clicking
  Firefox's back button (on the top left); the page will show "Loading"
  indefinitely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1686315/+subscriptions
