[Yahoo-eng-team] [Bug 1892837] Re: tempest.api.compute.servers.test_servers_negative fails in periodic-tripleo-ci-centos-8-standalone-full-tempest-api-master

2020-11-19 Thread wes hayutin
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1892837

Title:
  tempest.api.compute.servers.test_servers_negative fails in periodic-
  tripleo-ci-centos-8-standalone-full-tempest-api-master

Status in OpenStack Compute (nova):
  Confirmed
Status in tempest:
  Fix Committed
Status in tripleo:
  Fix Released

Bug description:
  https://logserver.rdoproject.org/openstack-periodic-integration-main/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-api-master/f9602c8/logs/undercloud/var/log/tempest/stestr_results.html.gz

  ft1.1: setUpClass (tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON)
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/usr/lib/python3.6/site-packages/tempest/test.py", line 188, in setUpClass
      six.reraise(etype, value, trace)
    File "/usr/local/lib/python3.6/site-packages/six.py", line 703, in reraise
      raise value
    File "/usr/lib/python3.6/site-packages/tempest/test.py", line 181, in setUpClass
      cls.resource_setup()
    File "/usr/lib/python3.6/site-packages/tempest/api/compute/servers/test_servers_negative.py", line 64, in resource_setup
      cls.client.delete_server(server['id'])
    File "/usr/lib/python3.6/site-packages/tempest/lib/services/compute/servers_client.py", line 158, in delete_server
      resp, body = self.delete("servers/%s" % server_id)
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 329, in delete
      return self.request('DELETE', url, extra_headers, headers, body)
    File "/usr/lib/python3.6/site-packages/tempest/lib/services/compute/base_compute_client.py", line 48, in request
      method, url, extra_headers, headers, body, chunked)
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 702, in request
      self._error_checker(resp, resp_body)
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", line 879, in _error_checker
      message=message)
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  

  
  This is a real bug; someone from the nova team needs to take a look.
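
  For reference, a minimal sketch (assuming only the tempest client API shown
  in the traceback, and not the actual tempest fix) of how the failing cleanup
  call in resource_setup() could tolerate a transient ServerFault; the
  delete_server_with_retry helper and its parameters are hypothetical:

    import time

    from tempest.lib import exceptions as lib_exc

    def delete_server_with_retry(client, server_id, attempts=3, delay=5):
        """Delete a server, retrying if the compute API raises ServerFault."""
        for attempt in range(1, attempts + 1):
            try:
                client.delete_server(server_id)   # DELETE /servers/<id>
                return
            except lib_exc.NotFound:
                return                            # already gone, nothing to do
            except lib_exc.ServerFault:
                if attempt == attempts:
                    raise                         # give up, surface the 500
                time.sleep(delay)                 # transient fault, retry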

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1892837/+subscriptions



[Yahoo-eng-team] [Bug 1881624] Re: full-tempest-scenario-master failing on test_ipv6_hotplug

2020-07-07 Thread wes hayutin
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881624

Title:
  full-tempest-scenario-master failing on test_ipv6_hotplug

Status in neutron:
  New
Status in tripleo:
  Triaged

Bug description:
  https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-master&job_name=periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master

  test_ipv6_hotplug_dhcpv6stateless[id-9aaedbc4-986d-42d5-9177-3e721728e7e0]
  test_ipv6_hotplug_slaac[id-b13e5408-5250-4a42-8e46-6996ce613e91]


  Traceback (most recent call last):
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py", line 78, in wait_until_true
      eventlet.sleep(sleep)
    File "/usr/lib/python3.6/site-packages/eventlet/greenthread.py", line 36, in sleep
      hub.switch()
    File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in switch
      return self.greenlet.switch()
  eventlet.timeout.Timeout: 120 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py", line 168, in test_ipv6_hotplug_slaac
      self._test_ipv6_hotplug("slaac", "slaac")
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py", line 153, in _test_ipv6_hotplug
      self._test_ipv6_address_configured(ssh_client, vm, ipv6_port)
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py", line 121, in _test_ipv6_address_configured
      "the VM {!r}.".format(ipv6_address, vm['id'])))
    File "/usr/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py", line 82, in wait_until_true
      raise exception
  RuntimeError: Timed out waiting for IP address '2001:db8:0:2:f816:3eff:fe44:681f' to be configured in the VM '52e65f9d-f7fe-4198-b6cd-42c7fff3caec'.
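
  For context, a plain-Python approximation of the wait_until_true() polling
  pattern the traceback shows (the real helper in neutron_tempest_plugin is
  eventlet-based); the commented usage with ssh_client and 'ip -6 addr show'
  is an assumption drawn from the test name, not the plugin's exact code:

    import time

    def wait_until_true(predicate, timeout=120, sleep=5, exception=None):
        """Poll predicate() until it returns True or timeout seconds pass."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if predicate():
                return
            time.sleep(sleep)
        # On timeout the caller's exception is raised, which is where the
        # "Timed out waiting for IP address ..." RuntimeError above comes from.
        raise exception or RuntimeError('Timed out after %s seconds' % timeout)

    # The failing check effectively waits for the hot-plugged IPv6 address to
    # show up inside the guest, roughly:
    # wait_until_true(
    #     lambda: ipv6_address in ssh_client.exec_command('ip -6 addr show'),
    #     timeout=120,
    #     exception=RuntimeError('IP address not configured in the VM'))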


  
  
---

  failing consistently on the last 6 periodic runs:

  
  
https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/7568508/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/0b9fd16/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/7902c02/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/e20fa1e/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/837d9b7/logs/undercloud/var/log/tempest/stestr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881624/+subscriptions



[Yahoo-eng-team] [Bug 1753209] Re: neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest, rbac policy in use across tenants.

2019-11-27 Thread wes hayutin
** Changed in: tripleo
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1753209

Title:
  
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest,
  rbac policy in use across tenants.

Status in neutron:
  Fix Released
Status in tempest:
  New
Status in tripleo:
  Fix Released

Bug description:
  
neutron_tempest_plugin.api.admin.test_shared_network_extension.RBACSharedNetworksTest failure

  https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-master/6cec620/tempest.html.gz

  Details: {u'message': u'RBAC policy on object 3cfbd0a7-84f2-4e3f-917e-bf51b5995e20 cannot be removed because other objects depend on it.\n
  Details: Callback neutron.plugins.ml2.plugin.Ml2Plugin.validate_network_rbac_policy_change--9223372036850840529 failed with "Unable to reconfigure sharing settings for network 3cfbd0a7-84f2-4e3f-917e-bf51b5995e20. Multiple tenants are using it.",
  Callback neutron.services.network_ip_availability.plugin.NetworkIPAvailabilityPlugin.validate_network_rbac_policy_change--9223372036853400817 failed with "Unable to reconfigure sharing settings for network 3cfbd0a7-84f2-4e3f-917e-bf51b5995e20. Multiple tenants are using it.",
  Callback neutron.services.network_ip_availability.plugin.NetworkIPAvailabilityPlugin.validate_network_rbac_policy_change--9223372036853463713 failed with "Unable to reconfigure sharing settings for network 3cfbd0a7-84f2-4e3f-917e-bf51b5995e20. Multiple tenants are using it."',
  u'type': u'RbacPolicyInUse', u'detail': u''}
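
  For illustration, a hedged sketch of the API sequence that produces
  RbacPolicyInUse: share a network to a second project via an RBAC policy,
  let that project attach a port, then try to delete the policy. The endpoint
  paths are Neutron's documented REST API; NEUTRON_URL, the token and the
  UUIDs are placeholders:

    import requests

    NEUTRON_URL = 'http://controller:9696/v2.0'   # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN'}     # placeholder token

    # 1. Share an existing network with another project.
    rbac = requests.post(
        NEUTRON_URL + '/rbac-policies', headers=HEADERS,
        json={'rbac_policy': {'object_type': 'network',
                              'object_id': '<network-uuid>',
                              'action': 'access_as_shared',
                              'target_tenant': '<other-project-id>'}},
    ).json()['rbac_policy']

    # 2. The target project now creates a port (or subnet) on that network
    #    using its own credentials.

    # 3. Deleting the policy while the other project still uses the network
    #    is rejected, which is the RbacPolicyInUse error quoted above.
    resp = requests.delete(
        NEUTRON_URL + '/rbac-policies/' + rbac['id'], headers=HEADERS)
    print(resp.status_code)   # 409 Conflict while the network is in use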

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1753209/+subscriptions



[Yahoo-eng-team] [Bug 1824315] Re: periodic fedora28 standalone job failing at test_volume_boot_pattern

2019-05-15 Thread wes hayutin
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1824315

Title:
  periodic fedora28 standalone job failing at test_volume_boot_pattern

Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Invalid

Bug description:
  From tempest

  
http://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/periodic-tripleo-ci-fedora-28-standalone-master/04caef1/logs/tempest.html.gz
      raise value
    File "/usr/lib/python3.6/site-packages/tempest/common/compute.py", line 236, in create_test_server
      clients.servers_client, server['id'], wait_until)
    File "/usr/lib/python3.6/site-packages/tempest/common/waiters.py", line 76, in wait_for_server_status
      server_id=server_id)
  tempest.exceptions.BuildErrorException: Server aa994612-f431-4e90-98ea-a993b5c1ab5c failed to build and is in ERROR status
  Details: {'code': 500, 'created': '2019-04-11T07:29:27Z', 'message': 'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance aa994612-f431-4e90-98ea-a993b5c1ab5c.'}

  And from nova-compute

  
http://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/periodic-tripleo-ci-fedora-28-standalone-master/04caef1/logs/undercloud/var/log/containers/nova/nova-compute.log.txt.gz
  
  : libvirt.libvirtError: internal error: Unable to add port tap02d12a34-c4 to OVS bridge br-int
  2019-04-11 07:29:22.833 9 ERROR nova.virt.libvirt.driver [req-d91a391f-1040-4522-b6b2-9720e72fdfdb 492b60374c184ae794fc48e140e80da4 f3efacfd7e50440882a661dc5987d8f4 - default default] [instance: aa994612-f431-4e90-98ea-a993b5c1ab5c] Failed to start libvirt guest: libvirt.libvirtError: internal error: Unable to add port tap02d12a34-c4 to OVS bridge br-int
  2019-04-11 07:29:22.834 9 DEBUG nova.virt.libvirt.vif [req-d91a391f-1040-4522-b6b2-9720e72fdfdb 492b60374c184ae794fc48e140e80da4 f3efacfd7e50440882a661dc5987d8f4 - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2019-04-11T07:29:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2018261125',display_name='tempest-

  at "oc" ovs

  http://logs.rdoproject.org/openstack-periodic/git.openstack.org
  /openstack-infra/tripleo-ci/master/periodic-tripleo-ci-fedora-28
  -standalone-
  master/04caef1/logs/undercloud/var/log/containers/openvswitch/ovn-
  controller.log.txt.gz

  2019-04-11T07:28:48.414Z|00153|pinctrl|INFO|DHCPOFFER fa:16:3e:27:14:2d 10.100.0.4
  2019-04-11T07:28:48.421Z|00154|pinctrl|INFO|DHCPACK fa:16:3e:27:14:2d 10.100.0.4
  2019-04-11T07:28:58.874Z|00155|binding|INFO|Releasing lport a9085a12-aa3d-4345-9ce2-ba774084b5aa from this chassis.
  2019-04-11T07:29:02.342Z|00156|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connection closed by peer
  2019-04-11T07:29:02.342Z|00157|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connection closed by peer
  2019-04-11T07:29:03.197Z|00158|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
  2019-04-11T07:29:03.197Z|00159|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
  2019-04-11T07:29:03.197Z|00160|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 2 seconds before reconnect
  2019-04-11T07:29:03.197Z|00161|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
  2019-04-11T07:29:03.197Z|00162|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
  2019-04-11T07:29:03.197Z|00163|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 2 seconds before reconnect
  2019-04-11T07:29:05.198Z|00164|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
  201
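
  As a diagnostic aid (not part of any fix), a small sketch for checking the
  state br-int was in when libvirt failed to plug the tap device; ovs-vsctl
  and its br-exists/list-ports subcommands are the standard Open vSwitch CLI,
  while the wrapper itself is illustrative:

    import subprocess

    def ovs_vsctl(*args):
        """Run an ovs-vsctl subcommand and return the completed process."""
        return subprocess.run(['ovs-vsctl', *args],
                              capture_output=True, text=True, check=False)

    bridge = 'br-int'
    if ovs_vsctl('br-exists', bridge).returncode != 0:   # exit status 2 if missing
        print('%s does not exist on this host' % bridge)
    else:
        ports = ovs_vsctl('list-ports', bridge).stdout   # tap devices plugged so far
        print('ports on %s:\n%s' % (bridge, ports))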

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1824315/+subscriptions



[Yahoo-eng-team] [Bug 1824315] Re: periodic fedora28 standalone job failing at test_volume_boot_pattern

2019-05-15 Thread wes hayutin
** Changed in: tripleo
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1824315

Title:
  periodic fedora28 standalone job failing at test_volume_boot_pattern

Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Invalid


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1824315/+subscriptions



[Yahoo-eng-team] [Bug 1802971] Re: tempest volume_boot_pattern and basic_ops running concurrently causing timeouts

2019-04-10 Thread wes hayutin
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1802971

Title:
  tempest volume_boot_pattern and basic_ops running concurrently causing
  timeouts

Status in neutron:
  In Progress
Status in tripleo:
  Fix Released

Bug description:
  http://logs.openstack.org/03/616203/9/gate/tripleo-ci-centos-7-standalone/75f1be5/logs/stackviz/#/testrepository.subunit/timeline?test=tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_port_security_macspoofing_port

  Fails in the gate from time to time; a logstash query to track it is being developed at the moment.

  http://logs.openstack.org/03/616203/9/gate/tripleo-ci-centos-7-standalone/75f1be5/job-output.txt.gz#_2018-11-12_17_41_51_654356

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1802971/+subscriptions



[Yahoo-eng-team] [Bug 1813224] Re: fedora28 standalone failing on tempest

2019-03-18 Thread wes hayutin
http://zuul.openstack.org/builds?job_name=tripleo-ci-fedora-28-standalone
closing

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813224

Title:
  fedora28 standalone failing on tempest

Status in neutron:
  New
Status in tripleo:
  Fix Released

Bug description:
  The fedora28 tempest jobs are failing in check (it's voting but not in
  the gate).

  tempest.scenario.test_network_basic_ops.TestNetworkBasicOps
  tempest.scenario.test_server_basic_ops.TestServerBasicOps 
  tempest.scenario.test_minimum_basic.TestMinimumBasicScenario

  
  
http://logs.openstack.org/31/626631/11/check/tripleo-ci-fedora-28-standalone/cf314a4/logs/undercloud/home/zuul/tempest/tempest.html.gz

  http://logs.openstack.org/53/623353/10/check/tripleo-ci-fedora-28-standalone/0841969/logs/tempest.html

  
  
http://logs.openstack.org/56/593056/43/check/tripleo-ci-fedora-28-standalone/106db25/logs/tempest.html

  
  
http://logs.openstack.org/97/631297/3/check/tripleo-ci-fedora-28-standalone/7fe7dc1/logs/tempest.html

  sova is reporting this job to be <80% passing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813224/+subscriptions



[Yahoo-eng-team] [Bug 1787910] Re: OVB overcloud deploy fails on nova placement errors

2018-09-11 Thread wes hayutin
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1787910

Title:
  OVB overcloud deploy fails on nova placement errors

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in tripleo:
  Fix Released

Bug description:
  https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/1544941/logs/undercloud/var/log/extra/errors.txt.gz#_2018-08-20_01_49_09_830

  https://logs.rdoproject.org/openstack-periodic/git.openstack.org/openstack-infra/tripleo-ci/master/legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/1544941/logs/undercloud/var/log/extra/docker/containers/nova_placement/log/nova/nova-compute.log.txt.gz?level=ERROR#_2018-08-20_01_49_09_830

  ERROR nova.scheduler.client.report [req-a8752223-5d75-4fa2-9668-7c024d166f09 - - - - -] [req-561538c7-b837-448b-b25e-38a3505ab2e5]
  Failed to update inventory to [{u'CUSTOM_BAREMETAL': {'allocation_ratio': 1.0, 'total': 1, 'reserved': 1, 'step_size': 1, 'min_unit': 1, 'max_unit': 1}}]
  for resource provider with UUID 3ee26a05-944b-42ba-b74d-42aa2fda5d73.
  Got 400: {"errors": [{"status": 400, "request_id": "req-561538c7-b837-448b-b25e-38a3505ab2e5",
  "detail": "The server could not comply with the request since it is either malformed or otherwise incorrect.\n\n
  Unable to update inventory for resource provider 3ee26a05-944b-42ba-b74d-42aa2fda5d73:
  Invalid inventory for 'CUSTOM_BAREMETAL' on resource provider '3ee26a05-944b-42ba-b74d-42aa2fda5d73'.
  The reserved value is greater than or equal to total.", "title": "Bad Request"}]}
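
  A minimal illustration (not placement's actual code) of why that payload is
  rejected: the compute node reported total=1 and reserved=1 for
  CUSTOM_BAREMETAL, and the placement API at the time required reserved to be
  strictly less than total, so the inventory update fails with 400:

    inventory = {'CUSTOM_BAREMETAL': {'allocation_ratio': 1.0, 'total': 1,
                                      'reserved': 1, 'step_size': 1,
                                      'min_unit': 1, 'max_unit': 1}}

    for rc, inv in inventory.items():
        if inv['reserved'] >= inv['total']:
            raise ValueError(
                "Invalid inventory for %r: the reserved value is greater "
                "than or equal to total." % rc)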

  ERROR nova.compute.manager [req-a8752223-5d75-4fa2-9668-7c024d166f09 - - - - -] Error updating resources for node 3ee26a05-944b-42ba-b74d-42aa2fda5d73.: ResourceProviderSyncFailed: Failed to synchronize the placement service with resource provider information supplied by the compute host.

  Traceback (most recent call last):
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7722, in _update_available_resource_for_node
  ERROR nova.compute.manager rt.update_available_resource(context, nodename)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 703, in update_available_resource
  ERROR nova.compute.manager self._update_available_resource(context, resources)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
  ERROR nova.compute.manager return f(*args, **kwargs)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 726, in _update_available_resource
  ERROR nova.compute.manager self._init_compute_node(context, resources)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 593, in _init_compute_node
  ERROR nova.compute.manager self._update(context, cn)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 68, in wrapped_f
  ERROR nova.compute.manager return Retrying(*dargs, **dkw).call(f, *args, **kw)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 223, in call
  ERROR nova.compute.manager return attempt.get(self._wrap_exception)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
  ERROR nova.compute.manager six.reraise(self.value[0], self.value[1], self.value[2])
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
  ERROR nova.compute.manager attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 938, in _update
  ERROR nova.compute.manager self._update_to_placement(context, compute_node)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 907, in _update_to_placement
  ERROR nova.compute.manager reportclient.update_from_provider_tree(context, prov_tree)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
  ERROR nova.compute.manager return getattr(self.instance, __name)(*args, **kwargs)
  ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", l

[Yahoo-eng-team] [Bug 1776596] Re: [QUEENS] Promotion Jobs failing at overcloud deployment with AttributeError: 'IronicNodeState' object has no attribute 'failed_builds'

2018-06-14 Thread wes hayutin
https://review.rdoproject.org/jenkins/job/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/296/console

** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776596

Title:
  [QUEENS] Promotion Jobs failing at overcloud deployment with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  Queens overcloud deployment in all ovb promotion jobs is failing with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'.

  Logs:
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/var/log/nova/nova-scheduler.log.txt.gz#_2018-06-13_01_08_25_689
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens/3909a7f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz

  This is happening with a cherry-picked patch in nova:
  https://review.openstack.org/#/c/573239/

  In master it is not seen, probably because of
  https://review.openstack.org/#/c/565805/ (Remove IronicHostManager and
  baremetal scheduling options).
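
  A self-contained illustration of how this kind of AttributeError arises when
  a subclass never initializes an attribute that newly added scheduler code
  expects; the classes below are stand-ins, not nova's real implementation:

    class HostState(object):
        def __init__(self):
            self.failed_builds = 0      # set for regular compute hosts

    class IronicNodeState(HostState):
        def __init__(self):
            # overrides __init__ without setting failed_builds
            pass

    node = IronicNodeState()
    node.failed_builds += 1
    # AttributeError: 'IronicNodeState' object has no attribute 'failed_builds'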

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp