[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2017-04-20 Thread Andrea Frittoli
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Released

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed like this:

  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_personality.py", line 63, in test_create_server_with_personality
      validatable=True)
    File "tempest/api/compute/base.py", line 233, in create_test_server
      **kwargs)
    File "tempest/common/compute.py", line 167, in create_test_server
      % server['id'])
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "tempest/common/compute.py", line 149, in create_test_server
      clients.servers_client, server['id'], wait_until)
    File "tempest/common/waiters.py", line 75, in wait_for_server_status
      server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656386] Re: Reduce neutron services' memory footprint

2017-02-15 Thread Andrea Frittoli
** Also affects: openstack-gate
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656386

Title:
  Reduce neutron services' memory footprint

Status in neutron:
  Confirmed
Status in OpenStack-Gate:
  New

Bug description:
  A couple of examples of recent leaks in the linuxbridge job: [1], [2]

  [1] http://logs.openstack.org/73/373973/13/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/295d92f/logs/syslog.txt.gz#_Jan_11_13_56_32
  [2] http://logs.openstack.org/59/382659/26/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/7de01d0/logs/syslog.txt.gz#_Jan_11_15_54_36

  Close to the end of the test run, swap consumption grows quickly,
  exceeding 2 GB. I haven't found the root cause yet.

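A minimal, stdlib-only sketch of how a leak like this can be narrowed
down: snapshot per-process RSS and swap for the neutron services from
/proc while the tests run (Linux only). The 'neutron' substring match is
an assumption - adjust it to the processes you care about:

    import os

    def mem_stats(pid):
        # VmRSS/VmSwap are reported by the kernel in /proc/<pid>/status.
        stats = {}
        with open('/proc/%s/status' % pid) as f:
            for line in f:
                key, _, value = line.partition(':')
                if key in ('Name', 'VmRSS', 'VmSwap'):
                    stats[key] = value.strip()
        return stats

    for pid in [p for p in os.listdir('/proc') if p.isdigit()]:
        try:
            with open('/proc/%s/cmdline' % pid) as f:
                cmdline = f.read()
            if 'neutron' in cmdline:
                s = mem_stats(pid)
                print('%s %s rss=%s swap=%s'
                      % (pid, s.get('Name'), s.get('VmRSS'), s.get('VmSwap')))
        except IOError:
            pass  # process exited between listdir() and open()

Run in a loop (say every 30s) this gives a growth curve per service
rather than a single node-wide swap number.
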
To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1656386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646779] Re: libvirt killed by kernel on general protection or stack segment traps

2016-12-20 Thread Andrea Frittoli
I marked this as Incomplete from a Tempest point of view: I couldn't find
anything wrong with the Tempest tests that seem to trigger this, other
than that they sometimes expose what looks like a libvirt issue.

** Also affects: libvirt
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646779

Title:
  libvirt killed by kernel on general protection or stack segment traps

Status in libvirt:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in tempest:
  Incomplete

Bug description:
  A VM fails to spawn with no host available. The nova-cpu logs reveals
  a problem connecting to libvirt. 84 hits since Nov 23rd:

  message: "libvirtError: Failed to connect socket to '/var/run/libvirt
  /libvirt-sock': Connection refused"

  Recent failure: http://logs.openstack.org/66/401366/4/gate/gate-
  tempest-dsvm-neutron-full-ubuntu-
  xenial/3deacc5/logs/screen-n-cpu.txt.gz?level=ERROR

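  As a quick check outside of nova, the same connection can be attempted
  directly with the libvirt python binding - a minimal sketch; the
  qemu:///system URI is what devstack configures by default and is an
  assumption for other setups:

      import libvirt

      try:
          # Read-only is enough to prove the socket accepts connections.
          conn = libvirt.openReadOnly('qemu:///system')
          print('libvirtd reachable, library version %d' % conn.getLibVersion())
          conn.close()
      except libvirt.libvirtError as e:
          # 'Connection refused' here means libvirtd itself is down,
          # matching the n-cpu traceback below.
          print('connection failed: %s' % e)
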
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host [req-12fbb338-7df0-4654-b686-257245421442 tempest-ImagesOneServerNegativeTestJSON-1400886372 tempest-ImagesOneServerNegativeTestJSON-1400886372] Connection to libvirt failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host Traceback (most recent call last):
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 453, in get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     conn = self._get_connection()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 436, in _get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     {'msg': ex})
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     self.force_reraise()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     six.reraise(self.type_, self.value, self.tb)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 425, in _get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     self._wrapped_conn = self._get_new_connection()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 370, in _get_new_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     wrapped_conn = self._connect(self._uri, self._read_only)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 226, in _connect
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     libvirt.openAuth, uri, auth, flags)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     rv = execute(f, *args, **kwargs)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     six.reraise(c, e, tb)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     rv = meth(*args, **kwargs)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 105, in openAuth
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     if ret is None:raise libvirtError('virConnectOpenAuth() failed')
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host 
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [req-12fbb338-7df0-4654-b686-257245421442 tempest-ImagesOneServerNegativeTestJSON-1400886372 tempest-ImagesOneServerNegativeTestJSON-1400886372] [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc] Instance failed to spawn
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc] Traceback (most recent call last):
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a

[Yahoo-eng-team] [Bug 1646779] Re: Cannot connect to libvirt

2016-12-07 Thread Andrea Frittoli
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646779

Title:
  Cannot connect to libvirt

Status in OpenStack Compute (nova):
  New
Status in tempest:
  New

Bug description:
  A VM fails to spawn with no host available. The nova-cpu logs reveals
  a problem connecting to libvirt. 84 hits since Nov 23rd:

  message: "libvirtError: Failed to connect socket to '/var/run/libvirt
  /libvirt-sock': Connection refused"

  Recent failure: http://logs.openstack.org/66/401366/4/gate/gate-
  tempest-dsvm-neutron-full-ubuntu-
  xenial/3deacc5/logs/screen-n-cpu.txt.gz?level=ERROR

  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host [req-12fbb338-7df0-4654-b686-257245421442 tempest-ImagesOneServerNegativeTestJSON-1400886372 tempest-ImagesOneServerNegativeTestJSON-1400886372] Connection to libvirt failed: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host Traceback (most recent call last):
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 453, in get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     conn = self._get_connection()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 436, in _get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     {'msg': ex})
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     self.force_reraise()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     six.reraise(self.type_, self.value, self.tb)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 425, in _get_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     self._wrapped_conn = self._get_new_connection()
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 370, in _get_new_connection
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     wrapped_conn = self._connect(self._uri, self._read_only)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/opt/stack/new/nova/nova/virt/libvirt/host.py", line 226, in _connect
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     libvirt.openAuth, uri, auth, flags)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 144, in proxy_call
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     rv = execute(f, *args, **kwargs)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 125, in execute
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     six.reraise(c, e, tb)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 83, in tworker
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     rv = meth(*args, **kwargs)
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host   File "/usr/local/lib/python2.7/dist-packages/libvirt.py", line 105, in openAuth
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host     if ret is None:raise libvirtError('virConnectOpenAuth() failed')
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Connection refused
  2016-12-01 18:16:05.117 6160 ERROR nova.virt.libvirt.host 
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [req-12fbb338-7df0-4654-b686-257245421442 tempest-ImagesOneServerNegativeTestJSON-1400886372 tempest-ImagesOneServerNegativeTestJSON-1400886372] [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc] Instance failed to spawn
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc] Traceback (most recent call last):
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2117, in _build_resources
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a7-47a8-908b-6738f36f6ffc]     yield resources
  2016-12-01 18:16:05.123 6160 ERROR nova.compute.manager [instance: 6fa73b04-c6a7-47a8-908b-6738f36f

[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2016-12-05 Thread Andrea Frittoli
** Changed in: devstack
   Status: In Progress => Fix Released

** Changed in: tempest
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Committed

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed like this:

  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_personality.py", line 63, in test_create_server_with_personality
      validatable=True)
    File "tempest/api/compute/base.py", line 233, in create_test_server
      **kwargs)
    File "tempest/common/compute.py", line 167, in create_test_server
      % server['id'])
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "tempest/common/compute.py", line 149, in create_test_server
      clients.servers_client, server['id'], wait_until)
    File "tempest/common/waiters.py", line 75, in wait_for_server_status
      server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1646002] Re: periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate - libguestfs installed but not usable (/usr/bin/supermin exited with error status 1.

2016-12-05 Thread Andrea Frittoli
The root disk of the cirros image is blank before boot.

The boot process starts from the initrd. The file system in the initrd is
then copied to /dev/vda and boot continues from there. File injection
happens before boot, so there is no /etc folder to inject into.

The test should inject to / instead; a sketch follows below.

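A sketch of what the fix could look like on the tempest side - names
follow test_server_personality.py, but the exact kwargs are an
assumption and may differ between tempest versions:

    import base64

    def build_personality(path='/test.txt', contents='This is a test file.'):
        # The personality API takes a list of {path, contents} dicts with
        # base64-encoded contents; the path must exist in the guest's
        # root fs at injection time, hence '/' rather than '/etc'.
        return [{'path': path, 'contents': base64.b64encode(contents)}]

    # In the test class:
    #   server = self.create_test_server(personality=build_personality(),
    #                                    validatable=True)
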

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1646002

Title:
  periodic-tempest-dsvm-neutron-full-ssh-master fails on the gate -
  libguestfs installed but not usable (/usr/bin/supermin exited with
  error status 1.

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  New

Bug description:
  The log is http://logs.openstack.org/periodic/periodic-tempest-dsvm-neutron-full-ssh-master/14ef08a/logs/

  test_create_server_with_personality failed like this:

  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_personality.py", line 63, in test_create_server_with_personality
      validatable=True)
    File "tempest/api/compute/base.py", line 233, in create_test_server
      **kwargs)
    File "tempest/common/compute.py", line 167, in create_test_server
      % server['id'])
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
      self.force_reraise()
    File "/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
      six.reraise(self.type_, self.value, self.tb)
    File "tempest/common/compute.py", line 149, in create_test_server
      clients.servers_client, server['id'], wait_until)
    File "tempest/common/waiters.py", line 75, in wait_for_server_status
      server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 55df9d1c-3316-43a5-81fe-63ff10216b5e failed to build and is in ERROR status
  Details: {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'created': u'2016-11-29T06:28:57Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1646002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298472] Re: SSHTimeout in tempest trying to verify that computes are actually functioning

2016-12-01 Thread Andrea Frittoli
This is a very old bug. Anna set it back to New in Aug 2016 as it may be
related to https://bugs.launchpad.net/mos/+bug/1606218, which has since been
fixed.
Hence I will set this to Invalid. If someone hits an SSH bug in the gate
again, please file a new bug.

** Changed in: tempest
   Status: New => Invalid

** Changed in: tempest
   Importance: Critical => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298472

Title:
  SSHTimeout in tempest trying to verify that computes are actually
  functioning

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Invalid

Bug description:
  
  tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
  failed at least once with the following traceback when trying to
  connect via SSH:

  Traceback (most recent call last):
    File "tempest/scenario/test_volume_boot_pattern.py", line 156, in test_volume_boot_pattern
      ssh_client = self._ssh_to_server(instance_from_snapshot, keypair)
    File "tempest/scenario/test_volume_boot_pattern.py", line 100, in _ssh_to_server
      private_key=keypair.private_key)
    File "tempest/scenario/manager.py", line 466, in get_remote_client
      return RemoteClient(ip, username, pkey=private_key)
    File "tempest/common/utils/linux/remote_client.py", line 47, in __init__
      if not self.ssh_client.test_connection_auth():
    File "tempest/common/ssh.py", line 149, in test_connection_auth
      connection = self._get_ssh_connection()
    File "tempest/common/ssh.py", line 65, in _get_ssh_connection
      timeout=self.channel_timeout, pkey=self.pkey)
    File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in connect
      retry_on_signal(lambda: sock.connect(addr))
    File "/usr/local/lib/python2.7/dist-packages/paramiko/util.py", line 279, in retry_on_signal
      return function()
    File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 236, in <lambda>
      retry_on_signal(lambda: sock.connect(addr))
    File "/usr/lib/python2.7/socket.py", line 224, in meth
      return getattr(self._sock,name)(*args)
    File "/usr/local/lib/python2.7/dist-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
      raise TimeoutException()
  TimeoutException

  Logs can be found at: http://logs.openstack.org/86/82786/1/gate/gate-tempest-dsvm-neutron-pg/1eaadd0/
  The review that triggered the issue is: https://review.openstack.org/#/c/82786/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1298472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506910] [NEW] Nova DB deadlock

2015-10-16 Thread Andrea Frittoli
Public bug reported:

I hit the following deadlock in a dsvm job:

http://paste.openstack.org/show/476503/

The full log is here:
http://logs.openstack.org/00/234200/5/experimental/gate-tempest-dsvm-neutron-full-test-accounts/4dccd24/logs/screen-n-api.txt.gz#_2015-10-16_13_23_36_379

The exception is:
2015-10-16 13:23:36.379 27391 ERROR nova.api.openstack.extensions DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying to get lock; try restarting transaction') [SQL: u'INSERT INTO instance_extra (created_at, updated_at, deleted_at, deleted, instance_uuid, numa_topology, pci_requests, flavor, vcpu_model, migration_context) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: (datetime.datetime(2015, 10, 16, 13, 23, 36, 360147), None, None, 0, 'b4091b06-48bf-4cc1-9348-54574f1c8537', None, '[]', '{"new": null, "old": null, "cur": {"nova_object.version": "1.1", "nova_object.name": "Flavor", "nova_object.data": {"disabled": false, "root_gb": 0, "name": "m1.nano", "flavorid": "42", "deleted": false, "created_at": "2015-10-16T13:21:28Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 64, "vcpus": 1, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, "deleted_at": null, "vcpu_weight": 0, "id": 6}, "nova_object.namespace": "nova"}}', None, None)]

I have no details on how to reproduce - it's a random failure on a test
that otherwise normally passes.

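For reference, the usual mitigation for transient MySQL deadlocks is to
retry the DB API call; oslo.db ships a decorator for exactly this. A
hypothetical sketch (instance_extra_create below is a stand-in, not
actual nova code):

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def instance_extra_create(context, values):
        # The INSERT INTO instance_extra from the log above would run
        # here; on DBDeadlock the decorator re-invokes the whole call.
        pass

Whether nova's instance_extra insert path should grow such a retry is
for the nova team to decide.
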
** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506910

Title:
  Nova DB deadlock

Status in OpenStack Compute (nova):
  New

Bug description:
  I hit the following deadlock in a dsvm job:

  http://paste.openstack.org/show/476503/

  The full log is here:
  http://logs.openstack.org/00/234200/5/experimental/gate-tempest-dsvm-neutron-full-test-accounts/4dccd24/logs/screen-n-api.txt.gz#_2015-10-16_13_23_36_379

  The exception is:
  2015-10-16 13:23:36.379 27391 ERROR nova.api.openstack.extensions DBDeadlock: (pymysql.err.InternalError) (1213, u'Deadlock found when trying to get lock; try restarting transaction') [SQL: u'INSERT INTO instance_extra (created_at, updated_at, deleted_at, deleted, instance_uuid, numa_topology, pci_requests, flavor, vcpu_model, migration_context) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'] [parameters: (datetime.datetime(2015, 10, 16, 13, 23, 36, 360147), None, None, 0, 'b4091b06-48bf-4cc1-9348-54574f1c8537', None, '[]', '{"new": null, "old": null, "cur": {"nova_object.version": "1.1", "nova_object.name": "Flavor", "nova_object.data": {"disabled": false, "root_gb": 0, "name": "m1.nano", "flavorid": "42", "deleted": false, "created_at": "2015-10-16T13:21:28Z", "ephemeral_gb": 0, "updated_at": null, "memory_mb": 64, "vcpus": 1, "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true, "deleted_at": null, "vcpu_weight": 0, "id": 6}, "nova_object.namespace": "nova"}}', None, None)]

  I have no details on how to reproduce - it's a random failure on a
  test that otherwise normally passes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471665] Re: Successive runs of identity tempest tests take more and more time to finish

2015-07-06 Thread Andrea Frittoli
The cause of the degradation could be anywhere - keystone or any other
co-located service competing for your devstack node's resources. The
only way Tempest could make things worse is by not cleaning up test
resources properly, which would still hardly justify the slowdown.

Such a slowdown is worrying, but there's not quite enough information in the
ticket to really investigate.
Could you please attach at least your local.conf and a breakdown of the test
run times? (A simple way to collect one is sketched below.)

Initial triage is probably best done by keystone, even though the issue
could be in any of the services, as they are all running on the same box.

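Even a crude latency probe helps here - e.g. timing repeated token
issues against keystone between tempest runs. The endpoint and
credentials below are assumptions for a default devstack; adjust them
to your local.conf:

    import time
    import requests

    AUTH_URL = 'http://127.0.0.1:5000/v3/auth/tokens'
    BODY = {'auth': {'identity': {'methods': ['password'],
                                  'password': {'user': {
                                      'name': 'admin',
                                      'domain': {'id': 'default'},
                                      'password': 'secretadmin'}}}}}

    times = []
    for _ in range(50):
        start = time.time()
        requests.post(AUTH_URL, json=BODY)
        times.append(time.time() - start)
    print('min=%.3fs avg=%.3fs max=%.3fs'
          % (min(times), sum(times) / len(times), max(times)))

If the average creeps up run after run, the problem is on the keystone
side; if it stays flat, look at the test node itself.
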
** Project changed: tempest => keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1471665

Title:
  Successive runs of identity tempest tests take more and more time to
  finish

Status in OpenStack Identity (Keystone):
  Incomplete

Bug description:
  If I run tempest on a newly-created VM with devstack installed and
  only keystone set up, the first parallel run of the
  tempest.api.identity.admin.v3 tests takes around 20 seconds. If I run
  them again, they take around 38 seconds; a third run takes around 48
  seconds, and so on.

  I tried on a local VM (virtualbox) as well as an AWS t2.medium instance.

  The only configuration worth mentioning was:

  ENABLED_SERVICES=key,mysql,tempest

  in local.conf

  Not sure if it's keystone-specific. Nevertheless, really annoying.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1471665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456771] [NEW] Failure prepping block device in the gate

2015-05-19 Thread Andrea Frittoli
Public bug reported:

Since May 15th 2015 I sometimes see failures in both the check and gate
pipelines with "Failure prepping block device" and the following common
signature:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcImJsb2NrX2RldmljZV9vYmpba2V5XSA9IGRiX2Jsb2NrX2RldmljZVtrZXldXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzIwNjE5OTMxOTZ9

req-843cf2f1-4d3a-4c1f-84f6-2c61abec72ac ListServersNegativeTestJSON-690147980 ListServersNegativeTestJSON-786232548] [instance: 76293503-6b94-4b80-8322-83b2ec6c898c] Failure prepping block device
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c] Traceback (most recent call last):
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2156, in _build_resources
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     block_device_mapping)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/compute/manager.py", line 1676, in _default_block_device_names
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     root_bdm.save()
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/objects/base.py", line 189, in wrapper
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     self._context, self, fn.__name__, args, kwargs)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/conductor/rpcapi.py", line 266, in object_action
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     objmethod=objmethod, args=args, kwargs=kwargs)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 156, in call
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     retry=self.retry)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     timeout=timeout, retry=retry)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 350, in send
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     retry=retry)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 341, in _send
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     raise result
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c] TypeError: 'NoneType' object has no attribute '__getitem__'
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c] 
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/conductor/manager.py", line 436, in _object_dispatch
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     return getattr(target, method)(*args, **kwargs)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/objects/base.py", line 205, in wrapper
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     return fn(self, *args, **kwargs)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/objects/block_device.py", line 175, in save
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]     self._from_db_object(self._context, self, updated)
2015-05-19 18:46:14.544 15691 TRACE nova.compute.manager [instance: 76293503-6b94-4b80-8322-83b2ec6c898c]   File "/opt/stack/new/nova/nova/objects/block_device.py", line 89, in _from_db_object
2015-05

[Yahoo-eng-team] [Bug 1154809] Re: Volume detach fails via OSAPI: AmbiguousEndpoints

2014-02-16 Thread Andrea Frittoli
My two cents: all python bindings should rely on python-keystoneclient
for obtaining token and url from catalogue, rather than duplicating the
logic to get a token and parse the catalogue, like cinder and nova
client do. This will make it easier in future to support new version of
the identity API.

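A sketch of what that looks like with python-keystoneclient's session
support - the URL and credentials are placeholders:

    from keystoneclient.auth.identity import v2
    from keystoneclient import session

    auth = v2.Password(auth_url='http://127.0.0.1:5000/v2.0',
                       username='demo', password='secret',
                       tenant_name='demo')
    sess = session.Session(auth=auth)

    # Token acquisition/refresh and catalogue parsing happen inside the
    # auth plugin; the client library just asks for the endpoint it needs.
    volume_url = sess.get_endpoint(service_type='volume',
                                   interface='public')
    print(volume_url)

A client built on the session only has to name a service type and
interface, so supporting a new identity API version happens in one place.
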
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1154809

Title:
  Volume detach fails via OSAPI: AmbiguousEndpoints

Status in OpenStack Compute (Nova):
  Fix Released
Status in Python client library for Cinder:
  New
Status in Python client library for Keystone:
  Confirmed
Status in Python client library for Nova:
  New

Bug description:
  Not sure if this is a cinderclient bug or nova.  Attempting to detach
  a volume via the OSAPI ends with an AmbiguousEndpoints exception:

  2013-03-13 17:30:40.314 ERROR nova.api.openstack [req-9dc2a448-c2ed-4db3-be2f-6e1f8971d463 f1a96ff6310042f7b7a9b5acbb634a43 e00a289cba9b45169f054067d7dd74e1] Caught error: AmbiguousEndpoints: [{u'url': u'http://test-07.os.magners.qa.lexington:8776/v1/e00a289cba9b45169f054067d7dd74e1', u'region': u'RegionOne', u'legacy_endpoint_id': u'8012ff386ffa4955b3eab965a4826b0e', 'serviceName': None, u'interface': u'internal', u'id': u'3b41c544eb24440b89946ab4da3c2524'}, {u'url': u'http://test-07.os.magners.qa.lexington:8776/v1/e00a289cba9b45169f054067d7dd74e1', u'region': u'RegionOne', u'legacy_endpoint_id': u'8012ff386ffa4955b3eab965a4826b0e', 'serviceName': None, u'interface': u'public', u'id': u'981469fb6b4f4928b1a59783bec445c4'}, {u'url': u'http://test-07.os.magners.qa.lexington:8776/v1/e00a289cba9b45169f054067d7dd74e1', u'region': u'RegionOne', u'legacy_endpoint_id': u'8012ff386ffa4955b3eab965a4826b0e', 'serviceName': None, u'interface': u'admin', u'id': u'c578676fd1c6404c9a4fddd8daf531a4'}]
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack Traceback (most recent call last):
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return req.get_response(self.application)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     application, catch_exc_info=False)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 451, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return self.app(env, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     response = self.app(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2013-03-13 17:30:40.314 8945 TRACE nova.api.openstack     return self.func(req, *args, **

[Yahoo-eng-team] [Bug 1238536] Re: POST with empty body results in 411 Error

2014-01-28 Thread Andrea Frittoli
httplib2 is not setting content-length to 0 if the post body is empty
(https://code.google.com/p/httplib2/issues/detail?id=143).

Certain http servers won't be happy if the content length is not set.
Tempest rest client should do that.

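A minimal sketch of the proposed workaround: set Content-Length to 0
explicitly before handing an empty-bodied POST to httplib2 (post_empty
is a hypothetical helper, not the actual rest client code):

    import httplib2

    def post_empty(uri, token):
        headers = {'X-Auth-Token': token,
                   'Content-Type': 'application/json',
                   # httplib2 omits this for body='', which trips some
                   # HTTP servers (see the issue linked above).
                   'Content-Length': '0'}
        http = httplib2.Http()
        return http.request(uri, 'POST', body='', headers=headers)

    # resp, content = post_empty('https://example.com/v2/os-floating-ips',
    #                            'xxx')
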
** Also affects: tempest
   Importance: Undecided
   Status: New

** Bug watch added: code.google.com/p/httplib2/issues #143
   http://code.google.com/p/httplib2/issues/detail?id=143

** Changed in: tempest
 Assignee: (unassigned) => Andrea Frittoli (andrea-frittoli)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1238536

Title:
  POST with empty body results in 411 Error

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  In Progress

Bug description:
  Some API commands don't need a body - for example allocating a
  floating IP.   However making a request without a body results in a
  411 error:

  curl -i https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips -H "Content-Type: application/xml" -H "Accept: application/xml" -H "X-Auth-Token: xxx" -X POST
  HTTP/1.1 411 Length Required
  nnCoection: close
  Content-Length: 284

  Fault Name: HttpRequestReceiveError
  Error Type: Default
  Description: Http request received failed
  Root Cause Code: -19013
  Root Cause : HTTP Transport: Couldn't determine the content length
  Binding State: CLIENT_CONNECTION_ESTABLISHED
  Service: null
  Endpoint: null

  
  Passing an empty body works:
  curl -i https://compute.systestb.hpcloud.net/v2/21240759398822/os-floating-ips -H "Content-Type: application/xml" -H "Accept: application/xml" -H "X-Auth-Token: xxx" -X POST -d ''
  HTTP/1.1 200 OK
  Content-Length: 164
  Content-Type: application/xml; charset=UTF-8
  Date: Fri, 31 May 2013 11:13:26 GMT
  X-Compute-Request-Id: req-cc2ce740-6114-4820-8717-113ea1796142

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1238536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267544] [NEW] List of endpoints in v2 token response is "endpoint" in XML

2014-01-09 Thread Andrea Frittoli
Public bug reported:

The response to the token API in the v2 API is not consistent between
JSON and XML

In JSON the format is as follows:

"serviceCatalog": [
    {
        "endpoints": [
            {
                "adminURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                "id": "1f61239858ba4fc595284473a05c79a9",
                "internalURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                "publicURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                "region": "RegionOne"
            }
        ],
        "endpoints_links": [],
        "name": "nova",
        "type": "compute"
    },


While in XML the format is:

<serviceCatalog>
    <service type="compute" name="nova">
        <endpoint adminURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" region="RegionOne" publicURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" internalURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" id="1f61239858ba4fc595284473a05c79a9"/>
    </service>
</serviceCatalog>

So it's "endpoints" for JSON and "endpoint" for XML.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1267544

Title:
  List of endpoints in v2 token response is "endpoint" in XML

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The response to the token API in the v2 API is not consistent between
  JSON and XML

  In JSON the format is as follows:

  "serviceCatalog": [
      {
          "endpoints": [
              {
                  "adminURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                  "id": "1f61239858ba4fc595284473a05c79a9",
                  "internalURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                  "publicURL": "http://127.0.0.1:8774/v2/aff91593f7fb43cc863a34cf718584cb",
                  "region": "RegionOne"
              }
          ],
          "endpoints_links": [],
          "name": "nova",
          "type": "compute"
      },


  While in XML the format is:

  <serviceCatalog>
      <service type="compute" name="nova">
          <endpoint adminURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" region="RegionOne" publicURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" internalURL="http://127.0.0.1:8774/v2/8bae6214d4314a0aa1d5dce34c7a5f38" id="1f61239858ba4fc595284473a05c79a9"/>
      </service>
  </serviceCatalog>

  So it's "endpoints" for JSON and "endpoint" for XML.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1267544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260644] [NEW] ServerRescueTest may fail due to RESCUE taking too long

2013-12-13 Thread Andrea Frittoli
Public bug reported:

In the grenade test [0] for a bp I'm working on, the ServerRescueTestXML
rescue_unrescue test failed because the VM did not get into the RESCUE
state in time. It seems that the test is flaky.

From the tempest log [1] I see the sequence: VM ACTIVE, RESCUE issued,
WAIT, timeout, DELETE VM.

From the nova cpu log [2], following request ID
req-6c20654c-c00c-4932-87ad-8cfec9866399, I see that the RESCUE RPC is
received immediately by n-cpu; however, the request then starves for 3
minutes waiting for a "compute_resources" lock.

The VM is then deleted by the test, and when nova tries to process the
RESCUE it throws an exception because the VM is no longer there:

bc-b27a-83c39b7566c8] Traceback (most recent call last):
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2664, in rescue_instance
bc-b27a-83c39b7566c8]     rescue_image_meta, admin_password)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2109, in rescue
bc-b27a-83c39b7566c8]     write_to_disk=True)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3236, in to_xml
bc-b27a-83c39b7566c8]     libvirt_utils.write_to_file(xml_path, xml)
bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 494, in write_to_file
bc-b27a-83c39b7566c8]     with open(path, 'w') as f:
bc-b27a-83c39b7566c8] IOError: [Errno 2] No such file or directory: u'/opt/stack/data/nova/instances/a5099beb-f4a2-47bc-b27a-83c39b7566c8/libvirt.xml'
bc-b27a-83c39b7566c8] 

There may be a problem in nova as well, as the RESCUE is held for 3
minutes waiting on a lock; see the wait-loop sketch after the links below.

[0] https://review.openstack.org/#/c/60434/
[1] http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/tempest.txt.gz
[2] http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/new/screen-n-cpu.txt.gz?
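
For context, the tempest side of this is essentially a polling waiter
like the sketch below; servers_client and show_server() stand in for
the real tempest client plumbing, and the timeout value is an
assumption:

    import time

    def wait_for_status(servers_client, server_id,
                        status='RESCUE', timeout=196, interval=1):
        start = time.time()
        while time.time() - start < timeout:
            body = servers_client.show_server(server_id)['server']
            if body['status'] == status:
                return body
            time.sleep(interval)
        raise RuntimeError('Server %s did not reach %s within %ss'
                           % (server_id, status, timeout))

With the RPC parked for ~3 minutes behind the compute_resources lock,
even a generous timeout like this can expire before the RESCUE is ever
processed.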

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260644

Title:
  ServerRescueTest may fail due to RESCUE taking too long

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  In the grenade test [0] for a bp I'm working on, the ServerRescueTestXML
  rescue_unrescue test failed because the VM did not get into the RESCUE
  state in time. It seems that the test is flaky.

  From the tempest log [1] I see the sequence: VM ACTIVE, RESCUE issued,
  WAIT, timeout, DELETE VM.

  From the nova cpu log [2], following request ID
  req-6c20654c-c00c-4932-87ad-8cfec9866399, I see that the RESCUE RPC is
  received immediately by n-cpu; however, the request then starves for 3
  minutes waiting for a "compute_resources" lock.

  The VM is then deleted by the test, and when nova tries to process the
  RESCUE it throws an exception because the VM is no longer there:

  bc-b27a-83c39b7566c8] Traceback (most recent call last):
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/compute/manager.py", line 2664, in rescue_instance
  bc-b27a-83c39b7566c8]     rescue_image_meta, admin_password)
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2109, in rescue
  bc-b27a-83c39b7566c8]     write_to_disk=True)
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3236, in to_xml
  bc-b27a-83c39b7566c8]     libvirt_utils.write_to_file(xml_path, xml)
  bc-b27a-83c39b7566c8]   File "/opt/stack/new/nova/nova/virt/libvirt/utils.py", line 494, in write_to_file
  bc-b27a-83c39b7566c8]     with open(path, 'w') as f:
  bc-b27a-83c39b7566c8] IOError: [Errno 2] No such file or directory: u'/opt/stack/data/nova/instances/a5099beb-f4a2-47bc-b27a-83c39b7566c8/libvirt.xml'
  bc-b27a-83c39b7566c8] 

  There may be a problem in nova as well, as the RESCUE is held for 3
  minutes waiting on a lock.

  [0] https://review.openstack.org/#/c/60434/
  [1] http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/tempest.txt.gz
  [2] http://logs.openstack.org/34/60434/5/check/check-grenade-dsvm/1d2852d/logs/new/screen-n-cpu.txt.gz?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp