[Yahoo-eng-team] [Bug 1398086] [NEW] nova servers pagination does not work with deleted marker

2014-12-01 Thread Alvaro Lopez
Public bug reported:

Nova does not paginate correctly if the marker is a deleted server.

I am trying to get all of the servers for a given tenant. In total (i.e.
active, deleted, error, etc.) there are 405 servers.

If I query the API without a marker and with a limit larger (for example, 500)
than the total number of servers I get all of them, i.e. the following query
correctly returns 405 servers:

curl (...) "http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=500"

However, if I try to paginate over them, doing:

curl (...) "http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=100"

I get the first 100 with a link to the next page. If I try to follow it:

curl (...) "http://cloud.example.org:8774/v1.1/foo/servers?changes-
since=2014-01-01&limit=100&marker=foobar"

I always get a "badRequest" error saying that the marker is not found. I
guess this is because of these lines in "nova/db/sqlalchemy/api.py":

2000     # paginate query
2001     if marker is not None:
2002         try:
2003             marker = _instance_get_by_uuid(context, marker, session=session)
2004         except exception.InstanceNotFound:
2005             raise exception.MarkerNotFound(marker)

The function "_instance_get_by_uuid" only returns instances that are not
deleted, therefore it fails to locate the marker if it refers to a deleted
server.
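
A minimal sketch of the kind of change I would expect (hypothetical, not an
actual nova patch): make the marker lookup also consider deleted instances,
for example by reading with a context that includes deleted rows:

    # Hypothetical sketch: resolve the pagination marker even if the
    # instance it points to has been (soft-)deleted.
    if marker is not None:
        try:
            # read_deleted='yes' makes the lookup include deleted rows,
            # so a deleted server can still act as a valid marker.
            marker = _instance_get_by_uuid(
                context.elevated(read_deleted='yes'), marker,
                session=session)
        except exception.InstanceNotFound:
            raise exception.MarkerNotFound(marker)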

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, deleted, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500"

  However, if I try to paginate over them, doing:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100"

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar"

  I always get a "badRequest" error saying that the marker is not found. I
  guess this is because of these lines in "nova/db/sqlalchemy/api.py":

  2000     # paginate query
  2001     if marker is not None:
  2002         try:
  2003             marker = _instance_get_by_uuid(context, marker, session=session)
  2004         except exception.InstanceNotFound:
  2005             raise exception.MarkerNotFound(marker)

  The function "_instance_get_by_uuid" only returns instances that are not
  deleted, therefore it fails to locate the marker if it refers to a deleted
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398086] Re: nova servers pagination does not work with deleted marker

2014-12-02 Thread Alvaro Lopez
** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398086

Title:
  nova servers pagination does not work with deleted marker

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova does not paginate correctly if the marker is a deleted server.

  I am trying to get all of the servers for a given tenant. In total
  (i.e. active, deleted, error, etc.) there are 405 servers.

  If I query the API without a marker and with a limit larger (for example, 500)
  than the total number of servers I get all of them, i.e. the following query
  correctly returns 405 servers:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=500"

  However, if I try to paginate over them, doing:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100"

  I get the first 100 with a link to the next page. If I try to follow
  it:

  curl (...) "http://cloud.example.org:8774/v1.1/foo/servers
  ?changes-since=2014-01-01&limit=100&marker=foobar"

  I always get a "badRequest" error saying that the marker is not found. I
  guess this is because of these lines in "nova/db/sqlalchemy/api.py":

  2000     # paginate query
  2001     if marker is not None:
  2002         try:
  2003             marker = _instance_get_by_uuid(context, marker, session=session)
  2004         except exception.InstanceNotFound:
  2005             raise exception.MarkerNotFound(marker)

  The function "_instance_get_by_uuid" only returns instances that are not
  deleted, therefore it fails to locate the marker if it refers to a deleted
  server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1186907] Re: If ebtables is not present nova-network fails with misleading error

2015-01-19 Thread Alvaro Lopez
** Changed in: ubuntu
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1186907

Title:
  If ebtables is not present nova-network fails with misleading error

Status in OpenStack Compute (Nova):
  Invalid
Status in Ubuntu:
  Invalid

Bug description:
  In an upgrade from Folsom to Grizzly, nova-network fails as follows:

  2013-06-03 10:08:01.430 6947 CRITICAL nova [-] Interface eth1 not found.
  2013-06-03 10:08:01.430 6947 TRACE nova Traceback (most recent call last):
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/bin/nova-network", line 54, in <module>
  2013-06-03 10:08:01.430 6947 TRACE nova     service.wait()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 689, in wait
  2013-06-03 10:08:01.430 6947 TRACE nova     _launcher.wait()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 209, in wait
  2013-06-03 10:08:01.430 6947 TRACE nova     super(ServiceLauncher, self).wait()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 179, in wait
  2013-06-03 10:08:01.430 6947 TRACE nova     service.wait()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
  2013-06-03 10:08:01.430 6947 TRACE nova     return self._exit_event.wait()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2013-06-03 10:08:01.430 6947 TRACE nova     return hubs.get_hub().switch()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
  2013-06-03 10:08:01.430 6947 TRACE nova     return self.greenlet.switch()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
  2013-06-03 10:08:01.430 6947 TRACE nova     result = function(*args, **kwargs)
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 147, in run_server
  2013-06-03 10:08:01.430 6947 TRACE nova     server.start()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 429, in start
  2013-06-03 10:08:01.430 6947 TRACE nova     self.manager.init_host()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1681, in init_host
  2013-06-03 10:08:01.430 6947 TRACE nova     self.init_host_floating_ips()
  2013-06-03 10:08:01.430 6947 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/floating_ips.py", line 98, in init_host_floating_ips
  2013-06-03 10:08:01.430 6947 TRACE nova     raise exception.NoFloatingIpInterface(interface=interface)
  2013-06-03 10:08:01.430 6947 TRACE nova NoFloatingIpInterface: Interface eth1 not found.
  2013-06-03 10:08:01.430 6947 TRACE nova

  This is caused by ebtables not being present on the system: the process that
  adds the floating IP fails, and a misleading error is shown. ebtables should
  be a dependency of nova-network, but it is not:

  Package: nova-network
  State: installed
  Automatically installed: no
  Version: 1:2013.1-0ubuntu2.1~cloud0
  Priority: extra
  Section: net
  Maintainer: Ubuntu Developers 
  Architecture: all
  Uncompressed Size: 1934 k
  Depends: bridge-utils, dnsmasq-base, dnsmasq-utils, iptables, iputils-arping, netcat, nova-common (= 1:2013.1-0ubuntu2.1~cloud0), vlan, upstart-job, python

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1186907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424597] Re: Obscure 'No valid hosts found' if no free fixed IPs left in the network

2015-03-02 Thread Alvaro Lopez
*** This bug is a duplicate of bug 1394268 ***
https://bugs.launchpad.net/bugs/1394268

** This bug has been marked a duplicate of bug 1394268
   wrong error message when no IP addresses are available

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1424597

Title:
  Obscure 'No valid hosts found' if no free fixed IPs left in the
  network

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If the network has no free fixed IPs, new instances fail with 'No valid
  host found' without a proper explanation.

  Example:

  nova boot foobar --flavor SSD.1 --image cirros --nic net-id=f3f2802a-c2a1-4d8b-9f43-cf24d0dc8233

  (There is no free IP left in network f3f2802a-c2a1-4d8b-9f43-cf24d0dc8233)

  nova show fb4552e5-50cb-4701-a095-c006e4545c04
  ...
  | status | BUILD |

  (few seconds later)

  | fault  | {"message": "No valid host was found. Exceeded max scheduling attempts 2 for instance fb4552e5-50cb-4701-a095-c006e4545c04. Last exception: [u'Traceback (most recent call last):\ |
  |        | ', u'  File \"/usr/lib/python2.7/dist-packages/nova/compute/manager.py\", line 2036, in _do", "code": 500, "details": "  File \"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py\", line 612, in build_instances |
  |        |     instances[0].uuid) |
  |        |   File \"/usr/lib/python2.7/dist-packages/nova/scheduler/utils.py\", line 161, in populate_retry |
  |        |     raise exception.NoValidHost(reason=msg) |
  | status | ERROR |

  
  Expected behaviour: complain about 'No free IP' before attempting to schedule
  the instance.

  See https://bugs.launchpad.net/nova/+bug/1424594 for similar
  behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1424597/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340068] [NEW] Useless option mute_weight_value

2014-07-10 Thread Alvaro Lopez
Public bug reported:

The 'mute_weight_value' option for the 'MuteChildWeigher' weigher is
useless.

This configuration option was used to artificially inflate the returned
weight for a cell that was unavailable, but it is not needed anymore and
a multiplier should be used instead. Since the normalization process is
already in place, this variable has no effect at all and a muted child
will get a weight of 1.0 regardless of the 'mute_weight_value'.
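
To illustrate (a simplified sketch of min-max weight normalization, not the
exact cells weigher code): whatever positive constant is returned for the
muted child, it is the maximum raw weight in the list, so normalization
always maps it to 1.0:

    # Simplified illustration of min-max normalization: the muted child
    # gets the largest raw weight, so it normalizes to 1.0 no matter
    # which 'mute_weight_value' was configured.
    def normalize(weights):
        lo, hi = min(weights), max(weights)
        if lo == hi:
            return [0.0 for _ in weights]
        return [(w - lo) / (hi - lo) for w in weights]

    for mute_weight_value in (10.0, 1000.0, 1e6):
        raw = [0.0, 0.0, mute_weight_value]   # two healthy children, one muted
        print(normalize(raw))                 # always [0.0, 0.0, 1.0]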

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340068

Title:
  Useless option mute_weight_value

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The 'mute_weight_value' option for the 'MuteChildWeigher' weigher is
  useless.

  This configuration option was used to artificially inflate the
  returned weight for a cell that was unavailable, but it is not needed
  anymore and a multiplier should be used instead. Since the
  normalization process is already in place, this variable has no effect
  at all and a muted child will get a weight of 1.0 regardless of the
  'mute_weight_value'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361184] [NEW] Race condition in imagebackend.Image.cache downloads image several times

2014-08-25 Thread Alvaro Lopez
Public bug reported:

There's a race condition in imagebackend.Image.cache that makes nova
download an image N times when N requests requiring the same image are
scheduled on the same host while the image is being fetched.

The imagebackend.Image.cache method only synchronizes the image fetching
function, but the whole method (or the create_image function) should be
synchronized. When several requests using the same image are scheduled at
the same time there is no synchronization around nova's check of whether
the image already exists, therefore several requests may find that the
image does not exist and each start a download (the actual download will
be synchronized, but the image will be downloaded several times, once for
each request).
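
As an illustration only (not nova's actual code; the lock name and helper
below are assumptions), holding one lock around both the existence check and
the fetch would close the window:

    # Illustrative sketch: hold one lock around both the "does the cached
    # image already exist?" check and the fetch, so concurrent requests
    # for the same image cannot all decide to download it. An in-process
    # lock is used here for simplicity; nova would likely need an
    # external (file-based) lock as it already does for the fetch itself.
    import os
    from oslo_concurrency import lockutils

    def cache_image(image_id, target_path, fetch_func):
        @lockutils.synchronized(image_id)
        def _create_if_missing():
            if not os.path.exists(target_path):
                fetch_func(image_id, target_path)
        _create_if_missing()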

This can be seen requesting several instances into the same host:

nova boot --image  --flavor  --num-instances=4
--availability-zone :

In the host we can see:

-rw-r--r-- 1 nova nova 5.0G Aug 25 14:21 243eccfbc52469947665a506145d798670e3fc88
-rw-r--r-- 1 nova nova 1.2G Aug 25 14:22 243eccfbc52469947665a506145d798670e3fc88.part

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361184

Title:
  Race condition in imagebackend.Image.cache downloads image several
  times

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There's a race condition in imagebackend.Image.cache that makes nova
  download an image N times when N requests requiring the same image are
  scheduled on the same host while the image is being fetched.

  The imagebackend.Image.cache method only synchronizes the image fetching
  function, but the whole method (or the create_image function) should be
  synchronized. When several requests using the same image are scheduled at
  the same time there is no synchronization around nova's check of whether
  the image already exists, therefore several requests may find that the
  image does not exist and each start a download (the actual download will
  be synchronized, but the image will be downloaded several times, once for
  each request).

  This can be seen requesting several instances into the same host:

  nova boot --image  --flavor  --num-instances=4
  --availability-zone :

  In the host we can see:

  -rw-r--r-- 1 nova nova 5.0G Aug 25 14:21 243eccfbc52469947665a506145d798670e3fc88
  -rw-r--r-- 1 nova nova 1.2G Aug 25 14:22 243eccfbc52469947665a506145d798670e3fc88.part

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241482] Re: Authenticate to generate a token without tenant infos

2013-10-18 Thread Alvaro Lopez
This is the expected behaviour (although not documented). See this
https://www.ibm.com/developerworks/community/blogs/e93514d3-c4f0-4aa0-8844-497f370090f5/entry/openstack_keystone_workflow_token_scoping?lang=en

You are requesting an unscoped token, which can be used to discover the
list of tenants that you are able to access.
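
For reference, the usual two-step flow against the v2.0 API looks roughly
like this (an illustrative sketch; the endpoint host and credential values
are placeholders, not from this report):

    # Illustrative only: get an unscoped token, list the tenants it can
    # access, then re-authenticate scoped to one of them (v2.0 API).
    import requests

    ks = "http://keystone.example.org:5000/v2.0"   # placeholder endpoint

    unscoped = requests.post(ks + "/tokens", json={
        "auth": {"passwordCredentials": {"username": "admin",
                                         "password": "secrete"}}})
    token = unscoped.json()["access"]["token"]["id"]

    tenants = requests.get(ks + "/tenants",
                           headers={"X-Auth-Token": token}).json()["tenants"]

    scoped = requests.post(ks + "/tokens", json={
        "auth": {"token": {"id": token},
                 "tenantName": tenants[0]["name"]}})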

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1241482

Title:
  Authenticate to generate a token without  tenant  infos

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  1. Authenticate to generate a token without the tenantName or tenantId, as below:
  {
      "auth": {
          "passwordCredentials": {
              "username": "admin",
              "password": "secrete"
          }
      }
  }

  2. The result is as below:
  {
      "access": {
          "token": {
              "issued_at": "2013-10-18T08:42:26.380926",
              "expires": "2013-10-19T08:42:26Z",
              "id": "93b6bfb248d94ac7b926dc0df5de87df"
          },
          "serviceCatalog": [],
          "user": {
              "username": "admin",
              "roles_links": [],
              "id": "380b66e1c6d4462d932db4a1d001c005",
              "roles": [],
              "name": "admin"
          },
          "metadata": {
              "is_admin": 0,
              "roles": []
          }
      }
  }

  3. When using the token id (e.g., to create a tenant), an Unauthorized
  exception is raised.

  4. I think it's a bug: authenticating to generate a token without the
  tenantName or tenantId should raise a Bad Request (ValidationError)
  exception, otherwise the token is unauthorized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1241482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 962299] Re: Prevent duplicated VPN ports

2013-11-25 Thread Alvaro Lopez
This was rather old, and I think that this should have been fixed by
https://review.openstack.org/#/c/34529/

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/962299

Title:
  Prevent duplicated VPN ports

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The vpn_public_port column in the networks table should be made
  unique.

  This will prevent from having the same port for distinct projects and
  VLANs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/962299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275693] [NEW] Wrong oauth1_extension used instead of oauth_extension in documentation

2014-02-03 Thread Alvaro Lopez
Public bug reported:

The documentation in doc/source/extensions/oauth1-configuration.rst
states that oauth1_extension should be used. However, in the paste
configuration file and in the tests, oauth_extension is used.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1275693

Title:
  Wrong oauth1_extension used instead of oauth_extension in
  documentation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The documentation in doc/source/extensions/oauth1-configuration.rst
  states that oauth1_extension should be used. However, in the paste
  configuration file and in the tests, oauth_extension is used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1275693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275695] [NEW] Enabling Federation extension causes "Unregistered dependency: federation_api"

2014-02-03 Thread Alvaro Lopez
Public bug reported:

Enabling the Federation extension, according to "doc/source/extensions
/federation-configuration.rst", makes Keystone crash with the following
exception:

Traceback (most recent call last):
  File "/usr/lib/cgi-bin/keystone/main", line 60, in <module>
    dependency.resolve_future_dependencies()
  File "/opt/stack/keystone/keystone/common/dependency.py", line 251, in resolve_future_dependencies
    raise UnresolvableDependencyException(dependency)
UnresolvableDependencyException: Unregistered dependency: federation_api

This happens because the dependency is never registered.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1275695

Title:
  Enabling Federation extension causes "Unregistered dependency:
  federation_api"

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Enabling the Federation extension, according to
  "doc/source/extensions/federation-configuration.rst", makes Keystone
  crash with the following exception:

  Traceback (most recent call last):
    File "/usr/lib/cgi-bin/keystone/main", line 60, in <module>
      dependency.resolve_future_dependencies()
    File "/opt/stack/keystone/keystone/common/dependency.py", line 251, in resolve_future_dependencies
      raise UnresolvableDependencyException(dependency)
  UnresolvableDependencyException: Unregistered dependency: federation_api

  This happens because the dependency is never registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1275695/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1201334] Re: libvirt disk_mappings returns bad root disk for Xen paravirtualization

2014-03-26 Thread Alvaro Lopez
I think this was already solved in
https://review.openstack.org/#/c/58999/

** Tags added: xen

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1201334

Title:
  libvirt disk_mappings returns bad root disk for Xen paravirtualization

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  
  I noticed that when launching Xen paravirtualized instances with the libvirt
  driver, root_device_name is returned as '/dev/xvda' but has to be '/dev/xvda1'.

  /usr/lib64/python2.6/site-packages/nova/virt/libvirt/driver.py

  lines 2190-2228
      if CONF.libvirt_type == "xen" and guest.os_type == vm_mode.XEN:
  >       guest.os_root = root_device_name
      else:
          guest.os_type = vm_mode.HVM

  I would like to propose a simple patch for this that replaces the
  hard-coded root device with the pygrub option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1201334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298242] [NEW] live_migration_uri should be dependant on virt_type

2014-03-27 Thread Alvaro Lopez
Public bug reported:

The "live_migration_uri" default should be dependent on the "virt_type"
flag (this is the same behavior as "connection_uri").

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

** Tags added: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298242

Title:
  live_migration_uri should be dependant on virt_type

Status in OpenStack Compute (Nova):
  New

Bug description:
  The "live_migration_uri" default should be dependent on the
  "virt_type" flag (this is the same behavior as "connection_uri").

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298462] [NEW] Glance WSGI image upload fails

2014-03-27 Thread Alvaro Lopez
Public bug reported:

Hi there.

When running latest Havana Glance behind an Apache httpd server with
mod_wsgi, an image upload fails with the following error:

2014-03-27 14:15:25.656 7292 DEBUG glance.api.v1.upload_utils [-] Client disconnected before sending all data to backend upload_data_to_store /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py:231

Narrowing this down, this is similar to this ML thread (except for the error shown):
http://lists.openstack.org/pipermail/openstack-dev/2013-December/022510.html

Applying the patch in that thread solves the error.

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1298462

Title:
  Glance WSGI image upload fails

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hi there.

  When running latest Havana Glance behind an Apache httpd server with
  mod_wsgi, an image upload fails with the following error:

  2014-03-27 14:15:25.656 7292 DEBUG glance.api.v1.upload_utils [-] Client disconnected before sending all data to backend upload_data_to_store /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py:231

  Narrowing this down, this is similar to this ML thread (except for the error shown):
  http://lists.openstack.org/pipermail/openstack-dev/2013-December/022510.html

  Applying the patch in that thread solves the error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1298462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326238] [NEW] libvirt: use qdisk instead of blktap for Xen

2014-06-04 Thread Alvaro Lopez
Public bug reported:

Using libvirt and Xen, nova is using "tap" (Xen 4.0) and "tap2" (Xen >
4.0) for the disk driver.

According to the Xen documentation and ML, the usage of qdisk is preferred
over blktap2 [1].

[1] http://lists.xen.org/archives/html/xen-devel/2013-08/msg02633.html

Moreover, libxenlight (xl, libxl) is the replacement for xend (xm) from Xen >=
4.2. According to the disk configuration documentation for xl [2], the device
driver "(...) should not be specified, in which case libxl will automatically
determine the most suitable backend."

[2] http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress


** Tags: libvirt xen

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326238

Title:
  libvirt: use qdisk instead of blktap for Xen

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Using libvirt and Xen, nova is using "tap" (Xen 4.0) and "tap2" (Xen >
  4.0) for the disk driver.

  According to the Xen documentation and ML, the usage of qdisk is preferred
  over blktap2 [1].

  [1] http://lists.xen.org/archives/html/xen-devel/2013-08/msg02633.html

  Moreover, libxenlight (xl, libxl) is the replacement for xend (xm) from Xen >=
  4.2. According to the disk configuration documentation for xl [2], the device
  driver "(...) should not be specified, in which case libxl will automatically
  determine the most suitable backend."

  [2] http://xenbits.xen.org/docs/unstable/misc/xl-disk-configuration.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500920] [NEW] SameHostFilter should fail if no instances on host

2015-09-29 Thread Alvaro Lopez
Public bug reported:

According to the docs, the SameHostFilter "schedules the instance on the
same host as another instance in a set of instances", so it should only
pass if the host is executing any of the instances passed as the
scheduler hint. However, the filter also passes if the host does not
have any instances.
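
A minimal sketch of the behaviour I would expect (illustrative, not the
actual filter code): only pass when the host actually runs at least one of
the instances given in the scheduler hint.

    # Illustrative sketch of the expected check: the host passes only if
    # it hosts at least one of the instances given in the scheduler hint.
    def host_passes(host_instance_uuids, same_host_hint_uuids):
        if not same_host_hint_uuids:
            return True                      # no hint: nothing to enforce
        overlap = set(host_instance_uuids) & set(same_host_hint_uuids)
        return bool(overlap)                 # empty host -> no overlap -> fail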

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress


** Tags: scheduler

** Tags added: scheduler

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500920

Title:
  SameHostFilter should fail if no instances on host

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  According to the docs, the SameHostFilter "schedules the instance on
  the same host as another instance in a set of instances", so it should
  only pass if the host is executing any of the instances passed as the
  scheduler hint. However, the filter also passes if the host does not
  have any instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368097] [NEW] UnicodeDecodeError using ldap backend

2014-09-11 Thread Alvaro Lopez
Public bug reported:

Using the LDAP backend, if any attribute contains accents the ldap2py
function in keystone/common/ldap/core.py fails with a
UnicodeDecodeError.

This function was introduced by commit
cbf805161b84f13f459a19bfd46220c4f298b264, which encodes and decodes
strings to and from UTF-8.
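
A minimal reproduction of the failure mode (generic Python behaviour, not
the exact keystone code path): decoding bytes that are not valid UTF-8,
e.g. an accented value stored as latin-1, raises UnicodeDecodeError:

    # Illustration: non-UTF-8 bytes (e.g. latin-1 encoded accents coming
    # back from LDAP) blow up when decoded as UTF-8.
    value = u'Álvaro'.encode('latin-1')   # b'\xc1lvaro'
    value.decode('utf-8')                 # raises UnicodeDecodeError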

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1368097

Title:
  UnicodeDecodeError using ldap backend

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Using the LDAP backend, if any attribute contains accents the ldap2py
  function in keystone/common/ldap/core.py fails with a
  UnicodeDecodeError.

  This function was introduced by commit
  cbf805161b84f13f459a19bfd46220c4f298b264, which encodes and decodes
  strings to and from UTF-8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1368097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298242] Re: live_migration_uri should be dependant on virt_type

2014-10-01 Thread Alvaro Lopez
Sorry, but I do not think this is Invalid.

The "live_migration_uri" value is dependent on the libvirt type being
used, and therefore, the default values should be dependent on that.

If I, as an operator, set libvirt_type to some value, I expect that I do not
have to tweak the default values for that connection type (for example
the connection URI, the migration URI, etc.). With the current situation
I get a broken deployment where I have to figure out that live
migration does not work, just because the default migration URI only
works with one libvirt connection type.

** Changed in: nova
   Status: Invalid => New

** Changed in: nova
     Assignee: Alvaro Lopez (aloga) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1298242

Title:
  live_migration_uri should be dependant on virt_type

Status in OpenStack Compute (Nova):
  New

Bug description:
  The "live_migration_uri" default should be dependent on the
  "virt_type" flag (this is the same behavior as "connection_uri").

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1298242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391101] Re: Illogical use of "aggregate_instance_extra_specs" prefix in nova AggregateInstanceExtraSpecsFilter

2014-11-12 Thread Alvaro Lopez
Hi Bartosz.

When you add the metadata to the aggregate you don't need to add the
prefix. The prefix is only set in the flavors, to specify its scope
(i.e. that it should be matched against an aggregate).

So, when you did:

nova flavor-key m1.nano set aggregate_instance_extra_specs:cpu_info:vendor=Intel

You should have done:

nova flavor-key m1.nano set cpu_info:vendor=Intel

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: Bartosz Fic (bartosz-fic) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391101

Title:
  Illogical use of "aggregate_instance_extra_specs"  prefix in nova
  AggregateInstanceExtraSpecsFilter

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  How to reproduce this bug

  1. Enable AggregateInstanceExtraSpecsFilter in /etc/nova/nova.conf by adding only 'AggregateInstanceExtraSpecsFilter' to the scheduler_default_filters variable
  2. Example: Specify compute hosts with the same vendor: Intel

  # create new host-aggregate
  $ nova aggregate-create fast-io nova

  # set metadata for new created host-aggregate
  $ nova aggregate-set-metadata  aggregate_instance_extra_specs:cpu_info:vendor=Intel

  # check aggregate details
  $ nova aggregate-details 

  http://paste.openstack.org/show/131393/

  # add a host to an aggregate
  $ nova aggregate-add-host  

  # list of all flavors
  $ nova flavor-list

  # set metadata for flavor m1.nano
  $ nova flavor-key m1.nano set aggregate_instance_extra_specs:cpu_info:vendor=Intel

  # show details of m1.nano flavor
  $ nova flavor-show m1.nano

  http://paste.openstack.org/show/131394/

  # try to launch instance - as a tip: m1.nano flavor id = 42 :)
  $ nova boot --flavor  --image  

  
  For flavor we have metadata: "aggregate_instance_extra_specs:cpu_info:vendor=Intel"

  and

  For aggregate we have metadata:
  "aggregate_instance_extra_specs:cpu_info:vendor=Intel"

  and as a result we get "No valid host was found. There are not
  enough hosts available."

  What cases are good for 'AggregateInstanceExtraSpecsFilter' ?

  a)

  flavor metadata: cpu_info:vendor=Intel
  aggregate metadata: aggregate_instance_extra_specs:cpu_info:vendor=Intel

  (I think it is about backwards compatibility with extra specs?)

  b)

  flavor metadata: aggregate_instance_extra_specs:aggregate_instance_extra_specs:cpu_info:vendor=Intel
  aggregate metadata: aggregate_instance_extra_specs:cpu_info:vendor=Intel

  What is the problem ?

  Please take a look at the AggregateInstanceExtraSpecsFilter code
  (https://github.com/openstack/nova/blob/master/nova/scheduler/filters/aggregate_instance_extra_specs.py).

  The filter is looking for that prefix in the extra specs of the flavor, not the aggregate!
  The filter matches metadata from the flavor against metadata from the host aggregate.
  In the flavor metadata the prefix (aggregate_instance_extra_specs) is removed (if it exists),
  but it isn't removed from the aggregate metadata.
  So unless the user specifies the flavor metadata in the form
  aggregate_instance_extra_specs:aggregate_instance_extra_specs:cpu_info:vendor
  (because the first prefix is stripped from the extra spec),
  the filter won't match any host.

  I'd like to discuss a possible solution to this issue.
  Or maybe it is a feature and I didn't understand how AggregateInstanceExtraSpecsFilter should work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1075710] Re: Keystone REMOTE_USER with no metadata causes 404 on auth

2013-02-19 Thread Alvaro Lopez
This was already fixed on https://review.openstack.org/#/c/15403/

** Changed in: keystone
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1075710

Title:
  Keystone REMOTE_USER with no metadata causes 404 on auth

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  With the recent introduction of REMOTE_USER auth support (i.e. remote
  authn) in Keystone (see https://review.openstack.org/#/c/14823),
  there's a new bug under the following conditions:

  * REMOTE_USER is set by an external authenticator
  * There is no 'metadata' (i.e. no metadata for the given user in the given 
tenant)

  When the above conditions are true, the following error is returned on
  a POST to /tokens -->

  2012-11-05 14:30:37 DEBUG [keystone.common.wsgi] RESPONSE BODY
  2012-11-05 14:30:37 DEBUG [keystone.common.wsgi] {"error": {"message": "An unhandled exception has occurred: Could not find metadata.", "code": 404, "title": "Not Found"}}
  2012-11-05 14:30:37 DEBUG [eventlet.wsgi.server] 172.17.0.8 - - [05/Nov/2012 14:30:37] "POST /v2.0/tokens HTTP/1.1" 404 282 0.103857

  The root cause is located around line 358 of service.py where the
  'else' branch to handle remote authn tries to use a tenant_ref and
  metadata_ref. In this scenario the call to
  self.identity_api.get_metadata(...) throws an
  exception.MetadataNotFound exception which is not handled in
  service.py.

  If you look in one of the identity drivers (say the sql driver), you
  can see in the typical authenticate() flow the driver handles the
  exception and defaults the meta. From the sql identity driver:

  if tenant_id is not None:
  if tenant_id not in self.get_tenants_for_user(user_id):
  raise AssertionError('Invalid tenant')

  try:
  tenant_ref = self.get_tenant(tenant_id)
  metadata_ref = self.get_metadata(user_id, tenant_id)
  except exception.TenantNotFound:
  tenant_ref = None
  metadata_ref = {}
  except exception.MetadataNotFound:
  metadata_ref = {}

  return (filter_user(user_ref), tenant_ref, metadata_ref)

  
  That said, one fix for this bug is to update service.py to wrap the root 
error cause in a try/except. 

  Below is a diff including a test case that reproduces the error and
  the fix:

  
  diff --git a/keystone/service.py b/keystone/service.py
  index b6443a7..1e55348 100644
  --- a/keystone/service.py
  +++ b/keystone/service.py
  @@ -355,14 +355,19 @@ class TokenController(wsgi.Application):
   self.identity_api,
   user_id)
   if tenant_id:
  -if not tenant_ref:
  -tenant_ref = self.identity_api.get_tenant(
  +try:
  +if not tenant_ref:
  +tenant_ref = self.identity_api.get_tenant(
  +self.identity_api,
  +tenant_id)
  +metadata_ref = self.identity_api.get_metadata(
   self.identity_api,
  +user_id,
   tenant_id)
  -metadata_ref = self.identity_api.get_metadata(
  -self.identity_api,
  -user_id,
  -tenant_id)
  +except (exception.TenantNotFound,
  +exception.MetadataNotFound):
  +pass
  +
   auth_info = (user_ref, tenant_ref, metadata_ref)
   
   # If the user is disabled don't allow them to authenticate
  diff --git a/tests/test_service.py b/tests/test_service.py
  index 775b2ca..d9d3c17 100644
  --- a/tests/test_service.py
  +++ b/tests/test_service.py
  @@ -129,3 +129,16 @@ class RemoteUserTest(test.TestCase):
   self.api.authenticate,
   {'REMOTE_USER': 'FOOZBALL'},
   self._build_user_auth('FOO', 'nosir', 'BAR'))
  +
  +def test_remote_auth_no_metadata(self):
  +for meta in default_fixtures.METADATA:
  +self.identity_api.delete_metadata(
  +meta['user_id'],
  +meta['tenant_id'])
  +local_token = self.api.authenticate(
  +{},
  +self._build_user_auth('FOO', 'foo2', 'BAR'))
  +remote_token = self.api.authenticate(
  +{'REMOTE_USER': 'FOO'},
  +self._build_user_auth('FOO', 'nosir', 'BAR'))
  +self.assertEqualTokens(local_token, remote_token)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1075710/+subscriptions

[Yahoo-eng-team] [Bug 914717] Re: nova should point out which quota item exceed when booting failed

2013-03-15 Thread Alvaro Lopez
This is already solved:

ERROR: Quota exceeded for instances: Requested 1, but already used 1 of
1 instances (HTTP 413)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/914717

Title:
  nova should point out which quota item exceed when booting failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When starting an instance, if the quota is exceeded we cannot
  successfully start it; nova should point out which item (cores,
  memory, CPU, etc.) was exceeded.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/914717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1094226] Re: Failure when creating more than n instances. networking fail.

2013-03-18 Thread Alvaro Lopez
*** This bug is a duplicate of bug 1009026 ***
https://bugs.launchpad.net/bugs/1009026

** This bug has been marked a duplicate of bug 1009026
   When launching 50 vms some of them fail to start vm_state=error 
task_state=networking.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1094226

Title:
  Failure when creating more than n instances. networking fail.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When creating lots of instances simultaneously there is a point where
  it begins to fall apart.

  It seems to be related to the speed of the CPU. I tested this on a
  much slower E5430 and I could handle about 20 instances before causing
  errors. My machines with E5-2670 can handle much more.

  Setting up multi_host networking alleviates the problem.

  
  2012-12-28 11:33:53 ERROR nova.compute.manager 
[req-fe42ec61-0892-46c1-83b2-d2c147291127 784701dc5dfe41a7811f7261d8345a9a 
a00f8763332d458fb69b88b555cb0ef9] [instance: 
5da184a6-df08-4da8-8a26-9bc0e44a9410] Instance failed network setup

  2012-12-28 11:33:46 ERROR nova.compute.manager [req-
  fe42ec61-0892-46c1-83b2-d2c147291127 784701dc5dfe41a7811f7261d8345a9a
  a00f8763332d458fb69b88b555cb0ef9] [instance: d376b834-d735-4ab9-861b-
  3c0743e116fe] Instance failed network setup

  
+--+--+++
  | ID   | Name | Status | Networks 
  |
  
+--+--+++
  | 05dc88ad-6274-4d42-b1aa-2d4e16d43800 | doom-tractor | ACTIVE | 
example-net=172.16.169.131 |
  | 0d526ecb-436d-424e-97bb-0891d57adf91 | doom-tractor | ACTIVE | 
example-net=172.16.169.100 |
  | 0dde2f3e-2859-482b-8951-cc618d768075 | doom-tractor | ERROR  |  
  |
  | 195abb5d-01ec-47c1-8721-2766b3485507 | doom-tractor | ERROR  | 
example-net=172.16.169.50  |
  | 2034e794-5eb8-447c-8100-cdfb2ffc6c3b | doom-tractor | ERROR  |  
  |
  | 248ef180-3ede-4e7d-9715-a99e9685182e | doom-tractor | ERROR  |  
  |
  | 2e2fe5cd-e0c1-4618-b55c-6eb5be93c120 | doom-tractor | ERROR  | 
example-net=172.16.169.53  |
  | 332dab10-631c-4765-b5b2-432765b09f4c | doom-tractor | ERROR  |  
  |
  | 33742704-dfc9-4a5e-bf3b-9dbf18cd6c89 | doom-tractor | ERROR  | 
example-net=172.16.169.110 |
  | 35a19396-8360-4b1f-9c86-ed9684ef0365 | doom-tractor | ERROR  | 
example-net=172.16.169.128 |
  | 39dea5b6-a04a-467a-8f05-4e61057cb162 | doom-tractor | ERROR  | 
example-net=172.16.169.139 |
  | 418bd648-509c-42cf-9e0b-adfec6b25e08 | doom-tractor | ERROR  |  
  |
  | 4204d869-2c13-45cf-bb0e-16360daefcfe | doom-tractor | ACTIVE | 
example-net=172.16.169.141 |
  | 44f35980-ec51-4734-a3f0-c7547a08b76f | doom-tractor | ACTIVE | 
example-net=172.16.169.102 |
  | 48ae3218-2a17-4676-ac78-dec267b01691 | doom-tractor | ERROR  |  
  |
  | 4ed6e0b3-08c3-40ba-bc89-a9dae82b1fa6 | doom-tractor | ACTIVE | 
example-net=172.16.169.133 |
  | 51e8471f-df25-4218-b8c3-572b354c | doom-tractor | ERROR  | 
example-net=172.16.169.49  |
  | 58b248d6-867d-4210-a342-75bddb92a1ef | doom-tractor | ERROR  | 
example-net=172.16.169.84  |
  | 5da184a6-df08-4da8-8a26-9bc0e44a9410 | doom-tractor | ERROR  |  
  |
  | 60523702-6cd2-4724-8e16-26536f173166 | doom-tractor | ACTIVE | 
example-net=172.16.169.51  |
  | 633c5d24-5cd9-4ca4-8f35-cca6958ee662 | doom-tractor | ERROR  | 
example-net=172.16.169.54  |
  | 67fbcfc7-5eb6-4182-9fa0-d32814a0eac0 | doom-tractor | ERROR  |  
  |
  | 692324e7-4c85-4ace-a701-2c504918c511 | doom-tractor | ACTIVE | 
example-net=172.16.169.140 |
  | 6af7f144-e03a-44dc-a041-17bbf7f136ed | doom-tractor | ERROR  |  
  |
  | 6b8cacb8-3e91-41c3-8c33-865be90a5e69 | doom-tractor | ACTIVE | 
example-net=172.16.169.107 |
  | 6dd85744-00ff-46b7-a8d5-28ec74db7c39 | doom-tractor | ACTIVE | 
example-net=172.16.169.108 |
  | 733377a3-871c-4350-b3d6-477c18efd47b | doom-tractor | ACTIVE | 
example-net=172.16.169.99  |
  | 73ad82d6-3c04-4975-9abb-c821c5967ed6 | doom-tractor | ERROR  |  
  |
  | 7435d347-ad9e-49a7-8ffd-0a403e7d8927 | doom-tractor | ERROR  |  
  |
  | 7796ef9c-5c2f-47fd-9352-1d83d1d6443b | doom-tractor | ACTIVE | 
example-net=172.16.169.78  |
  | 77a46a01-c09c-4435-af16-7a62e634a32f | doom-tractor | ERROR  | 
example-net=172.16.169.111 |
  | 78947a6f-00c0-4246-a508-c7693d97161f | doom-tractor | ERROR  | 
example-net=172.16.169.103 |
  | 78b6d9f2-00db-466e-a920-0c185ea72f9a | doom-tractor | ERROR  | 
example-net=172.16.169.44  |
  | 7b8f1f5e-7c3f-4634-b1e7-5702bead6619 | doom-tractor | ERROR  |  
  |
  | 7

[Yahoo-eng-team] [Bug 1202627] Re: non-reclaimed instances still consume RAM in libvirt+xen

2013-07-18 Thread Alvaro Lopez
Closing as invalid, since I've investigated a bit more and it is more a
feature than a bug.

I thought the behaviour was more oriented towards the resource provider
and not the user. I thought that the instances were not really deleted
(until the interval finishes) so that an operator could debug/audit any
deleted instances, therefore what I was expecting is that if
reclaim_instance_interval was set, the instances should not consume any
resources (apart from disk) and could not be restored.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1202627

Title:
  non-reclaimed instances still consume RAM in libvirt+xen

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If using libvirt+xen and reclaim_instance_interval is set, the deleted
  instances still count towards the RAM usage of a compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1202627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232401] Re: Nova-compute version mismatch in Compute Node and controller node

2013-09-30 Thread Alvaro Lopez
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1232401

Title:
   Nova-compute version mismatch in Compute Node and controller node

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hi friends,

  I have set up the controller and the compute node on two different
  machines. I got an operational error that the service table does not
  contain the column 'availability_zone' on the compute node. After
  struggling with various comments, I found that the nova-compute
  versions at the compute node (2012.1.4-dev
  (2012.1.4-LOCALBRANCH:LOCALREVISION)) and the controller node (2013.2)
  may differ. So I need to upgrade nova at the compute node. How do I do
  that? Please, can anyone help me?

  
  Thanks in advance..

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1232401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456955] Re: tox -epep8 fails due to tox picking python 3.x

2016-01-28 Thread Alvaro Lopez
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: nova (Ubuntu)

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456955

Title:
  tox -epep8 fails due to tox picking python 3.x

Status in Designate:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  karlsone@workstation:~/projects/dnsaas/designate$ tox -epep8
  pep8 create: /home/karlsone/projects/dnsaas/designate/.tox/pep8
  pep8 installdeps: 
-r/home/karlsone/projects/dnsaas/designate/requirements.txt, 
-r/home/karlsone/projects/dnsaas/designate/test-requirements.txt
  pep8 develop-inst: /home/karlsone/projects/dnsaas/designate
  pep8 runtests: PYTHONHASHSEED='0'
  pep8 runtests: commands[0] | flake8
  Traceback (most recent call last):
File ".tox/pep8/bin/flake8", line 9, in 
  load_entry_point('flake8==2.1.0', 'console_scripts', 'flake8')()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/main.py",
 line 24, in main
  flake8_style = get_style_guide(parse_argv=True, 
config_file=DEFAULT_CONFIG)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 79, in get_style_guide
  kwargs['parser'], options_hooks = get_parser()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 53, in get_parser
  parser_hook(parser)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/hacking/core.py",
 line 146, in add_options
  factory = pbr.util.resolve_name(local_check_fact)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/pbr/util.py",
 line 171, in resolve_name
  ret = __import__('.'.join(module_name), fromlist=[attr_name])
File "/home/karlsone/projects/dnsaas/designate/designate/__init__.py", line 
16, in 
  import eventlet
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/__init__.py",
 line 10, in 
  from eventlet import convenience
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/convenience.py",
 line 6, in 
  from eventlet.green import socket
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/green/socket.py",
 line 17, in 
  from eventlet.support import greendns
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/support/greendns.py",
 line 54, in 
  socket=_socket_nodns)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 119, in import_patched
  *additional_modules + tuple(kw_additional_modules.items()))
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 93, in inject
  module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/resolver.py",
 line 32, in 
  import dns.flags
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/flags.py",
 line 51, in 
  _by_value = dict([(y, x) for x, y in _by_text.iteritems()])
  AttributeError: 'dict' object has no attribute 'iteritems'
  ERROR: InvocationError: '/home/karlsone/projects/dnsaas/designate/.tox/pep8/bin/flake8'

  _______________________________________ summary ________________________________________
  ERROR:   pep8: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1456955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854041] [NEW] Keystone should propagate redirect exceptions from auth plugins

2019-11-26 Thread Alvaro Lopez
Public bug reported:

When a developer is implementing an authentication plugin [1] they can
only return None and set up the relevant information in the auth context,
or raise an Unauthorized exception. However, in some cases (like an
OpenID Connect plugin) it is necessary to perform a redirect to the
provider to complete the flow. IIRC this was possible in the past
(before moving to Flask) by raising an exception with the proper HTTP
code set, but with the current implementation this is impossible.

[1]: https://docs.openstack.org/keystone/latest/contributor/auth-plugins.html
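
As an illustration of what I mean (the class, method signature and URL below
are simplified assumptions, not the exact keystone plugin interface), an
OpenID Connect style plugin would want to be able to do something like:

    # Illustrative sketch only: an auth plugin that needs to send the
    # user to the identity provider would like to surface a redirect
    # (HTTP 302 plus a Location) and have keystone propagate it instead
    # of turning it into a 401/500.
    class RedirectRequired(Exception):
        code = 302

        def __init__(self, location):
            super(RedirectRequired, self).__init__(location)
            self.location = location

    def authenticate(auth_payload):
        idp_url = "https://idp.example.org/authorize"   # placeholder
        if 'assertion' not in auth_payload:
            # No assertion yet: the flow must continue at the provider.
            raise RedirectRequired(idp_url)
        # ... otherwise validate the assertion and fill the auth context.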

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1854041

Title:
  Keystone should propagate redirect exceptions from auth plugins

Status in OpenStack Identity (keystone):
  New

Bug description:
  When a developer is implementing an authentication plugin [1] they can
  only return None and set up the relevant information in the auth
  context, or raise an Unauthorized exception. However, in some cases
  (like an OpenID Connect plugin) it is necessary to perform a redirect to
  the provider to complete the flow. IIRC this was possible in the past
  (before moving to Flask) by raising an exception with the proper HTTP
  code set, but with the current implementation this is impossible.

  [1]: https://docs.openstack.org/keystone/latest/contributor/auth-plugins.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1854041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691190] [NEW] Xen should use "console=hvc0" on boot

2017-05-16 Thread Alvaro Lopez
Public bug reported:

Xen's console is attached to hvc0 in PV guests [1]. If an image does not
set it up in its grub configuration, the instance may not boot properly.

Nova could set the os_cmdline for that guest as it does for other
hypervisors (for example, LXC), adding the console setup there, so that
libvirt configures the domain properly.

[1] https://wiki.xen.org/wiki/Xen_FAQ_Console
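
A rough sketch of the kind of change suggested above, at the point where
the libvirt guest configuration is built; the os_cmdline attribute mirrors
what the driver already sets for LXC guests, but the helper name, hook and
condition below are assumptions, not the actual nova code:

    # Sketch only: for a Xen PV domain, make sure console=hvc0 ends up on
    # the kernel command line so the PV console is usable even when the
    # image's grub configuration does not set it up.
    def _ensure_pv_console(guest, virt_type):
        # 'guest' is a LibvirtConfigGuest-like object; names illustrative.
        if virt_type == 'xen':
            cmdline = guest.os_cmdline or ''
            if 'console=hvc0' not in cmdline:
                guest.os_cmdline = (cmdline + ' console=hvc0').strip()

In libvirt domain XML terms this amounts to adding console=hvc0 to the
guest's <cmdline> element.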

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: New


** Tags: libvirt xen

** Description changed:

  Xen's console is attached to hvc0 in PV guests [1]. If an image does not set
- it up on its grub configuration, the instance won't boot.
+ it up on its grub configuration, the instance may not boot properly.
  
  Nova could set the os_cmdline for that guest as it is doing for other
- hypervisors, adding the console setup there, so that libvirt configures
- the domain properly.
+ hypervisors (for example for LXC), adding the console setup there, so
+ that libvirt configures the domain properly.
  
  [1] https://wiki.xen.org/wiki/Xen_FAQ_Console

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691190

Title:
  Xen should use "console=hvc0" on boot

Status in OpenStack Compute (nova):
  New

Bug description:
  Xen's console is attached to hvc0 in PV guests [1]. If an image does
  not set it up in its grub configuration, the instance may not boot
  properly.

  Nova could set the os_cmdline for that guest as it does for other
  hypervisors (for example, LXC), adding the console setup there, so that
  libvirt configures the domain properly.

  [1] https://wiki.xen.org/wiki/Xen_FAQ_Console

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625101] [NEW] Some cinder failures return "500" errors to users

2016-09-19 Thread Alvaro Lopez
Public bug reported:

When a user tries to delete a volume that is attached to a VM through
nova, the user gets an "Unexpected Exception" (HTTP 500) error because
cinderclient fails with an InvalidInput exception. This is not
particularly helpful for troubleshooting and the message is misleading
for users.
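
For illustration, the kind of translation being asked for looks roughly
like the sketch below. The exception name is taken from the description
above and webob is what nova's API layer uses, but exactly where this
mapping should live is left open:

    # Sketch only: turn a known client error from the volume service into
    # a 400 for the API user instead of letting it surface as a 500.
    import webob.exc

    from nova import exception


    def delete_volume(volume_api, context, volume_id):
        try:
            volume_api.delete(context, volume_id)
        except exception.InvalidInput as exc:
            # The request was understood but rejected by cinder (e.g. the
            # volume is still attached), so report it as a client error
            # with the original reason instead of "Unexpected Exception".
            raise webob.exc.HTTPBadRequest(explanation=str(exc))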

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625101

Title:
  Some cinder failures return "500" errors to users

Status in OpenStack Compute (nova):
  New

Bug description:
  When a user tries to delete a volume that is attached to a VM through
  nova, the user gets an "Unexpected Exception" (HTTP 500) error because
  cinderclient fails with an InvalidInput exception. This is not
  particularly helpful for troubleshooting and the message is misleading
  for users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587356] [NEW] nova raises "InstanceNotFound" when running the periodic reclaim

2016-05-31 Thread Alvaro Lopez
Public bug reported:


Nova sometimes raises an exception when running the periodic reclaim of
deleted instances, see below. The error is sporadic and does not happen
every time, so I cannot pin down why it occurs. The periodic reclaim is
trying to delete the instance, but the DB does not return the instance
because it is already marked as deleted.

ERROR nova.compute.manager Traceback (most recent call last):
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 447, in _object_dispatch
ERROR nova.compute.manager     return getattr(target, method)(*args, **kwargs)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 213, in wrapper
ERROR nova.compute.manager     return fn(self, *args, **kwargs)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 673, in save
ERROR nova.compute.manager     columns_to_join=_expected_cols(expected_attrs))
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 765, in instance_update_and_get_original
ERROR nova.compute.manager     expected=expected)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 216, in wrapper
ERROR nova.compute.manager     return f(*args, **kwargs)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
ERROR nova.compute.manager     ectxt.value = e.inner_exc
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in __exit__
ERROR nova.compute.manager     six.reraise(self.type_, self.value, self.tb)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
ERROR nova.compute.manager     return f(*args, **kwargs)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2478, in instance_update_and_get_original
ERROR nova.compute.manager     session=session)
ERROR nova.compute.manager
ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 1733, in _instance_get_by_uuid
ERROR nova.compute.manager     raise exception.InstanceNotFound(instance_id=uuid)
ERROR nova.compute.manager
ERROR nova.compute.manager InstanceNotFound: Instance aceb4514-915e-4687-ac99-20f6b7d5ed6d could not be found.
ERROR nova.compute.manager
ERROR nova.compute.manager
WARNING nova.compute.manager [req-e62c7e7c-0799-4f87-8f06-a46cf5663733 - - - - -] [instance: aceb4514-915e-4687-ac99-20f6b7d5ed6d] Periodic reclaim failed to delete instance: Instance aceb4514-915e-4687-ac99-20f6b7d5ed6d could not be found.

As noted, this error is sporadic, but from time to time we get a bunch of
these failures and have to delete those instances manually.
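
For what it's worth, the traceback suggests the instance lookup in the
save path uses the default read_deleted='no' context, so an instance that
is already marked deleted cannot be found. Below is a hedged sketch of how
the reclaim could tolerate that; nova.utils.temporary_mutation is an
existing helper, but the function and where it would be called from are
assumptions, not the actual ComputeManager code:

    from nova import exception
    from nova import utils


    def reclaim_soft_deleted_instance(context, instance, delete_fn):
        # Let the DB layer see soft-deleted rows during the reclaim,
        # otherwise _instance_get_by_uuid() raises InstanceNotFound for an
        # instance that is already marked as deleted.
        try:
            with utils.temporary_mutation(context, read_deleted='yes'):
                delete_fn(context, instance)
        except exception.InstanceNotFound:
            # Raced with another delete; nothing left to reclaim.
            pass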

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587356

Title:
  nova raises "InstanceNotFound" when running the periodic reclaim

Status in OpenStack Compute (nova):
  New

Bug description:
  
  Nova sometimes raises an exception when running the periodic reclaim of
  deleted instances, see below. The error is sporadic and does not happen
  every time, so I cannot pin down why it occurs. The periodic reclaim is
  trying to delete the instance, but the DB does not return the instance
  because it is already marked as deleted.

  ERROR nova.compute.manager Traceback (most recent call last):
  ERROR nova.compute.manager
  ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 447, in _object_dispatch
  ERROR nova.compute.manager     return getattr(target, method)(*args, **kwargs)
  ERROR nova.compute.manager
  ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 213, in wrapper
  ERROR nova.compute.manager     return fn(self, *args, **kwargs)
  ERROR nova.compute.manager
  ERROR nova.compute.manager   File "/usr/lib/python2.7/dist-packages/nova/objects/instance.py", line 673, in save
  ERROR nova.compute.manager

[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-03-22 Thread Alvaro Lopez
** Changed in: ooi
   Importance: Undecided => Low

** Changed in: ooi
   Status: Fix Committed => Fix Released

** Changed in: ooi
Milestone: 0.2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Committed
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in dox:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  In Progress
Status in OpenStack SDK:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  In Progress
Status in tempest:
  Fix Released
Status in Trove:
  Fix Released
Status in tuskar:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  clearer messages in case of failure.
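
  A before/after sketch of what the change looks like; the test name and
  value are purely illustrative:

      import unittest


      class ExampleTest(unittest.TestCase):
          def test_lookup_returns_none(self):
              result = None  # stand-in for the value under test

              # Old style -- on failure the report reads like:
              #   AssertionError: None != 'foo'
              self.assertEqual(None, result)

              # Preferred style -- on failure the report reads like:
              #   AssertionError: 'foo' is not None
              self.assertIsNone(result)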

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp