[Yahoo-eng-team] [Bug 1792503] Re: allocation candidates "?member_of=" doesn't work with nested providers

2018-12-04 Thread Tony Breeds
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Tetsuro Nakamura (tetsuro0907)

** Changed in: nova/rocky
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1792503

Title:
  allocation candidates "?member_of=" doesn't work with nested providers

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  "GET /allocation_candidates" now supports "member_of" parameter.
  With nested providers present, this should work with the following 
constraints.

  -
  (a)  With "member_of" qparam, aggregates on the root should span on the whole 
tree

  If a root provider is in the aggregate, which has been specified by 
"member_of" qparam,
  the resource providers under that root can be in allocation candidates even 
the root is absent.

  (b) Without "member_of" qparam, sharing resource provider should be
  shared with the whole tree

  If a sharing provider is in the same aggregate with one resource provider 
(rpA),
  and "member_of" hasn't been specified in qparam by user, the sharing provider 
can be in
  allocation candidates with any of the resource providers in the same tree 
with rpA.

  (c) With "member_of" qparam, the range of the share of sharing
  resource providers should shrink to the resource providers "under the
  specified aggregates" in a tree.

  Here, whether the rp is "under the specified aggregates" is determined with 
the constraints of (a). Namely, not only rps that belongs to the aggregates 
directly are "under the aggregates",
  but olso rps whose root is under the aggregates are also "under the 
aggregates".
  -

  As of the Stein PTG (2018 Sep. 13th), this constraint is broken in that
  when placement picks up allocation candidates, the aggregates of nested
  providers are assumed to be the same as those of their root providers.
  This means it ignores the aggregates of the nested provider itself,
  which can result in missing allocation candidates when an aggregate that
  is on a nested provider but not on the root has been specified in the
  `member_of` query parameter.

  This bug is well described in a test case which is submitted shortly.
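
  For illustration, a minimal request exercising the parameter might look
  like the sketch below (the endpoint, token, aggregate UUIDs and
  microversion are placeholders/assumptions, not values from this bug):

      import requests

      PLACEMENT = 'http://controller/placement'
      HEADERS = {
          'X-Auth-Token': 'ADMIN_TOKEN',
          # member_of on this route needs a recent microversion.
          'OpenStack-API-Version': 'placement 1.21',
      }

      resp = requests.get(
          PLACEMENT + '/allocation_candidates',
          params={'resources': 'VCPU:1,MEMORY_MB:512',
                  'member_of': 'in:AGG_UUID_1,AGG_UUID_2'},
          headers=HEADERS,
      )
      # Per constraint (a), providers nested under a root that is in the
      # aggregate should show up here even though they are not aggregate
      # members themselves.
      print(sorted(resp.json()['provider_summaries']))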

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1792503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1776596] Re: [QUEENS] Promotion Jobs failing at overcloud deployment with AttributeError: 'IronicNodeState' object has no attribute 'failed_builds'

2018-06-13 Thread Tony Breeds
It looks like when we backported https://review.openstack.org/#/c/573248
to queens (and pike) we missed the fact that the Ironic Host Manager is
still in queens and needs an update that wasn't needed on master because
we removed it in https://review.openstack.org/#/c/565805/1
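
A rough sketch of the kind of update the backport would need (class and
attribute names follow the error message, not the actual stable/queens
patch):

    from nova.scheduler.host_manager import HostState


    class IronicNodeState(HostState):
        def __init__(self, *args, **kwargs):
            super(IronicNodeState, self).__init__(*args, **kwargs)
            # Initialize the counter the scheduler now reads, so the
            # Ironic-specific node state doesn't raise AttributeError.
            self.failed_builds = 0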

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1776596

Title:
  [QUEENS] Promotion Jobs failing at overcloud deployment with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  New

Bug description:
  Queens overcloud deployment in all ovb promotion jobs is failing with
  AttributeError: 'IronicNodeState' object has no attribute
  'failed_builds'.

  Logs:-
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload/556a09f/undercloud/var/log/nova/nova-scheduler.log.txt.gz#_2018-06-13_01_08_25_689
  
https://logs.rdoproject.org/openstack-periodic/periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset020-queens/3909a7f/undercloud/home/jenkins/failed_deployment_list.log.txt.gz

  This is happening with a cherry-picked patch in nova:-
  https://review.openstack.org/#/c/573239/

  In master it's not seen probably because of:-
  https://review.openstack.org/#/c/565805/ (Remove IronicHostManager and
  baremetal scheduling options)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1776596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1751349] Re: Keystone auth parameters cannot be configured in [keystone] section

2018-02-28 Thread Tony Breeds
** Changed in: nova/pike
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751349

Title:
  Keystone auth parameters cannot be configured in [keystone] section

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) pike series:
  Invalid

Bug description:
  I am seeing nova-api attempting to use the keystone public endpoint
  when /v2.1/os-quota-sets is called on my Pike deployment. This is not
  valid in my environment; the API must use the internal endpoint to
  reach keystone. When the public endpoint is used, the connection sits
  in SYN_SENT state in netstat until it times out after a minute or two.

  Hacking the endpoint_filter at
  
https://github.com/openstack/nova/blob/d536bec9fc098c9db8d46f39aab30feb0783e428/nova/api/openstack/identity.py#L43-L46
  to include interface=internal fixes the issue.
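
  The hack amounts to one extra key in the endpoint_filter; sketched below
  against a keystoneauth1 session (credential values are illustrative, not
  the exact nova code):

      from keystoneauth1 import loading, session

      auth = loading.get_plugin_loader('password').load_from_options(
          auth_url='http://keystone.example.com/v3',
          username='nova', password='secret', project_name='service',
          user_domain_id='default', project_domain_id='default')
      sess = session.Session(auth=auth)

      resp = sess.get(
          '/projects/%s' % 'PROJECT_ID',
          endpoint_filter={'service_type': 'identity',
                           'version': (3, 0),
                           'interface': 'internal'},  # the added key
          raise_exc=False)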

  Unless I am mistaken this issue still exists in master:
  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/api/openstack/identity.py#L33-L35

  Something similar to the [placement] section should be implemented
  allowing os_interface to be configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1751349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737201] Re: TypeError when sending notification during attach_interface

2018-02-28 Thread Tony Breeds
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737201

Title:
  TypeError when sending notification during attach_interface

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  http://logs.openstack.org/50/524750/1/check/legacy-tempest-dsvm-
  neutron-
  full/eb8d805/logs/screen-n-api.txt.gz?level=TRACE#_Dec_04_13_34_20_635874

  Dec 04 13:34:20.635874 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: ERROR nova.api.openstack.extensions [None 
req-2d1b063f-1324-4498-af68-ce48c6d8e5a3 
tempest-AttachInterfacesTestJSON-149718191 
tempest-AttachInterfacesTestJSON-149718191] Unexpected exception in API method: 
TypeError: 'NoneType' object has no attribute '__getitem__'
  Dec 04 13:34:20.636066 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: Traceback (most recent call last):
  Dec 04 13:34:20.636202 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
  Dec 04 13:34:20.636336 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: res = self.dispatcher.dispatch(message)
  Dec 04 13:34:20.636474 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
220, in dispatch
  Dec 04 13:34:20.636614 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return self._do_dispatch(endpoint, method, 
ctxt, args)
  Dec 04 13:34:20.636745 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
190, in _do_dispatch
  Dec 04 13:34:20.636892 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: result = func(ctxt, **new_args)
  Dec 04 13:34:20.637049 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 76, in wrapped
  Dec 04 13:34:20.637187 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: function_name, call_dict, binary)
  Dec 04 13:34:20.637317 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Dec 04 13:34:20.637442 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: self.force_reraise()
  Dec 04 13:34:20.637607 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Dec 04 13:34:20.637761 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: six.reraise(self.type_, self.value, self.tb)
  Dec 04 13:34:20.637895 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/exception_wrapper.py", line 67, in wrapped
  Dec 04 13:34:20.638044 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return f(self, context, *args, **kw)
  Dec 04 13:34:20.638183 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/compute/utils.py", line 930, in decorated_function
  Dec 04 13:34:20.638306 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: return function(self, context, *args, 
**kwargs)
  Dec 04 13:34:20.638433 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 215, in decorated_function
  Dec 04 13:34:20.638566 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: kwargs['instance'], e, sys.exc_info())
  Dec 04 13:34:20.638696 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  Dec 04 13:34:20.638820 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: self.force_reraise()
  Dec 04 13:34:20.638953 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  Dec 04 13:34:20.639076 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]: six.reraise(self.type_, self.value, self.tb)
  Dec 04 13:34:20.639200 ubuntu-xenial-inap-mtl01-0001196572 
devstack@n-api.service[6808]:   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 203, in decorated_function
  Dec 04 13:34:20.639325 ubuntu-xenial-inap-mtl01-0001196572 

[Yahoo-eng-team] [Bug 1702454] Re: Transforming the RequestSpec object into legacy dicts doesn't support the requested_destination field

2017-08-15 Thread Tony Breeds
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1702454

Title:
  Transforming the RequestSpec object into legacy dicts doesn't support
  the requested_destination field

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  We added a new field in the RequestSpec object called
  'requested_destination' and we began using it for evacuations by
  https://review.openstack.org/#/c/315572/ (Newton)

  That object was transformed into legacy dictionaries (called
  "filter_properties" and "request_spec") before being rehydrated for
  the rebuild_instance() method in the conductor service. That said,
  when transforming, we were forgetting about the
  'requested_destination' field in the object, so that when we were
  calling the scheduler, we were never using that field.

  That bug was fixed implicitly by
  https://review.openstack.org/#/c/469037/ which is now merged in
  master, but the issue is still there in stable branches, and if you
  need to use the legacy methods, you'll not have it.

  As a consequence, the feature to pass a destination for evacuation is
  not working in Newton and Ocata. Fortunately, given we didn't
  transform the object into dicts before calling the scheduler for
  live-migrations, it does work for that action.

  A proper resolution would be to make sure that we pass the
  requested_destination field into 'filter_properties' so that when
  transforming back into an object, we set the field again.
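
  A hedged sketch of that resolution (standalone helpers for illustration;
  the real code lives on nova.objects.RequestSpec):

      def to_legacy_filter_properties_dict(spec, filter_properties):
          # Carry the field across the legacy-dict boundary...
          if (spec.obj_attr_is_set('requested_destination')
                  and spec.requested_destination):
              filter_properties['requested_destination'] = (
                  spec.requested_destination)
          return filter_properties


      def from_legacy_filter_properties_dict(spec, filter_properties):
          # ...and restore it when rehydrating the object.
          dest = filter_properties.get('requested_destination')
          if dest:
              spec.requested_destination = dest
          return spec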

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1702454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645263] Re: Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script fails with "No module named 'memcache' "

2016-11-28 Thread Tony Breeds
python-memcache is the canonical version.  I don't believe that this is
a devstack issue

** Changed in: openstack-requirements
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645263

Title:
  Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script
  fails with "No module named 'memcache' "

Status in devstack:
  Incomplete
Status in OpenStack Identity (keystone):
  New
Status in OpenStack Global Requirements:
  Opinion

Bug description:
  Unable to run stack.sh on fresh new Ubuntu Xenial 16.04 LTS, script
  fails with "No module named 'memcache' "

  Traceback:

  +lib/keystone:bootstrap_keystone:630   /usr/local/bin/keystone-manage 
bootstrap --bootstrap-username admin --bootstrap-password ubuntu 
--bootstrap-project-name admin --bootstrap-role-name admin 
--bootstrap-service-name keystone --bootstrap-region-id RegionOne 
--bootstrap-admin-url http://192.168.0.115/identity_admin 
--bootstrap-public-url http://192.168.0.115/identity --bootstrap-internal-url 
http://192.168.0.115/identity
  2016-11-28 11:51:39.723 15663 CRITICAL keystone [-] ImportError: No module 
named 'memcache'
  2016-11-28 11:51:39.723 15663 TRACE keystone Traceback (most recent call 
last):
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/bin/keystone-manage", line 10, in 
  2016-11-28 11:51:39.723 15663 TRACE keystone sys.exit(main())
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/cmd/manage.py", line 45, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/cmd/cli.py", line 1269, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone CONF.command.cmd_class.main()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/cmd/cli.py", line 365, in main
  2016-11-28 11:51:39.723 15663 TRACE keystone klass = cls()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/cmd/cli.py", line 66, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone self.load_backends()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/cmd/cli.py", line 129, in load_backends
  2016-11-28 11:51:39.723 15663 TRACE keystone drivers = 
backends.load_backends()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/server/backends.py", line 32, in load_backends
  2016-11-28 11:51:39.723 15663 TRACE keystone cache.configure_cache()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/opt/stack/keystone/keystone/common/cache/core.py", line 124, in 
configure_cache
  2016-11-28 11:51:39.723 15663 TRACE keystone 
cache.configure_cache_region(CONF, region)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/oslo_cache/core.py", line 201, in 
configure_cache_region
  2016-11-28 11:51:39.723 15663 TRACE keystone '%s.' % 
conf.cache.config_prefix)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 552, in 
configure_from_config
  2016-11-28 11:51:39.723 15663 TRACE keystone "%swrap" % prefix, None),
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/region.py", line 417, in 
configure
  2016-11-28 11:51:39.723 15663 TRACE keystone _config_prefix
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/api.py", line 81, in 
from_config_dict
  2016-11-28 11:51:39.723 15663 TRACE keystone for key in config_dict
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 208, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone super(MemcacheArgs, 
self).__init__(arguments)
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 108, in __init__
  2016-11-28 11:51:39.723 15663 TRACE keystone self._imports()
  2016-11-28 11:51:39.723 15663 TRACE keystone   File 
"/usr/local/lib/python3.5/dist-packages/dogpile/cache/backends/memcached.py", 
line 287, in _imports
  2016-11-28 11:51:39.723 15663 TRACE keystone import memcache  # noqa
  2016-11-28 11:51:39.723 15663 TRACE keystone ImportError: No module named 
'memcache'
  2016-11-28 11:51:39.723 15663 TRACE keystone 

  local.conf

  [[local|localrc]]

  USE_PYTHON3=True
  PYTHON3_VERSION=3.5

  Python: 3.5.2

  Ubuntu version (lsb_release -a):
  Distributor ID:   Ubuntu
  Description:  Ubuntu 16.04 LTS
  Release:  16.04
  Codename: xenial

To manage notifications about this 

[Yahoo-eng-team] [Bug 1630851] [NEW] test_create_with_live_time can fail if run at "just the wrong time"

2016-10-05 Thread Tony Breeds
Public bug reported:

In https://review.openstack.org/#/c/381890 we got the following failure:
| Captured traceback:
| ~~~
| Traceback (most recent call last):
|   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
| return func(*args, **keywargs)
|   File "glance/tests/unit/v2/test_tasks_resource.py", line 367, in 
test_create_with_live_time
| self.assertEqual(CONF.task.task_time_to_live, task_live_time_hour)
|   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
| self.assertThat(observed, matcher, message)
|   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
| raise mismatch_error
| testtools.matchers._impl.MismatchError: 48 != 47


This can happen if the expires_at and updated_at are not 2 days apart
(ignoring seconds and microseconds) (from [2])

# ignore second and microsecond to avoid flaky runs
task_live_time = (success_task.expires_at.replace(second=0,
  microsecond=0) -
  success_task.updated_at.replace(second=0,
  microsecond=0))

The following interactive example shows what I mean:

balder:glance tony8129$ python
Python 2.7.12 (default, Jun 29 2016, 12:46:54) 
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import datetime
>>> class FakeTask(object):
...     pass
... 
>>> success_task = FakeTask()
>>> success_task.expires_at = datetime.datetime(2016, 10, 3, 00, 00, 00)
>>> success_task.updated_at = datetime.datetime(2016, 10, 1, 00, 00, 00)
>>> task_live_time = (success_task.expires_at.replace(second=0,
...                                                   microsecond=0) -
...                   success_task.updated_at.replace(second=0,
...                                                   microsecond=0))
>>> task_live_time
datetime.timedelta(2)
>>> success_task.updated_at = datetime.datetime(2016, 10, 1, 00, 01, 00)
>>> task_live_time = (success_task.expires_at.replace(second=0,
...                                                   microsecond=0) -
...                   success_task.updated_at.replace(second=0,
...                                                   microsecond=0))
>>> task_live_time
datetime.timedelta(1, 86340)
>>> task_live_time_hour = (task_live_time.days * 24 +
...                        task_live_time.seconds / 3600)
>>> task_live_time_hour
47

I couldn't find the specific code but I assume something like:
db.expires_at = now() + CONF.task.task_time_to_live  #   mm:59
db.updated_at = now()                                # 1+mm:00

happens, causing this false positive.
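
One way to de-flake the assertion (a sketch, not the merged glance fix)
is to compare the whole delta with rounding instead of truncating the
seconds on each side:

    import datetime

    def live_time_hours(expires_at, updated_at):
        # The full delta plus rounding absorbs the sub-minute skew
        # between the two now() calls.
        return (expires_at - updated_at).total_seconds() / 3600.0

    expires_at = datetime.datetime(2016, 10, 3, 0, 0, 0)
    updated_at = datetime.datetime(2016, 10, 1, 0, 0, 59)
    assert round(live_time_hours(expires_at, updated_at)) == 48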


[1] 
http://logs.openstack.org/90/381890/3/gate/gate-cross-glance-python27-db-ubuntu-xenial/1dff2a8/console.html#_2016-10-06_02_26_57_318997
[2] 
https://github.com/openstack/glance/blob/master/glance/tests/unit/v2/test_tasks_resource.py#L361-L364

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1630851

Title:
  test_create_with_live_time can fail if run at "just the wrong time"

Status in Glance:
  New

Bug description:
  In https://review.openstack.org/#/c/381890 we got the following failure:
  | Captured traceback:
  | ~~~
  | Traceback (most recent call last):
  |   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  | return func(*args, **keywargs)
  |   File "glance/tests/unit/v2/test_tasks_resource.py", line 367, in 
test_create_with_live_time
  | self.assertEqual(CONF.task.task_time_to_live, task_live_time_hour)
  |   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  | self.assertThat(observed, matcher, message)
  |   File 
"/home/jenkins/workspace/gate-cross-glance-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  | raise mismatch_error
  | testtools.matchers._impl.MismatchError: 48 != 47

  
  This can happen if the expires_at and updated_at are not 2 days apart 
(ignoring seconds and microseconds) (from [2])

  # ignore second and microsecond to avoid flaky runs
  task_live_time = 

[Yahoo-eng-team] [Bug 1618697] [NEW] os-brick 1.6.0 refactor was a major API change

2016-08-30 Thread Tony Breeds
Public bug reported:

With the release of os-brick 1.6.0 the following review[1] was created
to use it in upper-constraints.txt

This review is failing the nova[2] and cinder[3] unit tests

It's relatively simple to fix these problems to work with 1.6.0 but the
code needs to work with both 1.5.0 *and* 1.6.0.  This is where we have
problems.

The connector objects moved from
os_brick.initiator.connector.ISCSIConnector (1.5.0) to
os_brick.initiator.connectors.ISCSIConnector (1.6.0) so any tests need
shims in place to work with either name.  The shim could be removed once
global-requirements is bumped to use 1.6.0 as the minimum but it's very
late to be making that change as that'd cause a re-release of any
libraries (glance_store) using os-brick.


[1] https://review.openstack.org/#/c/360739/
[2] 
http://logs.openstack.org/39/360739/2/check/gate-cross-nova-python27-db-ubuntu-xenial/bb19321/console.html#_2016-08-31_02_20_59_089114
[3] 
http://logs.openstack.org/39/360739/2/check/gate-cross-cinder-python27-db-ubuntu-xenial/444b954/console.html#_2016-08-31_02_25_04_125200

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: os-brick
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1618697

Title:
  os-brick 1.6.0 refactor was a major API change

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  New

Bug description:
  With the release of os-brick 1.6.0 the following review[1] was created
  to use it in upper-constraints.txt

  This review is failing the nova[2] and cinder[3] unit tests

  It's relatively simple to fix these problems to work with 1.6.0 but
  the code needs to work with both 1.5.0 *and* 1.6.0.  This is where we
  have problems.

  The connector objects moved from
  os_brick.initiator.connector.ISCSIConnector (1.5.0) to
  os_brick.initiator.connectors.ISCSIConnector (1.6.0) so any tests need
  shims in place to work with either name.  The shim could be removed
  once global-requirements is bumped to use 1.6.0 as the minimum but
  it's very late to be making that change as that'd cause a re-release
  of any libraries (glance_store) using os-brick.
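
  A minimal sketch of such a shim, using the two module paths named above:

      try:
          # os-brick >= 1.6.0 layout
          from os_brick.initiator.connectors import ISCSIConnector
      except ImportError:
          # os-brick < 1.6.0 layout
          from os_brick.initiator.connector import ISCSIConnector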


  
  [1] https://review.openstack.org/#/c/360739/
  [2] 
http://logs.openstack.org/39/360739/2/check/gate-cross-nova-python27-db-ubuntu-xenial/bb19321/console.html#_2016-08-31_02_20_59_089114
  [3] 
http://logs.openstack.org/39/360739/2/check/gate-cross-cinder-python27-db-ubuntu-xenial/444b954/console.html#_2016-08-31_02_25_04_125200

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1618697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1421471] Re: os-simple-tenant-usage performs poorly with many instances

2016-05-31 Thread Tony Breeds
Confirmed with origin/master
SHA:ced89e7b26b3cff323852e1d8a9c6db80334f4dd

** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1421471

Title:
  os-simple-tenant-usage performs poorly with many instances

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The SQL underlying the os-simple-tenant-usage API call results in very
  slow operations when the database has many (20,000+) instances. In
  testing, the objects.InstanceList.get_active_by_window_joined call in
  
nova/api/openstack/compute/contrib/simple_tenant_usage.py:SimpleTenantUsageController._tenant_usages_for_period
  takes 24 seconds to run.

  Some basic timing analysis has shown that the initial query in
  nova/db/sqlalchemy/api.py:instance_get_active_by_window_joined runs in
  *reasonable* time (though still 5-6 seconds) and the bulk of the time
  is spent in the subsequent _instances_fill_metadata call which pulls
  in system_metadata info by using a SELECT with an IN clause containing
  the 20,000 uuids listed, resulting in execution times over 15 seconds.
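
  An illustrative sketch of that pattern (not nova's literal code): a
  single SELECT whose IN clause enumerates every instance UUID in the
  window:

      from nova.db.sqlalchemy import models

      def fill_system_metadata(session, instances):
          uuids = [inst['uuid'] for inst in instances]  # ~20,000 entries
          return (session.query(models.InstanceSystemMetadata)
                         .filter(models.InstanceSystemMetadata
                                 .instance_uuid.in_(uuids))
                         .all())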

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1421471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563157] [NEW] Routes 2.3 release breaks neutron-server

2016-03-28 Thread Tony Breeds
Public bug reported:

Routes released 2.3 at 2016-03-28T15:48:33. Since then we're seeing:

---
Traceback (most recent call last):
  File "/usr/local/bin/neutron-server", line 10, in 
sys.exit(main())
  File "/opt/stack/new/neutron/neutron/cmd/eventlet/server/__init__.py", line 
17, in main
server.main()
  File "/opt/stack/new/neutron/neutron/server/__init__.py", line 44, in main
neutron_api = service.serve_wsgi(service.NeutronApiService)
  File "/opt/stack/new/neutron/neutron/service.py", line 106, in serve_wsgi
LOG.exception(_LE('Unrecoverable error: please check log '
  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
85, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/new/neutron/neutron/service.py", line 103, in serve_wsgi
service.start()
  File "/opt/stack/new/neutron/neutron/service.py", line 74, in start
self.wsgi_app = _run_wsgi(self.app_name)
  File "/opt/stack/new/neutron/neutron/service.py", line 169, in _run_wsgi
app = config.load_paste_app(app_name)
  File "/opt/stack/new/neutron/neutron/common/config.py", line 227, in 
load_paste_app
app = deploy.loadapp("config:%s" % config_path, name=app_name)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
247, in loadapp
return loadobj(APP, uri, name=name, **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
272, in loadobj
return context.create()
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
710, in create
return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
144, in invoke
**context.local_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, 
in fix_call
val = callable(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 31, in 
urlmap_factory
app = loader.get_app(app_name, global_conf=global_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
350, in get_app
name=name, global_conf=global_conf).create()
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
710, in create
return self.object_type.invoke(self)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 
144, in invoke
**context.local_conf)
  File "/usr/local/lib/python2.7/dist-packages/paste/deploy/util.py", line 55, 
in fix_call
val = callable(*args, **kw)
  File "/opt/stack/new/neutron/neutron/auth.py", line 74, in pipeline_factory
app = filter(app)
  File "/opt/stack/new/neutron/neutron/api/extensions.py", line 392, in _factory
return ExtensionMiddleware(app, ext_mgr=ext_mgr)
  File "/opt/stack/new/neutron/neutron/api/extensions.py", line 293, in __init__
submap.connect(path)
  File "/usr/local/lib/python2.7/dist-packages/routes/mapper.py", line 168, in 
connect
routename, path = args
ValueError: need more than 1 value to unpack
---

logstash:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ValueError%3A%20need%20more%20than%201%20value%20to%20unpack%5C%22%20AND%20voting%3A%5C%221%5C%22

Affects stable/* and master (modulo upper-constraints.txt)
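
The quoted mapper.py line unpacks exactly two positional arguments, so a
sketch of one possible caller-side workaround (untested against Routes
2.3) is to pass the route name explicitly:

    import routes

    mapper = routes.Mapper()
    submap = mapper.submapper(controller='example')
    # Routes 2.3's "routename, path = args" fails when only a path is
    # given, i.e. submap.connect('/widgets') raises ValueError there.
    submap.connect(None, '/widgets')  # explicit name keeps len(args) == 2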

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: trove
 Importance: Undecided
 Status: New

** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1563157

Title:
  Routes 2.3 release breaks neutron-server

Status in neutron:
  New
Status in Trove:
  New

Bug description:
  Routes released 2.3 at 2016-03-28T15:48:33. Since then we're seeing:

  ---
  Traceback (most recent call last):
File "/usr/local/bin/neutron-server", line 10, in 
  sys.exit(main())
File "/opt/stack/new/neutron/neutron/cmd/eventlet/server/__init__.py", line 
17, in main
  server.main()
File "/opt/stack/new/neutron/neutron/server/__init__.py", line 44, in main
  neutron_api = service.serve_wsgi(service.NeutronApiService)
File "/opt/stack/new/neutron/neutron/service.py", line 106, in serve_wsgi
  LOG.exception(_LE('Unrecoverable error: please check log '
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
85, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "/opt/stack/new/neutron/neutron/service.py", line 103, in serve_wsgi
  service.start()
File "/opt/stack/new/neutron/neutron/service.py", line 74, in start
  self.wsgi_app = _run_wsgi(self.app_name)
File "/opt/stack/new/neutron/neutron/service.py", line 169, in _run_wsgi
  app = config.load_paste_app(app_name)
File "/opt/stack/new/neutron/neutron/common/config.py", line 227, in 
load_paste_app
  app = deploy.loadapp("config:%s" % 

[Yahoo-eng-team] [Bug 1522735] Re: Instance failure shows redundant data in overview page

2016-01-11 Thread Tony Breeds
** Also affects: nova
   Importance: Undecided
   Status: New

** Tags added: quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522735

Title:
  Instance failure shows redundant data in overview page

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Compute (nova):
  New

Bug description:
  The overview section displays incorrect limit summary details when an 
instance creation has failed.
  I try to create an instance that needs 8192MB RAM and 4 VCPUs (m1.large).
  The instance creation fails with an error status.
  Meanwhile the overview page shows instance, VCPU and RAM summary 
details. (This should ideally be empty as the instance creation was 
unsuccessful.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522307] Re: Disk usage not work for shared storage

2016-01-11 Thread Tony Breeds
Ahh that's a known bug in nova.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522307

Title:
  Disk usage not work for shared storage

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  We use a 50TB Ceph cluster as the backend, but the hypervisor summary 
shows double its size (100TB).
  The cause is that Horizon doesn't know the storage backend for these two 
hypervisors is shared.

  The screen capture is attached below.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355573] Re: tempest volume scenarios periodically fail /w SSHTimeout

2016-01-10 Thread Tony Breeds
Added to nova as it's not grenade specific and I saw it in a
tempest-dsvm-full job.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1355573

Title:
  tempest volume scenarios periodically fail /w SSHTimeout

Status in grenade:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Not sure this is a duplicate, but I've noticed failing tempest volume
  scenarios as part of grenade tests, ie

  http://logs.openstack.org/07/112707/1/check/check-grenade-dsvm-
  partial-ncpu/43dbf72/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/test.py", line 128, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/scenario/test_volume_boot_pattern.py", line 163, in 
test_volume_boot_pattern
  keypair)
File "tempest/scenario/test_volume_boot_pattern.py", line 116, in 
_ssh_to_server
  private_key=keypair.private_key)
File "tempest/scenario/manager.py", line 484, in get_remote_client
  linux_client.validate_authentication()
File "tempest/common/utils/linux/remote_client.py", line 53, in 
validate_authentication
  self.ssh_client.test_connection_auth()
File "tempest/common/ssh.py", line 150, in test_connection_auth
  connection = self._get_ssh_connection()
File "tempest/common/ssh.py", line 87, in _get_ssh_connection
  password=self.password)
  SSHTimeout: Connection to the 172.24.4.2 via SSH timed out.
  User: cirros, Password: None

  Checking logstash these seem to be happening frequently during grenade
  jobs, failing both test_snapshot_pattern and test_volume_boot_pattern
  tests. 141 failures over the last 7 days.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU1NIVGltZW91dDogQ29ubmVjdGlvbiB0byB0aGUgMTcyLjI0LjQuMiB2aWEgU1NIIHRpbWVkIG91dC5cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNzgxMzYzNjg1Nn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1355573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461406] Re: libvirt: missing iotune parse for LibvirtConfigGuestDisk

2015-12-20 Thread Tony Breeds
If we can find a valid consumer for this information from the domain
XML then we can add the code as a specless blueprint or similar.

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
 Assignee: ChangBo Guo(gcb) (glongwave) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461406

Title:
  libvirt: missing iotune parse for LibvirtConfigGuestDisk

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  We support instance disk IO control with iotune like:

  102400

  We set iotune in class LibvirtConfigGuestDisk in libvirt/config.py. The
method parse_dom doesn't parse iotune options now.
  Need to fix that.
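
  A hedged sketch of the missing parsing (the element and attribute names
  follow libvirt's standard iotune children, which the class already uses
  when generating XML; the merged patch may differ):

      # Inside LibvirtConfigGuestDisk -- sketch only.
      def parse_dom(self, xmldoc):
          super(LibvirtConfigGuestDisk, self).parse_dom(xmldoc)
          for c in xmldoc.getchildren():
              if c.tag == 'iotune':
                  for sub in c.getchildren():
                      if sub.tag == 'total_bytes_sec':
                          self.disk_total_bytes_sec = int(sub.text)
                      elif sub.tag == 'read_bytes_sec':
                          self.disk_read_bytes_sec = int(sub.text)
                      elif sub.tag == 'write_bytes_sec':
                          self.disk_write_bytes_sec = int(sub.text)
                      elif sub.tag == 'total_iops_sec':
                          self.disk_total_iops_sec = int(sub.text)
                      elif sub.tag == 'read_iops_sec':
                          self.disk_read_iops_sec = int(sub.text)
                      elif sub.tag == 'write_iops_sec':
                          self.disk_write_iops_sec = int(sub.text)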

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525739] [NEW] Hyper-V: stable/liberty, mismatched requirements causes CI jobs to fail.

2015-12-13 Thread Tony Breeds
Public bug reported:

CI jobs in (at least) stable/liberty nova are failing with "Can not
start the nova-compute service. The manual run failed as well."

 . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/create-environment-c2-r2-u22.openstack.tld.log.gz
   indicates that nova-compute didn't start
 . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/c2-r2-u22/process_error.txt.gz
   Shows an issue with oslo.log semver
 . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/create-environment-c2-r2-u22.openstack.tld.log.gz
   Indicates that it's coming from networking-hyperv (oslo.log<1.12.0,>=1.8.0)

Looking a little deeper it seems that we have some mitaka libraries
installed which are incompatible with the liberty versions.

Opening this bug as a focal point for fixes.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1525739

Title:
  Hyper-V: stable/liberty, mismatched requirements causes CI jobs to
  fail.

Status in OpenStack Compute (nova):
  New

Bug description:
  CI jobs in (at least) stable/liberty nova are failing with "Can not
  start the nova-compute service. The manual run failed as well."

   . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/create-environment-c2-r2-u22.openstack.tld.log.gz
 indicates that nova-compute didn't start
   . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/c2-r2-u22/process_error.txt.gz
 Shows an issue with oslo.log semver
   . 
http://64.119.130.115/nova/256180/1/Hyper-V_logs/create-environment-c2-r2-u22.openstack.tld.log.gz
 Indicates that it's coming from networking-hyperv 
(oslo.log<1.12.0,>=1.8.0)

  Looking a little deeper it seems that we have some mitaka libraries
  installed which are incompatible with the liberty versions.

  Opening this bug as a focal point for fixes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1525739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496650] Re: requirement conflict on Babel

2015-09-17 Thread Tony Breeds
Fixed with: https://review.openstack.org/224429 which has now merged.

** Changed in: oslo.utils
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1496650

Title:
  requirement conflict on Babel

Status in Keystone:
  Invalid
Status in oslo.utils:
  Fix Released

Bug description:
  message:"pkg_resources.ContextualVersionConflict: (Babel 2.0
  (/usr/local/lib/python2.7/dist-packages),
  Requirement.parse('Babel<=1.3,>=1.3'), set(['oslo.utils']))"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicGtnX3Jlc291cmNlcy5Db250ZXh0dWFsVmVyc2lvbkNvbmZsaWN0OiAoQmFiZWwgMi4wICgvdXNyL2xvY2FsL2xpYi9weXRob24yLjcvZGlzdC1wYWNrYWdlcyksIFJlcXVpcmVtZW50LnBhcnNlKCdCYWJlbDw9MS4zLD49MS4zJyksIHNldChbJ29zbG8udXRpbHMnXSkpXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDI0NTE2OTc3ODl9

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1496650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485416] Re: Soft reboot doesn't work for bare metal.

2015-08-17 Thread Tony Breeds
This is pretty clearly operating as intended:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/ironic/driver.py#n892

There are changes in progress to support soft reboot via ACPI (or
similar)

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1485416

Title:
  Soft reboot doesn't work for bare metal.

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When we use ironic, we can't reboot a bare metal instance with a graceful 
shutdown.
  We execute the "nova reboot" command without the --hard option. However, 
it performs a hard reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1485416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470302] [NEW] gate-nova-python27 fails with RuntimeError: maximum recursion depth exceeded

2015-06-30 Thread Tony Breeds
Public bug reported:

Review: https://review.openstack.org/#/c/194325/ failed with $subject

Logstash:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUnVudGltZUVycm9yOiBtYXhpbXVtIHJlY3Vyc2lvbiBkZXB0aCBleGNlZWRlZFwiIEFORCB0YWdzOlwiY29uc29sZVwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yN1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM1NzE1Njg3MDU0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

Possibly related to: https://review.openstack.org/#/c/197176

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1470302

Title:
  gate-nova-python27 fails with RuntimeError: maximum recursion depth
  exceeded

Status in OpenStack Compute (Nova):
  New

Bug description:
  Review: https://review.openstack.org/#/c/194325/ failed with $subject

  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUnVudGltZUVycm9yOiBtYXhpbXVtIHJlY3Vyc2lvbiBkZXB0aCBleGNlZWRlZFwiIEFORCB0YWdzOlwiY29uc29sZVwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgYnVpbGRfbmFtZTpcImdhdGUtbm92YS1weXRob24yN1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM1NzE1Njg3MDU0LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

  Possibly related to: https://review.openstack.org/#/c/197176

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1470302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469749] Re: RamFilter logging partially considers ram-allocation-ratio

2015-06-30 Thread Tony Breeds
The log message contains the information required: the hypervisor has
10148 MB RAM of which 480.4 MB is usable.  The instance requires 2048MB.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1469749

Title:
  RamFilter logging partially considers ram-allocation-ratio

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Package: nova-scheduler
  Version: 1:2014.1.4-0ubuntu2.1

  RamFilter accurately skips a host because RAM resource is not enough
  for requested VM. However, I think log should be more explicit on
  numbers, taking into account ram-allocation-ratio can be different
  from 1.0.

  Log excerpt:
  2015-06-29 12:04:21.422 15708 DEBUG nova.scheduler.filters.ram_filter 
[req-d14d9f04-c2b1-42be-b5b9-669318bb0030 3cca8ee6898e42f287adbd4f5dac1801 
a0ae7f82f577413ab0d73f3dc09fb906] (hostname, hostname.tld) ram:10148 
disk:264192 io_ops:0 instances:39 does not have 2048 MB usable ram, it only has 
480.4 MB usable ram. host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/ram_filter.py:60

  In the log above, RAM says 10148 (MB), which seems enough for a 2048MB
VM. The first number (10148) is calculated as: TotalMB - UsedMB. The
additional (real) number should be: TotalMB * RamAllocRatio - UsedMB.

  In this case, ram-allocation-ratio is 0.9, which results in 480.4MB.
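
  A quick back-of-envelope check of those two numbers (the totals are
  inferred from the two relations above, not read from nova):

      ratio = 0.9
      free_mb = 10148.0    # TotalMB - UsedMB, as logged
      usable_mb = 480.4    # TotalMB * ratio - UsedMB, as logged

      # Solve the two relations for the hidden totals:
      total_mb = (free_mb - usable_mb) / (1.0 - ratio)   # 96676.0
      used_mb = total_mb - free_mb                       # 86528.0
      print(total_mb * ratio - used_mb)                  # ~480.4, matching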

  Please let me know if you'd need more details.

  Cheers,
  -Alvaro.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1469749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467560] Re: RFE: add instance uuid field to nova.quota_usages table

2015-06-30 Thread Tony Breeds
This is reported against Icehouse which is closed for development.

Please reproduce with Kilo or liberty-1 and reopen

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467560

Title:
  RFE: add instance uuid field to nova.quota_usages table

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  In Icehouse, the nova.quota_usages table frequently gets out-of-sync
  with the currently active/stopped instances in a tenant/project,
  specifically, there are times when the instance will be set to
  terminated/deleted in the instances table and the quota_usages table
  will retain the data, counting against the tenant's total quota.  As
  far as I can tell there is no way to correlate instances.uuid with the
  records in nova.quota_usages.

  I propose adding an instance uuid column to make future cleanup of
  this table easier.

  I also propose a housecleaning task that does this clean up
  automatically.
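
  A hedged sketch of the proposed column, in the sqlalchemy-migrate style
  nova used at the time (nullability and sizing are assumptions):

      import migrate  # noqa: enables Table.create_column (sqlalchemy-migrate)
      from sqlalchemy import Column, MetaData, String, Table

      def upgrade(migrate_engine):
          meta = MetaData(bind=migrate_engine)
          quota_usages = Table('quota_usages', meta, autoload=True)
          # 36 chars fits a dashed UUID; nullable so existing rows stay valid.
          quota_usages.create_column(Column('instance_uuid', String(36),
                                            nullable=True))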

  Thanks,
  Dan

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1467560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp