Public bug reported:
For virt drivers that require networks to be reallocated on nova
reschedules, the access_ip_v[4|6] fields on Instance are not updated.
This bug was introduced when the new build_instances path was added.
This new path updates access_ip_* before the instance goes ACTIVE...
Assignee: Chris Behrens (cbehrens)
Status: In Progress
** Changed in: nova
Status: New => In Progress
** Changed in: nova
Assignee: (unassigned) => Chris Behrens (cbehrens)
** Changed in: nova
Importance: Undecided => Medium
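To illustrate what the description above calls for, here is a minimal sketch of refreshing the access IP fields after networks are reallocated on a reschedule. The helper name and the network_info shape are assumptions for illustration, not Nova's actual code:

    # Hypothetical sketch: after a reschedule reallocates networks, the
    # access_ip_v4/v6 fields must be refreshed from the new network
    # info. update_access_ips() and the VIF shape are assumed here.
    def update_access_ips(instance, network_info):
        # Take the first v4/v6 fixed IP from the freshly allocated VIFs.
        for vif in network_info:
            for ip in vif.fixed_ips():
                if ip['version'] == 4 and not instance.access_ip_v4:
                    instance.access_ip_v4 = ip['address']
                elif ip['version'] == 6 and not instance.access_ip_v6:
                    instance.access_ip_v6 = ip['address']
        instance.save()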
--
** Changed in: nova
Assignee: Robert Collins (lifeless) => Chris Behrens (cbehrens)
** Changed in: ironic
Assignee: (unassigned) => Chris Behrens (cbehrens)
** Changed in: ironic
Status: New => In Progress
--
** Also affects: python-neutronclient
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1347778
Title:
raising Maximum number
Public bug reported:
http://logs.openstack.org/02/94402/18/check/gate-nova-python26/b20aa1d/testr_results.html.gz
** Affects: nova
Importance: Undecided
Status: New
--
The retry code does not check for this:
2014-04-19 00:37:39.354 13204 ERROR nova.compute.manager
[req-e7e92354-6e42-4955-9519-08a65872372d ]
[instance: 7a2b7c97-f793-4666-888d-430dXXX] Error:
[Errno 104] Connection reset by peer
The relevant retry logic is in xenapi/client/session.py:
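(The original snippet is cut off here. As an illustrative sketch only, not the file's actual contents, a retry wrapper that does check for a reset connection might look like this:)

    import errno
    import socket
    import time

    # Illustrative sketch: retry a XenAPI call when the connection is
    # reset by the peer instead of letting ECONNRESET bubble up.
    def call_with_retry(fn, *args, **kwargs):
        attempts = 5
        for attempt in range(1, attempts + 1):
            try:
                return fn(*args, **kwargs)
            except socket.error as exc:
                # [Errno 104] Connection reset by peer
                if exc.errno != errno.ECONNRESET or attempt == attempts:
                    raise
                time.sleep(2 ** attempt)  # simple backoff before retry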
Public bug reported:
There are a couple of cases in the compute manager where we don't pass
reservations to _delete_instance(). One example is the cleanup we do
when we see a delete that is stuck in DELETING.
The only place we ever update quotas as part of delete should be when
the...
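As a sketch of the idea only (the names below follow Nova's quota API from memory and should be treated as assumptions), threading the reservations through means the quota commit or rollback happens exactly once, inside _delete_instance():

    # Sketch only: every delete path hands its reservations to
    # _delete_instance(), which commits on success or rolls back.
    class ComputeManagerSketch(object):
        def __init__(self, quotas):
            self.quotas = quotas  # stand-in for nova.quota.QUOTAS

        def _delete_instance(self, context, instance, reservations=None):
            try:
                instance.destroy()  # stand-in for the real teardown
                if reservations:
                    self.quotas.commit(context, reservations)
            except Exception:
                if reservations:
                    self.quotas.rollback(context, reservations)
                raise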
This bug is not valid. The task needs to run periodically so that stale
reservations are expired ahead of QUOTAS.reserve() calls.
The only alternative is to expire reservations explicitly before every
single QUOTAS.reserve()... which is not as performant.
This task doesn't necessarily...
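A minimal sketch of such a periodic sweep, using a plain timer rather than Nova's actual periodic-task machinery (quotas.expire() here mirrors Nova's QUOTAS.expire() from memory):

    import threading

    # Generic sketch: expire stale reservations on a fixed interval so
    # QUOTAS.reserve() never has to scan for them inline.
    def start_expiry_task(quotas, context, interval=60.0):
        def _run():
            quotas.expire(context)                   # sweep expired rows
            threading.Timer(interval, _run).start()  # reschedule
        threading.Timer(interval, _run).start()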
sigh. click too fast on status and you get undesired results. heh.
** Changed in: nova
Assignee: (unassigned) => Chris Behrens (cbehrens)
** Changed in: nova
Status: Triaged => Won't Fix
** Changed in: nova
Status: Won't Fix => In Progress
--
Re-opening this bug as it is not actually fixed in h-1. The previous
fix needed to be reverted due to bug 1185190.
** Changed in: nova
Status: Fix Released => Triaged
** Changed in: nova
Assignee: Joe Gordon (jogo) => (unassigned)
** Changed in: nova
Milestone: havana-1 => None
Sigh. This appears to be how the python logging module works.
http://docs.python.org/2/library/logging.html
Under logger.debug they have an example use of 'extra'... and then
below it, the docs state:
If you choose to use these attributes in logged messages, you need to
exercise some care. In the above...
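The pitfall in a small, self-contained example: once the Formatter's format string references a key supplied via extra, every log call must supply it, and keys that clash with built-in LogRecord attributes (such as 'message') are rejected outright:

    import logging

    handler = logging.StreamHandler()
    # The format string expects 'clientip' to arrive via extra=...
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(clientip)s %(message)s'))
    logger = logging.getLogger('demo')
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

    # Works: 'clientip' is supplied through extra.
    logger.debug('connected', extra={'clientip': '10.0.0.1'})

    # Fails inside the handler: 'clientip' is missing from this record
    # (logging reports the formatting error rather than raising).
    logger.debug('disconnected')

    # Raises KeyError: 'message' would overwrite a LogRecord attribute.
    logger.debug('oops', extra={'message': 'clash'})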
Not sure if this is truly a bug or not. I can't seem to reproduce it
on a fresh compute_nodes table. However, if I have a compute_nodes
entry left over... from switching from XenAPI -> fake... I see this
problem just trying to build *1* instance. The problem is that the
'nodename' changes. The...
Ah, key data is sent with each instance already... so we really don't
need this in child cells.
** Changed in: nova
Status: In Progress => Invalid
--
Turns out this is not a bug. There's a bit of trickery in cells_api's
version of HostAPI(): unimplemented methods there fall back to api.py's
HostAPI() versions, and the rpcapi class is swapped out for one that
proxies via cells to the correct cell and manager.
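The pattern being described, sketched loosely (all class and method names below are placeholders, not the actual cells code):

    # Loose sketch: the cells HostAPI subclasses the regular one and
    # inherits its method bodies, but swaps self.rpcapi for a proxy
    # that routes calls through cells to the correct cell/manager.
    class ComputeRPCAPI(object):
        def set_host_enabled(self, context, host, enabled):
            pass  # normal direct RPC call to the compute host

    class HostAPI(object):
        def __init__(self):
            self.rpcapi = ComputeRPCAPI()

        def set_host_enabled(self, context, host, enabled):
            # Implemented once here; behavior follows self.rpcapi.
            return self.rpcapi.set_host_enabled(context, host, enabled)

    class CellsComputeRPCAPI(ComputeRPCAPI):
        def set_host_enabled(self, context, host, enabled):
            pass  # proxy via cells to the right cell and manager

    class CellsHostAPI(HostAPI):
        def __init__(self):
            super(CellsHostAPI, self).__init__()
            self.rpcapi = CellsComputeRPCAPI()  # the swap in question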
** Changed in: nova