** Also affects: neutron
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1852175
Title:
MessagingTimeout
Status in neutron:
New
Status in os
Public bug reported:
There is an issue with the configuration handling in oslo.policy and
keystone that causes cli args like --config-file to be ignored in the
keystone enforcer when running oslopolicy-list-redundant. Specifically,
because keystone re-initializes the global config object when crea
Public bug reported:
If I create a config file named fake.conf that looks like this:
[oslo_policy]
policy_file = /home/fedora/keystone/keystone.policy.yaml
and put a redundant rule in the referenced policy file, that rule should
get printed out when I run:
oslopolicy-list-redundant --config-fil
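A minimal sketch of what that check amounts to, assuming a config object that really did parse --config-file fake.conf; Enforcer, load_rules(), and file_rules are real oslo.policy APIs, the file names mirror the report:
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF(['--config-file', 'fake.conf'], project='keystone')

    # The enforcer reads [oslo_policy] policy_file from whatever config
    # object it is handed, so it only finds keystone.policy.yaml if that
    # object actually saw the --config-file argument above.
    enforcer = policy.Enforcer(CONF)
    enforcer.load_rules()
    print(enforcer.file_rules)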
** Also affects: oslo.policy
Importance: Undecided
Status: New
** Changed in: oslo.policy
Status: New => Confirmed
** Changed in: oslo.policy
Importance: Undecided => Low
--
** Changed in: oslo.privsep
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/1810518
Title:
neutron-functional tests failing with oslo.privsep 1.31
Sta
** Also affects: oslo.privsep
Importance: Undecided
Status: New
** Changed in: oslo.privsep
Status: New => Confirmed
** Changed in: oslo.privsep
Importance: Undecided => Critical
** Changed in: oslo.privsep
Assignee: (unassigned) => Ben Nemec (bnemec)
--
** Changed in: oslo.messaging
Status: Incomplete => Fix Released
--
https://bugs.launchpad.net/bugs/1385234
Title:
OVS tunneling between multiple neutron nodes misconf
I would prefer not to have to apply the workaround to all of the Oslo
projects when we need to migrate them to stestr anyway.
** Changed in: oslo.versionedobjects
Status: New => Won't Fix
--
Yeah, since we stopped calling get_logger from oslo.log I don't think we
can fix this in the library. I'm going to add Nova back to the bug so
they can re-triage based on this information.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: oslo.log
Status: New
It looks to me like this is set on a per-project basis. oslo.middleware
doesn't have any default headers:
https://github.com/openstack/oslo.middleware/blob/2c557312519cd368c50eaaa5448049da19cc6281/oslo_middleware/cors.py#L50
A quick search suggests that the accepted headers are being set in
Glanc
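For context, a minimal sketch of that per-project mechanism; cors.set_defaults() is the real oslo.middleware hook, while the header names here are illustrative:
    from oslo_middleware import cors

    # Each consuming project overrides the library's empty defaults with
    # the headers its own API needs; oslo.middleware itself ships none.
    cors.set_defaults(
        allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
        expose_headers=['X-OpenStack-Request-ID'],
        allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])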
Looks like this was fixed in neutron. Let me know if there's anything
left to be done on the oslo side. Thanks.
** Changed in: oslo.rootwrap
Status: New => Won't Fix
--
Do we still need the oslo.log change backported to ocata?
** Changed in: oslo.versionedobjects
Status: New => Invalid
** Also affects: oslo.log/ocata
Importance: Undecided
Status: New
** Changed in: oslo.log/ocata
Importance: Undecided => Medium
** Changed in: oslo.log/ocata
** Changed in: oslo.middleware
Status: New => Fix Released
--
https://bugs.launchpad.net/bugs/1508442
Title:
LOG.warn is deprecated
Status in anvil:
The oslo.privsep part of this bug was fixed in
https://review.openstack.org/#/c/329766/
I'm not sure why that didn't show up as it does appear to have a bug
reference.
** Changed in: oslo.privsep
Status: New => Fix Released
--
** Changed in: oslo.rootwrap
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1582807
Title:
neutron-rootwrap-daemon explodes because of misbehaving "ip rule"
Public bug reported:
The glance doc builds are all failing with the latest release of
openstackdocstheme. The full traceback looks like this:
Traceback (most recent call last):
  File "/opt/stack/glance/.tox/docs/lib/python2.7/site-packages/sphinx/setup_command.py", line 191, in run
    warnin
It looks like this was fixed a while ago. Feel free to reopen if I'm
mistaken.
** Changed in: tripleo
Status: Triaged => Fix Released
--
As others have noted, the rpm upgrade process should handle updating the
rootwrap filters. The only exception would be if a user edited them
after installation, but in that case they're responsible for merging in
the updated ones themselves.
** Changed in: tripleo
Status: Triaged => Fix Re
Confirmed that this fixes the tripleo use case.
** Changed in: tripleo
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1696866
T
** Changed in: tripleo
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/1513879
Title:
NeutronClientException: 404 Not Found
Status
I did some more digging yesterday but concluded that this is beyond my
ability to debug in a reasonable timeframe. I no longer think os-brick
is the problem though - it looks like the data passed in to os-brick is
already bad, so I think it's something in Cinder or Nova that is not
correctly handl
Public bug reported:
There seems to be an issue with how domains get assigned when booting
instances. My understanding is that with neutron, the neutron
dns_domain option should be what determines the resulting domain name of
the instances. However, when creating instances with the following
con
** Changed in: tripleo
Status: Incomplete => Fix Released
--
https://bugs.launchpad.net/bugs/1284424
Title:
nova quota statistics can be incorrect
St
Public bug reported:
I'm seeing this on a recent devstack. Without --progress it seems to
work fine.
[bnemec@Arisu ~]$ glance -d image-download 2974158b-383d-4fe6-9671-5248b9a5d07d --file bmc-base.qcow2 --progress
DEBUG:keystoneauth.session:REQ: curl -g -i -X GET http://11.1.1.78:5000/v3 -H "A
** Changed in: tripleo
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1654032
Title:
CI: unable to ping floating-ip in pingtest
Status in neut
Confirmed locally that reverting the above patch fixes this. I'm not
sure why it's only affecting Mitaka, so going to add keystone to the bug
and see if they have any thoughts.
** Also affects: keystone
Importance: Undecided
Status: New
--
Dropping alert tag because I see the reverted package in the mitaka dlrn
repo, so this should be fixed. In fact, I see a passed Mitaka job in
the zuul status so I'm going to mark this fixed.
** Tags removed: alert
** Changed in: tripleo
Status: Triaged => Fix Released
--
Public bug reported:
When looking at hypervisor resources through either nova hypervisor-stats
or nova hypervisor-show with compute nodes of differing sizes, I
am getting incorrect/inconsistent values back for one of the
hypervisors. For example, in an environment with one 32 GB compute node
and
** Changed in: tripleo
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/1290540
Title:
neutron_admin_tenant_name deprecation warning
** Changed in: tripleo
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/1284431
Title:
nova-compute doesn't reconnect properly after
** Changed in: tripleo
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/1282842
Title:
default nova+neutron setup cannot handle spawning 20 images
I believe this is no longer relevant to the current state of tripleo.
** Changed in: tripleo
Status: Triaged => Invalid
--
https://bugs.launchpad.net/bugs/1273882
Title
** Changed in: tripleo
Status: In Progress => Confirmed
** Changed in: tripleo
Status: Confirmed => Fix Released
--
https://bugs.launchpad.net/bugs/1353953
Title
It appears this has been fixed in Nova for a long time.
** Changed in: tripleo
Status: Triaged => Fix Released
--
https://bugs.launchpad.net/bugs/127262
pyOpenSSL 16.0.0 has been released, which appears to have fixed this.
** Changed in: os-cloud-config
Status: Triaged => Fix Released
** Changed in: os-cloud-config
Assignee: (unassigned) => Ben Nemec (bnemec)
--
Public bug reported:
With cryptography 1.3, the unit tests are failing with:
Traceback (most recent call last):
  File "os_cloud_config/tests/test_keystone_pki.py", line 36, in test_create_ca_and_signing_pairs
    self.assertTrue(ca_key.check())
  File "/home/fedora/os-cloud-config/.tox/py27/li
We've moved away from ephemeral partitions in TripleO, so this no longer
needs to be fixed there.
** Changed in: tripleo
Status: In Progress => Won't Fix
--
This appears to have been fixed in Nova, closing the TripleO bug as
well.
** Changed in: tripleo
Status: Triaged => Fix Released
--
Public bug reported:
This is a follow-up to the regression reported in
https://bugs.launchpad.net/nova/+bug/1464239. The problem there was that
Nova changed how it does block device mapping for ephemeral partitions,
and because Ironic isn't using that block device mapping the ephemeral
path returne
This doesn't appear to have anything to do with Oslo.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: oslo-incubator
Status: New => Invalid
--
I believe everyone is on oslo.concurrency now, so this should no longer
be an issue anywhere.
** Changed in: cinder
Status: In Progress => Fix Released
--
** Changed in: oslo.concurrency
Status: New => Fix Released
** Changed in: oslo.concurrency
Importance: Undecided => Critical
--
*** This bug is a duplicate of bug 1366189 ***
https://bugs.launchpad.net/bugs/1366189
** This bug has been marked a duplicate of bug 1366189
mask_password doesn't handle non-ASCII characters
--
It appears this was fixed in Nova and there's nothing to be done in
Oslo. Let me know if I'm wrong about that.
** Changed in: oslo-incubator
Status: Triaged => Invalid
--
I suspect we omitted this on purpose because we didn't want to have to
support multiple methods of formatting strings, which is a little tricky
to do for lazy translation because you have to store all of the
parameters too so they can be lazily translated if necessary. Also, as
Doug noted the code
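A minimal sketch of why the parameters have to travel with the message, assuming a hypothetical 'myapp' translation domain; enable_lazy(), TranslatorFactory, and translate() are real oslo.i18n APIs:
    import oslo_i18n

    oslo_i18n.enable_lazy()
    _ = oslo_i18n.TranslatorFactory(domain='myapp').primary

    # With lazy translation the %-interpolation arguments are stored on the
    # returned Message object so the string can be re-rendered later in
    # another locale.  Supporting str.format() as well would mean storing
    # and replaying those arguments too, which is the extra complexity
    # mentioned above.
    msg = _('Instance %(id)s failed to boot') % {'id': 'abc123'}
    print(oslo_i18n.translate(msg, 'es'))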
** Also affects: oslo.concurrency
Importance: Undecided
Status: New
** Changed in: oslo.concurrency
Status: New => Triaged
** Changed in: oslo.concurrency
Importance: Undecided => Medium
--
The in-memory cache isn't intended for production use anyway, so we
don't want to spend a bunch of time optimizing it.
** Changed in: oslo
Status: Triaged => Won't Fix
--
This is not a bug in Oslo AFAICT. It sounds like an Oslo rpc reference
wasn't cleaned up in Nova when it converted to oslo.messaging. I'm not
even sure where this bad import is happening (it would be good to link
to the file where you see this), but it's likely that something needs to
be imported
** Changed in: oslo
Status: Invalid => Triaged
** Changed in: oslo
Importance: Undecided => Medium
--
https://bugs.launchpad.net/bugs/1334661
Title:
** Also affects: glance
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1301036
Title:
openstack.common.db.sqlalchemy.migration utf8 table che
There is a mailing list discussion about this, but it didn't get a lot
of response. I do think it's something we need to address though.
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028939.html
** Also affects: oslo
Importance: Undecided
Status: New
** Changed in: os
I'm wondering if the correct thing to do here is to make the logging
code respect the reraise parameter and then use that to say that the
exception handler doesn't want to reraise the original exception. That
seems like the behavior we want with reraise anyway.
https://github.com/openstack/oslo-
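For reference, the reraise flag being discussed looks like this in practice; a minimal sketch using the oslo.utils copy of the helper, with process() and recoverable() as hypothetical stand-ins:
    from oslo_utils import excutils

    def process(event):
        # Hypothetical worker that may fail.
        raise ValueError(event)

    def recoverable(event):
        # Hypothetical check for whether the failure can be absorbed.
        return True

    try:
        process('resize')
    except Exception:
        with excutils.save_and_reraise_exception() as ctxt:
            if recoverable('resize'):
                # Setting this tells the context manager not to re-raise the
                # original exception; the suggestion above is that the
                # logging code should honor the same flag.
                ctxt.reraise = False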
Public bug reported:
I don't think this is related to my change and it doesn't reproduce
locally for me, so I'm going to recheck it as a bug. My best guess as
to the cause of the failure is
http://logs.openstack.org/37/61037/22/check/check-tempest-dsvm-neutron/9aee88a/logs/screen-q-svc.txt.gz#_2
** Changed in: oslo
Status: Incomplete => Fix Released
--
https://bugs.launchpad.net/bugs/1199433
Title:
nova boot --num-instances=50 times out
Statu
This wasn't a bug in Nova.
** Changed in: nova
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/1282250
Title:
Unit tests fail with Co
Okay, the reason this works with site-packages disabled is that then it
pulls down oslo.config from pypi, which is older than the one installed
by devstack. I git bisected the problem down to this commit:
https://github.com/openstack/oslo.config/commit/2422d4118c97734067ea0b37ae159bc2e3c492c5
It
Public bug reported:
As of the following commit:
https://github.com/openstack/nova/commit/8a7b95dccdbe449d5235868781b30edebd34bacd
our nova-compute service on the seed node is throwing DBErrors. If I
reset my Nova tree to the previous commit and downgrade the database to
232 then I am able to use
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: tempest
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => High
--
** Also affects: nova
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1225191
Title:
add qpid-python to requirements.txt
Stat
IMHO this is not a bug in Oslo, but a feature request. Please open a
blueprint against Oslo to track this work (I do agree that common code
to do validation is a good idea). Obviously there is some significant
discussion needed first, so I would also suggest sending something to
openstack-dev on t
Marking this fixed based on Qing Xin's comment.
** Changed in: nova
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1168260
Titl