** Also affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808010
Title:
Tempest cirros ssh setup fails due to lac
Public bug reported:
We are seeing this in tempest testing. In some tempest runs the test to
change the user password fails because the account is locked out.
Example traceback can be found at
http://logs.openstack.org/21/485221/2/gate/gate-tempest-dsvm-neutron-
full-ubuntu-xenial/4ecd651/console.
Public bug reported:
glance-manage throws errors under the python3.5 job because it attempts
to open and read a file with utf8 characters in it, but devstack has
hard set LANG=C.
ERROR glance.db.sqlalchemy.metadata [-] Failed to parse json file
/etc/glance/metadefs/compute-trust.json while popula
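The failure mode can be reproduced without glance-manage: under LANG=C, Python 3's locale-default file encoding is ASCII, so reading a UTF-8 JSON file blows up unless an explicit encoding is passed. A minimal sketch (the payload string is made up, standing in for the non-ASCII content of compute-trust.json):

```python
import json
import tempfile

# Hypothetical metadata with a non-ASCII character, standing in for the
# compute-trust.json metadefs file that tripped glance-manage.
payload = {"description": "pools de confiance vérifiés"}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False,
                                 encoding="utf-8") as f:
    json.dump(payload, f, ensure_ascii=False)
    path = f.name

# Relying on the locale default (ASCII under LANG=C) fails; simulate it
# by forcing the ascii codec:
try:
    with open(path, encoding="ascii") as f:
        json.load(f)
except UnicodeDecodeError as exc:
    print("locale-default read failed:", exc.reason)

# An explicit encoding makes the read independent of LANG:
with open(path, encoding="utf-8") as f:
    print(json.load(f)["description"])
```

Passing `encoding="utf-8"` at every `open()` call site is the usual fix, since the service cannot control the environment it is launched from.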
** Also affects: libvirt
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1643911
Title:
libvirt randomly crashes on xenial nod
Public bug reported:
The new subnet pool support in devstack breaks multinode testing because it
results in the route for 10.0.0.0/8 being set via br-ex when the host has
interfaces that are actually on 10.x networks, which is where we need the
routes to go out. Multinode testing is affected bec
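The clash described above can be checked with the stdlib `ipaddress` module; the host network below is a made-up example of an interface address a multinode host might have:

```python
import ipaddress

# Route that devstack's subnet pool support installs via br-ex:
devstack_route = ipaddress.ip_network("10.0.0.0/8")

# Hypothetical network a multinode host's real interface sits on:
host_net = ipaddress.ip_network("10.4.0.0/16")

# The broad route covers the host's network, so traffic for peer nodes
# gets sent out br-ex instead of the physical interface:
print(devstack_route.overlaps(host_net))  # True

# devstack's traditional 172.24.4.0/24 public range does not collide:
public_pool = ipaddress.ip_network("172.24.4.0/24")
print(public_pool.overlaps(host_net))  # False
```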
** Changed in: nova
Status: Expired => New
https://bugs.launchpad.net/bugs/1558807
Title:
Volume attached at different path than reported by nova/c
Public bug reported:
When building keystone's documentation with tox -e venv -- python
setup.py build_sphinx there are many errors/tracebacks. You can see
examples at http://logs.openstack.org/86/320586/61/check/gate-keystone-
docs/e10418b/console.html#_2016-07-12_18_34_14_520037.
To reproduce ru
Public bug reported:
Running the keystone opportunistic mysql and postgresql tests locally I
noticed that the mysql tests run in 18-19 seconds each and the postgresql
tests run in 4-5 seconds each. This may be a performance issue in
keystone itself, in mysql, or in the test framework but it is likel
Public bug reported:
When booting a node with config drive then attaching a cinder volume we
see conflicting information about where the cinder volume is attached.
In this case the config drive is at /dev/vdb, the cinder volume is at
/dev/vdc but volume show reports the cinder volume to be at /dev
Public bug reported:
This may be related to 1290635 but I am not familiar enough with Nova's
dhcp and shelve implementations to know for sure. Also the behavior I am
seeing seems to be slightly different.
In the multinode nova-net job
(http://logs.openstack.org/88/174288/1/check/check-tempest-dsv
Public bug reported:
`nova boot` requires you specify a neutron net-id via the --nic option
when you have more than one network available in neutron. When
attempting to list the networks with `nova network-list` in order to
select a network nova returns a 404.
Talked to mriedem briefly about this
** Also affects: neutron
Importance: Undecided
Status: New
** Also affects: grenade
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/140
Removed openstack-infra and added cinder, nova, tempest because this
looks like a legit failure to remove a volume from an instance.
** Also affects: cinder
Importance: Undecided
Status: New
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: tempest
I
Moved this bug to neutron as it appears to be a valid test run failure
with neutron's test suite.
** Also affects: neutron
Importance: Undecided
Status: New
** No longer affects: openstack-ci
I have removed openstack-ci from the bug as this appears to be an
interaction with nova and cinder (which I have added to the bug).
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: cinder
Importance: Undecided
Status: New
** No longer affects: openstac
The OpenStack Infra team does not run or maintain any code that does VM
resizes. I think this is meant to be a nova bug.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: openstack-ci
Status: New => Invalid
** Also affects: keystone
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1348818
Title:
Unittests do not succeed with random
** Also affects: ironic
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1348818
Title:
Unittests do not succeed with random PYTHONHASHSEED valu
** Also affects: heat
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1348818
Title:
Unittests do not succeed with random PYTHONHASHSEED value
** Also affects: designate
Importance: Undecided
Status: New
** Also affects: glance
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/13
Public bug reported:
New tox and python3.3 set a random PYTHONHASHSEED value by default.
These projects should support this in their unittests so that we do not
have to override the PYTHONHASHSEED value and potentially let bugs into
these projects.
To reproduce these failures:
# install latest t
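The class of failure being described is tests that depend on dict/set iteration order, which varies with the hash seed. A minimal sketch (the snippet and seed values are illustrative, not from any project's test suite): running the same snippet under several seeds shows that emitting elements in sorted order is seed-independent, whereas raw set iteration order is not.

```python
import os
import subprocess
import sys

# A test that prints set elements in raw iteration order is sensitive to
# the hash seed; printing them sorted is not. Run a sorted snippet under
# several seeds and confirm every run agrees.
snippet = "print(','.join(sorted({'alpha', 'beta', 'gamma'})))"

outputs = set()
for seed in ("0", "1", "2", "3"):
    result = subprocess.run(
        [sys.executable, "-c", snippet],
        env={**os.environ, "PYTHONHASHSEED": seed},
        capture_output=True,
        text=True,
    )
    outputs.add(result.stdout.strip())

# Sorting removes the seed dependence, so all seeds produce one output:
print(outputs)  # {'alpha,beta,gamma'}
```

Replacing `sorted(...)` with `list(...)` in the snippet typically produces different orderings across seeds, which is exactly what breaks these unittests when tox stops pinning PYTHONHASHSEED.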
This appears to be a failure in either glance or nova. If you have
reason to believe this is an infrastructure bug please add additional
info and I can reevaluate the status of the bug against infra.
** Changed in: openstack-ci
Status: New => Incomplete
** Also affects: glance
Importanc
This bug isn't invalid. The whole point of this test is that it uses old
code. We start with an icehouse deployment and test that. Then we
upgrade everything to master except for nova cpu and test that. So nova
cpu is running old code and will always run old code in this test.
One way to fix this
This appears to be a nova bug. Tempest has asked nova to perform a task
and it failed.
If you still believe this is an Infra bug please update this bug with
information on why that is the case so that we can debug it further and
fix it.
** Also affects: nova
Importance: Undecided
Status
This appears to be a nova bug. Tempest asked nova to perform an action
and the resulting response was unexpected. It may be a tempest bug but
usually it is an issue in the project being tested.
I have marked this bug as Incomplete for the Infra side, please feel
free to add more info indicating wh
This looks like a nova test fixture bug. I have added nova to the bug
and marked the Infra side incomplete. If you can provide more info that
indicates this is an Infra bug please do and we can update the bug and
hopefully fix it.
All that said I think this is a bug in nova.
** Also affects: nova
Public bug reported:
This unbound local error happens when running the grenade test that does
not upgrade nova-cpu. In this test grenade upgrades all services but
n-cpu then runs some tempest tests. Could be a backward compat issue?
In any case domain is an unbound local variable in
_create_domai
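Nova's actual _create_domain code is not reproduced here, but the Python failure mode is easy to illustrate: a local name bound only on one branch is unbound when the other (here, a stand-in for the non-upgraded n-cpu path) is taken.

```python
# Minimal illustration of the failure mode (not nova's actual code):
# `domain` is only bound on the new-style path, so the backward-compat
# path reaches the return with it unbound.

def create_domain(new_style):
    if new_style:
        domain = "instance-00000001"  # hypothetical domain handle
    # the old code path never binds `domain` before this point
    return domain

print(create_domain(True))

try:
    create_domain(False)
except UnboundLocalError as exc:
    print("old code path:", exc)
```

The usual fix is to bind the name to a sentinel (e.g. `domain = None`) before branching, then handle the None case explicitly.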
Public bug reported:
See http://logs.openstack.org/64/102664/1/check/gate-glance-
python27/36aa4a5/console.html for an example of what I am talking about
it. This makes it hard to understand when things actually break. Where
do you start debugging?
Can we clean this up so that the WARNINGS and ER
Public bug reported:
When unittests fail for nova and neutron the resulting console logs are
quite large.
Nova:
http://logs.openstack.org/56/83256/14/check/gate-nova-python26/294f78f/ 142MB
http://logs.openstack.org/56/83256/14/check/gate-nova-python27/195cbd3/ 142MB
Neutron:
http://logs.opensta
** Also affects: nova
Importance: Undecided
Status: New
** Also affects: neutron
Importance: Undecided
Status: New
** Changed in: openstack-ci
Status: New => Incomplete
** Changed in: openstack-ci
Status: New => Invalid
https://bugs.launchpad.net/bugs/1280134
Title:
Can't run python 2.6 tests with nosetests
Status in ANVIL for forgi
** Also affects: grenade
Importance: Undecided
Status: New
** Also affects: horizon
Importance: Undecided
Status: New
** Changed in: openstack-ci
Status: New => Incomplete
** Changed in: zuul
Status: New => Fix Released
** Changed in: git-review
Status: New => Fix Released
** Changed in: python-swiftclient
Status: New => Fix Released
OpenStack Infra now runs jobs once a day that update the .pot file for
nova. These updates are then proposed as changes in gerrit so that they
can be reviewed and approved. This only happens if there are significant
changes to merge. And only one change is ever open at one time. If an
existing chan
The CI "unittest" nodes now provide both mysql and postgres servers for
testing against. The openstack_citest DB user has access to a schema
called openstack_citest against which DB testing can happen. Nova and
others are using this to test DB migrations.
In addition to unittest nodes providing ac