Public bug reported:
OpenStack operators and folks who automate openstack deployments with
tools like puppet rely on the release notes [1] and config guides [2] to
highlight new, changed, deleted, and deprecated config options. For
Keystone Newton, both of these guides are missing many features.
** Also affects: keystone
Importance: Undecided
Status: New
** Description changed:
+ Upgrade Process Docs:
+ http://docs.openstack.org/developer/keystone/upgrading.html#upgrading-without-downtime
+
The new keystone upgrade features (keystone-manage db_sync --expand)
require
Puppet stopped shipping this script; we get it from the packages or the
code depending on how you install, so this bug no longer applies to us.
** Changed in: puppet-keystone
Status: Confirmed => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering
This was fixed in keystone itself.
** Changed in: puppet-keystone
Status: Confirmed => Won't Fix
--
https://bugs.launchpad.net/bugs/1470635
Cannot repro.
** Changed in: nova
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1604456
Title:
cannot get extra_specs on private flavors
Public bug reported:
Making the call in the client to retrieve the extra_specs for a private
flavor (that is shared with the tenant being used) fails with a "not
found" exception. This is on Liberty.
(Pdb) print flavor
(Pdb) print flavor.name
mfisch
(Pdb) flavor.get_keys()
*** ClientException:
Public bug reported:
Sequence of events.
- Fernet tokens (didn't test with UUID)
- Running cluster with Liberty from about 6 weeks ago, so close to stable.
- Upgrade Keystone to Mitaka (automated)
- Tokens fail to issue for about 5 minutes; after this time, all the cached
tokens are gone
-
Public bug reported:
Keystone middleware's caching of tokens offers HMAC validation and
encryption of the tokens in the cache. This is important because
memcache has literally zero authentication or protection from any user
on the system. So this feature should be ported in from keystone
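The kind of protection described can be sketched with the stdlib alone. This is a hypothetical illustration (the key name and cache layout are invented here), not keystonemiddleware's actual cache format:

```python
import hmac
import hashlib

SECRET = b'memcache_secret_key'  # hypothetical; in practice read from config

def protect(value: bytes) -> bytes:
    """Prefix a cached value with an HMAC so tampering in memcache is detected."""
    mac = hmac.new(SECRET, value, hashlib.sha256).hexdigest().encode()
    return mac + b':' + value

def unprotect(blob: bytes) -> bytes:
    """Verify the HMAC before trusting anything read back from the cache."""
    mac, _, value = blob.partition(b':')
    expected = hmac.new(SECRET, value, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(mac, expected):
        raise ValueError('cache entry failed HMAC validation')
    return value

blob = protect(b'serialized-token-data')
print(unprotect(blob))  # round-trips cleanly when untampered
```

Encrypting the value before signing (as the middleware feature also offers) would additionally hide token contents from anyone who can read memcache.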
Public bug reported:
Tokens stored in memcache have no (or improper) expiration data when set.
Found on stable/mitaka and stable/liberty using both the cachepool backend
and the non-pooled backend.
When you store in memcache you can optionally pass in a time at which
the value is no good, a ttl. Keystone
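The TTL mechanics described above can be illustrated with a toy in-memory store (hypothetical code, not Keystone's actual cache backend): an entry set without a TTL simply never expires.

```python
import time

class TinyCache:
    """Toy memcache-style store: set() takes an optional TTL in seconds."""
    def __init__(self):
        self._data = {}

    def set(self, key, value, ttl=0):
        # ttl=0 mimics memcached's "never expire" -- the bug described above:
        # tokens stored without a TTL outlive their own validity.
        expires = time.monotonic() + ttl if ttl else None
        self._data[key] = (value, expires)

    def get(self, key):
        value, expires = self._data.get(key, (None, None))
        if expires is not None and time.monotonic() > expires:
            del self._data[key]
            return None
        return value

cache = TinyCache()
cache.set('token-no-ttl', 'payload')          # never expires
cache.set('token-ttl', 'payload', ttl=0.01)   # expires after 10 ms
time.sleep(0.05)
print(cache.get('token-no-ttl'))  # still present
print(cache.get('token-ttl'))     # expired -> None
```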
Public bug reported:
There seems to be a code error in the memcache pool cleanup. I'm seeing
this on stable/liberty (built as of 2 weeks ago). I don't have a
specific reproducer for this; it just seems to happen. Looking at the
code I don't really understand how this can happen.
2016-04-13
Marking invalid for Puppet due to lack of information per Emilien's
request.
** Changed in: puppet-keystone
Status: Incomplete => Invalid
--
Public bug reported:
During an overnight maintenance and reboot of a control node (a
non-keystone node) that had been up for around 300 days, we found over
144k keystone-signing-* folders in /tmp. This caused the maintenance
window to be missed because it took so long to clean /tmp on reboot. It
Public bug reported:
Our load balancer health checks (and other folks too) just load the main
glance URL and look for an HTTP status of 300 to determine if glance is
okay. Starting in Kilo, I think, glance changed and now logs a warning.
This is highly unnecessary and ends up generating gigs of
Not a puppet bug.
** Project changed: puppet-neutron => neutron
--
https://bugs.launchpad.net/bugs/1474204
Title:
vlan id are invalid
Status in neutron:
New
Bug
Public bug reported:
Keystone token validation operations are much slower than UUID
operations. The performance is up to 4x slower, which makes other
OpenStack API calls slower too.
Numbers from Dolph:
Token validation performance:
Token type | Response time | Requests per second
UUID: 18.8 ms (baseline)
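As a back-of-envelope check on the numbers above: taking the 18.8 ms UUID baseline and the "up to 4x slower" figure at face value puts the slow path near 75 ms per validation:

```python
# Back-of-envelope arithmetic on the figures quoted above.
baseline_ms = 18.8            # UUID validation, from Dolph's numbers
slowdown = 4.0                # "up to 4x slower"
worst_case_ms = baseline_ms * slowdown
per_second = 1000.0 / worst_case_ms   # single-threaded requests/second
print(f'{worst_case_ms:.1f} ms -> {per_second:.1f} req/s')
```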
Public bug reported:
I am able to verify Fernet tokens that contain garbage at the end, not
so with UUID tokens.
For example.
UUID:
curl -H X-Auth-Token:84db9247b27d4fe6bd0a09b7b39281e2
http://localhost:35357/v2.0/tokens/84db9247b27d4fe6bd0a09b7b39281e2
Works
curl -H
Public bug reported:
The sample conf shows that a bunch of the swift options in glance-store
are deprecated. We set some of these with puppet so I'd like to know
where they're moving to or if they're just going away so that I can
update the puppet-glance module.
The configuring.rst doc needs to
Public bug reported:
tox -egenconfig is the right way to get sample config files since they
stopped being checked into the tree. It works for me on master, but not
on stable/juno.
Note, despite the statement that it succeeded, it in fact did not.
That's another bug.
Public bug reported:
Horizon is using the python object to make a page title for heat
resources, and hence it looks pretty horrid. See screenshot
** Affects: horizon
Importance: Undecided
Assignee: Eric Peterson (ericpeterson-l)
Status: New
** Attachment added:
Public bug reported:
When you use a Heat template that has parameters with default values but
they're hidden, the defaults are not loaded. This is annoying because
you then have to unhide them just to avoid setting them manually.
This is easy to repro with the Wordpress 2 Instance template in
Public bug reported:
The revoke driver default in the sample config and in config.py is still
using the deprecated driver, causing deprecation warnings and scary
messages about removal to show up in log files.
2014-10-23 14:52:20.787 1236 WARNING
keystone.openstack.common.versionutils [-]
Public bug reported:
The XmlBodyMiddleware driver claims to have been deprecated since
Icehouse and says it will be removed, so it shouldn't be the
default, because scary messages like this imply that Keystone will stop
working in the next version for us...
2014-10-23 14:52:20.754 1236
Public bug reported:
When using LDAP backend with keystone Juno RC2, the following error
occurs:
AttributeError: 'module' object has no attribute 'LDAP_CONTROL_PAGE_OID'
It looks like that attribute was removed in python-ldap 2.4 which breaks
Ubuntu Trusty and Utopic and probably RHEL7.
More
Public bug reported:
We have two regions in our environment. When OS_REGION_NAME is set to
WEST and I run nova endpoints, I get endpoints back for EAST. The
keystone CLI does the correct thing and returns endpoints matching the
region.
The east region comes first in our endpoint list which I
Public bug reported:
I deployed a bad endpoint today and Keystone's failure case was that it
was refusing to authenticate users. This was a rather severe failure for
a bad swift admin URL. An error level log is fine, but I'd prefer not to
impact the rest of my users.
2014-07-23 16:33:39.435 6722
Even when setting the available regions, Horizon is always talking to
the first Identity endpoint in the list, ignoring what's in that region
list except for the initial login. This does not seem to be the correct
behavior. In our case the identity system is global, but we have
separate VIPs
Public bug reported:
in the code to generate the openrc file, the region is not passed into
the code that finds the keystone endpoint. The result is that you end up
with the first endpoint in the list.
** Affects: horizon
Importance: Undecided
Assignee: Matt Fischer (mfisch
Public bug reported:
After some investigation it appears that when you login to Horizon and
go to create a new project, it gets the default neutron floating IP
quota value from the current value that the logged in user has. This was
quite confusing.
This will likely not be fixable until this
Public bug reported:
During user creation, if the primary project is set, an error message
pops up.
Error: Unable to add user to primary project.
The logs contain this error:
2014-06-25 20:26:49,165 11965 WARNING horizon.exceptions Recoverable
error: Conflict occurred attempting to store role
Public bug reported:
See: http://docs.openstack.org/developer/horizon/topics/deployment.html#secure-site-recommendations
The docs recommend setting SESSION_COOKIE_HTTPONLY = True, however this
is already the default:
Public bug reported:
The Secure Site Recommendations
(http://docs.openstack.org/developer/horizon/topics/deployment.html#secure-site-recommendations)
do not mention anything about the
LOGGING section. One specific issue that should be covered is that if
you ship the example config file, it
Public bug reported:
In Icehouse the volume limits screen is showing some Django object names
or similar on the screen.
See the attached screenshot.
** Affects: horizon
Importance: Undecided
Status: New
** Attachment added: upload.png
Public bug reported:
My horizon log is full of DEBUG messages even though I have debug
disabled. This is going to waste disk space and cause unnecessary
overhead.
From local_settings.py:
DEBUG = False
TEMPLATE_DEBUG = DEBUG
Here's a sample:
2014-06-18 16:41:29,857 6253 DEBUG horizon.base Panel
Public bug reported:
When creating a container, the Horizon behavior is confusing. It loads
you into the container, but the container is not listed. You have to
back up the URL and reload to get it to show up. Our users find this
very confusing.
As you can see in the screen shot the URL shows
Public bug reported:
Neutron does not seem to implement the default security groups calls, so
when neutron is managing security groups, nova tries to pass the call
off to it (I think) and fails. I think this bug is really against
neutron and nova, but I'm not sure where to start. I'm not sure if
This is a useful feature for us as an operator, so I'd like to see
option 2. I've added Neutron as an affected project. Depending on how
the discussion goes we can remove Nova as affected.
** Also affects: neutron
Importance: Undecided
Status: New
--
Public bug reported:
From a discussion in the operator conference at the openstack summit
As operators we sometimes need to look at the default config files to
ensure that we're set right, where we've varied from the defaults, and
even what's changed in trunk. It looks like that nova.conf
** Project changed: keystone => python-keystoneclient
** Changed in: python-keystoneclient
Status: New => Confirmed
--
https://bugs.launchpad.net/bugs/1313266
Title:
in: glance
Assignee: Matt Fischer (mfisch) => (unassigned)
--
https://bugs.launchpad.net/bugs/1315121
Title:
lack of logging when an invalid store is used leads to confusion
Status
Public bug reported:
EDIT: Havana h.2 is what we're using on Ubuntu
When the http method in known_stores is disabled, mistakenly or not, the
failure mode is annoyingly silent. I spent a day trying to track down
why I could not load images from a URL, and the logs were completely
non-helpful,
Public bug reported:
I spent some time today trying to debug why neutron was seeing some
token failures. The Keystone logs unnecessarily add a period to the end
of many of these messages, which makes copying and pasting these tokens
for lookup more work than it should be.
For example:
2014-04-28
Public bug reported:
The documentation on auth plugins
(http://docs.openstack.org/developer/keystone/configuration.html#how-to-implement-an-authentication-plugin)
does not state that it's a V3 feature. I did a bunch of tests today and
found that it's being ignored.
You can set the config to
Public bug reported:
The default value for user_enabled_default is 'True', but the code
appears to want it to be an int, from ldap.py:
enabled = int(obj.get('enabled', self.enabled_default))
Since True in Python is 1, I think changing all the examples and the
sample conf to '1' would make sense.
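A quick illustration of the coercion problem (not Keystone's actual code path): int() accepts a real boolean or the string '1', but raises on the string 'True' that the sample config effectively supplies.

```python
def coerce_enabled(value):
    """Mirrors the ldap.py coercion quoted above: int(obj.get('enabled', default))."""
    return int(value)

print(coerce_enabled(True))    # a real boolean coerces cleanly: True -> 1
print(coerce_enabled('1'))     # the documented '1' form also works
try:
    coerce_enabled('True')     # the string 'True' does not
except ValueError as exc:
    print('ValueError:', exc)
```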
Public bug reported:
I have a pending change for a doc and code comment and it's (just now)
throwing these errors with the s3 middleware tests. The python26 and 27
tests are failing.
FAIL:
keystone.tests.test_s3_token_middleware.S3TokenMiddlewareTestBad.test_unauthorized_token
FAIL:
Public bug reported:
I've been trying to boot a saucy guest and have been unable to get
landscape configured properly. Following the landscape example, I've run
into a few issues.
My first config was pretty full (like the example), but after lots of
failures, I pared it down to this simple case:
Importance: Undecided
Assignee: Matt Fischer (mfisch)
Status: In Progress
** Changed in: keystone
Assignee: (unassigned) => Matt Fischer (mfisch)
** Changed in: keystone
Status: New => Incomplete
** Changed in: keystone
Status: Incomplete => In Progress
--
Public bug reported:
I added 2 log messages to glance and then my gate broke. It looks like
it got stuck in SHELVED and failed to reach SHELVED_OFFLOADED,
although I lack the knowledge to know what this really means. I'm not
even sure if this bug belongs in nova or elsewhere.
Public bug reported:
I'm on Ubuntu 12.04 using havana 2013.2.1. What I've found is that the
LDAP identity backend for keystone will not talk to my LDAP server
(using ldaps) unless I have an ldap.conf that contains a TLS_CACERT
line. This line duplicates the setting of tls_cacertfile in my
Public bug reported:
When I was first setting up a connection to LDAP via keystone I fought
through some configuration issues. One of the first issues is that I had
user_name_attribute incorrect, so that it could not validate my specified
user on a request like keystone user-list. Unfortunately
Public bug reported:
On both my Ubuntu box and my Mac, I've been unable to run the glance
tests since this evening due to a missing dependency, specifically a
version of psutil between 0.6 and 1.0. The archive only has 1.1 and up.
Here are the logs:
Downloading/unpacking psutil>=0.6.1,<1.0 (from
Public bug reported:
RHEL 6.5 running Havana
Apparently I'm using 120GB of 98GB of storage. This causes the disk usage graph
to appear about 75% used. I guess it overflowed? Anyway my suggestion would be
to make the graph max at 100% full, I don't know another way to express
overflow in a pie
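The suggested fix amounts to clamping the fill fraction at 100%. A minimal sketch (hypothetical helper, not Horizon's actual chart code):

```python
def usage_fraction(used_gb, quota_gb):
    """Fraction of the pie to fill; clamped so >100% usage renders as a full pie."""
    if quota_gb <= 0:
        return 1.0  # no (or zero) quota: treat as fully used rather than dividing by zero
    return min(used_gb / quota_gb, 1.0)

print(usage_fraction(120, 98))  # overflow case from the report: clamps to 1.0
print(usage_fraction(49, 98))   # normal case: 0.5
```

A separate "over quota" label alongside the full pie would preserve the overflow information the clamp hides.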