On 05/21/2014 08:23 PM, John Dickinson wrote:
On May 21, 2014, at 4:26 PM, Adam Young <ayo...@redhat.com> wrote:

On 05/21/2014 03:36 PM, Kurt Griffiths wrote:
Good to know, thanks for clarifying. One thing I'm still fuzzy on, however, is 
why we want to deprecate use of UUID tokens in the first place? I'm just trying 
to understand the history here...
Because they are wasteful, and because they are the chattiest part of 
OpenStack.  I can go into it in nauseating detail if you really want, including 
the plans for future enhancements and the weaknesses of bearer tokens.


A token is nothing more than a snapshot of the data you get from Keystone,
distributed out to the services. It is stored in memcached, and the Horizon
session uses the hash of it as its key.

You can do the same thing. Once you know the token has been transferred to a
service once, and assuming that service has caching turned on, you can pass
the hash of the token instead of the whole thing.
So this would mean that a Swift client would auth against Keystone to get the PKI token, 
send that to Swift, and then get back from Swift a "short" token that can be 
used for subsequent requests? It's an interesting idea to consider, but it is a new sort 
of protocol for clients to implement.
It would probably be more correct for Swift to calculate that, yes, but the client could also just calculate the hash and send it on subsequent requests. As you pointed out, it is a matter of performance.
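For concreteness, a minimal sketch of that client-side approach (the MD5 choice matches the current middleware cache key discussed below; the HTTP client and the Swift endpoint are just illustrative placeholders):

    import hashlib

    import requests  # any HTTP client would do; this one is illustrative

    SWIFT_URL = 'https://swift.example.com/v1/AUTH_test'  # placeholder endpoint


    def short_token(pki_token):
        # The cache key auth_token middleware uses today is the MD5 hex digest
        # of the full token body, so the hash can stand in for the token itself.
        return hashlib.md5(pki_token.encode('utf-8')).hexdigest()


    def list_containers(pki_token, already_sent=False):
        # Send the full token once so the service can validate and cache it;
        # on later requests send only the hash.
        token = short_token(pki_token) if already_sent else pki_token
        return requests.get(SWIFT_URL, headers={'X-Auth-Token': token})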




Actually, you can do that up front, as auth_token middleware will just default 
to an online lookup. However, we are planning on moving to ephemeral tokens 
(not saved in the database), and an online lookup won't be possible with those.  
The people who manage Keystone will be happy with that, and forcing an online 
lookup would make them sad.
An "online lookup" is one that calls the Keystone service to validate a token? 
Which implies that by disabling online lookup there is enough info in the token to 
validate it without any call to Keystone?
Yes. That is what the whole popen call to openssl is for: verifying the messages.
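For anyone following along, a rough sketch of that offline check, assuming the token body has already been unpacked into a PEM CMS blob and the signing and CA certificates have been fetched from Keystone; the flags and paths here are illustrative, and the real logic lives in python-keystoneclient's cms module:

    import subprocess


    def verify_pki_token(token_pem, signing_cert='/tmp/signing_cert.pem',
                         ca_cert='/tmp/cacert.pem'):
        # popen out to openssl; note that no call to Keystone happens per request.
        proc = subprocess.Popen(
            ['openssl', 'cms', '-verify',
             '-certfile', signing_cert, '-CAfile', ca_cert,
             '-inform', 'PEM', '-nodetach', '-nocerts', '-noattr'],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate(token_pem)
        if proc.returncode != 0:
            raise ValueError('token signature verification failed: %s' % err)
        return out  # the JSON snapshot of the Keystone data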

I understand how it's advantageous to offload token validation away from Keystone itself 
(it helps with scaling), but the current "solution" here seems to be pushing a lot 
of pain onto consumers and deployers of data APIs (e.g. Marconi, Swift, and others).
We try to encapsulate it all within auth_token middleware, but the helper functions are in python-keystoneclient if you need more specific handling.
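For reference, this is roughly what that encapsulation looks like from a service's point of view. The module path is the Icehouse-era keystoneclient.middleware.auth_token, the option names are the common ones from that era, and the values are placeholders:

    from keystoneclient.middleware import auth_token


    def simple_app(environ, start_response):
        # By the time a request gets here, the middleware has already validated
        # the token and stashed the caller's identity in the WSGI environ.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [environ.get('HTTP_X_USER_ID', 'anonymous')]

    conf = {
        'auth_host': 'keystone.example.com',    # placeholder values
        'admin_user': 'service',
        'admin_password': 'secret',
        'admin_tenant_name': 'service',
        'signing_dir': '/var/cache/myservice',  # certs for offline PKI validation
    }
    app = auth_token.AuthProtocol(simple_app, conf)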



The hash is MD5 up through what is released in Icehouse.  The next version of 
auth_token middleware will support a configurable algorithm.  The default 
should be updated to sha256 in the near future.
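In other words, something along the lines of the sketch below, where the algorithm name would come from the middleware's configuration; the function and parameter names here are assumptions, not the final interface:

    import hashlib


    def token_cache_key(token_id, algorithm='md5'):
        # 'md5' is the Icehouse behaviour; 'sha256' is the expected future default.
        return hashlib.new(algorithm, token_id.encode('utf-8')).hexdigest()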
If a service (like Horizon) is hashing the token and using that as a session 
key, then why does it matter what the auth_token middleware supports? Isn't the 
hashing handled in the service itself? I'm thinking in the context of how we 
would implement this idea in Swift (exploring possibilities, not committing to 
a patch).
That hashing happens after the service has received the full token. So, Horizon could send the hash to Nova, but Nova would then be required to make the call to Keystone, just like with UUID tokens. That would break under the ephemeral approach.

I'm exploring the Horizon side of the equation for some other reasons, primarily in the context of Kerberos support, but also for better revocation rules. If the onus is on the client (in this case Horizon) to remember whether it has sent a particular token in full form, it might be a little hard to keep track.

What communication is most impacted by the large token size? Is it fetching images for a web page, or something like that?








From: Morgan Fainberg <morgan.fainb...@gmail.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Wednesday, May 21, 2014 at 1:23 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] Concerns about the ballooning size of keystone 
tokens

This is part of what I was referencing in regards to lightening the data stored in the 
token. Ideally, we would like to see an "ID only" token that contains only the 
basic information needed to act. Some initial tests show these tokens should be able to 
clock in under 1k in size. However, all the details are not fully defined yet. Coupled 
with this data reduction, there will be explicit definitions of the data that is meant 
to go into the tokens. Some of the data we have now is there simply as a convenience 
for accessing it.

I hope to have this token change available during the Juno development cycle.

There is a lot of work to be done to ensure this type of change goes smoothly. 
But this is absolutely on the list of things we would like to address.

Cheers,
Morgan

Sent via mobile

On Wednesday, May 21, 2014, Kurt Griffiths <kurt.griffi...@rackspace.com> wrote:
adding another ~10kB to each request, just to save a once-a-day call to
Keystone (i.e. UUID tokens) seems to be a really high price to pay for not
much benefit.
I have the same concern with respect to Marconi. I feel like PKI tokens
are fine for control plane APIs, but they don't work so well for high-volume
data APIs where every KB counts.

Just my $0.02...

--Kurt

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
