Mark Hedges wrote:
Thanks, I really do appreciate your comments.

On Mon, 29 Dec 2008, David Ihnen wrote:
Yes, I am aware of how OpenID works.  And it works in-band
unless the application explicitly sidelines it - there is
no inherent side-band communication that the client and
server will use - otherwise, you wouldn't EVER do a main
state redirect.

It does?
It does work in-band, yes.  The main session flow is going to be redirected.
That would be great. How?
... to do a sideline authentication? Once the auth state is established, though, the current pages will work fine - you don't have to do them in a linear way. Like I said, you can program the client application to handle the interruption transparently and resubmit the form whose state it maintained. Or pop up a new window to do the auth in, or any number of variants. I'm sure you understand that once you nav away from a page through a whole-page submit, its state is (well, can be) gone for good. You're in a linear sequence of redirects at that point, and if for some reason you're not authenticated, you're not going to regain that state.
 Why does the consumer
object return a check url?
So that the client can use the url to check it.  You know this.
Why does it have the return_to parameter?
So that you can reinsert yourself into the flow of your web application. Heck, many applications just land you back at the home page - the return_to being pretty much statically configured. What happens when you time out in the middle of a session just isn't that critical to most specifications.
 From Net::OpenID::Consumer:

 # now your app has to send them at their identity server's endpoint
 # to get redirected to either a positive assertion that they own
 # that identity, or where they need to go to login/setup trust/etc.

 my $check_url = $claimed_identity->check_url(
   return_to  => "http://example.com/openid-check.app?yourarg=val",
   trust_root => "http://example.com/",
 );

 # so you send the user off there, and then they come back to
 # openid-check.app, then you see what the identity server said;

Is that module supposed to work some other way than
with redirects?
Couldn't say for sure, as I haven't closely inspected the module in question. I don't think this programming precludes its use as a side-band authenticator, though it will dictate the particular form/method/sequence your client-side application takes to deal with this. That is, if the client maintained the state of the form it submitted while you were dealing with the reauthentication in an iframe, your 'bounce back' url can send instructions to the browser to resubmit the form, this time with the session intact.
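To make that concrete, the bounceback handler's decision tree might look something like this - a hedged sketch; `handle_bounceback` and its return strings are my own invention, and the consumer method names (`user_setup_url`, `user_cancel`, `verified_identity`, `err`) are from my reading of the Net::OpenID::Consumer docs, so check them against the version you have installed:

```perl
use strict;
use warnings;

# Decide what to do once the user bounces back to openid-check.app.
# $csr is the Net::OpenID::Consumer object built from the request args.
sub handle_bounceback {
    my ($csr) = @_;
    if (my $setup_url = $csr->user_setup_url) {
        return "redirect:$setup_url";        # user still has to log in at their provider
    }
    if ($csr->user_cancel) {
        return 'cancelled';                  # user refused to authenticate
    }
    if (my $vident = $csr->verified_identity) {
        return 'verified:' . $vident->url;   # positive assertion; mark the session authed
    }
    return 'error:' . $csr->err;             # signature check failed or similar
}
```

The point being: only the 'verified' branch touches your session machinery; everything else is just routing.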
I thought the point was that they log into the OpenID server
and bounce back to my app.  That way they never have to
trust my app with their password or other credentials.
Yes! That is the point of OpenID. And most auth systems work this way, in my experience (at least the ones that involve the likes of authen handlers rather than application-level programming alone). The fact that the server signature part is not done locally is merely a detail. The basic redirect-to-page -> submit-authentication-token flow is how this stuff generally works.
The moment you have to redirect to that openid server
page, you have sidelined the entire stream of
browser-server communication - and as you have found in
the problem you're trying to solve - the state inherent
therein, including the content of the original request.

Is the utilization of the stored form data going to be
through a different connection/request entirely after
authentication verification? It would require some tests to
see if the client behaves that way or not. I suspect it's
not defined to be one way or the other, but I may be
wrong.

Not following you there.
When you redirect a request with a body, you lose the complete state of the original request, as you no longer have that request body. It seems you want to save this on the server, but that's problematic:

client -> POST <formhandler> -> load balancer -> server 556 dublin -> saves<requestbody> -> redirect OPENID <success bounceback url>

client->OPENID -> verify -> redirect <success bounceback url>

in the meantime server 556 dublin suffered a network connector air gap issue. A trouble ticket has been created. These things happen.

client -> GET <success bounceback url> -> load balancer -> server 22 london -> looks up saved requestbody !!!

This is the problem point. Your framework would be depending on the request-body save/retrieve functionality being operational on every server that might serve the bounceback url request - regardless of whether they're even in the same physical proximity or data realm. They must somehow share a backstored saved state, or depend on the server that saved the state being available when it is needed.
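Here's a sketch of that dependency, with a plain hash standing in for whatever shared backstore (memcached, a database, etc.) the framework would need; `save_request_body` and `resume_request` are hypothetical names, not anything the framework actually exposes:

```perl
use strict;
use warnings;

# A hash standing in for a *shared* backstore. If this were per-server
# memory, server 22 london could never see what server 556 dublin saved.
my %backstore;

# Save the POST body under a one-time key; the key rides along in return_to.
sub save_request_body {
    my ($body) = @_;
    my $key = sprintf '%08x%08x', time, int rand 0xFFFFFFFF;
    $backstore{$key} = $body;
    return $key;
}

# One-shot retrieval on the bounceback; undef if the store lost it.
sub resume_request {
    my ($key) = @_;
    return delete $backstore{$key};
}

# server 556 handles the POST, saves the body, redirects to the OpenID server:
my $key       = save_request_body('subject=hello&body=world');
my $return_to = "http://example.com/openid-check.app?resume=$key";

# ...after the OpenID round trip, *whichever* server gets the bounceback
# can only resume the request if %backstore is genuinely shared:
my $body = resume_request($key);
```

The whole scheme stands or falls on that one `delete` finding the saved body from a different process on a different machine.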

Is the controller framework going to require me to depend
my web system operations on some sort of semi-persistent
pan-server correlated session state system?  Would that
not be requiring me to implement my web application in a
particular way?  Okay, that may indeed be the role of a
framework though I'd no doubt chafe at the limitations
myself.  If I have to write my web application a certain
way, is it so unusual to have my pages need to interact
with that application a certain way?  They're almost
inevitably closely coupled.

That's a good point.  But no, it doesn't depend on the
session, you don't have to have a session attached to use
the controller framework.  You do have to have a session
attached to use the OpenID layer.
I don't quite understand. A session of what sort, exactly? A back-stored, persistent-across-servers session? A secure ticket-cookie that tells my application that this client is known-and-authenticated already? To my understanding, once the OpenID server has the client post the signed result that originates from the OpenID provider, they're authenticated, and beyond tracking that 'this user is an authenticated user' in *SOME* way (keys in forms, cookies, url path fragments, what-have-you) there is no need to maintain any concept of a session beyond that.

Is this something about the framework that requires a backstore? Is that going to be scalable?
This is a fairly sticky issue - if you have run out of
local authentication token, it's impolite to drop data they
were submitting.  But on the other hand, there's no
particularly good way of *not* dropping it - you can't
really handle information if they're not authenticated for
it.  And out of pure defensive network traffic handling,
we do the absolute minimum for people who aren't
authenticated - so they can't consume our system
resources, be that posts-to-sessions-that-don't-exist or
what.

That's true, that's why I think it will not try to preserve
the request body unless they already were authenticated once
and just timed out.  I think that's useful.

Here's a thought: what if you fully handled the post as it came in - a short-time reprieve from having to do the redirect? If you already know they WERE authenticated, just accept their slightly expired ID, handle the form submit appropriately, and then redirect when you're done. Have the bounceback go to the proper result page. It amounts to a tri-state session: 'good', 're-auth', and 'defunct'.
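That tri-state idea could be as simple as classifying the token by its age - a sketch with window lengths I've invented for illustration:

```perl
use strict;
use warnings;

my $AUTH_WINDOW  = 30 * 60;       # token fully valid for 30 minutes
my $GRACE_WINDOW = 24 * 60 * 60;  # accept-but-reauth for a further day

# Classify a session by when its token was issued:
#   'good'    - serve the request normally
#   're-auth' - handle the POST, then redirect through the OpenID dance
#   'defunct' - treat as unauthenticated; drop the data defensively
sub session_state {
    my ($issued_at, $now) = @_;
    $now //= time;
    my $age = $now - $issued_at;
    return 'good'    if $age <= $AUTH_WINDOW;
    return 're-auth' if $age <= $GRACE_WINDOW;
    return 'defunct';
}
```

Only the middle state buys you the "accept the slightly expired ID" reprieve; the outer two are the behaviours you already have.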

I was seriously thinking you were in a situation where you honestly could not tell - maybe you set the cookie with an expiration date, and it's gone, it's not being sent, and you have no idea who this request is coming from. It's different if you know. If you do know they are (were) authenticated, not receiving the request is just being stubborn and inflexible, isn't it?
I can see programming the client side of the web
application to handle this kind of token-loss-recovery
automatically - the client has the form state and being
able to recover the session state is valuable, and
entirely independent from the framework utilized.  But I'm
not convinced that the web server should be jumping
through hoops/proxies to make it happen.  (not that you
have to convince me, I'm trying to present a perspective
that may be novel and generally be helpful in improving
your software, and we may just disagree on the role of the
software involved)

That's probably what DAV clients expect to do, and probably
what an AJAX client would do too.  After thinking about it,
it's not clear that my conception of this module would be
useful to an AJAX application or really any other automated
type of code interface -- it would not make sense for an XML
PUT to get redispatched to a GET request for an HTML login
form.
Heh, you have a point there. I'd be more interested in getting an error response telling me that something had to be done than in getting willy-nilly redirects that violate the established communication protocol. If I'm doing some kind of RPC, a redirect to html is definitely unexpected.
 I think that in those cases you would have to
configure it with absolute URLs so that redirects to
login/register/openidserver are used, instead of internal
redispatching to login/register.  An asynchronous component
would then have to watch for redirects and deal with them.
Hm. But arguably in the middle of a session you don't have this problem. The session is active.
For those types of cases, it would make more sense to use a
real Authen handler than returned DECLINED if they were not
logged in, something in the style of Apache2::AuthenOpenID.
Sounds reasonable to me. I like the flexibility of using the hooks of Apache.
Incidentally that uses redirects too, I don't see how you
get around "side band" communication with OpenID.
You still get to decide what you do when you're not authenticated. Just because you have an Authz hook defined does not mean you actually redirect - it's entirely up to you what you do with your handler. Maybe it just returns an error in the appropriate protocol instead of redirecting somewhere else. Though the particular implementation of Apache2::AuthenOpenID may differ from my concept of flexibility in this regard. Subclass it? *shrug*.
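For instance, a handler could pick its failure mode by protocol - a sketch with an Accept-header heuristic of my own, not anything Apache2::AuthenOpenID actually does:

```perl
use strict;
use warnings;
use constant { HTTP_REDIRECT => 302, HTTP_UNAUTHORIZED => 401 };

# Given the client's Accept header, decide how to fail an unauthenticated
# request: browsers get bounced to the login flow, RPC/XML clients get a
# protocol-appropriate 401 instead of an HTML page they can't use.
# The login URL here is a placeholder.
sub unauthenticated_response {
    my ($accept_header) = @_;
    if ($accept_header =~ m{\btext/html\b}) {
        return (HTTP_REDIRECT, 'Location: http://example.com/openid-login');
    }
    return (HTTP_UNAUTHORIZED, 'WWW-Authenticate: OpenID');
}
```

In a real mod_perl handler you'd read the header off `$r` and return the matching `Apache2::Const` value, but the decision itself is this small.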
Hrmm, looking at danjou's module I'm not sure if I'm doing
the token checking correctly... but maybe that is
effectively done by keeping the session id current.
I think you may be on the right track there.
 Hrmm,
if I passed the token as a separate cookie would that be an
extra layer of security to "prove" they owned the session
id?  Not sure about this stuff.
Prove they own the session ID?

It took me a while to figure out what you are suggesting. I assume this arises because you have a session key that, rather than having the inherent session data in it, is a sequence number that could be mangled by an end user to try to step into an alternative session they don't own.

Easy to fix that. Brief recipe: make the cookie value with a simple signature, then validate it. It's just shared-secret validation, but it makes it almost impossible for end users to mangle your cookies. And you can change the secret if it's ever compromised. Forgive me if I typo; this is off the cuff.

use Digest::MD5 qw(md5_hex);

sub secret { 'a;slho4hlzdjknv;lxza adih' }

sub cookie_value {
 my $sessionid = shift;
 return join ':', $sessionid, signature($sessionid);
}

sub signature {
 my $sessionid = shift;
 return md5_hex($sessionid . secret());
}

sub get_session_from_cookie_value {
 my $value = shift;
 # assumes session ids contain no colons
 my ($session_id, $signature) = split /:/, $value;
 return 0 unless defined $signature;
 return $session_id if ($signature eq signature($session_id));
 return 0;
}

So as long as you make it reasonably difficult to mangle their cookies: they have your token, so you've got to accept that they are who you authenticated that token for. You can certainly require re-authentication periodically to make it that much more difficult for any particular token to be abused - it's only good for so long, so recording and replaying traffic won't give you tokens that are valid later. (The cookie validator above could also contain a time used to detect that state; I know mine do.) But regardless of the desire to force re-authentication in a window, this does not force you to reject the request out of hand - particularly if it passed a valid but expired cookie, you see?
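In the same off-the-cuff spirit, here's the time-carrying variant of that cookie: the issue time is baked into the signature, so an expired-but-genuine token is distinguishable from a forged one. (Names invented; same no-colons-in-session-ids assumption as above.)

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

sub secret { 'a;slho4hlzdjknv;lxza adih' }

# sessionid:issued_at:signature, where the signature covers the time too,
# so the client can't rewind the clock on an old cookie.
sub timed_cookie_value {
    my ($sessionid, $issued_at) = @_;
    $issued_at //= time;
    my $sig = md5_hex(join ':', $sessionid, $issued_at, secret());
    return join ':', $sessionid, $issued_at, $sig;
}

# Returns ($sessionid, $expired_flag), or the empty list if the signature
# is bad - letting you treat "valid but expired" differently from "forged".
sub check_timed_cookie {
    my ($value, $max_age) = @_;
    my ($sessionid, $issued_at, $sig) = split /:/, $value;
    return () unless defined $sig
        && $sig eq md5_hex(join ':', $sessionid, $issued_at, secret());
    return ($sessionid, (time - $issued_at) > $max_age ? 1 : 0);
}
```

The expired flag is exactly the 're-auth' state from before: accept the request, then send them around the OpenID loop.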

I once programmed my session cookie system to validate the signed (so you couldn't mangle it) cookie contents against request metadata - IP source address, user agent string, etc. It was unworkable - it turns out the user agent changes (when accessing media files, particularly), and people are on IP pools, where different requests can come from different IPs within the same session. Yes, it made it almost impossible for people to snarf tokens and use them illegitimately, but it also made normal operation frustratingly unreliable. (Instead I ended up tracking the changes and watching for really odd things that signal abuse, like one token or user being used across dozens of IPs.)

David
