Re: RFC 4121 & acceptor subkey use in MIC token generation
On Fri, Oct 27, 2023 at 02:01:05PM -0400, Ken Hornstein via Kerberos wrote:
> >Aren't you supposed to use CAC or PIV cards?
>
> Well, I hate to use the "Air Bud" loophole, but the rules as I
> understand them don't ACTUALLY say that for ssh, and in some contexts
> they explicitly say that plaintext passwords are fine as long as you're
> doing something like using a RADIUS server to verify the password. Yes,
> the RADIUS protocol is terrible and has MD5 baked into the protocol and
> no one has ever explained to me why the STIGs say FIPS mode is mandatory
> but RADIUS is fine.

Uh... If someone was able to swing that then you should be able to swing
use of MD5 for non-cryptographic purposes where a 20-year-old RFC
requires it. But, I know, I know, never mind.

> >You can definitely use openssh clients with PIV cards and avoid
> >kerberos altogether.
>
> I have done that! But that is actually TERRIBLE IMHO from a security
> perspective unless you write a whole pile of infrastructure code; maybe
> some sites actually do that but the people I've seen with that setup do
> not and then get surprised when they get a new CAC and that breaks. If
> you funnel all that through PKINIT then things are much nicer.

IDEA: Patch ssh to support use of x.509 certificates. After all, you
can't use OpenSSH certs because... that's not "the DoD PKI", and you
can't use GSS-KEYEX because of the foregoing MD5 non-issue, so might as
well do the one thing you are allowed to do: use the DoD PKI!

And you're using Heimdal, right? Well, Heimdal has a very frickin' nice
ASN.1 compiler that already has everything you need to be able to decode
x.509 certificates. It even has a fantastic libhx509, though the only
thing it doesn't have is support for x25519/x448 (I've a branch with
that stuff I need to finish). Though you'll want to update to the
as-yet-unreleased master branch for this because it's more awesome
there.
Nico

--
Kerberos mailing list    Kerberos@mit.edu
https://mailman.mit.edu/mailman/listinfo/kerberos
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 06:26:18PM -0400, Jeffrey Hutzelman wrote:
> The gss-keyex userauth method is just an optimization; it prevents you
> having to actually run the GSSAPI exchange again after you've already
> used one of the GSSAPI-based keyex methods. The real win is in the
> GSSAPI-based keyex methods themselves, which are useful (and exist)
> because they avoid having to pick one of these:
>
> [...]

All true. But you forgot the other benefit: automatic re-delegation of
credentials prior to expiration.

Nico
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 05:57:37PM -0400, Ken Hornstein via Kerberos wrote:
> You know that. I know that. But remember: "if you're explaining,
> you're losing". When asked I can honestly say, "Kerberos is not
> a PKI" and that's good enough, but I can't say with a straight
> face, "This X.509 CA over here is not a PKI".

Have you considered the private sector?

More seriously, there must be an office that could evaluate the use of
online CAs that issue short-lived certificates using issuer keys stored
in HSMs (or software keys when the sub-CA has a very narrow
applicability, meaning very few systems will trust it). Such CAs would
be very useful, I'm sure, especially if you could dispense with
revocation checking at the relying party because a) the certificate will
be as short-lived as a Kerberos ticket, b) the online issuer will have
checked revocation for the longer-lived credential used to authenticate
to it.

> >Presumably OpenSSH CAs are a different story because they're not
> >x.509? :)
>
> Strangely enough, I am not aware of anyone in the DoD that uses OpenSSH
> CAs (there probably are, I just don't know them). I could see it being
> argued both ways. The people I know who use OpenSSH are (a) using
> gssapi-with-mic like us, (b) just using passwords, or (c) using their
> client smartcard key as a key for RSA authentication and they call that
> "DOD PKI authentication". Again, you know and I know that isn't really
> using PKI certificates, but the people up the chain aren't really smart
> enough to understand the distinction; they see that you're using the
> smartcard and that's good enough for them.

But it is _a_ form of PKI, just not x.509/PKIX PKI, thus the smiley.

> >Don't you have OCSP responders?
>
> We _do_, it's just a pain to find an OCSP responder that can handle
> that many. If the official ones go offline that breaks our KDC so we
> run our own locally.

Ah, so what you mean is that you have a CRL replication problem.
Nico
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 05:10:39PM -0400, Ken Hornstein via Kerberos wrote:
> Unfortunately, ANOTHER one of the "fun" rules I live under is, "Thou
> shall have no other PKI than the DoD PKI". And as much as I can
> legitimately argue for many of the unusual things that I do, I can't
> get away with that one; [...]

A CA that issues short-lived certificates (for keys that might be
software keys) is morally equivalent to a Kerberos KDC. You ought to be
able to deploy such online CAs that issue only short-lived certs.

I understand how the politics of this works, so I'm just going to say
that I feel your pain.

Presumably OpenSSH CAs are a different story because they're not x.509? :)

> We _do_ do PKINIT with the DoD PKI today; that is relatively
> straightforward with the exception of dealing with certificate
> revocation (last time I checked the total size of the DOD CRL package
> was approximately 8 million serial numbers, sigh).

Don't you have OCSP responders?

See, that's the point of CAs that issue short-lived certificates: you
don't have to worry about revocation any more than you do with Kerberos
because tickets are short-lived. (Though one can easily issue 10-year
tickets too. It's just that one should not. I'd like to say that I
suspect that no one does, but I don't want to find out otherwise...)

> We KINDA do bridging, but it's at a higher level; since almost everyone
> we deal with has an issued PKI client certificate on a smartcard we
> tend to support a bunch of ways of working with that. So you can use
> your client certificate to do a bunch of things like get a Kerberos
> ticket, but we can't turn a Kerberos ticket into a DOD PKI client
> certificate.

Right, that makes sense.

> I mean, it seems like gssapi-with-mic is relatively widely supported
> and works (with the previously-discussed exception of the broken-assed
> Tenable client and Heimdal servers).
One of the problems I'm finding is that SSHv2 client implementations are
proliferating, and IDEs nowadays tend to come with one, and not one of
them supports GSS-KEYEX, though most of them support gssapi-with-mic, so
it makes you want to give up on GSS-KEYEX.

We have used GSS-KEYEX to do "credential cascading", and it's not enough
to support GSS-KEYEX for that: the client has to also schedule re-keys
to refresh the credentials delegated to the server.

We're starting to do something completely different: make it so the user
just does not need to delegate credentials. Typically that is because
they are not even using ssh anymore but a tightly controlled and audited
system for accessing privileged accounts, or because they are accessing
a personal virtual home server, and in the latter case we'll ensure that
they have credentials there provided by an orchestration system -- in
neither case is credential delegation necessary, and certainly not
credential cascading.

Nico
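The "schedule re-keys" requirement above is easy to make concrete: the client must initiate a new GSS key exchange some margin before the delegated ticket expires. A minimal sketch; the function name, margin, and clamping policy are illustrative and not taken from any SSH implementation:

```python
from datetime import datetime, timedelta, timezone

def next_rekey_time(ticket_expiry, margin=timedelta(minutes=10),
                    min_interval=timedelta(minutes=1)):
    """Pick when an SSH client should initiate a GSS re-key so that
    fresh credentials are re-delegated before the ones already on the
    server expire.  `margin` is how much headroom to leave.

    Never schedule in the past (or pathologically soon): clamp to a
    minimum interval from now so short-lived tickets don't cause a
    re-key busy-loop."""
    now = datetime.now(timezone.utc)
    return max(ticket_expiry - margin, now + min_interval)
```

A real client would re-run this after every successful re-key, since each re-key delegates a fresh ticket with a new expiry.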
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 03:22:17PM -0500, Nico Williams wrote:
> On Thu, Oct 26, 2023 at 03:58:57PM -0400, Jeffrey Hutzelman wrote:
> > On Thu, Oct 26, 2023 at 3:41 PM Nico Williams wrote:
> > > So what can you do? Well, you could build an online kerberized CA
> > > that vends short-lived OpenSSH-style certificates, then use that
> > > for SSH.
> >
> > OpenSSH apparently does not support X.509 certificates because they
> > believe there is too much complexity. This is roughly the same
> > problem we had with getting GSS support into OpenSSH -- they are
> > afraid of security technology they didn't invent.
>
> For GSS-KEYEX they have a point: that the CNAME chasing behavior of
> Kerberos libraries is problematic. [...]

Also, they can run GSS and PKI code privsep'ed, though they'd need a way
to do that on the client side too (on OpenBSD they have pledge(2) for
that, but that's not portable).

For PKIX they could just have used Heimdal's ASN.1 compiler, and fuzz
the crap out of it (we do), and that would probably have been better
than building a new certificate system.

Though ideally we should be using memory-safe languages for all of this
and leave C in the dust. That's just a long, slow slog though.

Nico
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 03:58:57PM -0400, Jeffrey Hutzelman wrote:
> On Thu, Oct 26, 2023 at 3:41 PM Nico Williams wrote:
> > So what can you do? Well, you could build an online kerberized CA
> > that vends short-lived OpenSSH-style certificates, then use that
> > for SSH.
>
> OpenSSH apparently does not support X.509 certificates because they
> believe there is too much complexity. This is roughly the same problem
> we had with getting GSS support into OpenSSH -- they are afraid of
> security technology they didn't invent.

For GSS-KEYEX they have a point: that the CNAME chasing behavior of
Kerberos libraries is problematic. That said, there is a simple fix: use
`gss_inquire_context()` to check that the name you got for the target is
the name you asked for, and otherwise either disable credentials
forwarding and try again or refuse to use GSS-KEYEX.

OpenSSH-style certificates have several serious problems resulting from
NIH syndrome:

 - no certificate chaining, which therefore implies frequent updates of
   sshd_config and ssh_config files
 - authorization data is not encoded as an array of strings or blobs but
   as one string that uses commas to separate elements (!!)
 - it's all too specific to OpenSSH
 - there's no tooling to deal with short-lived user certificates on the
   client side

There are some good things about OpenSSH-style certificates, but the
above problems are serious missed opportunities.

> This is truly unfortunate, because we already have an online Kerberized
> CA that vends short-lived X.509 certificates

There's almost certainly lots of them.

> > Though credential delegation becomes hairy since all you can do then
> > is ssh-agent forwarding, and if you need Kerberos credentials on the
> > target end well, you won't get them unless you build yet another
> > bridge where you have your online kerberized CA vend certificates
> > for use with PKINIT so that you can kinit w/ PKINIT using a private
> > key accessed over the forwarded ssh-agent.
>
> The problem with this, of course, is that one must be careful not to
> permit credentials to be renewed indefinitely by simply having the KDC
> and KCA repeatedly issue new credentials. Fortunately, kx509 is careful
> not to issue certificates valid past the ticket lifetime, and I believe
> compliant PKINIT implementations don't issue tickets valid past the
> certificate "Not After" time.

Yes, it's absolutely essential to ensure that each credential issued is
limited in lifetime to the credential used to authenticate to the
bridge. At least for client credentials. It's not hard to get this
right, and it's not hard to test either.

Nico
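The clamping rule being discussed is simple to state in code: whatever lifetime is requested, the issued credential must not outlive the credential used to authenticate to the bridge, nor a policy cap. A toy sketch, not taken from kx509 or any PKINIT implementation; the function name and the 10-hour cap are made up for illustration:

```python
from datetime import datetime, timedelta, timezone

def clamp_issued_lifetime(requested_not_after, authn_cred_expiry,
                          max_lifetime=timedelta(hours=10)):
    """Return the NotAfter (or ticket end time) a KDC/CA bridge should
    issue: never past the expiry of the credential used to authenticate,
    and never past a policy cap.  This is what stops credentials from
    being renewed indefinitely by bouncing between the KDC and the CA."""
    return min(requested_not_after,
               authn_cred_expiry,
               datetime.now(timezone.utc) + max_lifetime)
```

Testing it is as easy as the text claims: authenticate with a short-lived credential, request a long one, and assert the issued lifetime was clamped.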
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 02:38:47PM -0400, Ken Hornstein via Kerberos wrote:
> [...]

Kerberos is becoming less relevant in general because for most apps
running over TLS, using bearer tokens over TLS is Good Enough and also
Much Easier Than Using Kerberos (whether directly or via GSS). That
means that GSS too is becoming less relevant.

On the other hand you still have Microsoft's Active Directory insisting
on Kerberos, and you still have a lack of support for SSHv2 w/ bearer
tokens, and you yourself might not even have a bearer token issuer
infrastructure you could use if SSHv2 could support it.

So what can you do? Well, you could build an online kerberized CA that
vends short-lived OpenSSH-style certificates, then use that for SSH.
Perhaps you'll find that easier to do than to send a PR for hard-coding
mechanism OID->name mappings, and even if not, you may find it better
for the long term anyways because it's fewer patches to maintain.

Though credential delegation becomes hairy since all you can do then is
ssh-agent forwarding, and if you need Kerberos credentials on the target
end, well, you won't get them unless you build yet another bridge where
you have your online kerberized CA vend certificates for use with PKINIT
so that you can kinit w/ PKINIT using a private key accessed over the
forwarded ssh-agent.

I'm a big proponent of authentication protocol bridging. I've written an
online kerberized CA in Heimdal, though that one doesn't [yet] vend
OpenSSH-style certificates. One site I'm familiar with has a kerberized
JWT, OIDC, and PKIX certificate issuer, and they support PKINIT, so they
can and do bridge all the tokens and all the Kerberos realms and all the
PKIX and soon OpenSSH CAs.

It's nice to not have to patch all the things and contribute patches
upstream. Though because there's no open source universal authentication
credential issuer bridge available, the price one pays for not patching
all the things is the cost of building and maintaining such a bridge.
(The cost of operating such a bridge need not be significantly different
from the cost of operating distinct JWT, OIDC, PKIX, and Kerberos
issuers.)

> >We accept PRs.
>
> I am SO many levels down from the people that manage the licenses that
> figuring out how to file a PR upwards through the various levels of the
> DoD would probably take me a few days (I don't have to convince RedHat
> there's a problem, I have to convince those gatekeepers that there's
> a problem first, that's where things go sideways). And those people are
> the kind of people that as soon as they hear "MD5" and "FIPS mode" in
> the same sentence, they're going to say, "THAT'S NOT ALLOWED".

I feel you. I have had to deal with this sort of audit issue myself, and
it's always a pain to convince an auditor that some particular thing
that their book says is verboten is not security-relevant in this one
case and should be permitted.

I don't have the cycles to go do the hard-coding you need to satisfy
your auditors. It's not that I don't care about that problem -- after
all, I might have it myself eventually w.r.t. GSS-KEYEX. It's that I
only touch GSS-KEYEX code once per biennium, and right now is not that
time for me and I'm full up with other things. If now were that time I
might add a table of OID->name mappings and have a ./configure switch
for enabling (or disabling) use of MD5 for generating names for OIDs not
included in that list.

Therefore I have no problem with you not using SSHv2 GSS-KEYEX. Perhaps
someone else wants to volunteer to solve your problem _now_ rather than
later, but unfortunately it can't be me, not right now.

Nico
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 02:27:56PM -0400, Ken Hornstein wrote:
> Ever hear the political adage, "If you're explaining yourself, you're
> losing"? The same adage applies when talking to security people,
> especially the non-technical ones. The common gss-keyex code out there
> calls the OpenSSL MD5 function at runtime, and some of the
> distributions that do ship the gss-keyex code (RedHat) decided to
> simply disable gss-keyex code when FIPS is turned on. So yes, you CAN
> hardcode the OID->name mappings, but it seems that nobody actually
> does that.

We accept PRs.
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Thu, Oct 26, 2023 at 01:41:42PM -0400, Ken Hornstein via Kerberos wrote:
> >Yeah; IIRC that was to allow cases where the initiator would send the
> >first context token in the same packet/message with early data, such
> >as a MIC binding the exchange to some channel. In retrospect, perhaps
> >it has caused more trouble than it was worth. We didn't use this in
> >RFC 4462 userauth, which doesn't use mutual anyway.
>
> As a side note, my impression is that gss-keyex has fallen out of
> favor, and at least for us part of the problem is the unfortunate
> decision to use MD5 in that protocol. You and I both know that the use
> of MD5 in there isn't security related, but if you live in a FIPS world
> then any use of MD5 is a "challenge".

What MD5? It's used for generating a mechanism name, which has no
security implications. You can hardcode the OID->name mappings so you
don't invoke MD5.

Nico
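For context: RFC 4462 forms SSH method names as a fixed prefix plus the base64 of an MD5 digest over the mechanism OID's DER encoding, and "gss-group1-sha1-toWM5Slw5Ew8Mqkay+al2g==" is the name the RFC itself registers for the Kerberos V5 mechanism. A sketch of the hardcoding idea, where a table lookup serves the common case and the MD5 derivation is only a fallback (exactly which DER octets the digest covers is a detail glossed over here; the table is the point):

```python
import base64
import hashlib

# DER encoding of the Kerberos V5 mechanism OID, 1.2.840.113554.1.2.2.
KRB5_MECH_DER = bytes.fromhex("06092a864886f712010202")

# Known OID->name mappings.  With the table populated, the normal path
# never invokes MD5 at runtime -- which is what a FIPS environment cares
# about, even though the digest is just a name-mangling device anyway.
KNOWN_KEX_NAMES = {
    KRB5_MECH_DER: "gss-group1-sha1-toWM5Slw5Ew8Mqkay+al2g==",
}

def kex_method_name(mech_der: bytes) -> str:
    """Return the SSH kex method name for a GSS mechanism: table first,
    RFC 4462-style derivation (base64 of an MD5 over the OID encoding)
    only as a fallback for OIDs not in the table."""
    try:
        return KNOWN_KEX_NAMES[mech_der]
    except KeyError:
        digest = hashlib.md5(mech_der).digest()  # non-cryptographic use
        return "gss-group1-sha1-" + base64.b64encode(digest).decode()
```

A ./configure-style switch, as suggested elsewhere in the thread, would simply disable the fallback branch entirely in FIPS builds.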
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Wed, Oct 25, 2023 at 08:51:29AM -0400, Ken Hornstein wrote:
> >While I'm on the subject of JWT, there are two reasons JWT is killing
> >Kerberos:
>
> Are you sure one of the most important reasons ISN'T that the GSSAPI is
> insanely complicated and people who look at it get confused and move to
> something else that is much simpler?

At $WORK that's definitely not the reason. It's the others I listed,
though the one about authz data is a flavor of the API complexity issue,
only much worse: because not only is it insanely hard to get at authz
data when you can get at it, it's also often not possible at all. So not
just insanely complex, but often-not-even-possible.

And yet as simple as JWT is, it's also not:

 - HTTP user-agents need to know how to fetch the rock that the server
   asks them to fetch, and most of them don't know (which is basically
   why OIDC exists). This is fixable if anyone cares to bother, but then
   OIDC exists.

 - HTTP user-agents that do know how to fetch the rock don't do rock
   caching.

Nico
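The rock-caching gap is small in principle. A toy cache a user-agent could keep, keyed by the rock's URL (think of a JWKS document); the class name, TTL policy, and fetch interface are all made up for illustration:

```python
import time

class RockCache:
    """Toy TTL cache for fetched 'rocks' (e.g. JWKS documents), keyed
    by URL -- the caching that, per the discussion, most HTTP
    user-agents that can fetch the rock at all still don't do."""

    def __init__(self, fetch, ttl=300):
        self._fetch = fetch          # callable: url -> document
        self._ttl = ttl              # seconds a cached rock stays fresh
        self._entries = {}           # url -> (expires_at, document)

    def get(self, url):
        now = time.monotonic()
        hit = self._entries.get(url)
        if hit and hit[0] > now:
            return hit[1]            # still fresh: no network round trip
        doc = self._fetch(url)
        self._entries[url] = (now + self._ttl, doc)
        return doc
```

A production version would also honor HTTP cache-control headers and refresh early on a key-ID miss, but the point stands: this is not much code.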
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Wed, Oct 25, 2023 at 12:16:15PM -0400, Jeffrey Hutzelman wrote:
> In any case, I think the behavior Ken is seeing is that the initiator
> doesn't even assert a subkey -- it always uses the ticket session key.
> That seems... unfortunate.

That is.
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Wed, Oct 25, 2023 at 08:51:29AM -0400, Ken Hornstein wrote:
> I think we've lost the thread here; I do not think that any krb5
> mechanism today ever asserts PROT_READY before GSS_S_COMPLETE, but I
> would love to be proven wrong.

That's the whole point of being able to use the initiator sub-session
key: to allow the Kerberos GSS mechanism to assert PROT_READY on the
first call to GSS_Init_sec_context() even when mutual auth is requested.
Yes, RFC 4121 didn't say so, but it's the point.

Nico
Re: RFC 4121 & acceptor subkey use in MIC token generation
On Tue, Oct 24, 2023 at 08:09:20PM -0400, Greg Hudson wrote:
> On 10/24/23 15:50, Ken Hornstein via Kerberos wrote:
> [Disputing the following comment in k5sealv3.c:]
> > First, we can't really enforce the use of the acceptor's subkey,
> > if we're the acceptor; the initiator may have sent messages
> > before getting the subkey. We could probably enforce it if
> > we're the initiator.

Once you've seen a MIC/Wrap token made with the acceptor subkey you know
that all subsequent sequence numbers must use the acceptor subkey. Until
then you don't know, because GSS doesn't know if some MIC/Wrap token
it's consuming was made in response to an earlier MIC/Wrap/AP-REP token
sent by the acceptor application to the initiator.

Also, in practice no app that makes use of PROT_READY before
GSS_S_COMPLETE on the initiator side will do so for more than one or
maybe two per-message tokens (one for the app itself, and one for
SPNEGO), so maybe we could have a hard cap[*] on the number of
per-message tokens using the initiator sub-session key when the
initiator requested mutual auth.

So, yes, enforcement is tricky. But in practice it's probably not a
problem because few apps make use of PROT_READY before GSS_S_COMPLETE on
the initiator side -- that's a pretty lame reason to say this is not a
problem...

[*] Apps that don't request mutual auth, however, should get to send an
unlimited number of per-message tokens using the initiator sub-session
key, because what else could they do?

> I believe mutual authentication is frequently omitted for HTTP
> negotiate, but that's a minor point as in that case there's no
> acceptor subkey.

Yes.

> Whether the initiator can generate per-message tokens before receiving
> the subkey depends on whether the mechanism returned the prot_ready
> state (RFC 2743 section 1.2.7) to the caller after generating the
> initiator token. RFC 4121 does not mention prot_ready; I couldn't say
> whether that's an implicit contraindication on setting the bit.
> I'm not aware of any krb5 mechs setting the bit at that point in the
> initiator, although I recall Nico talking about maybe wanting to do so.

I'll have to check what MIT and Heimdal do. But yes, it'd be nice to be
able to make use of PROT_READY when GSS_S_CONTINUE_NEEDED.

Though GSS loses appeal every day, so we might never get to do a variety
of interesting things in GSS space. Then again, I know someone who badly
wants a JWT client library that does krb5-style caching for
audience-constrained JWT tokens, and we could always revive something
like Luke Howard's BrowserID (a key-exchanging GSS mechanism based on
JWT) so that JWT could be used in application protocols where today it
can't, and that might be interesting.

While I'm on the subject of JWT, there are two reasons JWT is killing
Kerberos:

 - scaling (which we've solved in Heimdal)

   To provision a server with Kerberos acceptor credentials is
   traditionally a real pain because orchestrating them requires writing
   to a database (the KDB). For JWT there's no provisioning, just a
   periodic download of fresh JWKs. Heimdal has a scheme where you can
   also periodically download Kerberos acceptor credentials w/o having
   to write to the HDB (we call this a virtual host-based service
   principal namespace, where all possible host-based principals below
   the namespace "exist", with keys derived from the namespace's keys,
   the principal's name, and current time chunked into epochs).

 - ease of access to authz data

   In GSS/Kerberos getting to authz data is insanely hard. In JWT it's
   just JSON, and all you need is a convention for object key naming.

We have a solution to the scaling problem in Heimdal, but for the latter
problem we really need a GSS_Inquire_context_authz_data() that outputs
JSON just like JWT.

Nico
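The enforcement idea from earlier in this thread (once a token protected with the acceptor subkey has been seen, keep requiring it; under mutual auth, tolerate only a small number of initiator-subkey tokens for the PROT_READY window) can be sketched as an acceptor-side state machine. The class, method names, and the cap of two are illustrative, not from MIT or Heimdal:

```python
class SubkeyPolicy:
    """Acceptor-side sketch: track which key protects incoming
    per-message tokens and reject downgrades back to the initiator
    sub-session key once the acceptor subkey has been used."""

    MAX_INITIATOR_KEY_TOKENS = 2   # app token + SPNEGO MIC, roughly

    def __init__(self, mutual_auth_requested):
        self.mutual = mutual_auth_requested
        self.seen_acceptor_subkey = False
        self.initiator_key_tokens = 0

    def check_token(self, uses_acceptor_subkey):
        """Return True if the token is acceptable under this policy."""
        if uses_acceptor_subkey:
            self.seen_acceptor_subkey = True
            return True
        if self.seen_acceptor_subkey:
            return False   # downgrade: must keep using the acceptor subkey
        if self.mutual:
            # PROT_READY-before-complete window: capped.
            self.initiator_key_tokens += 1
            return self.initiator_key_tokens <= self.MAX_INITIATOR_KEY_TOKENS
        return True        # no mutual auth: the initiator key is all there is
```

This mirrors the text: unlimited initiator-key tokens without mutual auth, a hard cap with it, and strict acceptor-subkey use after the first sighting.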
Re: appl/simple/client/sim_client.c uses internal APIs
On Fri, Feb 24, 2023 at 04:27:28PM -0800, Russ Allbery wrote:
> Primarily what I'd want in a new mechanism is for it to be a protocol
> for Kerberos authentication. (Or some other underlying authentication
> system that we all use instead, although that would be off-topic for
> this group.)

I'd settle for a new mechanism that can handle Kerberos naming. I don't
care if there's a KDC or a CA or both or whatever other kind of trusted
third party, and if it has a non-trusted-third-party mode, etc. Just the
naming is enough for me because that's what I've to be compatible with.
Specifically:

 a) user@REALM style naming (user@domain might do, with case-insensitive
    domains so upcasing is safe),

 b) service/hostname[@REALM] style service naming.

Certainly an API for such a mechanism only could be simpler than both
krb5 and GSS. If you want an example of what I object to in the krb5
API: everything to do with krb5_cred and everything to do with
krb5_auth_context -- those APIs are rather complicated and require
developer understanding of a myriad of options that shouldn't be needed.

> In other words, not generic. I understand why GSSAPI was made generic,
> but that's not what I want, and I think the security world is starting
> to realize that being able to negotiate every security property and
> mechanism is more of a bug than a feature.

At $WORK we've built bridging of all the auth methods for this sort of
reason. Getting every app to use ${preferred_mechanism} turns out to be
impossible. Bridging all the authentication infrastructures, OTOH, is
possible.

> Right now, it is possible to get into the truly absurd situation where
> to authenticate a client to a server you use:
>
> * HTTP authentication, to negotiate
> * SPNEGO, to negotiate
> * GSSAPI, to negotiate
> * Kerberos, to do the actual authentication

For HTTP you can just use Negotiate w/o SPNEGO. Just because it's called
Negotiate doesn't mean you must use SPNEGO. Negotiate is a misnomer.

> Something similar happens with SASL.
> This is three layers of negotiation too many. [...]
>
> I understand the need for *a* negotiation layer. I think the error was
> in accepting additional negotiation layers below that, as opposed to
> getting out of the generic mode as quickly as possible and start
> working directly with the true protocol.

We've recognized that multi-layer negotiation is broken since at least
2004 or earlier, and we've studiously avoided it. SASL/GS1 and SASL/GS2
specifically forbid use of SPNEGO, and so does SSHv2, for this reason.
I'm not aware of any Internet protocol, or even any proprietary
application, that can end up doing multiple layers of negotiation. (I'm
not counting algorithm negotiation within a mechanism as a distinct
negotiation layer, mind you, because the topic is negotiation of
mechanisms, not of mechanism-specific details.)

> Essentially everything that I don't like about GSSAPI is a direct
> consequence of the fact that it's a generic authentication protocol
> that in theory (although essentially never in practice outside of toys
> and science experiments) could negotiate a mechanism other than
> Kerberos. Supporting that generality forces the addition of
> irreducible complexity to the API.

The Solaris/Illumos mech glue and SPNEGO implementation was in fact
truly generic. I got ssh/sshd to work with both mech_krb5 and mech_dh w/
zero mechanism-specific code in ssh/sshd, and that was more than 15
years ago. Granted, mech_dh was practically obsolete, but we could have
tried to revive it, and I still think that a variation on mech_dh would
be a good foundation for a replacement for Kerberos.
Specifically my idea is to take JWT, enrich it with a standard
fetch-a-rock protocol (like the TGS protocol is for Kerberos), enrich
JWT tokens with client ECDH public keys, enrich the system with a lookup
service for service ECDH public keys (either in the fetch-a-rock
protocol or using DNS a la DANE), enrich JWT w/ Kerberos-style [public]
authz data, and enrich JWT tokens with Kerberos-style naming support.
The result should be a) compatible with JWT, b) mech_dh-like in
mechanics, c) compatible with Kerberos-style naming and authz data, but
it wouldn't be Kerberos as it is today. Part of the idea is to make it
much easier to implement.

> (There is the other problem that all of the effort, hardware support,
> and optimization work is going into TLS now, and it feels like a huge
> waste of energy to try to compete with TLS in the secure transport
> business. But that's a whole different can of worms since TLS is very
> wedded to X.509 certificates and there are a bunch of very good
> reasons to not want to use X.509 certificates for client
> authentication in a lot of situations.)

The only problem I have with x.509 is x.500 naming and the paucity of
support for SAN-based authorization of _clients_. Otherwise I just don't
mind the use of x.509. But I might be biased because the biggest problem
many people have
Re: appl/simple/client/sim_client.c uses internal APIs
On Fri, Feb 24, 2023 at 05:57:22PM -0500, Ken Hornstein via Kerberos wrote:
> I can't argue your preference, and I'll be the first to admit that
> "simpler" can be subjective (although I would argue one metric, "lines
> of code", the krb5 API would win). But let me point out a few things:

Of course. Preferences are personal.

> - I alluded to this on the kitten list (and I know you replied there
>   but I didn't get to reply to it yet), but the issue of multiple
>   round trips is a concern. You point out that even with SPNEGO you
>   should have a single round trip most of the time and that's a fair
>   point, but this puts you in a tough spot with the usage of GSS; you
>   have to assume your GSS mechanism is a single-trip and violate the
>   API OR complicate your protocol and implementation design and
>   presume an unspecified number of round trips. At least with the krb5
>   API you can definitively design the protocol (and implementation)
>   for a single round trip.

If you have a mechanism that could use 3 round trips, GSS can't take
fewer. SPNEGO (which isn't GSS itself) could have been designed so that
the initiator tries N mechanisms in parallel rather than in sequence. I
suppose we could probably find a way to shoehorn that in.

> - I don't want to crap over the work Ben did on RFC 7546, but I
>   couldn't help noticing that he skipped over the vital work of
>   extracting a useful error message out of the GSSAPI; that code alone
>   is always a mess but you'd need it in anything you'd use in
>   production.

I grant that gss_display_status() is a terrible API. It's easy to
cargo-cult a wrapper around it though (and we should standardize one).

> >GSS does have some ugly things, mainly OIDs, but also not having
> >something like a krb5_context.
> >Regarding not having a krb5_context, I've played with a couple of
> >ways to fix that in Heimdal: either a) enhancing the `OM_uint32
> >*minor_status` to be a more complex, opaque object, or b) adding
> >configuration key/value parameters to the `cred_store` used in
> >`gss_acquire_cred_from()`.
>
> I was under the impression the "context_handle" served that purpose,
> although I realize not everything takes that as an argument. If it
> doesn't serve that purpose then I understand the GSSAPI even less than
> I thought :-/

gss_ctx_id_t is the equivalent of krb5_auth_context, not of
krb5_context. The word "context" here serves to confuse :(

> I recognize that the issue of krb5 API vs GSS is something that we're
> just never going to agree on.

If ever we do replace Kerberos then you might have no choice but to deal
with GSS or make krb5 APIs support the new thing. But part of the point
of a new thing is to be simpler for implementors, while implementing a
new and old thing with the same API generally isn't simpler for them.

Nico
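For what it's worth, the cargo-cult wrapper mentioned above is mostly a loop over `message_context`: per RFC 2743, the caller keeps calling GSS_Display_status() until the returned message context is zero, collecting one message fragment per call. A sketch of just that looping pattern, with a Python callable standing in for the real C `gss_display_status()` (assumed here to take a status value and a message context and return the text plus the next context; a real wrapper would also pass the status type and mechanism OID):

```python
def collect_status_messages(display_status, status_value):
    """Accumulate all message fragments for one GSS status code.

    `display_status` stands in for the C gss_display_status() call:
    it takes (status_value, message_context) and returns
    (text, new_message_context).  A zero context means "no more"."""
    messages = []
    message_context = 0
    while True:
        text, message_context = display_status(status_value, message_context)
        messages.append(text)
        if message_context == 0:
            break
    return "; ".join(messages)
```

The fiddly parts a standardized wrapper would also handle -- doing this separately for the major and minor codes, and freeing each returned buffer -- are omitted here.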
Re: appl/simple/client/sim_client.c uses internal APIs
On Fri, Feb 24, 2023 at 12:19:53PM -0800, Russ Allbery wrote:
> Nico Williams writes:
> > If you're just trying to set up a GSS context between a client and a
> > server, then GSS is really simple, and much simpler than the krb5
> > API.
>
> I'm very dubious about this statement. The requirement to handle
> negotiation and potential multiple round trips and all the complexity
> with major and minor status codes makes the equivalent GSS code
> complicated and annoying.

If you're using SPNEGO then you don't have to concern yourself with
negotiation. If you're implementing SSHv2 or SASL it's another story,
though not much more complicated because you're doing negotiation at a
layer that already does it and all you have to do is maybe pick a GSS
mechanism.

RFC 7546 exists. I've written a fair amount of app code using krb5 and
GSS APIs, and I strongly prefer GSS code.

> GSS pays a significant price for being a generic mechanism with a
> negotiation method, and the API does not hide that price from the
> programmer.

It does pay a price, but if all you need is encrypted sessions, then
it's simple.

GSS does have some ugly things, mainly OIDs, but also not having
something like a krb5_context. Regarding not having a krb5_context,
I've played with a couple of ways to fix that in Heimdal: either a)
enhancing the `OM_uint32 *minor_status` to be a more complex, opaque
object, or b) adding configuration key/value parameters to the
`cred_store` used in `gss_acquire_cred_from()`.

Nico
Re: appl/simple/client/sim_client.c uses internal APIs
On Fri, Feb 24, 2023 at 01:50:58PM -0500, Ken Hornstein via Kerberos wrote:
> >I have said this before on the list and it’s not a very popular thing to
> >say, but I program to the krb5 public API, and it is a nice and clean and
> >performant and simple and portable and flexible API, and GSSAPI looks like
> >none of those things, it looks like a mess to use (just from looking at it
> >for my needs, I have never programmed with it).  So, I hope there isn’t
> >some movement to deprecate the lowlevel public krb5 API, because it is very
> >useful for me at least.

If you're just trying to set up a GSS context between a client and a
server, then GSS is really simple, and much simpler than the krb5 API.
If you have to deal with where credentials are (what ccaches, etc.) or
acquiring them, then historically you couldn't really do that with GSS,
but now with the new gss_acquire_cred_from() and gss_store_cred_into()
functions you can.

> Dude, you are NOT the only one who feels that way, and I can't even
> BELIEVE people argue otherwise!  Yes, the GSSAPI is a mess; there is
> no getting around it.  The krb5 API is about 100x simpler (there are
> more functions, true, but most of the time you only need a handful
> of them).  I've used both; there's just no comparison.  I understand
> why the GSSAPI was created and the point of it and I use it when I
> feel it is appropriate; I understand why it is specified in protocol
> standards.  But in the service of making it "generic" it ended up being
> very complicated.  And if you want to have your protocol only require a
> single round trip, you're stuck either calling the krb5 API directly OR
> assuming that your GSSAPI mechanism will complete in a single round trip
> (the latter is what Microsoft chose for their GSSAPI HTTP protocol),
> which in my mind kind of negates the "g" in GSSAPI.

The krb5 API is a mess too.  And API compatibility between Heimdal and
MIT isn't complete.
With GSS though, with the new gss_acquire_cred_from() and
gss_store_cred_into(), I find there's very little need for krb5 APIs.
For example, in a PR to Heimdal I've a GSS-based equivalent of kinit
that has practically the same functionality as the Heimdal kinit
command.  The only thing it doesn't have is the ability to let the KDC
drive prompting, though I think I could do something about that too by
encoding the necessary information into minor status codes.

> However, one thing is worth mentioning: in my experience the GSSAPI
> is portable.  The details of the krb5 API are basically tied to the
> particular Kerberos implementation you're using, and that means you're
> stuck either with a lot of compatibility code OR you have to compile
> your preferred Kerberos implementation for your target platform, which
> presents its own issues.  If I want a truly portable application then I
> do use the GSSAPI.

Basically.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Using a stub krb5.conf with "include"
On Mon, Dec 12, 2022 at 06:47:50PM -0500, Ken Hornstein via Kerberos wrote: > >The profile library has the concept of marking a section or subsection > >as "final", preventing further amendments to that section. But that > >concept does not apply to individual relations (although it was > >erroneously documented as applying to them prior to 1.17.1). > > When I looked at the finalization support, I found that it had two > unexpected features: > > 1) The finalization support only works across files; in other words, if >you have KRB5_CONFIG=/etc/file1:/etc/file2, a finalized section in file1 >suppresses the same section in file2. But it doesn't work if it's all >within file1. > > 2) An include statement in a krb5.conf file does NOT count as a new file for >the purposes of finalization. > > If I am wrong about these things, I'd sure love a correction. Honestly, > I can't see a reason why a finalized section in a file just doesn't > suppress further sections, even within the same file. Hmmm, this could be useful in Heimdal as well. We should at the very least not trip up over the finalizer token. Can we get the semantics nailed down? Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
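For reference, the cross-file finalization Ken describes looks roughly
like this; the trailing `*` syntax is as I read the MIT profile
documentation, so verify against your krb5 version:

```
# /etc/file1 -- the '*' marks [libdefaults] as final:
[libdefaults]*
    default_realm = EXAMPLE.COM

# /etc/file2 -- with KRB5_CONFIG=/etc/file1:/etc/file2, this section is
# ignored because file1 finalized it (point 1 above: finalization only
# suppresses sections in *later files*, not later in the same file):
[libdefaults]
    default_realm = OTHER.COM
```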
Re: MIT Kerberos Master principal deletion
On Thu, Jun 11, 2020 at 10:19:39PM +, Chris Hecker wrote: > Maybe dump the core of the running process so you don't accidentally crash > it while trying to debug it live? But that would make finding it in memory > even harder... I don't think it would make it harder. BTW, we should make it much harder to delete important principals... Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: MIT Kerberos Master principal deletion
On Thu, Jun 11, 2020 at 03:32:35AM +0100, Harshawardhan Kulkarni wrote: > I basically need an advice on an ongoing issue I am currently stuck on. > > We have a Kerberised Hadoop Cloudera Custer. KDC Admin server is on one of > the nodes. We don't have a failover node for KDC server yet. On the KDC > admin server while doing a clean up activity for unwanted kdc principals, I > deleted the master key principal (K/m...@realm.com) We never took a kdc dump > of the master key. So we don't have a backup to restore from. > > Is there any way I can restore the master key principal? If you have a running KDC you could use a debugger to recover that key. It won't be easy. It's not something anyone does on a regular basis, so I don't have instructions to give you. > I have tried creating with kdb5_util add_mkey but the error says that KDC > DB is not able to find a master key credential. I assume this would only > work when you want to create another master key without deleting the > primary key. Adding a new key won't help you: the existing records are encrypted in the old key. > Another option for me would be to de-kerberise the cluster and create the > same REALM and kerberise the cluster again. But there could be serious > issues if this doesn't fix as this is a live cluster where people are using > this on a daily basis. You could rebuild your realm, yes. That's a flag day. Users in that realm will need to be re-enrolled, keytabs will need to be re-created and distributed... Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: "revoking" a TGT?
On Wed, Aug 10, 2016 at 11:05:43AM -0500, Nico Williams wrote:
> Even the simplest reliable revocation schemes beyond having TGSes check
> the client principal's record presume a high-performance pub-sub
> protocol and implementation(s).

Reliable multicast type protocols would be nice for this, though a
unicast (TCP-based, no doubt) protocol would be needed as well.  I've
tested a C10K tail-f service to 20k concurrent connections on loopback
just fine, and that could be part of a unicast protocol.  Modern async
I/O APIs make this easy enough.  Such a thing can scale by fanout too,
so it's plenty scalable, though multicast would generally scale best in
many networks.

A revocation pub-sub protocol wouldn't be too difficult to design and
implement.  But it is work that would have to be done.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
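The fanout point above can be illustrated with a toy relay tree
(hypothetical classes, not a real protocol): each relay re-publishes
notices to its own subscribers, so the root's sending work stays
constant as subscribers are added.

```python
# Toy sketch of pub-sub fanout for revocation notices (an invented
# in-memory design, for illustration only): a real service would push
# notices over authenticated TCP connections or reliable multicast.

class Relay:
    def __init__(self, name):
        self.name = name
        self.subscribers = []   # downstream relays or services
        self.seen = []          # notices delivered to this node

    def subscribe(self, downstream):
        self.subscribers.append(downstream)

    def publish(self, notice):
        self.seen.append(notice)
        for sub in self.subscribers:
            sub.publish(notice)  # fanout: each hop re-publishes

# Build a small tree: root -> 2 relays -> 4 leaf services.
root = Relay("root")
relays = [Relay(f"relay{i}") for i in range(2)]
leaves = [Relay(f"service{i}") for i in range(4)]
for r in relays:
    root.subscribe(r)
for i, leaf in enumerate(leaves):
    relays[i % 2].subscribe(leaf)

root.publish({"principal": "victim@EXAMPLE.COM", "not_before": 1700000000})
```

The root sends only to its two direct subscribers no matter how many
leaves exist; growing the subscriber population deepens the tree rather
than widening any one node's work, which is the scaling property the
message describes.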
Re: "revoking" a TGT?
On Mon, Aug 08, 2016 at 01:35:07AM -0400, Greg Hudson wrote:
> On 08/05/2016 09:51 AM, Jerry Shipman wrote:
> > I am trying to do something like this:
> > - I identify a user whose password is known to an attacker by some
> >   other process
> > - I scramble the user's password and tell him he that needs to reset
> >   it by some outside process (e.g. a trip to the helpdesk with his ID)
> > - When the user resets his password, then he can authenticate again.
> > - But, there is a little gap... after I scramble the password, the
> >   attacker can no longer get new TGTs... but he might still have an
> >   old TGT for a few hours until it expires, which he can use to get
> >   new service tickets.  Is there a way to prevent that?  (He could
> >   also already have some active service tickets, but I don't think
> >   there is anything I can do about that.)
>
> Currently there is no way to prevent that.  The TGS code path in the KDC
> doesn't perform any policy checks on the client principal entry, so even
> if the client principal is disabled, the KDC will continue issuing
> service tickets for existing TGTs until they expire.

I think Heimdal's TGS does do such a check.  However, that doesn't help
in cross-realm cases.  A workaround in cross-realm cases is to tune down
the cross-realm TGT expiration times (which then limits service ticket
expiration times).

> historical viewpoint was that, because the attacker could have acquired
> service tickets at the time the tickets were stolen, there isn't much
> point in going to extra effort to make it possible to close the barn
> door after the horse has escaped.  (Also, if the client principal is in
> another realm, the TGS server doesn't usually have access to its
> database entry.)  We have considered reversing this position at times,
> but haven't implemented any changes to date.
A revocation facility would require more infrastructure, and would
require careful design to protect confidentiality (e.g., during layoffs;
employers generally want to not publicize the names of employees being
laid off until after the event completes).

The simplest thing to do would be to have realms publish notBefore
timestamps representing barriers invalidating all tickets issued before
them.  Note that because RFC1964/4121 doesn't support multiple round
trips at this point, clients would see spurious failures unless they too
check the revocation log to get ahead of it (and even then this would be
racy), so this simplest thing may be too simple.

Making the revocation log contain {H(name@REALM), notBefore} entries
would get past this, but there's a trivial offline dictionary attack on
confidentiality here that is difficult or impossible to counter
(especially for realms with few principals).

Even the simplest reliable revocation schemes beyond having TGSes check
the client principal's record presume a high-performance pub-sub
protocol and implementation(s).

You can see why we don't have a revocation facility.  The workaround is
-and always has been- tuning down ticket expiration times (but not
ticket renew lifetimes).

> For this specific scenario, we would need more than to examine the
> client principal entry for the usual policy checks; as you surmised, we
> would need a way (perhaps a principal flag) to express that old kvnos
> are invalid for this principal entry.

Just a timestamp before which all tickets are to be considered invalid
would go a long way, but that might be too big a hammer (see above).

I wouldn't want any revocation system to depend on monotonically-
increasing kvnos, BTW, and I'm sure others would agree, so using
notBefore timestamps should be it.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
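The {H(name@REALM), notBefore} log and the dictionary attack on it can
be sketched as follows (plain SHA-256 and an invented log layout, purely
for illustration):

```python
import hashlib

def h(principal: str) -> bytes:
    # Unkeyed hash: hides the name only from someone without a guess list.
    return hashlib.sha256(principal.encode()).digest()

# Published revocation log: hashed principal -> notBefore timestamp.
log = {h("victim@EXAMPLE.COM"): 1700000000}

def ticket_revoked(principal: str, auth_time: int) -> bool:
    # A relying party checks tickets against the log without the log
    # naming anyone in the clear.
    not_before = log.get(h(principal))
    return not_before is not None and auth_time < not_before

# The offline dictionary attack: anyone holding the published log can
# hash candidate principal names and learn exactly who was revoked.
candidates = ["alice@EXAMPLE.COM", "victim@EXAMPLE.COM"]
exposed = [c for c in candidates if h(c) in log]
```

Because principal names are low-entropy and enumerable, the hashing
buys essentially nothing against a determined reader of the log, which
is the confidentiality problem the message describes.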
Re: ktadd default enctype
On Fri, Jun 05, 2015 at 07:24:06AM -0400, John Devitofranceschi wrote:
> How is ktadd *supposed* to figure out which enctype(s) to use?

Long ago I made Solaris' ktadd use the locally supported enctype list as
the default for ktadd, as if they'd been passed via the -e option (which
still works, natch).

> I am seeing an issue where kadmin’s ktadd, if left to its own devices,
> will generate a key with an encryption type that has nothing to do with
> the KDC’s supported_enctype list and ktadd seems to completely ignore
> the local client’s default/permitted enctype settings.

Eh?  No, it should not ignore the KDC's supported_enctype list unless it
implements the change I mentioned above.  The supported_enctypes list
was meant to apply only when the client didn't use the -e option.

> KDC supports:
>   des3-cbc-sha1
>   des-cbc-crc (I know, I know)
>
> Client's krb5.conf tells it to support:
>   des-cbc-crc (I know, I know)

<phaser type=disapproval level=11> ... </phaser>

> But when we run ktadd the resulting keytab’s key has des-cbc-md5
>
> The client is an Oracle Linux with 1.6.1 krb5 client software.
>
> Also, the KDC is using Sun Solaris 10 Kerberos software (not MIT).
>
> Thanks for any insight!

I bet the Oracle client is using the kadm5_create_principal_3() RPC,
which means you don't get the supported_enctypes.  Try using the -e
option.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ktadd default enctype
On Fri, Jun 05, 2015 at 11:11:12AM -0400, Greg Hudson wrote:
> > Also, the KDC is using Sun Solaris 10 Kerberos software (not MIT).
>
> The short answer: Solaris implements transitional logic which isn't
> really compatible with the MIT kadmin client for this operation.  We
> have a workaround in MIT krb5 release 1.13.

Oh, right, that's right, I forgot to mention that in my other post: the
Solaris kadmind uses a 1DES enctype as supported_enctypes for the old _1
kadm5 randkey_principal RPC.  The idea (this was like a decade ago,
maybe more) was that Solaris 8 and 9 used the _1 RPC and only supported
1DES, while Solaris 10's ktadd client would always use the _3 RPC.  (I
also confused create_principal with randkey_principal in my previous
post.  Sigh.  Need more coffee.)

So there you go.  You should just use the ktadd -e option.

> Prior to release 1.13, the MIT krb5 behavior is to invoke
> chrand_principal3 if a keysalt list is specified, and chrand_principal
> otherwise, so that typical randomize-key requests work against old
> kadmind servers without an extra round trip.

FYI (for JD), the difference between those two RPCs is mainly that one
takes a keysalt list (-e) and the other one doesn't.  But also the _3
RPCs were added much later, so Solaris 8 and 9 (for example) didn't
support them.

> In Solaris Kerberos, however, the client behavior is to always invoke
> chrand_principal3 and then fall back to chrand_principal if that fails.
> Furthermore, the kadmind server assumes that a chrand_principal request
> must come from an old client, and picks a different keysalt list (which
> I guess is just des-cbc-md5:normal).

That must be it, though why MD5 I don't know (or recall).

Thanks, Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: A client name with an '@'
On Wed, Jun 03, 2015 at 11:21:04AM -0400, Ken Hornstein wrote:
> > Or you might retain the uppercase realm and try to cross-sign between
> > the uppercase and lowercase realms.  Your (somewhat silly) clients
> > logon to the lowercase realm and gain access to the (less errorprone)
> > uppercase realm.
>
> I think if you had two realms that differed only by case, that would be
> a recipe for a disaster (what happened when you tried to look up realm
> information in DNS, which is case-insensitive for lookup?).

Or hack on the KDCs to implement AD-style case-insensitive/preserving
realm matching.  I'm starting to think that we ought to do this in
Heimdal and MIT Kerberos, at least as an option.

> Also, the venerable Russ Allbery created a lowercase realm for
> Stanford, and repeatedly has said that if he had to do it all over
> again he wouldn't have done a lowercase realm; too much software
> assumes an uppercase realm.  Maybe that has changed in the intervening
> years.  I'd stay away from lower-case realm naming.

We keep putting off reckoning with I18N.  But the more we do it the more
we'll effectively end up with the right solution (namely, recognize that
we just-send-8, say that only UTF-8 will interop reliably, then make
KerberosString be UTF8String with an IA5String implicit universal tag,
list domainname slots in the protocol and put U-labels in them,
recognize A-labels as aliasing U-labels in KDBs; with IDNA2008 we could
even do the right thing as to treating realms as domainnames that are
strangely capitalized).

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: A client name with an '@'
On Wed, Jun 03, 2015 at 04:29:19PM +, Nordgren, Bryce L -FS wrote:
> Kind of moot.  These smart cards are issued from GSA credentialing
> centers for USDA and certificate production is outside my sphere of
> influence.  The really odd part is that the lowercase realm is encoded
> into the certificate, but the realm in Active Directory is uppercase.
> I don't know if this is some kind of oversight, some kind of
> requirement to make Active Directory canonicalize correctly, or if
> they're intentionally making it hard to use.

AD matches realms case-insensitively (though case-preserving).

Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Differentiate the ServiceTicket issued from Kinit vs PKinit
On Tue, Jun 02, 2015 at 10:57:59PM +, Brandon Allbery wrote:
> On Tue, 2015-06-02 at 11:13 -0700, Aravind Jerubandi wrote:
> > Hello, Could you please answer my query?
>
> Did you miss
> http://mailman.mit.edu/pipermail/kerberos/2015-May/020765.html ?

You know, *gmail* isn't showing it in the browser interface, but it's
there in the IMAP interface; it was delivered.  And it's there in the
list archive, of course.

The OP posted via gmail.  It's likely that the OP is using gmail to read
this...  Now, what's with gmail?!

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Differentiate the ServiceTicket issued from Kinit vs PKinit
On Tue, Jun 02, 2015 at 11:29:35PM +, Brandon Allbery wrote:
> I wonder if it somehow has a duplicate message ID.  I know gmail
> suppresses those (so for example I never see the copy of a list message
> that I get back, when sending from my gmail account).

Gmail doesn't suppress duplicates, it just shows them in such a way that
it's easy to ignore them.  What's funny is that this is not specific to
the recipient, but to the posted message.

Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Issue with kvno
On Fri, May 29, 2015 at 04:51:37PM +, Brandon Allbery wrote:
> On Fri, 2015-05-29 at 11:45 -0400, Benjamin Kaduk wrote:
> > I don't have a definite answer for you, but:
> >
> > 1.7 is very old.
> >
> > 4294967295 is 0xffffffff is -1 as a 32-bit twos-complement integer
>
> For what it's worth, we just had a customer report this problem ---
> after a Heimdal update.  (I didn't think they'd reached 1.6 final yet,
> much less 1.7, though.)

1.6 never released.  A 1.7 release should be coming soon.  Heimdal in
the master branch today does tolerate missing kvnos.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
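The reported kvno is just the all-ones 32-bit pattern, i.e. -1 stored
into an unsigned field, which can be checked directly:

```python
import struct

kvno = 4294967295
assert kvno == 0xFFFFFFFF  # the all-ones 32-bit pattern

# Reinterpret the same 32 bits as a signed integer: it is -1, which
# suggests some code assigned -1 ("no kvno") to an unsigned kvno field.
(signed,) = struct.unpack("<i", struct.pack("<I", kvno))
```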
Re: A client name with an '@'
On Mon, Jun 01, 2015 at 10:04:46PM +, Nordgren, Bryce L -FS wrote:
> I then tried creating a 12001000550...@fedidcard.gov principal in my
> realm.  Unfortunately, I cannot kinit using the principal
> 12001000550...@fedidcard.gov@FEDIDCARD.GOV.  kinit gives a "Malformed
> representation of principal when parsing name..." error.

You have to escape the first '@' with a backslash.  Mind your shell
quoting, since your shell may require you to escape the escape
backslash.  On a typical Unix shell you could:

$ kinit 12001000550281\\@fedidcard@fedidcard.gov

or

$ kinit '12001000550281\@fedidcard@fedidcard.gov'

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
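The escaping rule above can be sketched with a toy parser (hypothetical
and much simpler than krb5_parse_name, which also handles '/' components
and other escape sequences): the realm starts at the first *unescaped*
'@', and a backslash makes the following character literal.

```python
def split_principal(s: str) -> tuple[str, str]:
    """Split 'name@REALM' where '\\@' escapes a literal '@' in the name."""
    name = []
    i = 0
    while i < len(s):
        c = s[i]
        if c == "\\" and i + 1 < len(s):
            name.append(s[i + 1])   # escaped character is literal
            i += 2
        elif c == "@":
            return "".join(name), s[i + 1:]  # first unescaped '@' -> realm
        else:
            name.append(c)
            i += 1
    return "".join(name), ""        # no realm present

# The principal from the kinit example above:
name, realm = split_principal(r"12001000550281\@fedidcard@fedidcard.gov")
```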
Re: Issue with kvno
On Mon, Jun 01, 2015 at 02:11:32PM -0400, Benjamin Kaduk wrote:
> On Fri, 29 May 2015, vishal wrote:
> > My question is that why kvno is not always present in ticket and this
> > ticket is basically which comes in TGS-RESP(from home domain) and
> > sname is krbtgt for trusted domain in TGS-REQ.
>
> The kvno field in the ASN.1 EncryptedData type is an optional field,
> used to assist the recipient in selecting which key to use to decrypt
> the data.  The kvno is not required, therefore it may be missing.
> Active Directory does not keep track of key version numbers, which is
> why you see kvno missing when using AD.

When a service gets an AP-REQ with a Ticket that has no kvno, then the
service has to do something like:

 - try the newest kvno and/or
 - try every key with the same enctype

Actually, one has to do the latter because of key rollover and KDC
replication latency issues.

That AD does not keep track of key history is mostly not a problem,
except for changing cross-realm keys.  For cross-realm key rollover the
lack of key history basically necessitates an outage.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
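The key-selection strategy above can be sketched as follows (a
hypothetical helper with invented names, not a libkrb5 function): with
no kvno in the ticket, try the newest key of the right enctype first,
then fall back to every remaining key of that enctype.

```python
def find_decryption_key(keytab, enctype, try_decrypt):
    """keytab: iterable of (kvno, enctype, key) entries.
    try_decrypt(key) -> bool stands in for decrypt-and-verify-checksum."""
    candidates = sorted(
        (e for e in keytab if e[1] == enctype),
        key=lambda e: e[0],
        reverse=True,               # newest kvno first
    )
    for kvno, _, key in candidates:
        if try_decrypt(key):
            return kvno, key
    return None                     # no key of this enctype worked

# An old-kvno key must still be found (rollover / replication latency):
keytab = [(3, "aes256", b"old"), (5, "aes256", b"new"), (5, "rc4", b"x")]
found = find_decryption_key(keytab, "aes256", lambda k: k == b"old")
```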
Re: PKINIT cert chains
On Thu, May 21, 2015 at 05:35:23PM +, Nordgren, Bryce L -FS wrote:
> Cannot create cert chain: unable to get local issuer certificate

What from?

> Again, there is a single AS_REQ/KRB_ERROR pair to request
> preauthentication, with no attempts to contact the KDC after I provide
> my PIN.
>
> Questions:
>
> 1] Does my KDC cert have to chain back to the same anchor as my smart
> card certificates?

In principle, no.  In a PKI each relying party can have distinct trust
anchor sets for authenticating peers, and each node can have root CAs
for its own certificate that are not in the local trust anchor set.

Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: theory behind unique SPNs
On Fri, Apr 24, 2015 at 04:46:55PM -0400, Greg Hudson wrote:
> On 04/24/2015 03:37 PM, Ben H wrote:
> > Why not simply use host/serverA.domain.com for both services?
>
> At a protocol level, it's to support privilege separation on the
> server.  The CIFS server doesn't need access to the LDAP server key and
> vice versa.

And, to a lesser extent, to prevent users from getting redirected from
one service to another.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Does /etc/krb5.conf have to be present and identical on all Kerberos infrastructure participants?
On Wed, Nov 5, 2014 at 1:47 PM, Booker Bense bbe...@gmail.com wrote:
> [1]- a process can have more than one krb5_context, but let's not get
> too crazy.

GSS-API acceptor apps that use the default acceptor credential can
trivially be in multiple realms at once in one process.  I've certainly
seen this happen, and even set it up.

For this and other reasons I don't think it's a good idea to tie realm
to process.  It's not too inaccurate, but it's not helpful enough to be
worth the trouble.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: gssapi-with-mic vs gssapi-keyex SSH authentication difference?
GSS keyex authenticates the server to the client.  The client can then
be authenticated to the server when it tries gssapi-keyex userauth.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: gssapi-with-mic vs gssapi-keyex SSH authentication difference?
GSS key exchange alone does not authenticate the client to the server
because a binding of the GSS security context to the Diffie-Hellman or
RSA key exchange is not sent by the client, only by the server.  There
is not much point to authenticating the client at this point anyways
because GSS authentication is not enough: we need a *username* to
authorize the authenticated _principal_ to, and that comes later in the
protocol.

SSHv2 could well have been (and perhaps still could be) optimized quite
a bit.  As it is all of this takes quite a few messages: TCP handshake,
version string exchange, KEX (one round-trip in the optimal case, with
GSS and Kerberos), userauth (one more round-trip in the optimal case,
with gssapi-keyex).  If confidentiality protection of the client
principal and username were not important this could be reduced by one
round trip in an optimized form of the protocol.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Does /etc/krb5.conf have to be present and identical on all Kerberos infrastructure participants?
On Wed, Oct 29, 2014 at 3:39 PM, Russ Allbery ea...@eyrie.org wrote:
> Rufe Glick rufe.gl...@gmail.com writes:
> > I'm trying to understand the inner workings of Kerberos here.  The
> > following question has arisen: Does /etc/krb5.conf have to be present
> > and identical on all Kerberos infrastructure participants?
>
> No, not really.  All participants should probably agree on some things,
> such as the KDCs for the realm and probably the domain to realm mapping
> rules.  You normally want them to agree on other things, such as the
> default ticket lifetime to request or whether tickets are normally
> forwardable, so it's common to synchronize this file.  But it's not at
> all required.

They can just agree to use DNS for most things.  There are some things
that you can't securely discover w/o DNSSEC, of which the main one is:

 - default_realm (if you need it, which generally implementations do)

Other things have sane defaults: domain_realm, capaths, ...

> In particular, if you have a realm set up with SRV and TXT records in
> DNS, it's quite possible to have a zero-configuration Kerberos client
> that simply pulls the information it needs from DNS queries.  (Although
> I think the Kerberos libraries generally like to have the file exist,
> even if it's empty.)

Yes.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
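For reference, the SRV/TXT setup being discussed looks roughly like this
in a zone file (example names; the `_kerberos` TXT realm-mapping record
is the part that is unsafe to trust without DNSSEC):

```
_kerberos._udp.example.com.      IN SRV 0 0 88  kdc1.example.com.
_kerberos._tcp.example.com.      IN SRV 0 0 88  kdc1.example.com.
_kerberos-adm._tcp.example.com.  IN SRV 0 0 749 kdc1.example.com.
_kpasswd._udp.example.com.       IN SRV 0 0 464 kdc1.example.com.
; Host-to-realm mapping -- only safe to rely on with DNSSEC:
_kerberos.example.com.           IN TXT "EXAMPLE.COM"
```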
Re: What happened to PKCROSS?
FYI, I just submitted draft-williams-kitten-krb5-pkcross-03.  It still
needs some work, obviously (e.g., DANE RRset stapling).  But it's
closer.  In particular I've added details on how a TGS can drive
PKCROSS.  It turns out to be quite simple...

TODO:

 - add a new KDC error code by which a KDC can indicate that it is
   rejecting a foreign realm PKINIT request by a non-KDC client
 - add a reference(s) for DANE stapling
 - maybe remove all TOFU/LoF text (since it could go in a separate I-D)
 - ...

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: What happened to PKCROSS?
On Mon, Oct 20, 2014 at 09:04:18AM +0200, Rick van Rein wrote:
> Hello Nico,
>
> Are you still working on your neo-PKCROSS draft?  I’d love to see it

I'll update the I-D soon, but I still don't have cycles for
implementing.

> move forward!

Me too.

> You may have seen draft-vanrein-dnstxt-krb1 pop up; it arranges that a

No, I hadn't.

> client (or its KDC) can figure out under what realm to address for a
> given hostname.  This is based on DNS TXT RR + DNSSEC.  This will cause
> realm crossover inquiries, even for hitherto unknown realms.  To enable
> the KDC to resolve such inquiries, we’ll need some form of PKCROSS
> based on “remote KDC credentials in DNSSEC”.

Well, I envision two options:

 - client-driven PKCROSS (as described earlier)

 - TGS-driven PKCROSS, which would work for existing, unmodified
   clients, and which would not create persistent, long-term symmetric
   cross-realm trust principals.  Here the TGS would use the
   client-driven PKCROSS protocol as a client, but would obtain a
   cacheable, short-lived cross-realm credential that can be used to
   issue cross-realm TGTs for the given x-realm TGS principal name.

In both cases using DANE as much as possible, with stapled DANE as
Google wanted to do for HTTPS (though they've backed off for now).

> My thoughts were:
> * KDC’s peer to cross realms
> * publish the KDC server key using DANE
> * employ PKINIT with DH to establish a one-sided krbtgt

Yes.

> Your thoughts were (AFAIK):
> * clients hold a certificate (possibly from their local KX509 service)
> * clients connect to remote KDC’s
> * publish CA certs for clients using DANE
>
> The first is tighter on security, the second supports more flows.

The first can be implemented on the basis of the second: the TGS uses
PKINIT at the remote realm to acquire a special (because of the TGS'
client principal name in its PKINIT certificate) cross-realm TGT whose
purpose is to enable the client TGS to issue x-realm TGTs for that one
x-realm.
> Mixing the two will probably lead to mutual weakening, so I am thinking
> that it might be useful to split the two, but ensuring that they remain
> as compatible as can be.  Does that sound wise to you?

I don't agree.  A client-driven PKCROSS is feasible now.  In fact, I
heard earlier this week of a couple of environments where it actually
*is* used in practice.  (The user has a TGT in a given realm, uses
kx509/kca to get a certificate, then PKINIT to get a TGT at the remote
realm.)  I don't have specifics, and I gather that the AS at the remote
realm had to have local enhancements added to make this possible.

A client-driven PKCROSS is deployable even if you use a KDC whose vendor
isn't likely to add KDC-driven PKCROSS any time soon.  That's
convenient!

A client-driven PKCROSS protects the client's privacy relative to its
home KDC.

A client-driven PKCROSS is sub-optimal though: a TGS-driven PKCROSS can
significantly reduce the number of PK operations needed, compared to a
client-driven PKCROSS.  The TGS-driven PKCROSS can be substantially
similar to the client-driven one, with the TGS being the client.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: revocation feature in Kerberos
On Sun, Aug 03, 2014 at 11:33:58AM -0700, Booker Bense wrote:
> This whole conversation seems misguided to me.  Kerberos is an
> authentication system, not an authorization one.  Access to a service
> is an authorization issue.  Since there is no universal authorization
> scheme for kerberos applications, any workable revocation system will
> have to build that first.  That would be a very useful tool, but I'm
> afraid it might be about 20 years too late.

This isn't about authorization.  The thing being revoked is the
principal and/or its extant tickets.  Kerberos' design specifically
obviates the need for a revocation system: use short-lived tickets and
you're mostly set.

That said, we've long ago stopped arguing about Kerberos as an
authentication system, and its relevance to authorization.  Kerberos is
relevant even to the simplest authorization schemes just by dint of
delivering the key to those schemes: the authenticated identity
(principal name).  Often Kerberos also carries authorization-specific
attributes (e.g., PAC, CAMMAC).  Either way Kerberos is orthogonal to
authorization, but authentication is integral to authorization,
therefore it's hard to separate the two.

Incidentally, the rest of the world (e.g., SAML) long ago accepted that
an attribute model of identity (and therefore authentication) is more
important than the more traditional Kerberos model.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: revocation feature in Kerberos
In general Kerberos doesn't need a revocation system because ticket
lifetimes should be short enough.

Within a realm it's often the case that other methods are used for
revocation (e.g., lock the _account_, which will generally replicate
with less latency than tickets will expire).

Across realms is where things get interesting.  A TGS would have to
track the x-realm ticket vending it has done so that it could
selectively propagate revocation notifications to those realms that
ought to see them.  Otherwise the system can't scale.  In practice
cross-realm TGTs tend to be shorter lived than local TGTs, and for this
reason.

A not-before timestamp in tickets might be useful, but not sufficient.
A revocation system would have to involve an actual service sourcing and
propagating revocation notifications to all those services that might
need them, which in turn requires KDCs to keep track of all extant (as
yet not expired) tickets vended.

IMO it's worth exploring, but how worthwhile this is will depend on how
common it is for people to run with a) very long-lived local TGTs and no
other revocation scheme in place, and/or b) very long-lived cross-realm
TGTs.  Assuming the worst, then it's worthwhile, of course.

In a way such a system would scale better than revocation does for PKI,
where CRLs regularly go unchecked, and where OCSP responses have
Kerberos ticket-like lifetimes.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: revocation feature in Kerberos
On Thu, Jul 31, 2014 at 5:47 PM, Benjamin Kaduk ka...@mit.edu wrote:
> On Jul 31, 2014 3:15 PM, Roland C. Dowdeswell el...@imrryr.org wrote:
> > On Thu, Jul 31, 2014 at 04:34:42PM -0500, Nico Williams wrote:
> > > In general Kerberos doesn't need a revocation system because ticket
> > > lifetimes should be short enough.
> >
> > The structure of a kerberos ticket doesn't include what key was used
> > for the initial authentication that generated it.  That means that
> > the only revocation hammer available is a big one, to disallow all
> > tickets for that principal.
>
> (As Chris notes, the MIT KDC does not at present even enforce that flag
> at renewal time.)  Or all tickets prior to the revocation event (e.g.,
> password change, if you thought your password was compromised).
>
> David Benjamin has an idea for how to work around this, and be able to
> only revoke tickets issued with a compromised password, while still
> allowing tickets issued from a newer password to be usable.  I should
> probably write this up and get some more eyes on it; it's been
> back-burner for a while.

For TGTs it's trivial: store a not-before timestamp in the KDB entry and
check it in the TGS.  For services it won't help you and you have to
rely on ticket expiration.

> > > Within a realm it's often the case that other methods are used for
> > > revocation (e.g., lock the _account_, which will generally
> > > replicate with less latency than tickets will expire).
> >
> > Right and by using different lifetimes for service tickets as their
> > TGT, you can break apart the ``check that the user knows their
> > passwd'' check from the ``check that the account is still valid''
> > check.
>
> That requires all this checking to actually be done.  Per the above, it
> may not be how you think it is or want it to be, at present.

Roland and I were referring to things like Unix accounts.  In order to
login you need your tickets, yes, but also your account to not be
locked.
In many environments the name service update and cache entry expiration times are such that revocation by locking the _account_ has lower latency than the typical service ticket lifetime. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: revocation feature in Kerberos
On Thu, Jul 31, 2014 at 6:22 PM, Nordgren, Bryce L -FS bnordg...@fs.fed.us wrote: Revocation schemes must account for situations where parties other than the authenticated user cannot contact the user's home KDC. A revocation protocol that propagates revocation notices towards the services accessed by the user will not require connectivity in the opposite direction, and it might not even involve any firewall configuration changes. KDC-KDC revocation notices should be sent on port 88, and KDC-service notices should be sent in realm- or app-specific manner, so no problem there either. A revocation protocol more like OCSP (not stapled, so it'd have the problem you mention) would be silly: you might as well just turn service ticket lifetimes down and be done! No, the only way in which a revocation protocol for Kerberos makes any sense to me is one that involves propagating notices to those services (TGSes included) for which the principal in question got extant tickets. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: revocation feature in Kerberos
On Thu, Jul 31, 2014 at 6:49 PM, Nordgren, Bryce L -FS bnordg...@fs.fed.us wrote: No, the only way in which a revocation protocol for Kerberos makes any sense to me is one that involves propagating notices to those services (TGSes included) for which the principal in question got extant tickets. Good. :) Do that. Seems that the KDC would have to be upgraded with connection info for services (can't trust that instance name == dns; can't trust that the service is running on the standard port). Oh, and if the service is httpd, slapd, or nfs using principal host/example.com, how does one figure out which service to contact? The KDC would have to know how to contact them, or infer it from the principal name. As for _how_ to communicate the revocation, one possibility would be for their realm's revocation service to connect and authenticate as anonymous (say) with a ticket bearing authz-data listing the revoked principal (or not-before time, if revoking only tickets issued before a password change). (Revoking _many_ principals would be done by revoking an entire realm with a not-before time.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: What happened to PKCROSS?
On Tue, Jul 15, 2014 at 6:32 AM, Rick van Rein r...@openfortress.nl wrote: (*) List, if this discussion should (or should not) take place here, let me/us know. I’m not sure what is desired.

kit...@ietf.org is the right place to discuss this.

## Summary and positioning

• Looks like your new draft is the “client empowering” variation, which should work regardless of support by the KDC (well, assuming kx509 of course); this is useful in the interest of individual clients and in situations where the client’s software can be influenced;

Well, ASes need some changes.

• Although the krb5 - x509 - krb5 path could emulate an infrastructural mode inside a user’s KDC, I think it would not be as scalable as it could/should be; it requires a public key exchange per user principal, so it scales less well than a direct PKINIT to crossover between KDCs; also, there will be more delay times;

No, this proposal scales. The only problem is that it doesn't optimize away PK as much as possible. I've an update in which a TGS, on noting a request from an x-realm TGT for a non-existent krbtgt principal, will initiate a PKCROSS exchange to obtain a non-persistent, cacheable trust relation that doesn't require replication.

• I therefore think that PKCROSS needs to describe two approaches; one is “client empowering” and the other is “infrastructural” in style.

Only the first is really needed. The latter is an optimization -- a very worthwhile one, so we should add it.

On a related note I think we need a KRB-ERROR response that can tell a client to retry the request in N milliseconds -- i.e., give an ETA. This is important for QoS purposes: we should want ASes to get less CPU than TGSes, but it's not easy to separate the two services by port number, so we have to resort to something akin to task queues in the AS.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
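The task-queue idea above might look something like this sketch (the priorities, the backlog limit, and the retry arithmetic are all hypothetical, not anything a real KDC implements):

```python
import queue

AS_PRIORITY, TGS_PRIORITY = 2, 1   # lower number = dequeued first
AS_QUEUE_LIMIT = 2                 # beyond this backlog, shed AS load

def enqueue(q, kind, req):
    """Queue a request, or tell an AS client to retry later with an ETA."""
    if kind == "AS":
        backlog = sum(1 for (p, _, _) in q.queue if p == AS_PRIORITY)
        if backlog >= AS_QUEUE_LIMIT:
            # Would map to a KRB-ERROR carrying a retry-in-N-ms hint.
            return ("KRB-ERROR", "retry-in-ms", 50 * (backlog + 1))
    prio = TGS_PRIORITY if kind == "TGS" else AS_PRIORITY
    q.put((prio, id(req), req))    # id() breaks ties between equal priorities
    return ("QUEUED", kind)
```

Because TGS work has the lower priority number, cheap TGS exchanges drain ahead of CPU-heavy AS (PKINIT) work even when both share port 88.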
Re: What happened to PKCROSS?
On Wed, Jul 2, 2014 at 6:23 AM, Rick van Rein r...@openfortress.nl wrote: Hi Nico, But mainly the appeal of this approach is that the pieces needed all exist. Are you talking of http://www.citi.umich.edu/projects/kerb_pki/ as your kx509 implementation? It appears to be based on Kerberos4… No. Heimdal has a kx509 server and client. And there are other implementations: https://secure-endpoints.com/kcacred/index.html http://www.umich.edu/~x509/ It's actually in fairly widespread use too: https://fermi.service-now.com/kb_view.do?sysparm_article=KB0010800 Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: What happened to PKCROSS?
BTW, DANE stapling is not that hard. I have been pointed at AGL's code for it. The RP side doesn't need a DNSSEC resolver to implement it because all the records are stapled, and the RP doesn't need to implement non-existence checking and so on -- just validate the signature chain to the RP's DNSSEC root and check name constraints. Producing the stapled data is not hard either. There's a Python script that uses dig(1) that supports this. It needs to learn to be a daemon that wakes before the shortest TTL passes to refresh the chain. Stapling should result in fewer external dependencies for the Kerberos libraries, so that's a big win. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
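The refresh schedule for such a daemon is simple to sketch (the record shapes and the 90% safety margin are assumptions of mine, and fetch_chain stands in for the dig(1)-based gathering step):

```python
import time

def next_refresh_delay(records, safety=0.9):
    """Refresh at 90% of the smallest TTL so the chain never goes stale."""
    return min(ttl for (_name, ttl, _rdata) in records) * safety

def refresh_loop(fetch_chain, iterations, sleep=time.sleep):
    """Re-gather the full DNSSEC answer chain before its shortest TTL lapses."""
    chain = fetch_chain()
    for _ in range(iterations):
        sleep(next_refresh_delay(chain))
        chain = fetch_chain()   # re-run the queries, re-validate, re-staple
    return chain
```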
Re: What happened to PKCROSS?
On Tue, Jul 1, 2014 at 1:01 PM, Rick van Rein r...@openfortress.nl wrote: I’ve been thinking about realm-crossing lately, specifically between hitherto unknown parties — that is, for use across the general Internet.

I have too. I've an Internet-Draft on the subject. I intend to update it soon. If all goes well I might find myself implementing a few months from now, or if not maybe we can con someone else into doing it.

My plan is roughly:

- kx509 (local realm)
- PKINIT at remote realm to get a TGT for krbtgt/REMOTE@REMOTE
- add an ephemeral, cacheable mechanism by which KDCs can bootstrap a symmetric x-realm principal key
- add a way to make one of those keys permanent
- use DANE for realm public key authentication
- use DANE stapling to avoid the need for slow I/O in KDCs

The only part of this that's difficult at all is the DANE stapling part. The PKINIT part is just a matter of tweaking policy code on the AS side. The kx509 part is easy (though I think the protocol should be revised so it can go on the Standards track) as code exists and the protocol is rather simple (it's just a kerberized service that takes a public key from the client and returns a short-lived certificate for the same key with the client's principal name as the subject).

Transit path handling is easy: all transit paths become hierarchical paths when using DANE. (But when using PKIX, transit path processing gets complicated, as we must then implement X.500-style realm naming.)

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: What happened to PKCROSS?
I'll add that it's really shocking that we don't yet have PKCROSS. Lack of PKCROSS greatly hurts Kerberos' scalability. Also, Kerberos w/ PKCROSS is much closer to something like what PKI should have been: short-lived credentials, no need for revocation protocols (CRLs, OCSP). Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: What happened to PKCROSS?
On Tue, Jul 1, 2014 at 4:11 PM, Rick van Rein r...@openfortress.nl wrote: I've an Internet-Draft on the subject. I intend to update it soon. Excellent! Bookmarked it on http://realm-xover.arpa2.net/kerberos.html and am printing it for review.

Great! That'd be very welcome.

If all goes well I might find myself implementing a few months from now, or if not maybe we can con someone else into doing it. Hero!

Let's not count my (or anyone's) chickens before they hatch. I have a ton of things to do. What I like about this one is that most of the pieces exist, so it might be easy and have a decent payoff.

- kx509 (local realm) - PKINIT at remote realm to get a TGT for krbtgt/REMOTE@REMOTE Oh, that’s an interesting angle! So, unlike earlier PKCROSS proposals you intend to change the client code.

That's one option. The client might be a KDC doing it on behalf of its clients. However, letting the client do this has the benefit that the client's realm then need not see where the client is transiting to (privacy protection). But mainly the appeal of this approach is that the pieces needed all exist.

- add an ephemeral, cacheable mechanism by which KDCs can bootstrap a symmetric x-realm principal key I’m exploring a similar thing (that I was hoping to present a bit later, it’s still shaky) namely Kerberos + Diffie-Hellman for AP_REQ / AP_REP, which may turn out to be fairly simple to add through the “subkey” mechanism, http://tls-kdh.arpa2.net/conceptual.html but that doesn’t hold for AS_REQ / AS_REP as far as I can tell. What a pity :’-(

Look at https://github.com/elric/krb5_admin . It has support for setting keys via a multi-party DH exchange, and gets cluster key update atomicity right.

- use DANE for realm public key authentication Mind you, DANE is a bit of a beast to operate, due to the same-time changes in DNS and the server at hand. That’s something we’re working on at SURFnet. 
It needn't be though, and anyways, I'm making that someone else's problem :) (one that people are committing to take on; passing this buck is realistic).

- use DANE stapling to avoid the need for slow I/O in KDCs The only part of this that's difficult at all is the DANE stapling part. If I understand it correctly, it’s passing DNS data through a TLS pipe…

It's gathering all the responses to all the queries that the peer would have to make, then sending them to them (in ticket authorization data) so that they can validate them without having to do any DNS queries. The main problems here: a) the need to gather and keep up to date all those responses, b) how to encode the set of them, c) how to validate them, because existing DNSSEC libraries I've looked at don't support this. However, Google did try DANE stapling for TLS (which is why you thought TLS was part of it, no doubt), so code must exist, and the feasibility is established (even though it turned out not to work well for TLS for reasons that I think won't apply here).

IF sending along the RRSIG chain of trust THEN need to constantly update the DNS data known in the TLS server;

Yep, that's a local service daemon.

arrival of DNS data in application software which doesn’t have a clue and doesn’t have a cache

Nor could it safely seed its cache with that data.

ELSE still need to inquire with DNS to get the RRSIG, and that involves doing the DNS queries again END IF

Right. It seems that if you're going to staple DANE (or OCSP) you have to do it from day one, otherwise it doesn't happen. The alternative here would be to make KDCs able to task-queue dispatch handling of AS-REQs, and make them able to handle them in a more async manner. That's desirable anyways given how asymmetric the costs of PKINIT vs. not-PKINIT are. Maybe I should punt on DANE stapling and stick to DANE -- it'd be easier to implement with just DANE. And we can add class of service support to the KDC instead. 
I hardly think a mere optimisation could be worth the conceptual mayhem that it provokes… Maybe. But stapling is in many ways the best approach. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: tickets with wrong DNS
Making sure that the client's host-based principal name matches its IP address is something best done asynchronously by scraping the logs. Adding synchronous DNSSEC validation of this in the KDC (obviously the KDC internally would do things asynchronously) would add to latency. Probably not a big deal. It would also require significant restructuring of KDC implementations, for relatively little value. Though to be frank, I do think it'd be good for KDCs to be so structured anyways so that various slow operations could be added by pre-auth and authz plugins of various types. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Advice on cross-realm PKINIT?
On Mon, Jun 9, 2014 at 7:36 PM, Nordgren, Bryce L -FS bnordg...@fs.fed.us wrote: I think it's a bit harsh to claim cross-realm AS is not supported by the protocol. [...] Indeed, the fact that the client and server realm can't differ in the AS-REQ doesn't mean that the pre-auth in the AS-REQ can't indicate the client's true realm. The problem is that other invariants are violated by using AS for x-realm, as I mentioned earlier. Nothing that can't be overcome, and my idea is to use TGS anyways, but with a PKINIT pre-auth instead of PA-TGS, and with a cross-realm certificate (really, a cert most likely issued by a kx509 CA -- an issuer that wouldn't be part of the target TGS' issuers for its realm's client principals). Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
Will, Mobile devices don't really have stable hostnames, so the system should support non-hostbased host/root credentials. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
On Tue, Apr 15, 2014 at 11:54 AM, Simo Sorce s...@redhat.com wrote: On Tue, 2014-04-15 at 11:36 -0500, Nico Williams wrote: Will, Mobile devices don't really have stable hostnames, so the system should support non-hostbased host/root credentials. The hostname is pretty stable, unless you allow dhcp to push an hostname unto you (bad idea). I think what you mean is that not all mobile devices can use dyndns to update the name - ip map, but this shouldn't be a problem in the NFS case. Sure. But there's no need for the client to have any particular sort of name for itself, so why pretend that its name is host-based? (For the share -o root=... option Solaris really wants a root/hostname credential that it then checks against the reverse lookup on the client IP address. I'm not too hot on this, but at least that's only for root-equivalent access, not for general access.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
There is nothing in NFSv4 requiring the use of any sort of client credentials other than user credentials. However, for multi-user clients it's important to have a credential for some session state and for callbacks. For single-user clients there's no need to have any device credentials at all for NFSv4 -- if you have none then the device should use the one user's credentials for all NFSv4 purposes. That said, it's best practice to key all devices. Still, nothing in NFSv4 requires such keys to be named in host-based ways. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
On Tue, Apr 15, 2014 at 2:22 PM, Will Fiveash will.five...@oracle.com wrote: But if this is a work laptop, which is typically a single user system and operates as a client in various contexts, requiring IT provision it with a keytab seems onerous to me. Note that a Solaris NFS v3 client does not require root have a krb cred to operation, even when automounting -- it only requires the user that triggered the automount have a krb cred. What should happen is that there should be a way to enroll a device. That could be as simple as a kadm5 (or HTTP, or RFC3244 extension) API that allows a user to create and key a principal of a form such as device/username/random@REALM or just device/random@REALM. The random should have no periods and should be illegal as a hostname, and it should mostly be a base64 encoding of a few bytes of /dev/urandom output. (Roland's tools have a mechanism for joining a host to a realm using multi-party ECDH to key it, and a site-local procedure for blessing a host principal. A similar but simplified approach could work here.) I think part of the problem is that the gss security context protecting the channel along with the user's krb cred could expire at any time. I think that's why they wanted root to use a key stored in the keytab (I could be wrong of course). No, that is a problem. NFSv4.1 does something to address this, IIRC, though I forget the details. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
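A hypothetical sketch of minting such a name (the device/ prefix and field layout follow the suggestion above; the trailing underscore is my own addition, to guarantee the instance can never be a legal hostname label):

```python
import base64
import os

def make_device_principal(realm, user=None, nbytes=12):
    """Random instance name: base64url of os.urandom output contains no
    periods, and the appended '_' is illegal in hostnames."""
    rand = base64.urlsafe_b64encode(os.urandom(nbytes)).decode().rstrip("=") + "_"
    instance = f"{user}/{rand}" if user else rand
    return f"device/{instance}@{realm}"
```

This yields names like device/username/random@REALM or device/random@REALM that can't collide with any host-based principal.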
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
On Tue, Apr 15, 2014 at 2:48 PM, Tomas Kuthan tomas.kut...@oracle.com wrote: On 04/15/14 21:16, Nico Williams wrote: That said, it's best practice to key all devices. Still, nothing in NFSv4 requires such keys to be named in host-based ways. Makes sense ... but still, basing on host is a nifty way of constructing unique principal name. Is there a meaningful alternative for mobile devices? But it isn't nifty. You quickly run into the issue that the hostname has to have a record in whatever manages your DNS zones, else someone might use that hostname and now some device has keys for its principal. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Accessing Kerberos NFS version 4 (not 2, 3) via /net automounter with kinit only (no /etc/krb5.conf access)
On Tue, Apr 15, 2014 at 4:48 PM, Will Fiveash will.five...@oracle.com wrote: On Tue, Apr 15, 2014 at 02:34:11PM -0500, Nico Williams wrote: What should happen is that there should be a way to enroll a device. If a keytab is really needed. On the otherhand, if a laptop is only acting as a client then why bother? Assuming the logged-in user has a way of acquiring their krb cred that's all they should need if the laptop is acting as a NFS, ssh or any other client that tries to do gss/krb auth. Sure, that's a fair thing to do in the short-term. In the long term I suspect you'll have many reasons to want to enroll a device (e.g., to do FAST w/o PKINIT). And in order to make this short-term fix workable you need a way to configure the system to make the user's Kerberos credential also be the system's (root's). Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: NSA backdoor risks in Kerberos
On Wed, Apr 2, 2014 at 1:10 AM, Chris Hecker chec...@d6.com wrote: I hope this won't turn into a giant thread, I'm just looking for some succinct facts and/or links to thoughtful discussion, I'm not interested in a bunch of opinions or a flame war or anything like that, and I don't think that'd be appropriate for this list or help anybody. But here goes: Has there been a technical writeup of potential backdoor risks in Kerberos, similar to the stuff that keeps coming out about various RSA products: http://www.reuters.com/article/2014/03/31/us-usa-security-nsa-rsa-idUSBREA2U0TY20140331 Kerberos doesn't have large-enough nonces for a Dual_EC-style attack. Kerberos isn't used on a large enough scale to be worth backdooring. Any backdoor is likely to be found only in implementations, not the protocol on account of backdooring protocols being a difficult and risky task. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ping for kdc utility?
http://oskt.secure-endpoints.com/k5ping.html https://github.com/elric1/k5ping Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Request to change MIT Kerberos behavior when principal is expired, deleted or password changed
On Fri, Mar 7, 2014 at 5:16 PM, Greg Hudson ghud...@mit.edu wrote: On 03/07/2014 05:17 PM, Edgecombe, Jason wrote: I don't see how anyone can object to rejecting requests for expired or deleted principals. I don't think anyone has. In the past I have mentioned performance as a [...] +1. No one has objected yet. The problem with applying this to password changes is that if you're logged in on multiple systems you'll have to kinit on each of them... *or* have SSH cascading credential delegation going. Now, if your cascading credential delegation clients are using timers based on ticket expiration... then your new tickets won't be forwarded soon enough. This could get annoying. Password changes are annoying enough as it is, so I could see someone objecting to that. Preemptively making this configurable seems like a win to me. The change may not be a trivial one to make safely, because there are so many edge cases in modern TGS request processing. I don't think it's unsafe. I do think it will annoy users as to password changes. Be aware that: * We cannot generally do these checks for cross-realm TGS requests. The MIT and Heimdal KDCs support multiple realms in the same DB. (MIT's kadmind doesn't, but so what). Such usage will surely be rare, but if the TGS has the crealm's DB, then it should check it. * The KDC cannot revoke already-issued service tickets. Right, we have no revocation protocol, and we almost certainly won't develop one either. This is a strong incentive to make all but initial TGTs fairly short-lived. (In practice people tend to have something like revocation via, e.g., marking user accounts locked in the Unix passwd(4) name service.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Request to change MIT Kerberos behavior when principal is expired, deleted or password changed
FWIW, Heimdal's TGS already does reject requests for clients whose principals should exist in the local HDB but don't. (Obviously this can only be done when the client's realm is also a realm for which the KDC has a database.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Request to change MIT Kerberos behavior when principal is expired, deleted or password changed
On Thu, Mar 6, 2014 at 1:31 PM, Edgecombe, Jason jwedg...@uncc.edu wrote: Does Heimdal reject requests for expired/disabled accounts as well?

It rejects in these cases:

- the HDB doesn't have an entry for the client principal but should have
- the HDB did have an entry and the client principal was marked locked out
- the HDB did have an entry and the client principal was marked invalid
- the HDB did have an entry and the client principal was marked not a client
- the HDB did have an entry and the client principal's valid_start time (which is only really supported via the LDAP HDB backend) is in the future
- the HDB did have an entry and the client principal requires a password change
- the HDB did have an entry and the client principal's password is expired

It'd be trivial to reject requests using tickets predating the last password change.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
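The checks above amount to a simple decision procedure; a sketch with illustrative entry fields and error names (not actual HDB structure members or Heimdal error codes):

```python
def check_client(entry, now, is_local_realm_client=True):
    """Return an error name for the first failing check, else "OK"."""
    if not is_local_realm_client:
        return "OK"                      # no database to check against
    if entry is None:
        return "CLIENT_UNKNOWN"          # should exist in the HDB but doesn't
    if entry.get("locked_out"):
        return "CLIENT_REVOKED"
    if entry.get("invalid") or not entry.get("is_client", True):
        return "POLICY"
    if entry.get("valid_start", 0) > now:
        return "CLIENT_NOTYET"           # not yet valid
    if entry.get("needs_pw_change") or entry.get("pw_expiration", float("inf")) <= now:
        return "KEY_EXPIRED"
    return "OK"
```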
Re: On IETF JGSS interface
On Tue, Mar 4, 2014 at 8:59 AM, Arpit Srivastava arpit@gmail.com wrote: What is the license of IETF JGSS interface ( http://www.ietf.org/rfc/rfc2853.txt), given that I would be implementing the interface using MIT Kerberos APIs, defined in the Java package org.ietf.jgss ? RFC5653 obsoletes RFC2853 and uses a three-clause revised BSD license for the code embedded in it. Assuming you'd be using the GSS-API C bindings to implement the Java bindings, I don't see how the MIT Kerberos license comes into play, since you could link with any GSS-API C bindings implementation, of which there are at least four to my knowledge. The C bindings licensing would be specified in RFC2744, but it doesn't really have a license for the C header bits. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: ACL for Constrained Delegation?
On Wed, Feb 19, 2014 at 11:41 PM, Daniel Kahn Gillmor d...@fifthhorseman.net wrote: This arrangement seems to suggest that the delegation constraint is something that will be managed for all principals by the KDC explicitly, rather than the end user being able to decide (or even know?) what explicit delegations are being offered. Am i understanding this right? That's exactly right. Is there any mechanism for user-controllable delegation? (or perhaps more fundamentally, does this question even make sense, given the power held by the KDC already?) The question very much makes sense. The original Kerberos design required that the applications have the final say on policy as to, e.g., cross-realm transit path policy and authorization in general. KDCs get to reject things (e.g., if there's no cross-realm trust relationship they must reject), and they get to indicate approval (e.g., TRANSIT-POLICY-CHECKED), but in principle they leave policy to the service application. I missed the cut-off for -00 Internet-Drafts for IETF89, so the following is as-yet not submitted, but it will be submitted soon, and its goal is to address this problem: https://raw.github.com/nicowilliams/kitten/master/gss-authzid.txt Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Challenging clients, why another ping-pong?
I brain-o'ed on privacy protection. I understand what you meant now. See what Greg and Russ have to say. But I'll add a piece here as well:

- HTTP is not a simple protocol: there are proxies and routers involved.
- HTTP servers often act as routers.
- There can be many hops.
- A notional service might be composed of many sub-services. How to authenticate them to the user?
- HTTP is NOT connection-oriented. Requests and responses go over the same pipe, but that's about as far as connections relate to requests.

Clearly a single GSS security context token exchange per-connection isn't going to cut it, even with TLS and channel binding to it. Clearly a GSS security context token exchange per-request (!) is awful, though it is what actually happens in many cases.

Several attempts have been made to address this. At the moment there seems to be no interest in actually implementing and standardizing any proposals other than Google's channel-bound cookie concept. I believe that to be a fine solution. I'll explain more later.

Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Clock skew too great status code
On Wed, Feb 5, 2014 at 11:05 AM, Greg Hudson ghud...@mit.edu wrote: This could all work better if krb5 had used a ticket lifetime instead of an end time (like krb4 did, but without the crazy 8-bit representation of the lifetime). But the protocol was designed under the assumption that clients, servers, and KDCs would all have mostly synchronized clocks, so it went with the simplification of always using absolute timestamps and never relative intervals. And yet implementation-wise relative times are still needed... I agree, 'twould have been better to have relative lifetime. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Challenging clients, why another ping-pong?
On Tue, Feb 4, 2014 at 5:58 AM, Rick van Rein r...@openfortress.nl wrote: Hello Greg, What are you looking at specifically? GSSAPI exchanges begin with the client. I thought you might say that. I was looking at SPNEGO, which embeds GSSAPI but where the initiative is (usually) taken by the server. It’s a waste that SPNEGO doesn’t communicate a challenge at that time. GSS-API exchanges always begin with an initial security context token. SPNEGO can carry an initial security context token for an optimistically selected mechanism. We could extend SPNEGO (or MSFT's NegoEx) to provide a challenge to be used in some way (there are various GSS-API extensions, some standardized, some proposed, some implemented-but-not-proposed, where the challenge could be made available). What is your goal? I'm guessing: optimize Kerberos to avoid the need for a replay cache. If so, Roland Dowdeswell and I have some proposals in that space. This is probably the result of embedding protocol into protocol into protocol, but there are no “real” reasons for not sending the challenge that I could think of. API compatibility and protocol interoperability must be maintained. Extensions which when used produce more optimal results are permitted. Flag-days are not permitted. Given that the KDC has told me how to securely communicate with a server, I am thinking about the client proving who he is, and that it is not using a ticket that it observed in transit. Specifically SPNEGO seems fragile to me for leaked certificates, because the ticket is not used to decrypt anything — authentication is accepted for every first one to provide the right ticket. (AFAIK) OK, you're thinking about privacy protection for the initiator's identity. There are better ways to do this, such as building perfect forward security (PFS) into the mechanisms, into SPNEGO, or else into a stackable mechanism. The most likely of these to happen is to build PFS into the mechanism. 
If the initiator knew any sort of public key that is suitable for encryption to the acceptor, then the client could use that, PFS or no PFS. (PFS means, of course, doing something like Diffie-Hellman key agreement. If you want the acceptor to be authenticated before the acceptor is able to see the initiator's Ticket/credentials/name then you'll want a public DH key to be made available to the initiator by a trusted third party.) That's a minor point, though; if the server could speak first with a challenge, preventing replays would be as simple as incorporating the server challenge into the authenticator checksum. …or the server could hold off client checking the response until it has the authenticated decryption function available — given the random input that’s simply retained, he’d be doing it after the client but with the exact same key material. Yep, minor :) The server clearly has the keys available at the point where an authenticator checksum could be checked. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: k5start -K and ticket renewals
On Tue, Jan 28, 2014 at 5:10 AM, moritz.will...@ubs.com wrote: If the behaviour is changing and k5start refresh the ticket more regularly, then the updating of the CC must always be atomic. If I remember correctly, this is right now only the case if -o, -g or -m are specified. As to atomicity... the FILE ccache currently depends on POSIX file locking at least for additions of tickets, and this is a disaster because POSIX file locking is a disaster (because of its drop locks on first close semantics). But yes, *renewal* and refresh should always result in a rename(2) into place, which should be atomic. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: k5start -K and ticket renewals
On Tue, Jan 28, 2014 at 12:42 PM, Russ Allbery ea...@eyrie.org wrote: Nico Williams n...@cryptonector.com writes: As to atomicity... the FILE ccache currently depends on POSIX file locking at least for additions of tickets, and this is a disaster because POSIX file locking is a disaster (because of its drop locks on first close semantics). But yes, *renewal* and refresh should always result in a rename(2) into place, which should be atomic. By that do you mean that the Kerberos libraries do that, or that I need to do that in k5start? I mean that rename-into-place is what the krb5 libraries ought to do. It's not what MIT does, MIT truncates to initialize. Heimdal does rename in the krb5_cc_move() case, but only in that case, otherwise Heimdal unlink()s then creates a new ccache to initialize. I assume krenew is fine (to the extent that POSIX locking is fine). Sort of. POSIX file locking within a single-threaded program can be sufficient, and it is when that program is truncating and writing a new ccache under lock. But you might be racing against threaded programs, and since the libraries (MIT and Heimdal both) misuse POSIX file locks... POSIX file locking is utterly useless in a threaded program unless one also applies other synchronization within that process with great care to never close(2) a file descriptor for a regular file that another thread might also have independently opened and locked (see what SQLite3 does), and even then there are situations where that's insufficient (namely, when multiple libraries in the same process might be accessing the same file). In particular it's possible for a threaded app and krenew to race. Imagine a thread A in an app acquiring a read lock to read the ccache and another thread B blocking to get a write lock to add a service ticket (it's already read the ccache and not found that ticket)... 
The thread A finishes its search, drops the lock, then B gets the lock, then A closes its file descriptor, thus dropping B's lock, so that krenew can get its write lock and start writing. End result: B can write over krenew's entries. Most likely B won't, because B will probably have found the end of the ccache farther off than where krenew would leave it; instead B will likely leave a hole between the end of krenew's entries and B's. But if krenew were to obtain a *larger* ticket than before, then B could write over krenew's writes. Regardless, threaded ccache reader-writers can step on each other, so krenew really plays no special role here. We've observed ccache corruption that can only (I think) be explained by POSIX file locking issues in threaded programs. The only solution is to always rename into place. See my separate posts to the list about this. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
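The drop-locks-on-first-close hazard described above is easy to demonstrate standalone. In this sketch (Python; file paths and the helper are illustrative only), a process takes an fcntl lock via one file object, then a second open-and-close of the *same* file silently drops that lock, which a child process can observe:

```python
import fcntl
import os
import subprocess
import sys
import tempfile

def child_can_lock(path):
    """Spawn a child process that attempts a non-blocking exclusive
    fcntl lock on `path` and reports whether it got it."""
    code = (
        "import fcntl, sys\n"
        "f = open(sys.argv[1], 'r+b')\n"
        "try:\n"
        "    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
        "    print('acquired')\n"
        "except OSError:\n"
        "    print('blocked')\n"
    )
    out = subprocess.run([sys.executable, "-c", code, path],
                         capture_output=True, text=True)
    return out.stdout.strip()

path = tempfile.mkstemp()[1]
f1 = open(path, "r+b")
fcntl.lockf(f1, fcntl.LOCK_EX)   # "thread A" takes the write lock via f1
print(child_can_lock(path))      # blocked: the lock is genuinely held

f2 = open(path, "r+b")           # another thread/library opens the same file...
f2.close()                       # ...and closes it: POSIX drops ALL of this
                                 # process's locks on the file, including f1's
print(child_can_lock(path))      # acquired: our lock evaporated silently
```

This is exactly why a library cannot safely mix POSIX locks with other code in the same process that might independently open and close the same file.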
Re: k5start -K and ticket renewals
Ideally the auto-renewal wake-up timer should be automatically set from the TGT's lifetime (and libkrb5 should automatically handle any faster expiration of non-initial tickets). Then -K shouldn't be needed. The hard part is how to handle transient renewal errors, particularly when the ticket's original lifetime was short (but renew lifetime long). Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Anonymous kerberos and bootstrapping new hosts - how to?
Roland Dowdeswell's krb5_admin and krb5_keytab tool suite support bootstrapping and changing host keys using N-way Diffie-Hellman key exchanges (which includes support for race-free clustered host key updates). Bootstrapping keys requires a locally-defined (site-specific) process for verifying host identity. That process can range from fully automatic to fully manual. At the automatic end, any host gets to bootstrap keys for any host-based principal that has no keys yet and that exists in DNS, with host identity confirmed via service processors; for example, if you have a datacenter with a gatewayed service processor network, you can trust that anything you reach through a service processor is a racked server, so you leverage the datacenter's physical access policies. At the manual end, a sysadmin must confirm the host identity. A key is bootstrapped before the host identification process, using a principal name derived from the N-way DH exchange, so, for example, if you can get console access via a gatewayed service processor then you can use that key to complete the bootstrap process securely. See: http://oskt.secure-endpoints.com/ https://github.com/elric1/ Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Anonymous kerberos and bootstrapping new hosts - how to?
Make sure you're using the right kadmin. Maybe kadmin(1) lacks support for this? In that case, use kinit(1) with -S to get a ticket for the kadmin service principal, then run kadmin with -c pointing at the ccache that kinit created. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: disable KADM5_PASS_REUSE error case?
Remember, policies are now extensible. So you could add a bit in the *policy* that says that it's OK for a user to change the password to one used previously. OR, we might extend *principals* to say this. Chris' use case is for password resets, so setting a flag in the principal when resetting its password, then resetting that flag upon password change would suffice. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Forgot Kerberos Master Key
On Mon, Jun 3, 2013 at 11:17 AM, Greg Hudson ghud...@mit.edu wrote: B. Transition to a new master key using kdb5_util dump -mkey_convert and kdb5_util load. This requires scheduling some downtime. Downtime only for kadmind though. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Conditional prompting with PKINIT preauth
On Mon, May 27, 2013 at 3:51 PM, Greg Hudson ghud...@mit.edu wrote: More generally, I'm not sure the pam_krb5 module ought to be driving the decision to use PKINIT. [...] Well, certainly the KDC must decide how some principal is authenticated. But local policy must also be allowed to set a bar. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Multiple principals in a single application
On Sun, May 19, 2013 at 9:44 AM, Bernardo Pastorelli berp...@hotmail.com wrote: I run on an OS where the available version of the cyrus-sasl library does not support SASL_GSS_CREDS. So openldap has LDAP_OPT_X_SASL_GSS_CREDS, but then when calling cyrus-sasl, it fails because it is not able to handle SASL_GSS_CREDS. This is the reason why my code is failing (I didn't properly check the return codes). Is there any alternative to setting this option? You could interpose on gss_acquire_cred(), but really, I'd just build a recent version of these two libraries. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Incorrect delegation state shown on acceptor side by context flags
On Mon, May 20, 2013 at 5:20 AM, Vipul Mehta vipulmehta.1...@gmail.com wrote: One more question: what is the exact use of the context delegation flag if it doesn't need to be the same on the initiator and acceptor sides? The initiator gets to ask for credential delegation. The acceptor gets to receive delegated credentials. The acceptor also gets to impersonate the initiator principal to the extent that the credential issuers allow. The flag doesn't really tell the acceptor much beyond that, since the extent to which it can impersonate the initiator could vary with the time of day, the phases of the moon, ... Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Multiple principals in a single application
If you drop the need for having the tickets only in memory, then you can drop a lot of C code: just kinit all the ccaches in the DIR and then let the Kerberos GSS mechanism take care of the rest. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Multiple principals in a single application
On Wed, May 8, 2013 at 2:05 AM, Bernardo Pastorelli berp...@hotmail.com wrote: My application uses openldap and GSSAPI to connect to a remote LDAP server. GSSAPI leverages kerberos as the transport mechanism. a) It's one user at a time per-connection for LDAP. You can't multiplex multiple users' LDAP PDUs over a single connection. b) First use gss_acquire_cred() with the given user's gss_name_t as the desired name, then call ldap_int_sasl_set_option() with LDAP_OPT_X_SASL_GSS_CREDS as the option and the gss_cred_id_t as the value. c) Then call ldap_sasl_bind_s(). You need a version of OpenLDAP that has this option, and a version of Cyrus SASL that has the SASL_GSS_CREDS option. But IIRC they've had these for several years now. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Multiple principals in a single application
Oh, and yeah, you need DIR ccaches too. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: incremental propagation gets stuck with UPDATE_FULL_RESYNC_NEEDED
iprop and kprop use different ports. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: incremental propagation gets stuck with UPDATE_FULL_RESYNC_NEEDED
On Wed, May 1, 2013 at 11:20 PM, Dave Steiner stei...@oit.rutgers.edu wrote: Because we have multiple realms, we run several kpropd's with -P. When I run kprop I give the port. But when kprop is run from kadmin for incremental propagation, where is it going to get the correct port number from? In src/kadmin/server/ipropd_svc.c I see:

    /* XXX Yuck! */
    if (getenv("KPROP_PORT")) {
        pret = execl(kprop, kprop, "-f", dump_file, "-P",
                     getenv("KPROP_PORT"), clhost, NULL);
    } else {
        pret = execl(kprop, kprop, "-f", dump_file, clhost, NULL);
    }

There's your answer: either from KPROP_PORT in the environment, or by having a per-kadmind-instance krb5.conf and KRB5_CONFIG in the environment. Ideally all of the KDC-side daemons/tools would support multi-realm operation, but kadmind doesn't quite at this time. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: incremental propagation gets stuck with UPDATE_FULL_RESYNC_NEEDED
On Mon, Apr 29, 2013 at 4:09 PM, Dave Steiner stei...@oit.rutgers.edu wrote: I've turned on incremental propagation for my two test Kerberos machines, but it continually tries to do a full sync and never completes one. What version of MIT krb5 are you using? Before starting this (as I had worked with iprop a few months back) I did a full kprop and deleted the principal.ulog files to start fresh. BTW, there's a kproplog -R option to reset the ulog now. You should use that. One odd thing about our setup is we have multiple realms. As far as I can tell from previously playing with iprop, it doesn't work with multiple realms. But at this time, I just want to iprop my default realm. Multiple realms in one KDB principal file? Or just multiple realms on a host? IIUC krb5kdc supports multiple realms in a single KDB just fine, but kadmind doesn't, and kadmind plays a big role in iprop. Any ideas why (1) it thinks it needs to do a full resync (kproplog shows one new update on the master), and (2) why it's not doing the full resync? What can I check to see why it's not working? Can you truss/strace the kadmind (and follow fork and exec) and see what's happening? It's probably a misconfiguration that will become evident as soon as you see open(2) return some ENOENT in the truss/strace output. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Kerberos behavior in the presence of multiple PTR records
On Fri, Mar 15, 2013 at 9:04 AM, Yury Sulsky yury.sul...@gmail.com wrote: On Thu, Mar 14, 2013 at 8:55 PM, Nico Williams n...@cryptonector.com wrote: So... there should be just one canonical name (see definition of CNAME) and PTRs (pointers) should point to the primary (canonical) name of the thing. So why does RFC2181 say that this does not imply that there should only be one PTR RR in any PTR RRSet?! I don't know. It seems wrong to me. Nico, thanks for the pointer ( :-) ) to that RFC. This part clears it up for me: 10. Naming issues It has sometimes been inferred from some sections of the DNS specification [RFC1034, RFC1035] that a host, or perhaps an interface of a host, is permitted exactly one authoritative, or official, name, called the canonical name. There is no such requirement in the DNS. It seems that an IP address may belong to multiple canonical names (i.e. there may be multiple A and PTR records referring to a single IP), but an alias may only point to one of these names (i.e. there can only be one CNAME record for a given alias). Yes, but it doesn't follow that in one case canonical means one and in the other it means many. And the APIs have resolved this problem anyways. This is a case where what the RFCs say is irrelevant because what got deployed wins. On Thu, Mar 14, 2013 at 9:39 PM, Greg Hudson ghud...@mit.edu wrote: There is no check to see if that result is the same as the forward lookup. Take a look at what happens to the remote_host variable after the getnameinfo call. Right, thanks. I should have read more carefully. Still, wouldn't it make sense to iterate through all PTR records and search for one that matches the canonical name returned from the forward lookup? If a record like that does exist, returning that one would allow the user to specify a host that has other canonical names (and multiple PTR records). The code here isn't seeing the PTR records. 
Instead MIT Kerberos is calling system library functions (getnameinfo(3)) that do that, and those functions, as I've explained, only look at one PTR RR. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Kerberos behavior in the presence of multiple PTR records
To my knowledge no RFC says that only one PTR RR may exist in any given PTR RRSet. In practice all implementations of getnameinfo(), gethostbyaddr(), and the like use only the first PTR RR in the PTR RRSet, for obvious semantic reasons: such code is looking for a canonical name for an IP address, and more than one name means there's no canonical name, so the only options are failure or picking one. In any case, you should never want to use PTR RR lookups for principal name canonicalization. (Not unless you are using DNSSEC, which you're almost certainly not.) Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
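The fail-or-pick-one semantics just described can be made concrete with a toy sketch (this is not libc's actual implementation; the hostnames are made up):

```python
def canonical_name(ptr_rrset):
    """getnameinfo()-style canonical-name selection as a toy function:
    with zero PTR records there is no name at all; with more than one
    there is no *canonical* name either, so an implementation must fail
    or pick one -- here, like most resolvers, we take the first."""
    if not ptr_rrset:
        return None              # no reverse mapping: lookup fails
    return ptr_rrset[0]          # pick-one semantics

print(canonical_name(["db1.example.com."]))
print(canonical_name(["db1.example.com.", "www.example.com."]))
```

Either way, the second PTR RR in the set is never consulted, which is why multi-PTR setups interact so poorly with name canonicalization.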
Re: Upgrade strategies
On Thu, Feb 28, 2013 at 6:12 AM, John Devitofranceschi j...@optonline.net wrote: When upgrading from MIT Kerberos 1.X to 1.X+N what kind of general rules of thumb can one rely on in terms of compatibility? Should the slave KDCs be upgraded first, then the master? Or upgrade the master first? You can do the master first, but you must not use new features that affect the KDB in ways that will break your slaves (e.g., multiple MKVNOs), or that affect the KDB in ways the slaves will silently ignore when ignoring them is harmful (e.g., new enctypes -- old KDCs won't be able to decrypt tickets issued by new KDCs). In fact, you should hold back from using such new features even once all KDCs are updated, until you're ready to say there will be no rollback. I think you'd be better off upgrading the slaves first, as they'll understand the older master's KDB format fully (the database format hasn't really changed much since 1.3, except for extensions to the policy DB and new TL_DATA in the principal DB; unknown TL_DATA types are ignored, which in some cases is fine, in others not so much). This isn't a hard and fast rule; you can upgrade the master first too, but you have to be mindful of not using new features that might break the slaves. You might want to hear this from the MIT folks though! Any considerations when using incremental v. full propagation? I wouldn't use iprop with any version prior to 1.11, as you know. When upgrading to 1.11, what's the oldest previous version that requires more effort than just replacing the binaries (and possibly dealing with the 'weak crypto' changes)? 1.4 is the oldest version that I know of from which the upgrade is relatively simple. And yes, you have to mind your configuration (allow_weak_crypto, permitted_enctypes, ...), not just upgrade the binaries. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Upgrade strategies
It might be useful to have a list of all features that should not be used on a master with downlevel slaves. Here's a few that I know of: - newer enctypes (AES was added in... 1.4 and since then Camellia is the newest) for service keys, particularly krbtgt keys - multiple MKVNOs (I forget when this was added) - n-strikes user principal locking (IIRC that was in 1.8) - extended policies (1.11) There are probably others. I'm guessing PKINIT is a feature you don't want to use in a master with downlevel slaves. Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Strange behavior with kadmind and incremental propagation in 1.8.3
On Wed, Feb 27, 2013 at 5:23 PM, vs_krb vs.krb...@gmail.com wrote: Looking at the release log for 1.8.6 from http://web.mit.edu/kerberos/krb5-1.8/krb5-1.8.6.html , I see the issue mentioned in this thread is addressed with that release. I am wondering if you would still recommend folks on 1.8.6 to upgrade as well. We are running 1.8.6 and we are planning to turn on incremental propagation. Seeing the MIT documentation and forums, I get the impression that iprop is not as stable as a straight kprop push from master to slaves (I understand it is more complicated than a simple dump of the db and shooting it across). But looking at this thread, the question that I get is whether we need to go to 1.11, or at least 1.10, before turning on iprop. I can't speak for MIT, but IMO if you're using iprop then you should upgrade to 1.11. There are a lot of iprop bug fixes in 1.11. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Functional test of KDC for monitoring?
On Wed, Feb 13, 2013 at 6:12 AM, John Devitofranceschi j...@optonline.net wrote: One thing that we do is monitor propagation. Something like:

    lpc = get_last_princ_changed;
    master_lpc_kvno = get_kvno(master_kdc, lpc);
    init_error_state;
    foreach kdc (@slave_kdc_list); do
        slave_lpc_kvno = get_kvno(kdc, lpc);
        if (master_lpc_kvno != slave_lpc_kvno) then
            set_error_state;
        fi
    done
    report_error_state;

Note that this will fail to detect failures to iprop other principals. Ideally there'd be a cheap, constant-time way to compare DBs, something like having a Merkle hash tree so we need only compare root hashes. But changing the KDB to have such a form is involved, and it implies some additional trade-offs (e.g., you can't possibly have higher write concurrency without at least serializing the last part of each commit). The challenge that I see is getting the last princ changed. You can scrape the logs or run the monitor on the master and use kproplog. Yup! What would be nice is if kadmin had client-visible requests that gave you visibility into iprop status. Interesting idea. Basically an RPC for getting the ulog. In fact, the kadmin server has this -- it's the kadmin client that lacks a UI for it. Also, we really should want to keep complete ulogs on the slaves as well, so we can inspect them and see if any changes were missed. Among other things it would make it easier to write thorough tests! Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
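The Merkle-hash-tree idea mentioned above can be sketched quickly. Entry names and serial numbers below are made up, and this is only an illustration of comparing root hashes, not a proposed KDB format:

```python
import hashlib

def merkle_root(entries):
    """Hash each entry, then pairwise-combine digests level by level
    until a single root remains; two databases match iff roots match."""
    level = [hashlib.sha256(repr(e).encode()).digest() for e in sorted(entries)]
    if not level:
        return hashlib.sha256(b"empty").digest()
    while len(level) > 1:
        if len(level) % 2:              # odd node count: carry the last one up
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical (principal, last-change-serial) entries:
master      = {("alice@EXAMPLE.COM", 150028), ("bob@EXAMPLE.COM", 149001)}
slave_ok    = set(master)
slave_stale = {("alice@EXAMPLE.COM", 150027), ("bob@EXAMPLE.COM", 149001)}

# Monitoring now compares two 32-byte digests instead of two whole KDBs.
print(merkle_root(master) == merkle_root(slave_ok))      # True
print(merkle_root(master) == merkle_root(slave_stale))   # False
```

The trade-off noted in the post is visible even here: any change to any entry changes the root, so every commit must update the path to the root, which serializes the tail end of each commit.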
Re: Strange behavior with kadmind and incremental propagation in 1.8.3
Re-reading more closely, it seems that krb5_put_principal() failed because of the locking issue but nonetheless still created the iprop ulog entry and marked it as committed. That would be a nasty bug I was not aware of, but I believe it's fixed in master, and likely fixed in 1.11. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Strange behavior with kadmind and incremental propagation in 1.8.3
On Mon, Feb 11, 2013 at 6:14 PM, John Devitofranceschi foo...@gmail.com wrote: I was hoping to get an opinion on this. Maybe it's something that's been seen before. I see this in the kadmind log:

Feb 7 17:00:10 server.realm.com kadmind[17670]: [ID 702911 local0.notice] Request: kadm5_create_principal, bar...@realm.com, Cannot lock database, client=f...@realm.com, service=kadmin/server.realm@realm.com, addr=1.2.3.4

"Cannot lock database" should not happen with 1.11 or master. It does happen with older releases and is a serious bug that has been discussed here. See these commits for more details: b2e7deb7cb2d9c37f00599778f4a102feaf6149d~4..b2e7deb7cb2d9c37f00599778f4a102feaf6149d (i.e., start here: https://github.com/krb5/krb5/commit/b2e7deb7cb2d9c37f00599778f4a102feaf6149d and look at the preceding three commits as well.) The create_principal failed... we even check in our provisioning client:

Feb 7 17:00:39 server.realm.com kadmind[17670]: [ID 702911 local0.notice] Request: kadm5_get_principal, bar...@realm.com, Principal does not exist, client=f...@realm.com, service=kadmin/server.realm@realm.com, addr=1.2.3.4

Right: the DB couldn't be locked when you attempted to create barney... BUT the propagation log says this:

    Update Entry
        Update serial # : 150028
        Update operation : Add
        Update principal : bar...@realm.com
        Update size : 524
        Update committed : True
        Update time stamp : Thu Feb 7 17:00:00 2013
        Attributes changed : 12
            Attribute flags
            Maximum ticket life
            Maximum renewable life
            Principal expiration
            Password expiration
            Principal
            Key data
            Password last changed
            Modifying principal
            Modification time
            TL data
            Length

Note the timestamps... the update entry appeared 10s earlier than the kadmind log error! There was no previous create_principal request 10s earlier.
A few things are worth noting: a) there's an iprop bug (fixed in master) where slaves can't get updates within ten seconds of the last transaction on the master; b) a create of ba...@realm.com must have succeeded at some point :) The fun really starts when the incremental propagation kicks in. Oh? That wasn't fun enough? Bummer! We have 7 secondary KDCs, and in the case that I am investigating, 4 of them got Update 150028 right away, on its own, with no problems. So far no real fun :) The other 3 had Update 150028 bundled with one or more additional updates, and the kpropd on these three servers started core dumping over and over again until we forced a full resync. Do you have stack traces from the cores? Please let me know if this is a known bug. If it is not, maybe the code changes from 1.8 to 1.11 have made this behavior stop. It would be nice to be able to reliably reproduce this so we can then test post-1.8.3 versions of the code. So far, this has hit us twice in the last couple of years. Oh, you should *definitely* try 1.11, or even master. 1.8's iprop has bugs for sure. Just trawl through the git log of master. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: client's system clock is ahead of KDC system clock
On Tue, Jan 29, 2013 at 5:09 PM, Marcus Watts m...@umich.edu wrote: Hi, if a client's system clock is one hour ahead of the KDC's system clock, should I get a valid TGT, or should I get a clock-skew error? We have clients that are able to get a TGT when their system clock is ahead of the server clock. Any idea if this is a client issue? A KDC server issue? Thanks. Actually it's a perfectly valid case (so far as the KDC is concerned); you're just getting postdated tickets that will be valid in one hour. So if you're patient... But the clients generally don't specify a from time. And to get a postdated ticket the client would have to set the postdated flag. In practice it will work (see Greg's reply). The more interesting case is if the clock is only a fraction of a second fast. This isn't a problem for users, but it is a problem for scripts that get a ticket and immediately use it: sometimes the ticket will work, and sometimes it won't, even though a fraction of a second is well within the typical (default) skew allowance of 5 minutes. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
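For concreteness, here is the arithmetic of the skew allowance mentioned above. This is illustrative only, not the krb5 library's actual validity check; 300 seconds is the usual default clockskew:

```python
CLOCKSKEW = 300  # seconds; the usual default clockskew allowance

def within_skew(ticket_time, local_time, skew=CLOCKSKEW):
    """Naive timestamp comparison: accept if the two clocks differ by
    no more than the allowed skew."""
    return abs(ticket_time - local_time) <= skew

now = 1_700_000_000
print(within_skew(now + 3600, now))   # False: an hour fast is far outside skew
print(within_skew(now + 0.4, now))    # True: sub-second skew is easily allowed
```

Which is exactly why the one-hour case draws the clock-skew question, while the sub-second case needs the postdated-ticket explanation above to account for any failures.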
Re: Principal naming
My problem is that I don't like a multiplicity of names for a single user entity. Instead I'd like much more in the way of attributes being passed in ancillary data like, e.g., authorization-data. I.e., I prefer the Windows/AD model. I get that in general that's a difficult model to apply outside Windows, but still, I prefer it. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Principal naming
On Fri, Jan 18, 2013 at 1:35 PM, Russ Allbery r...@stanford.edu wrote: Nico Williams n...@cryptonector.com writes: There's really no point to the /admin thing: since the server requires INITIAL tickets there's no risk of use of stolen TGTs for accessing kadmin, and if you were to have different pre-authentication requirements for kadmin than for initial TGTs the protocol does allow that. Er, it's still a good security practice to use a separate set of credentials that you don't type into everything all the time to do your daily work. Particularly given that we still live in a world where there's a lot of SASL PLAIN over TLS. That might be true, but a) do you really think that people use different passwords for */admin principals than their regular user principals? and b) there's no reason that we couldn't have different credentials for this without having different identifiers. It also lets you do things like assign /admin principals randomized keys and require that people use PKINIT. kadmind could just require that hardware pre-auth have been done in order to allow certain operations. See also (b) above. Granted, (b) could only work as long as kadmind requires INITIAL tickets, or, if it didn't, as long as the client knew how to request extra/different pre-auth and the KDC knew how to label the resulting tickets as being differently pre-authenticated. And yes, we can do that. So no, there is definitely a point. But I don't believe that distinct names is necessary for this. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Principal naming
On Fri, Jan 18, 2013 at 11:25 AM, Jeff Blaine jbla...@kickflop.net wrote: Can anyone explain away the reasoning behind the decision to make user principals need the form: specific_part/contextual_part e.g. jennifer/admin and service principals the OPPOSITE - of the form contextual_part/specific_part e.g. host/daffodil.mit.edu What happened? Who knows the history and reason for this? I wasn't there, so I don't know, but it's something to live with. Well, there's actually no need for /admin principals -- you could just not have them and modify the kadmin client to stop baking that in (or use it with the -c ccache option). There's really no point to the /admin thing: since the server requires INITIAL tickets there's no risk of use of stolen TGTs for accessing kadmin, and if you were to have different pre-authentication requirements for kadmin than for initial TGTs the protocol does allow that. So, yeah, I think it'd be a good idea to start making changes to kadmin to stop insisting on /admin principals. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Functional test of KDC for monitoring?
On Tue, Jan 15, 2013 at 12:38 AM, Roland C. Dowdeswell el...@imrryr.org wrote: And [to the MIT developers], I think that it would be nice if there were either (1) functionality within Kerberos which allowed for the writing of programs such as this without overriding functions, i.e. allow library users to tell the libs to use a particular KDC; or (2) if k5ping or a similar program were integrated into MIT Kerberos to aid in monitoring as this is a need that all enterprise deployments of Kerberos need. I second this. k5ping is much too useful and conceptually simple to be so difficult to implement. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: kadmin/host principals
The Solaris kadmin/kadmind use kadmin/fqdn principals. The MIT kadmin/kadmind can do that or use kadmin/admin (and kadmin/changepw, for password changes). Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Kerberos wrapper
On Dec 5, 2012 11:00 AM, Oliver Loch grime...@gmx.net wrote: Both look interesting, but they can't be started from inetd, can they? If one uses KNC, does one need to write a wrapper around it so that it is started again when it exits? KNC most certainly can be launched from inetd, and indeed that's the most common use of it. Also, there are a couple of helpers in Roland's github account (http://github.com/elric1): a) prefork, for pre-forking a pool of knc processes, and b) lnetd, an equivalent of inetd for Unix domain sockets. These can be put together to build high-performance kerberized services quite easily. Nico -- Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos
Re: Kerberos wrapper
See http://github.com/elric1/knc . Kerberos mailing list Kerberos@mit.edu https://mailman.mit.edu/mailman/listinfo/kerberos