In my opinion NFS actually works fine for realistic cases, once a couple of
bugs are fixed and some other tools are put in place.
In real cases, the user logs in with a principal username@DOMAIN. That is
always placed in the default collection defined in /etc/krb5.conf. At least for
us, they
On 7/22/19 1:39 PM, Charles Hedrick wrote:
> Please be aware that I’m using Redhat’s KCM implementation in sssd. It’s
> supposed to be compatible with Heimdal’s, but based on documentation it
> appears that it may not be.
>
> The default value of KRB5CCNAME is simply KCM: It had better be
> user-
Some more testing on MacOS. With the native Mac utilities, it uses credential
type API:
It appears that if you set KRB5CCNAME to API or API:uid, it behaves the same
way: it creates new unique names like API:027B19DC-01E6-4610-9300-7E3E1DFF706A.
Even if I set KRB5CCNAME to a specific cache, if I
On Jul 22, 2019, at 1:00 PM, Greg Hudson <ghud...@mit.edu> wrote:
By my reading, KEYRING also doesn't generally include the uid in the name.
Again, I can only speak for what I see in Redhat and Ubuntu. The default for
KRB5CCNAME is KEYRING:persistent:UID. Something (I think a combination
Please be aware that I’m using Redhat’s KCM implementation in sssd. It’s
supposed to be compatible with Heimdal’s, but based on documentation it appears
that it may not be.
The default value of KRB5CCNAME is simply "KCM:". It had better be user-specific,
or everybody shares a collection.
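For reference, these collection defaults normally come from default_ccache_name
in /etc/krb5.conf. A sketch of the two variants discussed in this thread
(assuming your libkrb5 supports %{uid} expansion):

```
[libdefaults]
    # sssd/KCM default on Red Hat: no uid in the name; the KCM daemon
    # itself is expected to keep each user's collection separate.
    default_ccache_name = KCM:
    # KEYRING default on Red Hat and Ubuntu: uid embedded, so each
    # user gets a private persistent collection.
    #default_ccache_name = KEYRING:persistent:%{uid}
```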
On 7/22/19 11:16 AM, Charles Hedrick wrote:
> I was surprised to find the methods to do these things aren’t present. Here’s
> what I’ve defined:
Some of this is covered in
https://k5wiki.kerberos.org/wiki/Projects/Credential_cache_collection_improvements
(which unfortunately has not been worked on)
I have code to deal with a number of difficulties in implementing kerberos
transparently to users.
Some of this code needs to know whether a KRB5CCNAME is a collection or a
specific cache, and to be able to find the collection if it’s a cache.
I was surprised to find the methods to do these thi
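One rough way to make the collection-vs-cache distinction from the name alone
is to switch on the cache-type prefix. This is a heuristic sketch only: it
follows the naming conventions mentioned in this thread, does not consult
libkrb5, and does not handle KEYRING subsidiary-cache names:

```shell
#!/bin/sh
# Classify a KRB5CCNAME value as "collection" or "cache" by its prefix.
# Heuristic sketch: KCM:, KEYRING:, DIR:, and a bare "API:" name
# collections; DIR:: (double colon) names a subsidiary cache, and
# FILE:, API:name, or bare paths name a single cache.
cc_kind() {
    case "$1" in
        DIR::*)                echo cache ;;       # subsidiary of a DIR collection
        KCM:*|KEYRING:*|DIR:*) echo collection ;;
        API:)                  echo collection ;;
        *)                     echo cache ;;       # FILE:, API:name, bare paths
    esac
}

cc_kind "KEYRING:persistent:1000"          # prints: collection
cc_kind "FILE:/tmp/krb5cc_1000"            # prints: cache
cc_kind "DIR::/run/user/1000/krb5cc/tkt"   # prints: cache
```

Finding the enclosing collection for a DIR:: cache is then just a matter of
stripping the subsidiary part; KEYRING and KCM need real libkrb5 calls.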
> 3) anyway the best would be to pull old key from backups (either from
> kdc or server backup) and put it back to KDC under correct kvno
>
> depending on your skills and other factors of your environment,
> restoring whole KDC db might be easier than to mess with single entry ...
btw, just putti
I'm definitely not an expert on the field, but I'd guess you'd have to:
1) wait until client tickets expire and clients request new ones for the
current kvno
2) because Linux NFS credential storage is buried deep in the kernel,
reboot all clients (sometimes just restarting services helps,
s
Probably. If I interpret your email correctly, you recreated the key table for the
server. I assume you either rebooted the server or restarted everything
relevant (most critical would be rpc.svcgssd).
I agree that rebooting clients would probably do it on the client side, except
that not all systems are
Hi Charles,
Surely the action of rebooting the client would do all of that?
‐‐‐ Original Message ‐‐‐
On Monday, July 22, 2019 2:13 PM, Charles Hedrick wrote:
> Unfortunately it’s likely to take some experimentation. My starting point
> would be on each client, unmount the file system
Unfortunately it’s likely to take some experimentation. My starting point would
be on each client, unmount the file system, maybe delete /tmp/krb5ccmachine*,
restart rpc.gssd, and remount.
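Spelled out as a script, that sequence might look like the sketch below. It is
a dry run only: the export path and mountpoint are taken from the error output
later in this thread, the rpc-gssd unit name may differ on your distribution,
and `run` just prints each command so you can review before executing.

```shell
#!/bin/sh
# Dry-run sketch of the per-client recovery steps above.
run() { echo "+ $*"; }    # swap the body for "$@" to actually execute

run umount /mnt/foo                # unmount the krb5 NFS share
run rm -f /tmp/krb5ccmachine*      # drop the machine credential caches
run systemctl restart rpc-gssd     # make rpc.gssd forget cached contexts
run mount -t nfs4 -o sec=krb5 foo.example.com:/srv/share/foo /mnt/foo
```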
> On Jul 22, 2019, at 6:22 AM, Laura Smith
> wrote:
>
> Ok, I hold my hand up, I messed up. So the ques
I'm not an expert but I'd try:
1) check if the keys for the service are in sync in the KDB and the service keytab.
If a client reboot does not help, I'd guess the keys are not in proper sync
2) pull old keytab from NFS server backup and merge it with current
keytab
client with not-yet expired tickets s
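Step 1 can be checked mechanically by looking at which kvno values the keytab
actually holds. The sketch below extracts the highest kvno per principal from
`klist -kte`-style output; the sample output here is made up for illustration,
and in practice you would compare the result against what `kvno <principal>`
reports from the KDC.

```shell
#!/bin/sh
# Hypothetical `klist -kte /etc/krb5.keytab` output, for illustration only.
sample='KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------
   2 07/22/19 10:00:00 nfs/foo.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)
   3 07/22/19 11:00:00 nfs/foo.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)'

# Print the highest kvno present in the keytab for each principal.
highest_kvno() {
    awk 'NR > 2 { if ($1 + 0 > max[$4]) max[$4] = $1 + 0 }
         END { for (p in max) print p, max[p] }'
}

printf '%s\n' "$sample" | highest_kvno
# prints: nfs/foo.example.com@EXAMPLE.COM 3
```

If the highest kvno in the keytab is lower than what the KDC reports, the two
are out of sync and step 2 (restoring the old key under the correct kvno)
applies.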
Maybe a couple of hours or so.
"klist -l" shows empty on a client I've tried.
When mounting, the client now shows :
mount.nfs4: access denied by server while mounting (null)
mount: mounting foo.example.com:/srv/share/foo on /mnt/foo failed: Invalid
argument
dmesg on the client shows:
NFS: state
How long has it been since this happened?
I think that the clients will be fine once their old ccaches expire. Have you
tried forcing the issue by manually refreshing one of the clients?
Sent from my iPhone
> On Jul 22, 2019, at 06:22, Laura Smith
> wrote:
>
> Ok, I hold my hand up, I mess
Ok, I hold my hand up, I messed up. So the question is, how do I get myself
out of this mess ?
A summary of how I got here:
• I have an NFS server and a bunch of clients connecting and auth using krb5.
• This was all working beautifully until today.
• Through an act of pure fat-fingered stup