On 04/19/2012 04:10 PM, Dmitri Pal wrote:
On 04/19/2012 09:03 AM, Simo Sorce wrote:
On Thu, 2012-04-19 at 14:18 +0200, Ondrej Hamada wrote:
On 04/18/2012 08:30 PM, Rich Megginson wrote:
* Credentials expiration on replica should be configurable
What does this mean ?
We should store credentials for a subset of users only. As this subset
might change over time, we should flush the credentials of users who
haven't shown up for a while (even though their credentials have not
expired yet).
This should be determined through group membership or a similar
mechanism; talking about 'expiration' seems wrong and confusing, perhaps
just a language problem ?
Right, thanks for the correction.

Fractional replication had originally been planned to support search
filters in addition to attribute lists - I think Ondrej wants to
include or exclude certain entries from being replicated.
Yes, my point is that the Consumer should store credentials only for
users that authenticate against it, so we need to exclude some
attributes, but only for a specific subset of users.
I am not sure we can achieve this with just a fractional replication
filter, not easily anyway. A search filter singles out entire entries.
In order to have different sets of attributes replicated we need an
additional, per-filter attribute exclusion list.
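For illustration, the only fractional replication knob today is a single
attribute on the replication agreement, and it applies to every entry in
scope (dn and attribute names below are placeholders, not a proposal):

    dn: cn=toConsumer1,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
    objectClass: nsDS5ReplicationAgreement
    cn: toConsumer1
    nsDS5ReplicaHost: consumer1.example.com
    nsDS5ReplicaRoot: dc=example,dc=com
    # Strips these attributes from ALL replicated entries; there is no
    # per-filter variant, which is exactly the limitation noted above.
    nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE userPassword krbPrincipalKey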

      3) find master dynamically - Consumers and Hubs will in fact be
         master servers (from the 389-DS point of view); this means that
         every consumer or hub knows its direct suppliers and they know
         their suppliers ...
Not clear what this means, can you elaborate ?
Replication agreements possess the information about suppliers. It means
we can dynamically discover where the masters are by going through all
nodes and asking who their supplier is. Thinking about it again, it would
probably be very slow and less reliable. Looking up DNS records in LDAP
would be better.
Neither, we have the list of masters in LDAP in the cn=etc subtree for
these uses, it's a simple search, and it is the authoritative list.
Remember we may not always control the DNS, so relying on a manually
maintained DNS would be bad.
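(For the record, a sketch of that lookup; the suffix is a placeholder:

    ldapsearch -Y GSSAPI -s one \
        -b "cn=masters,cn=ipa,cn=etc,dc=example,dc=com" "(objectclass=*)" cn

Each entry under cn=masters is one master.)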
Good point, I forgot about the master entries.
* SSSD must be improved to allow cooperation with more than one LDAP
server
Can you elaborate on what you think is missing in SSSD ? Is it about the
need to fix referrals handling ? Or something else ?
I'm afraid of the situation where a user authenticates and the
information is not present on the Consumer. If we use referrals and the
authentication has to be done against the master, will SSSD be able to
handle it?
Currently SSSD can handle referrals, although it does so poorly due to
issues with the openldap libraries. Stephen tells me there are plans to
handle referrals in the SSSD code directly instead of deferring to
openldap libs. When that is done we should have no more issues.
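For context, referral chasing in SSSD's LDAP provider is controlled by a
single option today (domain name and URI below are placeholders):

    [domain/example.com]
    id_provider = ldap
    auth_provider = ldap
    ldap_uri = ldap://consumer1.example.com
    # Disable referral chasing through the openldap libs until SSSD
    # can handle referrals natively.
    ldap_referrals = false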
However, for authentication purposes I am not sure referrals are the way
to go.
For the Kerberos case referrals won't work, because we will not let a
consumer have read access to keys in a master (besides the consumer will
not have the same master key so will not be able to decrypt them), so we
will need to handle the Krb case differently.
For LDAP binds, we might do referrals, or we could chain binds and avoid
that issue entirely. If we chain binds we can also temporarily cache
credentials in the same way we do in SSSD, so that if the server gets cut
off from the network it can keep serving requests. I am not thrilled
about caching users' passwords this way and it should probably not be
enabled by default, but we'd have the option.

* authentication policies, every user must authenticate against the
master server by default
If users always contact the master, what are the consumers for ?
Need to elaborate on this and explain.
As was mentioned earlier in the discussion, there are two scenarios. In
the first one the consumer serves only as a source of information (DNS,
NTP, accounts, ...); the second one allows distribution of credentials
and thus enables authentication against the consumer locally. The first
one is more secure since the creds are not stored on consumers, which
might be more easily compromised.
Ok, makes sense, but I would handle this transparently to the clients,
as noted above. Trying to build knowledge in clients or rely on
referrals is going to work poorly with a lot of clients, making the
solution not really useful in real deployments where a mix of machines
that do not use SSSD is present.

     - The policy must also specify the credentials expiration time. If a
       user tries to authenticate with expired credentials, he will be
       refused and redirected to the Master server for authentication.
How is this different from current status ? All accounts already have
password expiration times and account expiration times. What am I
missing ?
Sorry, I wrote that unclearly. I meant that the credentials we store on
the Consumer should be available there only for a specified period of
time.
Why ?

After that time they should be flushed (meaning they are still valid,
just no longer stored on the Consumer), no matter what their expiration
time is.
I do not see the point. If we are replicating the keys to a consumer,
why would it try to delete them after a while?
Security? My original idea was that if a consumer gets compromised, as
few credentials as possible should be stored on it. This behaviour would
mainly flush the credentials of users who don't authenticate against the
consumer regularly. All the paragraphs about flushing credentials below
were inspired by this idea.

This is mainly for the scenario when someone authenticates against
our Consumer on some occasion and then never comes back. It's pointless
to keep storing his credentials, so I think it should be possible to
define some limit for how long credentials are stored.
Ok so we are mixing things here.

I guess the scenario you are referring to here is the one where we do
not replicate any key to the consumer, some user does an LDAP bind
against it with chained binds, and we decide to cache the password as a
hash if the auth is successful. This is a rare corner case in my mind.

Note that we cannot do this with Kerberos. We can't "cache" Kerberos
keys; we either have a copy permanently (or until the policy/group
membership of the consumer is changed) or never have them.


Because of the removal of expired creds we will have to grant the
Consumer the permission to delete users from the Consumer-specific user
group (but only deleting; adding users will be possible on Masters
only).
I do not understand this.
When a user hasn't authenticated against the Consumer for a long time
and his credentials were flushed from the Consumer, his credentials
should also be excluded from replication to the Consumer. This might be
solved by the proposed plugin as well.
Either the user is marked as part of the location served by this
consumer, and therefore we replicate keys, or we do not. We cannot delete
keys, as nothing would replicate them back to us until a password change
occurs. Also, you have no way to tell the master what it should
replicate, dynamically.
I would remove this point, it is not something we need or want, I think.
The point was that not only would the credentials be removed, but the
user would also be unmarked.

     - to be able to do that, both Consumers and Hubs must be Masters
       (from the 389-DS point of view).
This doesn't sound right at all. All servers can always write locally;
what prevents them from doing so are referrals/configuration. Consumers
and hubs do not and cannot be masters.
But what about the information about an account getting locked? We need
to lock the account locally immediately and also tell the master (and
thus every other node) that the specific account is locked.
For account lockouts we will need to do an explicit write to a master
(probably yet another plugin, or an option to the password plugin). We
cannot use replication to forward this information, as consumers do not
have a replication agreement that goes from consumer to master.
If the consumers and hubs were in fact masters, it would be possible to
replicate it. But if we can manage it via a plugin and thus keep the
consumers/hubs truly read-only, then it's definitely a better solution.
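To make the idea concrete, such a hypothetical plugin on the consumer
could push something like this to a master on each local failure (purely
illustrative; the mechanism does not exist, only the attribute names are
taken from the IPA schema):

    # ldapmodify against master1.example.com (placeholder host and user)
    dn: uid=jdoe,cn=users,cn=accounts,dc=example,dc=com
    changetype: modify
    replace: krbLoginFailedCount
    krbLoginFailedCount: 5
    -
    replace: krbLastFailedAuth
    krbLastFailedAuth: 20120419161000Z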
   When the Master<->Consumer connection is broken, the lockout
   information is saved only locally and will be pushed to the Master on
   connection restoration. I suppose that only the lockout information
   should be replicated. In case of a lockout the user will have to
   authenticate against the Master server only.
What is the lockout information ? What connection is broken ? There
aren't persistent connections between masters and consumers (esp. when
hubs are in between there are none).
By the lockout information I mean reporting the situation when the
account gets locked due to too many failed logins.
This will be hard indeed.

Broken connection - when it is not possible to tell the master that the
account got locked.
Ok, replace that with "when the masters are unreachable".

Simo.


There is one aspect that is missing in this discussion. If we are
talking about a remote office and about a Consumer that serves this
office, we need to understand not only the flow of the initial
authentication but also whether other authentications are happening. I
mean, are we just talking about logging into the machines in the remote
office (in which case LDAP auth with pass-through and caching would be
sufficient on the consumer; I will explain how it could be done below),
or is eSSO involved and expected?

I guess that if eSSO is required, for example to access NFS shares,
there should be a local IPA server with a KDC in the remote office. In
this case it probably makes sense to make it just a normal replica, but
with limited modification capabilities and potentially with a subset of
users and other entries replicated to that location.

If eSSO is not required and we are talking about the initial login only,
we can have a DS instance as a consumer; it does not need to be a whole
IPA server because the KDC, CA and management frameworks are not needed.
This DS can replicate a subset of the users, groups and other data using
fractional replication for the identity lookups, and use the PAM
pass-through feature with SSSD configured to go to the real master for
authentication.
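On the consumer side that is roughly the stock 389-DS PAM Pass Through
Auth plugin pointed at a PAM stack that calls pam_sss (a minimal sketch;
the "sssd-proxy" service name is made up):

    dn: cn=PAM Pass Through Auth,cn=plugins,cn=config
    changetype: modify
    replace: nsslapd-pluginEnabled
    nsslapd-pluginEnabled: on
    -
    replace: pamIDMapMethod
    pamIDMapMethod: RDN
    -
    replace: pamService
    pamService: sssd-proxy
    -
    replace: pamFallback
    pamFallback: FALSE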

So effectively there are two different use cases:
1) eSSO server in the remote office
2) Login server in the remote office

The solutions seem completely different, so I suggest starting with one
or the other.

So far the discussion seems to be more about the second option (login server in the remote office), so I would prefer to stick with it for now.


--
Regards,

Ondrej Hamada
FreeIPA team
jabber: oh...@jabbim.cz
IRC: ohamada
