Re: Syncrepl partial replication based on attribute problem

2012-06-01 Thread Jeffrey Crawford
Thanks, Howard.

Let me make sure I understand your response. I'm not changing any ACLs;
they stay the same. Only the attributes in the record are changing.
Are you saying that syncprov looks at the account that is bound and sends
deletes if a record would become invisible after a modification? If that's
the case, it doesn't seem to be working right.

If syncprov looks at which user is bound, would it have trouble when there
are multiple replicas, each binding with its own replica account to get
replication data? We have two distinct downstream replicas that each have
different criteria for which records get replicated to them.

Jeffrey

On Thu, May 31, 2012 at 6:38 PM, Howard Chu h...@symas.com wrote:

 Jeffrey Crawford wrote:

 Hello,

 I had thought I tested this beforehand but I seem to be able to reliably
 reproduce the following situation:

 We have an installation where the provider server has information that is
 replicated to downstream replicas using the syncrepl protocol. The account
 used to replicate is allowed to see records where certain attributes meet
 specific values, a silly example is an attribute is set

 dn: uid=somerecord,ou=people,dc=ucsc,dc=edu
 replicateMe: TRUE
 ...

 When an account has that attribute set it then replicates to the
 downstream
 replica, however if later we set replicateMe to FALSE so that the account
 used
 for replication can no longer see the entry, it seems to be orphaned and
 is
 not removed in the replica.

 We are using OpenLDAP 2.4.26 and I have the syncprov sessionlog set to
 500 and
 the replica is set to refreshAndPersist.

 Is this something that is simply not supported? or would a case like this
 be
 expected to work and I've either got a configuration issue or a bug?


 Visibility changes due to ACL rules are not detected. syncprov only checks
 an entry against the search parameters of the original sync search
 operation, i.e., the base, scope, and filter. If an entry matches these
 params before the modification, and no longer matches after the operation,
 syncprov will send a delete message for that entry. (Likewise if an entry
 doesn't match before, but matches after, syncprov will send an Add for the
 entry.)

 --
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
 Chief Architect, OpenLDAP  http://www.openldap.org/project/





Re: Syncrepl partial replication based on attribute problem

2012-06-01 Thread Nick Milas

On 1/6/2012 8:54 AM, Jeffrey Crawford wrote:

Are you saying that syncprov looks at the account that is bound and 
sends deletes if a record would become invisible after a modification?


I understand the opposite: syncprov will only send add/delete messages 
based on base/scope/filter and not on ACL visibility. So in essence 
Howard is saying that ACL-based filtering in replication does not produce 
correct results on consumers.


This is tricky! (I didn't know either.) It means that we should *not* 
design our replication around ACL filtering (which, unfortunately, we 
have done too), but, on the contrary, design our DIT so that it can 
cover our replication needs through the consumer base/scope/filter 
configuration, and design/adapt our ACLs with that rule in mind.
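
For illustration (a minimal sketch, not from this thread; the DNs, the 
replicateMe attribute and the credentials are hypothetical), moving the 
selection criterion out of the ACLs and into the consumer's sync search 
would look something like:

syncrepl rid=001
  provider=ldap://provider.example.com
  type=refreshAndPersist
  searchbase="dc=example,dc=com"
  scope=sub
  filter="(replicateMe=TRUE)"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=replica,ou=admins,dc=example,dc=com"
  credentials=secret
  retry="60 +"

Because the filter is part of the sync search itself, syncprov notices 
when a modification makes an entry stop matching (e.g. replicateMe 
flipped to FALSE) and sends the corresponding delete, which the ACL-only 
approach does not.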


Please confirm the above thoughts.

Thanks,
Nick



Re: Syncrepl partial replication based on attribute problem

2012-06-01 Thread Jeffrey Crawford
Hmm, and taking this one step further, I'm guessing that the replication
account probably needs to see at least the entryUUID and entryCSN for all
accounts, to make sure that it can see the records it needs to delete.
Okay, at least I have some direction to go on now.

Jeffrey

On Fri, Jun 1, 2012 at 9:06 AM, Nick Milas n...@eurobjects.com wrote:

 On 1/6/2012 8:54 AM, Jeffrey Crawford wrote:

  Are you saying that syncprov looks at the account that is bound and sends
 deletes if a record would become invisible after a modification?


 I understand the opposite: syncprov will only send add/delete message
 based on base/scope/filter and not on ACL-visibility. So in essence Howard
 says that ACL-based filtering in replication does not result in proper
 results to consumers.

 This is tricky! (I didn't know either.) It means that we should *not*
 design our replication based on ACL-filtering (which, unfortunately, we
 have done too), but, on the contrary, that we should design our DIT so that
 it can cover our replication needs based on consumer base/scope/filter
 configuration, and we should design/adapt our ACLs with the above rule in
 mind.

 Please confirm the above thoughts.

 Thanks,
 Nick




-- 
I fly because it releases my mind from the tyranny of petty things . . .

— Antoine de Saint-Exupéry

Jeffrey E. Crawford
ITS Application Administrator (IDM)
831-459-4365
jeffr...@ucsc.edu


Re: Syncrepl partial replication based on attribute problem

2012-06-01 Thread Jeffrey Crawford
OK, I think I got this to work. I didn't add a filter to the syncrepl
parameter, so I'm using ACLs as before; however, I changed the ACLs to
allow the replica account access to only the attributes entry and
entryUUID on every item in the directory. Now, setting attributes to
values that no longer match the access rules in the rest of the ACLs
seems to do what I want, although I'm not quite sure how. In short, the
provider looks something like this:

olcAccess: to dn.subtree=dc=example,dc=org
  attrs=entry,entryUUID
  by dn.base=cn=replica,ou=admins,dc=example,dc=org ssf=128 read
  by * none break
olcAccess: to dn.subtree=dc=example,dc=org
  filter=(replicateMe=TRUE)
  
attrs=@inetOrgPerson,@posixAccount,entryCSN,contextCSN,createTimestamp,modifyTimestamp,structuralObjectClass
  by dn.base=cn=replica,ou=admins,dc=example,dc=org ssf=128 read
  by * none break

And the replicas are configured as so:

dn: olcDatabase={1}hdb,cn=config
changeType: modify
replace: olcSyncrepl
olcSyncrepl: rid=1
  provider=ldap://ldap-ENV-1.example.org:1389/
  starttls=critical
  tls_reqcert=never
  bindmethod=simple
  retry=60 10 900 92 86400 3
  binddn=cn=replica,ou=admins,dc=example,dc=org
  credentials=PW
  schemachecking=off
  searchbase=dc=example,dc=org
  type=refreshAndPersist
olcSyncrepl: rid=2
  provider=ldap://ldap-ENV-2.example.org:1389/
  starttls=critical
  tls_reqcert=never
  bindmethod=simple
  retry=60 10 900 92 86400 3
  binddn=cn=replica,ou=admins,dc=example,dc=org
  credentials=PW
  schemachecking=off
  searchbase=dc=example,dc=org
  type=refreshAndPersist

Now the following will add a record to the replica:
dn: uid=user,ou=people,dc=example,dc=org
changeType: modify
replace: replicateMe
replicateMe: TRUE

And the following will take it away (delete):
dn: uid=user,ou=people,dc=example,dc=org
changeType: modify
replace: replicateMe
replicateMe: FALSE

However, I don't know enough about the underlying code to be sure this
is working as designed; I don't want to be relying on a bug that might
get fixed later.

Thanks
Jeffrey

On Fri, Jun 1, 2012 at 9:26 AM, Jeffrey Crawford jeffr...@ucsc.edu wrote:

 Humm and taking this one step further I'm guessing that the replication 
 account probably needs to see at least the entryUUID and entryCSN for all 
 accounts to make sure that it can see the records it needs to delete. Okay at 
 least I have some direction to go on now.

 Jeffrey


 On Fri, Jun 1, 2012 at 9:06 AM, Nick Milas n...@eurobjects.com wrote:

 On 1/6/2012 8:54 AM, Jeffrey Crawford wrote:

 Are you saying that syncprov looks at the account that is bound and sends 
 deletes if a record would become invisible after a modification?


 I understand the opposite: syncprov will only send add/delete message based 
 on base/scope/filter and not on ACL-visibility. So in essence Howard says 
 that ACL-based filtering in replication does not result in proper results to 
 consumers.

 This is tricky! (I didn't know either.) It means that we should *not* design 
 our replication based on ACL-filtering (which, unfortunately, we have done 
 too), but, on the contrary, that we should design our DIT so that it can 
 cover our replication needs based on consumer base/scope/filter 
 configuration, and we should design/adapt our ACLs with the above rule in 
 mind.

 Please confirm the above thoughts.

 Thanks,
 Nick




 --
 I fly because it releases my mind from the tyranny of petty things . . .

 — Antoine de Saint-Exupéry

 Jeffrey E. Crawford
 ITS Application Administrator (IDM)
 831-459-4365
 jeffr...@ucsc.edu




--
I fly because it releases my mind from the tyranny of petty things . . .

— Antoine de Saint-Exupéry

Jeffrey E. Crawford
ITS Application Administrator (IDM)
831-459-4365
jeffr...@ucsc.edu



Syncrepl partial replication based on attribute problem

2012-05-31 Thread Jeffrey Crawford
Hello,

I thought I had tested this beforehand, but I seem to be able to reliably
reproduce the following situation:

We have an installation where the provider server holds information that is
replicated to downstream replicas using the syncrepl protocol. The account
used to replicate is allowed to see records where certain attributes have
specific values; a silly example is an attribute such as:

dn: uid=somerecord,ou=people,dc=ucsc,dc=edu
replicateMe: TRUE
...

When an entry has that attribute set, it replicates to the downstream
replica. However, if we later set replicateMe to FALSE so that the account
used for replication can no longer see the entry, the entry seems to be
orphaned and is not removed from the replica.

We are using OpenLDAP 2.4.26, I have the syncprov sessionlog set to 500,
and the replica is set to refreshAndPersist.

Is this something that is simply not supported? Or would a case like this
be expected to work, meaning I've either got a configuration issue or a bug?

Thanks
Jeffrey


Re: Syncrepl partial replication based on attribute problem

2012-05-31 Thread Howard Chu

Jeffrey Crawford wrote:

Hello,

I had thought I tested this beforehand but I seem to be able to reliably
reproduce the following situation:

We have an installation where the provider server has information that is
replicated to downstream replicas using the syncrepl protocol. The account
used to replicate is allowed to see records where certain attributes meet
specific values, a silly example is an attribute is set

dn: uid=somerecord,ou=people,dc=ucsc,dc=edu
replicateMe: TRUE
...

When an account has that attribute set it then replicates to the downstream
replica, however if later we set replicateMe to FALSE so that the account used
for replication can no longer see the entry, it seems to be orphaned and is
not removed in the replica.

We are using OpenLDAP 2.4.26 and I have the syncprov sessionlog set to 500 and
the replica is set to refreshAndPersist.

Is this something that is simply not supported? or would a case like this be
expected to work and I've either got a configuration issue or a bug?


Visibility changes due to ACL rules are not detected. syncprov only checks an 
entry against the search parameters of the original sync search operation, 
i.e., the base, scope, and filter. If an entry matches these params before the 
modification, and no longer matches after the operation, syncprov will send a 
delete message for that entry. (Likewise if an entry doesn't match before, but 
matches after, syncprov will send an Add for the entry.)


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/




Partial replication, remove branch

2012-03-20 Thread jehan procaccia


I would like to replicate only some OUs under the baseDN, namely ou=People 
and ou=Group,ou=System, but not the remaining OUs below ou=System 
(ou=Hosts, ou=Networks, ou=Protocol).

How can I exclude those branches from replication?
My current syncrepl config, which replicates all the subtree branches:
syncrepl rid=001
provider=ldaps://master.domain.fr
type=refreshAndPersist
searchbase=dc=int-evry,dc=fr
filter=(objectClass=*)
attrs=*
scope=sub
schemachecking=on
bindmethod=simple
retry=60 10 300 +
binddn=cn=replic,ou=System,dc=int-evry,dc=fr
credentials=secret
updateref   ldaps://master.domain.fr:636




Re: Partial replication, remove branch

2012-03-20 Thread anax



On 03/20/2012 10:54 AM, jehan procaccia wrote:


I would like to replicate only some OUs under the baseDN ; ou=people and
ou=group,ou=system, but not the remaining of OUs below ou=system =
ou=Hosts , ou=Networks, ou=Protocol.
How can I remove those branches to replicate ?
my actual syncrepl config that replicate all the subtree branches:
syncrepl rid=001
provider=ldaps://master.domain.fr
type=refreshAndPersist
searchbase=dc=int-evry,dc=fr
filter=(objectClass=*)
attrs=*
scope=sub
schemachecking=on
bindmethod=simple
retry=60 10 300 +
binddn=cn=replic,ou=System,dc=int-evry,dc=fr
credentials=secret
updateref ldaps://master.domain.fr:636


Define the ACL for binddn=cn=replic,ou=System,dc=int-evry,dc=fr such 
that it cannot access the OUs you don't want to sync.
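
For instance (just a sketch based on the DNs in this thread, untested), 
rules placed before the more general ones could explicitly hide those 
branches from the replication identity:

access to dn.subtree=ou=Hosts,ou=System,dc=int-evry,dc=fr
    by dn=cn=replic,ou=System,dc=int-evry,dc=fr none
    by * none break
access to dn.subtree=ou=Networks,ou=System,dc=int-evry,dc=fr
    by dn=cn=replic,ou=System,dc=int-evry,dc=fr none
    by * none break
access to dn.subtree=ou=Protocol,ou=System,dc=int-evry,dc=fr
    by dn=cn=replic,ou=System,dc=int-evry,dc=fr none
    by * none break

The "by * none break" clauses let every other identity fall through to 
the existing ACLs, while the replication DN simply never sees those 
subtrees.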


suomi



Re: Partial replication, remove branch

2012-03-20 Thread jehan procaccia

Le 20/03/2012 12:37, anax a écrit :



On 03/20/2012 10:54 AM, jehan procaccia wrote:


I would like to replicate only some OUs under the baseDN ; ou=people and
ou=group,ou=system, but not the remaining of OUs below ou=system =
ou=Hosts , ou=Networks, ou=Protocol.
How can I remove those branches to replicate ?
my actual syncrepl config that replicate all the subtree branches:
syncrepl rid=001
provider=ldaps://master.domain.fr
type=refreshAndPersist
searchbase=dc=int-evry,dc=fr
filter=(objectClass=*)
attrs=*
scope=sub
schemachecking=on
bindmethod=simple
retry=60 10 300 +
binddn=cn=replic,ou=System,dc=int-evry,dc=fr
credentials=secret
updateref ldaps://master.domain.fr:636


Define the ACL for binddn=cn=replic,ou=System,dc=int-evry,dc=fr such 
that it cannot access the ou's you don't want  to sync.


suomi

Thanks, I achieved a partial replication of only the wanted branches, as 
you suggested, by restricting the ACLs for the replica's account to the 
branches/attributes I want.
However, that's not an easy config to set up. I noticed that as soon as I 
forgot to mention one attribute of an object, that object isn't 
replicated, and if the object is a branch's DN node, none of the objects 
in that subtree are replicated either. For example, I initially forgot 
the attribute associatedDomain, which was part of such a node object, and 
then that node and all the subtree objects below it weren't replicated.

So I ended with many more ACLs like that :

#ou=system,dc=int-evry,dc=fr baseDN ACL, to get the ou=system object node
access to dn.exact=ou=system,dc=int-evry,dc=fr
    by dn=cn=admin,dc=int-evry,dc=fr write
    by dn=cn=replic,ou=System,dc=int-evry,dc=fr read
    by users read
#Groups and associated attributes
access to dn.subtree=ou=Group,ou=System,dc=int-evry,dc=fr
    attrs=cn,sn,memberuid,member,mail,description,entry,objectclass,associatedDomain,gidNumber,ou
    by dn=cn=admin,dc=int-evry,dc=fr write
    by dn=cn=replic,ou=System,dc=int-evry,dc=fr read
    by users read

How can I check for performance issues with all the ACLs I added? Is 
there a program to test/benchmark the ACLs, or to optimise them?
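
As an aside (not an answer from the thread): slapacl(8) can at least 
verify individual access decisions offline, although it is not a 
benchmark. A sketch, assuming a slapd.conf setup and example DNs:

slapacl -f /etc/openldap/slapd.conf \
        -D "cn=replic,ou=System,dc=int-evry,dc=fr" \
        -b "cn=somegroup,ou=Group,ou=System,dc=int-evry,dc=fr" \
        entry/read cn/read associatedDomain/read

(Use -F /etc/openldap/slapd.d instead of -f for cn=config.) Each 
attr/access pair is reported as ALLOWED or DENIED for that identity and 
entry.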


Thanks .



cn=config partial replication

2011-12-24 Thread The Ranger

Hello,

I have multiple 2.4.23 and 2.4.26 servers doing master-slave replication 
using syncrepl.


The main server contains multiple subordinate DITs that get replicated 
to different servers:


* DIT1 from master to server A, B, C
* DIT2 from master to server D, E, F
* DIT3 from master to server G

etc.

Now I would also like to set up cn=config replication. Actually, the most 
important part for me would be cn=schema,cn=config, since everything else 
is rather static.


What would be the best setup, with minimal duplication of configuration 
settings/values?


There are many howtos on the net about syncing only cn=schema,cn=config, 
but putting an olcSyncrepl value on olcDatabase={0}config makes the whole 
database shadowed and redirects config changes to the master server, 
which is not a reasonable solution.


The best solution would be for cn=schema,cn=config (and maybe also the 
proper olcDatabase subtree) to be synchronized with the master, while all 
the rest of the config database remains locally manageable.


I read about suffixmassage, but this needs an LDAP server upgrade on the 
2.4.23 servers (there is no package in Debian 6.0.3 for that). And as far 
as I understand, it also requires separate cn=config,cn=slave subtrees on 
the server, with duplicated database configuration and ACL definitions, 
etc.


I am simply trying to find the best solution, one that addresses as many 
problems at once as possible and reduces the config overhead as much as 
possible.


Could you please advise what my options are?


--
rgrds,
Ivari




Re: Partial replication

2010-04-07 Thread Zdenek Styblik

On 04/06/10 14:55, Andrew Findlay wrote:

On Thu, Apr 01, 2010 at 09:53:07PM +0200, Zdenek Styblik wrote:


you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld
for replication, then allow this cn=mirrorA to read only
o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere
else.


I have used that technique for a fairly complex design with a central
office and many small satellites. It works OK *provided* you never change
the list of entries that can be seen by the replicas. The syncrepl
system has no way to evaluate the effect of an ACL change (and probably
no way to know that one has happened).



Could you please elaborate more on this one?


My design requirements were similar to Joe's: I had a large central
server holding the master data for a lot of customers. Each customer
needed a local replica of their own data plus some subset of the
service-provider data. In my case the subset was not even complete
subtrees: the customers were allowed to see certain attributes of
certain entries only. I had to protect against the possibility that
someone might modify the config on a customer server to obtain data
that they should not have.

As there was already a comprehensive default-deny access-control
policy in place, I just factored in the replica servers as principals
with the right to see all data that should be replicated to that site
and nothing else. That meant that every replica server could have an
identical syncrepl clause which just copies everything it can see from
the entire DIT.



If I understood your reply correctly, I wasn't suggesting anything other 
than putting the ACLs on the provider side, not the consumer.



The downside is that if any access permissions change then the
replicas may not reflect the correct new subset of data.


Because I'd say if you refuse
access later to some DN then it must be like DN has been deleted. Same goes
for adding. I mean, syncrepl won't see data. And it checks, well it should
check, for changes in some regular intervals, right?


The problem is that syncrepl does not check every entry exhaustively.
That would be very inefficient (though I would like a way to force it
periodically). The master server maintains something like a timestamp
on the whole DIT, and when the replica server connects they just have
to compare timestamps and transfer things that have changed in the
interval between the two. (This is a gross simplification of the
actual protocol, but close enough for the discussion).

Now imagine that I change an ACL which affects the visibility of some
entries. The entries themselves have not changed, so the timestamps do
not change and the replication process will not know that the replica
data should change.

Worse still, I might change the membership of a group that is
referenced in an ACL. The replication process would transfer the group
but would not know that some other entries have changed visibility.



To make it short, I'll take your word for it. :) In other words, it's 
probably done as well as it can be for the time being.

I've written down my assumptions and... that's probably all.
[some blabbering deleted/replaced here]


I have no need for nor experience with this, yet it's somewhat interesting.


It is a powerful technique, but the designer *and operators* of such a
system must be aware of the pitfalls.


ACLs of anykind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :)
It just needs a lot of work to maintain and stuff (please please, no
bashing).


ACLs of any kind in any system (LDAP, file system, RDBMS etc) can be
hard to get right and harder to modify correctly at a later date. It
all depends on the policy that you are trying to implement. You should
think of ACLs as programs and expect to need programmer-level skill to
work on them. You may find this paper helpful:

http://www.skills-1st.co.uk/papers/ldap-acls-jan-2009/

Of all the LDAP servers that I have worked with, I find OpenLDAP's
ACLs are the easiest for implementing non-trivial policies.



Well, right now all [2 :)] replicas are 1:1 and they should maintain 
the very same ACLs as the provider. I know this can be managed, e.g. via 
a batch script (or dynamic ACLs); still, that's what I meant.

But I agree, it's not much better anywhere else.

I guess the real fun will begin when/if I decide to replicate only the 
data that is really needed and of some use to a [certain] consumer.



Andrew


Zdenek

--
Zdenek Styblik
Net/Linux admin
OS TurnovFree.net
email: sty...@turnovfree.net
jabber: sty...@jabber.turnovfree.net


Re: Partial replication

2010-04-06 Thread Andrew Findlay
On Thu, Apr 01, 2010 at 09:53:07PM +0200, Zdenek Styblik wrote:

 you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld
 for replication, then allow this cn=mirrorA to read only
 o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere
 else.

 I have used that technique for a fairly complex design with a central
 office and many small satellites. It works OK *provided* you never change
 the list of entries that can be seen by the replicas. The syncrepl
 system has no way to evaluate the effect of an ACL change (and probably
 no way to know that one has happened).


 Could you please elaborate more on this one?

My design requirements were similar to Joe's: I had a large central
server holding the master data for a lot of customers. Each customer
needed a local replica of their own data plus some subset of the
service-provider data. In my case the subset was not even complete
subtrees: the customers were allowed to see certain attributes of
certain entries only. I had to protect against the possibility that
someone might modify the config on a customer server to obtain data
that they should not have.

As there was already a comprehensive default-deny access-control
policy in place, I just factored in the replica servers as principals
with the right to see all data that should be replicated to that site
and nothing else. That meant that every replica server could have an
identical syncrepl clause which just copies everything it can see from
the entire DIT.

The downside is that if any access permissions change then the
replicas may not reflect the correct new subset of data.

 Because I'd say if you refuse 
 access later to some DN then it must be like DN has been deleted. Same goes 
 for adding. I mean, syncrepl won't see data. And it checks, well it should 
 check, for changes in some regular intervals, right?

The problem is that syncrepl does not check every entry exhaustively.
That would be very inefficient (though I would like a way to force it
periodically). The master server maintains something like a timestamp
on the whole DIT, and when the replica server connects they just have
to compare timestamps and transfer things that have changed in the
interval between the two. (This is a gross simplification of the
actual protocol, but close enough for the discussion).

Now imagine that I change an ACL which affects the visibility of some
entries. The entries themselves have not changed, so the timestamps do
not change and the replication process will not know that the replica
data should change.

Worse still, I might change the membership of a group that is
referenced in an ACL. The replication process would transfer the group
but would not know that some other entries have changed visibility.
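
A blunt workaround after an ACL change (an aside with hypothetical paths, 
not part of the original exchange) is to make the consumer forget its 
sync state, so that the next connection is a complete refresh evaluated 
against the new visibility rules:

# on the consumer (BDB/HDB backend assumed; keep DB_CONFIG if present)
/etc/init.d/slapd stop
rm -f /var/lib/ldap/*.bdb /var/lib/ldap/__db.* /var/lib/ldap/log.*
/etc/init.d/slapd start

With an empty database there is no contextCSN cookie, so syncrepl pulls 
everything the replication identity is currently allowed to see.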

 I have no need for nor experience with this, yet it's somewhat interesting.

It is a powerful technique, but the designer *and operators* of such a
system must be aware of the pitfalls.

 ACLs of anykind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :) 
 It just needs a lot of work to maintain and stuff (please please, no 
 bashing).

ACLs of any kind in any system (LDAP, file system, RDBMS etc) can be
hard to get right and harder to modify correctly at a later date. It
all depends on the policy that you are trying to implement. You should
think of ACLs as programs and expect to need programmer-level skill to
work on them. You may find this paper helpful:

http://www.skills-1st.co.uk/papers/ldap-acls-jan-2009/

Of all the LDAP servers that I have worked with, I find OpenLDAP's
ACLs are the easiest for implementing non-trivial policies.

Andrew
-- 
---
| From Andrew Findlay, Skills 1st Ltd |
| Consultant in large-scale systems, networks, and directory services |
| http://www.skills-1st.co.uk/                  +44 1628 782565 |
---


RE: Partial replication

2010-04-06 Thread Joe Friedeggs

The e-mail thread seems to have wandered a bit; I hope I am replying to the 
correct one.

I've tested both methods, ACL vs. 'syncrepl search filter', and both seem to 
work well for me.  I agree with Andrew's point that controlling this via the 
ACLs on the provider is more secure (in my case).

Thanks for all the help and insight.

Joe


 On Thu, Apr 01, 2010 at 09:53:07PM +0200, Zdenek Styblik wrote:

 you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld
 for replication, then allow this cn=mirrorA to read only
 o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere
 else.

 I have used that technique for a fairly complex design with a central
 office and many small satellites. It works OK *provided* you never change
 the list of entries that can be seen by the replicas. The syncrepl
 system has no way to evaluate the effect of an ACL change (and probably
 no way to know that one has happened).


 Could you please elaborate more on this one?

 My design requirements were similar to Joe's: I had a large central
 server holding the master data for a lot of customers. Each customer
 needed a local replica of their own data plus some subset of the
 service-provider data. In my case the subset was not even complete
 subtrees: the customers were allowed to see certain attributes of
 certain entries only. I had to protect against the possibility that
 someone might modify the config on a customer server to obtain data
 that they should not have.

 As there was already a comprehensive default-deny access-control
 policy in place, I just factored in the replica servers as principals
 with the right to see all data that should be replicated to that site
 and nothing else. That meant that every replica server could have an
 identical syncrepl clause which just copies everything it can see from
 the entire DIT.

 The downside is that if any access permissions change then the
 replicas may not reflect the correct new subset of data.

 Because I'd say if you refuse
 access later to some DN then it must be like DN has been deleted. Same goes
 for adding. I mean, syncrepl won't see data. And it checks, well it should
 check, for changes in some regular intervals, right?

 The problem is that syncrepl does not check every entry exhaustively.
 That would be very inefficient (though I would like a way to force it
 periodically). The master server maintains something like a timestamp
 on the whole DIT, and when the replica server connects they just have
 to compare timestamps and transfer things that have changed in the
 interval between the two. (This is a gross simplification of the
 actual protocol, but close enough for the discussion).

 Now imagine that I change an ACL which affects the visibility of some
 entries. The entries themselves have not changed, so the timestamps do
 not change and the replication process will not know that the replica
 data should change.

 Worse still, I might change the membership of a group that is
 referenced in an ACL. The replication process would transfer the group
 but would not know that some other entries have changed visibility.

 I have no need for nor experience with this, yet it's somewhat interesting.

 It is a powerful technique, but the designer *and operators* of such a
 system must be aware of the pitfalls.

 ACLs of anykind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :)
 It just needs a lot of work to maintain and stuff (please please, no
 bashing).

 ACLs of any kind in any system (LDAP, file system, RDBMS etc) can be
 hard to get right and harder to modify correctly at a later date. It
 all depends on the policy that you are trying to implement. You should
 think of ACLs as programs and expect to need programmer-level skill to
 work on them. You may find this paper helpful:

 http://www.skills-1st.co.uk/papers/ldap-acls-jan-2009/

 Of all the LDAP servers that I have worked with, I find OpenLDAP's
 ACLs are the easiest for implementing non-trivial policies.

 Andrew
 --

  
_
The New Busy is not the old busy. Search, chat and e-mail from your inbox.
http://www.windowslive.com/campaign/thenewbusy?ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_3

Re: Partial replication

2010-04-01 Thread Andrew Findlay
On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:

 How about to refuse rights to the syncrepl user?
 Actually, you could apply this to the whole tree. Just allow read to DNs 
 you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld 
 for replication, then allow this cn=mirrorA to read only 
 o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere 
 else.

I have used that technique for a fairly complex design with a central
office and many small satellites. It works OK *provided* you never change
the list of entries that can be seen by the replicas. The syncrepl
system has no way to evaluate the effect of an ACL change (and probably
no way to know that one has happened).

In this case it may be better to set up multiple replication agreements
to cover the multiple subtrees required at the slave server. That would
also make it possible to chain or refer queries for the rest of the
DIT back to the master.

Andrew
-- 
---
| From Andrew Findlay, Skills 1st Ltd |
| Consultant in large-scale systems, networks, and directory services |
| http://www.skills-1st.co.uk/                  +44 1628 782565 |
---


Re: Partial replication

2010-04-01 Thread Zdenek Styblik

On 04/01/10 21:43, Andrew Findlay wrote:

On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:


How about to refuse rights to the syncrepl user?
Actually, you could apply this to the whole tree. Just allow read to DNs
you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld
for replication, then allow this cn=mirrorA to read only
o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere
else.


I have used that technique for a fairly complex design with a central
office and many small satellites. It works OK *provided* you never change
the list of entries that can be seen by the replicas. The syncrepl
system has no way to evaluate the effect of an ACL change (and probably
no way to know that one has happened).



Could you please elaborate more on this one? Because I'd say that if you 
later refuse access to some DN, then it should be treated as if the DN 
had been deleted. The same goes for adding. I mean, syncrepl won't see 
the data. And it checks, or at least it should check, for changes at 
regular intervals, right?

I have neither a need for nor experience with this, yet it's somewhat 
interesting.

ACLs of any kind in OpenLDAP are kind of a ... PITA, no offense to 
anybody!!! :) They just need a lot of work to maintain and such (please, 
please, no bashing).


Thanks,
Zdenek


In this case it may be better to set up multiple replication agreements
to cover the multiple subtrees required at the slave server. That would
also make it possible to chain or refer queries for the rest of the
DIT back to the master.

Andrew


Re: Partial replication

2010-04-01 Thread Howard Chu

Andrew Findlay wrote:

On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:


How about to refuse rights to the syncrepl user?
Actually, you could apply this to the whole tree. Just allow read to DNs
you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld
for replication, then allow this cn=mirrorA to read only
o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere
else.


I have used that technique for a fairly complex design with a central
office and many small satellites. It works OK *provided* you never change
the list of entries that can be seen by the replicas. The syncrepl
system has no way to evaluate the effect of an ACL change (and probably
no way to know that one has happened).

In this case it may be better to set up multiple replication agreements
to cover the multiple subtrees required at the slave server. That would
also make it possible to chain or refer queries for the rest of the
DIT back to the master.


Multiple agreements with the same provider won't work, since there will only 
be one contextCSN sent from the master. After the first consumer runs, the 
second one will assume it is up to date.


The correct solution here is to use an extended filter with dnSubtreeMatch on 
each desired branch.
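
For example (a sketch only; the branch DNs follow the examples earlier in 
this thread and the remaining parameters are placeholders), a single 
agreement on the Location A slave could select just the two wanted 
branches:

syncrepl rid=001
  provider=ldap://master.example.com
  type=refreshAndPersist
  searchbase="dc=example,dc=com"
  scope=sub
  filter="(|(entryDN:dnSubtreeMatch:=o=support,dc=example,dc=com)(entryDN:dnSubtreeMatch:=o=location_A,dc=example,dc=com))"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=mirrorA,dc=example,dc=com"
  credentials=secret
  retry="60 +"

The dnSubtreeMatch rule is evaluated against each entry's entryDN, so 
only entries under o=support and o=location_A match the filter and get 
replicated.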


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/


Re: Partial replication

2010-03-31 Thread Zdenek Styblik

On 03/31/10 01:28, Joe Friedeggs wrote:





On 03/30/10 18:36, Joe Friedeggs wrote:


Is it possible to replicate, on a slave, two branches of the DIT (only)? I have 
several instances of LDAP running on servers throughout the world. Connection 
to some of these from our support location is not dependable. I want to do 
something similar to this:

Main LDAP (here, master):

dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C


In Location A (remote slave):

dc=example,dc=com
|
+--o=support
|
+--o=location_A



In Location B (remote slave):

dc=example,dc=com
|
+--o=support
|
+--o=location_B



Location A & B are two different customers; therefore it would not be prudent 
to replicate Location B's users in Location A. But I need the Support group to 
exist in all locations.



Hello,


Can this be done using syncrepl?



I believe this could be done via 'searchbase=dc=domain,dc=tld' option.



I wish it was that easy.  What I need is both

o=support,dc=example,dc=com
AND
o=location_A,dc=example,dc=com

replicated in the Location_A database, but I don't want

o=location_B,dc=example,dc=com

in the database of Location_A

I have not found a way to make that work with syncrepl searchbase.



How about refusing rights to the syncrepl user?
Actually, you could apply this to the whole tree. Just allow read access 
to the DNs you want to replicate. So, let's say you use 
cn=mirrorA,dc=domain,dc=tld for replication; then allow this cn=mirrorA 
to read only o=support,dc=example,dc=com and 
o=location_A,dc=example,dc=com, but nowhere else.
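
A rough sketch of such provider-side ACLs (illustrative only and 
untested, using the DNs from this thread):

access to dn.subtree=o=support,dc=example,dc=com
    by dn=cn=mirrorA,dc=domain,dc=tld read
    by * none break
access to dn.subtree=o=location_A,dc=example,dc=com
    by dn=cn=mirrorA,dc=domain,dc=tld read
    by * none break

Entries outside those branches fall through to the remaining ACLs and, 
ultimately, to slapd's implicit final "by * none", so the replica only 
ever receives the two branches it can read.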


How about that?

Zdenek


Thanks,
Joe


...
Thanks,
Joe


Regards,
Zdenek




_
Hotmail: Trusted email with powerful SPAM protection.
http://clk.atdmt.com/GBL/go/210850553/direct/01/



--
Zdenek Styblik
Net/Linux admin
OS TurnovFree.net
email: sty...@turnovfree.net
jabber: sty...@jabber.turnovfree.net


Partial replication

2010-03-30 Thread Joe Friedeggs

Is it possible to replicate, on a slave, two branches of the DIT (only)?  I 
have several instances of LDAP running on servers throughout the world.  
Connection to some of these from our support location is not dependable.  I 
want to do something similar to this:

Main LDAP (here, master):

dc=example,dc=com
    |
    +--o=support
    |
    +--o=location_A
    |
    +--o=location_B
    |
    +--o=location_C


In Location A (remote slave):

dc=example,dc=com
    |
    +--o=support
    |
    +--o=location_A


In Location B (remote slave):

dc=example,dc=com
    |
    +--o=support
    |
    +--o=location_B

Location A & B are two different customers; therefore it would not be prudent 
to replicate Location B's users in Location A.  But I need the Support group 
to exist in all locations.

Can this be done using syncrepl?  

Another thought is to have LDAP Masters existing in each location, and somehow 
replicate the Support branch to each (mirrormode?).  Should this be the 
approach?

Thanks,
Joe

  

  
_
Hotmail has tools for the New Busy. Search, chat and e-mail from your inbox.
http://www.windowslive.com/campaign/thenewbusy?ocid=PID27925::T:WLMTAGL:ON:WL:en-US:WM_HMP:032010_1

Re: Partial replication

2010-03-30 Thread Zdenek Styblik

On 03/30/10 18:36, Joe Friedeggs wrote:


Is it possible to replicate, on a slave, two branches of the DIT (only)?  I 
have several instances of LDAP running on servers throughout the world.  
Connection to some of these from our support location is not dependable.  I 
want to do something similar to this:

Main LDAP (here, master):

dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_A
 |
 +--o=location_B
 |
 +--o=location_C


In Location A (remote slave):

dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_A


In Location B (remote slave):

dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_B

Location A & B are two different customers; therefore it would not be prudent 
to replicate Location B's users in Location A.  But I need the Support group to 
exist in all locations.



Hello,


Can this be done using syncrepl?



I believe this could be done via 'searchbase=dc=domain,dc=tld' option.


...
Thanks,
Joe


Regards,
Zdenek

--
Zdenek Styblik
Net/Linux admin
OS TurnovFree.net
email: sty...@turnovfree.net
jabber: sty...@jabber.turnovfree.net


RE: Partial replication

2010-03-30 Thread Joe Friedeggs



 On 03/30/10 18:36, Joe Friedeggs wrote:

 Is it possible to replicate, on a slave, two branches of the DIT (only)? I 
 have several instances of LDAP running on servers throughout the world. 
 Connection to some of these from our support location is not dependable. I 
 want to do something similar to this:

 Main LDAP (here, master):

 dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_A
 |
 +--o=location_B
 |
 +--o=location_C


 In Location A (remote slave):

 dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_A



 In Location B (remote slave):

 dc=example,dc=com
 |
 +--o=support
 |
 +--o=location_B



 Location A & B are two different customers; therefore it would not be prudent 
 to replicate Location B's users in Location A. But I need the Support group 
 to exist in all locations.


 Hello,

 Can this be done using syncrepl?


 I believe this could be done via 'searchbase=dc=domain,dc=tld' option.


I wish it was that easy.  What I need is both

   o=support,dc=example,dc=com
   AND
   o=location_A,dc=example,dc=com

replicated in the Location_A database, but I don't want

   o=location_B,dc=example,dc=com

in the database of Location_A

I have not found a way to make that work with syncrepl searchbase.

Thanks,
Joe

 ...
 Thanks,
 Joe

 Regards,
 Zdenek


  
_
Hotmail: Trusted email with powerful SPAM protection.
http://clk.atdmt.com/GBL/go/210850553/direct/01/