Is it possible to replicate, on a slave, two branches of the DIT (only)? I have several instances of LDAP running on servers throughout the world. Connection to some of these from our support location is not dependable. I want to do something similar to this:
Main LDAP (here, master):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C
In Location A (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
In Location B (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_B
Location A & B are two different customers, therefore it would not be prudent to replicate Location B's users in Location A. But I need the Support group to exist in all locations.
Can this be done using syncrepl?
Another thought is to have an LDAP master in each location, and somehow replicate the Support branch to each (mirrormode?). Should this be the approach?
Thanks, Joe
On 03/30/10 18:36, Joe Friedeggs wrote:
Is it possible to replicate, on a slave, two branches of the DIT (only)? I have several instances of LDAP running on servers throughout the world. Connection to some of these from our support location is not dependable. I want to do something similar to this:
Main LDAP (here, master):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C
In Location A (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
In Location B (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_B
Location A & B are two different customers, therefore it would not be prudent to replicate Location B's users in Location A. But I need the Support group to exist in all locations.
Hello,
Can this be done using syncrepl?
I believe this could be done via the 'searchbase="dc=domain,dc=tld"' option.
... Thanks, Joe
Regards, Zdenek
On 03/30/10 18:36, Joe Friedeggs wrote:
Is it possible to replicate, on a slave, two branches of the DIT (only)? I have several instances of LDAP running on servers throughout the world. Connection to some of these from our support location is not dependable. I want to do something similar to this:
Main LDAP (here, master):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C
In Location A (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
In Location B (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_B
Location A & B are two different customers, therefore it would not be prudent to replicate Location B's users in Location A. But I need the Support group to exist in all locations.
Hello,
Can this be done using syncrepl?
I believe this could be done via the 'searchbase="dc=domain,dc=tld"' option.
I wish it were that easy. What I need is both
o=support,dc=example,dc=com AND o=location_A,dc=example,dc=com
replicated in the Location_A database, but I don't want
o=location_B,dc=example,dc=com
in the database of Location_A.
I have not found a way to make that work with syncrepl searchbase.
Thanks, Joe
... Thanks, Joe
Regards, Zdenek
On 03/31/10 01:28, Joe Friedeggs wrote:
On 03/30/10 18:36, Joe Friedeggs wrote:
Is it possible to replicate, on a slave, two branches of the DIT (only)? I have several instances of LDAP running on servers throughout the world. Connection to some of these from our support location is not dependable. I want to do something similar to this:
Main LDAP (here, master):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C
In Location A (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_A
In Location B (remote slave):
dc=example,dc=com
|
+--o=support
|
+--o=location_B
Location A & B are two different customers, therefore it would not be prudent to replicate Location B's users in Location A. But I need the Support group to exist in all locations.
Hello,
Can this be done using syncrepl?
I believe this could be done via the 'searchbase="dc=domain,dc=tld"' option.
I wish it was that easy. What I need is both
o=support,dc=example,dc=com AND o=location_A,dc=example,dc=com
replicated in the Location_A database, but I don't want
o=location_B,dc=example,dc=com
in the database of Location_A
I have not found a way to make that work with syncrepl searchbase.
How about refusing read rights for the syncrepl user? Actually, you could apply this to the whole tree: just allow read on the DNs you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication; then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
How about that?
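A minimal provider-side sketch of that idea in slapd.conf, using the DNs from the thread (the bind DN, and whether later rules exist for other identities, are assumptions):

```
# Provider-side ACL sketch. cn=mirrorA may read only the two branches
# destined for location A; "break" lets other identities fall through
# to whatever access rules follow these.
access to dn.subtree="o=support,dc=example,dc=com"
        by dn.exact="cn=mirrorA,dc=domain,dc=tld" read
        by * break

access to dn.subtree="o=location_A,dc=example,dc=com"
        by dn.exact="cn=mirrorA,dc=domain,dc=tld" read
        by * break

# Everything else is invisible to the replication identity.
access to *
        by dn.exact="cn=mirrorA,dc=domain,dc=tld" none
        by * break
```

A second identity (cn=mirrorB) would get the analogous rules for o=support and o=location_B.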
Zdenek
Thanks, Joe
... Thanks, Joe
Regards, Zdenek
On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:
How about refusing read rights for the syncrepl user? Actually, you could apply this to the whole tree: just allow read on the DNs you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication; then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
I have used that technique for a fairly complex design with a central office and many small satellites. It works OK *provided* you never change the list of entries that can be seen by the replicas. The syncrepl system has no way to evaluate the effect of an ACL change (and probably no way to know that one has happened).
In this case it may be better to set up multiple replication agreements to cover the multiple subtrees required at the slave server. That would also make it possible to chain or refer queries for the rest of the DIT back to the master.
Andrew
On 04/01/10 21:43, Andrew Findlay wrote:
On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:
How about refusing read rights for the syncrepl user? Actually, you could apply this to the whole tree: just allow read on the DNs you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication; then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
I have used that technique for a fairly complex design with a central office and many small satellites. It works OK *provided* you never change the list of entries that can be seen by the replicas. The syncrepl system has no way to evaluate the effect of an ACL change (and probably no way to know that one has happened).
Could you please elaborate more on this one? I'd say that if you later refuse access to some DN, it should be as though the DN had been deleted; the same goes for adding. I mean, syncrepl won't see the data, and it checks (well, it should check) for changes at regular intervals, right? I have no need for nor experience with this, yet it's somewhat interesting.
ACLs of any kind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :) They just need a lot of work to maintain and such (please please, no bashing).
Thanks, Zdenek
In this case it may be better to set up multiple replication agreements to cover the multiple subtrees required at the slave server. That would also make it possible to chain or refer queries for the rest of the DIT back to the master.
Andrew
On Thu, Apr 01, 2010 at 09:53:07PM +0200, Zdenek Styblik wrote:
you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication, then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
I have used that technique for a fairly complex design with a central office and many small satellites. It works OK *provided* you never change the list of entries that can be seen by the replicas. The syncrepl system has no way to evaluate the effect of an ACL change (and probably no way to know that one has happened).
Could you please elaborate more on this one?
My design requirements were similar to Joe's: I had a large central server holding the master data for a lot of customers. Each customer needed a local replica of their own data plus some subset of the service-provider data. In my case the subset was not even complete subtrees: the customers were allowed to see certain attributes of certain entries only. I had to protect against the possibility that someone might modify the config on a customer server to obtain data that they should not have.
As there was already a comprehensive default-deny access-control policy in place, I just factored in the replica servers as principals with the right to see all data that should be replicated to that site and nothing else. That meant that every replica server could have an identical syncrepl clause which just copies everything it can see from the entire DIT.
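The "identical syncrepl clause" Andrew describes could look something like this on each replica (provider URL, bind identity and credentials are placeholders; the provider's ACLs, not this clause, determine which entries actually arrive):

```
# Consumer-side syncrepl clause (slapd.conf sketch). Identical on every
# replica except for binddn/credentials: each replica binds as its own
# identity and simply copies everything that identity can see.
syncrepl rid=001
        provider=ldap://master.example.com
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        scope=sub
        bindmethod=simple
        binddn="cn=mirrorA,dc=domain,dc=tld"
        credentials=secret
        retry="60 +"
```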
The downside is that if any access permissions change then the replicas may not reflect the correct new subset of data.
I'd say that if you later refuse access to some DN, it should be as though the DN had been deleted; the same goes for adding. I mean, syncrepl won't see the data, and it checks (well, it should check) for changes at regular intervals, right?
The problem is that syncrepl does not check every entry exhaustively. That would be very inefficient (though I would like a way to force it periodically). The master server maintains something like a timestamp on the whole DIT, and when the replica server connects they just have to compare timestamps and transfer things that have changed in the interval between the two. (This is a gross simplification of the actual protocol, but close enough for the discussion).
Now imagine that I change an ACL which affects the visibility of some entries. The entries themselves have not changed, so the timestamps do not change and the replication process will not know that the replica data should change.
Worse still, I might change the membership of a group that is referenced in an ACL. The replication process would transfer the group but would not know that some other entries have changed visibility.
I have no need for nor experience with this, yet it's somewhat interesting.
It is a powerful technique, but the designer *and operators* of such a system must be aware of the pitfalls.
ACLs of any kind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :) They just need a lot of work to maintain and such (please please, no bashing).
ACLs of any kind in any system (LDAP, file system, RDBMS etc) can be hard to get right and harder to modify correctly at a later date. It all depends on the policy that you are trying to implement. You should think of ACLs as programs and expect to need programmer-level skill to work on them. You may find this paper helpful:
http://www.skills-1st.co.uk/papers/ldap-acls-jan-2009/
Of all the LDAP servers that I have worked with, I find OpenLDAP's ACLs are the easiest for implementing non-trivial policies.
Andrew
On 04/06/10 14:55, Andrew Findlay wrote:
On Thu, Apr 01, 2010 at 09:53:07PM +0200, Zdenek Styblik wrote:
you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication, then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
I have used that technique for a fairly complex design with a central office and many small satellites. It works OK *provided* you never change the list of entries that can be seen by the replicas. The syncrepl system has no way to evaluate the effect of an ACL change (and probably no way to know that one has happened).
Could you please elaborate more on this one?
My design requirements were similar to Joe's: I had a large central server holding the master data for a lot of customers. Each customer needed a local replica of their own data plus some subset of the service-provider data. In my case the subset was not even complete subtrees: the customers were allowed to see certain attributes of certain entries only. I had to protect against the possibility that someone might modify the config on a customer server to obtain data that they should not have.
As there was already a comprehensive default-deny access-control policy in place, I just factored in the replica servers as principals with the right to see all data that should be replicated to that site and nothing else. That meant that every replica server could have an identical syncrepl clause which just copies everything it can see from the entire DIT.
If I read your reply right - I wasn't suggesting otherwise: the ACLs would go on the provider side, not the consumer.
The downside is that if any access permissions change then the replicas may not reflect the correct new subset of data.
I'd say that if you later refuse access to some DN, it should be as though the DN had been deleted; the same goes for adding. I mean, syncrepl won't see the data, and it checks (well, it should check) for changes at regular intervals, right?
The problem is that syncrepl does not check every entry exhaustively. That would be very inefficient (though I would like a way to force it periodically). The master server maintains something like a timestamp on the whole DIT, and when the replica server connects they just have to compare timestamps and transfer things that have changed in the interval between the two. (This is a gross simplification of the actual protocol, but close enough for the discussion).
Now imagine that I change an ACL which affects the visibility of some entries. The entries themselves have not changed, so the timestamps do not change and the replication process will not know that the replica data should change.
Worse still, I might change the membership of a group that is referenced in an ACL. The replication process would transfer the group but would not know that some other entries have changed visibility.
To make it short - I'll take your word for it :) In other words, it's probably done as "best" as it could be for the time being. I've written down my assumptions and... that's probably all. [some blabbering deleted/replaced here]
I have no need for nor experience with this, yet it's somewhat interesting.
It is a powerful technique, but the designer *and operators* of such a system must be aware of the pitfalls.
ACLs of any kind in OpenLDAP are kinda ... PITA, no offense to anybody!!! :) They just need a lot of work to maintain and such (please please, no bashing).
ACLs of any kind in any system (LDAP, file system, RDBMS etc) can be hard to get right and harder to modify correctly at a later date. It all depends on the policy that you are trying to implement. You should think of ACLs as programs and expect to need programmer-level skill to work on them. You may find this paper helpful:
http://www.skills-1st.co.uk/papers/ldap-acls-jan-2009/
Of all the LDAP servers that I have worked with, I find OpenLDAP's ACLs are the easiest for implementing non-trivial policies.
Well, right now all [2 :)] replicas are 1:1 and they should maintain the very same ACLs as the provider. I know this can be managed e.g. via a batch script (or dynamic ACLs), still - that's what I meant. But I agree, it's not much better anywhere else.
I guess the real fun will begin when/if I decide to replicate only the data that is really needed and of some use to [certain] consumers.
Andrew
Zdenek
Andrew Findlay wrote:
On Wed, Mar 31, 2010 at 08:43:19AM +0200, Zdenek Styblik wrote:
How about refusing read rights for the syncrepl user? Actually, you could apply this to the whole tree: just allow read on the DNs you want to replicate. So, let's say you use cn=mirrorA,dc=domain,dc=tld for replication; then allow this cn=mirrorA to read only o=support,dc=example,dc=com and o=location_A,dc=example,dc=com, but nowhere else.
I have used that technique for a fairly complex design with a central office and many small satellites. It works OK *provided* you never change the list of entries that can be seen by the replicas. The syncrepl system has no way to evaluate the effect of an ACL change (and probably no way to know that one has happened).
In this case it may be better to set up multiple replication agreements to cover the multiple subtrees required at the slave server. That would also make it possible to chain or refer queries for the rest of the DIT back to the master.
Multiple agreements with the same provider won't work, since there will only be one contextCSN sent from the master. After the first consumer runs, the second one will assume it is up to date.
The correct solution here is to use an extended filter with dnSubtreeMatch on each desired branch.
On Thu, Apr 01, 2010 at 03:53:32PM -0700, Howard Chu wrote:
Multiple agreements with the same provider won't work, since there will only be one contextCSN sent from the master. After the first consumer runs, the second one will assume it is up to date.
Good point - I had forgotten that.
The correct solution here is to use an extended filter with dnSubtreeMatch on each desired branch.
So in this case with the tree:
dc=example,dc=com
|
+--o=support
|
+--o=location_A
|
+--o=location_B
|
+--o=location_C
the syncrepl clause on the location A slave would contain something like this:
searchbase="dc=example,dc=com"
filter="(|(entrydn:dnSubtreeMatch:=o=support,dc=example,dc=com)(entrydn:dnSubtreeMatch:=o=location_A,dc=example,dc=com))"
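Expanded into a complete consumer stanza, that filter might sit in slapd.conf like this (provider URL, bind identity, credentials and retry settings are illustrative placeholders):

```
# Location A consumer: replicate only the support branch and the local
# site branch, selected by the dnSubtreeMatch extended filter.
syncrepl rid=001
        provider=ldap://master.example.com
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        scope=sub
        filter="(|(entrydn:dnSubtreeMatch:=o=support,dc=example,dc=com)(entrydn:dnSubtreeMatch:=o=location_A,dc=example,dc=com))"
        bindmethod=simple
        binddn="cn=mirrorA,dc=example,dc=com"
        credentials=secret
        retry="60 +"
```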
Unfortunately, when I look back at the original question I see that the slave server is physically located at location A and the security policy does not permit people at that location to see any data belonging to the other locations. Limiting the replication by this method leaves open the possibility that someone at location A might change the config to allow them to see data from location B, so the master server is still going to need ACLs to prevent that.
Andrew
Going back to the original question...
On Tue, Mar 30, 2010 at 11:36:09AM -0500, Joe Friedeggs wrote:
Location A & B are two different customers, therefore it would not be prudent to replicate Location B's users in Location A. But I need the Support group to exist in all locations.
That is a critical part of the requirement, so you cannot depend on the config of the customer-site machines to protect other customers' data.
Can this be done using syncrepl?
Another thought is to have LDAP Masters existing in each location, and somehow replicate the Support branch to each (mirrormode?). Should this be the approach?
That could be a very good approach, especially if the changes to the data are mostly done from the customer sites (i.e. site A data is mostly updated by people located at site A).
You would probably want to have a separate database for each suffix (support, site A, site B etc) and then use the relay backend to glue it all together so that searches could cover both the site data and the support data from one suffix.
It would not be necessary to use mirrormode, and as you said that connectivity is flaky I would certainly advise against it. One-way replication should be enough, and will certainly be safe.
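One way to sketch the per-branch layout at a customer site (backend choice and directory paths are illustrative; Andrew mentions the relay backend, and the related "subordinate" glue shown here is another common way to make several databases answer under one suffix):

```
# Branch databases, declared before their superior and glued to it.
database mdb
suffix "o=support,dc=example,dc=com"
directory "/var/lib/ldap/support"
subordinate

database mdb
suffix "o=location_A,dc=example,dc=com"
directory "/var/lib/ldap/location_A"
subordinate

# Superior database: a search based at dc=example,dc=com now spans
# both the support branch and the local site branch.
database mdb
suffix "dc=example,dc=com"
directory "/var/lib/ldap/root"
```

Each subordinate database could then carry its own syncrepl clause pulling from the corresponding branch on the master.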
Andrew
openldap-technical@openldap.org