I have a question about mirror mode, and how it's different from "multimaster".
In servers like Sun's or Red Hat's directory server, a simplified description of what they term multimaster is that more than one server can accept writes simultaneously, and each will then propagate its changes to the other servers. If there is a conflicting change, some form of conflict resolution logic is applied.
If I understand Mirror Mode correctly, the major difference is that it does not contain code for conflict resolution. I.e. both masters are live, will accept changes, and will try to replicate them to the other master (the hot standby master does not reject changes while the other master is the "live" master). If a conflict occurs, bad things happen. It depends on some third resource (a load balancer, slapd in proxy mode, etc.) ensuring that all writes go to only one master server at a time, and that all clients performing writes send them through that frontend. At the same time, the individual masters do nothing to enforce that writes come through that load balancer/proxy/etc., so it is possible (though not desirable) to write to both masters concurrently.
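(For reference, my mental model of the setup is something like the following slapd.conf fragment on master 1 - hostnames and credentials are made up, and the other master would carry the mirror-image config with serverID 2:

  serverID 1
  overlay syncprov
  syncrepl rid=001
    provider=ldap://master2.example.com
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    searchbase="dc=example,dc=com"
    type=refreshAndPersist
    retry="60 +"
  mirrormode on

If that's not roughly right, it probably answers question 1 below.)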
So, a couple of questions:

1. Is this understanding correct? Have I missed any key points (or completely gotten it wrong)?
2. If so, and I write conflicting changes to both master servers directly, what happens when they try to sync? What does each server do with the conflicting change - does it prevent any future changes from being replicated, or can it "skip" the conflicting change? How do I detect a conflict so I can fix it manually, and what does the fix actually involve (can I just resolve the data issue, or do I also have to remove something from a replication queue, and how)?
3. I can envision an HA configuration having 2 sites. At each site I have a proxy and a master, with one site having the primary master and the other a hot standby. Both proxies direct traffic to the primary master if it's available, or redirect to the hot standby if not. The connection between sites goes down, so one proxy keeps directing writes to the primary master, but the other can no longer reach the primary, thinks it is down, and sends writes to the hot standby. Assuming this happens, and NO conflicting modifications are made, will everything sync up properly when the connection between sites comes back up, so that both masters have the same content? (If a conflicting change occurs, that case is covered by the previous point.)
Thanks, - Jeff
Perhaps this is more of a development question than a usage question, but I thought this would be a decent forum to see if the idea gets any feedback (positive or negative).
I don't know how much others manipulate their olcAccess, or even if most people just maintain them in slapd.conf and restart slapd when necessary. I'm treating olcAccess entries more or less like firewall rules, adding and removing on the fly, thanks to the olcAccess attributes in backend DB definitions.
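E.g., adding a rule on the fly is just an ldapmodify against the config database - something like this (the ACL itself is only an illustration):

  ldapmodify -x -D "cn=admin,cn=config" -W <<EOF
  dn: olcDatabase={1}bdb,cn=config
  changetype: modify
  add: olcAccess
  olcAccess: {0}to attrs=userPassword by self write by * auth
  EOF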
To me, it would seem a lot more convenient if the olcAccess in the backend DB definition pointed to an objectClass groupOfOlcAccess (for lack of a better term). Each entry would have a single <what> attribute, and multiple "<who> <access>" attributes. For example (in incorrect pseudo-LDIF format):
dn: olcDatabase={1}bdb
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: {1}bdb
[...]
olcAccessList: ou=bdb1OlcAccessList
----
dn: ou=bdb1OlcAccessList
ou: bdb1OlcAccessList
objectClass: groupOfOlcAccess
----
dn: cn=olc1,ou=bdb1OlcAccessList
cn: olc1
objectClass: olcAccess
what: ou=people,dc=domain
whoaccess: {1}group="cn=readgroup,dc=domain" read
whoaccess: {2}cn="ldapadmin,dc=domain" write
whoaccess: {3}* none
----
Obviously, how useful this kind of change would be depends on how often people update their ACLs. In my case, I can see it happening often enough that I'd prefer to add/delete cn's or "whoaccess" attributes, rather than editing olcAccess attributes in the backend config. I'd want the backend config to be fairly static, as opposed to the ACLs in my environment which would be fairly dynamic.
Feel free to tell me it's a terrible idea. I may not know enough background information about how slapd works and how this type of schema change would affect performance.
Romain Komorn
Romain Komorn wrote:
Perhaps this is more of a development question than a usage question, but I thought this would be a decent forum to see if the idea gets any feedback (positive or negative).
I don't know how much others manipulate their olcAccess, or even if most people just maintain them in slapd.conf and restart slapd when necessary. I'm treating olcAccess entries more or less like firewall rules, adding and removing on the fly, thanks to the olcAccess attributes in backend DB definitions.
To me, it would seem a lot more convenient if the olcAccess in the backend DB definition pointed to an objectClass groupOfOlcAccess (for lack of a better term). Each entry would have a single <what> attribute, and multiple "<who> <access>" attributes. For example (in incorrect pseudo-LDIF format):
dn: olcDatabase={1}bdb
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: {1}bdb
[...]
olcAccessList: ou=bdb1OlcAccessList
dn: ou=bdb1OlcAccessList
ou: bdb1OlcAccessList
objectClass: groupOfOlcAccess
dn: cn=olc1,ou=bdb1OlcAccessList
cn: olc1
objectClass: olcAccess
what: ou=people,dc=domain
whoaccess: {1}group="cn=readgroup,dc=domain" read
whoaccess: {2}cn="ldapadmin,dc=domain" write
whoaccess: {3}* none
Obviously, how useful this kind of change would be depends on how often people update their ACLs. In my case, I can see it happening often enough that I'd prefer to add/delete cn's or "whoaccess" attributes, rather than editing olcAccess attributes in the backend config. I'd want the backend config to be fairly static, as opposed to the ACLs in my environment which would be fairly dynamic.
Feel free to tell me it's a terrible idea. I may not know enough background information about how slapd works and how this type of schema change would affect performance.
Romain Komorn
I would start a new thread (not hijacking an existing one) and resend to -devel.
Thanks.
Gavin Henry wrote:
Romain Komorn wrote:
Perhaps this is more of a development question than a usage question, but I thought this would be a decent forum to see if the idea gets any feedback (positive or negative).
I don't know how much others manipulate their olcAccess, or even if most people just maintain them in slapd.conf and restart slapd when necessary. I'm treating olcAccess entries more or less like firewall rules, adding and removing on the fly, thanks to the olcAccess attributes in backend DB definitions.
To me, it would seem a lot more convenient if the olcAccess in the backend DB definition pointed to an objectClass groupOfOlcAccess (for lack of a better term).
I don't see anything in this proposal that increases convenience. Perhaps you first need to define what's "inconvenient" in the current approach.
Obviously, how useful this kind of change would be depends on how often people update their ACLs. In my case, I can see it happening often enough that I'd prefer to add/delete cn's or "whoaccess" attributes, rather than editing olcAccess attributes in the backend config. I'd want the backend config to be fairly static, as opposed to the ACLs in my environment which would be fairly dynamic.
Feel free to tell me it's a terrible idea. I may not know enough background information about how slapd works and how this type of schema change would affect performance.
OK, it's a terrible idea. Performance isn't the prime concern with access controls, security is. The point is that access controls apply to specific database instances. Moving them off to completely independent entries, introducing a level of indirection, hides/obscures their relevance.
That's the downside. I don't really see your suggested upside either; an attribute is edited either way. What difference does it make, in terms of static vs. dynamic, whether the attribute lives in one place or another?
Romain Komorn
I would start a new thread (not hijacking an existing one) and resend to -devel.
No point in moving it.
Perhaps this is more of a development question than a usage question, but I thought this would be a decent forum to see if the idea gets any feedback (positive or negative).
I don't know how much others manipulate their olcAccess, or even if most people just maintain them in slapd.conf and restart slapd when necessary. I'm treating olcAccess entries more or less like firewall rules, adding and removing on the fly, thanks to the olcAccess attributes in backend DB definitions.
To me, it would seem a lot more convenient if the olcAccess in the backend DB definition pointed to an objectClass groupOfOlcAccess (for lack of a better term).
I don't see anything in this proposal that increases convenience. Perhaps you first need to define what's "inconvenient" in the current approach.
The current approach isn't inconvenient. It's sensible enough; I didn't mean to say it needs to be fixed, but I do think it could be improved. I think there's a difference.
Obviously, how useful this kind of change would be depends on how often people update their ACLs. In my case, I can see it happening often enough that I'd prefer to add/delete cn's or "whoaccess" attributes, rather than editing olcAccess attributes in the backend config. I'd want the backend config to be fairly static, as opposed to the ACLs in my environment which would be fairly dynamic.
Feel free to tell me it's a terrible idea. I may not know enough background information about how slapd works and how this type of schema change would affect performance.
OK, it's a terrible idea. Performance isn't the prime concern with access controls, security is. The point is that access controls apply to specific database instances. Moving them off to completely independent entries, introducing a level of indirection, hides/obscures their relevance.
I disagree that it obscures the access controls' relevance. It simply creates a "pointer" to the list of access controls that apply to this database instance. I don't see much more obscurity in looking up the access list that's pointed to than in going through the olcAccess attributes of the database entry.
When you look at network switch ACLs, it's not uncommon to define ACLs separately and then bind/apply them to network ports. iptables gives a similar effect: you can send traffic through different chains based on criteria like source IP, interface, port, etc.
One thing I could see as an added benefit would be the ability to apply the same ACL set to different database instances, but that may not improve security.
Obviously, OpenLDAP isn't a network switch, but that kind of ACL definition and binding makes sense to me, so I'm likely biased towards it.
That's the downside. I don't really see your suggested upside either; an attribute is edited either way. What difference does it make, in terms of static vs. dynamic, whether the attribute lives in one place or another?
I'll be a little 'dramatic' here. I tend to look at any change as a potential "threat" to my network's stability and security. As such, with any given change, I try to impact as little as I can get away with.
When I say that the database entry configuration is 'static', I mean that virtually all the attributes in the entry will remain the same indefinitely (beyond the original setup/definition).
I would consider changing attributes like olcRootDN/olcRootPW, or maybe olcIndex, but I would define those as pretty "major" changes that I don't anticipate making often, if at all.
Changes to the olcAccess attributes, on the other hand, I consider 'dynamic' because I plan to modify them fairly often. So to me, there is a very distinct difference between the attributes I'd call 'static' and those I'd call 'dynamic'. Going a little further still, within olcAccess entries I'm likely to change the 'who-access' more often than the 'what'.
When I look at a 'change' (i.e. granting/revoking access to data), I tend to look at what collateral damage might occur. If I modify the olcAccess attribute of a database entry, I look at it as modifying the database entry itself. On the other hand, if I modify an olcAccess entry in a groupOfOlcAccess that is separate from the database entry, the worst that can happen is that the groupOfOlcAccess is affected. Going a level deeper, if I modify a "who-access" attribute in an olcAccess entry, the worst case is that only that olcAccess entry is affected.
So in an order of preference from most to least, I'd rather:
1- modify one 'who-access' attribute of an olcAccess entry
2- modify one 'what' attribute of an olcAccess entry
3- add/remove an olcAccess entry in a groupOfOlcAccess
4- modify an attribute on the database entry to point to a specific groupOfOlcAccess
5- modify an olcAccess attribute in a database entry.
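For instance, case 1 under my hypothetical schema would be a single targeted modify (pseudo-LDIF again, every name invented):

  dn: cn=olc1,ou=bdb1OlcAccessList
  changetype: modify
  add: whoaccess
  whoaccess: {4}group="cn=auditors,dc=domain" read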
Naturally, if I follow the "Don't do anything stupid" rule, it doesn't really matter whether the schema remains the way it is now or takes on a form closer to what I suggested.
I would start a new thread (not hijacking an existing one) and resend to -devel.
I'm guilty there. I'd just sent my email when I realized that replying and changing the subject doesn't clear out the threading headers. My apologies.
My apologies also for the length of the email. Hopefully some/most of it was comprehensible enough. I know that a lot of it hinges strictly on opinion about how OpenLDAP looks from a user/admin point of view, so mine is probably all too subjective and personal.
Romain Komorn
Clowser, Jeff (Contractor) wrote:
I have a question about mirror mode, and how it's different from "multimaster".
In servers like Sun's or Red Hat's directory server, a simplified description of what they term multimaster is that more than one server can accept writes simultaneously, and each will then propagate its changes to the other servers. If there is a conflicting change, some form of conflict resolution logic is applied.
If I understand Mirror Mode correctly, the major difference is that it does not contain code for conflict resolution. I.e. both masters are live, will accept changes, and will try to replicate them to the other master (the hot standby master does not reject changes while the other master is the "live" master). If a conflict occurs, bad things happen. It depends on some third resource (a load balancer, slapd in proxy mode, etc.) ensuring that all writes go to only one master server at a time, and that all clients performing writes send them through that frontend. At the same time, the individual masters do nothing to enforce that writes come through that load balancer/proxy/etc., so it is possible (though not desirable) to write to both masters concurrently.
So, a couple of questions:

1. Is this understanding correct? Have I missed any key points (or completely gotten it wrong)?
That's pretty much it.
2. If so, and I write conflicting changes to both master servers directly, what happens when they try to sync? What does each server do with the conflicting change - does it prevent any future changes from being replicated, or can it "skip" the conflicting change? How do I detect a conflict so I can fix it manually, and what does the fix actually involve (can I just resolve the data issue, or do I also have to remove something from a replication queue, and how)?
When MirrorMode was first implemented, the resulting behavior was totally undefined. We told you "Don't Do This" - if you did it anyway, you were on your own.
In the meantime, between when MirrorMode was first implemented and when 2.4 was publicly released, we made further changes to support full multi-master. This includes entry-level conflict resolution in the current version. As such, MirrorMode will be able to tolerate this misuse, but still You Should Not Do This.
3. I can envision an HA configuration having 2 sites. At each site I have a proxy and a master, with one site having the primary master and the other a hot standby. Both proxies direct traffic to the primary master if it's available, or redirect to the hot standby if not. The connection between sites goes down, so one proxy keeps directing writes to the primary master, but the other can no longer reach the primary, thinks it is down, and sends writes to the hot standby. Assuming this happens, and NO conflicting modifications are made, will everything sync up properly when the connection between sites comes back up, so that both masters have the same content? (If a conflicting change occurs, that case is covered by the previous point.)
When a network partition occurs, there are a number of cases where synchronization may still fail. I.e., we don't yet support attribute-level conflict resolution, so if multiple changes are made to the same entry, even if they are non-conflicting from a logical standpoint, they may not apply correctly in the current version.
Support for attribute-level conflict resolution using delta-syncrepl will appear in a later release.
Howard Chu wrote:
When a network partition occurs, there are a number of cases where synchronization may still fail. I.e., we don't yet support attribute-level conflict resolution, so if multiple changes are made to the same entry, even if they are non-conflicting from a logical standpoint, they may not apply correctly in the current version.
I should restate this - syncrepl guarantees eventual convergence; when network connectivity is restored the two servers will eventually synchronize. But because we're doing entry-level resolution with last-writer-wins semantics, the end result may not be what you expected.
Support for attribute-level conflict resolution using delta-syncrepl will appear in a later release.
Howard Chu wrote:
When a network partition occurs, there are a number of cases where synchronization may still fail. I.e., we don't yet support attribute-level conflict resolution, so if multiple changes are made to the same entry, even if they are non-conflicting from a logical standpoint, they may not apply correctly in the current version.

I should restate this - syncrepl guarantees eventual convergence; when network connectivity is restored the two servers will eventually synchronize. But because we're doing entry-level resolution with last-writer-wins semantics, the end result may not be what you expected.
I take this as: if I update the cn on one server, then update the password on the other, the end result may be that the cn change gets undone/overwritten when the password gets synced, or something similar - as you said, not the results one would expect in a "true" multi-master setup.
In any case, it sounds like the servers do at least eventually come to a common state, where everything again matches. Not ideal, but I can deal with that if that's the worst that happens when the "wrong thing" is done like this. What I can't deal with is if the servers don't eventually match, or if they break replication or become corrupt such that I have to do major work to get things "back to normal".
I imagine that if we are proxying this through an OpenLDAP server in proxy mode, the "real" master server will see writes coming from the IP address of that proxy, and we could tighten up write ACLs to only allow writes from the proxy IPs(?) That wouldn't really solve the problem of the 2 datacenters getting disconnected, but it would at least mostly prevent anything from writing directly to the individual backend masters.
I suppose the *real* solution is to use the multi-mastering capability in 2.4 to keep it in sync, but use it as if it's mirror mode (i.e. all writes to a single master, with the second as a hot standby), with the MM conflict resolution kicking in if needed because someone wrote to the hot standby when they shouldn't have.
- Jeff
Clowser, Jeff (Contractor) wrote:
Howard Chu wrote:
When a network partition occurs, there are a number of cases where synchronization may still fail. I.e., we don't yet support attribute-level conflict resolution, so if multiple changes are made to the same entry, even if they are non-conflicting from a logical standpoint, they may not apply correctly in the current version.

I should restate this - syncrepl guarantees eventual convergence; when network connectivity is restored the two servers will eventually synchronize. But because we're doing entry-level resolution with last-writer-wins semantics, the end result may not be what you expected.
I take this as: if I update the cn on one server, then update the password on the other, the end result may be that the cn change gets undone/overwritten when the password gets synced, or something similar - as you said, not the results one would expect in a "true" multi-master setup.
Right.
In any case, it sounds like the servers do at least eventually come to a common state, where everything again matches. Not ideal, but I can deal with that if that's the worst that happens when the "wrong thing" is done like this. What I can't deal with is if the servers don't eventually match, or if they break replication or become corrupt such that I have to do major work to get things "back to normal".
Right, replication won't break, no corruption.
I imagine that if we are proxying this through an OpenLDAP server in proxy mode, the "real" master server will see writes coming from the IP address of that proxy, and we could tighten up write ACLs to only allow writes from the proxy IPs(?) That wouldn't really solve the problem of the 2 datacenters getting disconnected, but it would at least mostly prevent anything from writing directly to the individual backend masters.
Sure, that would work.
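E.g., something along these lines on each master (the address is invented, and a real config would also need to let the replication identity and admins through):

  access to *
      by peername.ip=192.0.2.10 write
      by * read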
I suppose the *real* solution is to use the multi-mastering capability in 2.4 to keep it in sync, but use it as if it's mirror mode (i.e. all writes to a single master, with the second as a hot standby), with the MM conflict resolution kicking in if needed because someone wrote to the hot standby when they shouldn't have.
That's our preferred/recommended usage. As I read somewhere else recently, "the best solution is not to have problems." Conflict resolution is messy; it's best to avoid it...
I suppose the *real* solution is to use the multi-mastering capability in 2.4 to keep it in sync, but use it as if it's mirror mode (i.e. all writes to a single master, with the second as a hot standby), with the MM conflict resolution kicking in if needed because someone wrote to the hot standby when they shouldn't have.

That's our preferred/recommended usage. As I read somewhere else recently, "the best solution is not to have problems." Conflict resolution is messy; it's best to avoid it...
Agreed - having a single "active" master and a "hot"/active but unused standby master solves most HA issues without introducing the conflicts a full active-active multimaster setup creates. But if that master is there and accepts writes, it's inevitable that someone will some day write to it out of ignorance, and *may* write a conflicting change, so I see conflict resolution as a last ditch fallback for this situation (and nothing more) to prevent corruption or breakage of replication. (Plus, I like to close up or at least be fully aware of all the edge cases that exist, so I know how best to avoid them :) ).
You said at one point that OpenLDAP (2.4.6?) currently does entry-level conflict resolution, and does not yet do attribute-level conflict resolution - i.e. if the entry was updated on 2 separate servers with different updates, conflicting or not, the most recently changed version of the *entry* wins. So if I change the cn on one master, and after that (but before replication has occurred) I change the userPassword on another master, then when the sync-up occurs I won't see the entry with both the cn and password changed on all servers; I'll see the entry as it is on the most recently changed master (i.e. in my example, I'll see the changed password, but the cn will revert). Is there a roadmap/timeline for attribute-level conflict resolution?
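Schematically (entry and values invented):

  # at time T1, applied to master A:
  dn: uid=jdoe,dc=example,dc=com
  changetype: modify
  replace: cn
  cn: John Q. Doe

  # at time T2, before A's change replicates, applied to master B:
  dn: uid=jdoe,dc=example,dc=com
  changetype: modify
  replace: userPassword
  userPassword: newsecret

  # after the sync-up, entry-level last-writer-wins means every server
  # ends up with B's copy of the whole entry: the new userPassword,
  # but the old cn.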
Also, I was looking at the admin guide and syncprov man pages on how to set up replication. N-Way multi-mastering details are kinda sparse :). Is there any documentation elsewhere on setting this up? Or is the setup exactly the same as setting up MirrorMode (per 2.3.x), with the 2.4.x code just automatically doing conflict resolution (i.e. was MirrorMode a 2.3 feature, with multimaster transparently replacing it in 2.4 by adding conflict resolution to MirrorMode, using the same setup)?
Is it possible for a consumer to replicate from multiple masters? I'm thinking along the lines of a master server at 2 locations (for HA/DR purposes), plus multiple read-only slave consumers at each location. My first thought is that these slaves point to the local master, but if that master goes down, the slaves under it stop getting updates. My second thought is to have a load balancer at each site that directs all traffic for a "master ldap" VIP to the primary master if it's up, or to the secondary master if the primary is unavailable. But... (I'm still absorbing syncrepl and RFC 4533) will all the contextCSNs and cookies and so forth match up well enough to allow this kind of failover for *syncrepl*? Is it possible, and what's the best way to set this up, such that I have multiple masters for DR purposes and the failure of any single master does not cause some subset of my read-only slave consumers to stop getting updates?
Syncrepl (in refreshAndPersist mode), as I understand it, has the slave consumer contact the master server, retrieve the list of changes since it last ran (refresh), and then leave a persistent search running that receives changed entries from the master as they happen (persist), so replication is near real-time. If the master server crashes and is restarted, or the connection is broken/dropped (common if a load balancer is in between), how well does the consumer detect this and reconnect, or do consumers tend to need a restart after this occurs? (This is a broken/dropped connection, *not* one cleanly closed by a clean master shutdown or idle timeout, and many apps have trouble detecting this - the client still thinks it has a valid TCP connection, but nothing is coming over it, so it never gets new updates. Does the consumer send keepalive packets or anything to make it realize the connection has died and reconnect?)
When initializing a consumer using an LDIF backup of the master, should this be a slapcat export, so it has everything needed to support syncrepl (such as contextCSN, entryUUIDs, etc.)?
Thanks, - Jeff
Clowser, Jeff (Contractor) wrote:
Agreed - having a single "active" master and a "hot"/active but unused standby master solves most HA issues without introducing the conflicts a full active-active multimaster setup creates. But if that master is there and accepts writes, it's inevitable that someone will some day write to it out of ignorance, and *may* write a conflicting change, so I see conflict resolution as a last ditch fallback for this situation (and nothing more) to prevent corruption or breakage of replication. (Plus, I like to close up or at least be fully aware of all the edge cases that exist, so I know how best to avoid them :) ).
Makes sense.
You said at one point that OpenLDAP (2.4.6?) currently does entry-level conflict resolution, and does not yet do attribute-level conflict resolution - i.e. if the entry was updated on 2 separate servers with different updates, conflicting or not, the most recently changed version of the *entry* wins. So if I change the cn on one master, and after that (but before replication has occurred) I change the userPassword on another master, then when the sync-up occurs I won't see the entry with both the cn and password changed on all servers; I'll see the entry as it is on the most recently changed master (i.e. in my example, I'll see the changed password, but the cn will revert). Is there a roadmap/timeline for attribute-level conflict resolution?
There are no set dates, but I expect it to be later in the 2.4 stream.
Also, I was looking at the admin guide and syncprov man pages on how to set up replication. N-Way multi-mastering details are kinda sparse :). Is there any documentation elsewhere on setting this up? Or is the setup exactly the same as setting up MirrorMode (per 2.3.x), with the 2.4.x code just automatically doing conflict resolution (i.e. was MirrorMode a 2.3 feature, with multimaster transparently replacing it in 2.4 by adding conflict resolution to MirrorMode, using the same setup)?
Yes, set it up pretty much like MirrorMode. MirrorMode was 2.4.1-2.4.4, which were only alpha releases, not general/public releases.
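Roughly, every node gets the same recipe, each with its own serverID - a minimal slapd.conf sketch, names invented:

  serverID 1 ldap://master1.example.com
  serverID 2 ldap://master2.example.com

  database bdb
  suffix "dc=example,dc=com"
  [...]
  overlay syncprov
  syncrepl rid=001
    provider=ldap://master1.example.com
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    searchbase="dc=example,dc=com"
    type=refreshAndPersist
    retry="5 5 300 +"
  syncrepl rid=002
    provider=ldap://master2.example.com
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    searchbase="dc=example,dc=com"
    type=refreshAndPersist
    retry="5 5 300 +"
  mirrormode on

With the serverID/URL form, each slapd picks its own ID by matching against its listener URLs, so the same config can be shared across nodes.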
Is it possible for a consumer to replicate from multiple masters?
Yes in 2.4.
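I.e., a consumer database can carry one syncrepl statement per master, each with its own rid - a sketch, names invented:

  syncrepl rid=101
    provider=ldap://master1.example.com
    [...]
  syncrepl rid=102
    provider=ldap://master2.example.com
    [...]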
I'm thinking along the lines of a master server at 2 locations (for HA/DR purposes), plus multiple read-only slave consumers at each location. My first thought is that these slaves point to the local master, but if that master goes down, the slaves under it stop getting updates. My second thought is to have a load balancer at each site that directs all traffic for a "master ldap" VIP to the primary master if it's up, or to the secondary master if the primary is unavailable. But... (I'm still absorbing syncrepl and RFC 4533) will all the contextCSNs and cookies and so forth match up well enough to allow this kind of failover for *syncrepl*? Is it possible, and what's the best way to set this up, such that I have multiple masters for DR purposes and the failure of any single master does not cause some subset of my read-only slave consumers to stop getting updates?
Syncrepl (in refreshAndPersist mode), as I understand it, has the slave consumer contact the master server, retrieve the list of changes since it last ran (refresh), and then leave a persistent search running that receives changed entries from the master as they happen (persist), so replication is near real-time. If the master server crashes and is restarted, or the connection is broken/dropped (common if a load balancer is in between), how well does the consumer detect this and reconnect, or do consumers tend to need a restart after this occurs? (This is a broken/dropped connection, *not* one cleanly closed by a clean master shutdown or idle timeout, and many apps have trouble detecting this - the client still thinks it has a valid TCP connection, but nothing is coming over it, so it never gets new updates. Does the consumer send keepalive packets or anything to make it realize the connection has died and reconnect?)
Currently the consumer relies on TCP keepalives. We've discussed adding LDAP-level keepalives so we're not dependent on the kernel TCP timers, but that hasn't been done yet.
When initializing a consumer using an LDIF backup of the master, should this be a slapcat export, so it has everything needed to support syncrepl (such as contextCSN, entryUUIDs, etc.)?
That's the fastest way. But you can also just bring up a consumer with an empty database and let it pull the entire DB down during its refresh pass; it will work regardless. Unlike some other replication schemes you may have used, we don't require any special considerations for initial load vs. reload or recovery. Turn it on and it works.
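I.e., roughly (suffix invented):

  # on the master:
  slapcat -b "dc=example,dc=com" -l master.ldif

  # on the new consumer, with slapd not yet running:
  slapadd -b "dc=example,dc=com" -l master.ldif

slapcat includes the operational attributes (entryUUID, entryCSN, contextCSN) that a plain ldapsearch export would miss.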