I've been trying to create a complex multi-master replication of cn=config for a week now... I'm using the stock Debian package: slapd 2.4.40+dfsg-1+deb8u1.
I've seen someone claim it could work, but cannot find configs for this kind of topology: http://www.slideshare.net/ghenry/openldap-replication-strategies (slide #24)
When building a simple multi-master setup (two or three nodes), everything works as planned. But when some nodes cannot talk to others, I cannot find a way to make it work.
Let's take this example:
+-------+       +-------+       +-------+
| ldap1 | <---> | ldap2 | <---> | ldap3 |
+-------+       +-------+       +-------+
and say:

olcSyncRepl: rid=001 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap1
olcSyncRepl: rid=002 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap2
olcSyncRepl: rid=003 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap3
Where initially:

ldap1 has rid=002
ldap2 has rid=001 and rid=003
ldap3 has rid=002
Soon every server ends up with rid=001, rid=002 and rid=003, so ldap3 tries to talk to ldap1, which it cannot reach, and it does not work...
And even if I don't care about the connection between ldap1 and ldap3 failing, the replication does not work either... if ldap1 changes something, it gets replicated to ldap2 but not to ldap3.
Also, I've been trying to use exattrs=Syncrepl, but if someone changes their Syncrepl, it gets deleted on the other nodes... Someone seems to have seen this with the memberof overlay:
http://www.openldap.org/lists/openldap-technical/201505/msg00124.html
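For reference, the kind of clause I've been trying looks roughly like this (a sketch only; I'm assuming the attribute to exclude in cn=config is olcSyncrepl):

olcSyncRepl: rid=002 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap2 exattrs=olcSyncrepl

The idea was that each server would keep its own olcSyncrepl values out of replication, but instead a change on one node ends up deleting the attribute on the others.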
Anyone have references to help me get to my goal?
--On Thursday, October 01, 2015 12:26 PM -0400 Patrick pbrideau@kronostechnologies.com wrote:
[...]
While I avoid replicating cn=config, we have several customers with 3+ MMR setups for their back-mdb databases that work just fine. You fail to note what your *serverID* is set to on each master, as that's required to be different.
I would suggest you set your log level to "sync stats" and determine why the various masters are unable to talk to one another.
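For reference, something like this LDIF should set that (applied against cn=config, e.g. with ldapmodify -Y EXTERNAL -H ldapi:/// on Debian):

dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: sync stats

Then watch each master's log for syncrepl connection and refresh errors.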
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
On 01/10/15 04:48 PM, Quanah Gibson-Mount wrote:
[...]
While I avoid replicating cn=config, we have several customers with 3+ MMR setups for their back-mdb databases that work just fine. You fail to note what your *serverID* is set to on each master, as that's required to be different.
I would suggest you set your log level to "sync stats" and determine why the various masters are unable to talk to one another.
Thanks for the reply.
The serverID config was set on all servers as:
dn: cn=config
objectClass: olcGlobal
[...]
olcServerID: 1 ldap://ldap1
olcServerID: 2 ldap://ldap2
olcServerID: 3 ldap://ldap3
Sorry if it was not clear enough: it is by design that ldap1 and ldap3 cannot talk. They are on different networks, and ldap2 is the one sitting between all of them.
Without syncing cn=config, do you replicate olcAccess, schemas, olcDbIndex, etc. manually between servers?
Patrick Brideau
System Administrator
Kronos Technologies - http://www.kronos-web.com
tel: 418 877-5400 ext. 216
Patrick wrote:
dn: cn=config
objectClass: olcGlobal
[...]
olcServerID: 1 ldap://ldap1
olcServerID: 2 ldap://ldap2
olcServerID: 3 ldap://ldap3
Note that
1. you should probably use FQDNs instead of short names
2. you must explicitly start slapd with -h ldap://ldap1 etc. to really assign the server ID to a certain replica.
BTW: Personally I prefer not to replicate cn=config (I'm using static configuration anyway) and just add one server ID per instance, to avoid the strong dependency on the -h option.
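For illustration (hostnames assumed), point 2 means the listener URL passed to slapd has to match one of the olcServerID values:

# on ldap1: the -h URL must match "olcServerID: 1 ldap://ldap1"
slapd -h "ldap://ldap1" ...

whereas with static configuration each instance simply carries its own fixed ID in slapd.conf, independent of -h:

# slapd.conf on ldap1 only; ldap2 and ldap3 each get their own number
serverid 1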
Ciao, Michael.
On 02/10/15 02:35 PM, Michael Ströder wrote:
[...]
Yeah, for simplicity's sake, I removed the FQDN, SSL stuff and everything else from my post... I see I should have included it all.
But yeah, it is all present: I start with -h ldaps://ldap1.fqdn, and my /etc/hosts has the required entries.
It works when every master can talk to each other, but I'm one step further: in our prod environment, not every LDAP server will be able to talk to every other one.
This works:
    +-------------------------------+
    v                               v
+-------+       +-------+       +-------+
| ldap1 | <---> | ldap2 | <---> | ldap3 |
+-------+       +-------+       +-------+

This doesn't:

+-------+       +-------+       +-------+
| ldap1 | <---> | ldap2 | <---> | ldap3 |
+-------+       +-------+       +-------+
Patrick Brideau
System Administrator
Kronos Technologies - http://www.kronos-web.com
tel: 418 877-5400 ext. 216
Patrick wrote:
[...]
This works:

    +-------------------------------+
    v                               v
+-------+       +-------+       +-------+
| ldap1 | <---> | ldap2 | <---> | ldap3 |
+-------+       +-------+       +-------+

This doesn't:

+-------+       +-------+       +-------+
| ldap1 | <---> | ldap2 | <---> | ldap3 |
+-------+       +-------+       +-------+
Yeah, replicating cn=config is only viable if all servers work with identical configuration. Making this configuration work would require adding a qualifier to the syncrepl config to restrict which server nodes it activates on. I think it would be worthwhile to add a feature for this, but it doesn't exist at the moment. Feel free to submit an Enhancement request to the ITS.
Howard Chu wrote:
Yeah, replicating cn=config is only viable if all servers work with identical configuration. Making this configuration work would require adding a qualifier to the syncrepl config to restrict which server nodes it activates on. I think it would be worthwhile to add a feature for this, but it doesn't exist at the moment. Feel free to submit an Enhancement request to the ITS.
Wouldn't it make more sense to introduce configuration variables, with some filled in by querying system parameters?
Hmm... better not re-invent configuration management systems, though...
Ciao, Michael.
Michael Ströder wrote:
[...]
Wouldn't it make more sense to introduce configuration variables, with some filled in by querying system parameters?
No. That would require a lot of platform-dependent system knowledge and it's not even needed here.
Hmm... better not re-invent configuration management systems, though...
The particular config could be implemented just by adding a serverID keyword to the syncrepl config clause. Then that particular consumer instance would only activate if the current server matches the specified serverID(s).
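To illustrate (hypothetical syntax, since no such keyword exists today), each consumer stanza would name the server IDs it applies to, e.g. for the chain topology above:

olcSyncRepl: rid=001 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap1 serverID=2
olcSyncRepl: rid=002 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap2 serverID=1,3
olcSyncRepl: rid=003 searchbase="cn=config" type=refreshAndPersist provider=ldap://ldap3 serverID=2

ldap2 would then consume from both neighbours, while ldap1 and ldap3 would each only consume from ldap2, even though all three stanzas are replicated everywhere.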
On Fri, Oct 02, 2015 at 20:35:12 +0200, Michael Ströder wrote:
- you must explicitly start slapd with -h ldap://ldap1 etc. to really assign the server ID to a certain replica.
Not in my experience: we have multiple serverIDs in the config and just use -h ldap:/// on each. The serverID matching the system's hostname is used. This allows for identical configuration everywhere.
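For illustration (domain assumed), the identical config on every node would be:

olcServerID: 1 ldap://ldap1.example.com/
olcServerID: 2 ldap://ldap2.example.com/
olcServerID: 3 ldap://ldap3.example.com/

with every instance started the same way:

slapd -h ldap:///

Each slapd then uses the server ID whose URL matches its own fully qualified hostname.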
Geert
Geert Hendrickx wrote:
[...]
Not in my experience: we have multiple serverIDs in the config and just use -h ldap:/// on each. The serverID matching the system's hostname is used. This allows for identical configuration everywhere.
Yes, this works.
But in many cases the system's canonical hostname is not the service's hostname. This will be even more true with today's IPv6 setups.
Ciao, Michael.
On Sat, Oct 03, 2015 at 09:20:32 +0200, Michael Ströder wrote:
Yes, this works.
But in many cases the system's canonical hostname is not the service's hostname. This will be even more true with today's IPv6 setups.
We use the canonical hostnames for replication between the servers, exactly because the serverID is tied to the system itself. And we use service hostnames for queries from external LDAP clients, because they can float between servers for failover.
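As a sketch (names assumed), that split looks like:

# replication is tied to the individual machines:
olcSyncRepl: rid=001 ... provider=ldap://ldap1.example.com/
# external clients query a service alias that can float between servers:
#   ldap://ldap.example.com/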
Geert