Hello,
I've had problems with an upgrade to OpenLDAP 2.4.43, and I reviewed my configuration to check for errors. I have two slaves which replicate all databases from the master, including cn=config.
My reasoning is that in case of a severe crash, I'd like one of the slaves to be able to become the new master. To allow such a scenario, each database on the master has a syncrepl directive pointing at itself. This obviously has no effect on the master, but a slave that replicates this configuration gets the right directives to replicate from the master. So far so good...
Now, this is my question: should/could the accesslog be configured this way? Or should the accesslog be strictly local to a server? I mean, should I remove the syncrepl directive from the accesslog database?
This configuration ran without errors up to 2.4.41, but I got a weird error about accesslog corruption when I upgraded to 2.4.43 (sorry, I didn't write the error down), hence my question.
Thanks in advance for any response. Best regards, Jephté CLAIN
PS: I hope it's clear enough. English is not my native language.
--On Monday, December 28, 2015 10:44 AM +0400 Jephte Clain jephte.clain@univ-reunion.fr wrote:
Hello,
I've had problems with an upgrade to OpenLDAP 2.4.43, and I reviewed my configuration to check for errors. I have two slaves which replicate all databases from the master, including cn=config.
If the replicas are replicating the provider's cn=config, I doubt they are truly replicas.
My reasoning is that in case of a severe crash, I'd like one of the slaves to be able to become the new master. To allow such a scenario, each database on the master has a syncrepl directive pointing at itself. This obviously has no effect on the master, but a slave that replicates this configuration gets the right directives to replicate from the master. So far so good...
Now, this is my question: should/could the accesslog be configured this way? Or should the accesslog be strictly local to a server? I mean, should I remove the syncrepl directive from the accesslog database?
I'm not sure what you mean about a syncrepl directive on the accesslog DB. That sounds like an incorrect configuration.
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
On 28/12/2015 19:12, Quanah Gibson-Mount wrote:
--On Monday, December 28, 2015 10:44 AM +0400 Jephte Clain jephte.clain@univ-reunion.fr wrote:
Hello,
I've had problems with an upgrade to OpenLDAP 2.4.43, and I reviewed my configuration to check for errors. I have two slaves which replicate all databases from the master, including cn=config.
If the replicas are replicating the provider's cn=config, I doubt they are truly replicas.
Hello,
I'm not sure I understand what you are saying. The "master" and the "slaves" are identical, and cn=config is the same on all hosts, so they are truly replicas, right?
My reasoning is that in case of a severe crash, I'd like one of the slaves to be able to become the new master. To allow such a scenario, each database on the master has a syncrepl directive pointing at itself. This obviously has no effect on the master, but a slave that replicates this configuration gets the right directives to replicate from the master. So far so good...
Now, this is my question: should/could the accesslog be configured this way? Or should the accesslog be strictly local to a server? I mean, should I remove the syncrepl directive from the accesslog database?
I'm not sure what you mean about a syncrepl directive on the accesslog DB. That sounds like an incorrect configuration.
In fact, the configuration is generated by a script I wrote several years ago, and all databases for a "master" configuration are generated with a syncprov overlay and a syncrepl directive, to enable "seed replication":
dn: olcDatabase={0}config,cn=config
...
olcSyncrepl: {0}bindmethod=sasl saslmech=digest-md5 authcid=_config
  credentials=PASSWORD rid=0 provider="ldap://HOST" searchbase=cn=config
  type=refreshAndPersist retry="60 10 300 +" schemachecking=off
olcUpdateRef: ldap://HOST

dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
...

dn: olcDatabase={1}mdb,cn=config
...
olcSuffix: cn=modiflog
olcSyncrepl: {0}bindmethod=sasl saslmech=digest-md5 authcid=_syncrepl
  credentials=PASSWORD rid=1 provider="ldap://HOST/" searchbase=cn=modiflog
  type=refreshAndPersist retry="60 10 300 +" schemachecking=off
olcUpdateRef: ldap://HOST/

dn: olcOverlay=syncprov,olcDatabase={1}mdb,cn=config
...

dn: olcDatabase={2}mdb,cn=config
...
olcSuffix: dc=univ-reunion,dc=fr
olcSyncrepl: {0}bindmethod=sasl saslmech=digest-md5 authcid=_syncrepl
  credentials=PASSWORD rid=2 provider="ldap://HOST/" searchbase=dc=univ-reunion,dc=fr
  type=refreshAndPersist retry="60 10 300 +" schemachecking=off
olcUpdateRef: ldap://HOST/

dn: olcOverlay=syncprov,olcDatabase={2}mdb,cn=config
...
After the problems I had, I took the time to read the admin guide again and noticed that the accesslog database was not supposed to be replicated, or so it seemed. I then wondered whether the replication I had configured on the accesslog database was the cause of my issues... hence my question, to be sure I understood correctly.
Here is another question: if an accesslog database is strictly local to a server, how should two masters in mirror mode be configured? I have a bi-master setup with an active/passive configuration: the load balancer only sends requests to the first master, unless it stops responding. If the first master crashes and writes are directed toward the second master, won't I lose the accesslog information?
Thanks for any input. Best regards, Jephté CLAIN
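For what it's worth, my reading of the admin guide's delta-syncrepl example is that a strictly local accesslog would look something like the following: an accesslog overlay on the main database and a syncprov overlay on the log database, but no olcSyncrepl directive on the log database at all. (The suffix, purge interval and database numbers below are illustrative placeholders, not my real configuration.)

dn: olcDatabase={1}mdb,cn=config
...
olcSuffix: cn=accesslog
# note: no olcSyncrepl here -- the log stays local to this server

dn: olcOverlay=syncprov,olcDatabase={1}mdb,cn=config
olcOverlay: syncprov
olcSpNoPresent: TRUE
olcSpReloadHint: TRUE

dn: olcOverlay=accesslog,olcDatabase={2}mdb,cn=config
olcOverlay: accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
olcAccessLogPurge: 07+00:00 01+00:00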
--On Tuesday, December 29, 2015 10:39 AM +0400 Jephte Clain jephte.clain@univ-reunion.fr wrote:
On 28/12/2015 19:12, Quanah Gibson-Mount wrote:
I'm not sure I understand what you are saying. The "master" and the "slaves" are identical, and cn=config is the same on all hosts, so they are truly replicas, right?
No. Replicas do not accept writes. Replicas do not have a master configuration for cn=config. Replicas do not have server IDs.
In fact, the configuration is generated by a script I wrote several years ago, and all databases for a "master" configuration are generated with a syncprov overlay and a syncrepl directive, to enable "seed replication":
After the problems I had, I took the time to read the admin guide again and noticed that the accesslog database was not supposed to be replicated, or so it seemed. I then wondered whether the replication I had configured on the accesslog database was the cause of my issues... hence my question, to be sure I understood correctly.
Accesslog is unique to a given master.
Here is another question: if an accesslog database is strictly local to a server, how should two masters in mirror mode be configured? I have a bi-master setup with an active/passive configuration: the load balancer only sends requests to the first master, unless it stops responding. If the first master crashes and writes are directed toward the second master, won't I lose the accesslog information?
Every master must have a unique server ID. Each master will replicate the writes from the other master and update its own accesslog accordingly. You will not lose any writes.
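For illustration, a minimal mirror-mode sketch with unique server IDs might look like the following (hostnames, rid values, bind credentials and the suffix are placeholders, not taken from this thread). The same configuration is deployed on both hosts; slapd picks its own olcServerID by matching the URL against its listener:

dn: cn=config
olcServerID: 1 ldap://master1.example.com/
olcServerID: 2 ldap://master2.example.com/

dn: olcDatabase={2}mdb,cn=config
olcSyncrepl: rid=10 provider=ldap://master1.example.com/
  bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
  credentials=PASSWORD searchbase="dc=example,dc=com"
  type=refreshAndPersist retry="60 10 300 +"
olcSyncrepl: rid=20 provider=ldap://master2.example.com/
  bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
  credentials=PASSWORD searchbase="dc=example,dc=com"
  type=refreshAndPersist retry="60 10 300 +"
olcMirrorMode: TRUE

Each server pulls the other's writes over syncrepl and feeds them through its own accesslog overlay, which is why the accesslog itself never needs to be replicated.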
I'm guessing your configurations are generally incorrect.
--Quanah
On 29/12/2015 11:04, Quanah Gibson-Mount wrote:
--On Tuesday, December 29, 2015 10:39 AM +0400 Jephte Clain jephte.clain@univ-reunion.fr wrote:
On 28/12/2015 19:12, Quanah Gibson-Mount wrote:
I'm not sure I understand what you are saying. The "master" and the "slaves" are identical, and cn=config is the same on all hosts, so they are truly replicas, right?
No. Replicas do not accept writes. Replicas do not have a master configuration for cn=config. Replicas do not have server IDs.
OK, I think I understand. This is the reason why I usually call them "slaves", not replicas (but I mixed things up and called them replicas this time ^^). I also have replicas that only replicate data (or a subset of the data) for some services.
The slaves are there in case of a catastrophic failure of both masters (we had one of these failures for another service due to a problem with the shared storage; no one wants to have that kind of emergency...). If the master(s) crash, I just have to choose a slave as the new master, slapcat the cn=config database, update the provider address, slapadd the updated config, and update the load balancer settings. This is a bit of work, but at least we can restore service in a (relatively) short amount of time.
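The promotion steps above can be sketched roughly like this (paths are the Debian defaults and may differ on your system; this is an outline under those assumptions, not a tested runbook):

# dump the chosen slave's cn=config (database number 0)
slapcat -n 0 -F /etc/ldap/slapd.d -l config.ldif

# edit config.ldif: point the provider= in the olcSyncrepl values,
# and the olcUpdateRef values, at the new master's address

# rebuild the config directory from the edited dump, then restart
rm -rf /etc/ldap/slapd.d/*
slapadd -n 0 -F /etc/ldap/slapd.d -l config.ldif
service slapd restart

# finally, repoint the load balancer at the new master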
In fact, the configuration is generated by a script I wrote several years ago, and all databases for a "master" configuration are generated with a syncprov overlay and a syncrepl directive, to enable "seed replication":
After the problems I had, I took the time to read the admin guide again and noticed that the accesslog database was not supposed to be replicated, or so it seemed. I then wondered whether the replication I had configured on the accesslog database was the cause of my issues... hence my question, to be sure I understood correctly.
Accesslog is unique to a given master.
OK, that's what I wanted to know for sure.
Shouldn't the doc state this clearly?
Here is another question: if an accesslog database is strictly local to a server, how should two masters in mirror mode be configured? I have a bi-master setup with an active/passive configuration: the load balancer only sends requests to the first master, unless it stops responding. If the first master crashes and writes are directed toward the second master, won't I lose the accesslog information?
Every master must have a unique server ID. Each master will replicate the writes from the other master and update its own accesslog accordingly. You will not lose any writes.
that's a relief
I'm guessing your configurations are generally incorrect.
Yes, I'm updating them right now to disable replication of the accesslog.
Thanks a lot for the clarification. In case you come to Reunion Island someday, I owe you a beer!
Best regards, Jephté CLAIN
--On Tuesday, December 29, 2015 12:48 PM +0400 Jephte Clain jephte.clain@univ-reunion.fr wrote:
No. Replicas do not accept writes. Replicas do not have a master configuration for cn=config. Replicas do not have server IDs.
OK, I think I understand. This is the reason why I usually call them "slaves", not replicas (but I mixed things up and called them replicas this time ^^). I also have replicas that only replicate data (or a subset of the data) for some services.
No. The terms replica and slave are interchangeable, as are master and provider. Given the very negative connotations of the concept of masters and slaves, the preferred terms are "provider" instead of "master" and "replica" instead of "slave".
The slaves are there in case of a catastrophic failure of both masters (we had one of these failures for another service due to a problem with the shared storage; no one wants to have that kind of emergency...). If the master(s) crash, I just have to choose a slave as the new master, slapcat the cn=config database, update the provider address, slapadd the updated config, and update the load balancer settings. This is a bit of work, but at least we can restore service in a (relatively) short amount of time.
If they accept writes, they are not slaves/replicas. If you are replicating cn=config across all the systems, then they must all be masters. Your general description above sounds like you do not correctly understand how MMR functions.
Accesslog is unique to a given master.
OK, that's what I wanted to know for sure.
Shouldn't the doc state this clearly?
Please file an ITS noting the docs should be updated on this point.
Thanks a lot for the clarification. In case you come to Reunion Island someday, I owe you a beer!
:)
--Quanah