Hi,
I have been thinking about a scalable multi-site deployment architecture for OpenLDAP where I would like to:
- Have a small number of master servers centrally in the enterprise with MMR.
- All account provisioning would be at the central sites.
- Have multiple edge sites replicate off those masters in a star topology with MMR.
- Allow writes to those edge sites for the purposes of slapo_ppolicy, slapo_lastbind and password changes.
I would like to avoid fully meshing all servers for MMR and would prefer a star topology where each edge site only replicates with the central site.
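The consumer side of such a star could be sketched roughly as below on one edge server, assuming cn=config, an mdb database, and hypothetical hostnames and credentials. Each edge points only at the two central masters, so no full mesh is needed:

```ldif
# Sketch only: hostnames, binddn and credentials are hypothetical.
# Each edge server pulls from the central masters and runs in
# mirror mode so that local writes (ppolicy/lastbind state) are accepted.
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=101 provider=ldap://master1.example.com
  bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
  credentials=secret searchbase="dc=example,dc=com"
  type=refreshAndPersist retry="30 +"
olcSyncrepl: rid=102 provider=ldap://master2.example.com
  bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
  credentials=secret searchbase="dc=example,dc=com"
  type=refreshAndPersist retry="30 +"
-
add: olcMirrorMode
olcMirrorMode: TRUE
```

Each participating server would also need a distinct olcServerID for MMR to work.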
I would also like to avoid chaining. See my previous posts why.
Before I set this up in my lab I would like a second opinion. The customer is asking for best practices in large deployments.
Any comments?
Greetings Christian
On Fri, 13 Dec 2013 18:40:02 +0100 (CET) Christian Kratzer ck-lists@cksoft.de wrote
- Allow writes to those edge sites for the purposes of slapo_ppolicy, slapo_lastbind and password changes.
Note that with OpenLDAP operational attributes set by slapo-ppolicy and slapo-lastbind are not replicated anyway (with some exceptions like pwdChangedTime).
Ciao, Michael.
Hi Michael,
On Fri, 13 Dec 2013, Michael Ströder wrote:
On Fri, 13 Dec 2013 18:40:02 +0100 (CET) Christian Kratzer ck-lists@cksoft.de wrote
- Allow writes to those edge sites for the purposes of slapo_ppolicy, slapo_lastbind and password changes.
Note that with OpenLDAP operational attributes set by slapo-ppolicy and slapo-lastbind are not replicated anyway (with some exceptions like pwdChangedTime).
For slapo-ppolicy I do see pwdFailureTime, pwdAccountLockedTime and pwdChangedTime being replicated, which is enough for my use case.
For slapo-lastbind, authTimestamp is not replicated by default. I have local patches (ITS#7721) to also replicate authTimestamp.
I am planning on setting olcLastBindPrecision to a large value of 8 hours or more, which is also more than enough for the customer's requirement of finding users who have not logged in for 6 months.
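For reference, that precision could be set roughly like this (a sketch assuming cn=config and an mdb database; 28800 seconds corresponds to the 8 hours mentioned above):

```ldif
# Sketch: enable the lastbind overlay with a coarse precision so
# authTimestamp is rewritten at most once per 8 hours per user.
dn: olcOverlay={0}lastbind,olcDatabase={1}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcLastBindConfig
olcOverlay: lastbind
olcLastBindPrecision: 28800
```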
I am thinking about having MMR write access up to the edges, where I would usually have read-only slaves, in order to have the above attributes propagate.
Greetings Christian
2013/12/13 Michael Ströder michael@stroeder.com
On Fri, 13 Dec 2013 18:40:02 +0100 (CET) Christian Kratzer <ck-lists@cksoft.de> wrote
- Allow writes to those edge sites for the purposes of slapo_ppolicy, slapo_lastbind and password changes.
Note that with OpenLDAP operational attributes set by slapo-ppolicy and slapo-lastbind are not replicated anyway (with some exceptions like pwdChangedTime).
Not exactly, but I think there are still some bugs in the current implementation (I just opened an ITS on the subject: http://www.openldap.org/its/index.cgi/Incoming?id=7766).
When the entry is created on the slave, all ppolicy attributes are replicated (it seems logical to start with the same values as the master). You can then authenticate on the slave and see differences between the slave entry and the master entry in failure time or unlock time. But some problems occur when the master entry is modified and replicated to the slave...
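For context, the attributes under discussion are maintained by slapo-ppolicy; a minimal overlay configuration might look like the sketch below (the default policy DN is hypothetical):

```ldif
# Sketch: enable slapo-ppolicy on the database with a default policy.
dn: olcOverlay={0}ppolicy,olcDatabase={1}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcPPolicyConfig
olcOverlay: ppolicy
olcPPolicyDefault: cn=default,ou=policies,dc=example,dc=com
```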
Clément.
On Fri, 13 Dec 2013 18:40:02 +0100 (CET), Christian Kratzer ck-lists@cksoft.de wrote:
Hi,
I have been thinking about a scalable multi-site deployment architecture for OpenLDAP where I would like to:
- Have a small number of master servers centrally in the enterprise with MMR.
- All account provisioning would be at the central sites.
- Have multiple edge sites replicate off those masters in a star topology with MMR.
- Allow writes to those edge sites for the purposes of slapo_ppolicy, slapo_lastbind and password changes.
I would like to avoid fully meshing all servers for MMR and would prefer a star topology where each edge site only replicates with the central site.
I would also like to avoid chaining. See my previous posts why.
Before I set this up in my lab I would like a second opinion. The customer is asking for best practices in large deployments.
Michael is quite correct in his comments regarding slapo-ppolicy, but in principle I have realised this design in a cascading directory with more than 100 slaves.
-Dieter
Hi,
On Fri, 13 Dec 2013, Dieter Klünter wrote:
Michael is quite correct in his comments regarding slapo-ppolicy, but in principle I have realised this design in a cascading directory with more than 100 slaves.
Read-only slaves, or slaves with full write access?
The latter is what I need and what I am concerned about.
Greetings Christian
On Sat, 14 Dec 2013 11:31:09 +0100 (CET), Christian Kratzer ck-lists@cksoft.de wrote:
Hi,
On Fri, 13 Dec 2013, Dieter Klünter wrote:
Michael is quite correct in his comments regarding slapo-ppolicy, but in principle I have realised this design in a cascading directory with more than 100 slaves.
Read-only slaves, or slaves with full write access?
The latter is what I need and what I am concerned about.
Read operations locally, and write operations by chaining.
-Dieter
Hi,
On Sat, 14 Dec 2013, Dieter Klünter wrote:
On Sat, 14 Dec 2013 11:31:09 +0100 (CET), Christian Kratzer ck-lists@cksoft.de wrote:
Hi,
On Fri, 13 Dec 2013, Dieter Klünter wrote:
Michael is quite correct in his comments regarding slapo-ppolicy, but in principle I have realised this design in a cascading directory with more than 100 slaves.
Read-only slaves, or slaves with full write access?
The latter is what I need and what I am concerned about.
Read operations locally, and write operations by chaining.
Chaining seems to block on a global lock for olcConnectionTimeout when connectivity to the referral servers is lost. This took down the customer's slaves with chaining when a network issue separated the slaves from the masters.
See my other post about a week ago.
I would need at least one, or better two, master servers per site as a target for chaining, to cater for network partitions.
An option would be to fix the chaining issues.
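If the chaining route were pursued instead, back-ldap accepts multiple URIs as failover targets, so a sketch along these lines (hypothetical hostnames and proxy identity) could mitigate a single master outage, though it would not by itself address the global-lock behaviour described above:

```ldif
# Sketch: chain overlay on the frontend, with both central
# masters listed as failover targets in olcDbURI.
dn: olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcOverlayConfig
objectClass: olcChainConfig
olcOverlay: chain

dn: olcDatabase={0}ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcLDAPConfig
objectClass: olcChainDatabase
olcDatabase: ldap
olcDbURI: "ldap://master1.example.com ldap://master2.example.com"
olcDbIDAssertBind: bindmethod=simple
  binddn="cn=proxy,dc=example,dc=com"
  credentials=secret
  mode=self
```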
Thinking about my options, I wanted to explore the alternative of having the clients on MMR masters. Also, those MMR masters would not be fully meshed with the other sites.
Greetings Christian
On Sat, 14 Dec 2013 20:48:37 +0100 (CET), Christian Kratzer ck-lists@cksoft.de wrote:
Hi,
On Sat, 14 Dec 2013, Dieter Klünter wrote:
On Sat, 14 Dec 2013 11:31:09 +0100 (CET), Christian Kratzer ck-lists@cksoft.de wrote:
Hi,
On Fri, 13 Dec 2013, Dieter Klünter wrote:
Michael is quite correct in his comments regarding slapo-ppolicy, but in principle I have realised this design in a cascading directory with more than 100 slaves.
Read-only slaves, or slaves with full write access?
The latter is what I need and what I am concerned about.
Read operations locally, and write operations by chaining.
Chaining seems to block on a global lock for olcConnectionTimeout when connectivity to the referral servers is lost. This took down the customer's slaves with chaining when a network issue separated the slaves from the masters.
See my other post about a week ago.
I would need at least one, or better two, master servers per site as a target for chaining, to cater for network partitions.
An option would be to fix the chaining issues.
Thinking about my options, I wanted to explore the alternative of having the clients on MMR masters. Also, those MMR masters would not be fully meshed with the other sites.
I am quite aware of your posts. Please note that I have set up a cascading directory design.
-Dieter
openldap-technical@openldap.org