Hi there!
We are working on a new installation and decided to try something new.
In the past I would have gone with multi-master with an LDAP balancer, but after reading and researching more and more on MDB, we decided to try to integrate OpenLDAP into our current CI/CD pipelines using K8s.
What we have tried so far, and it seems to work, is to initialize a common persistent storage volume and then an auto-scaling group that shares that common drive. Each pod has as many threads as virtual CPUs it may have, and none of the pods can write, except a dedicated write pod (single instance) with multiple threads for writing.
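Concretely, the read pods size their thread pool at start-up with something along these lines (a rough sketch, assuming cn=config over a local ldapi socket; our actual entrypoint differs):

#!/usr/bin/perl
# Rough sketch: size slapd's thread pool to the CPUs the pod actually
# sees, applied once at container start. Assumes cn=config is reachable
# over the local ldapi socket; paths and exact wiring are illustrative.
use strict;
use warnings;

my $cpus = `nproc 2>/dev/null` // '';
chomp $cpus;
$cpus = 1 unless $cpus =~ /^\d+$/ && $cpus > 0;

my $ldif = <<"LDIF";
dn: cn=config
changetype: modify
replace: olcThreads
olcThreads: $cpus
LDIF

open my $fh, '|-', 'ldapmodify -Y EXTERNAL -H ldapi:///'
    or die "ldapmodify: $!";
print {$fh} $ldif;
close $fh or die "setting olcThreads failed\n";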
Is there anything else we are missing here? Any experience scaling OpenLDAP with Kubernetes or other container technology?
Thank you in advance for any comments, pointers or recommendations!
Hi Alejandro,
There is a long list of considerations/preparation needed when running OpenLDAP in a container setup (we use Nomad). From memory:
- use the HA proxy protocol, now supported in 2.5/2.6, so you see client IPs
- DB persistence: make sure each container always has the same db files.
- Sync cookies: make sure the containers sync from the same node each time.
- Backups? (We use netapp mounts)
- Logging? (I bundle rsyslogd in the container, which handles queueing and forwards files to remote rsyslog through TCP.)
- Support for operations like provisioning, indexing and debugging.
Furthermore, I would separate the clusters into a simple replica-only one (ro) and the one that is provisioned (rw).
C.
Hi C., sorry for the late reply, but we've been researching this a lot before answering. I have decent experience with 2-node multi-master replication with BDB from earlier 2.x versions.
On Mon, Nov 6, 2023 at 10:43 AM C R publist.cr@gmail.com wrote:
Hi Alejandro,
There is a long list of considerations/preparation needed when running OpenLDAP in a container setup (we use Nomad). From memory:
- use the HA proxy protocol, now supported in 2.5/2.6, so you see client IPs
How does knowledge about the client IP help in containerization?
- DB persistence: make sure each container always has the same db files.
You mean a shared volume across all pods, or that they obtain an updated local replica when the pod bootstraps?
- Sync cookies: make sure the containers sync from the same node each time.
What cookies are you referring to?
- Backups? (We use netapp mounts)
We do slapcat from a single master node to S3. The backup node is designated by an env var, so only a single pod (a master) runs the slapcat.
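For completeness, the backup step is roughly the following (a sketch only; the env var name, config path and bucket are illustrative, and the aws CLI is assumed to be in the image):

#!/usr/bin/perl
# Sketch: only the pod designated by the env var runs slapcat and
# pushes the LDIF to S3. Var name, config path and bucket are
# placeholders.
use strict;
use warnings;
use POSIX qw(strftime);

exit 0 unless ($ENV{LDAP_BACKUP_NODE} // '') eq '1';

my $stamp = strftime('%Y%m%d-%H%M%S', gmtime);
my $ldif  = "/tmp/ldap-backup-$stamp.ldif";

# slapcat can run against a live back-mdb database.
system('slapcat', '-F', '/etc/openldap/slapd.d', '-l', $ldif) == 0
    or die "slapcat failed: $?\n";

# Ship it off the pod (aws CLI assumed present in the image).
system('aws', 's3', 'cp', $ldif, "s3://example-ldap-backups/$stamp.ldif") == 0
    or die "upload failed: $?\n";

unlink $ldif;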
- Logging? (I bundle rsyslogd in the container, which handles queueing and forwards files to remote rsyslog through TCP.)
No issue there.
- Support for operations like provisioning, indexing and debugging.
Furthermore, I would separate the clusters into a simple replica-only one (ro) and the one that is provisioned (rw).
yeah, we have more or less the same design:
multi-AZ, multi-region N-way master replication (one master node per Region/AZ). Then auto-scaling groups are read-only slaves handling queries and authentications. We use ARGON2, so auths can easily take 3 or more secs and gobble up 64MB of RAM each, plus a lot of CPU time.
Using OpenLDAP this way is a bit avant-garde, so I think there should be a working group, or maybe a separate list, for folks using OpenLDAP with MDB in containers.
Best,
There is a long list of considerations/preparation needed when running OpenLDAP in a container setup (we use Nomad). From memory:
- use the HA proxy protocol, now supported in 2.5/2.6, so you see client IPs
Is it not enough to just have multiple tasks with different IPs on the same host/task name? DNS should do the rest, no?
How does knowledge about the client IP help in containerization?
- DB persistence: make sure each container always has the same db files.
You mean a shared volume across all pods, or that they obtain an updated local replica when the pod bootstraps?
I don't have that many changes to LDAP, so it could be sufficient to just work with stateless containers that update on startup. I have the replication ID change automatically based on the assigned IP.
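In practice that looks something like this at container start (a rough sketch; deriving the ID from the last octet and pushing olcServerID over ldapi are just one way to do it):

#!/usr/bin/perl
# Sketch: derive a per-container replication server ID from the
# assigned IP at container start. Using the last octet and setting
# olcServerID over ldapi are illustrative assumptions.
use strict;
use warnings;
use Sys::Hostname;
use Socket qw(inet_ntoa);

# Prefer an address handed in by the orchestrator, fall back to DNS.
my $ip = $ENV{POD_IP}
      // inet_ntoa(scalar gethostbyname(hostname()));

# olcServerID has to be unique per provider (valid range 1-4095).
my ($octet) = $ip =~ /(\d+)\s*$/;
die "could not parse IP '$ip'\n" unless $octet && $octet > 0;

my $ldif = <<"LDIF";
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: $octet
LDIF

open my $fh, '|-', 'ldapmodify -Y EXTERNAL -H ldapi:///'
    or die "ldapmodify: $!";
print {$fh} $ldif;
close $fh or die "setting olcServerID failed\n";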
yeah, we have more or less the same design:
multi-AZ, multi-region N-way master replication (one master node per Region/AZ). Then auto-scaling groups are read-only slaves handling queries and authentications. We use ARGON2, so auths can easily take 3 or more secs and gobble up 64MB of RAM each, plus a lot of CPU time.
Using ARGON2 auth takes 3 seconds (was thinking of switching to this)?
On Fri, Jan 5, 2024 at 9:03 PM Marc Marc@f1-outsourcing.eu wrote:
...
Using ARGON2 auth takes 3 seconds (was thinking of switching to this)?
You should fine-tune it to the actual deployment environment. We use a lot of Perl, so I use this utility to calibrate it on a typical pod: https://metacpan.org/dist/Crypt-Argon2/view/script/argon2-calibrate
This is our current setup:
Y  => 'argon2id', # type
P  => 2,          # threads
M  => '64M',      # mem
T  => 17,         # passes
SL => 128,        # salt len
TL => 128,        # tag len
It takes about 2 secs on the LDAP pod, and 3-5 secs from the outside when you add our OAuth2 server and all the network latency.
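If anyone wants to reproduce the timing, those parameters map onto Crypt::Argon2 roughly as below (just a sketch of the per-hash cost; salt handling and the wiring into slapd's password scheme are left out):

#!/usr/bin/perl
# Rough sketch: feed the parameters above to Crypt::Argon2 and time a
# single hash/verify. This only shows the per-auth cost; how the hash
# is stored and checked by slapd is out of scope here.
use strict;
use warnings;
use Time::HiRes qw(time);
use Crypt::Argon2 qw(argon2id_pass argon2_verify);
use Crypt::URandom qw(urandom);

my $password = 'correct horse battery staple';
my $salt     = urandom(128);              # SL => 128

my $t0 = time;
# T => 17 passes, M => '64M' memory, P => 2 lanes, TL => 128 tag bytes
my $encoded = argon2id_pass($password, $salt, 17, '64M', 2, 128);
printf "hashing took %.2fs\n", time - $t0;

# Each bind pays roughly the same cost again when verifying.
print argon2_verify($encoded, $password) ? "verify ok\n" : "verify failed\n";

The argon2-calibrate script mentioned above can then suggest values for a target time on the actual pod.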