Hi C., sorry for the late reply; we've been doing a lot of research on this before answering.
I have decent experience with 2-node multi-master replication with BDB from the earlier 2.x versions.

On Mon, Nov 6, 2023 at 10:43 AM C R <publist.cr@gmail.com> wrote:
Hi Alejandro,

There is a long list of considerations/preparation needed when running
OpenLDAP in a container setup (we use Nomad). From memory:
- use the HAProxy proxy protocol, now supported in 2.5/2.6, so you see client IPs

How does knowing the client IP help in a containerized setup?
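(For anyone else following along: if I understand the 2.5/2.6 docs correctly, the proxy-protocol listeners are enabled with the pldap:// / pldaps:// URL schemes, so the load balancer's listener runs alongside the normal one, something like:)

```
# hedged sketch: a normal listener plus a proxy-protocol listener for the LB
slapd -h "ldap:/// pldap://0.0.0.0:2389/" -u ldap -g ldap
```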

- DB persistence: make sure each container always has the same DB files.

You mean a shared volume across all pods, or that they obtain an updated local replica when the pod bootstraps?
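In case it helps frame the question: if you mean per-pod DB files, in K8s a StatefulSet volumeClaimTemplate gives each pod its own stable volume across restarts. A rough sketch (all names are made up, not our actual config):

```yaml
# Hypothetical StatefulSet fragment: one PVC per slapd pod, so each pod
# keeps its own MDB files across restarts.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: slapd-ro
spec:
  serviceName: slapd-ro
  replicas: 3
  selector:
    matchLabels: { app: slapd-ro }
  template:
    metadata:
      labels: { app: slapd-ro }
    spec:
      containers:
        - name: slapd
          image: our-openldap:2.6
          volumeMounts:
            - name: data
              mountPath: /var/lib/ldap
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```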
 
- Sync cookies: make sure the containers sync from the same node each time.

What cookies are you referring to?
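I assume you mean the syncrepl cookie, i.e. the contextCSN the consumer stores so it knows where to resume replication? If so, it can be inspected with something like:

```
# read the contextCSN of the suffix (replace the base DN with yours)
ldapsearch -x -H ldap://consumer.example.com -s base \
  -b "dc=example,dc=com" contextCSN
```

With N-way multi-master the cookie carries per-server CSNs, which is presumably why you want each consumer to keep syncing from the same provider.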
 
- Backups? (We use NetApp mounts)

We do slapcat from a single master node to S3. The backup node is designated by an env var, so only a single pod (a master) runs the slapcat.
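Roughly, the wrapper looks like this (the env var name and paths here are made up for illustration; ours differ):

```shell
# Date-stamped LDIF name, e.g. ldap-2023-11-06.ldif
backup_name() { printf 'ldap-%s.ldif' "$(date +%F)"; }

run_backup() {
  # Only the designated backup master dumps; every other pod is a no-op.
  [ "${BACKUP_NODE:-0}" = "1" ] || return 0
  slapcat -n 1 -l "/backups/$(backup_name)"
  # then push to S3, e.g.: aws s3 cp "/backups/$(backup_name)" s3://our-bucket/
}
```

Running it on every pod is harmless, since only the pod with the env var set actually does anything.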
 
- Logging? (I bundle rsyslogd in the container, which handles queueing
and forwards logs to a remote rsyslog over TCP).

No issue there.
 
- Support for operations like provisioning, indexing and debugging.

Furthermore, I would split the cluster in two: a simple replica-only
one (ro), and the one that is provisioned (rw).


Yeah, we have more or less the same design:

Multi-AZ, multi-region N-way multi-master replication (one master node per region/AZ). Auto-scaling groups then run read-only replicas that handle queries and authentication. We use ARGON2, so each auth can easily take 3 or more seconds and gobble up 64 MB of RAM, plus a lot of CPU time.
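For the curious, one of the read-only consumers looks roughly like the following slapd.conf fragment (DNs, hostnames and credentials are placeholders, and the ARGON2 module name can vary by build):

```
database mdb
suffix "dc=example,dc=com"
directory /var/lib/ldap

# pull changes from the regional master; this pod never accepts writes
syncrepl rid=101
  provider=ldap://master.example.com
  type=refreshAndPersist
  retry="30 +"
  searchbase="dc=example,dc=com"
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=secret

# refer any write attempts to the provider
updateref ldap://master.example.com

# ARGON2 hashing for new passwords (module is argon2 or pw-argon2
# depending on the build)
moduleload argon2
password-hash {ARGON2}
```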


Using OpenLDAP this way is a bit avant-garde, so I think there should be a working group, or maybe a separate list for folks running OpenLDAP with MDB in containers.
 
Best,

-- 
Alex

C.

On Fri, Oct 27, 2023 at 6:11 PM Alejandro Imass <aimass@yabarana.com> wrote:
>
> Hi there!
>
> We are working on a new installation and decided to try something new.
>
> In the past I would have gone with multi-master behind an LDAP load balancer, but after reading and researching more and more on MDB, we decided to try to integrate OpenLDAP into our current CI/CD pipelines using K8s.
>
> What we have tried so far, and it seems to work, is to initialize common persistent storage and then an auto-scaling group that shares that common drive. Each pod has as many threads as virtual CPUs it may have, and none of the pods can write, except a dedicated write pod (single instance) with multiple threads for writing.
>
> Is there anything else we are missing here? Any experience scaling OpenLDAP with Kubernetes or other container technology?
>
> Thank you in advance for any comments, pointers or recommendations!
>
> --
> Alex