I was thinking of putting read-only slapd instances in a container environment so other tasks can query their data. Until now I have had replication only between VMs.
To be more flexible I thought of using stateless containers. These are the caveats I can think of:
- replication IDs: say I spawn another instance; I need a new replication ID to get updates from the master. But if the task is killed, should I keep this replication ID? Or is it better to always use a random unique replication ID whenever a slapd container is launched? Maybe use the launch date/time (date +'%g%H%M%S%2N') as the rid? Does this cause issues on the master? What if I test by launching instances and the master ends up thinking there are a hundred slaves that are no longer connecting?
- updating of a newly spawned slapd instance: when the new task is launched, its database is not up to date; can I prevent connections to slapd until it is fully synced? Say I have user IDs in slapd; when a new instance is launched, a user may not be available there yet. Clients requesting this data will not get it, and that user could appear 'offline' until that specific slapd instance is fully updated.
- to prevent syncing lots of records: can I just copy the data in /var/lib/ldap from any running instance into the container's default image? Or does it contain unique IDs that prevent the same data from being run multiple times? Any advice on how to do this?
- doing some /var/lib/ldap cleanup: I am cleaning with db_checkpoint -1 -h /var/lib/ldap and db_archive -d. Is there an option for slapd to initiate this itself?
- keep a uniform configuration environment, or better a few different slapd instances? In my current environment the VM slave slapds only sync the data from the master that the master's ACLs allow access to. As a result, on some VMs the LDAP database is quite small and on others it is larger. I am thinking of giving the container slapd instances all the data and just limiting client access via ACLs. But this means a lot more indexes on each slapd.
What else am I missing?
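To illustrate the all-data-plus-ACLs idea, something like this is what I have in mind (a rough sketch only; the DNs are made-up examples, not from my setup):

```
# Consumer replicates everything, but ordinary clients only get read
# access to the subtree their group is entitled to.
access to dn.subtree="ou=internal,dc=example,dc=com"
        by group.exact="cn=internal-clients,ou=groups,dc=example,dc=com" read
        by * none
access to *
        by * read
```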
Ok, so a long rid is not going to work:

modifying entry "olcDatabase={2}hdb,cn=config"
ldap_modify: Other (e.g., implementation specific) error (80)
        additional info: Error: parse_syncrepl_line: syncrepl id 1911533132 is out of range [0..999]
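Given that limit, a launch-time rid would have to be folded into three digits. A rough sketch of what I mean (collisions between containers started in the same instant are still possible, but the rid only has to be unique within one consumer's own configuration):

```shell
# Derive a rid in slapd's allowed 0..999 range from the launch time.
# GNU date is assumed; %N prints nanoseconds, so this keeps only the
# last three digits of the current nanosecond counter.
rid=$(( $(date +%s%N) % 1000 ))
printf 'rid=%03d\n' "$rid"
```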
-----Original Message-----
From: Marc Roos
Sent: Saturday, 10 August 2019 1:24
To: openldap-technical@openldap.org
Subject: Openldap in container advice, how have you done it?
On 8/10/19 1:23 AM, Marc Roos wrote:
I was thinking of putting read-only slapd instances
I assume you want to implement read-only consumer replicas.
- replication id's
say I spawn another instance; I need a new replication ID to get updates from the master.
Are you talking about the serverID?
serverID is not needed on a read-only consumer. Just leave it out.
Ciao, Michael.
--On Saturday, August 10, 2019 6:54 PM +0200 Michael Ströder michael@stroeder.com wrote:
Are you talking about the serverID?
serverID is not needed on a read-only consumer. Just leave it out.
He's talking about the replication ID (rid), and the value in his post is clearly out of bounds. The slapd.conf/slapd-config man pages clearly document the allowed range that can be used for a rid.
rid identifies the current syncrepl directive within the replication consumer site. It is a non-negative integer not greater than 999 (limited to three decimal digits).
--Quanah
--
Quanah Gibson-Mount Product Architect Symas Corporation Packaged, certified, and supported LDAP solutions powered by OpenLDAP: http://www.symas.com
On Sat, Aug 10, 2019 at 01:23:41AM +0200, Marc Roos wrote:
- updating of a newly spawned slapd instance
When the new task is launched, it is not up to date with its database, can I prevent connections to the slapd until it is fully synced?
This is not implemented at this time. See ITS#7616 https://openldap.org/its/?findid=7616.
- to prevent lots of records syncing
Can I just copy the data of /var/lib/ldap of any running instance to the container default image?
Maybe, if they are all running identical software and configuration. The more robust way to do it is slapcat the database on a known-good system, and slapadd it on the new one you're bringing up. In current versions it is safe to use slapcat (but not slapadd) while slapd is running.
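Roughly like this (a sketch only; the database number 1 and the paths are examples, and the slapadd step must happen before slapd is started in the new container):

```shell
# On a known-good consumer (safe while slapd is running):
slapcat -n 1 -l /tmp/seed.ldif

# In the new container, BEFORE slapd starts: load the dump, then fix
# ownership so slapd can open the database files.
slapadd -n 1 -l /tmp/seed.ldif
chown -R ldap:ldap /var/lib/ldap
```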
- doing some /var/lib/ldap cleanup
I am cleaning with db_checkpoint -1 -h /var/lib/ldap, and db_archive -d. Is there an option slapd can initiate this?
See https://www.openldap.org/doc/admin24/maintenance.html.
Checkpointing can be configured using the 'checkpoint' directive in slapd.conf (olcDbCheckpoint with slapd-config).
The DB_CONFIG flag DB_LOG_AUTOREMOVE causes transaction logs to be cleaned up automatically.
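For example (the values are purely illustrative):

```
# slapd.conf: checkpoint after 1024 KB written or every 15 minutes
checkpoint 1024 15

# /var/lib/ldap/DB_CONFIG: remove old transaction logs automatically
set_flags DB_LOG_AUTOREMOVE
```

(On BDB 4.7 and later the DB_CONFIG spelling is set_log_config DB_LOG_AUTO_REMOVE.)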
But please consider migrating to the LMDB backend, which does not require any such maintenance.
On Sat, Aug 10, 2019 at 01:23:41AM +0200, Marc Roos wrote:
- updating of a newly spawned slapd instance
When the new task is launched, it is not up to date with its database, can I prevent connections to the slapd until it is fully synced?
This is not implemented at this time. See ITS#7616 https://openldap.org/its/?findid=7616.
Hmm, interesting. Maybe we could differentiate between a recent startup and getting up to date with the provider, as opposed to blocking client requests with LDAP_BUSY during a 'normal' sync.
- to prevent lots of records syncing
Can I just copy the data of /var/lib/ldap of any running instance to the container default image?
Maybe, if they are all running identical software and configuration. The more robust way to do it is slapcat the database on a known-good system, and slapadd it on the new one you're bringing up. In current versions it is safe to use slapcat (but not slapadd) while slapd is running.
Yes, I am doing this now when creating the Docker image.
- doing some /var/lib/ldap cleanup
I am cleaning with db_checkpoint -1 -h /var/lib/ldap, and db_archive -d. Is there an option slapd can initiate this?
See https://www.openldap.org/doc/admin24/maintenance.html.
Checkpointing can be configured using the 'checkpoint' directive (with slapd.conf, olcDbCheckpoint with slapd-config).
The DB_CONFIG flag DB_LOG_AUTOREMOVE causes transaction logs to be cleaned up automatically.
Thanks!
But please consider migrating to the LMDB backend, which does not require any such maintenance.
I will once I have finished migrating the CentOS 7 VMs to CentOS 7 containers; I do not want to make too many changes at once.