Fresh install changing the hdb to mdb
by Marc Roos
This is the default file that RHEL/CentOS ships in its slapd.d dir for
the database. I thought I would just remove this one and put the mdb
one in its place, and that seems to work, but I don't know about this
entryUUID. Or can I do this with ldapmodify?
[@53386e4b0025 cn=config]# cat /tmp/olcDatabase\=\{2\}hdb.ldif
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 4f2ac1fc
dn: olcDatabase={2}hdb
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=my-domain,dc=com
olcRootDN: cn=Manager,dc=my-domain,dc=com
olcDbIndex: objectClass eq,pres
olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub
structuralObjectClass: olcHdbConfig
entryUUID: 537b0adc-5476-1039-9bf9-1dc025e1859d
creatorsName: cn=config
createTimestamp: 20190816133433Z
entryCSN: 20190816133433.095410Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20190816133433Z
[@53386e4b0025 cn=config]# cat olcDatabase\=\{2\}mdb.ldif
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 b6a274bd
dn: olcDatabase={2}mdb
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=my-domain,dc=com
olcRootDN: cn=Manager,dc=my-domain,dc=com
olcDbIndex: objectClass eq,pres
olcDbIndex: ou,cn,mail,surname,givenname eq,pres,sub
structuralObjectClass: olcMdbConfig
entryUUID: 537b0adc-5476-1039-9bf9-1dc025e1859d
creatorsName: cn=config
createTimestamp: 20190816133433Z
entryCSN: 20190816133433.095410Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20190816133433Z
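For what it's worth, ldapmodify cannot change an entry's structural
object class, so the {2}hdb entry cannot be modified into {2}mdb in
place. A minimal offline sketch of the swap described above
(hand-editing slapd.d is exactly what the AUTO-GENERATED header warns
against, so treat this as unsupported; paths assume the stock CentOS
package, and slapadd regenerates the data-side metadata, so reusing
the old entryUUID in the config entry is not a requirement):

systemctl stop slapd
slapcat -n 2 -l /tmp/data.ldif        # export the data first

# replace the hdb definition with the mdb one (as already done above)
mv '/etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb.ldif' /tmp/

# wipe the old BDB files and reload into the new backend
rm -f /var/lib/ldap/*
slapadd -q -n 2 -l /tmp/data.ldif
chown -R ldap:ldap /var/lib/ldap
systemctl start slapd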
Initial syncreplication details
by Marc Roos
Am I correct to understand from this page [0] that the consumer gets
its 'new' contextCSN from the slapcat import (I saw it in the file),
and that it will fetch all changes since that date at startup?
The replication id has no influence. So if I stopped slapd and
imported the same old slapcat file again, it would again receive all
new data since that contextCSN, regardless of the same rid being used
with the provider?
The idea behind this is that if you create a container from a default
slapcat/slapadd import, then every time the task is stopped and
started it goes back to its initial container state (unless you make
it stateful, of course).
I am not adding that much data, so I could make a new default image
every week or so; this would be used every time a container is
launched (which only happens in a failover situation).
[0]
https://www.openldap.org/doc/admin24/replication.html
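A quick way to verify where a consumer will resume from, assuming the
suffix used elsewhere in these threads and an export file produced by
slapcat:

# the contextCSN is stored on the suffix entry and appears in the export
grep '^contextCSN:' /tmp/data.ldif

# the same value can be read from a running server
ldapsearch -x -LLL -H ldap://localhost -s base -b 'dc=my-domain,dc=com' contextCSN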
Environment variable in slapd config
by Marc Roos
Is it possible to reference an environment variable in olcSyncrepl:
{0}rid= ?
--On Saturday, August 10, 2019 6:54 PM +0200 Michael Ströder
<michael(a)stroeder.com> wrote:
> Are you talking about the serverID?
>
> serverID is not needed on a read-only consumer. Just leave it out.
He's talking about replication ID (rid), and it's clearly out of bounds
in his post. The slapd.conf/slapd-config man pages clearly document the
allowed range that can be used for a RID.
   rid identifies the current syncrepl directive within the
   replication consumer site. It is a non-negative integer not
   greater than 999 (limited to three decimal digits).
--Quanah
--
RE: Environment variable in slapd config
by Marc Roos
I guess I am missing some info. Is there any animosity between
OpenLDAP and Red Hat? My general perspective is that Red Hat has made
a core business of supporting an enterprise Linux OS, so I would hope
that, if the shit hits the fan, you can rely on their paid services
backed by the knowledge of their 10k+ employees. The people who work
on Ceph are doing a great job and look like very competent people. And
now that they are owned by IBM I like them even more. :) I have always
held IBM's R&D in high regard.
-----Original Message-----
Subject: RE: Environment variable in slapd config
--On Friday, August 16, 2019 5:17 PM +0200 Marc Roos
<M.Roos(a)f1-outsourcing.eu> wrote:
> I am more fan of Centos because then I can fall back on RedHat
> support, especially for production environments.
That's the most laughable statement (in relation to OpenLDAP at least)
that I've heard in years. Thanks for the morning chuckle.
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
RE: Make slapadd faster?
by Marc Roos
Ok ok I will look at this mdb again.
-----Original Message-----
From: Quanah Gibson-Mount [mailto:quanah@symas.com]
Subject: Re: Make slapadd faster?
--On Friday, August 16, 2019 10:14 AM +0200 Marc Roos
<M.Roos(a)f1-outsourcing.eu> wrote:
>
> I know you can disable some checks to make slapadd faster. But in my
> test VM with limited disk IOPS it looks like disk I/O is the problem.
> I am not sure how slapadd adds entries; I guess one at a time? You
> could get a significant improvement by reading and writing more
> entries at once. And if such a batch transaction fails, you can
> always fall back to submitting the batch's entries one by one to
> identify which one is failing.
Stop using back-hdb, it's significantly slower than back-mdb. If
you're going to insist on using back-hdb, then you *must* have a
well-tuned DB_CONFIG before you import.
For any backend, ensure you have tool-threads set appropriately (for
back-mdb, values > 2 are ignored).
Use the -q flag if you know the LDIF is good.
slapadd has been heavily profiled and tested.
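For example, a typical fast import along those lines (database number
and paths are placeholders; -q skips schema and consistency checks, so
only use it on LDIF you trust):

# cn=config: olcToolThreads: 2   (slapd.conf: tool-threads 2)
slapadd -q -n 2 -F /etc/openldap/slapd.d -l /tmp/data.ldif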
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
Make slapadd faster?
by Marc Roos
I know you can disable some checks to make slapadd faster. But in my
test VM with limited disk IOPS it looks like disk I/O is the problem.
I am not sure how slapadd adds entries; I guess one at a time? You
could get a significant improvement by reading and writing more
entries at once. And if such a batch transaction fails, you can always
fall back to submitting the batch's entries one by one to identify
which one is failing.
Openldap in container advice, how have you done it?
by Marc Roos
I was thinking of putting read-only slapds in a container environment
so other tasks can query their data. Up until now I have had
replication only between VMs.
To be more flexible I thought of using stateless containers. Things
that could be caveats:
- replication id's
Say I spawn another instance; I need a new replication id to get
updates from the master. But if the task is killed, should I keep this
replication id? Or is it better to always use a random unique
replication id whenever a slapd container is launched? Maybe use the
launch date/time (date +'%g%H%M%S%2N') as the rid? Would this cause
issues with the master? What if I test by launching instances and the
master ends up thinking there are a hundred slaves that are no longer
connecting?
- updating of a newly spawned slapd instance
When the new task is launched its database is not up to date; can I
prevent connections to the slapd until it is fully synced? (See the
readiness sketch after this list.)
Say I have user ids in slapd; it could be that when launching a new
instance this user is not available yet. When clients request this
data they do not get it, and this user could be 'offline' until that
specific instance of slapd is fully updated.
- to prevent lots of records syncing
Can I just copy the data in /var/lib/ldap from any running instance
into the container's default image? Or does it contain some unique ids
that prevent this data from being used multiple times? Is there any
advice on how to do this?
- doing some /var/lib/ldap cleanup
I am cleaning with db_checkpoint -1 -h /var/lib/ldap and db_archive
-d. Is there an option so slapd can initiate this itself? (See the
checkpoint sketch after this list.)
- keep a uniform configuration environment, or better a few different
slapd instances?
In my current environment the VM slave slapds only sync the data from
the master that the master's ACLs allow access to. That results in the
LDAP database being quite small on some VMs and larger on others. For
the container slapd instances I am thinking of having all the data and
just limiting client access via the ACLs. But this means a lot more
indexes on the slapd.
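Two sketches for the points above. First, a possible readiness probe
that compares the consumer's contextCSN with the provider's before
letting clients in (hostnames and suffix are placeholders, and with
multiple providers there would be several contextCSN values to
compare):

#!/bin/sh
SUFFIX='dc=my-domain,dc=com'
p=$(ldapsearch -x -LLL -H ldap://provider -s base -b "$SUFFIX" contextCSN | grep '^contextCSN:')
c=$(ldapsearch -x -LLL -H ldap://localhost -s base -b "$SUFFIX" contextCSN | grep '^contextCSN:')
# report healthy only once the consumer has caught up
[ -n "$c" ] && [ "$p" = "$c" ]

Second, on the cleanup point: back-hdb can checkpoint from within
slapd via the checkpoint directive (olcDbCheckpoint in cn=config),
which takes a kilobyte count and a minute interval, so the manual
db_checkpoint run becomes unnecessary; BDB log-file removal is
normally handled with "set_flags DB_LOG_AUTOREMOVE" in DB_CONFIG
rather than db_archive. For example:

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcDbCheckpoint
olcDbCheckpoint: 1024 15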
What else am I missing?
RE: Antw: RE: Openldap in container advice, how have you done it?
by Marc Roos
The IP address is known when I start the container; that would mean I
need to sed some ready-made LDIF and import it into slapd at runtime
(a sketch of that follows below). That would also require the
availability of some secret to be able to import it.
I have prepared the container for LDIF fetching, but it would be nicer
if I could specify something like an environment variable in
olcSyncrepl.
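cn=config itself does not expand environment variables, so templating
the LDIF before import is the usual workaround. A minimal sketch; the
template file name, the @RID@ placeholder, and the SLAPD_RID variable
are hypothetical assumptions:

# syncrepl.ldif.tmpl contains something like:
#   dn: olcDatabase={2}mdb,cn=config
#   changetype: modify
#   add: olcSyncrepl
#   olcSyncrepl: rid=@RID@ provider=ldap://master ...
RID=${SLAPD_RID:?SLAPD_RID must be set in the container environment}
sed "s/@RID@/$RID/" /etc/openldap/syncrepl.ldif.tmpl > /tmp/syncrepl.ldif
# root over ldapi:// with SASL EXTERNAL avoids shipping an extra password
ldapmodify -Y EXTERNAL -H ldapi:/// -f /tmp/syncrepl.ldif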
-----Original Message-----
From: Ulrich Windl [mailto:Ulrich.Windl@rz.uni-regensburg.de]
Sent: Monday, 12 August 2019 8:56
To: Marc Roos
Subject: Antw: RE: Openldap in container advice, how have you done it?
>>> "Marc Roos" <M.Roos(a)f1-outsourcing.eu> schrieb am 10.08.2019 um
14:07 in Nachricht
<"H00000710014b895.1565438831.sx.f1-outsourcing.eu*"@MHS>:
> Ok, so a long rep id is not going to work:
> modifying entry "olcDatabase={2}hdb,cn=config"
> ldap_modify: Other (e.g., implementation specific) error (80)
> additional info: Error: parse_syncrepl_line: syncrepl id
> 1911533132 is out of range [0..999]
Why not derive the ID from some container ID or from the container's IP
address?
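A sketch of that suggestion, assuming a single plain IPv4 address
whose last octet is unique among the consumers (hostname -i can print
several addresses, hence the first-field filter):

# last octet is 0-255, safely inside the allowed 0..999 rid range
RID=$(hostname -i | awk '{print $1}' | awk -F. '{print $4}')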
>
> -----Original Message-----
> From: Marc Roos
> Sent: Saturday, 10 August 2019 1:24
> To: openldap-technical(a)openldap.org
> Subject: Openldap in container advice, how have you done it?
>
> I was thinking of putting read-only slapds in a container environment
> so other tasks can query their data. Up until now I have had
> replication only between VMs.
>
> To be more flexible I thought of using stateless containers. Things
> that could be caveats:
>
> - replication id's
> Say I spawn another instance; I need a new replication id to get
> updates from the master. But if the task is killed, should I keep
> this replication id? Or is it better to always use a random unique
> replication id whenever a slapd container is launched? Maybe use the
> launch date/time (date +'%g%H%M%S%2N') as the rid? Would this cause
> issues with the master? What if I test by launching instances and the
> master ends up thinking there are a hundred slaves that are no longer
> connecting?
>
> - updating of a newly spawned slapd instance
> When the new task is launched its database is not up to date; can I
> prevent connections to the slapd until it is fully synced?
> Say I have user ids in slapd; it could be that when launching a new
> instance this user is not available yet. When clients request this
> data they do not get it, and this user could be 'offline' until that
> specific instance of slapd is fully updated.
>
> - to prevent lots of records syncing
> Can I just copy the data in /var/lib/ldap from any running instance
> into the container's default image? Or does it contain some unique
> ids that prevent this data from being used multiple times? Is there
> any advice on how to do this?
>
> - doing some /var/lib/ldap cleanup
> I am cleaning with db_checkpoint -1 -h /var/lib/ldap and db_archive
> -d. Is there an option so slapd can initiate this itself?
>
> - keep a uniform configuration environment, or better a few different
> slapd instances?
> In my current environment the VM slave slapds only sync the data from
> the master that the master's ACLs allow access to. That results in
> the LDAP database being quite small on some VMs and larger on others.
> For the container slapd instances I am thinking of having all the
> data and just limiting client access via the ACLs. But this means a
> lot more indexes on the slapd.
>
> What else am I missing?
PID file /var/run/openldap/slapd.pid not readable (yet?) after start
by Paul Pathiakis
Hi....
After the previous issue, I went to start up slapd and got the error above.
I don't even know how to address that.
Slapd won't even start. I'm on CentOS 7. :(
systemctl status slapd.service
● slapd.service - OpenLDAP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/slapd.service; enabled; vendor preset: disabled)
Active: failed (Result: timeout) since Wed 2019-08-14 11:34:15 EDT; 2min 7s ago
Docs: man:slapd
man:slapd-config
man:slapd-hdb
man:slapd-mdb
file:///usr/share/doc/openldap-servers/guide.html
Process: 15117 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=0/SUCCESS)
Process: 15102 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)
Main PID: 14277 (code=exited, status=0/SUCCESS)
Aug 14 11:32:45 NewLDAP.hq.boston-engineering.com systemd[1]: Starting OpenLDAP Server Daemon...
Aug 14 11:32:45 NewLDAP.hq.boston-engineering.com runuser[15105]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Aug 14 11:32:45 NewLDAP.hq.boston-engineering.com runuser[15105]: pam_unix(runuser:session): session closed for user ldap
Aug 14 11:32:45 NewLDAP.hq.boston-engineering.com slapd[15117]: @(#) $OpenLDAP: slapd 2.4.44 (Jan 29 2019 17:42:45) $
mockbuild@x86-01.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.44/openlda...s/slapd
Aug 14 11:32:45 NewLDAP.hq.boston-engineering.com systemd[1]: PID file /var/run/openldap/slapd.pid not readable (yet?) after start.
Aug 14 11:34:15 NewLDAP.hq.boston-engineering.com systemd[1]: slapd.service start operation timed out. Terminating.
Aug 14 11:34:15 NewLDAP.hq.boston-engineering.com systemd[1]: Failed to start OpenLDAP Server Daemon.
Aug 14 11:34:15 NewLDAP.hq.boston-engineering.com systemd[1]: Unit slapd.service entered failed state.
Aug 14 11:34:15 NewLDAP.hq.boston-engineering.com systemd[1]: slapd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@NewLDAP openldap]# systemctl start slapd.service
Job for slapd.service failed because a timeout was exceeded. See "systemctl status slapd.service" and "journalctl -xe" for details.
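A few things worth checking for this symptom (paths per the stock
CentOS 7 package; this is a starting point, not a definitive
diagnosis):

# the pid file slapd writes must match what the unit file expects
grep -ri pidfile /etc/openldap/slapd.d/
grep PIDFile /usr/lib/systemd/system/slapd.service

# the directory must exist and be writable by the ldap user
ls -ld /var/run/openldap

# run slapd in the foreground with some debug output to see where it hangs
/usr/sbin/slapd -u ldap -h 'ldap:///' -d 256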
Thank you,
P.