My company maintains an OpenLDAP server which stores the information of all employees. All internal company systems authenticate against it when users log in.
My department is responsible for software development/testing and is divided into many teams. I want to add the employees of my department to their corresponding teams in OpenLDAP so that I can manage user permissions based on teams in Jira/Confluence/Gerrit/GitLab/SVN/Jenkins and so on. However, I have no permission to add teams or groups to the company OpenLDAP server.
My plan is to:
1. Set up a new OpenLDAP server inside my department.
2. Synchronize the necessary user data from the company OpenLDAP server to the department OpenLDAP server.
3. Create groups in the department OpenLDAP server.
4. Add users to the corresponding groups in the department OpenLDAP server.
5. Have Jira/Confluence/Gerrit/GitLab/SVN/Jenkins authenticate with the department OpenLDAP server instead of the company one.
How can I configure OpenLDAP to achieve this? I have googled for two days about replication/meta-directories, but still have no idea.
BTW, I know Jira has similar functionality and can authenticate for Confluence, but Jira cannot authenticate for other software such as Gerrit/GitLab/SVN/Jenkins.
Any help is appreciated.
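One way the plan above could be implemented is to make the department server a read-only syncrepl consumer of the company directory (step 2) and keep the groups in a separate, locally writable database (steps 3-4). This assumes you have at least read access to the user subtree on the company server and that the company server runs the syncprov overlay; all suffixes, DNs, paths, and credentials below are placeholders. A slapd.conf-style sketch:

```
# Database 1: read-only replica of the company users subtree.
database  mdb
suffix    "ou=people,dc=company,dc=example"
directory /var/lib/ldap/people

# Periodically pull user entries from the company server
# (refreshOnly polls every hour; interval is dd:hh:mm:ss).
syncrepl rid=001
  provider=ldap://company-ldap.example.com
  type=refreshOnly
  interval=00:01:00:00
  searchbase="ou=people,dc=company,dc=example"
  scope=sub
  bindmethod=simple
  binddn="cn=reader,dc=company,dc=example"
  credentials=secret

# Database 2: local, writable database for department groups.
database  mdb
suffix    "ou=groups,dc=dept,dc=example"
rootdn    "cn=admin,ou=groups,dc=dept,dc=example"
rootpw    secret
directory /var/lib/ldap/groups
```

If the company server does not provide syncprov, a scheduled ldapsearch export loaded with slapadd is a cruder alternative. Jira/Confluence/Gerrit/etc. would then point at the department server, with group lookups based under the local groups suffix.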
We are seeing quite strange behaviour in which a slapd server stops
processing connections for some tens of seconds while a single thread
runs at 100% on a single CPU and all other CPUs are almost idle.
When the problem arises there is no significant iowait or disk I/O (and
no swapping; that is disabled). Context switches drop to near zero (from
some tens of thousands to some hundreds). Load average is almost always
The server has 32G of RAM and 4 HT processors, and is running
openldap-2.4.54 in mirror mode (but no delta replication) using the mdb
backend. The same behaviour was also seen with 2.4.53. OpenLDAP is the
only service running on it, apart from SSH and some monitoring tools.
Database maxsize is 25G; around 17G is used.
I'm attaching a redacted configuration of the main server (the secondary
one is the same, with IDs swapped for mirror-mode use).
Most of the time it works just fine, processing up to a few thousand
read queries per second along with some tens of writes per second.
Connections are managed by HAProxy, which sends them to this server by
default (it is used as the main node). Often these stops are short (around 10
seconds) and we don't lose connections, but when the problem arises and
lasts long enough, HAProxy switches to the second node and we get
downtime. Staying on the secondary node, we see the same behaviour.
The problem manifests itself without any periodicity, and looking at the
number of connections before it we could not see any usage peak. We tried
to strace the slapd threads during the problem, and they seem blocked on a
mutex, waiting for the one running at 100% (on a single CPU, in user time).
I'm attaching top results from during one of these events.
From the behaviour I was suspecting (just a wild and uninformed guess)
some indexing issue blocking all access.
We tried changing tool-threads to 4 because I found it cited in some
examples as related to the threads used for indexing, but the change had no
effect. Re-reading the latest version of the man page, if I understand it
correctly, it is effective only for slapadd and similar tools.
So a first question is: is there any other configuration parameter
related to indexing that I can try?
Anyway, I'm not sure there actually is an indexing issue (the indexes are
quite basic). I suspected it because there are a lot of writes, and
there is no strace activity during the stall. Should I look somewhere else?
Any suggestion on further checks or configuration changes would be more
than welcome.
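One check that may narrow this down: while the stall is happening, capture a backtrace of the spinning thread to see whether it is inside index maintenance, syncrepl processing, or something else entirely. A sketch, assuming gdb and slapd debug symbols are installed on the box:

```shell
# Identify the busy thread (LWP id) by per-thread CPU usage.
top -H -b -n 1 -p "$(pidof slapd)" | head -n 20

# Dump backtraces of every slapd thread during the stall; the thread
# running at 100% should show the function it is spinning in, and the
# blocked threads should show the mutex they are waiting on.
gdb -p "$(pidof slapd)" -batch -ex "thread apply all bt" > slapd-stall-bt.txt
```

Attaching gdb briefly pauses the process, so this is best done on the node HAProxy has already failed away from.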
The latest version of Symas OpenLDAP for Linux, 2.4.56-1, is now available.
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
I have migrated from the HDB to the MDB backend and I am seeing higher CPU usage on my MDB OpenLDAP consumers. Has anyone else seen the same?
Testing in my staging environment showed MDB using the same amount of CPU as HDB or less, but now with real traffic and a large dataset I see sustained high CPU utilization.
My production environment has the following specs:
6 consumer servers with 8vCPU x 16G RAM
openldap version 2.4.45
Syncrepl enabled (with a single OpenLDAP provider server which is also MDB and has no issues and no high CPU).
The database has ~230K users.
data.mdb is about 1.8G in size.
MDB database directives include:
olcDbCheckpoint: 102400 10
plus the following indexes; the rest are defaults:
olcDbIndex: businessCategory eq
olcDbIndex: cn eq,sub
olcDbIndex: description eq
olcDbIndex: displayName eq,sub
olcDbIndex: entryCSN eq
olcDbIndex: entryUUID eq
olcDbIndex: gidNumber eq
olcDbIndex: givenName eq,sub
olcDbIndex: mail eq
olcDbIndex: member eq
olcDbIndex: memberOf eq
olcDbIndex: memberUid eq
olcDbIndex: objectClass pres,eq
olcDbIndex: sn eq,sub
olcDbIndex: uid eq,sub
olcDbIndex: uidNumber eq
olcDbIndex: uniqueMember eq
These consumer servers are used for reads only.
The initial sync with the provider is ok but once the consumers are actively handling read requests, CPU jumps to 60% usage on average.
Our HDB consumers had half the resources (4vCPU and 8GB RAM) and less than half the CPU usage (average of 25% utilization).
I have tested adding other MDB directives (writemap, mapasync, nordahead) but cannot get CPU utilization to come down close to what we see with the HDB backend.
I have also load tested in my stage environment and was unable to reproduce (MDB generally utilized the same or less resources than HDB, but never double).
There has been no change in the data or traffic since the migration. We have also reverted some servers back to HDB and then back to MDB to confirm the high utilization.
Has anyone else come across this with MDB and if so, were you able to alleviate CPU utilization? I can provide more details if needed. Any input welcome.
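One thing that may be worth checking on a hot consumer is what the MDB environment itself looks like, e.g. whether long-lived read transactions are pinning old pages and bloating the freelist (which makes writes more expensive). A sketch using the mdb_stat tool that ships with LMDB; the database path is an assumption:

```shell
# Environment info (map size, max/used readers) plus freelist status.
mdb_stat -e -f /var/lib/ldap

# Reader table: stale read transactions show up here and can pin
# old pages, inflating the freelist that every write must scan.
mdb_stat -r /var/lib/ldap
```

It may also be worth noting that back-hdb serves many reads from its in-memory entry cache, while back-mdb decodes entries from the memory map on access, so some CPU difference on a read-heavy workload is expected, though a doubling seems excessive.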
I've created the following Grafana dashboard for recent OpenLDAP releases
(2.4.49 or later) that obtains data from the MDB backend database in
addition to the normal OpenLDAP stats. It may need adjustment depending on
which database is the primary DB (it assumes DB #1).
We want to reproduce a scenario where a user's OpenLDAP password is expired.
To do that we need to edit the pwdChangedTime attribute and set its
value to a timestamp in the past.
This attribute is not normally editable, and we need to set it manually for our
testing.
Can you please let us know how we can edit the value of pwdChangedTime?
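pwdChangedTime is an operational attribute maintained by slapd, so an ordinary modify is rejected; with OpenLDAP's ldapmodify you can usually set it by binding as the rootdn and sending the Relax Rules control (-e relax). A sketch, where the server URI, DNs, and timestamp are all placeholders:

```shell
# Set pwdChangedTime to a past GeneralizedTime value so the password
# policy considers the password expired (DNs below are placeholders).
ldapmodify -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W -e relax <<'EOF'
dn: uid=testuser,ou=people,dc=example,dc=com
changetype: modify
replace: pwdChangedTime
pwdChangedTime: 20150101000000Z
EOF
```

The value must be in GeneralizedTime format (YYYYMMDDHHMMSSZ), and the bind identity needs to be one the server trusts to relax schema rules (typically the rootdn).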
I am looking to replace an RDBMS with OpenLDAP as the datastore for one of our products. One blocker we have is handling the password migration.
The passwords are stored in hashed format in the RDBMS; however, I am not able to get them migrated.
My goal is to migrate the data from the RDBMS to OpenLDAP without asking the end users to reset or change their passwords after the migration.
1. Is there any way to intercept the LDAP bind verification and plug in my own logic?
2. Is there any way to modify or customize the password hash calculation that OpenLDAP uses during an LDAP bind?
3. Any other suggestions?
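On point 2: slapd dispatches password verification on the scheme prefix of userPassword, so if the RDBMS hashes are already in a scheme OpenLDAP understands ({CRYPT} for crypt(3) hashes, {SSHA}, {MD5}, or schemes from the contrib pw-sha2/pw-pbkdf2 modules), they can be imported verbatim and simple binds will keep working with the old passwords. A sketch assuming crypt(3) SHA-512 hashes; the server URI, DN, and hash value are placeholders:

```shell
# Import an existing crypt(3) SHA-512 hash verbatim; the {CRYPT}
# scheme prefix tells slapd to verify binds with crypt(3).
ldapmodify -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W <<'EOF'
dn: uid=jdoe,ou=people,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {CRYPT}$6$saltsalt$placeholderhashvalue
EOF
```

{CRYPT} requires slapd to be built with crypt(3) support. If the RDBMS scheme has no OpenLDAP counterpart, the usual fallbacks for point 1 are a custom password-scheme module (loaded via moduleload, as the contrib passwd modules are) or proxying binds to a service that still checks against the old store.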