Consider the following assumptions:
- OpenLDAP version 2.4.51
- attributes objectClass and abc are indexed based on equality
- the EQUALITY of attribute abc is based on distinguishedNameMatch
- The database contains roughly 2 million entries
- 2 entries have attribute abc with the DN value cn=foo,dc=bar and objectClass=someClass
- 2 entries have attribute abc with the DN value cn=bar,dc=baz and objectClass=someClass
Now, the issue started with very slow search performance using objectClass=someClass and abc=cn=foo,dc=bar as filter criteria. Some debugging seems to indicate that the objectClass filter returns roughly 2 million entries as candidates. One would expect the second filter to return only the 2 potential candidates from the abc index, or at least a subset of the whole database, but this is not the case: the second filter also returns nearly all of the database's entries as potential candidates, causing very slow query performance. Interestingly, this only occurs when attribute abc has the value cn=foo,dc=bar; for the entry whose abc value is cn=bar,dc=baz the query returns immediately. In both cases the actual matching entries are returned immediately, but for the problematic search "(&(objectClass=someClass)(abc=cn=foo,dc=bar))" the completion of the search takes a long time (around 15 seconds, to be precise).
The issue started suddenly and wasn't a degradation of query performance over time.
A few things I have tried:
- Rebuilt the whole database from scratch
- Reindexed the existing database
- Tested with both bdb and mdb as backends
- Increased cache sizes for bdb to hold the whole database in cache
- For bdb, adjusted the page size of the indexes according to the suggestions of db_tuner
- Changed the order of the filters
None of these made any difference. At the moment, there does not seem to be any good options to try. Any ideas or help would be greatly appreciated!
This post outlines a few changes to LMDB I had to make for a specific use case. I'd like to see those changes upstream, but I understand that they may not be relevant for e.g. OpenLDAP.
The use case is multiple databases on disks with long-running, large write transactions.
1. Option to not use custom memory allocator/page pool
LMDB has a custom malloc() implementation that re-uses pages (me_dpages). I understand that this improves performance a bit (depending on the malloc implementation), but there should at least be an option to not do that (for many reasons). I would even make not using it the default.
2. Large transactions and spilling
In a large write transaction, LMDB will by default use a lot of memory (512 MiB) which won't get freed when the transaction commits (see 1.). With many databases, this adds up to a lot of memory that never gets freed.
Alternatively, one can use MDB_WRITEMAP, but (i) by default Linux isn't tuned to delay writing pages to disk and (ii) before commit LMDB has to clear a dirty bit, so each page is written twice.
Both problems could be addressed by making the point at which pages get spilled configurable (mt_dirty_room, currently derived from MDB_IDL_UM_MAX) and by reducing the default non-spill memory amount, at least for the MDB_WRITEMAP case. If this memory amount is low, mt_spill_pgs gets sorted often, so it may need to be converted to a different data structure (e.g. a red-black tree).
3. LMDB causes crashes if database is corrupted
If the database is corrupted, it can cause the application to crash. I have fixed those cases as they (randomly) occurred; properly fixing this would probably be best done with some fuzzing.
4. Allow LMDB to reside on a device
I used dm-cache to improve LMDB read performance. It needed a bit of adjustment to obtain the correct size of the device via the BLKGETSIZE64 ioctl.
I’ve fixed those issues w.r.t. my application. If there is interest in any of those application specific changes, I’ll clean them up and post them.
Hi all, I'm running version 2.4.49 on Ubuntu 20.04. I've been unable to add
the olcTLSCipherSuite configuration attribute.
# ldapmodify -H ldapi:// -Y EXTERNAL -f set-ciphersuite.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
ldap_modify: Other (e.g., implementation specific) error (80)
set-ciphersuite.ldif contains the following:
I was able to successfully configure (and confirmed working) TLS by setting
the following attributes:
and was just looking to limit which ciphers would be offered.
I've found several discussions (here, on Stack Overflow, etc.) that mention
this error, but those discussions concerned the latter TLS attributes
(which I had no problem adding), not the olcTLSCipherSuite attribute.
They also pointed to file permissions on the certificate files being the
issue, which I've confirmed is not the case here. I would be grateful if
anyone could point me in the right direction.
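One hedged guess, since this is a common cause of err=80 on this exact attribute: Debian/Ubuntu build slapd against GnuTLS, so olcTLSCipherSuite must be a GnuTLS priority string, and an OpenSSL-style cipher list is rejected at modify time. If that applies, an LDIF along these lines (the priority string is just an example) would be accepted:

```ldif
dn: cn=config
changetype: modify
replace: olcTLSCipherSuite
olcTLSCipherSuite: SECURE256:-VERS-TLS-ALL:+VERS-TLS1.2:+VERS-TLS1.3
```

`gnutls-cli --priority 'SECURE256:...' -l` can be used to check what a given priority string expands to before applying it.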
I understood from the slapd-ldap(5) description of "idle-timeout" that cached
connections towards the remote LDAP server would be automatically dropped after
the configured idle time.
Problem: cached connections that are idle do not get dropped.
(1) Is this expected?
(2) Are idle connections kept due to a limitation in the implementation,
i.e. when a connection is idle, back-ldap has no trigger that could be used
to drop it?
While experimenting with this, it seems that the idle timeout is only checked
when there is new activity on the cached connection, i.e. the connection needs
to become active before the idle timeout is checked. If the connection just
remains idle, nothing happens.
I'm trying to study the timeout handling in back-ldap code, and I believe I
found relevant code at the end of ldap_back_getconn() in bind.c. It will
eventually trigger unbind and disconnect, but only when new activity happens
after the idle period is reached. I did not find other paths that could
trigger unbind of cached connection.
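For reference, the configuration being tested looks something like this in slapd.conf terms, per slapd-ldap(5) (suffix and URI are placeholders):

```
database     ldap
suffix       "dc=example,dc=com"
uri          "ldap://remote.example.com"
idle-timeout 30
```

The observation above would mean the 30-second timeout is only evaluated lazily, when the next operation tries to reuse the cached connection.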
So management is insisting that we migrate our openLDAP systems from on
premise into the cloud <sigh>. Specifically, AWS behind one of their load
balancers.
However, we currently rely upon some level of IP address based access
control to distinguish between on-campus and off-campus clients. The
Amazon load balancers do client NAT, so the back end servers have no
idea who is connecting at the TCP/IP level.
They do support the haproxy in-band protocol for supplying this
information from the load balancer to the server, but that requires
specific support in the server. I don't see any such support in
openldap, or any evidence of past discussion regarding it.
Is this something that would be considered as a possible feature to be
included at some point, or something not desired as part of the code base?
I have a question regarding libldap function ldap_install_tls().
If it fails, is it the right thing to call ldap_unbind_ext() after that?
If we call it, does it mean that ldap_install_tls() made a bind?
Or do we call ldap_install_tls() on the connection that is already bound?
Sorry if this information is available somewhere, but I failed to find it.
The only thing I found is that the OpenLDAP server calls ldap_unbind_ext() in
case of failure, but maybe I'm missing something...
>>> "Dr. Ogg" <ogg(a)sr375.com> wrote on 18.11.2020 at 17:55 in message
> for reference.
> From: Howard Chu <hyc(a)symas.com>
> Date: Wednesday, November 18, 2020 at 8:51 AM
> To: Paul B. Henson <henson(a)acm.org>, openldap‑technical(a)openldap.org
> Subject: Re: HAProxy protocol support?
> Paul B. Henson wrote:
>> So management is insisting that we migrate our openLDAP systems from on
>> premise into the cloud <sigh>. Specifically, AWS behind one of their load
>> balancers.
>>
>> However, we currently rely upon some level of IP address based access
>> control to distinguish between on-campus and off-campus clients. The Amazon
>> load balancers do client NAT, so the back end servers have no idea who is
>> connecting at the TCP/IP level.
>>
>> They do support the haproxy in band protocol for supplying this information
>> from the load balancer to the server, but that requires specific support
>> from the server to do. I don't see any such support in openldap or any
>> evidence of past discussion regarding it.
>>
>> Is this something that would be considered as a possible feature to be
>> included at some point, or something not desired as part of the code base?
> Depends on what that feature actually looks like. Feel free to submit a
> on the ‑devel mailing list, including background info on what HAproxy
> looks like, and what exact behaviors you want it to provide.
I wonder: would it be possible to use a specific named bind for on-campus
hosts, and use the bind name to control further access?
> ‑‑ Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
I have a proxy application acting as an NTLM server that supports an NTLM
handshake with web-based clients.
If using NTLMv1, sending the NTLM credential blob to an Active Directory
server over LDAP using the OpenLDAP client works.
OpenLDAP client version: 2.4.32.
Basically, I'm just taking the NTLM response from the client's NTLM Type 3
message and sending it over LDAP.
However, using NTLMv2, Active Directory always returns invalid credentials
even though the user name and password the client entered are good. The
LDAP bind succeeds with NTLMv1.
Can or should this work with NTLMv2? It seems that when EPA and a MIC are
present in the client's NTLM Type 3 message, the LDAP exchange does not
work. I guess that may be an Active Directory issue, but I wanted to check
whether the experts here think it should work.
Thanks for your review.
Our code gets references to 8 or 16 byte structs by casting pointers into
1. In a database with no DUPs and 8-byte (u64) keys, can we expect the
corresponding value to have alignment 8?
2. In a database with DUPs and 8-byte (u64) keys, if a DUP value (stored as
a key internally) is 16 bytes, does that mean its alignment is 16? Does the
8-byte key size impact the alignment of its DUPs as it does in 1.?