hyc(a)symas.com wrote in ITS#8240:
> Our patch response was too hasty. There is no OpenLDAP bug here, the real
> issue is production binaries being built with asserts enabled instead of
> compiling with -DNDEBUG. That's an issue for packagers and distros to resolve.
> Closing this ITS, not an OpenLDAP bug.
Maybe I missed something. But this is the first time I've heard about -DNDEBUG
being mandatory when compiling binary packages for production use. Does it
have other effects?
And what are general rules for assert statements in OpenLDAP code?
In my own (Python) code assert statements are supposed to be only triggered if
something goes wrong *internally* (type issues etc.). If somebody manages to
trigger an assert statement with invalid input from "outside" I always
consider this to be a serious bug revealing insufficient error handling even
though e.g. web2ldap just logs the exception but won't crash. YMMV, but please
I also wonder whether there are more mandatory rules for building packages and
where I can find them.
Please don't get me wrong: My inquiry is in good faith to avoid unnecessary
ITS based on misunderstanding.
Just some initial thoughts on what a new logging daemon should do for us:
The primary goal - we want to use a binary message format with as few format conversions as possible between log
sender and log processor.
I'm thinking that we use message catalogs; we will need a tool to preprocess every logging
invocation in the source tree and replace each with an integer messageID. So at runtime only
the messageID and the message parameters need to be sent, not any plaintext.
The message catalog will be compiled into the binary. When it performs its "openlog" to talk
to the logging server, it will send the UUID of its catalog. If the logging server doesn't
know this UUID, it will transmit the message catalog to the logging server, before doing
anything else. (It may make more sense just to use a SHA hash here instead of a UUID.)
This way the logging server will work with any version of the binaries, and we don't need
to do special coordination to update message catalogs between revisions. The logging server
will just know that a specific catalog is to be used with a particular logging session.
The message protocol will be length-prefixed. We may even just use DER, since that would
allow us to encode arrays of parameters, and other such stuff.
The logging server will write the log messages to disk/network verbatim, doing no
parsing at all. It may prefix the records with a log session ID, so that a postprocessor
can look up the catalog that belongs to the session, for dumping out as text.
The logging server can store its received catalogs in an LMDB database. The postprocessor
can then look up individual messageIDs in this database, interpolate the parameters, and
dump out in text.
... that's what I have so far. It's a bit worrisome because of the additional moving parts:
message catalog creator, log server, log postprocessor. There's definitely more complexity
here, but most of it is moved out of the runtime hot path, which is the main goal. Suggestions?
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Quanah and Howard,
Thanks for your quick replies! I'm glad to hear there's interest in this. I
think 2.6 is a more realistic target, as I'll need to get my boss to
allocate time for this work amongst other wolfSSL tasks I've been assigned.
Look forward to a merge request in the (hopefully near) future!
On Thu, Feb 25, 2021 at 1:17 PM Quanah Gibson-Mount <quanah(a)symas.com> wrote:
> --On Thursday, February 25, 2021 12:38 PM -0600 Hayden Roche
> <haydenroche5(a)gmail.com> wrote:
> > (thanks JoBbZ). I was also pointed to this
> > issue in your issue tracking system, where a developer (Quanah
> > Gibson-Mount)
> Same person. ;)
> > Is there still interest in getting wolfSSL working with OpenLDAP's latest
> > version and integrated upstream?
> OpenLDAP 2.4 is closed to development. If you want this in for OpenLDAP
> 2.5, you'll need to get the work in ASAP, otherwise it will have to wait
> for 2.6
> Sign up for an account on our gitlab instance: https://git.openldap.org
> Fork a copy of the openldap repo.
> Create a branch for ITS9303 and do the work in that branch
> Push the branch
> Open a merge request for review
> Additionally, you'll need to add an IPR statement to ITS#9303 as
> at <https://www.openldap.org/devel/contributing.html#notice>
> A link to the MR should also be put into the ITS.
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
I'm a software engineer with wolfSSL, which is a fast, lightweight, and
FIPS-certified TLS implementation written in C. wolfSSL offers an OpenSSL
compatibility layer that presents the same API as OpenSSL, but under the
hood, calls into wolfSSL and wolfCrypt (our crypto library) functions. One
of our commercial users recently had us port OpenLDAP to use wolfSSL. With
some modifications to the OpenSSL backend code (primarily in tls_o.c), I
was able to get OpenLDAP 2.4.47 building and (to my knowledge) working with
wolfSSL's OpenSSL compatibility layer. I recently reached out on your IRC
channel to see if there was any interest in supporting wolfSSL as a TLS
backend for OpenLDAP upstream and was directed to this mailing list (thanks
JoBbZ). I was also pointed to this issue in your issue tracking system,
where a developer (Quanah Gibson-Mount) expressed interest in using
Is there still interest in getting wolfSSL working with OpenLDAP's latest
version and integrated upstream? If so, I imagine we'd want to make wolfSSL
a first class citizen among the TLS backends (i.e. rather than using our
OpenSSL compatibility layer and modifying tls_o.c, use wolfSSL's native
functions and create a new tls_w.c). Looking forward to hearing from you.
As a user of slapd-ldap I've bumped into a few corner cases related to handling
retries and timeouts. I think this demonstrates how non-trivial a problem
proxying really is, even if it might seem quite simple to a casual user
at first. While working with a patch for  I was wondering the following:
My use case:
I have many proxies in the network: one per Kubernetes cluster, but large
number of clusters in the network. I'd like to reduce the number of long-
running connections to centralized server to the absolute minimum. The number
of concurrent TCP connections handled by the remote LDAP server is the
bottleneck. Optimally, all connections should be dropped as soon as client
is done with the LDAP query.
Would it be possible to disable all (or only some) caching and retry logic and
instead have the proxy mirror the behavior of the clients and remote server:
(1) Disconnect the client connection when the corresponding remote connection got disconnected
(2) Disconnect the connection to the remote server when the client disconnects
from the proxy (or if remote connection is shared between many clients:
disconnect when last client disconnects)
In other words, delegate the complications back to the remote server and
clients, instead of trying to solve them at the proxy.
Could this simplify the proxy?
What would be the performance implications? In my use case the concurrent TCP
connections towards remote server would reduce, but the number of individual
connections could increase due to (2).
- Idle and connection timeout implementation
- crash if rebinding after retry fails
- retry fails after remote server disconnected
- rebind-as-user credentials lost after retrying remote connection
As usual I'm using the openSUSE Build Service to build openldap2 RPMs. This
works smoothly with 2.4.x.
But building 2.5 branch snapshot fails.
Maybe the OBS compiler options are set pretty strictly, because the build outputs:
[ 147s] cc1: some warnings being treated as errors
[ 147s] make: *** [<builtin>: slapd-watcher.o] Error 1
You can look at the full build log with some warnings here: