hyc(a)symas.com wrote in ITS#8240:
> Our patch response was too hasty. There is no OpenLDAP bug here, the real
> issue is production binaries being built with asserts enabled instead of
> compiling with -DNDEBUG. That's an issue for packagers and distros to resolve.
> Closing this ITS, not an OpenLDAP bug.
Maybe I missed something. But this is the first time I've heard about -DNDEBUG
being mandatory when compiling binary packages for production use. Does it
have other effects?
And what are general rules for assert statements in OpenLDAP code?
In my own (Python) code assert statements are supposed to be only triggered if
something goes wrong *internally* (type issues etc.). If somebody manages to
trigger an assert statement with invalid input from "outside" I always
consider this to be a serious bug revealing insufficient error handling even
though e.g. web2ldap just logs the exception but won't crash. YMMV.
I also wonder whether there are more mandatory rules for building packages and
where I can find them.
Please don't get me wrong: My inquiry is in good faith to avoid unnecessary
ITS based on misunderstanding.
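To make the convention I described above concrete, here is a small Python
sketch (names are made up for illustration): input from "outside" gets
explicit error handling, while assert guards only internal invariants and
disappears under "python -O", the rough analogue of compiling with -DNDEBUG.

```python
import re

class InvalidInputError(ValueError):
    """Raised for bad data coming from 'outside'."""

def parse_port(value):
    # External input: validate explicitly, never with assert.
    if not re.fullmatch(r"[0-9]+", value):
        raise InvalidInputError("not a port number: %r" % value)
    port = int(value)
    if not 0 < port < 65536:
        raise InvalidInputError("port out of range: %d" % port)
    # Internal invariant: by construction port is in range here.
    # This assert can only fire on a programming error, and it is
    # stripped entirely under 'python -O' (the -DNDEBUG analogue).
    assert isinstance(port, int) and 0 < port < 65536
    return port
```

Under this rule, triggering the assert from outside would itself be the bug,
not the assert being enabled.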
Just some initial thoughts on what a new logging daemon should do for us:
The primary goal - we want to use a binary message format with as few format conversions as possible between log
sender and log processor.
I'm thinking that we use message catalogs; we will need a tool to preprocess every logging
invocation in the source tree and replace each with an integer messageID. So at runtime only
the messageID and the message parameters need to be sent, not any plaintext.
The message catalog will be compiled into the binary. When it performs its "openlog" to talk
to the logging server, it will send the UUID of its catalog. If the logging server doesn't
know this UUID, it will transmit the message catalog to the logging server, before doing
anything else. (It may make more sense just to use a SHA hash here instead of a UUID.)
This way the logging server will work with any version of the binaries, and we don't need
to do special coordination to update message catalogs between revisions. The logging server
will just know that a specific catalog is to be used with a particular logging session.
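A rough Python sketch of the openlog handshake, using a SHA-256 digest as the
catalog identifier (the hash alternative mentioned above); the class and
method names are placeholders, and a plain dict stands in for persistent
storage on the server side:

```python
import hashlib
import json

def catalog_digest(catalog):
    """Stable identifier for a message catalog -- the SHA
    alternative to a UUID discussed above."""
    blob = json.dumps(catalog, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

class LogServer:
    def __init__(self):
        self.catalogs = {}   # digest -> catalog; LMDB in the real design

    def openlog(self, digest):
        """Return True if this catalog is already known."""
        return digest in self.catalogs

    def register(self, catalog):
        self.catalogs[catalog_digest(catalog)] = catalog

# Sender side: announce the digest first; upload the catalog only on a miss.
catalog = {1: "connection from %s accepted", 2: "bind as %s failed: %s"}
server = LogServer()
digest = catalog_digest(catalog)
if not server.openlog(digest):
    server.register(catalog)
```

After the first upload, any later session from any binary with the same
catalog only ever sends the digest.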
The message protocol will be length-prefixed. We may even just use DER, since that would
allow us to encode arrays of parameters, and other such stuff.
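Not DER, but a minimal stand-in to show the length-prefixed shape of a
record: a messageID plus length-prefixed string parameters, framed so the
server can forward records without parsing them. All of this is hypothetical
wire format, just to make the idea concrete:

```python
import struct

def encode_record(message_id, params):
    """Frame one log record: 4-byte big-endian body length, then the
    messageID, then each parameter as a 2-byte length plus bytes."""
    body = struct.pack(">I", message_id)
    for p in params:
        data = p.encode()
        body += struct.pack(">H", len(data)) + data
    return struct.pack(">I", len(body)) + body

def decode_record(buf):
    """Inverse of encode_record; what the postprocessor would run."""
    (total,) = struct.unpack_from(">I", buf, 0)
    (message_id,) = struct.unpack_from(">I", buf, 4)
    params, off = [], 8
    while off < 4 + total:
        (n,) = struct.unpack_from(">H", buf, off)
        off += 2
        params.append(buf[off:off + n].decode())
        off += n
    return message_id, params
```

DER would buy the same self-delimiting property plus nested structures like
parameter arrays, at the cost of a real encoder.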
The logging server will write the log messages to disk/network verbatim, doing no
parsing at all. It may prefix the records with a log session ID, so that a postprocessor
can look up the catalog that belongs to the session, for dumping out as text.
The logging server can store its received catalogs in an LMDB database. The postprocessor
can then look up individual messageIDs in this database, interpolate the
parameters, and dump them out as text.
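The postprocessor step is then just a lookup and an interpolation; a sketch,
again with a dict standing in for the LMDB catalog database and %-style
templates standing in for whatever format the catalog actually uses:

```python
def render(catalog, message_id, params):
    """Resolve a messageID against the session's catalog and
    interpolate the parameters, turning a binary record back
    into plain text."""
    template = catalog.get(message_id)
    if template is None:
        # Unknown ID: dump the raw record so nothing is lost.
        return "?msg %d %r" % (message_id, params)
    return template % tuple(params)

catalog = {1: "connection from %s accepted"}
```

All of the string formatting cost lands here, offline, instead of in the
server's hot path.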
... that's what I have so far. It's a bit worrisome because of the additional moving parts:
message catalog creator, log server, log postprocessor. There's definitely more complexity
here, but most of it is moved out of the runtime hot path, which is the main goal. Suggestions?
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
We currently run our OpenLDAP service on our campus behind an F5 load
balancer which preserves the IP address of the connecting client through
to the backend servers, which we rely on for a small amount of IP
address based authorization differentiating between on-campus and
off-campus clients.
However, management is strongly pushing us to migrate the service to the
Amazon cloud, using Amazon's load balancer. Unfortunately, Amazon's load
balancer only supports client NAT for directing connections to the back
end servers, so they have no idea who the actual client is, it just
appears to be the load balancer itself.
Amazon's solution for that is to support HAProxy's proxy protocol in
their load balancer:
Basically, this is an in-band signaling mechanism that inserts an
additional header in the initial connection data containing the original
client source IP address/port and destination IP address/port,
allowing the server to utilize that information for the connection
rather than the actual details of the network connection from the proxy
itself.
This requires support from the application running on the server, as it
must remove and process that proxy header from the connection data
before moving on with whatever data would normally be passed on the
connection.
A fair number of services support this protocol, including of course
HAProxy itself, as well as the Apache web server and the Postfix mail
server.
OpenLDAP does not support the protocol, and I was unable to find any
past discussion of it.
I was wondering if this feature would be something acceptable for
inclusion in OpenLDAP, or if from an architectural perspective it would
be considered undesirable.
In general, I believe applications listening on a specific port either
expect the proxy protocol header or they do not; I do not think it is
dynamically determined. As such, from an implementation perspective, my
initial thought is that it would be implemented in terms of
configuration as an additional URL specified via the -h option,
something like "ldapp://" or "ldap_p://", "ldapsp://" or "ldaps_p://" or
whatever seems most desirable. A server might listen on the standard
ports accepting only proxied connections, or it might listen for normal
connections on the standard ports and for proxy connections on
separate ports.
When a connection is accepted on a port marked as requiring the proxy
protocol, it would read and process the proxy header to populate the
appropriate data structures regarding connection, and then move on as it
normally would to deal with the connection.
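The real implementation would of course live in slapd's C connection
handling, but to show how little parsing is involved, here is a Python
sketch of handling the version 1 (human-readable) header as described in
the HAProxy proxy protocol spec; the function name is made up:

```python
def parse_proxy_v1(data):
    """Split a PROXY protocol v1 header off the front of the initial
    connection data. Returns ((src_ip, src_port, dst_ip, dst_port), rest),
    or (None, rest) for the 'PROXY UNKNOWN' case. Raises ValueError on a
    malformed header."""
    end = data.find(b"\r\n")
    if end < 0 or end > 105:   # a v1 header is at most 107 bytes with CRLF
        raise ValueError("no PROXY v1 header")
    fields = data[:end].decode("ascii").split(" ")
    rest = data[end + 2:]      # everything after the header is normal traffic
    if fields[0] != "PROXY":
        raise ValueError("not a PROXY header")
    if fields[1] == "UNKNOWN":
        return None, rest
    if fields[1] not in ("TCP4", "TCP6") or len(fields) != 6:
        raise ValueError("malformed PROXY header")
    src_ip, dst_ip = fields[2], fields[3]
    src_port, dst_port = int(fields[4]), int(fields[5])
    return (src_ip, src_port, dst_ip, dst_port), rest
```

The "rest" bytes are the unmodified LDAP stream, so after populating the
connection structures the server proceeds exactly as it would today.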
If this feature is of interest, I will probably spend a little time
poking at it and seeing how much trouble it will be to implement.
After reading the slapo-constraint man page and searching online for a
possible solution it is clear that the overlay doesn't conveniently allow
setting a constraint with a negated regex.
The root cause is that negative lookahead isn't supported by extended POSIX
regex. One could argue that the complement of a regular language is itself
regular again and therefore it is certainly possible to write a regex that
doesn't allow certain values; however, any regex of this sort quickly becomes
unwieldy.
Taking grep as an example (i.e. --invert-match), I propose adding a constraint
type that allows using a regex in a negated way. When a match is found a
constraint error is raised. Looking at the constraint overlay code it seems
pretty trivial, and I am willing to submit a patch myself that allows setting
something like:
constraint_attribute mail negregex ^.*(a)somedomain\.com$
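To spell out the proposed semantics (not the actual overlay code, just a
Python sketch with a made-up function name): the constraint is violated as
soon as the pattern matches a value, i.e. the inverse of the existing regex
type, in the spirit of grep --invert-match:

```python
import re

def check_negregex(pattern, values):
    """Proposed 'negregex' semantics: return False (constraint
    violated) if the pattern matches any attribute value."""
    rx = re.compile(pattern)
    for v in values:
        if rx.search(v):
            return False   # a match means a constraint violation
    return True
```

This avoids any need for negative lookahead or for writing out the
complement regex by hand.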
I already have an initial implementation and first tests seem to work as
intended. Would such a patch be accepted? If so, could anyone guide me with
getting the patch merged?
Thanks in advance,