asserts and mandatory build instructions (was ITS#8240)
by Michael Ströder
hyc(a)symas.com wrote in ITS#8240:
> Our patch response was too hasty. There is no OpenLDAP bug here, the real
> issue is production binaries being built with asserts enabled instead of
> compiling with -DNDEBUG. That's an issue for packagers and distros to resolve.
> Closing this ITS, not an OpenLDAP bug.
Maybe I missed something. But this is the first time I've heard about -DNDEBUG
being mandatory when compiling binary packages for production use. Does it
have other effects?
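For reference, the effect under discussion is the standard C one: defining
NDEBUG before <assert.h> is included compiles every assert() away entirely,
e.g.:

    #include <assert.h>

    void set_fd( int fd )
    {
        /* with -DNDEBUG this check disappears completely; without it a
         * failed assertion calls abort() and takes the whole process down */
        assert( fd >= 0 );
        /* ... */
    }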
And what are general rules for assert statements in OpenLDAP code?
In my own (Python) code, assert statements are only supposed to be triggered if
something goes wrong *internally* (type issues etc.). If somebody manages to
trigger an assert statement with invalid input from "outside", I always
consider this a serious bug revealing insufficient error handling, even
though e.g. web2ldap just logs the exception rather than crashing. YMMV, but
please clarify.
I also wonder whether there are more mandatory rules for building packages and
where I can find them.
Please don't get me wrong: my inquiry is made in good faith, to avoid
unnecessary ITSes based on misunderstandings.
Ciao, Michael.
New logging system ideas
by Howard Chu
Just some initial thoughts on what a new logging daemon should do for us:
The primary goal - we want to use a binary message format with as few format conversions as possible between log
sender and log processor.
I'm thinking that we use message catalogs; we will need a tool to preprocess every logging
invocation in the source tree and replace each with an integer messageID. So at runtime only
the messageID and the message parameters need to be sent, not any plaintext.
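As a rough sketch of what a rewritten call site might look like (the function
and macro names below are invented purely for illustration, not a proposal
for actual names):

    #include <stdarg.h>
    #include <stdint.h>
    #include <stdio.h>

    /* assigned by the (hypothetical) catalog tool; the format string
     * "conn=%lu fd=%d closed\n" would live only in the catalog */
    #define MSGID_CONN_CLOSED 1042

    /* hypothetical runtime call: ships only the messageID plus the raw
     * parameters to the logging daemon, never any plaintext */
    static void log_msg( uint32_t msgid, ... )
    {
        va_list ap;
        va_start( ap, msgid );
        /* ... serialize msgid and the arguments, send to the log server ... */
        va_end( ap );
        printf( "would send msgid=%u\n", (unsigned) msgid );
    }

    int main( void )
    {
        unsigned long connid = 7;
        int fd = 21;

        /* was: Debug( LDAP_DEBUG_ANY, "conn=%lu fd=%d closed\n", connid, fd ); */
        log_msg( MSGID_CONN_CLOSED, connid, fd );
        return 0;
    }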
The message catalog will be compiled into the binary. When it performs its "openlog" to talk
to the logging server, it will send the UUID of its catalog. If the logging server doesn't
know this UUID, it will transmit the message catalog to the logging server, before doing
anything else. (It may make more sense just to use a SHA hash here instead of a UUID.)
This way the logging server will work with any version of the binaries, and we don't need
to do special coordination to update message catalogs between revisions. The logging server
will just know that a specific catalog is to be used with a particular logging session.
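To make the handshake a bit more concrete, a minimal sketch; the field names
and layout here are just assumptions to illustrate the idea, not a wire format:

    #include <stdint.h>

    #define LOG_CATALOG_HASH_LEN 32     /* e.g. SHA-256 of the compiled-in catalog */

    /* first record a sender transmits after connecting ("openlog") */
    struct log_openlog {
        uint32_t version;               /* logging protocol version */
        uint32_t pid;                   /* sender process id */
        uint8_t  catalog_hash[LOG_CATALOG_HASH_LEN];
    };

    /* server reply: if the catalog is unknown, the sender streams the
     * full catalog before sending any log messages */
    enum log_openlog_reply {
        LOG_CATALOG_KNOWN   = 0,
        LOG_CATALOG_UNKNOWN = 1
    };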
The message protocol will be length-prefixed. We may even just use DER, since that would
allow us to encode arrays of parameters, and other such stuff.
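If we do go the DER route, liblber already has what the sender side needs;
a minimal sketch, where the ASN.1 shape of a record is only an assumption:

    #include <lber.h>

    /* Encode one log record, roughly
     *   SEQUENCE { messageID INTEGER,
     *              SEQUENCE { connid INTEGER, reason OCTET STRING } }
     * and hand back a flattened blob the daemon can store verbatim. */
    int encode_record( ber_int_t msgid, ber_int_t connid, const char *reason,
        struct berval **out )
    {
        BerElement *ber = ber_alloc_t( LBER_USE_DER );

        if ( ber == NULL ) return -1;

        if ( ber_printf( ber, "{i{is}}", msgid, connid, reason ) == -1 ||
             ber_flatten( ber, out ) == -1 ) {
            ber_free( ber, 1 );
            return -1;
        }
        ber_free( ber, 1 );
        return 0;
    }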
The logging server will write the log messages to disk/network verbatim, doing no
parsing at all. It may prefix the records with a log session ID, so that a postprocessor
can look up the catalog that belongs to the session, for dumping out as text.
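On disk that could be as simple as a fixed prefix in front of the verbatim
bytes; a sketch with assumed names:

    #include <stdint.h>

    /* one stored record: the server prepends a session ID and a length,
     * then appends the received message bytes unparsed */
    struct log_record_hdr {
        uint64_t session_id;    /* identifies which catalog applies */
        uint32_t len;           /* length of the verbatim payload that follows */
    };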
The logging server can store its received catalogs in an LMDB database. The postprocessor
can then look up individual messageIDs in this database, interpolate the parameters, and
dump the result out as text.
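The postprocessor side would then be one LMDB read per messageID; a small
sketch, assuming the catalog is stored as msgid -> printf-style format string:

    #include <stdint.h>
    #include <string.h>
    #include <lmdb.h>

    /* look up the format string for one messageID in the catalog DB */
    int lookup_fmt( MDB_env *env, MDB_dbi dbi, uint32_t msgid,
        char *buf, size_t buflen )
    {
        MDB_txn *txn;
        MDB_val  key, data;
        int rc;

        rc = mdb_txn_begin( env, NULL, MDB_RDONLY, &txn );
        if ( rc ) return rc;

        key.mv_size = sizeof( msgid );
        key.mv_data = &msgid;

        rc = mdb_get( txn, dbi, &key, &data );
        if ( rc == 0 ) {
            /* copy out before ending the txn; LMDB data pointers are only
             * valid for the lifetime of the transaction */
            size_t n = data.mv_size < buflen - 1 ? data.mv_size : buflen - 1;
            memcpy( buf, data.mv_data, n );
            buf[n] = '\0';
        }
        mdb_txn_abort( txn );
        return rc;
    }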
... that's what I have so far. It's a bit worrisome because of the additional moving parts:
message catalog creator, log server, log postprocessor. There's definitely more complexity
here, but most of it is moved out of the runtime hot path, which is the main goal. Suggestions?
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
HAProxy proxy protocol support
by Paul B. Henson
We currently run our OpenLDAP service on our campus behind an F5 load
balancer which preserves the IP address of the connecting client through
to the backend servers; we rely on that for a small amount of IP
address based authorization differentiating between on-campus and
off-campus access.
However, management is strongly pushing us to migrate the service to the
Amazon cloud, using Amazon's load balancer. Unfortunately, Amazon's load
balancer only supports client NAT for directing connections to the back
end servers, so the backend servers have no idea who the actual client
is; every connection just appears to come from the load balancer itself.
Amazon's solution for that is to support HAProxy's proxy protocol in
their load balancer:
https://www.haproxy.com/blog/haproxy/proxy-protocol/
Basically, this is an in-band signaling mechanism that inserts an
additional header into the initial connection data containing the original
client source IP address/port and destination IP address/port,
allowing the server to use that information for the connection
rather than the actual details of the network connection from the proxy
itself.
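For reference, the version 1 header is a single human-readable line
prepended to the TCP stream, e.g. (addresses made up):

    PROXY TCP4 203.0.113.45 192.0.2.10 51372 389\r\n

followed immediately by whatever the client would normally have sent, i.e.
the LDAP PDUs.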
This requires support from the application running on the server, as it
must remove and process that proxy header from the connection data
before moving on with whatever data would normally be passed on the
connection.
A fair number of services support this protocol, including of course
HAProxy itself, as well as the Apache web server and the Postfix mail
server.
OpenLDAP does not support the protocol, and I was unable to find any
past discussion of it.
I was wondering if this feature would be something acceptable for
inclusion in OpenLDAP, or if from an architectural perspective it would
be considered undesirable.
In general, I believe applications listening on a specific port either
expect the proxy protocol header or they don't; I do not think it is
determined dynamically. As such, from an implementation perspective, my
initial thought is that it would be implemented in terms of
configuration as an additional URL scheme specified via the -h option,
something like "ldapp://" or "ldap_p://", "ldapsp://" or "ldaps_p://", or
whatever seems most desirable. A server might listen on the standard
ports accepting only proxied connections, or it might listen for normal
connections on the standard ports and for proxy connections on
alternative ports.
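For example, assuming the hypothetical "ldapp://" scheme above, such a
server could be started with something like:

    slapd -h "ldap://0.0.0.0:389/ ldapp://0.0.0.0:10389/" ...

i.e. plain LDAP on the standard port and proxied connections on an
alternative one.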
When a connection is accepted on a port marked as requiring the proxy
protocol, the server would read and process the proxy header to populate
the appropriate connection data structures, and then proceed as it
normally would to handle the connection.
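A rough idea of what that first step could look like, completely independent
of slapd's actual connection handling (this only covers the human-readable
v1 header and skips most error handling):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read and strip a PROXY protocol v1 header from a freshly accepted
     * socket, returning the original client address/port.  The spec
     * requires the sender to transmit the whole header at once, so a
     * single peek is enough.  src must hold at least 64 bytes. */
    static int read_proxy_v1( int fd, char *src, unsigned *sport )
    {
        char buf[108], dst[64], *eol;   /* 107 bytes is the v1 maximum */
        unsigned dport;
        ssize_t n = recv( fd, buf, sizeof(buf) - 1, MSG_PEEK );

        if ( n <= 0 ) return -1;
        buf[n] = '\0';

        eol = strstr( buf, "\r\n" );
        if ( eol == NULL || strncmp( buf, "PROXY ", 6 ) != 0 ) return -1;

        if ( sscanf( buf, "PROXY TCP%*c %63s %63s %u %u",
                src, dst, sport, &dport ) != 4 ) return -1;

        /* consume exactly the header, leaving the LDAP data in the socket */
        recv( fd, buf, ( eol - buf ) + 2, 0 );
        return 0;
    }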
If this feature is of interest, I will probably spend a little time
poking at it and seeing how much trouble it will be to implement.
Thanks…
backport ITS#9264 to
by Nikos Voutsinas
Hello all,
The fix for Issue 9264 (Add lock to slapo-unique to delay new ops until
current op is complete) <https://bugs.openldap.org/show_bug.cgi?id=9264>
prevents inconsistent or unexpected behavior of slapd in the case of
multiple concurrent modifiers, so I would suggest backporting the fix to
OPENLDAP_REL_ENG_2_4.
The conditions that trigger this are not that rare, and we have encountered
it multiple times, mostly but not only during batch operations.
Best Regards,
Nikos