We are heavily utilising back-sql in our product. Granted, it has its issues,
but so far it fulfills our needs. We are currently running 2.4.58, which we
build ourselves for Debian and RHEL/CentOS based systems. We needed a couple
of patches to back-sql to make it work for us. I just created issues (and
attached my patches) for them. I don't have the slightest idea whether the
patches are of any use to you, but they make our environments work.
Removing back-sql from future releases would leave us stuck on the 2.4 release.
--- Aapo Romu
--- Software Architect
--- Eficode Oy
On Mon, 9 Aug 2021 at 00:02, Quanah Gibson-Mount <quanah(a)symas.com> wrote:
> --On Sunday, August 8, 2021 6:32 PM +0100 Howard Chu <hyc(a)symas.com> wrote:
> > Quanah Gibson-Mount wrote:
> >> For 2.5, we deprecated:
> >> back-ndb
> >> back-sql
> >> back-perl
> >> Should these be removed for 2.6?
> > I still routinely build back-perl in master. Is there any reason to
> > remove it?
> Not necessarily; that's why I started the discussion. back-bdb was
> deprecated with 2.3, but was around for all of 2.4 as well. I see no
> reason to keep back-ndb around. back-sql has numerous open issues, but
> I've no real insight into whether it retains any usefulness.
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
hyc(a)symas.com wrote in ITS#8240:
> Our patch response was too hasty. There is no OpenLDAP bug here, the real
> issue is production binaries being built with asserts enabled instead of
> compiling with -DNDEBUG. That's an issue for packagers and distros to resolve.
> Closing this ITS, not an OpenLDAP bug.
Maybe I missed something, but this is the first time I've heard of -DNDEBUG
being mandatory when compiling binary packages for production use. Does it
have other effects?
And what are general rules for assert statements in OpenLDAP code?
In my own (Python) code, assert statements are supposed to be triggered only
if something goes wrong *internally* (type issues etc.). If somebody manages
to trigger an assert statement with invalid input from "outside", I always
consider this a serious bug revealing insufficient error handling, even
though e.g. web2ldap just logs the exception and won't crash. YMMV.
I also wonder whether there are more mandatory rules for building packages
and where I can find them.
Please don't get me wrong: my inquiry is made in good faith, to avoid
unnecessary ITSes based on misunderstandings.
For a future version, 2.x or maybe 3.x, but hopefully sooner:
The original idea behind sl_malloc / op->o_tmpalloc was to have per-operation memory allocation that
never needed an explicit free(), the memory would simply be discarded/reset when the operation finished.
This idea has been subverted over time, and the code is now littered with ch_free/tmpfree everywhere,
which is exactly what sl_malloc was supposed to eliminate.
There was one key problem with the original sl_malloc idea: it only accounted
for two types of memory, but in practice we really have three: global memory,
whose allocations must persist beyond the life of a single operation;
per-operation memory; and actual scratch/temporary memory. In a future
version I'd like to add an opalloc() function for the per-operation memory.
Rationale: most global allocations occur at startup time, processing the config. Generally this stuff
never needed explicit freeing because it only went away at shutdown time, but now that we have runtime
config with delete support we need to handle that too. The other obvious case is for per-connection
state, such as established after a Bind op. Back when we still used BerkeleyDB backends, the backend's
various caches would also need global memory. All of these would be allocated using ch_malloc.
The per-operation memory is primarily the per-operation ACL cache. The other case that makes sense would
be to use it for all per-op callback structures. Overhauling overlays to only use opalloc() for these
(instead of the stack, which is frequently being used now) would allow many overlays to work correctly
with asynchronous backends.
The scratch memory usage remains the most frequently used, typically for DN/attribute normalization,
entry construction, etc. For LDAP operations that only affect a single entry, like every operation
besides Search, there usually wouldn't be much difference in memory lifetime between opalloc and
tmpalloc memory. But for Search, the normal use pattern would be to do a sl_mark() before constructing
a search response, send the response, then do an sl_release() before constructing the next response,
and so on.
Another item to overhaul would be the use of op->o_bd->bd_info for invoking backend/overlay functions.
Currently we create an entire dummy copy of the original op->o_bd so we can
override the bd_info as we walk through the overlays. That has created the
need for a few other ridiculous workarounds (like bd_self to point back to
the real backend structure). We should have just added a new op->o_bdinfo
pointer to the Operation struct and left the backend structure alone. This
would eliminate a bit of pointless memory copying and speed up overlay
processing overall.
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/