hyc@symas.com writes:
If these two behaviors are related, that says to me that we should just go the rest of the way with the slab allocator and allocate additional slabs when the current one is exhausted.
By itself that solves one problem, but introduces another. The slab allocator does almost no garbage collection until the memory context is reset, so a task could consume an arbitrary number of slabs, slap_sl_free()ing almost all the memory in each, with nothing actually reclaimed until the task ends and all of its slabs are reset. The fallback to plain malloc() avoids this.
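To make the "almost no garbage collection" point concrete, here is a minimal bump-allocator sketch in the same spirit: free is a no-op unless the block being freed happens to be the most recent allocation, so everything else stays occupied until the whole context is reset. The names (sl_alloc, sl_free, sl_reset, slab_ctx) are illustrative only, not the actual slap_sl_* implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical bump/slab allocator sketch, not OpenLDAP code. */
typedef struct {
    char   buf[4096];
    size_t used;   /* high-water mark; moves down only on LIFO free or reset */
} slab_ctx;

static void *sl_alloc(slab_ctx *ctx, size_t n)
{
    if (ctx->used + n > sizeof(ctx->buf))
        return NULL;          /* slab exhausted; real code falls back to malloc() */
    void *p = ctx->buf + ctx->used;
    ctx->used += n;
    return p;
}

static void sl_free(slab_ctx *ctx, void *p, size_t n)
{
    /* Only the topmost (most recent) block is actually reclaimed.
     * Freeing anything else changes nothing until sl_reset(). */
    if ((char *)p + n == ctx->buf + ctx->used)
        ctx->used -= n;
}

static void sl_reset(slab_ctx *ctx)
{
    ctx->used = 0;            /* the only point where all memory comes back */
}
```

With several such slabs chained together, a long-running task that frees in non-LIFO order keeps all of them at their high-water marks until reset, which is exactly the growth problem described above.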
A fix would be for extra slabs to manage their memory with a proper malloc algorithm. The first slab can use the current fast algorithm.
Or, would it get too fragile to require slapd to track whether a memory block comes from plain malloc() or one of the OpenLDAP malloc functions? That would let each slapd memory block identify the slab it came from, if any. A two-byte header would allow 65535 slabs, if 8 bytes for a ctx pointer per sl_malloc()ed block is too much.
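A rough sketch of the two-byte-header idea, assuming one tag value is reserved to mean "plain malloc" (which is why 16 bits yield 65535 slabs rather than 65536). All names here (tagged_malloc, tagged_free, block_origin, FROM_HEAP) are hypothetical, and a real implementation would pad the header up to the platform's alignment requirement:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define HDR       sizeof(uint16_t)
#define FROM_HEAP 0   /* reserved tag: block came from plain malloc() */

/* Allocate a block prefixed with a 16-bit slab index.  In slapd a
 * nonzero index would bump-allocate from that slab; this sketch backs
 * everything with malloc() and just records the tag. */
static void *tagged_malloc(size_t n, uint16_t slab_idx)
{
    uint16_t *p = malloc(HDR + n);
    if (!p) return NULL;
    p[0] = slab_idx;
    return p + 1;
}

/* Read back which slab (if any) a block came from. */
static uint16_t block_origin(void *mem)
{
    return ((uint16_t *)mem)[-1];
}

/* Free dispatches on the header, so callers need no external bookkeeping. */
static void tagged_free(void *mem)
{
    uint16_t idx = block_origin(mem);
    if (idx == FROM_HEAP) {
        free((uint16_t *)mem - 1);
    } else {
        /* real code would return the block to slab `idx`; since this
         * sketch backs every slab with malloc(), just free it as well */
        free((uint16_t *)mem - 1);
    }
}
```

The point is that the dispatch decision lives in the block itself, so code paths that mix malloc()ed and sl_malloc()ed pointers stay correct without threading a ctx pointer through every caller.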
The purpose of the slab allocator was twofold: to provide thread-local memory that needs no locking, and to provide local memory that needs no housekeeping/explicit frees.
And #3: to avoid memory fragmentation from plain malloc.