On 8/25/17 2:30 AM, Quanah Gibson-Mount wrote:
If I could, I would delete 8664 from the ITS system entirely as it was filed
based on invalid information that was provided to me. It generally should be
When a write operation is performed with LMDB, the freelist is scanned for
available space to reuse if possible. The larger the freelist, the longer
the operation takes to complete. Once the database reaches a certain level
of fragmentation (the threshold differs from one use case to another),
write operations start taking a noticeable amount of time, and the server
processing them essentially comes to a halt while this scan runs.
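The scan described above can be modeled with a toy sketch (this is not LMDB's actual code; the freelist representation and the function name are my own, purely for illustration). A fragmented freelist means many small runs of free pages, and a first-fit scan must walk past all of them before finding a run large enough, so the per-write cost grows with the freelist's length:

```python
# Toy model (NOT LMDB source): the freelist as a list of free page runs,
# each run given as (start_pgno, length). A write needing `want`
# contiguous pages scans from the front until a big-enough run is found.

def find_free_run(freelist, want):
    """First-fit search; returns (index, entries_scanned), index None if nothing fits."""
    scanned = 0
    for i, (start, length) in enumerate(freelist):
        scanned += 1
        if length >= want:
            return i, scanned
    return None, scanned  # no reusable space: the map must grow instead

# A fragmented freelist: many small runs before one large one.
freelist = [(10, 1), (20, 1), (35, 2), (50, 1), (70, 1), (100, 16)]
idx, scanned = find_free_run(freelist, want=8)
print(idx, scanned)  # → 5 6 : only the last run fits, all 6 entries scanned
```

The more the database fragments, the more single-page runs pile up ahead of any usable large run, which is consistent with the slowdown described above.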
Hope this helps!
Hmm, I am a bit alarmed by this. I would have expected the free blocks to be
sorted by size to some extent, so that suitable blocks are found fairly fast.
But I already had the impression that this is not the case when I analyzed
how mdb_stat.c calculates the amount of free space...
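For comparison, here is a sketch of what a size-sorted freelist would buy (purely illustrative; the thread suggests LMDB does not actually keep its freelist ordered by size): with run lengths kept sorted, a suitable run could be located by binary search instead of a linear scan.

```python
# Illustrative only, assuming a hypothetical size-sorted freelist:
# binary search finds the first run with length >= want in O(log n).
import bisect

sizes = sorted((1, 1, 2, 1, 1, 16))   # free-run lengths, kept sorted
i = bisect.bisect_left(sizes, 8)      # first run with length >= 8
print(sizes[i])  # → 16
```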
Since I ran into an allocation problem with my software on a test system anyway
-- the database was "full" despite gigabytes of reported free space -- I wonder
whether I should limit the size of larger data values and also round the sizes
up, e.g. to the next power of two, in order to reduce the risk of such problems.
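The rounding idea above can be sketched in a few lines (the helper name is mine; nothing here is LMDB API). Padding each value to the next power of two means freed slots come in a small set of canonical sizes, so a freed slot is more likely to be exactly reusable by a later write:

```python
# Sketch of the size-rounding idea: pad value sizes to the next power
# of two before storing, to reduce the variety of slot sizes.

def round_up_pow2(n: int) -> int:
    """Smallest power of two >= n (requires n >= 1)."""
    return 1 << (n - 1).bit_length()

print(round_up_pow2(1))     # → 1
print(round_up_pow2(1000))  # → 1024
print(round_up_pow2(4096))  # → 4096
```

The trade-off is internal waste (up to almost half of each padded slot), in exchange for fewer distinct fragment sizes.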
From that perspective it would also be interesting to me from which size onward
LMDB allocates extents to store the data (please forgive me if this is obvious
and I missed it, or if I have a conceptual misunderstanding).