Emmanuel Lécharny wrote:
Restarting this thread...
we have had some interesting discussion today that I wanted to share.
Hypothesis: one server has been down for a long time, and its contextCSN
is older than that of the other servers, forcing a refresh mode that
covers more than the content of the AccessLog.
Quanah said that on some heavily loaded servers, the only way for the
consumer to catch up is to slapcat/slapadd/restart the consumer. I wonder
if that could be a way to deal with a server that is too far behind the
running servers, but as a mechanism included in the refresh phase
(i.e., the restarted server will detect that it has to grab the full set
of entries and load them, as if a human operator were doing a
slapcat/slapadd manually).
More specifically, is there a way to know how many entries we will have
to update, and is there a way to know when it will be faster to be
brutal (the Quanah way) compared to letting the refresh mechanism do its
job?
Not a worthwhile direction to pursue. Doing the equivalent of a full
slapcat/slapadd across the network will use even more bandwidth than the
current syncrepl. None of this addresses the underlying causes of why the
consumer is slow, so the original problem will remain.
There are two main problems:
1) the AVL tree used for the presentlist is still extremely inefficient in
both CPU and memory use.
2) the consumer does twice as much work for a single modification as the
provider. I.e., the consumer does a write op to the backend for the
modification, and then a second write op to update its contextCSN. The
provider only does the original modification, and caches the contextCSN update.
If we fix both of these issues, consumer speed should be much faster. Nothing
else is worth investigating until these two areas are reworked.
For (1) I've been considering a stripped down memory-only version of LMDB.
There are plenty of existing memory-only Btree implementations out there
already though, if anyone has a favorite it would probably save us some time
to use an existing library. The Linux kernel has one (lib/btree.c) but it's
under GPL so we can't use it directly.
Another point: as soon as the server is restarted, it can receive
incoming requests, and it will send back outdated responses until the
refresh is completed (and I'm not talking about updates that could also
be applied to an outdated base, with the consequences that has if some
parents are missing). In many cases that would be a real problem,
typically if the LDAP servers are part of a shared pool with a
load-balancing mechanism to spread the load. Wouldn't it be more
realistic to simply consider the server as unavailable until the
refresh phase is completed?
This was ITS#7616. We tried it and it caused a lot of problems. It has been
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/