https://bugs.openldap.org/show_bug.cgi?id=9652
--- Comment #1 from Ondřej Kuzník <ondra@mistotebe.net> ---
On Thu, Aug 26, 2021 at 05:19:35PM +0000, openldap-its@openldap.org wrote:
This is a request for an enhancement that would add a "tee" or "fan-out" capability to the load balancer, where received operations are sent to two or more destinations simultaneously.
The primary goal of the enhancement is to make it possible to keep multiple independent and likely dissimilar directory systems in lock-step with each other over hours, days, or possibly even weeks.
The enhancement would not necessarily need to include a mechanism for converging the target systems should they become out of sync.
This is not intended to be a replication solution; rather, it is viewed more as a "copy" solution intended to be used for specific short-term tasks that need multiple directory systems to be exactly synchronized but where replication is not desirable or even possible.
At least two uses come to mind:
- Test harnesses, evaluating side-by-side operation of separate directory systems over time
- Directory system transition validation harnesses
- (maybe) Part of a test harness to record or replay LDAP workloads
First thoughts:
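Configuration-wise, this might slot into the existing tier syntax, something like the sketch below ("tee" is purely hypothetical, not an implemented tier type; the backend-server parameters are the existing ones):

    # hypothetical lloadd configuration: a "tee" tier that fans every
    # operation out to all of the backends listed under it
    tier tee
    backend-server uri=ldap://primary.example.com numconns=5 bindconns=2
    backend-server uri=ldap://shadow.example.com numconns=5 bindconns=2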
Assuming all backends will react identically (do we need to serialise *all* operations to do that?), there are two approaches to this:
- we send the operation to the first backend and, once it has been processed, to the second, and so on (what happens if the client drops dead or abandons the request?)
- we send the operation to all backends in parallel
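A minimal C sketch of the two dispatch shapes; all names here are invented stand-ins, not lloadd structures:

    #include <stdio.h>
    #include <stddef.h>

    typedef struct backend { const char *name; } backend;

    /* stand-in for forwarding one operation to one backend */
    static void forward_op(const backend *b, int msgid)
    {
        printf("op %d -> %s\n", msgid, b->name);
    }

    /* approach 1: serial fan-out; each backend sees the operation only
     * after the previous one has answered, so ordering is identical
     * everywhere, but if the client abandons mid-way we must decide
     * whether to keep driving the rest of the list ourselves */
    static void tee_serial(const backend *bs, size_t n, int msgid)
    {
        for (size_t i = 0; i < n; i++) {
            forward_op(&bs[i], msgid);
            /* ... wait for bs[i]'s response before continuing ... */
        }
    }

    /* approach 2: parallel fan-out; the operation has to be duplicated
     * up front and every duplicate tracked for Abandon processing */
    static void tee_parallel(const backend *bs, size_t n, int msgid)
    {
        for (size_t i = 0; i < n; i++)
            forward_op(&bs[i], msgid);
    }

    int main(void)
    {
        backend bs[] = { {"primary"}, {"shadow-a"}, {"shadow-b"} };
        tee_serial(bs, 3, 1);
        tee_parallel(bs, 3, 2);
        return 0;
    }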
Some of this could be implemented as a new "tier" implementation (invalidating the name "tier", but never mind, maybe we can rename it to "group" or something eventually). AFAIK both options would still require some changes to how operations are handled on the main path: in the former, we need to hook into response processing to redirect the operation to the next backend in the list; in the latter, we need to duplicate the operation as received, before sending it out, and be able to record all the duplicates on the client for Abandon processing etc.
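For the parallel variant, the bookkeeping might look roughly like this (types and names invented; the real work would hang off lloadd's client/operation structures):

    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical per-backend duplicate of a client operation */
    typedef struct op_dup {
        int backend_id;
        int done;           /* has this backend responded yet? */
    } op_dup;

    /* hypothetical client-side operation record */
    typedef struct client_op {
        int msgid;
        size_t ndups;
        op_dup *dups;       /* one entry per backend the op was sent to */
    } client_op;

    /* duplicate the received operation once per backend before sending */
    static client_op *op_fan_out(int msgid, size_t nbackends)
    {
        client_op *op = malloc(sizeof(*op));
        op->msgid = msgid;
        op->ndups = nbackends;
        op->dups = calloc(nbackends, sizeof(op_dup));
        for (size_t i = 0; i < nbackends; i++)
            op->dups[i].backend_id = (int)i;
        return op;
    }

    /* an Abandon has to be replayed against every duplicate recorded */
    static void op_abandon(client_op *op)
    {
        for (size_t i = 0; i < op->ndups; i++)
            if (!op->dups[i].done)
                printf("abandon msgid %d on backend %d\n",
                       op->msgid, op->dups[i].backend_id);
    }

    int main(void)
    {
        client_op *op = op_fan_out(7, 3);
        op->dups[1].done = 1;   /* pretend backend 1 already answered */
        op_abandon(op);
        free(op->dups);
        free(op);
        return 0;
    }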
Irrespective of these, when we send the response back can vary too, assuming the first configured backend is the one we care about:
- forward the response as soon as we have the response from the first configured backend
- wait until all backends have finished with the request
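Either policy reduces to deciding when the client-facing response is released; a toy illustration (again, invented names):

    #include <stdio.h>

    typedef enum { RESPOND_ON_FIRST, RESPOND_ON_ALL } respond_policy;

    /* hypothetical fan-out state for one in-flight operation */
    typedef struct {
        int pending;            /* backends that have not answered yet */
        int primary_answered;   /* backend 0 is the one we care about */
        int responded;          /* response already sent to the client? */
    } tee_state;

    /* called whenever one backend delivers its response */
    static void on_backend_response(tee_state *s, int backend_idx,
                                    respond_policy policy)
    {
        s->pending--;
        if (backend_idx == 0)
            s->primary_answered = 1;

        int ready = (policy == RESPOND_ON_FIRST)
            ? s->primary_answered
            : (s->pending == 0);

        if (ready && !s->responded) {
            s->responded = 1;
            printf("forwarding response to client\n");
        }
    }

    int main(void)
    {
        tee_state s = { .pending = 3 };
        on_backend_response(&s, 2, RESPOND_ON_ALL);
        on_backend_response(&s, 0, RESPOND_ON_ALL);
        on_backend_response(&s, 1, RESPOND_ON_ALL);  /* fires here */
        return 0;
    }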
Obvious limitations:
- operations that change the state of the connection, especially where the server is in charge of how that happens, are not going to work (SASL multi-step binds, TXNs, paging, etc. come to mind)
- different server capacity could mean we get to send the request to one server while the others will be over their configured limits/unavailable; we need to decide whether we (even can) go ahead with sending the request
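The capacity problem suggests a pre-flight check before fanning out at all, along these lines (hypothetical; lloadd's actual limit accounting lives elsewhere):

    #include <stdio.h>
    #include <stddef.h>

    /* invented per-backend state standing in for the real accounting */
    typedef struct {
        int pending_ops;    /* operations currently in flight */
        int max_pending;    /* configured limit */
        int available;      /* is the backend usable at all? */
    } backend_state;

    /* only go ahead if every backend can still take the operation; the
     * alternative is to send to whoever can, and accept the divergence */
    static int tee_can_accept(const backend_state *bs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (!bs[i].available || bs[i].pending_ops >= bs[i].max_pending)
                return 0;
        return 1;
    }

    int main(void)
    {
        backend_state bs[2] = { {1, 10, 1}, {10, 10, 1} };
        printf("can accept: %d\n", tee_can_accept(bs, 2)); /* 0: full */
        return 0;
    }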