So I've been doing some more digging, and I was hoping someone could point me in the right direction. With the setup outlined in my previous message, writes directed to the proxy fail with "modifications require authentication".
At first I thought the issue was that the client's identity was not being relayed to the proxied master, so I added the following to the back-ldap backend configuration on the proxy:

olcDbIDAssertBind: mode=self flags=prescriptive bindmethod=simple timeout=0
  network-timeout=0 binddn="cn=admin,dc=domain,dc=tld" credentials="s3krit"
  keepalive=0:0:0
olcDbChaseReferrals: TRUE

According to the FAQ, mode=self "essentially means that the identity of the client is asserted".
However, the behaviour is still the same: writes fail with "modifications require authentication". The DN the client uses to bind to the proxy is present on the master, and the client can write directly to the master when binding with that same DN.
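For completeness, my (possibly wrong) understanding is that with mode=self the proxy binds as the idassert binddn and then asserts the client's identity via the proxyAuthz control, which the master must additionally be configured to authorize. Something like the following on the master is what I'd have expected to be needed; this is an untested sketch with my placeholder DNs:

```ldif
# Untested sketch: allow the proxy's admin identity to be
# authorized as any user under the suffix (placeholder DNs).

# Enable "to"-rule proxy authorization globally.
dn: cn=config
changetype: modify
replace: olcAuthzPolicy
olcAuthzPolicy: to

# Let cn=admin assume the identity of entries under dc=domain,dc=tld.
dn: cn=admin,dc=domain,dc=tld
changetype: modify
add: authzTo
authzTo: dn.subtree:dc=domain,dc=tld
```

If that's on the right track, pointers on the correct authz-policy/authzTo combination would be appreciated.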
In addition, the FAQ at https://www.openldap.org/faq/data/cache/1434.html suggests using the chain overlay to proxy changes back to the master. I'm not sure that's applicable in my case, though, since my "proxy" is merely that, a proxy: there are no local databases configured on it. Moreover, the slapo-chain documentation says the overlay is redundant on top of a back-ldap backend, because the functionality should already be there.
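For reference, the chain setup the FAQ appears to describe would, as far as I can tell, look roughly like this in cn=config (a sketch with my placeholder hostnames and credentials, not something I have tested):

```ldif
# Hypothetical chain overlay on the frontend database.
dn: olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcOverlayConfig
objectClass: olcChainConfig
olcOverlay: {0}chain
olcChainReturnError: TRUE

# One chained back-ldap instance per master URI.
dn: olcDatabase={0}ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcLDAPConfig
objectClass: olcChainDatabase
olcDatabase: {0}ldap
olcDbURI: ldap://ldap1.domain.tld
olcDbIDAssertBind: mode=self bindmethod=simple
  binddn="cn=admin,dc=domain,dc=tld" credentials="s3krit"
```

But since my proxy has no local database whose referrals could be chased, I don't see what the overlay would attach to in my case.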
Is there any way of achieving this or is my idea of a proxy all wrong? Is the syncrepl approach outlined in the MirrorMode documentation relevant in my case, or does it apply only when clients connect to replicas sitting between the proxy and the masters? Thanks.
On Tue, 19 Mar 2019 at 18:31, George Diamantopoulos <georgediam@gmail.com> wrote:
Hello all,
I've successfully set up a two-node LDAP cluster in which each node is a provider for the other, following section 18.3.4 of the Administrator's Guide. The next logical step is to add load-balancer/proxy nodes, which will ensure that writes always go to the same cluster node.
So far my preliminary proxy configuration allows reading from the cluster successfully. Here are the relevant bits (LDIF whitespace manipulated for readability):
dn: olcDatabase={2}ldap,cn=config
objectClass: olcDatabaseConfig
objectClass: olcLDAPConfig
olcDatabase: {2}ldap
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,
  cn=external,cn=auth manage by * break
olcAccess: {1}to * by * write
olcRootDN: cn=admin,dc=domain,dc=tld
olcRootPW: {SSHA}s3krit
olcSuffix:
olcDbStartTLS: start
olcDbURI: ldap://ldap1.domain.tld,ldap://ldap2.domain.tld
I understand that with this configuration the first URI in olcDbURI is always used unless it fails, in which case the proxy falls back to the second URI (apparently after a timeout) and keeps using it for subsequent operations until that one fails too. I'm happy with this, although it would be ideal if read operations could be round-robined between the two (is that possible?).
However, writes to the proxy don't work with this configuration. The Administrator's Guide says one should use the proxy "as a syncrepl provider", but I'm not sure I understand how that is supposed to work. Am I supposed to add another olcSyncrepl attribute (there is already one for syncing the two MirrorMode nodes with each other) to the MirrorMode nodes, pointing at the proxy? If I have more than one proxy, should I add an olcSyncrepl attribute for each? And how do I ensure that only one of the MirrorMode nodes fetches data from the proxy provider(s) at any given time?
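To make the question concrete, what I imagine the Guide means is adding, on each MirrorMode node, a second consumer stanza pointing at the proxy, along these lines (purely hypothetical, with an invented rid and my placeholder hostnames):

```ldif
# Hypothetical extra consumer on each MirrorMode node, pulling from the proxy.
olcSyncrepl: {1}rid=003 provider=ldap://proxy.domain.tld
  searchbase="dc=domain,dc=tld" type=refreshAndPersist
  bindmethod=simple binddn="cn=admin,dc=domain,dc=tld"
  credentials="s3krit" retry="5 +"
```

But that would seem to make both nodes consumers of the proxy simultaneously, which is exactly what I'm unsure about.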
I've spent quite some time googling this to no avail. Any insight would be greatly appreciated. Thank you!
Best regards, George