I'm testing OpenLDAP with a master/slave replication configuration that includes the ppolicy overlay. I would like to enable password changes from the slave replica via the chain overlay, in order to validate setting the ppolicy olcPPolicyForwardUpdates attribute to TRUE. I'm using LDAPS from the slave to the master with SASL EXTERNAL authentication and a client certificate. The client certificate corresponds to a user DN entry with "manage" rights on the master server (the same one used for replication). This user DN has an authzTo attribute so that the PROXYAUTHZ request from its DN to the target user DN is matched correctly.
All of this configuration works on the replica, but only after I first perform a failed authentication (err=49) on the replica. The pwdFailureTime value on the DN entry is then updated normally from the replica to the master. After that, I'm also able to perform self-updates of attributes such as the password from the replica to the master.
But the weird behavior is that I need to run a failed authentication first; otherwise, if I try to change an attribute on the slave server, it responds with err=80: "Error: ldap_back_is_proxy_authz returned 0, misconfigured URI?". The only way to restore correct behavior is to restart slapd and perform one failed authentication first. It seems that the chain overlay does not connect to the master server at startup.
Do you have any idea why I see this behavior?
I'm using a 2.4.49 build of OpenLDAP, and in the logs on the master server I can see that the slave reuses the same connection.
Here is the LDIF change and configuration on my replica:
olcDbIDAssertBind: bindmethod=sasl saslmech=external starttls=no tls_cert="/usr/local/openldap/etc/openldap-valid7/tls/db1_rid001_cert.pem" tls_key="/usr/local/openldap/etc/openldap-valid7/tls/db1_rid001_key.pem" tls_cacert="/usr/local/openldap/etc/openldap-valid7/tls/cacert.pem" tls_reqcert=demand tls_crlcheck=none mode=self
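For context, the olcDbIDAssertBind line above sits inside the chain overlay's ldap database entry under cn=config. A minimal sketch of that surrounding structure is below; the hostname and the overlay/database index numbers are illustrative placeholders, not my actual values:

```ldif
dn: olcOverlay=chain,olcDatabase={-1}frontend,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcChainConfig
olcOverlay: chain
olcChainReturnError: TRUE

dn: olcDatabase=ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
changetype: add
objectClass: olcLDAPConfig
objectClass: olcChainDatabase
olcDatabase: ldap
olcDbURI: ldaps://master.example.com
olcDbIDAssertBind: bindmethod=sasl saslmech=external ... mode=self
```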
Here is the LDIF change on my master:
Thanks in advance for your reply.
I'm experiencing an issue with Slapd 2.4.49 on Debian Buster.
I use slapd as a proxy / load balancer in front of two AD nodes.
If I reboot one of the AD nodes, everything is fine. But as soon as I
reboot the second one (while the first is back up and available),
OpenLDAP shuts down immediately:
Apr 06 22:03:33 ldap-lb1.ldap.example.com slapd: Stopping
Apr 06 22:03:33 ldap-lb1.ldap.example.com systemd: slapd.service: Succeeded.
There is no traffic during this time, and it matches exactly the period
when node two is down.
Debian still uses an init script for slapd, but I did not find anything
interesting in it.
Is there a config setting that I missed in the docs that explains this behavior?
Since slapd is not crashing (it exits cleanly), HA via systemd won't catch this issue.
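As a stopgap while I look for the root cause (not a fix), I'm assuming the init script is wrapped by a generated slapd.service unit, in which case a drop-in at /etc/systemd/system/slapd.service.d/override.conf could restart slapd even on a clean exit:

```ini
[Service]
# Restart on any exit, including a "successful" (clean) one,
# which Restart=on-failure would ignore.
Restart=always
RestartSec=5
```

After creating the drop-in, `systemctl daemon-reload` is needed for it to take effect.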
On Thu, Mar 26, 2020 at 3:44 PM Quanah Gibson-Mount <quanah(a)symas.com> wrote:
> Try it and see? I can't find any concrete documentation one way or the
> other, although it seems BuildRequires generally follows what Requires
> does. You may need to ask the people who maintain RPM.
> There are other issues in the spec file, however, like:
> --with-libfreeradius-ldap-include-dir=/usr/local/openldap/include \
> --with-libfreeradius-ldap-lib-dir=/usr/local/openldap/lib64 \
> are specific to LTB, might want to do something like they did around line
> 447 to only do this if the LTB openldap is in use.
Thank you very much for the input!
I will report back on how things go.
I'm using the OpenLDAP C client library (2.4.45).
I would like to have two client instances (not necessarily simultaneously) within the same application, but I'm having issues with the second instance I create.
I have not found a way to clear the global options so that new ones (a different CA certificate, a different client certificate) can be used with the second instance. By the time of the second ldap_initialize, the global options are already initialized, and the same applies to the TLS context.
I have seen that the function that destroys the global options is called only when the program exits or the dynamic library is unloaded. Is this correct? Is this a limitation of the library?
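To illustrate, this is roughly what I'd like each instance to be able to do: set the TLS options on the connection handle rather than globally, then set LDAP_OPT_X_TLS_NEWCTX on that handle so libldap builds a fresh TLS context from them. A sketch (error handling trimmed; the URI and file paths are placeholders, and I'm not certain this is the intended mechanism):

```c
#include <ldap.h>
#include <stddef.h>

/* Create an LDAP handle with its own TLS context instead of the
 * process-global one.  All paths and the URI are placeholders. */
static LDAP *open_with_own_tls(const char *uri,
                               const char *cacert,
                               const char *cert,
                               const char *key)
{
    LDAP *ld = NULL;
    if (ldap_initialize(&ld, uri) != LDAP_SUCCESS)
        return NULL;

    /* Per-handle TLS options: passing the handle (not NULL) as the
     * first argument scopes these to this connection only. */
    ldap_set_option(ld, LDAP_OPT_X_TLS_CACERTFILE, cacert);
    ldap_set_option(ld, LDAP_OPT_X_TLS_CERTFILE, cert);
    ldap_set_option(ld, LDAP_OPT_X_TLS_KEYFILE, key);

    /* Instantiate a new TLS context from the options set above;
     * the value 0 requests a client (not server) context. */
    int is_server = 0;
    if (ldap_set_option(ld, LDAP_OPT_X_TLS_NEWCTX, &is_server)
            != LDAP_SUCCESS) {
        ldap_unbind_ext_s(ld, NULL, NULL);
        return NULL;
    }
    return ld;
}
```

If this is supposed to work, each instance could call open_with_own_tls with its own certificate set, leaving the global options untouched.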