Michael Ströder wrote:
hyc@symas.com wrote:
slapd occasionally segfaults. It happens with a certain configuration, but we can't reproduce it on demand.
This is a custom Debian Wheezy package based on OpenLDAP 2.4.39 linked against OpenSSL under a separate prefix.
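For reference, building against an OpenSSL installed under a separate prefix usually just means pointing configure at it; the following is only a rough sketch with guessed paths, not the actual build recipe used for this package:

    # illustrative prefixes only -- adjust to the real locations
    CPPFLAGS="-I/opt/openssl/include" \
    LDFLAGS="-L/opt/openssl/lib -Wl,-rpath,/opt/openssl/lib" \
        ./configure --prefix=/opt/openldap --with-tls=openssl --enable-overlays
    make depend && make && make install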
slapo-rwm is patched according to ITS#7723, but I can't tell whether this crash is related to ITS#7723 or not; hence the separate ITS.
What behavior do you get by reverting the #7723 patch?
The patch was applied because we were already seeing these segfaults before. I'd say the patch changed nothing in our case; crashes still happen roughly once or twice a week. It was also impossible to crash the unpatched installation with a test script behaving as described in ITS#7723.
Fixing this would be highly appreciated, but I can't provide a simple configuration that reproduces it.
Can you run slapd with ElectricFence and post stack traces and diagnostics from any crashes there?
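Something along these lines should do; the library path, install prefix, debug level and listener URLs below are only placeholders and need to be adjusted to the actual installation:

    # run slapd in the foreground with ElectricFence preloaded so an
    # out-of-bounds heap access faults right at the offending instruction
    # (EF_ALLOW_MALLOC_0 keeps efence from aborting on malloc(0) calls)
    ulimit -c unlimited
    EF_ALLOW_MALLOC_0=1 LD_PRELOAD=/usr/lib/libefence.so.0.0 \
        /opt/openldap/libexec/slapd -d stats \
        -f /opt/openldap/etc/openldap/slapd.conf -h "ldap:/// ldapi:///"
    # after a crash, pull the stack trace from the core file
    gdb /opt/openldap/libexec/slapd core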
The following rwm directives are in the frontend part:
overlay rwm
rwm-rewriteEngine on
rwm-drop-unrequested-attrs no

# uid=foo,ou=xxxxx -> entryDN of entry within ou=xxxxx matching (uid=foo)
rwm-rewriteMap slapd uid2dn "ldap:///ou=xxxxx?entryDN?sub?"
rwm-rewriteContext bindDN
rwm-rewriteRule "^(uid=[^,]+),ou=xxxxx$" "${uid2dn($1)}" ":@I"

# serverFqdn=foo,ou=xxxxx -> entryDN of entry within ou=xxxxx matching (serverFqdn=foo)
rwm-rewriteMap slapd fqdn2dn "ldap:///ou=xxxxx?entryDN?sub?"
rwm-rewriteContext bindDN
rwm-rewriteRule "^(serverFqdn=[^,]+),ou=xxxxx$" "${fqdn2dn($1)}" ":@I"
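Just to illustrate the intended effect of these rules (host name, bind DN and password below are placeholders, not real values): a simple bind with the short form should be rewritten to the entryDN of the matching entry before authentication, e.g.

    ldapwhoami -x -H ldap://ldap.example.com -D "uid=foo,ou=xxxxx" -w secret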
In a former version of the configuration these directives were in the backend section for ou=xxxxx. Because of the segfaults I moved them to the frontend, which seemed to make things slightly better, though it's hard to tell. In another configuration variant I even experienced segfaults with *slapcat*.
This is a two-layer replication topology with several MMR providers and read-only consumers. The consumers authenticate with SASL/EXTERNAL using client certificates, and authz-regexp maps them to authz DNs. If something goes wrong during consumer initialization, sometimes even the providers crash.
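For context, the authz-regexp mapping used for these replication connections is conceptually similar to the following sketch; the certificate subject pattern and the LDAP URL are only illustrative, not the values actually in use here:

    # map the certificate subject DN asserted via SASL/EXTERNAL to the
    # entry whose serverFqdn matches the certificate CN (illustrative values)
    authz-regexp
        "^cn=([^,]+),ou=hosts,o=example$"
        "ldap:///ou=xxxxx??sub?(serverFqdn=$1)"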
Ciao, Michael.