Well, it looks like using a single user for replication is a bad idea for MDB.
debug log:
    slapd[23170]: do_bind: version=3 dn="cn=repmgr,ou=ldapusers,o=test1" method=128
    slapd[23170]: daemon: epoll: listen=7 active_threads=0 tvp=zero
    slapd[23170]: => mdb_entry_get: ndn: "cn=repmgr,ou=ldapusers,o=test1"
    slapd[23170]: daemon: epoll: listen=8 active_threads=0 tvp=zero
    slapd[23170]: => mdb_entry_get: oc: "(null)", at: "(null)"
    slapd[23170]: daemon: epoll: listen=9 active_threads=0 tvp=zero
After this, strace shows the 'Assertion failed' mentioned below.
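To illustrate what I mean by separate users: a minimal sketch of a per-master consumer stanza, with a distinct bind identity on each server instead of a shared cn=repmgr (the hostname, rid, DNs, and credentials below are made-up examples, not my real config):

    # slapd.conf fragment on master2; placeholder values throughout
    serverID 2
    syncrepl rid=001
        provider=ldap://master1.example.com
        type=refreshAndPersist
        searchbase="o=test1"
        bindmethod=simple
        binddn="cn=repmgr-m2,ou=ldapusers,o=test1"   # one replication user per master
        credentials=secret
        retry="5 +"
    mirrormode on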
On 13 Nov 2013, at 22:17, Aleksander Dzierżanowski olo@e-lista.pl wrote:
Hi.
I have a properly running setup of three multimaster OpenLDAP servers (version 2.4.36 from the LTB project) with the bdb database backend. Everything was working flawlessly, so I decided to try out the 'new shiny' mdb database with the same configuration; the only thing I changed was removing the 'cache' settings and adding 'maxsize'.
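The backend change itself was roughly this per database (the directory path and size values here are just examples):

    # before, back-bdb:
    database  bdb
    suffix    "o=test1"
    directory /var/lib/ldap/test1
    cachesize 10000

    # after, back-mdb:
    database  mdb
    suffix    "o=test1"
    directory /var/lib/ldap/test1
    maxsize   1073741824   # mdb map size in bytes, example value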
What I’m doing and observing:
- clear all config and databases on all masters; generate a new configuration from slapd.conf using the 'slaptest' tool (commands for these steps are sketched after this list)
- on master1 I add the three base organizations, let's say o=test1 + o=test2 + o=test3, using slapadd [without the -w switch]
- on master1 I add some entries using the ldapadd command, so all organizations now have a contextCSN attribute.
- starting master1 - everything OK
- starting master2 - everything OK, including successful replication from master1
- starting master3 - everything OK, including replication, but… some or all of the other masters die unexpectedly.
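For reference, the setup steps above correspond roughly to these commands (the paths, LDIF file names, and admin DN are placeholders):

    slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
    slapadd -b o=test1 -l base-test1.ldif          # no -w; same for o=test2, o=test3
    ldapadd -x -H ldap://master1 -D "cn=admin,o=test1" -W -f entries.ldif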
strace of the dying process shows:
write(2, "slapd: id2entry.c:509: mdb_opinfo_get: Assertion `!rc' failed.\n", 63) = 63
debug log last lines:
    => mdb_entry_get: ndn: "o=test1"
    => mdb_entry_get: oc: "(null)", at: "contextCSN"
But when I do 'slapcat' I can clearly see the contextCSN for all o=test[123] databases...
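That is, on each database a check along these lines shows the attribute (o=test1 as an example):

    slapcat -b o=test1 | grep contextCSN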
Is it a bug, or possibly a replication configuration issue?

-- Olo