Full_Name: Markus
Version: 2.4.8
OS: SLES 10
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (212.185.43.218)
When using multi-master replication and starting the tree from scratch, I am able to ldapadd to one of the masters and everything is replicated. But when I afterwards ldapadd a new entry to one of the other servers, that server crashes with a segmentation fault.
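For reference, the steps boil down to something like the following; the host names, bind DN, and password are placeholders for my two-master setup, not the real values:

    # initial load via the first master; this replicates fine
    ldapadd -x -H ldap://master1:389 -D "cn=admin,dc=example,dc=com" -w secret -f initial.ldif

    # adding a further entry via the second master crashes that slapd
    ldapadd -x -H ldap://master2:389 -D "cn=admin,dc=example,dc=com" -w secret -f newentry.ldif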
The following log output is generated:
----------------------------------------------
conn=11 op=1 SEARCH RESULT tag=101 err=0 nentries=20 text=
connection_get(19)
connection_get(19): got connid=12
connection_read(19): checking for input on id=12
ber_get_next
ber_get_next: tag 0x30 len 162 contents:
ber_get_next
conn=12 op=1 do_search
ber_scanf fmt ({miiiib) ber:
dnPrettyNormal: <dc=example,dc=com>
=> ldap_bv2dn(dc=example,dc=com,0)
<= ldap_bv2dn(dc=example,dc=com)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=example,dc=com)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(dc=example,dc=com)=0
<<< dnPrettyNormal: <dc=example,dc=com>, <dc=example,dc=com>
SRCH "dc=example,dc=com" 2 0 0 0 0
ber_scanf fmt (m) ber:
filter: (objectClass=*)
ber_scanf fmt ({M}}) ber:
=> get_ctrls
ber_scanf fmt ({m) ber:
ber_scanf fmt (m) ber:
=> get_ctrls: oid="1.3.6.1.4.1.4203.1.9.1.1" (noncritical)
ber_scanf fmt ({i) ber:
ber_scanf fmt (m) ber:
ber_scanf fmt (b) ber:
ber_scanf fmt (}) ber:
<= get_ctrls: n=1 rc=0 err=""
    attrs: * +
conn=12 op=1 SRCH base="dc=example,dc=com" scope=2 deref=0 filter="(objectClass=*)"
conn=12 op=1 SRCH attr=* +
=> bdb_search
bdb_dn2entry("dc=example,dc=com")
search_candidates: base="dc=example,dc=com" (0x00000001) scope=2
=> bdb_dn2idl("dc=example,dc=com")
=> bdb_equality_candidates (entryCSN)
<= bdb_equality_candidates: (entryCSN) not indexed
bdb_search_candidates: id=-1 first=1 last=20
bdb_search: 1 does not match filter
bdb_search: 2 does not match filter
bdb_search: 3 does not match filter
bdb_search: 4 does not match filter
bdb_search: 5 does not match filter
bdb_search: 6 does not match filter
bdb_search: 7 does not match filter
bdb_search: 8 does not match filter
bdb_search: 9 does not match filter
bdb_search: 10 does not match filter
bdb_search: 11 does not match filter
bdb_search: 12 does not match filter
bdb_search: 13 does not match filter
bdb_search: 14 does not match filter
bdb_search: 15 does not match filter
bdb_search: 16 does not match filter
bdb_search: 17 does not match filter
bdb_search: 18 does not match filter
bdb_search: 20 does not match filter
send_ldap_result: conn=12 op=1 p=3
send_ldap_result: err=0 matched="" text=""
=> bdb_search
bdb_dn2entry("dc=example,dc=com")
search_candidates: base="dc=example,dc=com" (0x00000001) scope=2
=> bdb_dn2idl("dc=example,dc=com")
=> bdb_presence_candidates (objectClass)
bdb_search_candidates: id=-1 first=1 last=20
send_ldap_result: conn=12 op=1 p=3
send_ldap_result: err=0 matched="" text=""
send_ldap_intermediate: err=0 oid=1.3.6.1.4.1.4203.1.9.1.4 len=368
send_ldap_response: msgid=2 tag=121 err=0
ber_flush2: 409 bytes to sd 19
send_ldap_result: conn=12 op=1 p=3
send_ldap_result: err=0 matched="" text=""
slap_sl_malloc of 136867984 bytes failed, using ch_malloc
----------------------------------------------
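For completeness, this is roughly how a backtrace of the crashing slapd could be captured; the slapd and config paths are only assumptions for a SLES 10 install, adjust as needed:

    # allow core dumps, run slapd in the foreground with full debugging
    ulimit -c unlimited
    /usr/lib/openldap/slapd -d -1 -f /etc/openldap/slapd.conf -h ldap://master2:389

    # after the crash, inspect the core file
    gdb /usr/lib/openldap/slapd core
    (gdb) bt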
It seems the malloc request of almost 130 MB is simply too much.
The above log was generated using the test050 script, modified only by adding a line that also writes information to a second server. Perhaps such a test would also make sense for future releases.
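The added line was essentially just one more ldapadd against the second server's URI; roughly like the sketch below (the variable names and the extra LDIF file are only illustrative, not the exact diff):

    # hypothetical extra step in test050: push an entry through the second MMR node
    $LDAPADD -D "$MANAGERDN" -H $URI2 -w $PASSWD -f $EXTRA_LDIF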