Full_Name: Larry Newman
Version: 2.4.40
OS: Linux
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (135.245.10.2)
My application uses OpenLDAP as a client with long-duration connections that carry many requests: multiple TCP connections are pre-established, and ldap_init_fd() is invoked for each of them. After a while (and many request/response exchanges), memory exhaustion is observed; this report concerns the possibility that ldap_parse_result() is leaking memory.

I believe the ber_scanf() calls in ldap_parse_result() that extract the matched DN, error message, and referrals can dynamically allocate memory (in my case, for an error message) that is linked into the LDAP structure. This memory does not appear to be freed before ldap_parse_result() returns, and the allocation is not visible to the caller (the LDAP structure is supposed to be "opaque"). This is distinct from the memory passed back to the caller via the ldap_parse_result() pointer parameters; that memory is conditionally created by LDAP_STRDUP() and can/should be freed by the caller if requested.

Early in ldap_parse_result(), there is code that invokes LDAP_FREE()/LDAP_VFREE() on these dynamic memory pointers in the LDAP structure, but that does not appear to happen until the next invocation of ldap_parse_result() (if ever). The implementation of ldap_parse_extended_result() appears similar.

Shouldn't memory allocated (for internal use only) by these functions be freed before the functions return?
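For context, a minimal caller-side sketch of the pattern described above: the caller can only release the memory that ldap_parse_result() hands back through its out-parameters, not whatever the function links into the opaque LDAP handle internally. The helper name handle_result is hypothetical; the ldap_parse_result(), ldap_memfree(), and ldap_memvfree() calls are the documented OpenLDAP API.

```c
#include <stdio.h>
#include <ldap.h>

/* Hypothetical helper: parse one result message and free only the
 * memory ldap_parse_result() returns via its out-parameters.  Any
 * allocations the function attaches to the opaque LDAP structure
 * are invisible here and cannot be freed by the caller. */
static int
handle_result(LDAP *ld, LDAPMessage *res)
{
    int    err;
    char  *matcheddn = NULL;
    char  *errmsg    = NULL;
    char **refs      = NULL;

    int rc = ldap_parse_result(ld, res, &err, &matcheddn, &errmsg,
                               &refs, NULL, 1 /* freeit: frees res */);
    if (rc != LDAP_SUCCESS)
        return rc;

    if (errmsg != NULL && *errmsg != '\0')
        fprintf(stderr, "server message: %s\n", errmsg);

    /* These copies were made with LDAP_STRDUP() for the caller and
     * are the caller's responsibility to release. */
    ldap_memfree(matcheddn);
    ldap_memfree(errmsg);
    ldap_memvfree((void **)refs);

    return err;
}
```

Even with this cleanup performed after every exchange, memory attached internally to the LDAP structure would still accumulate between ldap_parse_result() invocations, which matches the exhaustion observed on long-lived connections.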