Full_Name: Himmelbauer
Version: 2.4.19
OS: Gentoo Linux
URL: ftp://ftp.openldap.org/incoming/himmelbauer-100301.patch
Submission from: (NULL) (92.248.100.200)
I was not able to retrieve database entries from a corrupted LDAP database using slapcat (OpenLDAP 2.4.19) with the -c option. It aborts after the first corrupted entry with

<= entry_decode: slap_str2undef_ad(object�!p): AttributeDescription contains inappropriate characters
# no data for entry id=xxxxxxxx

and does not continue with the next valid entry, as I would have expected.
A damaged database is not good, of course, but since slapcat is described in various forums as the tool for recovering the non-corrupted data, it would be good if it really continued on errors. Other LDAP-related tools such as ldapsearch ignore the damaged data, so it is really annoying when you want to recover your data and do not even get all the readable entries.
The patch corrects slapcat's handling of damaged entries when -c is used, so that output continues with the next non-damaged entry. When -c is not used, slapcat stops after the first corrupted entry (as expected).
The patch works for me, but I know it is not really correct: if no non-damaged entry were left, it would either cause an endless loop or crash.
So a further modification would be needed to stop the loop once the last database entry has been processed, but I did not know how to access that ID.
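For context, a minimal sketch of the kind of dump loop involved. This paraphrases the loop structure of servers/slapd/slapcat.c in the 2.4 tree as I understand it, not the submitted patch itself; the surrounding declarations (be, op, rc, continuemode) come from the tool's source, and details may differ:

    /* Simplified sketch, paraphrased rather than quoted from
     * servers/slapd/slapcat.c. */
    for ( id = be->be_entry_first( be );
            id != NOID;
            id = be->be_entry_next( be ) )
    {
        Entry *e = be->be_entry_get( be, id );

        if ( e == NULL ) {
            /* the entry body could not be decoded */
            printf( "# no data for entry id=%08lx\n\n", (long) id );
            rc = EXIT_FAILURE;
            if ( continuemode ) continue;  /* -c: go on to the next ID */
            break;                         /* default: abort the dump  */
        }

        /* ... write e out as LDIF, then release it ... */
    }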
I do not quite understand this report and your fix. The code seems to work as intended. "Continue" here means: keep processing the database if an entry cannot be extracted from it. That by no means implies that processing can always continue. If the database is corrupted, it is likely that, from the first appearance of the corruption, not even the next entry ID can be retrieved at all. Otherwise, if the only error is a failure to extract an entry, continuation occurs as expected in the current code.
What likely happened in your case was that as soon as the database corruption was encountered, be_entry_next() was unable to compute the id of the next entry. This is why you don't even get a log of the ids of the entries that couldn't be extracted.
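To illustrate the distinction (same caveats as the sketch above, i.e. paraphrased rather than quoted from the source): the loop only advances as long as be_entry_next() can walk the ID index, so -c can skip entries whose bodies fail to decode, but not damage that breaks the traversal itself:

    /* Continuation is possible only while the ID walk itself works.
     * If the corruption damages whatever be_entry_next() traverses,
     * it returns NOID and the for-loop simply ends -- which is why
     * no further "# no data for entry id=..." lines get logged. */
    id = be->be_entry_next( be );
    if ( id == NOID ) {
        /* end of the reachable entries: nothing left to dump */
    }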
I'd close this ITS.
p.