https://bugs.openldap.org/show_bug.cgi?id=10162
Issue ID: 10162
Summary: Fix for binary attributes data corruption in back-sql
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs@openldap.org
Reporter: dex.tracers@gmail.com
Target Milestone: ---
Created attachment 1006
--> https://bugs.openldap.org/attachment.cgi?id=1006&action=edit
Fix for binary attributes corruption in back-sql
I've configured slapd to use back-sql (MariaDB through ODBC) and observed issues when retrieving BINARY data from the database. The attribute length was reported correctly, but only the first 16384 bytes of the value were valid; everything after that point was junk (usually filled with "AAAAAAAA" and fragments of other attributes' data from memory).
During debugging I noticed that:
- MAX_ATTR_LEN (16384 bytes) is used as the buffer length for BINARY columns when SQLBindCol is called inside the backsql_BindRowAsStrings_x() function
- after SQLFetch, the data in row->cols[i] is filled only up to the specified MAX_ATTR_LEN
- after SQLFetch, the correct data size (greater than MAX_ATTR_LEN) is reported in row->value_len
My assumption is that slapd allocates the buffer (row->cols[i]) and fills it with at most MAX_ATTR_LEN bytes, but when forming the actual attribute value it uses the length from row->value_len, so everything from offset 16384 up to row->value_len is just junk from memory (uninitialized bytes, leftovers, data from other variables).
After some investigation, I found out that:
- for BINARY or variable-length fields, SQLGetData should be used
- SQLGetData supports a chunked mode (when the length is unknown) and a full-read mode when the length is known
- it can be used together with SQLBindCol, after SQLFetch (!)
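The chunked-read pattern described above can be sketched in C. This is a minimal simulation, not the real ODBC API: fake_SQLGetData() and set_column() are stubs I made up to mimic how a driver hands back a long binary column in pieces; only the accumulation loop reflects the pattern the fix relies on.

```c
#include <stdlib.h>
#include <string.h>

/* Stub that mimics SQLGetData semantics for one binary column:
 * each call copies up to buf_len bytes of the remaining data and
 * reports the length still available in *ind. Returns 1 while more
 * data remains (like SQL_SUCCESS_WITH_INFO), 0 on the final chunk. */
static const char *src;          /* simulated column data */
static size_t src_len, src_off;  /* total length, current read offset */

static void set_column(const char *data, size_t len)
{
    src = data;
    src_len = len;
    src_off = 0;
}

static int fake_SQLGetData(char *buf, size_t buf_len, size_t *ind)
{
    size_t left = src_len - src_off;
    size_t n = left < buf_len ? left : buf_len;
    memcpy(buf, src + src_off, n);
    *ind = left;                 /* bytes available before this call */
    src_off += n;
    return src_off < src_len;    /* more chunks pending? */
}

/* Accumulate the whole value by looping until the driver reports the
 * final chunk, growing the destination buffer as chunks arrive. */
char *read_all_chunks(size_t chunk, size_t *out_len)
{
    char *out = NULL, *tmp = malloc(chunk);
    size_t total = 0, ind;
    int more;

    do {
        more = fake_SQLGetData(tmp, chunk, &ind);
        size_t got = ind < chunk ? ind : chunk;  /* copied this call */
        out = realloc(out, total + got);
        memcpy(out + total, tmp, got);
        total += got;
    } while (more);

    free(tmp);
    *out_len = total;
    return out;
}
```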
Since we have the correct data length in row->value_len, I just added code to the backsql_get_attr_vals() function to overwrite the truncated data with the full value by issuing an SQLGetData request. And it worked: binary data was retrieved correctly and returned over LDAP!
My current concerns / help needed: I'm not very familiar with the memory allocation/deallocation mechanisms involved, so I'm afraid the change could lead to memory corruption (none observed so far).
Please review the attached patch (testing was done on OPENLDAP_REL_ENG_2_5_13; the patch is rebased onto the master branch for easier review/application).
dex.tracers@gmail.com changed:
 What            |Removed                     |Added
-----------------+----------------------------+----------------------------
 Summary         |Fix for binary attributes   |back-sql: Fix for binary
                 |data corruption in back-sql |attributes data corruption
Quanah Gibson-Mount quanah@openldap.org changed:
 What             |Removed           |Added
------------------+-------------------+------------------
 Keywords         |needs_review       |
 Target Milestone |---                |2.5.18
 Assignee         |bugs@openldap.org  |hyc@openldap.org
--- Comment #1 from Howard Chu hyc@openldap.org ---
Thanks for the report.
The code in sql-wrap.c backsql_BindRowAsStrings_x() only allocates MAX_ATTR_LEN for these values. Returning more than that size would eventually corrupt memory. A safer fix would be to just cap value_len to MAX_ATTR_LEN and return only the truncated value.
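The "safer fix" suggested here can be sketched as a one-line clamp applied after SQLFetch. This is an illustrative sketch, not the actual back-sql code; only MAX_ATTR_LEN and the value_len field name come from the thread.

```c
#define MAX_ATTR_LEN 16384  /* per-column buffer size bound by SQLBindCol */

/* After SQLFetch: clamp the driver-reported length to the size of the
 * bound buffer, so the attribute is truncated rather than read past
 * the allocation. */
long clamp_value_len(long value_len)
{
    return value_len > MAX_ATTR_LEN ? MAX_ATTR_LEN : value_len;
}
```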
I don't know enough about back-sql to know why there is a MAX_ATTR_LEN at all. But if you're going to be dealing with larger binary values, you probably should just redefine this to a suitable size for your installation.
dex.tracers@gmail.com changed:
 What                |Removed |Added
---------------------+--------+------
 Attachment #1006 is |0       |1
 obsolete            |        |
--- Comment #2 from dex.tracers@gmail.com ---
Created attachment 1007
--> https://bugs.openldap.org/attachment.cgi?id=1007&action=edit
Temporary workaround with memory handling
Thanks a lot for your feedback, glad to hear that my findings are valuable.
I've reworked the patch to include memory handling (freeing and reallocation with the proper size), following the way row.cols[i] is handled inside sql-wrap.c.
Since a fixed size is currently not an option for me, I'll proceed with the potentially risky approach, but add validators for data corruption on the client side (to protect data consistency if corruption happens).
Regards, Volodymyr.
--- Comment #3 from Howard Chu hyc@openldap.org ---
Really, you should not allocate a new buffer unless you know the existing one is too small. That is, you should check if (row.value_len[i] > row.col_prec[i]).
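The check suggested in comment #3 can be sketched as follows. This is a self-contained illustration under stated assumptions: fetch_full_value() is a hypothetical stand-in for the follow-up SQLGetData call (here it copies from a simulated source), and the col_prec/value_len names mirror the fields mentioned in the thread.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for re-reading the full column value via
 * SQLGetData; in this sketch it just copies from a simulated source. */
static const char *full_src;

static void fetch_full_value(char *dst, size_t len)
{
    memcpy(dst, full_src, len);
}

/* Replace the bound buffer only when the driver reported more data
 * than it can hold, i.e. when value_len > col_prec, as suggested in
 * comment #3. Otherwise keep the existing buffer untouched. */
char *maybe_refetch(char *col, size_t col_prec, size_t value_len)
{
    if (value_len > col_prec) {
        char *bigger = malloc(value_len + 1);
        fetch_full_value(bigger, value_len);
        bigger[value_len] = '\0';
        free(col);           /* drop the truncated buffer */
        return bigger;
    }
    return col;              /* buffer already holds the whole value */
}
```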
Quanah Gibson-Mount quanah@openldap.org changed:
 What             |Removed |Added
------------------+--------+-------
 Target Milestone |2.5.18  |2.5.19
Quanah Gibson-Mount quanah@openldap.org changed:
 What             |Removed |Added
------------------+--------+-------
 Target Milestone |2.5.19  |2.5.20