Hallvard B Furuseth wrote:
Michael Ströder writes:
[Bug c/27180] New: pointer arithmetic overflow handling broken http://gcc.gnu.org/ml/gcc-bugs/2006-04/msg01297.html
That code, "(char *)buf + (unsigned long)-1", yields undefined behavior if buf points at an object smaller than (unsigned long)-1 bytes. Pointer arithmetic is only valid within a single object.
However, the bug it is marked as a duplicate of miscompiles valid code:

	int *start /* size 100 */, *tmp;
	...
	for (tmp = start + 100; tmp > start; --tmp);

OpenLDAP has code which scans a struct berval backwards from bv_val + bv_len down to bv_val.
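The kind of wraparound check at issue looks roughly like this (an illustrative sketch, not the exact code from the bug report):

	/* Hypothetical wraparound test (illustration only): this relies on
	 * undefined behavior, since forming buf + len is only defined while
	 * the result stays within, or one past the end of, the object that
	 * buf points into. */
	int would_wrap( char *buf, unsigned long len )
	{
		return buf + len < buf;
	}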
Hm, that may explain some of the various gcc optimizer-related bugs we've encountered recently. According to the gcc bugzilla it was reported against 4.1 and 4.2, and fixed two years ago. So I expect any recent build of gcc should be OK here.
But the bug description is quite explicit: it was a problem with constant folding, e.g. "pointer + constant" compared against another "pointer + constant". OpenLDAP code scanning a struct berval would not be using any constants; bv_len is a variable. As such, I doubt that any struct berval processing loops are affected by this bug.
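For illustration, such a backward scan looks something like this (a sketch of the pattern, not any particular OpenLDAP routine):

	#include <lber.h>

	/* Scan a struct berval backwards; bv_len is a runtime variable,
	 * so no "pointer + constant" expressions are involved. */
	void scan_backwards( struct berval *bv )
	{
		char *p;

		for ( p = bv->bv_val + bv->bv_len; p > bv->bv_val; p-- ) {
			/* examine p[-1] here */
		}
	}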
And for the record, checking for pointer wraparound is the wrong approach. All you want to know is whether the new data will fit into the provided buffer, so that is precisely what you should test. In plain terms, you can test this in two phases: 1) would this data ever fit in this buffer? 2) will the data fit in the buffer as it is now?
Given

	char buf[MYSIZE];
	ber_len_t len;		/* length of current buffer content */
	struct berval *in;	/* passed in, to be moved into buf */

you just test:

	if ( in->bv_len > MYSIZE || in->bv_len + len > MYSIZE )
		return FAIL;
If the new data is "too large", for whatever definition of "too large", you bail out without ever having to think about word size issues. Writing code that depends on particular word sizes and arithmetic overflow and wraparound behavior is just asking for trouble.
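Putting that together, a minimal sketch of such an append routine (the buffer size, function name, and FAIL value here are made up for illustration, not actual OpenLDAP API):

	#include <string.h>
	#include <lber.h>

	#define MYSIZE	1024	/* assumed buffer size for this sketch */
	#define FAIL	(-1)

	static char buf[MYSIZE];
	static ber_len_t len;	/* length of current buffer content */

	int append_berval( struct berval *in )
	{
		/* Phase 1: could this data ever fit?
		 * Phase 2: does it fit alongside what is already there?
		 * The first test also keeps the addition in the second test
		 * from wrapping, since both operands are bounded by MYSIZE. */
		if ( in->bv_len > MYSIZE || in->bv_len + len > MYSIZE )
			return FAIL;

		memcpy( buf + len, in->bv_val, in->bv_len );
		len += in->bv_len;
		return 0;
	}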