Full_Name: Hallvard B Furuseth
Version: HEAD
OS:
URL: http://folk.uio.no/hbf/OpenLDAP/oid-encdec.txt
Submission from: (NULL) (129.240.202.105)
Submitted by: hallvard
The 40*X+Y rule for the first 2 components is decoded wrong, it should be

  X = val < 80 ? val/40 : 2   # X = 0, 1 or 2
  Y = val - 40*X              # Y is unlimited when X==2, <40 otherwise

See X.690 8.19.4:

  ... (X*40) + Y: NOTE - This packing of the first two object identifier
  components recognizes that only three values are allocated from the
  root node, and at most 39 subsequent values from nodes reached by
  X = 0 and X = 1.
Note also that 40*X+Y can take more than one octet, since the combined
value is base-128 encoded like any other subidentifier.
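For illustration, a correct decode of the first subidentifier could look
like this (an untested sketch with made-up names, not the enclosed code):

```c
#include <stddef.h>

/* Sketch: decode the first BER subidentifier of an OID into its two
 * components X and Y per X.690 8.19.4.  The base-128 value may span
 * several octets, so we cannot assume it fits in one byte.  Returns
 * the number of input octets consumed, or 0 on truncated input.
 */
static size_t
decode_first_arcs( const unsigned char *der, size_t len,
    unsigned long *x, unsigned long *y )
{
    unsigned long val = 0;
    size_t i;

    for ( i = 0; i < len; i++ ) {
        val = (val << 7) | (der[i] & 0x7f);
        if ( !(der[i] & 0x80) ) {           /* last octet of subidentifier */
            *x = val < 80 ? val / 40 : 2;   /* X = 0, 1 or 2 */
            *y = val - 40 * *x;             /* Y unlimited when X == 2 */
            return i + 1;
        }
    }
    return 0;   /* input ended in the middle of the subidentifier */
}
```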
The functions allow buffer overruns:
- ber_decode_oid() if the OID has many small components.
- ber_encode_oid() for negative input (which becomes a large unsigned
  long), or if sizeof(long) == 8.
The latter can just as easily be avoided by producing the bytes
backwards and then reversing.
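Something like this (an untested sketch of the backwards-then-reverse
idea with made-up names, not the enclosed code):

```c
#include <stddef.h>

/* Sketch: encode one subidentifier in base 128 without first computing
 * how many octets it needs.  Emitting the low-order 7 bits first makes
 * the bound check a single comparison per octet, so it works for any
 * width of unsigned long (including 8-byte longs).  Returns the number
 * of octets written, or 0 if the buffer is too small.
 */
static size_t
encode_subid( unsigned char *buf, size_t buflen, unsigned long val )
{
    size_t n = 0, i;

    do {
        unsigned char c = val & 0x7f;
        if ( n > 0 )            /* the first octet emitted here ends up */
            c |= 0x80;          /* last, so it gets no continuation bit */
        if ( n >= buflen )
            return 0;
        buf[n++] = c;
        val >>= 7;
    } while ( val );

    /* Reverse into the wire order: most significant septet first. */
    for ( i = 0; i < n / 2; i++ ) {
        unsigned char tmp = buf[i];
        buf[i] = buf[n - 1 - i];
        buf[n - 1 - i] = tmp;
    }
    return n;
}
```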
They do not check much for invalid input or too-large values. They
possibly ought to be a bit paranoid about that since they are used for
certificates, I don't know. E.g. check that the BER encoding uses the
"fewest possible octets" (X.690 8.19.2), i.e. no initial 0x80 octet in
subidentifiers. ber_encode_oid() also seems a bit inconsistent about
which errors it accepts and which ones it rejects; e.g. garbage after
the first 2 components is OK.
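The minimality check could be as simple as rejecting any subidentifier
whose first octet is 0x80 (again an untested sketch, names made up):

```c
#include <stddef.h>

/* Sketch: validate that a BER-encoded OID body uses the minimal
 * encoding required by X.690 8.19.2.  Each subidentifier is a base-128
 * value with the continuation bit (0x80) set on all octets but the
 * last; a leading 0x80 octet would be a redundant zero septet, so
 * reject it.  Returns 1 if the encoding is minimal and complete,
 * else 0.
 */
static int
oid_is_minimal( const unsigned char *der, size_t len )
{
    size_t i;
    int at_start = 1;           /* at the first octet of a subidentifier */

    if ( len == 0 )
        return 0;
    for ( i = 0; i < len; i++ ) {
        if ( at_start && der[i] == 0x80 )
            return 0;           /* non-minimal: leading zero septet */
        at_start = !(der[i] & 0x80);
    }
    return at_start;            /* last subidentifier must be complete */
}
```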
The "assert(foo != NULL)"s are placed after "der = foo->bv_val", so
they fire too late to catch anything :-)
I enclose untested new versions. I will test and commit when I have
time if someone says OK, but it depends e.g. on how paranoid the
routines should or should not be. Or on whether they should trust the
ber values to end with \0, for that matter.
Finally, libldap/tls.c does not check whether ber_decode_oid() fails.