Hello,
I’m using a work queue to manage concurrency.
Currently jobs manage their own transactions, and barriers are used to enforce exclusive writer access.
Naturally, MDB_NOTLS and MDB_NOLOCK are set on the environment.
However, I need to implement a concurrent map algorithm over a sub database.
The algorithm would be simple: a reader would enqueue worker jobs, each holding a strong, immutable reference to the transaction and cursor, plus copies of the key and value pointers. The transaction and cursor would be destroyed only after all of the sibling jobs have completed, been canceled, or failed.
I’ve read that transactions and cursors should not be shared between threads, but in this use case creation and destruction are serialized: the “beginReadTransaction” and “endReadTransaction” jobs are received by a single thread (or by two distinct threads) at two distinct moments in time, in a guaranteed order.
Are there any caveats in this approach?
Thanks,
A. Robinson
openldap-technical@openldap.org