Re-sending this once more, as it appears not to have made it through on either of the two previous attempts:
I have recently taken up stewardship of the Ruby binding for LMDB. It did not take me long to find problems in its design pertaining to concurrent transactions in a multithreaded environment. I would like to fix these problems, but even after a careful reading of the LMDB documentation I still have a few questions.
First, I would like to confirm my understanding that there may be only one active read-write transaction per environment, irrespective of processes and/or threads attached, although this transaction may be nested.
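For concreteness, my mental model at the C level is roughly the following. This is only a sketch: error handling is omitted and the environment path is made up. If the picture is wrong in some respect I would be glad to be corrected.

    #include <pthread.h>
    #include "lmdb.h"

    /* My understanding: mdb_txn_begin() for a read-write transaction takes
     * the environment's writer mutex, so a second writer (whether another
     * thread or another process) simply blocks until the first commits or
     * aborts. */
    static void *writer(void *arg)
    {
        MDB_env *env = arg;
        MDB_txn *txn;
        mdb_txn_begin(env, NULL, 0, &txn); /* blocks while another writer is open */
        /* ... mdb_put() and friends ... */
        mdb_txn_commit(txn);
        return NULL;
    }

    int main(void)
    {
        MDB_env *env;
        pthread_t a, b;

        mdb_env_create(&env);
        mdb_env_open(env, "/tmp/example-env", 0, 0664); /* path is illustrative */

        pthread_create(&a, NULL, writer, env);
        pthread_create(&b, NULL, writer, env); /* serialized behind the first */
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        mdb_env_close(env);
        return 0;
    }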
This raises some subsidiary questions:
1) The documentation states specifically that *read-write* transactions may be nested, but what about read-only?
2) Must (read-only) transactions always be in a single hierarchy per thread or can there be many “roots” at once?
3) Given that the relevant LMDB structs appear not to discriminate between transaction types, are there consequences for opening, e.g., a read-write transaction subordinate to a read-only one? (The call sequence I mean is sketched just below this list.)
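To make question 3 concrete, what I am asking about is essentially this (variable and function names are just for illustration):

    #include "lmdb.h"

    /* A read-only transaction used as the parent of a read-write one. The
     * MDB_txn type is opaque, so nothing in the signature stops me from
     * writing this; what I am unsure of is what LMDB actually does with it. */
    int nest_write_under_read(MDB_env *env)
    {
        MDB_txn *ro, *rw;
        int rc;

        rc = mdb_txn_begin(env, NULL, MDB_RDONLY, &ro);
        if (rc != 0) return rc;

        /* read-write child of a read-only parent: is this an error, undefined
         * behaviour, or silently tolerated? */
        rc = mdb_txn_begin(env, ro, 0, &rw);

        if (rc == 0) mdb_txn_abort(rw);
        mdb_txn_abort(ro);
        return rc;
    }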
The reason I ask is that the current (inherited) design of the binding keeps a hash table of transactions keyed by thread; it does not distinguish between read-write and read-only, and it affords only a single “root” transaction per thread (whether read-write or read-only). Even without the memory leaks, double-frees, deadlocks and other bad behaviour I have observed, I could infer that this structure is probably wrong.
Based on what I can glean from the LMDB documentation, I probably want to separate the read-write and read-only transactions, make the former a singleton (since there can be only one read-write transaction per environment at a time), artificially flatten the latter (since it probably isn’t meaningful to nest a read-only transaction anyway), and then wrap the transaction code so it does the right thing; I sketch the shape of what I mean below. What I suppose I’m looking for here is confirmation that my assumptions are correct.
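Stripped of the Ruby machinery, the bookkeeping I have in mind would look roughly like this. It is purely a sketch of the proposed shape, not the binding’s actual code, and all of the names are made up:

    #include <pthread.h>
    #include "lmdb.h"

    /* One environment, one writer: serialize write transactions ourselves so
     * only a single thread at a time even attempts to open one. */
    typedef struct {
        MDB_env        *env;
        pthread_mutex_t write_lock;
    } env_state;

    /* One flat read-only transaction per thread; "nesting" just reuses it. */
    static __thread MDB_txn *read_txn;

    static int with_write_txn(env_state *st, int (*body)(MDB_txn *))
    {
        MDB_txn *txn;
        int rc;

        pthread_mutex_lock(&st->write_lock);
        rc = mdb_txn_begin(st->env, NULL, 0, &txn);
        if (rc == 0) {
            rc = body(txn);
            if (rc == 0) rc = mdb_txn_commit(txn);
            else         mdb_txn_abort(txn);
        }
        pthread_mutex_unlock(&st->write_lock);
        return rc;
    }

    static int with_read_txn(env_state *st, int (*body)(MDB_txn *))
    {
        int rc = 0, opened = 0;

        if (read_txn == NULL) {            /* flatten: at most one per thread */
            rc = mdb_txn_begin(st->env, NULL, MDB_RDONLY, &read_txn);
            if (rc != 0) return rc;
            opened = 1;
        }
        rc = body(read_txn);
        if (opened) {                      /* only the outermost caller closes */
            mdb_txn_abort(read_txn);
            read_txn = NULL;
        }
        return rc;
    }

The real thing would obviously need to handle nested write transactions and proper error propagation, but that is the general idea.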
Thanks in advance,
--
Dorian Taylor
Make things. Make sense.
https://doriantaylor.com