>> Jürgen Baier <baier(a)semedy.com> wrote on 29.11.2017 at 14:43:
Thanks for the answer. However, I still have a follow-up question.
When I add 1 billion key/value pairs (16-byte MD5 keys) to the LMDB database
in a single transaction, I get the following results (adding the same data
in multiple transactions gives similar numbers):
Windows, without MDB_WRITEMAP: 46h
Windows, with MDB_WRITEMAP: 6h (!)
Linux (ext4), without MDB_WRITEMAP: 75h
Linux (ext4), with MDB_WRITEMAP: 73h
MDB_WRITEMAP seems to have a huge impact on write performance on
Windows, but on Linux I do not see similar improvements.
So I have two questions:
1) Could the difference between Linux and Windows performance
regarding the MDB_WRITEMAP option be related to the fact that LMDB
currently uses sparse files on Linux, but not on Windows?
2) Is there a way to speed this up on Linux? Is there a way to pre-allocate
data.mdb on startup?
It might be worth the effort to do a block trace to see the access pattern on
the block device. There are numerous tuning parameters (block device and file
system), but there is hardly a "one size fits all" setting.
On 21.11.17 21:17, Howard Chu wrote:
> Jürgen Baier wrote:
>> I have a question about LMDB (I hope this is the right mailing list
>> for such a question).
>> I'm running a benchmark (which is similar to my intended use case)
>> which does not behave as I hoped. I store 1 billion key/value pairs
>> in a single LMDB database. _In a single transaction._ The keys are
>> MD5 hash codes from random data (16 bytes) and the value is the
>> string "test".
>> The documentation about mdb_page_spill says (as far as I understand)
>> that this function is called to prevent MDB_TXN_FULL situations. Does
>> this mean that my transaction is simply too large to be handled
>> efficiently by LMDB?