Hi,

If it were simply writing to the memory map, shouldn't memory usage decrease as soon as everything has been written? Memory usage continues to be high for as long as the database is open, even if the program just waits afterwards. Is that to be expected as well? Because that would mean the process would simply run out of memory if more data is written than the machine has RAM.
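
For reference, a simplified sketch of the kind of import loop I am running; the map size, record count, and database path here are illustrative rather than the actual values from our product:

/* Minimal sketch: one 10MB record per write transaction, MDB_WRITEMAP enabled. */
#include <lmdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    MDB_env *env;
    MDB_dbi dbi;
    MDB_txn *txn;
    MDB_val key, data;
    int rc;

    mdb_env_create(&env);
    mdb_env_set_mapsize(env, (size_t)6 << 30);               /* room for ~5GB of data */
    rc = mdb_env_open(env, "./testdb", MDB_WRITEMAP, 0664);  /* directory must exist */
    if (rc) { fprintf(stderr, "mdb_env_open: %s\n", mdb_strerror(rc)); return 1; }

    const size_t recsize = (size_t)10 << 20;                 /* 10MB per record */
    char *buf = malloc(recsize);
    memset(buf, 'x', recsize);

    for (unsigned i = 0; i < 500; i++) {                     /* ~5GB total */
        rc = mdb_txn_begin(env, NULL, 0, &txn);
        if (rc) break;
        if (i == 0)
            mdb_dbi_open(txn, NULL, 0, &dbi);                /* default (unnamed) DB */

        key.mv_size  = sizeof(i);  key.mv_data  = &i;
        data.mv_size = recsize;    data.mv_data = buf;
        rc = mdb_put(txn, dbi, &key, &data, 0);
        if (rc) { mdb_txn_abort(txn); break; }

        /* With MDB_WRITEMAP the commit writes through the shared file-backed map. */
        rc = mdb_txn_commit(txn);
        if (rc) break;
    }

    free(buf);
    mdb_env_close(env);
    return rc ? 1 : 0;
}

The run without MDB_WRITEMAP is identical except that the flag is omitted from mdb_env_open.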

Regards,
Luc

Kind regards,
Luc Vlaming
KXA Software Innovations

formerly Dysi Software Innovations

visiting address:
Hoendiep Noordzijde 21 
9843TG, Grijpskerk 

Luc Vlaming
tel: 06 16 353 426
email: vlaming@softwareinnovations.nl 
url: www.softwareinnovations.nl


On Wed, Jan 15, 2014 at 11:10 PM, Howard Chu <hyc@symas.com> wrote:
Luc Vlaming wrote:
Hi,

Currently I am adding support for using LMDB as a new storage backend for
one of our products.
At the moment I am testing bulk data imports into LMDB, using transactions that
each contain a single 10MB record. The total database size afterwards is 5GB. I also
tested with 1MB records.

I noticed a very odd thing: when using the MDB_WRITEMAP option, memory usage
grows very quickly and linearly with the amount of data stored in the
database (memory usage ends up a bit higher than 5GB). When not using
MDB_WRITEMAP, however, memory usage stays very low. Does anyone have a
suggestion as to what might be wrong and what causes such different behaviour
with and without the memory map option?

There is nothing wrong. It is simply writing to the shared memory map.

--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/