Nat! wrote:
On 24.02.2014, at 04:21, btb@bitrate.net wrote:
generally speaking, i’d discourage you from looking at that limit from the perspective of “how large will my data be?”. instead, consider it a safeguard for the os/environment: evaluate your particular environment, and choose values across your various instances such that, were something unexpected to happen, the entire disk/partition/etc is not consumed to the point of choking out the os [or other, more important processes, etc].
-ben
I think this is valid if you're thinking in terms of "this is my database
and this is the server it runs on." I am more trying to use lmdb as a persistable hashtable that I could put into a variety of applications that I and other people would use. I have no idea beforehand what the use is going to be on other people's devices, and most probably those other people wouldn't know either.
Currently I am making a clone of the environment, then creating a new,
bigger environment and copying from the small one into the big one. This seems to work so far, but it just doesn't feel right to me.
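
As a minimal sketch of that clone-and-copy workaround in LMDB's C API (the function name, paths, and error handling are illustrative, not from the original message; new_path must be an existing, empty, writable directory):

#include <stddef.h>
#include "lmdb.h"

/* Clone a full environment into a larger one.  old_path holds the
 * existing data; new_path must be an existing, empty, writable
 * directory.  On success *out is an open env at new_path. */
int grow_by_copy(const char *old_path, const char *new_path,
                 size_t new_mapsize, MDB_env **out)
{
    MDB_env *env;
    int rc;

    /* Open the too-small environment read-only and snapshot it. */
    rc = mdb_env_create(&env);
    if (rc) return rc;
    rc = mdb_env_open(env, old_path, MDB_RDONLY, 0664);
    if (rc == 0)
        rc = mdb_env_copy(env, new_path);   /* consistent copy */
    mdb_env_close(env);
    if (rc) return rc;

    /* Reopen the copy, setting the larger mapsize before open. */
    rc = mdb_env_create(&env);
    if (rc) return rc;
    rc = mdb_env_set_mapsize(env, new_mapsize);
    if (rc == 0)
        rc = mdb_env_open(env, new_path, 0, 0664);
    if (rc) {
        mdb_env_close(env);
        return rc;
    }
    *out = env;
    return 0;
}

mdb_env_copy takes a consistent snapshot via a read transaction, so the source environment can stay live while it is being cloned.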
I certainly haven't seen the behavior you describe, but I seldom test on MacOS or HFS+. I would use FFS, since it supports sparse files.
On Windows, Linux, and FreeBSD, there's no problem increasing the mapsize and preserving the existing data.
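
For comparison, a minimal sketch of growing in place with mdb_env_set_mapsize; per its documentation the call is also legal after mdb_env_open as long as no transactions are active in the calling process. The path and sizes below are made-up values:

#include <stddef.h>
#include "lmdb.h"

/* Open an environment small, then enlarge the map in place. */
int open_and_grow(MDB_env **out)
{
    MDB_env *env;
    int rc;

    rc = mdb_env_create(&env);
    if (rc) return rc;

    /* Initial 64 MiB map; with sparse files this reserves address
     * space without consuming 64 MiB of disk up front. */
    rc = mdb_env_set_mapsize(env, 64UL * 1024 * 1024);
    if (rc == 0)
        rc = mdb_env_open(env, "./db", 0, 0664);

    /* Later (e.g. after mdb_put returns MDB_MAP_FULL), with no
     * transactions active in this process, just raise the limit;
     * the pages already written are preserved. */
    if (rc == 0)
        rc = mdb_env_set_mapsize(env, 256UL * 1024 * 1024);

    if (rc) {
        mdb_env_close(env);
        return rc;
    }
    *out = env;
    return 0;
}

A common pattern is to catch MDB_MAP_FULL from mdb_put, abort the transaction, call mdb_env_set_mapsize with a larger value, and retry; the data already on disk is untouched.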