We have encountered an unexpected performance impact merely by moving an LMDB environment to a different Linux machine with similar hardware characteristics.
We have a 93GB LMDB environment with 3 databases. The database in question is 13GB. The test executable loops over the key/value pairs in the 13GB database with a read-only cursor. For the same executable, we observe two different behaviors on the two machines (the same LMDB environment was copied to both machines with scp). The first machine has 148GB of RAM, the second has 105GB of RAM, and both have the same CPU.
On Linux kernel 3.13 / eglibc 2.19 we see the expected and desired behavior: the process ends up with 13GB of shared memory (seen in top and confirmed with /proc/<pid>/smaps below). On kernel 4.15 / glibc 2.27 the process instead reads 83GB from disk into shared memory, and the initial run takes 16 minutes instead of the 8 minutes observed on the well-behaved machine.
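For reference, the test executable is essentially a plain read-only cursor walk over the database. The following is only a minimal sketch of that pattern, not the actual test code; the environment path, the database name "dbgraph", and the map size are placeholders taken from the smaps output below:

#include <lmdb.h>
#include <stdio.h>

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_cursor *cur;
    MDB_val key, data;
    int rc;

    if ((rc = mdb_env_create(&env)) != 0)
        goto fail;
    /* The environment holds 3 named databases. */
    mdb_env_set_maxdbs(env, 3);
    /* Map size large enough to cover the existing ~93GB environment. */
    mdb_env_set_mapsize(env, 100ULL * 1024 * 1024 * 1024);
    /* Read-only open; the test never writes. Path is a placeholder. */
    if ((rc = mdb_env_open(env, "/fusionio1/lmdb/db.0/dbgraph", MDB_RDONLY, 0664)) != 0)
        goto fail;

    if ((rc = mdb_txn_begin(env, NULL, MDB_RDONLY, &txn)) != 0)
        goto fail;
    if ((rc = mdb_dbi_open(txn, "dbgraph", 0, &dbi)) != 0)
        goto fail;
    if ((rc = mdb_cursor_open(txn, dbi, &cur)) != 0)
        goto fail;

    /* Walk every key/value pair in the 13GB database. */
    size_t n = 0;
    while ((rc = mdb_cursor_get(cur, &key, &data, MDB_NEXT)) == 0)
        n++;
    if (rc != MDB_NOTFOUND)
        goto fail;
    printf("visited %zu pairs\n", n);

    mdb_cursor_close(cur);
    mdb_txn_abort(txn);
    mdb_env_close(env);
    return 0;

fail:
    fprintf(stderr, "lmdb error: %s\n", mdb_strerror(rc));
    return 1;
}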
/proc/<pid>/smaps
Machine with expected behavior:
7f3595da8000-7f4cde517000 r--s 00000000 fb:10 6442473002 /fusionio1/lmdb/db.0/dbgraph/data.mdb
Size: 97656252 kB
Rss: 13203648 kB
Pss: 13203648 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 13203648 kB
Private_Dirty: 0 kB
Referenced: 13203648 kB
Anonymous: 0 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
VmFlags: rd sh mr mw me ms sd
Machine with excessive Rss and slower read time:
7f55990aa000-7f6ce1819000 r--s 00000000 fc:02 7077908 /lmdbguest/lmdb/db.0/dbgraph/data.mdb
Size: 97656252 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 82587036 kB
Pss: 82587036 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 82587036 kB
Private_Dirty: 0 kB
Referenced: 82587036 kB
Anonymous: 0 kB
LazyFree: 0 kB
AnonHugePages: 0 kB
ShmemPmdMapped: 0 kB
Shared_Hugetlb: 0 kB