Ulrich Windl wrote:
>>> Howard Chu <hyc(a)symas.com> wrote on 2020-06-07 at
22:44 in message
<14412_1591562670_5EDD51AD_14412_294_1_79b319e0-fa23-a622-893b-b1b558a9385c@syma.com>:
> Alec Matusis wrote:
>> 2. dd reads the entire environment file into system file buffers (93GB).
>> Then when the entire environment is cached, I run the binary with
>> MDB_NORDAHEAD, but now it reads 80GB into shared memory, like when
>> MDB_NORDAHEAD is not set. Is this expected? Can it be prevented?
>
> It's not reading anything, since the data is already cached in memory.
>
> Is this expected? Yes - the data is already present, and LMDB always
> requests a single mmap for the entire size of the environment. Since
> the physical memory is already assigned, the mmap contains it all.
>
> Can it be prevented - why does it matter? If any other process needs
> to use the RAM, it will get it automatically.
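The cache-warming step Alec describes can be sketched in a couple of lines; this is a minimal, self-contained illustration with a stand-in file, not the real 93GB environment:

```shell
# Stand-in for an LMDB data file (the real one would be the
# environment's data.mdb); 8 MB is just for demonstration.
dd if=/dev/zero of=/tmp/lmdb_demo.mdb bs=1M count=8 2>/dev/null
# Stream the file once so its pages land in the kernel page cache;
# a later mmap of the same file then faults in from RAM, not disk,
# which is why LMDB appears to "read" nothing afterwards.
dd if=/tmp/lmdb_demo.mdb of=/dev/null bs=1M 2>/dev/null
echo warmed
```

On Linux you can watch the "buff/cache" column of `free` grow during the first dd to confirm the pages were cached.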
While reading this: the amount of memory suggests that the hugepages
feature could speed things up, especially if most of the mmapped data is
expected to reside in RAM. Hugepages need to be enabled via
vm.nr_hugepages=... in /etc/sysctl.conf (or the equivalent). However, I
don't know whether LMDB can use them.
A current AMD CPU offers these page sizes: 4 kB, 2 MB, and 1 GB, but some
hypervisors (like Xen) can't use all of them. On the system I'm looking at,
hugepages are 2 MB in size. I don't know what LMDB's internal page size is,
but it would likely benefit from matching the hugepage size if it used
them...
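For completeness, the sysctl knob mentioned above looks like this (an illustrative fragment; the page count is arbitrary, and whether LMDB's file-backed mmap could use such pages at all is exactly the open question):

```
# /etc/sysctl.conf -- reserve 1024 x 2 MB hugepages (illustrative count)
vm.nr_hugepages = 1024
```

The setting is applied with `sysctl -p`, and the current reservation is visible in /proc/meminfo (HugePages_Total / HugePages_Free).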
https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b...
We have already had this discussion, and your suggestion was irrelevant back then too.
https://www.openldap.org/lists/openldap-technical/201401/msg00213.html
Please stop posting disinformation.
--
-- Howard Chu
CTO, Symas Corp.
http://www.symas.com
Director, Highland Sun
http://highlandsun.com/hyc/
Chief Architect, OpenLDAP
http://www.openldap.org/project/