Ulrich Windl wrote:
> Hi!
> (Sorry this mail frontend is unable to quote properly, so I top-post)
> In
> https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b...
> the comment basically says "Note that we don't currently support Huge pages."
The comment says huge pages are not pageable, which is still true in all
current versions of Linux, as well as every other operating system.
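For the record, explicit huge pages on Linux come from a fixed pool that has to
be reserved up front (vm.nr_hugepages) and can only back anonymous memory or
files on hugetlbfs; they are never paged out, and a regular file-backed map like
LMDB's data file cannot use them. A rough sketch of what requesting them looks
like (Linux-specific, the sizes are invented):

/* Rough sketch: explicitly requesting huge pages on Linux.
 * Only anonymous memory (or hugetlbfs files) can be backed this way,
 * and the pages come from a fixed, non-pageable pool that must be
 * reserved first via vm.nr_hugepages. A regular file-backed mapping
 * such as LMDB's data file cannot be mapped with MAP_HUGETLB.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL * 1024 * 1024;  /* 64MB, a multiple of the 2MB huge page size */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* Fails (typically ENOMEM) unless huge pages were reserved
         * beforehand, e.g. vm.nr_hugepages in /etc/sysctl.conf. */
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    printf("got %zu bytes backed by huge pages at %p\n", len, p);
    munmap(p, len);
    return 0;
}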
> In
> https://www.openldap.org/lists/openldap-technical/201401/msg00213.html I had asked
> whether pages will be swapped when loading a 20GB database into 4GB of RAM, and
> you said "No". I doubted that, and in your unique way (not to call it
> "insulting") you said "You've already demonstrated multiple times that 'as far
> as you know' is not far at all."
> Howard, I think there is no need to be that rude.
I have zero tolerance for bullshit, which is what you post. You make stupid guesses
when the facts are already clearly documented, but you're too lazy to read them
yourself. Your guesses are worthless, and the actual facts are readily available.
Guesses contribute nothing but noise.
> Regarding the comment in
> https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b...,
> it seems the comment contradicts what you claimed in
> https://www.openldap.org/lists/openldap-technical/201401/msg00213.html, namely
> "We rely on the OS demand-pager to read our data and page it out when memory
> pressure from other processes is high."
> In that mail you doubted that pages would be "swapped".
I did not "doubt" - I *know*. Because again, these are readily verifiable facts,
which you are still ignorant of and continue to neglect educating yourself about.
The mmap'd pages that LMDB uses are pageable. They never get swapped. These are
two similar but distinct operations. If you would bother to read and educate
yourself you would understand that. Instead you continue to spout unsubstantiated
nonsense.
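This is trivial to check for yourself. Here is a rough sketch in plain POSIX C
(not LMDB code; the file name is just an example): map a data file read-only and
ask mincore(2) which pages are resident. When memory pressure rises, the kernel
reclaims those clean file-backed pages by simply dropping them and re-reading
them from the file later; swap is only involved for anonymous memory, never for
a mapping like this.

/* Sketch: probe residency of a file-backed mapping with mincore(2).
 * Not LMDB code; "data.mdb" is only an example path.
 * Clean file-backed pages are reclaimed by dropping them (they can be
 * re-read from the file on the next access), not by writing them to swap.
 */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.mdb", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* One read-only mapping of the whole file, as LMDB does. */
    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + pagesize - 1) / pagesize;
    unsigned char *vec = malloc(npages);
    if (!vec) return 1;

    /* mincore() reports which pages are currently resident in RAM. */
    if (mincore(map, st.st_size, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;
        printf("%zu of %zu pages resident\n", resident, npages);
    }

    munmap(map, st.st_size);
    free(vec);
    close(fd);
    return 0;
}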
> Regards,
> Ulrich
>>> Howard Chu 08.06.2020, 14:02 >>>
Ulrich Windl wrote:
>>>> Howard Chu <hyc(a)symas.com> wrote on 07.06.2020 at 22:44 in message
> <14412_1591562670_5EDD51AD_14412_294_1_79b319e0-fa23-a622-893b-b1b558a9385c@syma
> .com>:
>> Alec Matusis wrote:
>>> 2. dd reads the entire environment file into system file buffers (93GB).
>>> Then when the entire environment is cached, I run the binary with
>>> MDB_NORDAHEAD, but now it reads 80GB into shared memory, like when
>>> MDB_NORDAHEAD is not set. Is this expected? Can it be prevented?
>>
>> It's not reading anything, since the data is already cached in memory.
>>
>> Is this expected? Yes - the data is already present, and LMDB always
>> requests a single mmap for the entire size of the environment. Since
>> the physical memory is already assigned, the mmap contains it all.
>>
>> Can it be prevented - why does it matter? If any other process needs
>> to use the RAM, it will get it automatically.
>
> While reading this: the amount of memory suggests that using the hugepages
> feature could speed things up, especially if most of the mmapped data is
> expected to reside in RAM. Hugepages need to be enabled using
> vm.nr_hugepages=... in /etc/sysctl.conf (or the corresponding mechanism).
> However, I don't know whether LMDB can use them.
>
> A current AMD CPU offers these page sizes: 4k, 2M, and 1G, but some VMs (like
> Xen) can't use it. On the system I see, hugepages are 2MB in size. I don't know
> what the internal block size of LMDB is, but it would likely benefit from
> matching the hugepage size if it used them...
https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b...
We have already had this discussion, and your suggestion was irrelevant back then too.
https://www.openldap.org/lists/openldap-technical/201401/msg00213.html
Please stop posting disinformation.
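For completeness, the readahead and map-size behaviour described above is all
set when the environment is opened; a minimal sketch against the public LMDB
API (the path and map size here are invented for illustration):

/* Minimal sketch: open an LMDB environment read-only with readahead
 * disabled. The path and map size are invented for illustration.
 */
#include <lmdb.h>

int open_env_example(MDB_env **envp)
{
    MDB_env *env;
    int rc;

    rc = mdb_env_create(&env);
    if (rc) return rc;

    /* LMDB makes a single mmap of this size for the whole environment. */
    rc = mdb_env_set_mapsize(env, 100UL * 1024 * 1024 * 1024); /* e.g. 100GB, 64-bit system */
    if (rc) { mdb_env_close(env); return rc; }

    /* MDB_NORDAHEAD only turns off OS readahead; it does not evict
     * anything that is already sitting in the page cache. */
    rc = mdb_env_open(env, "/path/to/envdir", MDB_RDONLY | MDB_NORDAHEAD, 0664);
    if (rc) { mdb_env_close(env); return rc; }

    *envp = env;
    return 0;
}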
--
-- Howard Chu
CTO, Symas Corp.
http://www.symas.com
Director, Highland Sun
http://highlandsun.com/hyc/
Chief Architect, OpenLDAP
http://www.openldap.org/project/