Ulrich Windl wrote:
(Sorry, this mail frontend is unable to quote properly, so I top-post.)
the comment basically says "Note that we don't currently support Huge
The comment says huge pages are not pageable, which is still true in all
current versions of Linux, as well as in every other operating system.
I had asked
whether pages will be swapped when loading a 20GB database into 4GB of RAM, and you said
"No". I doubted that, and in your unique way (not to call it
"insulting") you said: "You've already demonstrated multiple times that
"as far as you know" is not far at all."
Howard, I think there is no need to be that rude.
I have zero tolerance for bullshit, which is what you post. You make stupid guesses
when the facts are already clearly documented, but you're too lazy to read them
yourself. Your guesses are worthless, and the actual facts are readily available.
Guesses contribute nothing but noise.
Regarding the comment in
it seems the comment contradicts what you claimed in
, namely "
We rely on the OS
* demand-pager to read our data and page it out when memory
* pressure from other processes is high.
". In mail you doubted that pages would be "swapped".
I did not "doubt" - I *know*. Because again, these are readily verifiable facts,
which you are still ignorant of, and which you continue to neglect educating yourself about.
The mmap'd pages that LMDB uses are pageable. They never get swapped. These are
two similar but distinct operations. If you would bother to read and educate
yourself you would understand that. Instead you continue to spout unsubstantiated guesses.
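To illustrate the distinction (a minimal sketch, not LMDB's actual code): a file-backed mapping like LMDB's environment is demand-paged from the file, and under memory pressure the kernel drops clean pages (or writes dirty ones back to the file) rather than sending them to swap. The `madvise(MADV_RANDOM)` call below is only an analogy for what a readahead-disabling flag like MDB_NORDAHEAD asks of the kernel; the file name and sizes are made up for the example.

```python
import mmap
import os
import tempfile

# Create a small file standing in for an LMDB environment file.
path = os.path.join(tempfile.mkdtemp(), "data.mdb")
with open(path, "wb") as fh:
    fh.write(b"\x00" * (1024 * 1024))  # 1 MiB of zeroes

# Map the whole file, as LMDB maps the whole environment. No I/O
# happens yet: pages are faulted in from the file on first access
# (demand paging). Under memory pressure the kernel simply drops
# clean file-backed pages, or writes dirty ones back to the file --
# it does not write them to swap.
f = open(path, "r+b")
mm = mmap.mmap(f.fileno(), 0)

# Roughly analogous to MDB_NORDAHEAD: advise the kernel not to read
# ahead, so only pages actually touched are faulted in.
# (mmap.madvise is available on POSIX systems with Python >= 3.8.)
if hasattr(mm, "madvise"):
    mm.madvise(mmap.MADV_RANDOM)

# Touching a byte faults its page in from the page cache / file.
first_byte = mm[0]
print(first_byte)  # 0

mm.close()
f.close()
```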
>>> Howard Chu 08.06.2020, 14:02 >>>
Ulrich Windl wrote:
>>>> Howard Chu <hyc(a)symas.com> wrote on 07.06.2020 at 22:44 in
>> Alec Matusis wrote:
>>> 2. dd reads the entire environment file into system file buffers (93GB).
>>> Then when the entire environment is cached, I run the binary with
>>> MDB_NORDAHEAD, but now it reads 80GB into shared memory, like when
>>> MDB_NORDAHEAD is not set. Is this expected? Can it be prevented?
>> It's not reading anything, since the data is already cached in memory.
>> Is this expected? Yes - the data is already present, and LMDB always
>> requests a single mmap for the entire size of the environment. Since
>> the physical memory is already assigned, the mmap contains it all.
>> Can it be prevented - why does it matter? If any other process needs
>> to use the RAM, it will get it automatically.
> While reading this: the amount of memory could suggest that using the
> hugepages feature could speed things up, especially if most of the mmapped data
> is expected to reside in RAM. Hugepages need to be enabled using
> vm.nr_hugepages=... in /etc/sysctl.conf (or the equivalent). However, I don't
> know whether LMDB can use them.
> A current AMD CPU offers these page sizes: 4k, 2M, and 1G, but some VMs (like
> Xen) can't use it. On the system I see, hugepages are 2MB in size. I don't
> know what the internal block size of LMDB is, but it would likely benefit from
> matching the hugepage size if using them...
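For reference on the quoted suggestion: the hugepage size that `vm.nr_hugepages` reserves can be read from `/proc/meminfo` on Linux. A minimal sketch, with a made-up sample string standing in for a system where hugepages are 2MB, as in Ulrich's example:

```python
def hugepage_size_kb(meminfo_text):
    """Return the Hugepagesize value in kB from /proc/meminfo text, or None."""
    for line in meminfo_text.splitlines():
        if line.startswith("Hugepagesize:"):
            # Line format: "Hugepagesize:       2048 kB"
            return int(line.split()[1])
    return None

# Sample text; on a real system, read /proc/meminfo instead.
sample = "MemTotal:       16384000 kB\nHugepagesize:       2048 kB\n"
print(hugepage_size_kb(sample))  # 2048
```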
We have already had this discussion, and your suggestion was irrelevant back then too.
Please stop posting disinformation.
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/