Dear everybody, I'm using one of your components (LMDB) via a Java JNI bindings implementation (https://github.com/deephacks/lmdbjni) and I'm having an issue when I deploy my LMDB file on a tmpfs filesystem in RAM.
The issue does not occur when the LMDB files are stored on a "normal" filesystem. When the data is in the tmpfs ramdisk, all of the allocated memory ends up in the Dirty area (it has not been written back to the filesystem).
Here is an example using the ramdisk:
7ce320000000-7cfc20000000 r--s 00000000 00:26 2459    /ramfs/nerd/data/db/db-en/entityEmbeddings/data.mdb
Size:           104857600 kB
Rss:              1255680 kB
Pss:              1255680 kB
Shared_Clean:           0 kB
Shared_Dirty:           0 kB
Private_Clean:          0 kB
Private_Dirty:    1255680 kB   <---
Referenced:       1255680 kB
Anonymous:              0 kB
AnonHugePages:          0 kB
Shared_Hugetlb:         0 kB
Private_Hugetlb:        0 kB
Swap:                   0 kB
SwapPss:                0 kB
KernelPageSize:         4 kB
MMUPageSize:            4 kB
Locked:                 0 kB
VmFlags: rd sh mr mw me ms sd
and here is an example without the ramdisk:
7ca4fc000000-7cbdfc000000 r--s 00000000 fd:00 11154951    /data/workspace/shared/nerd-data/db/db-en/entityEmbeddings/data.mdb
Size:           104857600 kB
Rss:               838124 kB
Pss:               838124 kB
Shared_Clean:           0 kB
Shared_Dirty:           0 kB
Private_Clean:     838124 kB   <----
Private_Dirty:          0 kB
Referenced:        764872 kB
Anonymous:              0 kB
AnonHugePages:          0 kB
ShmemPmdMapped:         0 kB
Shared_Hugetlb:         0 kB
Private_Hugetlb:        0 kB
Swap:                   0 kB
SwapPss:                0 kB
KernelPageSize:         4 kB
MMUPageSize:            4 kB
Locked:                 0 kB
VmFlags: rd sh mr mw me ms sd
According to my understanding, memory is dirty when 1) there are open transactions, or 2) the data has not been written back to the filesystem.
What I don't understand is why there is a difference between the filesystem and the ramdisk. Is there any reason? The application (mentioned above) is not writing to LMDB, only reading (using read transactions).
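To make the access pattern concrete, here is a rough sketch of the read-only path, written against the LMDB C API rather than lmdbjni (the environment path is the directory from the smaps output above; the map size and key are placeholders, not the application's actual values):

/* Rough sketch of a purely read-only LMDB consumer (LMDB C API,
 * not the actual lmdbjni code). Path, map size and key are
 * placeholders taken from / made up for this example. */
#include <stdio.h>
#include <string.h>
#include "lmdb.h"

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_val key, data;

    if (mdb_env_create(&env))
        return 1;
    mdb_env_set_mapsize(env, 107374182400UL);        /* ~100 GiB, like the mapping above */
    if (mdb_env_open(env, "/ramfs/nerd/data/db/db-en/entityEmbeddings",
                     MDB_RDONLY, 0664))               /* environment opened read-only */
        return 1;

    if (mdb_txn_begin(env, NULL, MDB_RDONLY, &txn))   /* read transaction only */
        return 1;
    if (mdb_dbi_open(txn, NULL, 0, &dbi))
        return 1;

    key.mv_size = strlen("some-entity");              /* placeholder key */
    key.mv_data = "some-entity";
    if (mdb_get(txn, dbi, &key, &data) == 0)          /* lookup, no writes anywhere */
        printf("value is %zu bytes\n", data.mv_size);

    mdb_txn_abort(txn);                               /* read txns are simply released */
    mdb_env_close(env);
    return 0;
}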
Thank you, Luca
Luca Foppiano wrote:
Dear everybody, I'm using one of your components (LMDB) via a Java JNI bindings implementation (https://github.com/deephacks/lmdbjni) and I'm having an issue when I deploy my LMDB file on a tmpfs filesystem in RAM. The issue does not occur when the LMDB files are stored on a "normal" filesystem. When the data is in the tmpfs ramdisk, all of the allocated memory ends up in the Dirty area (it has not been written back to the filesystem). Here is an example using the ramdisk:
[...]
According to my understanding, memory is dirty when 1) there are open transactions, or 2) the data has not been written back to the filesystem.
Your understanding is incorrect. Dirty pages remain dirty until they are written to stable storage (e.g., disk). A tmpfs/RAMdisk has no stable storage; all of its pages reside only in RAM. That's the point of a RAMdisk.
What I don't understand is why there is a difference between the filesystem and the ramdisk. Is there any reason? The application (mentioned above) is not writing to LMDB, only reading (using read transactions). Thank you, Luca
Using tmpfs is a waste of RAM. Just use LMDB on a regular filesystem and let the system's pagecache manager take care of memory.
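For what it's worth, on a regular filesystem it is easy to watch the pagecache do that job. A minimal Linux-only sketch (not part of LMDB) that maps a file read-only and asks mincore(2) how much of it is currently resident; point it at the absolute path of data.mdb:

/* Map a file read-only and report how many of its pages are currently
 * resident in memory, i.e. being kept around by the pagecache.
 * Linux-only sketch, not part of LMDB. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    long pagesz = sysconf(_SC_PAGESIZE);
    size_t npages = ((size_t)st.st_size + pagesz - 1) / pagesz;
    unsigned char *vec = malloc(npages);
    if (!vec) { perror("malloc"); return 1; }

    if (mincore(map, st.st_size, vec) < 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        if (vec[i] & 1)                        /* bit 0: page is resident */
            resident++;

    printf("%zu of %zu pages currently resident\n", resident, npages);
    return 0;
}

Running it before and after a query run shows how much of the file the cache is keeping around, and that memory is reclaimable under pressure, unlike tmpfs pages.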
On 22 Mar 2018, at 11:37, Howard Chu hyc@symas.com wrote:
According to my understanding, memory is dirty when 1) there are open transactions, or 2) the data has not been written back to the filesystem.
Your understanding is incorrect. Dirty pages remain dirty until they are written to stable storage (e.g., disk). A tmpfs/RAMdisk has no stable storage; all of its pages reside only in RAM. That's the point of a RAMdisk.
Ok, thanks for clarifying. I was just “hoping” LMDB would not notice the type of storage it was syncing to.
What I don't understand is why there is a difference between the filesystem and the ramdisk. Is there any reason? The application (mentioned above) is not writing to LMDB, only reading (using read transactions). Thank you, Luca
Using tmpfs is a waste of RAM. Just use LMDB on a regular filesystem and let the system's pagecache manager take care of memory.
Got your point, but does it make sense then to use a regular filesystem if the storage is a “slow” non-SSD hard drive?
Thanks, Luca
Luca Foppiano wrote:
On 22 Mar 2018, at 11:37, Howard Chu hyc@symas.com wrote:
According to my understanding, memory is dirty when 1) there are open transactions, or 2) the data has not been written back to the filesystem.
Your understanding is incorrect. Dirty pages remain dirty until they are written to stable storage (e.g., disk). A tmpfs/RAMdisk has no stable storage; all of its pages reside only in RAM. That's the point of a RAMdisk.
Ok, thanks for clarifying. I was just “hoping” LMDB would not notice the type of storage it was syncing to.
And LMDB doesn't. The behavior you see is due to how tmpfs works.
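One way to see that this is tmpfs behaviour rather than anything LMDB does: the Linux-only sketch below (independent of LMDB) maps a file read-only, faults its pages in with plain reads, and then prints this process's /proc/self/smaps entry for that file. Run against a copy of the data file on tmpfs and against one on a regular filesystem, it should show the same Clean-versus-Dirty difference reported above.

/* Minimal demonstration, independent of LMDB: map a file read-only,
 * fault its pages in with plain reads (as a read-only LMDB consumer
 * would), then print this process's /proc/self/smaps block for that
 * file. Pass the file's absolute path as the only argument. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s </absolute/path/to/file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    const unsigned char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* Read one byte per page so every page of the mapping gets faulted in. */
    long pagesz = sysconf(_SC_PAGESIZE);
    unsigned long sum = 0;
    for (off_t off = 0; off < st.st_size; off += pagesz)
        sum += map[off];
    printf("touched the whole mapping (checksum %lu)\n", sum);

    /* Print the smaps block belonging to this file. Mapping header lines
     * start with a hex address range; the per-mapping counters follow. */
    FILE *smaps = fopen("/proc/self/smaps", "r");
    if (!smaps) { perror("fopen"); return 1; }

    char line[512];
    int in_block = 0;
    while (fgets(line, sizeof line, smaps)) {
        int is_header = (line[0] >= '0' && line[0] <= '9') ||
                        (line[0] >= 'a' && line[0] <= 'f');
        if (is_header)
            in_block = (strstr(line, argv[1]) != NULL);
        if (in_block)
            fputs(line, stdout);
    }
    fclose(smaps);
    return 0;
}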
Luca Foppiano luca.foppiano@inria.fr wrote on 21.03.2018 at 18:26 in message 0453D312-E481-4F93-BF31-7A5BFF55E73C@inria.fr:
Dear everybody, I'm using one of your components (LMDB) via a Java JNI bindings implementation (https://github.com/deephacks/lmdbjni) and I'm having an issue when I deploy my LMDB file on a tmpfs filesystem in RAM.
The issue does not occur when the LMDB files are stored on a "normal" filesystem. When the data is in the tmpfs ramdisk, all of the allocated memory ends up in the Dirty area (it has not been written back to the filesystem).
So where do you expect a dirty buffer for a RAM filesystem to be written? To RAM? Then obviously copying RAM to RAM is just a waste of time. My guess is that it is working as designed.
[...]
On 22 Mar 2018, at 11:45, Ulrich Windl Ulrich.Windl@rz.uni-regensburg.de wrote:
So where do you expect a dirty buffer for a RAM filesystem to be written? To RAM? Then obviously copying RAM to RAM is just a waste of time. My guess is that it is working as designed.
Just to give you some explanation: the idea came about because in the deployment environment 1) there is no SSD or fast HD, and 2) the application uses LMDB only for reading.
It was, however, possible to get a lot of RAM, which is why it was decided to try this approach.
I see your point, anyway. I just thought LMDB would not have noticed ;-)
Thank you, Luca