Hello,
A friend told me about his findings on slapd memory usage.
setup: openldap-2.4.47, back_mdb, slapd running as PID 1 inside a docker container; docker host and docker container based on Debian 9 / 64 bit
finding: with minimal / trivial data, slapd happily consumes 20% of available physical memory:
# top -p $( pidof slapd)
top - 21:47:10 up 10 days, 10 min,  5 users,  load average: 0,06, 0,08, 0,09
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0,9 us,  0,5 sy,  0,0 ni, 97,2 id,  1,3 wa,  0,0 hi,  0,1 si,  0,0 st
KiB Mem :  3926252 total,   142672 free,  1517516 used,  2266064 buff/cache
KiB Swap:   975868 total,   913316 free,    62552 used.  2065320 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2604 165534    20   0 1039308 732000   5628 S   0,0 18,6   0:00.38 slapd
workaround: https://discuss.linuxcontainers.org/t/empty-openldap-slapd-consuming-800-mb-... -> limit open files to 1024
# top -p $( pidof slapd)
top - 21:49:16 up 10 days, 12 min,  5 users,  load average: 0,07, 0,11, 0,10
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1,7 us,  0,6 sy,  0,0 ni, 90,6 id,  7,1 wa,  0,0 hi,  0,0 si,  0,0 st
KiB Mem :  3926252 total,   863500 free,   796492 used,  2266260 buff/cache
KiB Swap:   975868 total,   913320 free,    62548 used.  2786248 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2993 165534    20   0   48864   7820   5684 S   0,0  0,2   0:00.01 slapd
As far as I can tell from a short test, there are no functional drawbacks.
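For reference, the limit a running process actually sees can be read from /proc/<pid>/limits, or programmatically via prlimit(2). A minimal sketch (Linux/glibc only, nothing slapd-specific):

/* nofile.c - print the RLIMIT_NOFILE of a given PID (default: self).
 * Minimal sketch for Linux/glibc; build with: cc -o nofile nofile.c
 * Reading another PID's limits may require a matching UID or
 * CAP_SYS_RESOURCE. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

static void print_lim(const char *name, rlim_t v)
{
    if (v == RLIM_INFINITY)
        printf("%s: unlimited\n", name);
    else
        printf("%s: %llu\n", name, (unsigned long long)v);
}

int main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();
    struct rlimit rl;

    /* new_limit == NULL means: only read the current values */
    if (prlimit(pid, RLIMIT_NOFILE, NULL, &rl) != 0) {
        perror("prlimit");
        return 1;
    }
    print_lim("soft (rlim_cur)", rl.rlim_cur);
    print_lim("hard (rlim_max)", rl.rlim_max);
    return 0;
}

Running it against the slapd PID before and after applying the workaround should show exactly which nofile values slapd ends up with.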
Any idea why the memory usage is so different?
Andreas
A. Schulze wrote:
Any idea why the memory usage is so different?
If the only difference is that you set the open file limit to 1024, then it sounds like whatever your default file limit is is much larger.
Howard Chu:
If the only difference is that you set the open file limit to 1024, then it sounds like whatever your default file limit is is much larger.
Hello Howard,
Yes, it's unlimited by default. Tons of other daemons also run without this limit here.
But in contrast, all the other daemons don't let their memory usage explode. Maybe it's worth finding out what the difference is?
Andreas
A. Schulze wrote:
But in contrast, all the other daemons don't let their memory usage explode. Maybe it's worth finding out what the difference is?
That *is* the difference. slapd allocates an array of connection info, one slot per file descriptor. Running with "unlimited" files is clearly a bad idea here.
In general, running with larger limits than you actually need is a bad idea. This is elementary system administration.
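To make that concrete, here is a stripped-down sketch of the pattern (an illustration only, not the actual slapd source, and struct connection here is hypothetical): a server that keeps per-connection state indexed by file descriptor sizes its table from RLIMIT_NOFILE at startup, so the footprint scales with the configured limit rather than with the number of connections actually in use.

/* Illustration of a per-fd connection table sized from the descriptor
 * limit at startup -- simplified, not the real OpenLDAP code. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <time.h>

struct connection {            /* hypothetical per-connection state */
    int     fd;
    time_t  last_activity;
    char    peername[256];
    /* ... locks, buffers, pending operations, etc. ... */
};

int main(void)
{
    struct rlimit rl;
    size_t nfds;
    struct connection *conns;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    nfds = (size_t)rl.rlim_cur;

    /* One slot per possible file descriptor: with nofile=1024 this is
     * trivial, with a limit in the millions it is hundreds of MB. */
    conns = calloc(nfds, sizeof(*conns));
    if (conns == NULL) {
        perror("calloc");
        return 1;
    }
    printf("%zu slots x %zu bytes = %zu bytes\n",
           nfds, sizeof(*conns), nfds * sizeof(*conns));

    free(conns);
    return 0;
}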
Howard Chu wrote:
That *is* the difference. slapd allocates an array of connection info, one slot per file descriptor. Running with "unlimited" files is clearly a bad idea here.
In general, running with larger limits than you actually need is a bad idea. This is elementary system administration.
Couldn't one s/running with larger limits/consuming more resources/ and s/system administration/software development/ and produce an equally valid argument though? (If anything, larger-than-necessary limits seem the more justifiable of the two to me -- it allows for future growth, which can be hard to predict.)
More to the crux of the matter: why does slapd need to preallocate all OPEN_MAX possible connection info records at once instead of dynamically as connections are actually created?
I've actually been bitten by the inverse problem when slapd ran up against my distro's default FD limit (causing no small amount of grief to the various client systems on my network). I of course remedied this by cranking up said limit a fair amount, but the number I chose was basically just a hand-wavy, seat-of-the-pants guess at something that would last a while before I had to tweak it again, and thus means that slapd's going to be sitting on significantly more memory than it really needs.
So as an administrator I'm left with the question of how to balance slapd's file descriptor requirements against my desire to not have it tying up a bunch of memory it's never actually going to use. It seems like that balancing act would be a lot easier if slapd could dynamically allocate memory for connections.
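For concreteness, the sort of scheme I have in mind would look something like the sketch below (purely illustrative, not a patch): keep only a pointer per possible descriptor and allocate the heavyweight per-connection state lazily, when a descriptor actually comes into use.

/* Illustrative sketch of on-demand connection allocation: an array of
 * pointers indexed by fd, with the big structures allocated lazily. */
#include <stdlib.h>

struct connection {
    int  fd;
    char peername[256];
    /* ... buffers, operation queues, etc. ... */
};

static struct connection **conn_table;   /* one pointer per possible fd */
static size_t conn_table_size;

int conn_table_init(size_t max_fds)
{
    conn_table = calloc(max_fds, sizeof(*conn_table));
    if (conn_table == NULL)
        return -1;
    conn_table_size = max_fds;
    return 0;
}

struct connection *conn_get(int fd)
{
    if (fd < 0 || (size_t)fd >= conn_table_size)
        return NULL;
    if (conn_table[fd] == NULL) {
        /* allocate the expensive part only when the fd is in use */
        conn_table[fd] = calloc(1, sizeof(struct connection));
        if (conn_table[fd] != NULL)
            conn_table[fd]->fd = fd;
    }
    return conn_table[fd];
}

void conn_put(int fd)
{
    if (fd >= 0 && (size_t)fd < conn_table_size && conn_table[fd] != NULL) {
        free(conn_table[fd]);
        conn_table[fd] = NULL;
    }
}

The fd-indexed lookup stays O(1); only the per-connection structures become lazy.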
Zev Weiss
(Apologies for any strangeness with the formatting/headers of this message; I wasn't subscribed to the list when the message quoted above was sent and hence have sort of manually synthesized this reply via copy/paste from the mailman web archive.)
Zev Weiss wrote:
Couldn't one s/running with larger limits/consuming more resources/ and s/system administration/software development/ and produce an equally valid argument though? (If anything, larger-than-necessary limits seem the more justifiable of the two to me -- it allows for future growth, which can be hard to predict.)
slapd is using the resources that the system says it is able to use.
More to the crux of the matter: why does slapd need to preallocate all OPEN_MAX possible connection info records at once instead of dynamically as connections are actually created?
Already answered. http://www.openldap.org/lists/openldap-technical/201902/msg00011.html
I've actually been bitten by the inverse problem when slapd ran up against my distro's default FD limit (causing no small amount of grief to the various client systems on my network). I of course remedied this by cranking up said limit a fair amount, but the number I chose was basically just a hand-wavy, seat-of-the-pants guess at something that would last a while before I had to tweak it again, and thus means that slapd's going to be sitting on significantly more memory than it really needs.
So as an administrator I'm left with the question of how to balance slapd's file descriptor requirements against my desire to not have it tying up a bunch of memory it's never actually going to use. It seems like that balancing act would be a lot easier if slapd could dynamically allocate memory for connections.
It's not tying up a bunch of memory. It's using up some address space, but if it is indeed unused, the OS will not dedicate any actual RAM to that address space.
People really need to learn more about computer system architecture and virtual memory.
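A minimal Linux-only sketch of that distinction (nothing slapd-specific; it just watches VmRSS in /proc/self/status while first reserving address space and then actually touching it):

/* Reserve 512MB of anonymous address space, then touch it, printing
 * the process's resident set size at each step. Linux-only sketch. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static void print_rss(const char *label)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (f == NULL)
        return;
    while (fgets(line, sizeof(line), f) != NULL)
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%-14s %s", label, line);
    fclose(f);
}

int main(void)
{
    size_t len = 512UL * 1024 * 1024;
    char *p;

    print_rss("before mmap:");

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    print_rss("after mmap:");    /* VIRT grew by 512MB, RES barely moved */

    memset(p, 1, len);           /* now every page gets a real frame */
    print_rss("after memset:");

    munmap(p, len);
    return 0;
}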
On Tue, Feb 12, 2019 at 06:09:36AM CST, Howard Chu wrote:
It's not tying up a bunch of memory. It's using up some address space, but if it is indeed unused, the OS will not dedicate any actual RAM to that address space.
Andreas's initial message showed a pretty large difference in top's RES column -- wouldn't that indicate that that virtual address space is in fact backed by actual allocated physical pages?
Zev
Zev Weiss wrote:
Andreas's initial message showed a pretty large difference in top's RES column -- wouldn't that indicate that that virtual address space is in fact backed by actual allocated physical pages?
It also showed that there was still about 140MB of free RAM out of a total of 3.9GB. There was no memory pressure to cause the OS to reclaim those pages.
Look at the big picture, not just a single detail.
On 2/12/19 12:29 PM, Zev Weiss wrote:
Couldn't one s/running with larger limits/consuming more resources/ and s/system administration/software development/ and produce an equally valid argument though? (If anything, larger-than-necessary limits seem the more justifiable of the two to me -- it allows for future growth, which can be hard to predict.)
This text contains too many implicit branches. ;-)
Mainly I understand that you're asking for another config option.
Well, the various rlimit settings of your OS apply. You can even set them in your service-specific systemd units or whatever.
More to the crux of the matter: why does slapd need to preallocate all OPEN_MAX possible connection info records at once instead of dynamically as connections are actually created?
As I understand Howard's earlier posting, the resources are preallocated so that they are ready in case the system comes under high pressure later. Makes sense to me.
Ciao, Michael.