I'm still trying to figure out why my servers sometimes get into a state where queries requesting the memberOf attribute take an exceedingly long time to process, for example:
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 fd=104 ACCEPT from PATH=/var/symas/run/ldapi (PATH=/var/symas/run/ldapi)
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 op=0 BIND dn="cn=ldaproot,dc=cpp,dc=edu" method=128
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 op=0 BIND dn="cn=ldaproot,dc=cpp,dc=edu" mech=SIMPLE bind_ssf=0 ssf=71
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 op=0 RESULT tag=97 err=0 qtime=0.000006 etime=0.000050 text=
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 op=1 SRCH base="dc=cpp,dc=edu" scope=2 deref=0 filter="(uid=henson)"
Feb 13 19:46:05 ldap-02 slapd[13564]: conn=297643 op=1 SRCH attr=memberOf
Feb 13 19:46:42 ldap-02 slapd[13564]: conn=297643 op=1 SEARCH RESULT tag=101 err=0 qtime=0.000012 etime=36.955710 nentries=1 text=
How is the qtime calculated? It is nice and short, despite the wall clock reading over 30 seconds :(.
Usually I have to reboot the server completely to clear this up, but this time I just had to restart it, and then the queries were lickety-split again:
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 fd=40 ACCEPT from PATH=/var/symas/run/ldapi (PATH=/var/symas/run/ldapi)
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=0 BIND dn="cn=ldaproot,dc=cpp,dc=edu" method=128
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=0 BIND dn="cn=ldaproot,dc=cpp,dc=edu" mech=SIMPLE bind_ssf=0 ssf=71
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=0 RESULT tag=97 err=0 qtime=0.000009 etime=0.000088 text=
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=1 SRCH base="dc=cpp,dc=edu" scope=2 deref=0 filter="(uid=henson)"
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=1 SRCH attr=memberOf
Feb 13 19:55:01 ldap-02 slapd[89655]: conn=1556 op=1 SEARCH RESULT tag=101 err=0 qtime=0.000013 etime=0.213149 nentries=1 text=
Over 30 seconds elapsed time to .2 seconds? I'd like to see the .2 all the time :).
When the server gets like this, there's a very high read IO load, 200-300MB/s, compared to generally less than 1MB/s when things are working right.
It almost seems like it's doing a full disk scan searching for members every time.
Any suggestions on what to dig into?
Thanks...
--On Sunday, February 13, 2022 8:00 PM -0800 "Paul B. Henson" henson@acm.org wrote:
I'm still trying to figure out why my servers sometimes get into a state where queries requesting the memberOf attribute take an exceedingly long time to process, for example:
When the server gets like this, there's a very high read IO load, 200-300MB/s, compared to generally less than 1MB/s when things are working right.
It almost seems like it's doing a full disk scan searching for members every time.
Any suggestions on what to dig into?
How large is the total size of all mdb databases? How much RAM do you have on the system? Is it possible that you've exceeded the index slot size?
Regards, Quanah
On Mon, Feb 14, 2022 at 09:57:06AM -0800, Quanah Gibson-Mount wrote:
How large is the total size of all mdb databases?
According to mdb_stat the primary db is:
Page size: 4096
Number of pages used: 500651
4 KB * 500651 pages = 2002604 KB
which lines up with du:
# du -k data.mdb
2002604 data.mdb
Then the accesslog db is:
Page size: 4096
Number of pages used: 74753
# du -k data.mdb
299016 data.mdb
So a total of a bit over 2GB.
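For reference, those numbers come straight out of mdb_stat's environment info; something like this should reproduce them (a sketch only - the paths assume the Symas default layout, adjust to your own directories):

# mdb_stat -e /var/symas/openldap-data | grep -E 'Page size|pages used'
# mdb_stat -e /var/symas/openldap-data/accesslog | grep -E 'Page size|pages used'

"Page size" times "Number of pages used", divided by 1024, should line up with du -k as above.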
How much RAM do you have on the system?
2GB. I don't think I'm running low on memory, there's usually a bit free:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1949         423          79           0        1446        1372
Swap:          1023         291         732
And when it's being really slow for memberOf there's no swapping or other sign of low memory. Other queries are still very responsive; it's just memberOf that gets slow. The only metric that spikes when it occurs is read IO.
Is it possible that you've exceeded the index slot size?
I'm not sure. How would I check that? The issue only happens occasionally though, with generally the same load and query set. If it was a problem like that I would think it would happen all the time.
Is there anything I could poke with mdb_stat or the slapd monitor interface the next time it happens to try and narrow down the root cause?
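For instance, assuming the monitor backend is configured and whatever identity I bind with is allowed to read it, something like this would snapshot the counters (the '+' is needed because most of them are operational attributes):

# ldapsearch -x -H ldapi:/// -b cn=Monitor -s sub '(objectClass=*)' '*' '+'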
Thanks much...
"Paul B. Henson" henson@acm.org schrieb am 15.02.2022 um 03:01 in Nachricht
YgsJhzmWrDPDMvti@zaphod.pbhware.com:
...
How much RAM do you have on the system?
2GB. I don't think I'm running low on memory, there's usually a bit free:
...
Independent of LDAP, my guess is that 2GB is somewhat tight these days, and that this is a virtual machine. Some rules of thumb say you should reserve 0.5GB per core for the OS, so with a 2GB database it looks a bit tight to me.
Don't get me wrong: we _are_ running a tiny BDB-based OpenLDAP on a two-CPU VM with 1GB of RAM (it also runs a tiny Apache web server), but for real servers it's probably too tight.
# free -m
             total       used       free     shared    buffers     cached
Mem:           953        882         70         48        151        448
-/+ buffers/cache:        282        671
Swap:         2047         62       1985
Regards, Ulrich
On Mon, Feb 14, 2022 at 06:01:43PM -0800, Paul B. Henson wrote:
On Mon, Feb 14, 2022 at 09:57:06AM -0800, Quanah Gibson-Mount wrote:
How large is the total size of all mdb databases?
[...] So a total of a bit over 2GB.
How much RAM do you have on the system?
2GB. I don't think I'm running low on memory, there's usually a bit free:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           1949         423          79           0        1446        1372
Swap:          1023         291         732
And when it's being really slow for memberOf there's no swapping or other signs of low memory. Other queries are still very responsive, it's just memberOf that gets slow. The only metric that seems ramped when it occurs is high read IO.
So think about it:
- your DB is just over the size of available RAM by itself
- after a while using the system, other processes (and slapd) will carve out a fair amount of it that the system will be unwilling/unable to page out
- if, to answer that query, you need to crawl a large part of the DB, the OS will have to page that part into memory; at the beginning there is enough RAM to do it all just once, but later you reach a threshold where it has to page bits in and then drop them again to fetch others, and you develop these symptoms - lots of read I/O and a delay in processing
Figure out what is involved in that search and see if you can tweak it so it doesn't need to crawl the whole DB or make sure you always have enough RAM to page the whole DB in, even when the other processes have been running for a bit.
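If you want to confirm that is what's happening, watch paging while the slow search runs; a rough sketch (sar comes with the sysstat package, and the pid placeholder is whatever your slapd is):

# vmstat 1
# sar -B 1
# ps -o min_flt,maj_flt <slapd-pid>

Sustained block-in and major-fault activity that tracks the slow searches would point at exactly this.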
Regards,
On 2/14/2022 11:55 PM, Ulrich Windl wrote:
Independent of LDAP my guess is that 2GB is somewhat tight these days, and my guess is that it's virtual machine.
It is a virtual machine indeed. I generally try to minimize the memory allocated to my linux instances to make up for the bloated Windows instances that are sucking up 32-64GB on the same virtualization infrastructure 8-/.
I can certainly just throw memory at it and hope the problem goes away, but I will discuss a bit more why I don't think it is memory in my reply to Ondřej.
Thanks…
On 2/15/2022 1:57 AM, Ondřej Kuzník wrote:
- your DB is just over the size of available RAM by itself
Yes, but that size includes not just the data itself, but all of the indexes as well, right?
- after a while using the system, other processes (and slapd) will carve out a fair amount of it that the system will be unwilling/unable to page out
Yes. But that is not currently the case. Here is a slapd process on one of our nodes that has been up about a week and a half:
ldap 1207 1 9 Feb04 ? 1-01:46:47 /opt/symas/lib/slapd -d 0 -h ldap:/// ldaps:/// ldapi:/// -u ldap -g ldap
Its resident set is a bit less than a gigabyte:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1207 ldap      20   0 8530708 954688 829836 S  28.1 47.8   1546:40 slapd
While unused (i.e., wasted) memory is only 82M, the amount of memory in buffer/cache that the system would be willing to give up at any time is more than a gigabyte:
              total        used        free      shared  buff/cache   available
Mem:           1949         413          82           0        1453        1382
Swap:          1023         295         728
When the problem occurs, there isn't a memory shortage. There is still free memory. Nothing is getting paged in or out, the only IO is application read, not system swap.
- if, to answer that query, you need to crawl a large part of the DB, the OS will have to page that part into memory; at the beginning there is enough RAM to do it all just once, but later you reach a threshold where it has to page bits in and then drop them again to fetch others, and you develop these symptoms - lots of read I/O and a delay in processing
Intuitively, that does sound like a good description of the problem I'm having. But the only thing that takes a long time is returning the memberOf attribute. When queries requesting it are taking more than 30 seconds or even minutes to respond, all other queries remain instantaneous. It seems unlikely that under memory pressure the only queries that would end up having to page things in and be degraded would be those? Every other query just happens to have what it needs still in memory?
Figure out what is involved in that search and see if you can tweak
It's not a very complicated query:
# ldapsearch -x -H ldapi:/// uid=henson memberOf
[...]
dn: uid=henson,ou=user,dc=cpp,dc=edu
memberOf: uid=idm,ou=group,dc=cpp,dc=edu
memberOf: uid=iit,ou=group,dc=cpp,dc=edu
If I understand correctly, this just needs to access the index on uid to find my entry, and then the dynlist module presumably does something like this:
# ldapsearch -x -H ldapi:/// member=uid=henson,ou=user,dc=cpp,dc=edu dn
[...]
# cppnet, group, cpp.edu
dn: uid=cppnet,ou=group,dc=cpp,dc=edu

# cppnet-admin, group, cpp.edu
dn: uid=cppnet-admin,ou=group,dc=cpp,dc=edu
This just needs to access the index on member to find all of the group objects, of which in my case there are 36.
So it only needs to have two indexes and 37 objects in memory to perform quickly, right?
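For reference, the indexing that reasoning assumes is just the usual equality indexes; something like this in slapd.conf syntax (or the equivalent olcDbIndex values under cn=config):

index uid     eq
index member  eq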
When performance on memberOf queries is degraded, this takes more than 30 seconds to run. Every single time. I could run it 20 times in a row and it always takes more than 30 seconds. If it was a memory issue, you would think that at least some of the queries would get lucky and the pages needed would be in memory, given they had just been accessed moments before?
I can certainly just throw memory at it and hope the problem goes away. But based on the observations when it occurs it does not feel like just a memory problem. The last time it happened I pulled the node out of the load balancer so nothing else was poking at it and the test query was still taking more than 30 seconds.
I'm going to bump the production nodes up to 4G, which should be more than enough to run the OS and always have the entire database plus all indexes in memory. I will keep my fingers crossed this problem just goes away, but if it doesn't, what else can I do when it occurs to help track it down?
Thanks much…
On 2/15/2022 1:57 AM, Ondřej Kuzník wrote:
- if, to answer that query, you need to crawl a large part of the DB, the OS will have to page that part into memory
Do you know if there's any way to tell which pages of a memory mapped file are actually in memory at any given time? The process map just shows 5G (the max size I currently have configured for the database):
00007fece8cfb000 5242880K rw-s- data.mdb
Thanks…
"Paul B. Henson" henson@acm.org schrieb am 16.02.2022 um 04:10 in Nachricht
114ede97-a51b-5fbd-0613-47208945aa88@acm.org:
...
I can certainly just throw memory at it and hope the problem goes away.
Remember there are some classic tools like sar, vmstat, iostat, etc. to display or store some interesting information about what the OS is doing. Maybe youngsters will prefer something like "watch cat /proc/meminfo" to see live how RAM is shuffled around.
But based on the observations when it occurs it does not feel like just a memory problem. The last time it happened I pulled the node out of the load balancer so nothing else was poking at it and the test query was still taking more than 30 seconds.
iotop is another nice utility.
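A sketch of an invocation that keeps the output focused - only active processes, accumulated per process, five-second samples:

# iotop -o -P -a -d 5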
I'm going to bump the production nodes up to 4G, which should be more than enough to run the OS and always have the entire database plus all indexes in memory. I will keep my fingers crossed this problem just goes away, but if it doesn't, what else can I do when it occurs to help track it down?
You didn't say what hypervisor you are using, but did you know that Xen PVMs support "memory hotplugging"? You can add RAM while the OS is running (it also works with CPUs), but it has to be prepared in the config...
Regards, Ulrich
"Paul B. Henson" henson@acm.org schrieb am 16.02.2022 um 06:13 in
Nachricht 5f015d2d-8965-6c70-0c6d-7a96e9ec290c@acm.org:
On 2/15/2022 1:57 AM, Ondřej Kuzník wrote:
- if, to answer that query, you need to crawl a large part of the DB, the OS will have to page that part into memory
Do you know if there's any way to tell which pages of a memory mapped file are actually in memory at any given time? The process map just shows 5G (the max size I currently have configured for the database):
00007fece8cfb000 5242880K rw-s- data.mdb
"man pmap", maybe?
# pmap $$
21561:   -bash
         Address    Kbytes     RSS     PSS   Dirty    Swap Mode  Mapping
0000558d2476d000       964     904     904       0       0 r-xp- /usr/bin/bash
0000558d24a5e000         8       8       8       8       0 r--p- /usr/bin/bash
0000558d24a60000        16      16      16      16       0 rw-p- /usr/bin/bash
0000558d24a64000        56      44      44      44       0 rw-p- [ anon ]
0000558d25ff7000      4464    4384    4384    4384       0 rw-p- [ anon ]
00007f1ca39a8000      2528     192      35       0       0 r--p- /usr/lib/locale/en_US.utf8/LC_COLLATE
00007f1ca3c20000        28      28       0       0       0 r-xp- /lib64/libnss_compat-2.31.so
00007f1ca3c27000      2048       0       0       0       0 ---p- /lib64/libnss_compat-2.31.so
00007f1ca3e27000         4       4       4       4       0 r--p- /lib64/libnss_compat-2.31.so
00007f1ca3e28000         4       4       4       4       0 rw-p- /lib64/libnss_compat-2.31.so
00007f1ca3e29000       152     152      38       0       0 r-xp- /lib64/libtinfo.so.6.1
00007f1ca3e4f000      2044       0       0       0       0 ---p- /lib64/libtinfo.so.6.1
[...]
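If the util-linux on the box is new enough to ship fincore, that gives a quick per-file total of how much of each data.mdb is resident in the page cache, which for a shared file mapping amounts to the same thing (the path here is just a guess):

# fincore /var/symas/openldap-data/data.mdb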
Thanks…
Paul B. Henson wrote:
I'm going to bump the production nodes up to 4G, which should be more than enough to run the OS and always have the entire database plus all indexes in memory. I will keep my fingers crossed this problem just goes away, but if it doesn't, what else can I do when it occurs to help track it down?
Use perf (formerly oprofile) and get a trace of slapd's execution during one of these searches. Also get a trace when it is performing normally, for comparison.
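A rough sketch of what that might look like, assuming perf is installed and <pid> is slapd's process id:

# perf record -g -p <pid> -- sleep 30
# perf report

Comparing where the samples land in the slow trace versus the normal one should show where the time is going.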
On 2/16/2022 3:08 AM, Ulrich Windl wrote:
Remember there are some classic tools like sar, vmstat, iostat, etc. to display or store some interesting information about what the OS is doing.
Like I mentioned, the OS doesn't appear to be doing anything interesting when this has occurred. There's plenty of free memory, there is no swap paging, and CPU utilization isn't particularly high.
iotop is another nice utility.
Yes, I was using that; the only abnormal behavior when the problem occurred was extremely high read IO by the slapd process. Normally, read IO is virtually nil, occasionally jumping up to a MB/s or two. When the memberOf query is taking excessive time, slapd performs continuous 200-400 MB/s of read IO. Despite the high read IO, all queries that don't involve memberOf continue to operate at their usual speed; just the queries that return memberOf are degraded.
That's why it feels to me like an issue somehow with slapd, not the OS.
You didn't say what hypervisor you are using, but did you know that Xen PVMs support "memory hotplugging"?
We are using VMware ESXi, which also supports memory hot plugging. Although they do something weird where if you have less than 3G of RAM allocated they only let you hot plug up to 3G. If you have more than 3G allocated, you can hot plug up to the max. There is some knowledge base article about it, ah, here it is:
https://kb.vmware.com/s/article/2008405
It claims that Linux "freezes" if you increase from less than 3G to more than 3G? I can believe that maybe that occurred once upon a time due to a bug, but I find it hard to believe it still does. It seems like VMware implemented a kludge workaround and never removed it...
Yup. I just bumped the memory from 2GB to 8GB on an instance running under kvm, no problem. Stupid vmware. Hmm, nifty, kvm even lets you dynamically remove memory, another thing VMware doesn't let you do.
Thanks…
On 2/16/2022 3:11 AM, Ulrich Windl wrote:
"man pmap", maybe?
You must be using a different version of pmap than I am? My line came from pmap, which on my system emits the following when given no options:
ldap-01 ~ # pmap 1207
1207:   /opt/symas/lib/slapd -d 0 -h ldap:/// ldaps:/// ldapi:/// -u ldap -g ldap
0000556f4f390000   1328K r-x-- slapd
0000556f4f6dc000 16K r---- slapd
0000556f4f6e0000 36K rw--- slapd
0000556f4f6e9000 348K rw--- [ anon ]
Ah, or perhaps you have pmap aliased to 'pmap -x'?
ldap-01 ~ # pmap -x 1207
1207:   /opt/symas/lib/slapd -d 0 -h ldap:/// ldaps:/// ldapi:/// -u ldap -g ldap
Address           Kbytes     RSS   Dirty Mode  Mapping
0000556f4f390000    1328     604       0 r-x-- slapd
0000556f4f6dc000      16      12       8 r---- slapd
0000556f4f6e0000      36      16      16 rw--- slapd
<donning dunce cap> I did neglect to look at the pmap man page, d'oh 8-/. I did a Google search for how to get that information and found nothing, and there it was right in front of my face as an option to the command I was already using <sigh>. I apologize; my group has shrunk from three members down to one and I am ridiculously overloaded and somewhat thrashing and flailing 8-/.
So it appears that right now there is about 720M mapped of the primary mdb, and about 171M mapped for the accesslog:
00007fece8cfb000 5242880  738648       0 rw-s- data.mdb
00007fee28efc000 2097152  175864       0 rw-s- data.mdb
The next time the problem occurs I will take a look at this and see if it has changed or is doing anything different.
Thanks for the pointer :).
Thanks…
On 2/16/2022 11:25 AM, Howard Chu wrote:
Use perf (formerly oprofile) and get a trace of slapd's execution during one of these searches. Also get a trace when it is performing normally, for comparison.
Cool, thanks much. Now I've just got to wait until it happens again, which, given that I'm ready to investigate it properly this time, probably means it will work fine for a while :).
On Sun, Feb 13, 2022 at 08:00:29PM -0800, Paul B. Henson wrote:
I'm still trying to figure out why my servers sometimes get into a state where queries requesting the memberOf attribute take an exceedingly long
So one of my servers got into this state again:
Total DISK READ :      89.60 M/s | Total DISK WRITE :     241.97 K/s
Actual DISK READ:      91.61 M/s | Actual DISK WRITE:     140.50 K/s
    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO>    COMMAND
 430373  be/4  ldap      34.88 M/s   0.00 B/s  0.00 %  96.44 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 430064  be/4  ldap      45.04 M/s   0.00 B/s  0.00 %  94.93 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 430069  be/4  ldap       6.34 M/s   0.00 B/s  0.00 %   8.22 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
There's plenty of free memory:
ldap-02 ~ # free -m
              total        used        free      shared  buff/cache   available
Mem:           3901         418         113           0        3368        3255
Swap:          2047         763        1284
Just for giggles, I removed all swap:
              total        used        free      shared  buff/cache   available
Mem:           3901         730         102           1        3068        2949
Swap:             0           0           0
The problem immediately went away. Didn't restart slapd, didn't do anything, other than remove all swap and force it to use the plethora of memory it had available. Disk read went back to virtually 0, response time went back to subsecond.
I updated vm.swappiness to 1 (it defaulted to 30) and added swap back; I'm going to see what happens.
Have no idea what's causing it, but it seems somehow the system and slapd get into a state where doing memberOf queries makes it read LMDB pages from disk rather than keeping them in memory?
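In case it helps anyone following along, the change amounts to something like this (the sysctl.d file name is arbitrary):

# sysctl -w vm.swappiness=1
# echo 'vm.swappiness = 1' > /etc/sysctl.d/99-swappiness.conf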
Paul B. Henson wrote:
On Sun, Feb 13, 2022 at 08:00:29PM -0800, Paul B. Henson wrote:
I'm still trying to figure out why my servers sometimes get into a state where queries requesting the memberOf attribute take an exceedingly long
So one of my servers got into this state again:
[...]
There's plenty of free memory:
[...]
Just for giggles, I removed all swap:
[...]
The problem immediately went away. Didn't restart slapd, didn't do anything, other than remove all swap and force it to use the plethora of memory it had available. Disk read went back to virtually 0, response time went back to subsecond.
I updated vm.swappiness to 1 (it defaulted to 30) and added swap back, I'm going to see what happens.
Have no idea what's causing it, but it seems somehow the system and slapd get into a state where doing memberof queries make it read lmdb pages from disk rather than keeping them in memory?
Interesting.
Fyi, setting a smaller swappiness will lower the priority of filesystem cache, and should make the OS favor program heap memory instead. So if the OS was swapping out LMDB pages then a lower swappiness should exacerbate the problem.
Let us know how things go.
On Mon, May 30, 2022 at 02:38:11PM +0100, Howard Chu wrote:
Let us know how things go.
Arg. Seems to have been a red herring. Blew up again with swappiness set to 1, and then again with swap completely disabled :(. Usual symptoms of crazy high disk reads:
Total DISK READ :     389.05 M/s | Total DISK WRITE :       3.93 K/s
Actual DISK READ:     391.50 M/s | Actual DISK WRITE:       0.00 B/s
    TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO>    COMMAND
 577547  be/4  ldap      36.88 M/s   0.00 B/s  0.00 %  97.92 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 577546  be/4  ldap      32.27 M/s   0.00 B/s  0.00 %  97.88 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 575034  be/4  ldap      29.47 M/s   0.00 B/s  0.00 %  97.72 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 572838  be/4  ldap      27.38 M/s   0.00 B/s  0.00 %  97.66 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 575308  be/4  ldap      24.47 M/s   0.00 B/s  0.00 %  97.50 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 572866  be/4  ldap      91.55 M/s   0.00 B/s  0.00 %  97.33 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 572841  be/4  ldap      26.96 M/s   0.00 B/s  0.00 %  96.87 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 572836  be/4  ldap      43.90 M/s   0.00 B/s  0.00 %  96.84 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
 577508  be/4  ldap      76.17 M/s   0.00 B/s  0.00 %  95.96 %  slapd -d 0 -h ldap:/// ~dapi:/// -u ldap -g ldap
Even though there's plenty of memory:
              total        used        free      shared  buff/cache   available
Mem:           3901         944         109           1        2847        2715
Swap:             0           0           0
Looking at the lmdb mapping:
00007f662ea25000 5242880  325560       0 rw-s- data.mdb
00007f676ec26000 2097152       0       0 rw-s- data.mdb
There seem to be fewer pages mapped in than on a node that isn't blowing up:
00007f6ab1606000 5242880  560712       0 rw-s- data.mdb
00007f6bf1807000 2097152  120772       0 rw-s- data.mdb
Memory use is similar:
              total        used        free      shared  buff/cache   available
Mem:           3896         725         156           0        3014        2893
Swap:          2047         127        1920
The one that's unhappy is generating a lot of page faults:
ldap-02 ~ # ps -o min_flt,maj_flt 572833; sleep 10; ps -o min_flt,maj_flt 572833
   MINFL    MAJFL
11924597  3715970
   MINFL    MAJFL
11931358  3718833
ldap-02 ~ # ps -o min_flt,maj_flt 572833; sleep 10; ps -o min_flt,maj_flt 572833
   MINFL    MAJFL
11949883  3726966
   MINFL    MAJFL
11957081  3730080
Compared to the one that's working properly, which has virtually none:
ldap-01 ~ # ps -o min_flt,maj_flt 1227; sleep 10; ps -o min_flt,maj_flt 1227
  MINFL   MAJFL
1282224  221928
  MINFL   MAJFL
1282224  221928
ldap-01 ~ # ps -o min_flt,maj_flt 1227; sleep 10; ps -o min_flt,maj_flt 1227
  MINFL   MAJFL
1282225  221928
  MINFL   MAJFL
1282225  221928
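As an aside, pidstat from the sysstat package reports the same counters as per-second rates, which is a bit easier to watch live; a sketch, with the pid placeholder being whichever slapd is misbehaving:

# pidstat -r -p <slapd-pid> 5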
But why? Arg. All the slow queries are asking for memberOf:
May 30 21:54:25 ldap-02 slapd[572833]: conn=120576 op=1 SRCH base="ou=user,dc=cpp,dc=edu" scope=2 deref=3 filter="(&(objectClass=person)(calstateEduPersonEmplID=014994057))"
May 30 21:54:25 ldap-02 slapd[572833]: conn=120576 op=1 SRCH attr=memberOf
May 30 21:56:59 ldap-02 slapd[572833]: conn=120576 op=1 SEARCH RESULT tag=101 err=0 qtime=0.000016 etime=154.273556 nentries=1 text=
There's something going on with the dynlist overlay and memberOf queries, but I still can't figure out what <sigh>. It's not a low-memory issue; there's plenty of free memory. But for some reason the read IO goes through the roof. I'm pretty sure it has the same query load while it's freaking out as it did when it was running fine.
openldap-technical@openldap.org