I have an LDAP server that slows down significantly even under moderate load, and wondered if there were any suggestions as to where to look.
Using openldap-2.4.40 with the mdb backend, on a dual Xeon (6 cores each) server with 128GB RAM. Linux kernel 3.13, separate ext4 db volume (SAS). Slapd is configured with 48 threads.
The problem is that as soon as there are a handful of write operations, slapd appears to stall. Binding to the LDAP port takes many seconds (instead of being instant), and the slapd process appears to occupy 100% of a single CPU core.
On Tue, Apr 21, 2015 at 08:52:29PM +1000, Geoff Swan wrote:
> I have an LDAP server that slows down significantly even under moderate load, and wondered if there were any suggestions as to where to look.
> Using openldap-2.4.40 with the mdb backend, on a dual Xeon (6 cores each) server with 128GB RAM. Linux kernel 3.13, separate ext4 db volume (SAS). Slapd is configured with 48 threads.
> The problem is that as soon as there are a handful of write operations, slapd appears to stall. Binding to the LDAP port takes many seconds (instead of being instant), and the slapd process appears to occupy 100% of a single CPU core.
What does your config file look like?
In particular, what does this setting look like for you:
# Threads - four per CPU
threads 8
--On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> On Tue, Apr 21, 2015 at 08:52:29PM +1000, Geoff Swan wrote:
> > I have an LDAP server that slows down significantly even under moderate load, and wondered if there were any suggestions as to where to look.
> > Using openldap-2.4.40 with the mdb backend, on a dual Xeon (6 cores each) server with 128GB RAM. Linux kernel 3.13, separate ext4 db volume (SAS). Slapd is configured with 48 threads.
> > The problem is that as soon as there are a handful of write operations, slapd appears to stall. Binding to the LDAP port takes many seconds (instead of being instant), and the slapd process appears to occupy 100% of a single CPU core.
> What does your config file look like?
> In particular, what does this setting look like for you:
> # Threads - four per CPU
> threads 8
According to his summary, he's using 48 threads. 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
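[Editorial sketch, not part of the original mail: the rules of thumb above would translate into a slapd.conf setting like this for the dual 6-core (12 cores total) box described; the exact value is workload-dependent and worth benchmarking.]

```
# slapd.conf (sketch) - worker thread pool sizing for 12 cores
# bdb/hdb rule of thumb: ~4 threads per core -> 48
# back-mdb, per the benchmarking above: ~2 per core -> 24
threads 24
```

The `threads` directive sets the size of slapd's worker thread pool; operations beyond that number are queued rather than run concurrently.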
I'd generally advise testing current RE24 out, as there were some significant issues in the 2.4.40 release.
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
On Tue, Apr 21, 2015 at 08:23:31AM -0700, Quanah Gibson-Mount wrote:
> --On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> > What does your config file look like?
> > In particular, what does this setting look like for you:
> > # Threads - four per CPU
> > threads 8
> According to his summary, he's using 48 threads.
Thanks for pointing that out; I should finish my coffee before posting. :)
> 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
Useful to note. Has this detail ended up in any docs yet?
> I'd generally advise testing current RE24 out, as there were some significant issues in the 2.4.40 release.
> --Quanah
--On Tuesday, April 21, 2015 12:37 PM -0400 Brian Reichert reichert@numachi.com wrote:
> > 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
> Useful to note. Has this detail ended up in any docs yet?
No, not so far. Unfortunately, my time to spend on benchmarking LDAP is much more limited these days. :/
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Brian Reichert wrote:
> On Tue, Apr 21, 2015 at 08:23:31AM -0700, Quanah Gibson-Mount wrote:
> > --On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> > > What does your config file look like?
> > > In particular, what does this setting look like for you:
> > > # Threads - four per CPU
> > > threads 8
> > According to his summary, he's using 48 threads.
> Thanks for pointing that out; I should finish my coffee before posting. :)
> > 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
> Useful to note. Has this detail ended up in any docs yet?
No, nor should it. Performance depends on system environment and workload - the right value is one that each site must discover for themselves in their own deployment.
On 2015-04-22 6:04 AM, Howard Chu wrote:
> Brian Reichert wrote:
> > On Tue, Apr 21, 2015 at 08:23:31AM -0700, Quanah Gibson-Mount wrote:
> > > --On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> > > > What does your config file look like?
> > > > In particular, what does this setting look like for you:
> > > > # Threads - four per CPU
> > > > threads 8
> > > According to his summary, he's using 48 threads.
> > Thanks for pointing that out; I should finish my coffee before posting. :)
> > > 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
Interesting. What is the relationship between the number of threads and the number of concurrent bind operations? If I have, say, 500 clients wanting access to perform simple authentication/bind and perform some read/write operation, how is this usually handled within slapd?
> > Useful to note. Has this detail ended up in any docs yet?
> No, nor should it. Performance depends on system environment and workload - the right value is one that each site must discover for themselves in their own deployment.
Are there any clues about the key factors affecting this? Linux, in this case, has vm.swappiness set to 10, vm.dirty_ratio at 12, and vm.dirty_background_ratio at 3. However, I've noticed that when dirty pages are flushed to disc the system stalls, and that operation appears to take a relatively long time. Disc write speed should be close to 130MB/s (file copy, dd test, etc.), however it appears to be much slower than this during the page flush.
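[Editorial sketch, not part of the original mail: the VM settings described above correspond to a fragment like this in /etc/sysctl.conf; note that vm.dirty_background_ratio is the canonical sysctl name for the background-writeback threshold.]

```
# /etc/sysctl.conf (sketch) - VM writeback settings as described above
vm.swappiness = 10              # prefer dropping page cache over swapping
vm.dirty_ratio = 12             # % of RAM dirty before writers block on writeback
vm.dirty_background_ratio = 3   # % of RAM dirty before background flushing starts
```

For scale: 12% of 128GB is roughly 15GB, and at ~130MB/s that is around two minutes of writeback if the dirty limit is ever reached, which would be consistent with the stalls described.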
On 2015-04-22 1:37 AM, Brian Reichert wrote:
> On Tue, Apr 21, 2015 at 08:23:31AM -0700, Quanah Gibson-Mount wrote:
> > --On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> > > What does your config file look like?
> > > In particular, what does this setting look like for you:
> > > # Threads - four per CPU
> > > threads 8
> > According to his summary, he's using 48 threads.
> Thanks for pointing that out; I should finish my coffee before posting. :)
> > 4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
> Useful to note. Has this detail ended up in any docs yet?
> > I'd generally advise testing current RE24 out, as there were some significant issues in the 2.4.40 release.
> > --Quanah
In case it helps, these are the build configure directives, built from source downloaded from openldap.org:
./configure --prefix=/usr \
    --libexecdir=/usr/sbin \
    --sysconfdir=/etc \
    --disable-static \
    --disable-debug \
    --with-tls=openssl \
    --without-cyrus-sasl \
    --enable-slapd \
    --enable-mdb=yes \
    --enable-bdb=no \
    --enable-hdb=no \
    --enable-sql=no \
    --enable-monitor=yes \
    --enable-overlays=yes
"make test" passed OK (except for invalid-credentials errors when attempting a SASL bind, presumably because of --without-cyrus-sasl).