Geoff Swan wrote:
On 2015-04-22 4:49 PM, Ulrich Windl wrote:
Geoff Swan gswan3@bigpond.net.au wrote on 21.04.2015 at 23:19 in message
5536BEC9.3040902@bigpond.net.au:
On 2015-04-22 6:04 AM, Howard Chu wrote:
Brian Reichert wrote:
On Tue, Apr 21, 2015 at 08:23:31AM -0700, Quanah Gibson-Mount wrote:
--On Tuesday, April 21, 2015 11:54 AM -0400 Brian Reichert reichert@numachi.com wrote:
> What does your config file look like?
>
> In particular, what does this setting look like for you:
>
> # Threads - four per CPU
> threads 8
According to his summary, he's using 48 threads.
Thanks for pointing that out; I should finish my coffee before posting. :)
4 per CPU/core was a good rule of thumb with bdb/hdb. So far in playing with back-mdb, it's seemed closer to 2 per CPU/core for me in benchmarking.
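As a rough illustration, assuming the 2-CPU box implied by the quoted snippet above ("four per CPU" giving "threads 8"), that back-mdb rule of thumb would look like:

    # slapd.conf -- roughly two threads per CPU/core for back-mdb
    # (assuming the same 2-CPU machine as the quoted snippet)
    threads 4

The threads directive simply sets the size of slapd's worker thread pool; the best value still has to be found by benchmarking.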
Interesting. What is the relationship between the number of threads and the number of concurrent bind operations? If I have, say, 500 clients wanting to perform a simple authentication/bind and then some read/write operation, how is this usually handled within slapd?
Useful to note. Has this detail ended up in any docs yet?
No, nor should it. Performance depends on system environment and workload - the right value is one that each site must discover for itself in its own deployment.
Are there any clues about the key factors affecting this? Linux, in this case, has vm.swappiness set to 10, vm.dirty_ratio at 12, and vm.dirty_background_ratio at 3. However, I've noticed that when dirty pages are flushed to disc the system stalls, and that operation appears to take a relatively long time. Disc write speed should be close to 130MB/s (file copy, dd test, etc.), yet it appears to be much slower than this during the page flush.
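For reference, a minimal sketch of how those values would be applied with the standard sysctl tool (the /etc/sysctl.d/ file name below is just an illustrative choice):

    # Runtime changes matching the values described above:
    sysctl -w vm.swappiness=10
    sysctl -w vm.dirty_ratio=12
    sysctl -w vm.dirty_background_ratio=3
    # To persist across reboots, put the same key=value pairs in a
    # file such as /etc/sysctl.d/99-vm-tuning.conf (hypothetical name).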
Did you try NOT tuning those? A swapped in-memory database is not the thing you usually want.
Swappiness for an out-of-the-box kernel was 60, which sounds way too high. So I reduced it to 10.
Yes, you want swappiness to be very low to give regular program data higher cache priority than the filesystem buffer cache. The default setting of 60 is terrible for systems with heavy filesystem I/O traffic and high memory pressure. I usually set swappiness to 0.
If you're seeing stalls then you should check iostat or vmstat; most likely they will show that your storage is 100% busy. There was a kernel bug in much of the 3.x Linux kernel series that could cause this:
http://www.openldap.org/lists/openldap-devel/201309/msg00008.html
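As a sketch of that check, using the standard sysstat and procps tools:

    # Extended per-device statistics every second; %util near 100
    # means the device is saturated:
    iostat -x 1

    # System-wide view; a high "wa" (I/O wait) column and processes
    # stuck in "b" (uninterruptible sleep) tell a similar story:
    vmstat 1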
Aside from that, the usual advice is to tune dirty_ratio and dirty_background to flush frequently, so that no individual flush grows large enough to swamp the storage system. But in addition (and counter) to that, you want to tune dirty_expire to be as slow as possible, to give cached pages more chance to be reused before being flushed.

It seems you already have fairly small dirty_ratio and dirty_background values, but you might want to try decreasing them even further, particularly if you're on one of the buggy kernels.
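A minimal sketch of that direction (the numbers are illustrative, not recommendations; the full sysctl names are vm.dirty_background_ratio and vm.dirty_expire_centisecs):

    # Flush small and often: lower both writeback thresholds.
    sysctl -w vm.dirty_background_ratio=1
    sysctl -w vm.dirty_ratio=5

    # Let dirty pages age longer before forced writeback; the value is
    # in centiseconds, and the kernel default is 3000 (30 seconds).
    sysctl -w vm.dirty_expire_centisecs=30000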