Hallvard B Furuseth wrote:
Howard Chu writes:
Seems to have helped with 16 threads on the T5120: the back-null peak went from 17,900 auths/sec at 32 connections to 18,500 auths/sec at 96 connections.
Not much improvement with 24 threads: the peak went from 17,500 at 32 connections to 17,000 at 60 connections. So the overall peak is a little lower, but it can handle a heavier load before maxing out.
Hm, ±3% with back-null. I'm not sure what caused the decrease, though. I've committed a slight cleanup now which might help.
The latest code got 19,500 auths/sec at 100 connections for 16 threads. Quite a jump.
For 24 threads, the peak was 17,230 at 52 connections.
I had swapped the tests before and after the '&&' in pool_submit(), since the first one is now the shorter:

    if (pool->ltp_vary_open_count > 0 &&
        pool->ltp_open_count <
            pool->ltp_active_count + pool->ltp_pending_count)

The first test checks whether we may open a thread, the second whether we want to. If slapd had fewer than 24 threads running, there would be one extra test.
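For illustration, here is a minimal standalone sketch of that guard with the short-circuit reasoning spelled out; the struct is a simplified stand-in for the real pool structure, not the actual layout in libldap_r:

    /* Simplified stand-in for the thread pool state; field names match
     * the fragment above, but the real structure has more members. */
    struct pool {
        int ltp_vary_open_count; /* > 0: allowed to open more threads */
        int ltp_open_count;      /* threads currently open */
        int ltp_active_count;    /* threads currently running a task */
        int ltp_pending_count;   /* tasks queued, waiting for a thread */
    };

    /* Returns nonzero when a new thread both may and should be opened.
     * The cheap single-field test comes first, so when it fails the
     * short-circuiting '&&' skips the comparison and addition. */
    static int
    need_new_thread(const struct pool *pool)
    {
        return pool->ltp_vary_open_count > 0 &&
               pool->ltp_open_count <
                   pool->ltp_active_count + pool->ltp_pending_count;
    }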
We could test with a usleep(1) when adding/removing a task, first in _submit() and then in _wrapper(), and see which placement leads to more mutex contention.
And at the other mutexes I mentioned, for that matter.
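A hypothetical version of that probe, with illustrative names rather than the real slapd mutex and queue; the same usleep(1) would then be moved into the worker-side dequeue in _wrapper() and the throughput of the two runs compared:

    #include <pthread.h>
    #include <unistd.h>

    struct task { struct task *next; void (*fn)(void *); void *arg; };

    static pthread_mutex_t q_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct task *q_head;

    /* Submit-side probe: deliberately lengthen the critical section by
     * 1us while holding the queue mutex, then measure how much the
     * auths/sec throughput drops against the unmodified pool. */
    void
    probe_submit(struct task *t)
    {
        pthread_mutex_lock(&q_mutex);
        usleep(1);              /* artificial delay under the lock */
        t->next = q_head;       /* minimal LIFO queue, illustration only */
        q_head = t;
        pthread_mutex_unlock(&q_mutex);
    }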
I think we need to get more detailed profile traces now. But I still have some other work to finish before I can spend any time in depth here.
We're still only getting about 20% total CPU utilization on the Sun T5120. Given how slow a single thread is on this machine, I think we're going to need multiple listener threads to really make effective use of it.
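As a rough illustration of the idea (not slapd's actual connection manager), several threads can block in accept() on the same listening socket, so accepting and dispatching new connections is no longer serialized through one slow thread:

    #include <pthread.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define NLISTENERS 4    /* hypothetical listener count */

    static void *
    listener(void *arg)
    {
        int lfd = *(int *)arg;
        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            /* Real code would hand cfd to the thread pool here. */
            close(cfd);
        }
        return NULL;
    }

    /* Start several threads all accepting on one listening socket.
     * The caller must keep *lfd alive for the listeners' lifetime. */
    void
    start_listeners(int *lfd)
    {
        pthread_t tid[NLISTENERS];
        for (int i = 0; i < NLISTENERS; i++)
            pthread_create(&tid[i], NULL, listener, lfd);
    }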