Buchan Milne wrote:
On Wednesday 18 April 2007, Quanah Gibson-Mount wrote:
--On Wednesday, April 18, 2007 11:49 PM +0200 Raphaël Ouazana-Sustowski <raphael.ouazana@linagora.com> wrote:
On Wed, 18 April 2007 23:17, Quanah Gibson-Mount wrote:
I reached the same conclusions on my old Solaris V120s, with RW tests too. ;) If we get that build farm proposal, that might be a good opportunity to do some testing of small DBs on a variety of platforms, I suppose. However, I'm guessing the majority of OpenLDAP users fall into the Linux and Solaris categories.
What sort of tests are you doing?
A series of tests where I have mixed read/write ratios. In particular:
30% read, 70% write
50% read, 50% write
70% read, 30% write
with each mix run at several increasing thread counts.
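(For the curious, the driver for that kind of test can be quite simple; here is a rough bash sketch -- the URI, suffix, credentials, and LDIF files are all invented, and a real harness would also record per-operation latency:)

  #!/bin/bash
  # Rough mixed read/write load generator against slapd.
  # URI, suffix, bind credentials, and LDIF files are all hypothetical.
  URI="ldap://localhost:389"
  BASE="dc=example,dc=com"
  READ_PCT=70        # 70% read / 30% write; rerun with 50 and 30
  WORKERS=24         # number of concurrent client "threads"

  worker() {
      for i in $(seq 1 1000); do
          if [ $((RANDOM % 100)) -lt $READ_PCT ]; then
              # read: fetch a single entry by DN, no attributes returned
              ldapsearch -x -H "$URI" -s base \
                  -b "uid=user$((RANDOM % 1000)),ou=people,$BASE" \
                  "(objectClass=*)" 1.1 >/dev/null
          else
              # write: apply one pre-generated modification
              ldapmodify -x -H "$URI" -D "cn=admin,$BASE" -w secret \
                  -f "mods/mod$((RANDOM % 1000)).ldif" >/dev/null
          fi
      done
  }

  for j in $(seq 1 "$WORKERS"); do worker & done
  time wait    # wall-clock time for the whole mixed run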
And how many databases?
I assume the recommendation is based on a single database? In my deployment I have three relatively large databases (~400,000, ~500,000, and ~800,000 entries), so, since the better performance at the reduced thread count is probably due to reduced database contention, 24 threads may be more appropriate for me?
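(For reference, the knob in question is the global "threads" directive in slapd.conf; a quick sketch, with suffixes and paths invented:)

  # slapd.conf, global section (before any database definitions)
  threads 24          # one worker pool shared by the whole slapd process

  # all databases draw their worker threads from that single pool
  database  bdb
  suffix    "dc=one,dc=example,dc=com"
  directory /var/lib/ldap/one

  database  bdb
  suffix    "dc=two,dc=example,dc=com"
  directory /var/lib/ldap/two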
Or should the number of threads be configurable at the database level?
That wouldn't make any sense, since threads are a global resource for the process. Also, a single operation can span multiple databases (e.g. using glue/subordinate), so there's really no point to that.
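(Concretely, with a glued setup along these lines -- suffixes and paths invented -- a single subtree search of the parent suffix has to visit both backends:)

  # slapd.conf sketch; the subordinate database is listed before its superior
  database    bdb
  suffix      "ou=people,dc=example,dc=com"
  subordinate
  directory   /var/lib/ldap/people

  database    bdb
  suffix      "dc=example,dc=com"
  directory   /var/lib/ldap/root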
It really comes down to read vs. write contention. With the current entry cache in OpenLDAP 2.4, I can get a single client to read an entire 380,000 entry database (contained completely in the back-bdb entry cache) in only 0.70 seconds. On the same system (which has a dual-core processor), two clients can perform the same search in 0.71 seconds. I.e., database contention is almost nonexistent.

The real problem seems to be that thread scheduling overhead is too high relative to the CPU cost of a single LDAP read operation. When you have a large number of slower operations (e.g. writes with a lot of index updates), the thread overhead becomes proportionately smaller.
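(For anyone who wants to reproduce that kind of measurement: the main prerequisite is sizing back-bdb's entry cache to hold the entire database. The directives are real; the numbers and paths below are illustrative, not from my config:)

  # slapd.conf sketch: keep all ~380k entries in the back-bdb entry cache
  database  bdb
  suffix    "dc=example,dc=com"
  directory /var/lib/ldap
  cachesize 400000        # entry-cache slots, comfortably above the DB size

Once the cache is warm, the single-client figure is just a timed full scan, e.g.: time ldapsearch -x -b "dc=example,dc=com" "(objectClass=*)" 1.1 > /dev/null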