On Wed, Apr 2, 2008 at 10:41 AM, Ralf Narozny <rnarozny@web.de> wrote:
Pierangelo Masarati wrote:
Ralf Narozny wrote:
Hi,
Pierangelo Masarati wrote:
Buchan Milne wrote:
/me notes that it would be nice to have more detail on the entry
cache available via back-monitor, such as the number of entries in the cache, and the amount of entry cache that is used ...
Something like
bash-3.1$ ldapsearch -x -H ldap://:9011 -b 'cn=Databases,cn=Monitor' \
    '(objectclass=olmBDBDatabase)' @olmBDBDatabase
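(On a 2.4 server that should return the olmBDB* monitoring counters, along the lines of olmBDBEntryCache, olmBDBDNCache and olmBDBIDLCache; I'm citing those names from memory, so check what your build actually exposes.)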
I configured the slapd to create a monitor, but the information you want
is not present.
Maybe I missed something to configure, but the manual is not too
thoroughly written yet ;-)
database monitor
rootdn "cn=root,cn=monitor"
rootpw {SSHA}...
ldapsearch -D 'cn=root,cn=monitor' -W -b 'cn=Databases,cn=Monitor' \
    '(objectclass=*)' '*' '+'
(as far as I understood, this should show all data for the entries below
'cn=Databases,cn=Monitor')
Well, that information is only available since OpenLDAP 2.4; I infer
you're using an earlier distribution. In any case, the monitor has nothing to do with the entry cache configuration; it only shows the current usage. Refer to slapd.conf or back-config for what is configured on your system.
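For back-bdb the entry cache is configured per database in slapd.conf, roughly along these lines (the suffix and the value are just placeholders):

database bdb
suffix "dc=example,dc=com"
cachesize 1000000

or, if you use back-config, as olcDbCacheSize on the corresponding olcDatabase={N}bdb entry.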
Yep, as I wrote in my initial mail, we are using 2.3.32 (for testing so far).
And I wrote that we are using BDB, which is configured to use 4GB of shared memory. The only problem I have is that, with an entry cache of 1,000,000 entries configured, slapd uses 11GB out of 16GB of RAM after the insert with ldapadd.
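(For context, the knobs involved here are roughly a DB_CONFIG line such as

set_cachesize 4 0 1

for the 4GB shared BDB cache, plus shm_key and cachesize 1000000 in the bdb section of slapd.conf; the exact values above are illustrative, not a literal copy of our config.)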
Firstly, what is the problem with slapd using 11 of 16GB? My production LDAP servers typically run consuming at least 4GB of the available 6GB, and that's the way I want it (or maybe using a tad more, but leaving enough free to run a slapcat without causing the server to swap). Unused RAM is wasted RAM (at least on Unix) ...
Which makes it use 7GB for entry cache (and whatever else).
Plus the overhead of approx. 10MB per thread, a few kB per file descriptor, etc.
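(The size of the worker pool is the "threads" directive in slapd.conf; at the default of 16 threads that works out to something on the order of 160MB for the pool alone.)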
Our entries have (in LDIF, of course) an average size of below 200 bytes. So taking 6GB out of the 7GB used as the size of the entry cache, that would mean each entry consumes about 6KB of RAM. Is that correct?
Roughly ... assuming that you are using a decent memory allocator, and that you have applied the memory leak patches for Berkeley DB (you don't mention your Berkeley DB version). The glibc memory allocator is probably going to do quite badly in this specific scenario (bulk add over the wire); using one of the better allocators (e.g. Hoard, tcmalloc) would probably reduce this value considerably. Howard has published extensive benchmarks on this ...
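A quick way to test an alternative allocator without rebuilding slapd is to preload it when starting the daemon, something along these lines (library and binary paths are illustrative and depend on your install):

LD_PRELOAD=/usr/lib/libtcmalloc.so /usr/local/libexec/slapd -h ldap:/// -f /etc/openldap/slapd.conf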
If so, is there any documentation on how to configure slapd for a large number of entries like ours?
Yes, which ones have you read so far?
Regards, Buchan