Buchan Milne wrote:
On Wed, Apr 2, 2008 at 10:41 AM, Ralf Narozny <rnarozny@web.de> wrote:
Pierangelo Masarati wrote:
Ralf Narozny wrote:
Hi,
Pierangelo Masarati wrote:
Buchan Milne wrote:
/me notes that it would be nice to have more detail on the entry
cache available via back-monitor, such as the number of entries in the cache, and the amount of entry cache that is used ...
Something like
bash-3.1$ ldapsearch -x -H ldap://:9011 -b 'cn=Databases,cn=Monitor' \
    '(objectclass=olmBDBDatabase)' @olmBDBDatabase
I configured slapd to create a monitor, but the information you want
is not present.
Maybe I missed something in the configuration, but the manual is not
written too thoroughly yet ;-)
database monitor
rootdn  "cn=root,cn=monitor"
rootpw  {SSHA}...
ldapsearch -D 'cn=root,cn=monitor' -W -b 'cn=Databases,cn=Monitor' \
    '(objectclass=*)' '*' '+'
(As far as I understood, this should show all data, including operational
attributes, for the entries below 'cn=Databases,cn=Monitor'.)
Well, that information is only available since OpenLDAP 2.4; I infer
you're using an earlier release. In any case, the monitor has nothing to
do with the entry cache configuration; it only shows the current usage.
Refer to slapd.conf or back-config to see what is configured on your
system.
Yep, as I wrote in my initial mail, we are using 2.3.32 (for testing so
far). And I wrote that we are using BDB, which is configured to use 4GB
of shared memory. The only problem I have is that with 1,000,000 entries
configured as the entry cache, slapd uses 11GB out of 16GB of RAM after
the insert with ldapadd.
Firstly, what is the problem with slapd using 11 of 16GB? My production
LDAP servers typically run consuming at least 4GB of the available 6GB,
and that's the way I want it (or maybe using a tad more, but leaving
enough free to run a slapcat without causing the server to swap). Unused
RAM is wasted RAM (at least on Unix) ...
No problem, I would be happy for slapd to use about 14GB of the memory,
but I'm not able to predict the RAM slapd uses, which is why I ask :-)
On the other hand, I read about the importance of the BDB cache and that
it should be using most of the available resources. But I cannot raise
it above 4GB, because my machine will start to swap after inserting a
few million entries. And that is really the worst case.
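For reference, the BDB cache itself is sized via DB_CONFIG in the
database directory; a sketch for a 4GB cache (the segment count of 2 is
illustrative; set_cachesize takes gigabytes, bytes, and the number of
cache segments):

    # DB_CONFIG in the BDB database directory
    set_cachesize 4 0 2

If the cache is to live in SysV shared memory, back-bdb's
"shm_key <integer>" directive in slapd.conf arranges that.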
Which makes it use 7GB for entry cache (and whatever else).
Plus the overhead of approximately 10MB per thread, a few kB per file
descriptor, etc.
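As a sketch, the slapd.conf knobs under discussion look like this (the
numbers are illustrative, taken from this thread; "threads" is a global
directive, while "cachesize" and "idlcachesize" belong in the bdb
database section):

    threads      16          # each thread costs roughly 10MB

    database     bdb
    cachesize    1000000     # entries held in slapd's entry cache
    idlcachesize 3000000     # IDL slots, commonly set to ~3x cachesize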
Our entries have an average size (in LDIF, of course) of below 200
bytes. So, taking 6GB of the 7GB used as the size of the entry cache,
that would mean each entry consumes about 6KB of RAM (6GB / 1,000,000
cached entries ≈ 6.4KB per entry, i.e. roughly 30x the LDIF size). Is
that correct?
Roughly ... assuming that you are using a decent memory allocator, and
that you have applied the memory leak patches for Berkeley DB (I don't
see that you provided your Berkeley DB version). The glibc memory
allocator is probably going to do quite badly in this specific scenario
(bulk add over the wire); using one of the better allocators (e.g.
Hoard, tcmalloc) would probably reduce this value considerably. Howard
has published extensive benchmarks on this ...
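For example, a sketch of starting slapd with tcmalloc preloaded (the
library and slapd paths are assumptions; adjust them to your
installation):

    LD_PRELOAD=/usr/lib/libtcmalloc.so /usr/local/libexec/slapd -h ldap:///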
It is hard to know what information to provide so that you can help:

    BDB 4.4.20 with no patches
    Linux kernel 2.6.21.5 SMP
    glibc-2.7-1 (libc6 2.7-6)

I'm not the one compiling the package, so I mostly have no idea how to
change anything like that.
If so, is there any documentation on how to configure slapd for a larger
number of entries like ours?
Yes, which ones have you read so far?
I searched the OpenLDAP docs and a few pages all over the net, but since
they all assume that about 500k entries is a lot, they don't really help
me with my growing 23 million entries. :-(
The size of the BDB files:

    792M  cid.bdb
    5.7G  cn.bdb
    4.1G  dn2id.bdb
     36M  folderName.bdb
     15G  id2entry.bdb
    948K  locked.bdb
    1.9M  objectClass.bdb
Which will never fit into my 16 GB :-)
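For what it's worth, the whole files do not need to fit in the cache: a
common rule of thumb is that the BDB cache should hold all of dn2id.bdb
plus the internal (non-leaf) pages of id2entry.bdb. db_stat -d from the
Berkeley DB tools reports the relevant page counts; a sketch, run in the
database directory:

    db_stat -d dn2id.bdb    | egrep 'page size|internal pages'
    db_stat -d id2entry.bdb | egrep 'page size|internal pages'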
Anything else you would need?
Regards, Buchan