Thanks, I've implemented these changes on our "spare" box and it seems to be
handling these large result searches without stalling, and it appears to be faster
too, which is a bonus :-)
I'll look at implementing these settings and the SHM key on the production servers sometime.
It might be worth adding a note not to change the sysctl settings unless they are
currently too small on your given platform. Taking that into account, implementing
the SHM key was simply a case of running ipcs -m and picking a value that wasn't
already in use.
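For anyone following along, the procedure described above amounts to something like the following (the key value 1234 is hypothetical, chosen because ipcs -m showed it unused; the DN assumes a cn=config layout with bdb as database {1}):

```
# List shared-memory segments/keys already in use:
#   ipcs -m
# Then pick an unused key and set it via an LDIF along these lines:
dn: olcDatabase={1}bdb,cn=config
changetype: modify
replace: olcDbShmKey
olcDbShmKey: 1234
```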
Many thanks and have a good weekend,
On 27 Sep 2012, at 16:46, Quanah Gibson-Mount wrote:
--On Thursday, September 27, 2012 11:21 AM +0100 Mark Cairney wrote:
> On 26 Sep 2012, at 22:36, Quanah Gibson-Mount wrote:
>> --On Wednesday, September 26, 2012 11:59 AM +0100 Mark Cairney
>> <mark.cairney(a)ed.ac.uk> wrote:
>>> My olcDB values are listed below (minus the olcDbConfig entries). I'm
>>> not sure if you need the indexes but I've left them in anyway:
>>> olcDbCacheFree: 1
>>> olcDbCacheSize: 400000
>>> olcDbIDLcacheSize: 1200000
>>> set_cachesize 4 0 1
>> This may be a little small. I prefer to fully cache my DB.
> Which one in particular? I thought set_cachesize had an upper limit
> of 4GB, but your guidance on the Zimbra website suggests this is an old
> limit, and I now can't find the page on the OpenLDAP site which discusses
> it. Alternatively, would increasing the number of caches from 1 to 2 or 3
> be a suitable workaround?
BDB 4.2.52 had an upper limit of 4 GB per cache segment. Since you aren't running
BDB 4.2.52, you have no such limit.
> Given that I've got a relatively healthy amount of RAM available, would
> the following sound sensible to you?
> olcDbCacheSize: 1000000
> olcDbIDLcacheSize: 1200000
> set_cachesize 4 0 2
I would do set_cachesize 8 0 0
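For context, that line goes in the backend's DB_CONFIG file (or equivalently in an olcDbConfig value); a sketch, assuming the 8 GB figure suggested above:

```
# DB_CONFIG fragment (hypothetical): 8 GB BDB cache.
# Arguments are gbytes, bytes, ncache; ncache of 0 (or 1) gives a
# single contiguous cache region.
set_cachesize 8 0 0
```

Note that set_cachesize only takes effect when the DB environment is (re)created, so a change here generally needs the region files recreated and slapd restarted.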
> I also want to give a bit of headroom for new user accounts (approx rate
> of increase 80,000/y) and creating a group object for each user.
>>> I'm not using an SHM key (should I be?).
>> So with 300,000 users, your caches look fine. I would definitely
>> recommend using an SHM key if you are going to stick with using BDB. I
>> personally prefer using MDB with current RE24 these days. It is orders
>> of magnitude faster than BDB in all aspects if you enable the write map.
> I've looked at your guidance on using SHM keys but I'm slightly reluctant
> to start playing around with kernel settings on production servers :-)
> The existing default settings on RHEL 5 seem massive in comparison though:
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
> Whereas, based on the Zimbra performance tuning page, I calculated (based
> on an 8GB BDB cache size + 0.5GB for other stuff):
> shmall would be: 2228224
> and shmmax: 8589934592
> Both of which appear to be an order of magnitude smaller than the
> defaults! Then there appear to be some Zimbra-specific commands, but I'm
> guessing the equivalent is just setting olcDbShmKey in slapd.d on
> vanilla OpenLDAP?
> My plan in the longer term is to move to MDB but when I tried it out on
> one of our test VMs (40GB HD) it pretty much devoured all available disk
> space. Is there a rule of thumb for deriving probable MDB disk space
> requirements based on existing BDB size?
> Thanks for the help and apologies for all the additional questions!
> Kind regards,
You only have to adjust the SHM settings in sysctl if the default values are not
large enough. As for MDB, it generally takes about two-thirds the space of BDB.
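The shmmax/shmall figures quoted earlier in the thread can be reproduced with a little arithmetic (a sketch assuming a 4 KiB page size, with shmmax sized for the 8 GB cache segment and shmall covering the cache plus the 0.5 GB of overhead):

```shell
# shmmax: largest single shared-memory segment, in bytes (the 8 GB cache).
shmmax=$((8 * 1024 * 1024 * 1024))
# shmall: total shared memory allowed, in pages (8 GB cache + 0.5 GB other).
page_size=4096
shmall=$(((8 * 1024 * 1024 * 1024 + 512 * 1024 * 1024) / page_size))
echo "kernel.shmmax = $shmmax"   # 8589934592
echo "kernel.shmall = $shmall"   # 2228224
```

As noted above, these only need raising if the platform defaults are smaller; on the RHEL 5 box quoted, the defaults are already far larger.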
Sr. Member of Technical Staff
A Division of VMware, Inc.
Zimbra :: the leader in open source messaging and collaboration
Mark R Cairney
ITI UNIX Section
Tel: 0131 650 6565
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.