My olcDB values are listed below (minus the olcDbConfig entries). I'm not sure if you need the indexes but I've left them in anyway:
olcDbDirectory: /usr/local/authz/var/openldap-data/authorise
olcAddContentAcl: FALSE
olcDbCacheFree: 1
olcDbCacheSize: 400000
olcDbCheckpoint: 10240 10
olcDbDirtyRead: FALSE
olcDbDNcacheSize: 0
olcDbIDLcacheSize: 1200000
olcDbIndex: autoMountKey eq
olcDbIndex: cn pres,eq,sub
olcDbIndex: eduniCategory eq
olcDbIndex: eduniCollegeCode eq
olcDbIndex: eduniIdmsId pres,eq
olcDbIndex: eduniIDStatus eq
olcDbIndex: eduniLibraryBarcode pres,eq
olcDbIndex: eduniOrganisation pres,eq,sub
olcDbIndex: eduniOrgCode eq
olcDbIndex: eduniSchoolCode eq
olcDbIndex: eduniServiceCode pres,eq
olcDbIndex: eduniType eq
olcDbIndex: eduniUnitCode eq
olcDbIndex: eduPersonAffiliation pres,eq
olcDbIndex: eduPersonEntitlement pres,eq
olcDbIndex: eduPersonPrimaryAffiliation eq
olcDbIndex: eduPersonPrincipalName eq
olcDbIndex: eduPersonScopedAffiliation pres,eq
olcDbIndex: eduPersonTargetedID eq
olcDbIndex: entryCSN eq
olcDbIndex: entryUUID eq
olcDbIndex: gecos pres,eq,sub
olcDbIndex: gidNumber pres,eq
olcDbIndex: krbName pres,eq
olcDbIndex: mail pres,eq,sub
olcDbIndex: memberOf pres,eq
olcDbIndex: memberUid eq
olcDbIndex: objectClass eq
olcDbIndex: sn pres,eq,sub
olcDbIndex: uid pres,eq,sub
olcDbIndex: uidNumber pres,eq
olcDbIndex: uniqueMember pres,eq
olcDbIndex: userPassword eq
olcDbLinearIndex: FALSE
olcDbMode: 0600
olcDbNoSync: FALSE
olcDbSearchStack: 16
olcDbShmKey: 0
and my DB_CONFIG file contains:
set_cachesize 4 0 1
set_flags DB_LOG_AUTOREMOVE
set_lk_max_objects 5000
set_lk_max_lockers 5000
set_lg_regionmax 41943040
set_lg_dir /usr/local/authz/slapd/trans-logs
set_lk_max_locks 10000
set_lg_bsize 20971520
set_lg_max 83886080
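(As a sanity check on those numbers: Berkeley DB's db_stat utility reports cache hit rates for an environment, which is an easy way to tell whether set_cachesize is big enough. The utility name may vary with how your BDB build is packaged; a minimal sketch:)

  # Show BDB memory-pool (cache) statistics for the environment behind this
  # database; the directory is the olcDbDirectory value above. A high
  # "requested pages found in the cache" percentage suggests set_cachesize
  # is adequate; a low one suggests the cache is too small.
  db_stat -m -h /usr/local/authz/var/openldap-data/authorise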
I'm not using an SHM key (should I be?).
The search with which I spotted this behaviour probably means nothing outside our environment, as it's against a value in our local schema, but for completeness it was:
(&(eduniCategory=201)(objectclass=inetorgperson))
I've seen the same behaviour with similar searches that we would expect to return a large number of results, e.g. (&(eduPersonAffiliation=student)(objectclass=inetorgperson)), or indeed against groups with a lot of members.
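(For reference, a search like that is easy to reproduce and time from the command line; the URI and base DN below are placeholders for your own environment:)

  # Reproduce and time one of the large-result searches; -x is a simple bind,
  # -LLL trims the LDIF decorations. Replace the URI and base DN with your own.
  time ldapsearch -x -LLL -H ldap://localhost \
      -b "dc=example,dc=com" \
      "(&(eduPersonAffiliation=student)(objectclass=inetorgperson))" dn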
You may wish to read over https://wiki.zimbra.com/wiki/OpenLDAP_Performance_Tuning. You can easily map the information there to a non-Zimbra installation.
Thanks - your documentation on the DB tuning and SHM key is excellent. I can see a couple of things that I might change based on it, but I'll wait to hear back before I jump in and start playing around with parameters.
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
Zimbra :: the leader in open source messaging and collaboration
/****************************
Mark R Cairney
ITI UNIX Section
Information Services
Tel: 0131 650 6565
Email: Mark.Cairney@ed.ac.uk
****************************/
--On Wednesday, September 26, 2012 11:59 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
My olcDB values are listed below (minus the olcDbConfig entries). I'm not sure if you need the indexes but I've left them in anyway:
olcDbCacheFree: 1
olcDbCacheSize: 400000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 1
This may be a little small. I prefer to fully cache my DB.
I'm not using an SHM key (should I be?).
So with 300,000 users, your caches look fine. I would definitely recommend using an SHM key if you are going to stick with using BDB. I personally prefer using MDB with current RE24 these days. It is orders of magnitude faster than BDB in all respects if you enable writemap.
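(For anyone following along with cn=config: on back-mdb the writemap behaviour is controlled by the olcDbEnvFlags attribute. A minimal sketch, in which the database index ({2}) and the olcDbMaxSize value are placeholders rather than values from this thread:)

  # Sketch: enable writemap on an mdb database and give it a generous map size.
  # The DN's {2} index and the 10 GB olcDbMaxSize are placeholders.
  ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
  dn: olcDatabase={2}mdb,cn=config
  changetype: modify
  add: olcDbEnvFlags
  olcDbEnvFlags: writemap
  -
  replace: olcDbMaxSize
  olcDbMaxSize: 10737418240
  EOF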
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
On 26 Sep 2012, at 22:36, Quanah Gibson-Mount wrote:
--On Wednesday, September 26, 2012 11:59 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
My olcDB values are listed below (minus the olcDbConfig entries). I'm not sure if you need the indexes but I've left them in anyway:
olcDbCacheFree: 1
olcDbCacheSize: 400000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 1
This may be a little small. I prefer to fully cache my DB.
Which one in particular? I thought set_cachesize had an upper limit of 4 GB, but your guidance on the Zimbra website suggests this is an old limit, and I now can't find the page on the OpenLDAP site that discusses it. Alternatively, would increasing the number of cache segments from 1 to 2 or 3 be a suitable workaround?
Given that I've got a relatively healthy amount of RAM available, would the following sound sensible to you?
olcDbCacheSize: 1000000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 2
I also want to give a bit of headroom for new user accounts (approx rate of increase 80,000/y) and creating a group object for each user.
I'm not using an SHM key (should I be?).
So with 300,000 users, your caches look fine. I would definitely recommend using an SHM key if you are going to stick with using BDB. I personally prefer using MDB with current RE24 these days. It is orders of magnitude faster than BDB in all respects if you enable writemap.
I've looked at your guidance on using SHM keys, but I'm slightly reluctant to start playing around with kernel settings on production servers :-) The existing default settings on RHEL 5 seem massive in comparison, though:
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Whereas, based on the Zimbra performance tuning page, I calculated (using an 8 GB BDB cache size + 0.5 GB for other stuff) that
shmall would be: 2228224
and shmmax: 8589934592
Both of which appear to be an order of magnitude smaller than the defaults! Then there appear to be some Zimbra-specific commands, but I'm guessing the equivalent on vanilla OpenLDAP is just setting olcDbShmKey in slapd.d?
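(For what it's worth, on a vanilla cn=config setup that is indeed all it amounts to; a minimal sketch, where the database index ({1}hdb) and the key value are placeholders:)

  # Sketch: point the BDB environment at a SysV shared-memory key instead of
  # mmap'ed files. The {1}hdb index and the key value 42 are placeholders;
  # pick a key that ipcs -m shows as unused, and restart slapd afterwards.
  ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
  dn: olcDatabase={1}hdb,cn=config
  changetype: modify
  replace: olcDbShmKey
  olcDbShmKey: 42
  EOF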
My plan in the longer term is to move to MDB, but when I tried it out on one of our test VMs (40 GB HD) it pretty much devoured all available disk space. Is there a rule of thumb for deriving probable MDB disk space requirements based on existing BDB size?
Thanks for the help and apologies for all the additional questions!
Kind regards,
Mark
Mark Cairney wrote:
My plan in the longer term is to move to MDB, but when I tried it out on one of our test VMs (40 GB HD) it pretty much devoured all available disk space. Is there a rule of thumb for deriving probable MDB disk space requirements based on existing BDB size?
In all my tests MDB consistently uses less disk space than BDB. What exactly were you doing when this happened?
On 27 Sep 2012, at 11:59, Howard Chu wrote:
In all my tests MDB consistently uses less disk space than BDB. What exactly were you doing when this happened?
Nothing exotic - I slapcat'ed the existing data and reconfigured the system to use MDB instead of BDB. The data slapadd'ed fine, but when running slapindex it kept hitting the olcDbMaxSize value until eventually the system ran out of disk space. This was a while ago though (December, according to the timestamp on the directory, so probably 2.4.30); I had left the old BDB settings in there, and the only setting I remember tweaking was the MaxSize.
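(For anyone retracing the conversion, the reload itself is just a slapcat export followed by a slapadd into the reconfigured database; a minimal sketch, run with slapd stopped, with the database number and dump path as placeholders:)

  # Sketch of the BDB-to-MDB reload. The database number (-n 1) and the
  # dump path are placeholders.
  slapcat -n 1 -l /tmp/authorise.ldif     # export the existing data
  # ...switch olcDatabase={1} from hdb/bdb to mdb and set olcDbMaxSize
  #    large enough for the data plus indexes...
  slapadd -q -n 1 -l /tmp/authorise.ldif  # quick-mode reload; indexes that are
                                          # already configured get built during
                                          # the add, so no separate slapindex
                                          # pass is needed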
I do have 2.4.32 running on a spare box of the same spec as the production box so I'll try setting up MDB on that.
Cheers,
Mark
Mark Cairney wrote:
On 27 Sep 2012, at 11:59, Howard Chu wrote:
In all my tests MDB consistently uses less disk space than BDB. What exactly were you doing when this happened?
Nothing exotic - I slapcat'ed the existing data and reconfigured the system to use MDB instead of BDB. The data slapadd'ed fine, but when running slapindex it kept hitting the olcDbMaxSize value until eventually the system ran out of disk space. This was a while ago though (December, according to the timestamp on the directory, so probably 2.4.30); I had left the old BDB settings in there, and the only setting I remember tweaking was the MaxSize.
I do have 2.4.32 running on a spare box of the same spec as the production box so I'll try setting up MDB on that.
Ah, don't bother. You hit ITS#7386; the fix is not in 2.4.32 (but is in RE24 for 2.4.33).
--On Thursday, September 27, 2012 11:21 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
On 26 Sep 2012, at 22:36, Quanah Gibson-Mount wrote:
--On Wednesday, September 26, 2012 11:59 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
My olcDB values are listed below (minus the olcDbConfig entries). I'm not sure if you need the indexes but I've left them in anyway:
olcDbCacheFree: 1
olcDbCacheSize: 400000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 1
This may be a little small. I prefer to fully cache my DB.
Which one in particular? I thought set_cachesize had an upper limit of 4 GB, but your guidance on the Zimbra website suggests this is an old limit, and I now can't find the page on the OpenLDAP site that discusses it. Alternatively, would increasing the number of cache segments from 1 to 2 or 3 be a suitable workaround?
BDB 4.2.52 had an upper limit of 4 GB for its cache segments. Since you aren't running BDB 4.2.52, you have no such limit.
Given that I've got a relatively healthy amount of RAM available, would the following sound sensible to you?
olcDbCacheSize: 1000000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 2
I would do set_cachesize 8 0 0
I also want to give a bit of headroom for new user accounts (approx rate of increase 80,000/y) and creating a group object for each user.
Ok.
I'm not using an SHM key (should I be?).
So with 300,000 users, your caches look fine. I would definitely recommend using an SHM key if you are going to stick with using BDB. I personally prefer using MDB with current RE24 these days. It is orders of magnitude faster than BDB in all respects if you enable writemap.
I've looked at your guidance on using SHM keys, but I'm slightly reluctant to start playing around with kernel settings on production servers :-) The existing default settings on RHEL 5 seem massive in comparison, though:
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Whereas, based on the Zimbra performance tuning page, I calculated (using an 8 GB BDB cache size + 0.5 GB for other stuff) that
shmall would be: 2228224
and shmmax: 8589934592
Both of which appear to be an order of magnitude smaller than the defaults! Then there appear to be some Zimbra-specific commands, but I'm guessing the equivalent on vanilla OpenLDAP is just setting olcDbShmKey in slapd.d?
My plan in the longer term is to move to MDB, but when I tried it out on one of our test VMs (40 GB HD) it pretty much devoured all available disk space. Is there a rule of thumb for deriving probable MDB disk space requirements based on existing BDB size?
Thanks for the help and apologies for all the additional questions!
Kind regards,
You only have to adjust the SHM bits in sysctl if the default values are not large enough. As for MDB, it generally takes about two-thirds the space of BDB.
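(The check before touching anything is simply:)

  # Current SysV shared-memory limits; shmmax is in bytes, shmall in pages.
  sysctl kernel.shmmax kernel.shmall
  getconf PAGE_SIZE   # page size used to interpret shmall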
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Thanks. I've implemented these changes on our "spare" box, and it seems to be handling these large-result searches without stalling; it appears to be faster too, which is a bonus :-)
I'll look at implementing these settings and the SHM key on the production servers sometime next week.
It might be worth adding a note not to change the sysctl settings unless they are currently too small on your given platform - taking that into account, implementing the SHM key was simply a case of running ipcs -m and picking a value that wasn't already in use.
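(Concretely, that check is just:)

  # List the shared-memory segments (and their keys) already in use on the
  # box, then pick an unused key value for olcDbShmKey.
  ipcs -m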
Many thanks and have a good weekend,
Mark
On 27 Sep 2012, at 16:46, Quanah Gibson-Mount wrote:
--On Thursday, September 27, 2012 11:21 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
On 26 Sep 2012, at 22:36, Quanah Gibson-Mount wrote:
--On Wednesday, September 26, 2012 11:59 AM +0100 Mark Cairney mark.cairney@ed.ac.uk wrote:
My olcDB values are listed below (minus the olcDbConfig entries). I'm not sure if you need the indexes but I've left them in anyway:
olcDbCacheFree: 1
olcDbCacheSize: 400000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 1
This may be a little small. I prefer to fully cache my DB.
Which one in particular? I thought set_cachesize had an upper limit of 4 GB, but your guidance on the Zimbra website suggests this is an old limit, and I now can't find the page on the OpenLDAP site that discusses it. Alternatively, would increasing the number of cache segments from 1 to 2 or 3 be a suitable workaround?
BDB 4.2.52 had an upper limit of 4 GB for its cache segments. Since you aren't running BDB 4.2.52, you have no such limit.
Given that I've got a relatively healthy amount of RAM available, would the following sound sensible to you?
olcDbCacheSize: 1000000
olcDbIDLcacheSize: 1200000
set_cachesize 4 0 2
I would do set_cachesize 8 0 0
I also want to give a bit of headroom for new user accounts (approx rate of increase 80,000/y) and creating a group object for each user.
Ok.
I'm not using an SHM key (should I be?).
So with 300,000 users, your caches look fine. I would definitely recommend using an SHM key if you are going to stick with using BDB. I personally prefer using MDB with current RE24 these days. It is orders of magnitude faster than BDB in all respects if you enable writemap.
I've looked at your guidance on using SHM keys, but I'm slightly reluctant to start playing around with kernel settings on production servers :-) The existing default settings on RHEL 5 seem massive in comparison, though:
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
Whereas, based on the Zimbra performance tuning page, I calculated (using an 8 GB BDB cache size + 0.5 GB for other stuff) that
shmall would be: 2228224
and shmmax: 8589934592
Both of which appear to be an order of magnitude smaller than the defaults! Then there appear to be some Zimbra-specific commands, but I'm guessing the equivalent on vanilla OpenLDAP is just setting olcDbShmKey in slapd.d?
My plan in the longer term is to move to MDB, but when I tried it out on one of our test VMs (40 GB HD) it pretty much devoured all available disk space. Is there a rule of thumb for deriving probable MDB disk space requirements based on existing BDB size?
Thanks for the help and apologies for all the additional questions!
Kind regards,
You only have to adjust the SHM bits in sysctl if the default values are not large enough. As for MDB, it generally takes about two-thirds the space of BDB.
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
Zimbra :: the leader in open source messaging and collaboration
/****************************
Mark R Cairney
ITI UNIX Section
Information Services
Tel: 0131 650 6565
Email: Mark.Cairney@ed.ac.uk
****************************/