Kyle Smith wrote:
Ok, I have been running 2.4.32 for some time with no issues. Yesterday, 2 different servers (both part of a 4-way MMR) produced an "index add failure" and an "index delete failure". I went back over the bdb DB_CONFIG settings (listed below) and everything looks nominal to me. Would it just make more sense to switch from bdb to mdb instead of troubleshooting these "random" errors too much? I also noticed that the number of "deadlocks" corresponds to the number of errors that were produced. Is there a correlation there?
Probably, but that's not an indication of any actual failure. Deadlocks are normal occurrences in BerkeleyDB and the backends automatically retry when they occur. You can basically ignore any error that accompanies a deadlock.
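(For anyone who does stay on back-bdb and is running out of locks rather than just deadlocking: the usual fix is to raise the lock limits in DB_CONFIG. The values below are purely illustrative, not recommendations; size them from your own db_stat -c output:

```
# DB_CONFIG -- illustrative back-bdb lock limits (example values only)
set_lk_max_locks    5000
set_lk_max_lockers  2500
set_lk_max_objects  5000
```

Changes to DB_CONFIG only take effect when the environment is recreated, i.e. after stopping slapd and running db_recover.)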
And yes, if you switch to MDB, all of these issues go away.
Given that reads in MDB are 5-20x faster than BDB, and writes are 2-5x faster, and MDB uses 1/4 as much RAM as BDB, there's hardly any reason to use BDB any more*. No tuning, no maintenance. MDB just works, quickly and efficiently.
*If you're still using a 32 bit machine, you may be better off using BDB, especially if you have databases 1GB or larger. But seriously, why are you still using a 32 bit machine?
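For what it's worth, the back-mdb side of the switch really is that small. A minimal slapd.conf sketch (suffix, directory path, and maxsize here are placeholders for your own values):

```
# slapd.conf -- minimal back-mdb database section (values are examples)
database        mdb
suffix          "dc=example,dc=com"
directory       /var/lib/ldap
# maxsize is the one knob: the maximum size of the memory map, in bytes
maxsize         10737418240
rootdn          "cn=admin,dc=example,dc=com"
```

No cachesize, no idlcachesize, no DB_CONFIG.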
Thanks!
578	Last allocated locker ID
0x7fffffff	Current maximum unused locker ID
9	Number of lock modes
3000	Maximum number of locks possible
1500	Maximum number of lockers possible
1500	Maximum number of lock objects possible
1	Number of lock object partitions
15	Number of current locks
1029	Maximum number of locks at any one time
17	Maximum number of locks in any one bucket
0	Maximum number of locks stolen by for an empty partition
0	Maximum number of locks stolen for any one partition
123	Number of current lockers
224	Maximum number of lockers at any one time
15	Number of current lock objects
526	Maximum number of lock objects at any one time
5	Maximum number of lock objects in any one bucket
0	Maximum number of objects stolen by for an empty partition
0	Maximum number of objects stolen for any one partition
3581M	Total number of locks requested (3581768929)
3581M	Total number of locks released (3581768869)
0	Total number of locks upgraded
77	Total number of locks downgraded
7041	Lock requests not available due to conflicts, for which we waited
43	Lock requests not available due to conflicts, for which we did not wait
2	Number of deadlocks
0	Lock timeout value
0	Number of locks that have timed out
0	Transaction timeout value
0	Number of transactions that have timed out
1MB 392KB	The size of the lock region
0	The number of partition locks that required waiting (0%)
0	The maximum number of times any partition lock was waited for (0%)
0	The number of object queue operations that required waiting (0%)
577	The number of locker allocations that required waiting (0%)
32148	The number of region locks that required waiting (0%)
5	Maximum hash bucket length
On Wed, Aug 29, 2012 at 12:04 PM, Quanah Gibson-Mount <quanah@zimbra.com> wrote:
--On Wednesday, August 29, 2012 11:32 AM -0400 Kyle Smith <alacer.cogitatus@gmail.com> wrote:

Quanah,

Thanks for the info, I have confirmed I'm hitting the lock maxes of 1000. And I will be upgrading to 2.4.32. I was wondering, what steps should be done to have the changes in DB_CONFIG take effect?

1. stop slapd
2. make changes to DB_CONFIG
3. db_recover
4. start slapd

Will this also auto-remove the log.* files? (I plan on setting "set_flags DB_LOG_AUTOREMOVE" in DB_CONFIG.)

If you have checkpointing set in slapd.conf/cn=config, it should, yes.

--Quanah

--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
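The sequence above, as a shell sketch. The paths and service invocation are assumptions (they vary by distribution and install layout); only the ordering and the db_recover step are the point:

```shell
# Apply DB_CONFIG changes to a back-bdb database (paths are examples)
service slapd stop                    # 1. stop slapd

$EDITOR /var/lib/ldap/DB_CONFIG       # 2. edit DB_CONFIG, e.g. add:
                                      #      set_flags DB_LOG_AUTOREMOVE

db_recover -h /var/lib/ldap           # 3. recreate the environment so the
                                      #    new settings take effect

service slapd start                   # 4. start slapd; with checkpointing
                                      #    configured, old log.* files are
                                      #    then removed automatically
```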