Hi,
We are in the middle of a move from 2.4/bdb to 2.5/mdb, and I am working on implementing monitoring for the new systems.
In the 2.4/bdb environment, we run several scripts to check sanity of the servers:
* checks of the contextCSN value across the masters/slaves (a minimal version is sketched below)
* regular db_verify on the actual bdb files
* compare all DN & entryCSN in the DB across all the masters/slaves
* do a write on a master, and verify that it is replicated everywhere
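For illustration, the contextCSN check boils down to something like this python-ldap3 sketch (the server URLs and suffix are placeholders for our real ones):

import ssl
from ldap3 import Server, Connection, Tls, BASE

# Placeholders; substitute your own servers and suffix.
servers = ["ldaps://master1.example.com", "ldaps://master2.example.com"]
suffix = "dc=example,dc=com"

tls = Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLSv1_2)
csns = {}
for url in servers:
    conn = Connection(Server(url, tls=tls, use_ssl=True), auto_bind=True)
    # contextCSN is an operational attribute on the suffix entry;
    # with multiple masters there is one value per serverID
    conn.search(suffix, "(objectClass=*)", search_scope=BASE,
                attributes=["contextCSN"])
    csns[url] = sorted(conn.entries[0].contextCSN.values)

# Once replication has converged, every server reports the same set
if len({tuple(v) for v in csns.values()}) != 1:
    print("contextCSN mismatch:", csns)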
Most of these checks can be done the same way in 2.4/bdb and 2.5/mdb, but not db_verify, which is bdb-specific.
I can't find much information on the maintenance / integrity checks that can be done on mdb databases. In fact, most of what I can find is: "MDB uses no caching and requires no tuning to deliver maximum search performance"
That is very good. :-) We have already discovered that we need to increase maxsize, as our database is (much) larger than the default 10MB.
Two questions:
- Is there really nothing else to tune or adjust?
- Is there really no way to verify the internal structures within the MDB file, to make sure everything is valid and healthy? Or perhaps there are things I have missed with our four checks above?
Anyone else with tips and tricks on daily maintenance or monitoring? Scripts to share? Perhaps Zabbix templates?
If anyone would be interested in (for example) my script to verify the complete directory contents based on DN&entryCSN across a (multi)master/(multi)slave setup, I'd be happy to share too, of course.
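The core of it looks roughly like this, along the same lines as the contextCSN sketch above (placeholders again, and a real run needs paged searches for large DITs):

import ssl
from ldap3 import Server, Connection, Tls, SUBTREE

# Placeholders; substitute your own servers and suffix.
servers = ["ldaps://master1.example.com", "ldaps://slave1.example.com"]
suffix = "dc=example,dc=com"

tls = Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLSv1_2)
snapshots = {}
for url in servers:
    conn = Connection(Server(url, tls=tls, use_ssl=True), auto_bind=True)
    conn.search(suffix, "(objectClass=*)", search_scope=SUBTREE,
                attributes=["entryCSN"])
    # Map every DN to its entryCSN so the servers can be diffed
    snapshots[url] = {e.entry_dn: e.entryCSN.value for e in conn.entries}

reference = snapshots[servers[0]]
for url, snapshot in snapshots.items():
    if snapshot != reference:
        print("directory content differs between", servers[0], "and", url)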
Thanks!
--On Thursday, June 15, 2023 10:51 PM +0200 sacawulu <cyusedfzfb@gmail.com> wrote:
> Most of these checks can be done the same way in 2.4/bdb and 2.5/mdb, but not db_verify, which is bdb-specific.
>
> I can't find much information on the maintenance / integrity checks that can be done on mdb databases. In fact, most of what I can find is: "MDB uses no caching and requires no tuning to deliver maximum search performance"
There's very little that needs to be done when using back-mdb, as you've already discovered.
> That is very good. :-) We have already discovered that we need to increase maxsize, as our database is (much) larger than the default 10MB.
The rule of thumb for maxsize is to set it to something larger than you ever expect to hit. For example, with a database I'm using that's ~3.4GB in size, I have a 20GB maxsize set. You may want to monitor the size of your database vs the configured maxsize. This can be done with data available in the back-monitor database (at least with 2.6, not sure with 2.5) or via the mdb_stat utility.
python-ldap3 snippet using back-monitor:
import ssl
import sys

from ldap3 import Server, Connection, Tls
from ldap3.core.exceptions import LDAPException

# ldap_url points at the server to check, e.g. "ldaps://ldap.example.com"
try:
    tls = Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLSv1_2)
    conn = Connection(Server(ldap_url, tls=tls, use_ssl=True), auto_bind=True)
    conn.search(
        "cn=monitor",
        "(&(objectClass=olmMDBDatabase))",
        attributes=["olmMDBPagesMax", "olmMDBPagesUsed", "namingContexts"],
    )
except LDAPException as e:
    sys.exit(e)

for db in conn.entries:
    suffix = db.namingContexts.value
    # 4096 is the page size in use
    # can be found with mdb_stat -e /path/to/database
    max_size = int(db.olmMDBPagesMax.value) * 4096
    current_size = int(db.olmMDBPagesUsed.value) * 4096
    pct_used = float(current_size / max_size * 100)
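From there, pct_used (and suffix) can be fed into whatever monitoring system you use and alerted on at a suitable threshold, so you get warned well before a database approaches its configured maxsize.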
> Anyone else with tips and tricks on daily maintenance or monitoring? Scripts to share? Perhaps Zabbix templates?
The only other thing I would recommend is for the case where you have objects in the directory with very large multi-valued attribute data. For example, if you use groups extensively and the 'member' attribute has hundreds of values, you would probably want to do something like:
a) Add member as an attribute to be handled by olcSortvals
b) Add a multival configuration to back-mdb if the attribute is indexed, for example if you index member for "eq". I usually use 100,10. This puts that attribute's index into its own sub-database, which helps keep fragmentation from becoming an issue (see the sketch after this list).
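Roughly, in cn=config terms that would look like the following sketch (the {1}mdb database DN and the 100,10 thresholds are examples to adapt to your own setup):

dn: cn=config
changetype: modify
add: olcSortVals
olcSortVals: member

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbMultival
olcDbMultival: member 100,10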
For example, I had a ~5.5GB DB that was swelling to over 11GB in size due to fragmentation before implementing those two configuration options.
Regards,
Quanah