Hi Quanah,
Thanks for your answer and kind suggestions! We will implement them.
And is anyone here using Zabbix who has some monitoring scripts lying around?
Thanks!
On Fri, 16 Jun 2023 at 16:42, Quanah Gibson-Mount quanah@fast-mail.org wrote:
--On Thursday, June 15, 2023 10:51 PM +0200 sacawulu cyusedfzfb@gmail.com wrote:
Most of these things can be done similarly in 2.4/bdb and 2.5/mdb, but not db_verify, which is bdb-specific.
I can't find a lot of info on the maintenance / integrity checks that can be done on mdb databases. In fact, most of what I can find is: "MDB uses no caching and requires no tuning to deliver maximum search performance".
There's very little that needs to be done when using back-mdb, as you've already discovered.
That is very good. :-) We have already discovered that we need to increase maxsize, as our database is (much) larger than 10MB.
The rule of thumb for maxsize is to set it to something larger than you ever expect to hit. For example, with a database I'm using that's ~3.4GB in size, I have a 20GB maxsize set. You may want to monitor the size of your database vs the configured maxsize. This can be done with data available in the back-monitor database (at least with 2.6, not sure with 2.5) or via the mdb_stat utility.
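For reference, raising maxsize is a single-attribute change on the database entry. A minimal LDIF sketch via cn=config (the {1}mdb numbering and the 20GB value are only examples, adjust to your own layout; olcDbMaxSize is given in bytes):

dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcDbMaxSize
olcDbMaxSize: 21474836480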
python-ldap3 snippet using back-monitor:
import ssl

from ldap3 import Connection, Server, Tls
from ldap3.core.exceptions import LDAPException

ldap_url = "ldaps://ldap.example.com"  # placeholder, point this at your server

try:
    tls = Tls(validate=ssl.CERT_NONE, version=ssl.PROTOCOL_TLSv1_2)
    conn = Connection(Server(ldap_url, tls=tls, use_ssl=True), auto_bind=True)
    conn.search(
        "cn=monitor",
        "(&(objectClass=olmMDBDatabase))",
        attributes=["olmMDBPagesMax", "olmMDBPagesUsed", "namingContexts"],
    )
    for db in conn.entries:
        suffix = db.namingContexts.value
        # 4096 is the page size in use
        # can be found with mdb_stat -e /path/to/database
        max_size = int(db.olmMDBPagesMax.value) * 4096
        current_size = int(db.olmMDBPagesUsed.value) * 4096
        pct_used = float(current_size / max_size * 100)
except LDAPException as e:
    raise SystemExit(f"cn=monitor query failed: {e}")
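From there, pct_used can be compared against whatever threshold suits your monitoring; a purely illustrative continuation of the loop above (the 80% threshold is arbitrary):

        # arbitrary example threshold; adjust or feed the value to your monitoring tool
        if pct_used > 80:
            print(f"WARNING: {suffix} is at {pct_used:.1f}% of its configured maxsize")
        else:
            print(f"OK: {suffix} is at {pct_used:.1f}% of its configured maxsize")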
Anyone else with tips and tricks on daily maintenance or monitoring? Scripts to share? Perhaps Zabbix templates?
The only other thing I would recommend applies if you have objects in the directory with very large multi-valued attribute data. For example, if you use groups extensively and the 'member' attribute has hundreds of entries, you would probably want to do something like:
a) Add member as an attribute to be handled by olcSortVals
b) Add a multival configuration to back-mdb if the attribute is indexed, for example if you index member with "eq". I usually use 100,10. This puts that attribute's index into its own sub-database, which helps keep fragmentation from being an issue (a rough LDIF sketch of both settings follows below).
For example, I had a ~5.5GB DB that was swelling to over 11GB in size due to fragmentation before implementing those two configuration options.
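A rough cn=config sketch of those two settings (untested here; attribute names per slapd-config(5) and slapd-mdb(5), the {1}mdb numbering is only an example, and use replace instead of add if olcSortVals is already set):

dn: cn=config
changetype: modify
add: olcSortVals
olcSortVals: member

dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbMultival
olcDbMultival: member 100,10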
Regards, Quanah
On Wed, 21 Jun 2023 at 08:39, cYuSeDfZfb cYuSeDfZfb cyusedfzfb@gmail.com wrote:
Hi Quanah,
Thanks for your answer and kind suggestions! We will implement them.
And is anyone here using Zabbix who has some monitoring scripts lying around?
Hello,
we provide some monitoring scripts in the LDAP Tool Box project, for example: https://ltb-project.org/documentation/check_lmdb_usage.html
Clément.
Hi Clément,
Thank you for the link!