Ulrich Windl wrote:
> Issues with LDAP Monitoring
Aren't you describing issues with monitoring in general?
"Uptime" is in whole seconds only (minor issue). SNMP uptime has a finer resolution (but limited range, unfortunately).
> Detailed data per peer can only be retrieved through the "Connections" entries, but that is only a momentary snapshot: if a client opens a connection, performs a few operations and closes the connection again, a polling monitor client will never see those operations. Also, when a cumulative count of operations per peer is needed (or just the number of connections per peer, e.g. for a rate), the monitor client has to add up the numbers from all of that peer's connections. If a connection with a significant number of operations was closed since the last poll, the computed total even appears to go backwards (a negative delta). So the monitor client also has to store accumulated numbers for closed connections per peer (keeping the numbers of every single closed connection seems inefficient).
"Current Connections" is returned as monitor _counter_ object (monitorCounter), where in fact it's of type "gauge", opposed to "Total Connections" (which is also returned as monitor counter) which is actually a counter. This makes the code harder than necessary.
Of course Shannon's sampling theorem also applies to IT monitoring.
And of course, if your scripts calculate rates, they have to deal with counter resets etc. BTDT.
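For the record, the counter handling in such scripts usually boils down to something like this small sketch (plain Python, no particular monitoring framework assumed):

import time

class CounterRate:
    """Turn a monotonically increasing counter into a rate, tolerating
    counter resets (e.g. after a slapd restart) and negative deltas."""
    def __init__(self):
        self.last_value = None
        self.last_time = None

    def update(self, value, now=None):
        now = time.time() if now is None else now
        rate = None
        if self.last_value is not None:
            delta = value - self.last_value
            elapsed = now - self.last_time
            if delta >= 0 and elapsed > 0:
                rate = delta / elapsed
            # delta < 0 means the counter was reset (restart) or wrapped:
            # skip this interval instead of reporting a bogus negative rate
        self.last_value, self.last_time = value, now
        return rate

The first call returns None, and an interval containing a counter reset is skipped instead of producing a negative or absurdly large rate.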
In general, polling-based monitoring systems like Nagios, check_mk etc. are pretty poor at fine-grained performance monitoring. You will always lose information about peak loads that happen within those rather wide polling intervals of 30+ seconds.
If you really need that, you can ship the logged events to the usual ELK stack (or similar) and analyze whatever you want there [1]. Of course, depending on your OpenLDAP load, you will need a big and fast log store.
[1] https://github.com/coudot/openldap-elk
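For what it's worth, the per-operation detail is all in the logs with loglevel "stats"; a setup like [1] does the equivalent in its own log pipeline, but the basic idea is no more than this sketch (assuming the usual conn=/op= prefix of those log lines):

import re

# matches the conn=/op= prefix of slapd "stats" loglevel lines, e.g.
#   conn=1002 op=1 SRCH base="dc=example,dc=com" scope=2 deref=0 filter="(uid=jdoe)"
#   conn=1002 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
#   conn=1002 fd=14 closed
STATS_RE = re.compile(r"conn=(?P<conn>\d+) (?:op=(?P<op>\d+) )?(?P<rest>.*)")

def parse_stats_line(line):
    """Return a dict with conn id, op id (if any) and the remaining text,
    or None for lines that are not per-connection stats lines."""
    m = STATS_RE.search(line)
    if not m:
        return None
    return {
        "conn": int(m.group("conn")),
        "op": int(m.group("op")) if m.group("op") else None,
        "event": m.group("rest"),
    }

To get per-peer numbers you still have to correlate the ACCEPT line (which carries the peer IP) with the later op= lines via the conn= id.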
> What I'm missing are some database (BDB/HDB) runtime statistics.
Forget about BDB/HDB. MDB is the way to go. ;-)
https://www.openldap.org/its/index.cgi?findid=7770
Ciao, Michael.