Thanks for all the feedback on this, guys.
This would generally indicate that you've failed to properly tune the backend database.
Completely possible! :) I worked through the material in the OpenLDAP
FAQ-o-Matic in the past, but it was complicated, and we've added a lot of
users since then.
Is there a better tuning guide out there? One that relates more to the
actual data structures being used?
Or he's got a ~35GB database...
The backend database files 'weigh' ~4.4 GB alone after running
'db_archive -d' to clean up the old 'log.*' files.
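For reference, the sort of invocation meant here - a minimal sketch, assuming the BerkeleyDB environment lives in /var/lib/ldap (the path is an assumption; substitute your own):

```shell
# -h names the BerkeleyDB environment directory (assumed path)
db_archive -h /var/lib/ldap      # list log files that are no longer needed
db_archive -h /var/lib/ldap -d   # delete those unneeded log files
```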
We went to some effort to make sure that it's safe to run slapcat while slapd
is running, to allow hot backups to be performed. Ignoring this facility is
pretty counterproductive. BerkeleyDB itself also provides instructions on
how to perform a hot backup of the raw DB files. Both of these options are
supported and are already documented; anything else you do at your own risk.
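A sketch of the two documented approaches (the paths and filenames are assumptions; substitute your own):

```shell
# Option 1: LDIF export via slapcat - safe while slapd is running;
# the result can be reloaded later with slapadd.
slapcat -l /var/backups/ldap-backup.ldif

# Reload (with slapd stopped); -q skips some consistency checks
# and can shorten a long import considerably.
slapadd -q -l /var/backups/ldap-backup.ldif

# Option 2: BerkeleyDB's own hot-backup utility, which copies the
# raw DB files plus the log files needed to make them consistent.
db_hotbackup -h /var/lib/ldap -b /var/backups/ldap-db
```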
I'd much prefer to use the slapcat method, but as I mentioned, the
import time has grown substantially in the past half year as
we've added so much data - to the point that it takes 4 hours to do an import.
We're talking reasonably fast hardware too: it's a PowerEdge 1850, dual
2.8 GHz Xeons with 4 GB of memory and a RAID 1 array dedicated to the database.
The schema we use is highly customized for our application, and I'm
working on a project to 'trim the fat', as there is definitely room for
improvement there.
But what can I do to learn more about this fine art of tuning the backend database?
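One commonly documented starting point is the BDB environment cache, set via a
DB_CONFIG file in the database directory; it should be large enough to hold the
working set of index and entry pages. A sketch only - the values and the log
directory path are illustrative assumptions, not recommendations:

```
# DB_CONFIG - read when the BDB environment is (re)created
set_cachesize 0 268435456 1    # 0 GB + 256 MB cache, in 1 segment
set_lg_bsize  2097152          # 2 MB in-memory log buffer
set_lg_dir    /var/log/bdb     # optional: keep log.* files on another spindle (assumed path)
```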