Quanah Gibson-Mount wrote:
I'm using a database where id2entry.bdb is ~6.6 GB, and it's taking a surprisingly long time to replicate. After 18 hours it has gotten about a quarter of the way.
I'm wondering if I could speed it up by loading an LDIF backup on the empty server before I start it.
Yep. This is the recommended method for large databases.
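For reference, the offline load usually looks something like this. This is only a sketch: the service commands, file paths, and ownership are assumptions for a typical Linux install, and `-q` (quick mode) skips schema and consistency checking, so only use it on an LDIF you trust.

```shell
# Stop slapd so nothing touches the database during the bulk load
# (service name is an assumption; adjust for your platform).
service slapd stop

# Bulk-load the LDIF backup with slapadd.
# -q enables quick mode, which skips consistency checks for a large speedup.
# Path to the backup file is an assumption.
slapadd -q -l /backups/full.ldif

# slapadd creates the BDB files as the invoking user; fix ownership
# before restarting (user/group and directory are assumptions).
chown -R ldap:ldap /var/lib/ldap
service slapd start
```

Once slapd starts, syncrepl only has to pull the changes made since the backup rather than the whole database.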
Ok - one thing that makes me wonder though: if I monitor the process with "slapcat | grep 'dn: o=' | wc -l" to get an estimate of how much has been replicated, the number sometimes decreases!?
The database has a lot of nodes (~160,000) below the root with DN o=...
Is it expected that this number won't rise strictly, or is it a sign that something else is going on that's making it slow?
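For what it's worth, the counting pipeline itself behaves as expected on a static dump. A minimal check (with a made-up two-entry LDIF), so any fluctuation reflects what slapcat emits at that moment rather than the grep/wc stage:

```shell
# Write a tiny sample LDIF with two top-level o= entries
# (sample data invented for illustration).
cat > sample.ldif <<'EOF'
dn: o=example1
objectClass: organization

dn: o=example2
objectClass: organization
EOF

# Same idea as `slapcat | grep 'dn: o=' | wc -l`;
# grep -c counts the matching lines directly.
grep -c '^dn: o=' sample.ldif
```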
/Peter