Quanah Gibson-Mount wrote:
--On October 7, 2009 11:09:18 PM +0200 Emmanuel Lecharny elecharny@apache.org wrote:
Quanah Gibson-Mount wrote:
--On October 7, 2009 3:32:51 PM -0400 Aaron Richton richton@nbcs.rutgers.edu wrote:
On Wed, 7 Oct 2009, iz1ksw iz1ksw wrote:
What is the fastest way (in terms of OpenLDAP settings) to perform a massive load (a ~200 MB LDIF file) of data into an OpenLDAP directory?
Try "slapadd -q" (read slapadd(8) man page to get started).
slapadd -q is important, but so is having a large enough BDB cache in the DB_CONFIG file, plus setting the right number of tool threads. I took a load down from 17 hours to less than 2 hours by adjusting all of those correctly and adding -q.
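As a rough sketch of what those knobs look like (the values depend entirely on your data and hardware, so treat these as placeholders):

    # DB_CONFIG, in the database directory: 2 GB BDB cache
    set_cachesize 2 0 1

    # slapd.conf: number of threads slapadd uses for index generation
    tool-threads 4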
The LDIF file is only 200 MB. It should be a matter of minutes to load it on a decent machine, even with the default parameters :)
There is not a 1:1 correlation between LDIF file size and the resulting database size. My LDIF file was 300 MB and the resulting database was 12.5 GB.
Sure enough. My own test used a 200 MB LDIF file that produced a 6 GB database.
I guess it all depends on the number of indexes. The best approach is to start with a smaller file, measure how long it takes to load, and then extrapolate (roughly) to the full file.
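One quick way to carve out such a sample, assuming the entries in the LDIF are separated by blank lines as usual (the file names here are placeholders):

    # keep the first ~10,000 entries, then time the load
    awk '/^$/ { if (++n >= 10000) exit } { print }' full.ldif > sample.ldif
    time slapadd -q -l sample.ldif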
In any case, a 200 MB LDIF file will represent something between 100,000 and 1,000,000 entries, not a lot more, i.e. roughly 200 bytes to 2 KB per entry (but the count can be much lower if you have jpegPhoto attributes ...).
That being said, I doubt that loading a 200 MB LDIF file will take hours...