I am trying to perform some benchmarking against OpenLDAP. So far I have run
my tests against 100K- and 1-million-entry databases and have had rather
decent numbers. My final set of tests was to be run against a 10-million-entry
database. Unfortunately, I am having difficulty loading the database
with this many entries. I have generated ten 1-million-entry LDIF files and am
using "slapadd -c -q -v -l <file>" to import each file. The first two files took
approximately 15 minutes each to load, but the remaining eight are taking
progressively longer, so much longer that I anticipate the entire
process will take well over 24 hours. My question is: is there anything I can
do to increase the performance of slapadd? I assume that since slapd is not
running at this point, the normal DB_CONFIG and slapd.conf settings do
not have much effect.
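For what it's worth, the kind of DB_CONFIG tuning I have been reading about
for bulk loads looks roughly like the sketch below. The values are guesses,
not my actual settings, though set_cachesize, set_lg_bsize, and set_flags are
standard Berkeley DB DB_CONFIG directives:

```
# Sketch of DB_CONFIG settings commonly suggested for bulk loads.
# Values below are illustrative guesses, not a tested configuration.
set_cachesize 1 0 1          # 1 GB BDB cache (gbytes bytes ncaches)
set_lg_bsize 2097152         # 2 MB transaction log buffer
set_flags DB_LOG_AUTOREMOVE  # remove old transaction log files automatically
```

If these settings are in fact honored by slapadd even with slapd stopped,
then I may simply have an undersized cache.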
I currently have a single database configured to hold all 10 million
entries. Is this an unrealistic expectation? Should I instead plan on
having multiple databases and using chaining/referrals to link them
together? What is the optimal configuration for handling this number of
entries?
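If splitting the data is the right answer, I assume the slapd.conf would look
something like the fragment below, using the "subordinate" keyword to glue
several databases under one suffix. The suffixes and directories are invented
examples, and I am not sure this is the recommended layout:

```
# Hypothetical slapd.conf fragment: two databases glued under one suffix.
# Subordinate databases must be defined before their superior.
database        bdb
suffix          "ou=batch1,dc=example,dc=com"
subordinate
directory       /var/lib/ldap/batch1

database        bdb
suffix          "dc=example,dc=com"
directory       /var/lib/ldap/root
```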
Thanks,
Pete