I am trying to perform some benchmarking against OpenLDAP. So far I have run my tests against a 100K and a 1-million-entry database and I have had rather decent numbers. My final set of tests was to be run against a 10-million-entry database. Unfortunately, I am having difficulty loading the database with this many entries. I have generated ten 1-million-entry LDIF files. I am using "slapadd -c -q -v -l <file>" to import each file. The first 2 files took approximately 15 minutes each to load. The remaining 8 files are taking progressively longer and longer. So much longer that I anticipate the entire process to take well over 24 hours. My question is: is there anything I can do to increase the performance of slapadd? I assume that since slapd is not running at this point, the normal DB_CONFIG and slapd.conf settings do not have much effect.
I currently have a single database configured to hold all 10 million entries. Is this an unrealistic expectation? Should I instead be planning on having multiple databases and using chaining/referrals to link them together? What is the optimal configuration for handling this number of entries?
Thanks, Pete
--On Wednesday, March 18, 2009 1:25 PM -0400 Pete Giesin pgiesin@hubcitymedia.com wrote:
> I am trying to perform some benchmarking against OpenLDAP. So far I have run my tests against a 100K and a 1-million-entry database and I have had rather decent numbers. My final set of tests was to be run against a 10-million-entry database. Unfortunately, I am having difficulty loading the database with this many entries. I have generated ten 1-million-entry LDIF files. I am using "slapadd -c -q -v -l <file>" to import each file. The first 2 files took approximately 15 minutes each to load. The remaining 8 files are taking progressively longer and longer. So much longer that I anticipate the entire process to take well over 24 hours. My question is: is there anything I can do to increase the performance of slapadd? I assume that since slapd is not running at this point, the normal DB_CONFIG and slapd.conf settings do not have much effect.
What are the settings in your DB_CONFIG file? It is absolutely critical to the performance of slapadd. What version of OpenLDAP are you using? What version of BDB? What operating system?
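For context, a DB_CONFIG aimed at bulk loads typically sets a large BDB cache and enlarges the log buffer. The directives below are real Berkeley DB DB_CONFIG keywords, but the sizes are purely illustrative assumptions and must be fitted to the machine's actual RAM:

```
# Hypothetical DB_CONFIG sketch for a bulk slapadd of ~10M entries.
# All sizes are assumptions, not recommendations for any specific host.
set_cachesize   4 0 1       # 4 GB BDB cache in a single segment
set_lg_bsize    2097152     # 2 MB in-memory log buffer
set_lg_regionmax 262144     # log region size
set_flags DB_LOG_AUTOREMOVE # remove log files no longer needed
```

With too small a cache, index pages fall out of memory as the database grows, which matches the symptom of each successive LDIF file loading more slowly than the last.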
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Pete Giesin wrote:
> I am trying to perform some benchmarking against OpenLDAP. So far I have run my tests against a 100K and a 1-million-entry database and I have had rather decent numbers. My final set of tests was to be run against a 10-million-entry database. Unfortunately, I am having difficulty loading the database with this many entries. I have generated ten 1-million-entry LDIF files. I am using "slapadd -c -q -v -l <file>" to import each file. The first 2 files took approximately 15 minutes each to load. The remaining 8 files are taking progressively longer and longer. So much longer that I anticipate the entire process to take well over 24 hours.
Sounds like something is misconfigured. I've loaded 5 billion entries in about 7 days; you should be able to easily load 10 million entries in about an hour. What version of OpenLDAP are you using, and which backend? What version of the database library are you using?
> My question is: is there anything I can do to increase the performance of slapadd? I assume that since slapd is not running at this point, the normal DB_CONFIG and slapd.conf settings do not have much effect.
DB_CONFIG is always used.
> I currently have a single database configured to hold all 10 million entries. Is this an unrealistic expectation? Should I instead be planning on having multiple databases and using chaining/referrals to link them together? What is the optimal configuration for handling this number of entries?
For optimal performance you need everything to reside in cache, and that will require a 64-bit system. (10 million entries -> ~24 bits. If each entry is only 1KB, that means you need 34 bits of user memory space, too much for a 32-bit system that typically only gives you 30 or 31 bits of user memory space.)
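The back-of-the-envelope sizing above can be checked with a quick sketch. The 1 KB-per-entry figure is the assumption from the post, not a measured value:

```python
import math

entries = 10_000_000   # 10 million entries
entry_bytes = 1024     # assumed 1 KB per entry, as in the post

# Bits needed just to address the entries, and to address the raw data.
entry_bits = math.ceil(math.log2(entries))                # 24
data_bits = math.ceil(math.log2(entries * entry_bytes))   # 34

print(entry_bits, data_bits)                 # 24 34
print(entries * entry_bytes / 2**30)         # ~9.5 GiB of entry data alone
```

Since a 32-bit process typically has only 2-3 GB (30-31 bits) of usable address space, roughly 10 GiB of cached data only fits in a 64-bit address space.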
openldap-software@openldap.org