Hi,

Thank you for your reply.
Actually, I have only one CPU, so I don't think I can use that option.

Regards,
Ashok Kumar

On Thu, Jun 11, 2009 at 10:18 PM, Bill MacAllister <whm@stanford.edu> wrote:
--On Thursday, June 11, 2009 21:11:40 +0530 ashok kumar <ashok.kumar.iitkgp@gmail.com> wrote:

Hi,
I am new to OpenLDAP and I am trying to use it for 50 million
records. I am using slapadd to add this data, but it is processing
only around 1 million records every 5 hours, so staging the full set
will take days. I will also have to update the database every 20 days
with about 10 million records. I would like to know what enhancements
I should make to my setup, given the following configuration. I am
using OpenLDAP 2.4.16 with Berkeley DB 4.5.20 on a 32-bit CentOS
Linux machine with 4 GB of RAM.
slapd.conf:

include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/xyz.schema

pidfile /usr/local/var/run/slapd.pid
argsfile /usr/local/var/run/slapd.args

backend bdb

database bdb
suffix "o=sgi,c=us"
rootdn "o=sgi,c=us"
checkpoint 128 15

rootpw xyz

directory /usr/local/var/openldap-data

index objectClass eq
index attribute1 eq
index attribute2  eq

DB_CONFIG:
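# 1 GB + 256 MB of BDB cache in a single segment
# (arguments: gigabytes, additional bytes, number of segments)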
set_cachesize 1 268435456 1
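# Remove BDB transaction log files automatically once they are no
# longer needed.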
set_flags DB_LOG_AUTOREMOVE


Regards,
Ashok Kumar

First, you should set shm_key.  Setting it to any non-zero value disables memory-mapped files and turns on shared memory.
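For example, something like this in the database bdb section of slapd.conf (the key value 42 is arbitrary; any non-zero integer will do):

    # A non-zero shm_key makes back-bdb keep the BDB environment in a
    # shared memory region instead of memory-mapped files.
    shm_key 42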

Also, you will want to set tool-threads to match the number of CPUs that you have available.  In your case, since you only have 3 indexes, you won't see performance gains if you set it higher.
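Roughly, that would look like this in slapd.conf, followed by the offline load (the LDIF file name below is just a placeholder):

    # Global section of slapd.conf: one indexing thread per CPU,
    # e.g. on a 2-CPU machine (the default is 1).
    tool-threads 2

and then:

    slapadd -f /usr/local/etc/openldap/slapd.conf -l bulk.ldif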

Bill



--

Bill MacAllister, System Software Programmer
Unix Systems Group, Stanford University