Hi, I am suffering from slow slapadd for 100,000 subscribers. It takes 27 minutes, which is not what I was expecting.
My test environment and the OpenLDAP release: I have installed symas-openldap-silver-2.4.11.0.sun4u.pkg on a Sun Microsystems sun4v SPARC Enterprise T5220, memory size: 32640 Megabytes.
Is it reasonable that slapadd took 26 minutes for 100,000 entries (around 23 kB per subscriber, I guess) without indexes? I think it is very slow, isn't it? What have I done wrong? Could you please help me reduce this time? And how can I learn what the db_stat output means while doing slapadd?
Configuration information about my environment:
---------------------------------------------------------------------
symas-openldap-silver-2.4.11.0.sun4u.pkg installed on a Sun Microsystems
sun4v SPARC Enterprise T5220, memory size: 32640 Megabytes
My slapadd command:
-------------------------------------
/opt/symas/bin/sparcv9/slapadd -l /opt/symas/etc/openldap/ldifs/subscribersPart100.ldif -f /opt/symas/etc/openldap/slapd.conf -b o=sdftest -q
My DB_CONFIG file is:
---------------------------------------
set_cachesize 10 0 0
set_flags DB_LOG_AUTOREMOVE
set_flags DB_TXN_NOSYNC
set_lg_max 10485760
set_lg_bsize 2097152
set_lg_dir /opt/symas/etc/openldap/transactionlog
My slapd.conf file is:
---------------------------------
tool-threads 2
access to dn="" by * read
access to * by self write
    by users read
    by anonymous auth
database bdb
suffix "o=sdftest"
rootdn "cn=sdf,o=sdftest"
rootpw admin234
index default eq
index objectClass
index cn
directory /var/symas/openldap-data/
checkpoint 256000 60
One of the db_stat results:
root@typhoon:/# /opt/symas/bin/sparcv9/db_stat -h /var/symas/openldap-data/ -m
2GB 25MB  Total cache size
1         Number of caches
1         Maximum number of caches
2GB 25MB  Pool individual cache size
0         Maximum memory-mapped file size
0         Maximum open file descriptors
0         Maximum sequential buffer writes
0         Sleep after writing maximum sequential buffers
0         Requested pages mapped into the process' address space
91M       Requested pages found in the cache (99%)
294       Requested pages not found in the cache
307399    Pages created in the cache
294       Pages read into the cache
307935    Pages written from the cache to the backing file
31233     Clean pages forced from the cache
33201     Dirty pages forced from the cache
103701    Dirty pages written by trickle-sync thread
243256    Current total page count
243256    Current clean page count
0         Current dirty page count
262147    Number of hash buckets used for page location
90M       Total number of times hash chains searched for a page (90995964)
2         The longest hash chain searched for a page
120M      Total number of hash chain entries checked for page (120028164)
31        The number of hash bucket locks that required waiting (0%)
4         The maximum number of times any hash bucket lock was waited for (0%)
9         The number of region locks that required waiting (0%)
0         The number of buffers frozen
0         The number of buffers thawed
0         The number of frozen buffers freed
307713    The number of page allocations
257629    The number of hash buckets examined during allocations
60205     The maximum number of hash buckets examined for an allocation
64434     The number of pages examined during allocations
5844      The max number of pages examined for an allocation
21        Threads waited on page I/O
Pool File: id2entry.bdb
16384     Page size
0         Requested pages mapped into the process' address space
2999318   Requested pages found in the cache (99%)
3         Requested pages not found in the cache
136414    Pages created in the cache
3         Pages read into the cache
136659    Pages written from the cache to the backing file
Pool File: dn2id.bdb
4096      Page size
0         Requested pages mapped into the process' address space
53M       Requested pages found in the cache (99%)
2         Requested pages not found in the cache
168604    Pages created in the cache
2         Pages read into the cache
168607    Pages written from the cache to the backing file
Pool File: objectClass.bdb
4096      Page size
0         Requested pages mapped into the process' address space
25M       Requested pages found in the cache (99%)
2         Requested pages not found in the cache
694       Pages created in the cache
2         Pages read into the cache
695       Pages written from the cache to the backing file
Pool File: cn.bdb
4096      Page size
0         Requested pages mapped into the process' address space
9191165   Requested pages found in the cache (99%)
287       Requested pages not found in the cache
1687      Pages created in the cache
287       Pages read into the cache
1974      Pages written from the cache to the backing file
--------------------------------------
Hello, Elcin.
> Hi, I am suffering from slow slapadd for 100,000 subscribers. It takes 27 minutes, which is not what I was expecting.
We wouldn't normally expect slapadd for 100k entries to take that long either. Your case is a little different though; see below.
> My test environment and the OpenLDAP release: I have installed symas-openldap-silver-2.4.11.0.sun4u.pkg on a Sun Microsystems sun4v SPARC Enterprise T5220, memory size: 32640 Megabytes.
That sounds good so far.
> Is it reasonable that slapadd took 26 minutes for 100,000 entries (around 23 kB per subscriber, I guess) without indexes?
The difference you are seeing is from the 23kB per object; most directories consist of objects in the 2kB to 4kB range.
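If you want to confirm that 23kB figure rather than guess, a rough average from the LDIF itself is enough. This is just a quick sketch (not something from your setup), using the LDIF path from your slapadd command and counting records by their dn: lines:

# Rough average bytes per entry: total LDIF bytes divided by the number
# of "dn:" lines; folded and base64-encoded lines still count as raw
# bytes, so treat the result as an approximation.
awk '/^dn:/ { entries++ }
     { bytes += length($0) + 1 }
     END { if (entries) printf "%d entries, ~%d bytes/entry\n", entries, bytes / entries }' \
    /opt/symas/etc/openldap/ldifs/subscribersPart100.ldif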
If you look at your databases (the *.bdb files) using db_stat -d (be sure to use /opt/symas/db_stat), you can see the page sizes configured. OpenLDAP is currently hardcoded to use 16kB pages for id2entry, where all of the entry data is stored. Since your entries are larger, db_stat -d should show a significant number of overflow pages. BDB is much slower when your dataset is using large numbers of overflow pages.
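For example, something along these lines (using the binary and database paths from your configuration above) prints the btree statistics for the entry database; the page size and overflow page counters are the lines to look at:

# Per-database btree statistics for id2entry; the overflow page counts
# are what matter for ~23kB entries on 16kB pages.
/opt/symas/bin/sparcv9/db_stat -h /var/symas/openldap-data/ -d id2entry.bdb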
> I think it is very slow, isn't it?
What you are seeing is very slow, but expected given the tuning and your data set.
There are several possible workarounds that could greatly improve the performance you see. If you'd like to contact support@symas.com we can work with you more closely.
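Just to give a sense of one class of workaround, as a sketch rather than a recommendation: the 16kB id2entry page size mentioned above is set in the back-bdb source through BDB's DB->set_pagesize() call, so some deployments with very large entries rebuild slapd with a larger value there so entries of this size no longer spill onto overflow pages. You can locate the spot in an OpenLDAP 2.4 source tree with:

# Find where back-bdb sets the BDB page size (run from the top of the
# OpenLDAP source tree); the value used for id2entry is the 16kB figure
# mentioned above. Changing it means rebuilding and reloading, so talk
# to support before going down this road.
grep -rn "set_pagesize" servers/slapd/back-bdb/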
Matthew Backes
Symas Corporation
mbackes@symas.com
openldap-software@openldap.org