Hi all,
I am also doing some testing with mdb at the moment, and my initial testing indicates that mdb is faster for reads but slower for writes than bdb. I am using openldap 2.4.32 on CentOS 6, on a 24-core box with 132 GB RAM.
My test directory has ~3 million entries, and I loaded it into mdb using slapadd, which took over 2 days (by comparison, the same load into bdb takes 2-3 hours). (As an aside, I initially tried using 2.4.31, but slapadd crashed after having loaded about 90% of the data, and this was repeatable.)
On disk the directory takes up ~13 GB for mdb and ~18 GB for bdb. The cache size for bdb is set to 63 GB in DB_CONFIG, and the maximum database size for mdb is set to 63 GB.
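In slapd.conf/DB_CONFIG terms, those sizes correspond to roughly the following (abbreviated sketch; only the size directives are shown, and the bdb cache region split may need adjusting):

    # DB_CONFIG for the bdb database -- 63 GB cache (gbytes bytes ncache)
    set_cachesize 63 0 1

    # slapd.conf, mdb database section -- map size in bytes (63 GB)
    maxsize 67645734912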
Adding 120,000 entries from an LDIF file using ldapadd took ~10 minutes for mdb and ~2 minutes for bdb. Deleting 120,000 entries using ldapdelete took ~10 minutes for mdb and ~3 minutes for bdb.
A search returning ~300,000 DNs took ~6 seconds for mdb; for bdb it took ~6 minutes from a cold start of slapd and ~35 seconds thereafter.
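The commands were roughly of this shape (bind DN, credentials, and file names below are placeholders, not my real values):

    ldapadd -x -D "cn=Manager,dc=example,dc=com" -w secret -f add-120k.ldif
    ldapdelete -x -D "cn=Manager,dc=example,dc=com" -w secret -f delete-dns.txt
    # "1.1" requests no attributes, so only the DNs are returned
    ldapsearch -x -b "dc=example,dc=com" "(objectClass=*)" 1.1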
Chris
Chris Card wrote:
Hi all,
I am also doing some testing with mdb at the moment, and my initial testing
indicates that mdb is faster for reads but slower for writes than bdb.
This is generally true, and is documented in the papers I've published on the subject.
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
(as an aside, I initially tried using 2.4.31, but slapadd crashed after
having loaded about 90% of the data, and this was repeatable).
This is not normal. With slapadd -q MDB is faster than BDB, assuming you're using a decent filesystem and sensible mount options. JFS and EXT2 do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
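I.e., the bulk load should look something like this (config path, suffix, and LDIF name are placeholders):

    # -q enables quick mode: fewer consistency checks, asynchronous writes
    slapadd -q -f /etc/openldap/slapd.conf -b "dc=example,dc=com" -l data.ldif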
On disk the directory takes up ~ 13 Gb for mdb and ~ 18Gb for bdb. Cache size for bdb is set to 63 Gb in DB_CONFIG. Directory size for mdb is set to 63 Gb
The relative sizes sound right.
Adding 120000 entries from an ldif file using ldapadd took ~ 10 minutes for
mdb and ~ 2 minutes for bdb.
Deleting 120000 entries using ldapdelete took ~ 10 minutes for mdb and ~ 3
minutes for bdb
Again, this is filesystem dependent, but basically right. If you're doing a purely serial test, however, you're not seeing the true picture. With multiple concurrent writers, BDB's throughput will degrade (due to transaction deadlocks) while MDB's will remain constant. Again, this is documented in the published papers on MDB.
MDB's write speed on my tests generally remains constant with a standard deviation of essentially zero under all load conditions. BDB's write speed will degrade under heavy load (to slower than MDB) and the standard deviation widens as load increases.
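A rough way to see this yourself is to run several write clients at once instead of one serial stream, e.g. (credentials and chunk files are placeholders):

    # four concurrent ldapadd clients, each loading its own chunk of entries
    for i in 1 2 3 4; do
      ldapadd -x -D "cn=Manager,dc=example,dc=com" -w secret -f chunk$i.ldif &
    done
    wait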
A search returning ~ 300000 DNs took ~ 6 seconds for mdb and for bdb it took
~ 6 minutes from a cold start of slapd and then ~ 35 seconds.
Sounds right. MDB's cold start overhead is measurable but essentially zero.
Chris
Chris Card wrote:
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
(as an aside, I initially tried using 2.4.31, but slapadd crashed after
having loaded about 90% of the data, and this was repeatable).
This is not normal. With slapadd -q MDB is faster than BDB assuming you're using a decent filesystem and sensible mount options. JFS, EXT2, do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Chris
--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
Chris Card wrote:
Chris Card wrote:
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB assuming you're using a decent filesystem and sensible mount options. JFS, EXT2, do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Your system has enough RAM to hold the entire DB in the filesystem cache. The speed you're reporting indicates that it's not doing so. As another sanity check, look at slapadd -q with the DB on a tmpfs. With correct FS cache settings, the performance delta between tmpfs and a real disk should only be a few percent.
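Something like this is enough for that check (mount point and size are placeholders; point the database "directory" directive at the tmpfs):

    mkdir -p /mnt/ldap-tmpfs
    mount -t tmpfs -o size=20g tmpfs /mnt/ldap-tmpfs
    # with "directory /mnt/ldap-tmpfs" in the mdb database section of slapd.conf
    slapadd -q -f /etc/openldap/slapd.conf -l data.ldif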
The important sysctl settings on a Linux system are vm.dirty_writeback_centisecs, vm.dirty_expire_centisecs, vm.dirty_ratio, and vm.dirty_background_ratio.
Note that ext3/ext4 have their own writeback timer, which overrides the sysctl settings, so you need to set theirs at mount time.
The defaults for these settings tend to be low on most Linux systems; the tuning is aimed at machines that are essentially memory-starved, so they flush the caches before they get very full. The net effect is that even though some userland code performs asynchronous writes, those writes are effectively synchronous because the OS flushes them out almost immediately. slapadd -q attempts to perform asynchronous writes, but that is nullified by these settings.
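For example (the particular numbers and the mount point here are only illustrative; the point is to let dirty pages accumulate in RAM instead of being flushed immediately):

    sysctl -w vm.dirty_ratio=80
    sysctl -w vm.dirty_background_ratio=60
    sysctl -w vm.dirty_expire_centisecs=30000
    sysctl -w vm.dirty_writeback_centisecs=30000
    # ext3/ext4 use their own writeback timer instead of the sysctls,
    # so set it at mount time (commit= interval, in seconds), e.g.:
    mount -o remount,commit=300 /var/lib/ldap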
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB assuming you're using a decent filesystem and sensible mount options. JFS, EXT2, do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Oh ****! You're quite right, I managed to lose the -q from the slapadd command when copy/pasting from a script. I'll try running it again with -q.
Do you have an ETA for improvements to mdb write performance?
Chris
On Aug 24, 2012, at 7:19 AM, Chris Card ctcard@hotmail.com wrote:
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB assuming you're using a decent filesystem and sensible mount options. JFS, EXT2, do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Oh ****! You're quite right, I managed to lose the -q from the slapadd command when copy/pasting from a script. I'll try running it again with -q.
Do you have an ETA for improvements to mdb write performance?
Chris
In my testing, only a setting of "2" for tool-threads made sense when using MDB with slapadd; any higher value degraded performance. Also, current RE24 has one write-performance change from Howard beyond 2.4.32, although I have not yet tested its impact. It made some difference for Howard in his tests.
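For example (config path and LDIF name are placeholders):

    # slapd.conf, global section
    tool-threads 2

and then the usual quick-mode load:

    slapadd -q -f /etc/openldap/slapd.conf -l data.ldif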
--Quanah
Chris Card wrote:
I am using openldap 2.4.32 on centos 6, on a 24 core box with 132 Gb RAM.
My test directory has ~ 3 million entries, and I loaded it into mdb using
slapadd which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB assuming you're using a decent filesystem and sensible mount options. JFS, EXT2, do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Oh ****! You're quite right, I managed to lose the -q from the slapadd command when copy/pasting from a script. I'll try running it again with -q.
Do you have an ETA for improvements to mdb write performance?
Not at this point. There are several approaches to test; most of them will probably be dead ends.
Hi Chris and Howard,
Please share the slapd.conf for MDB with which you did the performance testing.
BR's, Haroon
On Fri, Aug 24, 2012 at 11:46 PM, Howard Chu hyc@symas.com wrote:
Chris Card wrote:
I am using openldap 2.4.32 on CentOS 6, on a 24-core box with 132 GB RAM.
My test directory has ~3 million entries, and I loaded it into mdb using slapadd, which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB, assuming you're using a decent filesystem and sensible mount options. JFS and EXT2 do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Oh ****! You're quite right, I managed to lose the -q from the slapadd command when copy/pasting from a script. I'll try running it again with -q.
Do you have an ETA for improvements to mdb write performance?
Not at this point. There are several approaches to test; most of them will probably be dead ends.
--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
Dear Experts,
Today we tried running the load with the schema below. We added about 0.6M entries to the DB. Our performance is still severely poor (10 TPS).
Can anybody review our slapd.conf file and point out where we are wrong? Is there any other configuration we have missed?
Our system is a 64-bit, 12-core RHEL machine with a total of 49418952 kB of memory.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example Company
dc: example

dn: cn=Manager0,dc=example,dc=com
objectClass: organizationalRole
cn: Manager0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- Thanks and Regards
Yajuvendra
Insanity: doing the same thing over and over again and expecting different results. - Albert Einstein
On Sat, Aug 25, 2012 at 2:37 PM, aryan rawat aryanrawat24@gmail.com wrote:
Hi Chris and Howard,
Please share the slapd.conf for MDB with which you did the performance testing.
BR's, Haroon
On Fri, Aug 24, 2012 at 11:46 PM, Howard Chu hyc@symas.com wrote:
Chris Card wrote:
I am using openldap 2.4.32 on CentOS 6, on a 24-core box with 132 GB RAM.
My test directory has ~3 million entries, and I loaded it into mdb using slapadd, which took over 2 days (by comparison, the same load into bdb takes 2-3 hours).
This is not normal. With slapadd -q MDB is faster than BDB, assuming you're using a decent filesystem and sensible mount options. JFS and EXT2 do better than other filesystems in my tests. Very recent EXT4 may be better than EXT3 as well.
The filesystem is xfs, mounted as a drbd device (although at the moment the other half of the drbd pair is not configured, so it doesn't have to wait for synchronous writes across the network)
Sounds like you're not using slapadd -q. Either that, or your filesystem cache settings are way off.
Oh ****! You're quite right, I managed to lose the -q from the slapadd command when copy/pasting from a script. I'll try running it again with -q.
Do you have an ETA for improvements to mdb write performance?
Not at this point. There are several approaches to test; most of them will probably be dead ends.
--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
--On Tuesday, August 28, 2012 5:29 PM +0530 Yajuvendra Singh yajuvendra.singh@gmail.com wrote:
Dear Experts,
Today we tried running the load with the schema below. We added about 0.6M entries to the DB. Our performance is still severely poor (10 TPS).
Can anybody review our slapd.conf file and point out where we are wrong? Is there any other configuration we have missed?
You don't provide any useful or relevant information, so it is impossible to help you.
What version of OpenLDAP are you using? What type of disk? What type of file system? What is your *exact* slapadd command? etc.
There is virtually no tuning involved with MDB, although I strongly recommend you read Howard's notes about the writeback settings for EXT4 etc. that he made in a recent post to -technical about MDB.
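To give a sense of how little there is, a bare-bones back-mdb database section looks roughly like this (suffix, rootdn/rootpw, directory, map size, and indexing below are placeholders; size maxsize well above your expected data):

    database    mdb
    suffix      "dc=example,dc=com"
    rootdn      "cn=Manager,dc=example,dc=com"
    rootpw      secret
    directory   /var/lib/ldap
    maxsize     10737418240    # 10 GB map, in bytes
    index       objectClass eq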
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration