I was reading through Google's leveldb material and found their benchmark page:
http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
I adapted their SQLite test driver for MDB; the adapted code is attached.
On my laptop I get:
violino:/home/software/leveldb> ./db_bench_mdb
MDB: version MDB 0.9.0: ("September 1, 2011")
Date: Mon Jul 2 07:17:09 2012
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
------------------------------------------------
fillseq        :      9.740 micros/op;   11.4 MB/s
fillseqsync    :      8.182 micros/op;   13.5 MB/s (10000 ops)
fillseqbatch   :      0.502 micros/op;  220.5 MB/s
fillrandom     :     11.558 micros/op;    9.6 MB/s
fillrandint    :      9.593 micros/op;   10.3 MB/s
fillrandibatch :      6.288 micros/op;   15.8 MB/s
fillrandsync   :      8.399 micros/op;   13.2 MB/s (10000 ops)
fillrandbatch  :      7.206 micros/op;   15.4 MB/s
overwrite      :     14.253 micros/op;    7.8 MB/s
overwritebatch :      9.075 micros/op;   12.2 MB/s
readrandom     :      0.261 micros/op;
readseq        :      0.079 micros/op; 1392.5 MB/s
readreverse    :      0.085 micros/op; 1301.9 MB/s
fillrand100K   :    106.695 micros/op;  894.0 MB/s (1000 ops)
fillseq100K    :     93.626 micros/op; 1018.8 MB/s (1000 ops)
readseq100K    :      0.095 micros/op; 1005185.9 MB/s
readrand100K   :      0.368 micros/op;
Compared to LevelDB:
violino:/home/software/leveldb> ./db_bench
LevelDB: version 1.5
Date: Mon Jul 2 07:18:35 2012
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)
WARNING: Snappy compression is not enabled
------------------------------------------------
fillseq      :      1.752 micros/op;  63.1 MB/s
fillsync     :     13.877 micros/op;   8.0 MB/s (1000 ops)
fillrandom   :      2.836 micros/op;  39.0 MB/s
overwrite    :      3.723 micros/op;  29.7 MB/s
readrandom   :      5.390 micros/op; (1000000 of 1000000 found)
readrandom   :      4.811 micros/op; (1000000 of 1000000 found)
readseq      :      0.228 micros/op; 485.1 MB/s
readreverse  :      0.520 micros/op; 212.9 MB/s
compact      : 439250.000 micros/op;
readrandom   :      3.269 micros/op; (1000000 of 1000000 found)
readseq      :      0.197 micros/op; 560.4 MB/s
readreverse  :      0.438 micros/op; 252.5 MB/s
fill100K     :    504.147 micros/op; 189.2 MB/s (1000 ops)
crc32c       :      4.134 micros/op; 944.9 MB/s (4K per op)
snappycomp   :   6863.000 micros/op; (snappy failure)
snappyuncomp :   8145.000 micros/op; (snappy failure)
acquireload  :      0.439 micros/op; (each op is 1000 loads)
Interestingly enough, MDB wins on one or two write tests. It clearly wins on
all of the read tests. MDB databases don't require compaction, so that's
another win. MDB doesn't do compression, so those tests are disabled.
I haven't duplicated all of the test scenarios described on the web page yet;
you can do that yourself with the attached code. It's pretty clear that
nothing else even begins to approach MDB's read speed.
MDB's sequential write speed is dominated by the memcpy operations required
for copy-on-write page updates. There's not much that can be done to eliminate
that, besides batching writes. For random writes, the memcmp calls in the key
comparisons become more of an issue. The fillrandi* tests use an integer key
instead of a string key, to show the difference due to key-comparison
overhead.
For synchronous writes, MDB is also faster, because it doesn't need to
synchronously write a separate transaction logfile.
--
-- Howard Chu
CTO, Symas Corp.
http://www.symas.com
Director, Highland Sun
http://highlandsun.com/hyc/
Chief Architect, OpenLDAP
http://www.openldap.org/project/