Luke Kenneth Casson Leighton wrote:
hi all, and a special hello to howard, i had forgotten you're
Heya Luke, long time no see.
i've just discovered lmdb and its python bindings, and i am, to be
absolutely honest, completely astounded. that's for two reasons: the
first is how stupidly quick lmdb is, and secondly because bizarrely
although it walks all over the alternatives it isn't more widely
adopted. mariadb added leveldb last year, mongodb i believe are
looking at a leveldb port.
We're not exactly a high-powered marketing engine. The MariaDB guys are
morons; LevelDB is non-transactional and they've had to bend over backwards to
make it kinda-sorta work. We (Symas) had multiple conversations with them
about LMDB vs LevelDB but they never seemed to understand. They apparently
like DBs that pause for compaction/garbage collection in the middle of a
stream of transactions and they don't mind that LevelDB also lacks the I in ACID.
MongoDB - pathetic.
i have been looking at a *lot* of key-value database stores
to find the fastest possible one after realising that standard SQL and
NOSQL databases simply aren't fast enough. there is something called
structchunk which instead of storing the object in a leveldb just
stores an mmap file offset in the value and stores the actual object
in an mmap'd file... and it's *still* nowhere near as quick as lmdb.
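the structchunk trick can be sketched like this (a toy version only: a
plain dict stands in for leveldb, and the names MmapStore / put / get
are invented for illustration, not structchunk's actual API):

```python
import mmap
import os

class MmapStore:
    """Toy offset-in-KV store: the index maps key -> (offset, length)
    into one mmap'd file that holds the actual object bytes."""

    def __init__(self, path, size=1 << 20):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT)
        os.ftruncate(self.fd, size)          # reserve the backing file
        self.buf = mmap.mmap(self.fd, size)  # shared view of the file
        self.index = {}                      # stands in for the KV store
        self.tail = 0                        # next free byte (no bounds check: toy only)

    def put(self, key, value):
        off = self.tail
        self.buf[off:off + len(value)] = value
        self.index[key] = (off, len(value))
        self.tail += len(value)

    def get(self, key):
        off, length = self.index[key]
        return bytes(self.buf[off:off + length])
```

the point being that even with the payload kept out of the KV store
entirely, the extra indirection still has to beat lmdb's own
memory-mapped reads, and apparently it doesn't.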
Not surprising. Remember, I've been writing world's-fastest <whatevers> since
the 1980s. There is no other KV store that is full ACID and anywhere near as
small or as fast as LMDB.
so i am both impressed and puzzled :) i have created a debian RFP for
the python bindings, in case it helps.
can i suggest that anyone wishing to see that happen send a message
seconding the RFP: because as a general rule debian developers
do not like my blunt honesty and they tend to ignore my advice, so if
you would like to see py-lmdb packaged someone needs to speak up.
Since Debian is already packaging LMDB itself, I don't see any obstacles here.
i wrote some python evaluation code that stored 5,000 records with
8-byte keys and 100-byte values before doing a transaction commit: it
managed 900,000 records per second (which is ridiculously fast, even
for python). however: when i enabled append mode (on the cursor
put)... and yes i used the db stats to create a key that each time was
1 bigger lexicographically than all other keys... bizarrely things
*slowed down* ever so slightly - maybe about 3 to 5%.
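roughly what the loop looked like (reconstructed from the description
above, so the exact structure is an assumption, not my actual code):

```python
import struct

def next_key(n):
    """8-byte big-endian counter: key n+1 always sorts after key n,
    which is exactly the ordering that lmdb's append mode requires."""
    return struct.pack(">Q", n)

def run_benchmark(path, n_records=5000, append=False):
    # py-lmdb assumed installed; imported here so next_key() above
    # stays usable without it.
    import lmdb
    env = lmdb.open(path, map_size=1 << 28)
    value = b"x" * 100
    with env.begin(write=True) as txn:   # one commit for all 5,000 puts
        cur = txn.cursor()
        for i in range(n_records):
            # append=True tells lmdb the key sorts after every existing
            # key, letting it skip the b-tree positioning search
            cur.put(next_key(i), value, append=append)
    env.close()
```

the append=False and append=True runs are otherwise identical, which is
why the 3-5% slowdown with append enabled was such a surprise.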
what gives, there? the benchmarks show that this is supposed to be
faster (a *lot* faster) and that is simply not happening. is the
overhead from python so large that it wipes out the speed advantages?
No idea. I don't use python enough to have any insight there.
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/