>>> Brent Bice bbice@sgi.com wrote on 18.09.2013 at 22:01 in message 523A068F.1000100@sgi.com:
I've started testing an LDAP server here using MDB and ran across a few caveats that might be of use to others looking into using it. But first off, let me say a hearty THANKS to anyone who's contributed to it. On this first OpenLDAP server I've converted over to MDB, it's *dramatically* faster, and it's definitely nice not to have to worry about setting up scripts to occasionally (carefully) commit/flush DB logs, etc.
One caveat that might be worth mentioning in release notes
somewhere... Not all implementations of memory-mapped I/O are created equal. I ran into this a long time back when I wrote a multi-threaded quicksort program for a friend who had to sort text files bigger than 10 gigs and didn't want to wait for the Unix sort command. :-) The program I banged together for him used memory-mapped I/O, and one of the things I found was that while Solaris would let me memory-map a file bigger than I had physical or virtual memory for, Linux wouldn't. It appeared that
I doubt that Solaris allows you to mmap() a file to an area larger than the virtual address space; however, you can mmap() a file area larger than RAM+swap when a demand-paging strategy is used. Once you start modifying the mapped pages, though, you may run out of memory, so think twice.
some versions of the 2.x kernels wouldn't let me memory-map a file bigger than the total *virtual* memory size, and I think MDB is running into the same limitation. On a SLES11 system with the 2.6.32.12 kernel, for instance, I can't specify a maxsize bigger than the total of my physical memory and swap space. So just something to keep in mind if
Also be aware that the SLES11 SP2 kernel update released some weeks ago strengthened the checks for mmap()ed areas: I had a program that started to fail when I tried to change one byte past the end of the file, while this had worked with the previous kernel.
you're using MDB on the 2.x kernels - you may need a big swap area, even though the memory-mapped I/O routines in the kernel seem to be smart enough to avoid swapping like mad.
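If you want to check how your own kernel behaves, a minimal sketch along these lines shows whether a mapping larger than RAM+swap is accepted. This isn't MDB code, and the /tmp path and the 64 GiB figure are only placeholders:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Size to test -- pick something bigger than your RAM+swap. */
        off_t size = 64LL * 1024 * 1024 * 1024;   /* 64 GiB */

        /* A sparse file of that size costs almost no disk space. */
        int fd = open("/tmp/bigmap.test", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, size) < 0) {
            perror("open/ftruncate");
            return 1;
        }

        /* Try a shared mapping of the whole file; drop PROT_WRITE to
         * test a read-only map as well. */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");        /* older 2.x kernels may refuse here */
        } else {
            printf("mapped %lld bytes OK\n", (long long)size);
            munmap(p, size);
        }

        close(fd);
        unlink("/tmp/bigmap.test");
        return 0;
    }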
I'd like to object: AFAIR, MDB uses mmap()ed areas in a strictly read-only fashion, so the backing store is the original file, which is demand-paged. When data is written with write(), the system dirties buffers in real RAM that are eventually written back to the file blocks. I see no path where dirty buffers should be swapped unless the mapping is PRIVATE.
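To illustrate the split being described here - a read-only map for lookups plus write() for updates - a minimal sketch (not LMDB's actual code) might look like this:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDWR);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror("open/fstat"); return 1; }

        /* Read-only shared mapping: clean pages are backed by the file
         * itself, so the kernel can drop and re-read them on demand --
         * they never have to be written to swap. */
        char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        /* Reads simply fault pages in from the file. */
        printf("first byte: 0x%02x\n", (unsigned char)map[0]);

        /* Updates go through pwrite(), dirtying page-cache buffers that
         * the kernel eventually flushes back to the file blocks. */
        char b = map[0];
        if (pwrite(fd, &b, 1, 0) != 1) perror("pwrite");

        munmap(map, st.st_size);
        close(fd);
        return 0;
    }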
On a newish Ubuntu system with a 3.5 kernel this doesn't seem to be an issue - tell OpenLDAP to use whatever maxsize you want and it just works. :-)
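For reference, the limit under discussion is the back-mdb maxsize directive in slapd.conf (olcDbMaxSize in the cn=config equivalent). The suffix, directory, and the 10 GiB value below are only placeholders:

    database    mdb
    suffix      "dc=example,dc=com"
    directory   /var/lib/ldap
    # Upper bound on the memory map, and therefore on the database,
    # in bytes.  10 GiB here; use whatever your kernel will accept.
    maxsize     10737418240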
I'd also only use MDB on a 64-bit Linux system. One of the other
headaches I remember running into with memory-mapped I/O was adding in support for 64-bit I/O on 32-bit systems. Best to avoid that whole mess and just use a 64-bit OS in the first place.
For 32-bit, remember that the threads' stacks also consume virtual address space (each thread typically reserves several megabytes), so the maximum database size may be significantly below 4 GB.
Lastly... At the risk of making Howard and Quanah cringe... :-) The
OpenLDAP DB I've been testing this with is the back-end to an email tracking tool I set up several years ago. More as an excuse to edjimicate myself on the Java API for LDAP than anything else, I wrote a quick bit of Java that watches postfix and sendmail logs and writes pertinent bits of info into an LDAP database, and a few PHP scripts to then query that database for things like to/from addresses, queue IDs, and message IDs. 'Makes it easy for junior admins to quickly search through gigabytes of logs to see what path an email took to get from point A to point B, who all received it (after it went through one or more list servers and a few aliases got de-ref'd, etc.).
Yeah, it's an utter abuse of LDAP, which is supposed to be
write-rarely and read-mostly, especially as our postfix relays handle anywhere from 1 to 10 messages per second on average. :-) But what the heck, it works fine and was a fun weekend project. It's also served as a way to stress-test new versions of OpenLDAP before I deploy them elsewhere. :-)
Anyway, thanks again to everyone who contributed to MDB. It's lots
faster than BerkeleyDB in all of my testing so far. 'Looking forward to gradually shifting more of my LDAP servers over to it.
Brent