Hi,
On Saturday, 18. August 2012, Howard Chu wrote:
Peter Marschall wrote:
But this brought up a question: given an existing HDB database, is there a formula or something to calculate the 'maxsize' config option for MDB from the existing information?
In my testing, MDB typically uses about 60% as much space as HDB. Factor in however much future growth you anticipate and go from there.
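As a rough sketch only, one could apply that ~60% observation to the on-disk size of the existing HDB files and then multiply by a growth factor of one's own choosing (the directory path and the factor of 4 below are placeholders, not recommendations):

    # sum the on-disk size of the HDB database files (GNU du)
    HDB_BYTES=$(du -cb /var/lib/ldap/*.bdb | awk 'END { print $1 }')
    # apply the ~60% observation, then leave room to grow (4x is arbitrary)
    MAXSIZE=$(( HDB_BYTES * 60 / 100 * 4 ))
    echo "candidate maxsize: $MAXSIZE bytes"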
As an example, I have a test LDIF that's 558694630 bytes, containing 380836 entries. Looking at info from [m]db_stat, we can compare the number of pages used for each index:
                 hdb                          mdb
        branch    leaf  overflow    branch    leaf  overflow
dn2id      328    8097         0        67    7625         0
id2e       344  249856     59368       263   29681    293169
oc          11     154         0         1       3         0
uid       2487   26392         0        65   10895         0
(page sizes are normalized here; hdb id2entry uses 16K pages while all other databases use 4K pages)
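For reference, page counts like the ones above can be gathered with something along these lines (the directory path is just an example):

    # Berkeley DB (HDB): per-database btree statistics
    db_stat -h /var/lib/ldap -d id2entry.bdb
    # LMDB (MDB): statistics for all named databases in the environment
    mdb_stat -a /var/lib/ldap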
This is with:
    index objectclass eq
    index uid eq,sub
The dn2id, oc, and uid database formats are logically identical between hdb and mdb, so the difference in size is due to the difference in BDB and MDB. The id2entry database in mdb uses a slightly different encoding than hdb, so there are both library and backend format differences there.
As you can see, the more indexing you use, the bigger the difference between mdb and hdb.
Thanks for the explanation, Howard.
I may be a bit thick today, but I do not see how I can determine a minimal value for MDB's *maxsize* parameter from the values given above. (I do not want to waste memory ;-)
Shall I simply take the LDIF size as maxsize? Or the combined size of the files in the HDB database? ...
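Just to be clear about what I mean, the setting in question is the back-mdb maxsize directive, e.g. something like this in slapd.conf (the value below is purely a placeholder):

    database  mdb
    directory /var/lib/ldap
    # maximum size of the MDB database in bytes; placeholder value
    maxsize   10737418240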
Thanks in advance,
Peter