> When using MDB_VL32 to access a 64bit DB on a 32bit CPU, the DB
> may not fit into a single mmap.
Long story short: I did not understand MDB_VL32 in the first place. I
thought it would just enable 64 bit sizes.
Pointless, if you're still limited to 2GB. It would only eat up 4 extra bytes
of overhead in every internal integer.
> Ideally, you should quit using obsolete 32bit CPUs.
Of course. Think about mobile and IoT, however: there are still 32 bit
CPUs there, and it's not uncommon to even run 64 bit ARM CPUs in 32 bit
mode to reduce binary size etc.
I'd like to propose a different approach to 64 bits on 32 bit CPUs;
something like this:
typedef uint64_t mdb_size_t;
Why? MDB_VL32 adds complexity, and it would take additional coding and
testing effort to get it production-ready.
If you had used the API as intended, with txn_reset/renew as I originally
suggested, there would be no problem.
I think MDB_SIZE64 might be a better trade-off. Of course this would only
work with smaller DB files (< 2GB/4GB). But I don't see a big use case for
32 bit CPUs working with huge files anyway.
The reason MDB_VL32 exists at all is because of exactly this - people wanted
to use 32 bit CPUs with 64 bit databases.
The main reason I see
to use 64 bits consistently across CPU architectures is binary compatibility.
This is a big thing for Android (shipping apps with prefilled DBs). It also
enables taking DB files from 32 bit devices and opening them on the desktop.
What do you think?
Bad idea. MDB_VL32 provides *full* compatibility between 32bit and 64bit CPUs
*regardless of filesize*. Your suggestion will only cause confusion, as people
wonder why their DB dies after reaching 2GB in size. It offers none of the
benefits of 64bit capability, while still paying all of the cost in terms of
space overhead.
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/