When using MDB_VL32 to access a 64-bit DB on a 32-bit CPU, the DB cannot fit into a single mmap. [...]
Long story short: I did not understand MDB_VL32 in the first place. I thought it would just enable 64-bit sizes.
Ideally, you should quit using obsolete 32-bit CPUs.
Of course. But think about mobile and IoT: 32-bit CPUs are still common there, and it's not unusual to run even 64-bit ARM CPUs in 32-bit mode to reduce binary size, etc.
I'd like to propose a different approach to 64 bits on 32-bit CPUs; something like this:

    #ifdef MDB_SIZE64
    typedef uint64_t mdb_size_t;
    ...
Why? MDB_VL32 adds complexity, and it would take additional coding and testing effort to get it production-ready.
I think MDB_SIZE64 might be a better trade-off. Of course, this would only work with smaller DB files (< 2 GB/4 GB), but I don't see a big use case for 32-bit CPUs working with huge files anyway. The main reason I see for using 64-bit sizes consistently across CPU architectures is binary compatibility of the DB file format. This is a big thing for Android (shipping apps with prefilled DBs). It also makes it possible to take DB files from 32-bit devices and open them on the desktop, etc.
What do you think?
Thanks, Markus
PS.: Here's the full patch (adding MDB_SIZE64), which runs fine at least with our test suite:
lmdb.h
@@ -187,7 +187,15 @@
 # define MDB_FMT_Z	"z"	/**< printf/scanf format modifier for size_t */
 #endif
 
-#ifndef MDB_VL32
+#if defined(MDB_VL32) && defined(MDB_SIZE64)
+#error either define MDB_VL32 OR MDB_SIZE64
+#endif
+#ifdef MDB_SIZE64
+typedef uint64_t mdb_size_t;
+# define MDB_SIZE_MAX	SIZE_MAX	/**< max #mdb_size_t */
+# define MDB_PRIy(t)	PRI##t##64
+# define MDB_SCNy(t)	SCN##t##64
+#elif !defined(MDB_VL32)
 /** Unsigned type used for mapsize, entry counts and page/transaction IDs.
  *
  * It is normally size_t, hence the name. Defining MDB_VL32 makes it

(Note: the original paste had a stray `)` after `!defined(MDB_VL32)`, which I've dropped here.)