https://bugs.openldap.org/show_bug.cgi?id=9291
--- Comment #6 from Markus <markus@objectbox.io> ---
(In reply to Markus from comment #5)
Yes. There's a trade-off. Checking the roots can be done simply and in constant time. ...
Maybe it makes sense to balance this trade-off a bit. Let's say a branch page can reference up to around 40 page numbers. Checking the 4 root pages, with a total of around 160 page numbers, still seems quite doable without noticeable performance impact.
Going down another level might get noticeable with 160 pages (640 KB), but it still seems doable. Actually, the descent could be limited to the more recent meta tree, cutting that in half to 80 pages. That would give us up to 3200 additional page numbers to check for consistency. The performance penalty would be at least partially compensated by the fact that this "warms up" (caches) the top of the tree for future access.
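For illustration, here is a rough, self-contained sketch of what such a depth-limited check could look like. It is not a patch against mdb.c: branch_page, read_page and check_pgnos are hypothetical stand-ins for the real structures, and a real version would have to distinguish branch, leaf and overflow pages and read from the memory map. The idea is just to verify that every page number reachable within N levels of a root lies within the allocated range, so depth 0 over 4 roots costs 4 page reads and ~160 comparisons, and depth 1 grows to the 80-page / 3200-number figures above.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pgno_t;

/* Hypothetical, simplified branch page: up to ~40 child page numbers. */
#define MAX_REFS 40

typedef struct branch_page {
    pgno_t pgno;               /* this page's own number           */
    size_t nkeys;              /* number of child references used  */
    pgno_t children[MAX_REFS]; /* referenced page numbers          */
} branch_page;

/* Hypothetical page fetch; in LMDB this would come from the mmap. */
static const branch_page *read_page(const branch_page *pages, size_t npages,
                                    pgno_t pgno)
{
    for (size_t i = 0; i < npages; i++)
        if (pages[i].pgno == pgno)
            return &pages[i];
    return NULL;
}

/*
 * Verify that every page number reachable within `depth` levels below
 * `root` lies in the allocated range [2, last_pgno] (pages 0 and 1 are
 * the meta pages).  depth == 0 checks only the root's direct children;
 * depth == 1 additionally reads those children and checks their
 * references (the "going down another level" case).
 */
static bool check_pgnos(const branch_page *pages, size_t npages,
                        pgno_t root, pgno_t last_pgno, int depth)
{
    const branch_page *p = read_page(pages, npages, root);
    if (p == NULL)
        return false;

    for (size_t i = 0; i < p->nkeys; i++) {
        pgno_t child = p->children[i];
        if (child < 2 || child > last_pgno)
            return false;                       /* corrupt reference */
        if (depth > 0 &&
            !check_pgnos(pages, npages, child, last_pgno, depth - 1))
            return false;
    }
    return true;
}

int main(void)
{
    /* Toy "database": root page 2 referencing pages 3 and 4. */
    branch_page pages[] = {
        { .pgno = 2, .nkeys = 2, .children = { 3, 4 } },
        { .pgno = 3, .nkeys = 1, .children = { 5 } },
        { .pgno = 4, .nkeys = 1, .children = { 999 } },  /* out of range */
    };
    pgno_t last_pgno = 5;

    printf("depth 0: %s\n",
           check_pgnos(pages, 3, 2, last_pgno, 0) ? "ok" : "corrupt");
    printf("depth 1: %s\n",
           check_pgnos(pages, 3, 2, last_pgno, 1) ? "ok" : "corrupt");
    return 0;
}

Run against the toy data, the depth-0 pass reports "ok" while the depth-1 pass catches the out-of-range reference, which is roughly the extra coverage the deeper check would buy for its extra page reads.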
What do you think?