https://bugs.openldap.org/show_bug.cgi?id=9211
Bug ID: 9211
Summary: Relax control is not consistently access-restricted
Product: OpenLDAP
Version: 2.4.49
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ryan(a)openldap.org
Target Milestone: ---
The following operations can be performed by anyone having 'write' access (not
even 'manage') using the Relax control:
- modifying/replacing structural objectClass
- adding/modifying OBSOLETE attributes
Some operations are correctly restricted:
- adding/modifying NO-USER-MODIFICATION attributes marked as manageable
(Modification of non-conformant objects doesn't appear to be implemented at
all.)
In the absence of ACLs for controls, I'm of the opinion that all use of the
Relax control should require manage access. The Relax draft clearly and
repeatedly discusses its use cases in terms of directory _administrators_
temporarily relaxing constraints in order to accomplish a specific task.
--
You are receiving this mail because:
You are on the CC list for the bug.
https://bugs.openldap.org/show_bug.cgi?id=9204
Bug ID: 9204
Summary: slapo-constraint allows anyone to apply Relax control
Product: OpenLDAP
Version: 2.4.49
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: ryan(a)openldap.org
Target Milestone: ---
slapo-constraint doesn't limit who can use the Relax control, beyond the global
limits applied by slapd. In practice, for many modifications this means any
configured constraints are advisory only.
In my opinion this should be considered a bug, in design if not in implementation.
I expect many admins would not read the man page closely enough to realize that
the behaviour technically adheres to the letter of what is written there.
Either slapd should require manage privileges for the Relax control globally,
or slapo-constraint should perform a check for manage privilege itself, like
slapo-unique does.
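A check along the lines of what slapo-unique performs might look roughly like the
following pseudocode-style sketch (names modeled on slapd internals such as
get_relax() and access_allowed(); this is illustrative only, not a tested patch):

```c
/* Pseudocode sketch (modeled on slapo-unique): refuse the Relax
   control unless the requester has manage access to the entry.
   Names follow slapd internals but this is illustrative only. */
if ( get_relax( op ) ) {
    if ( !access_allowed( op, target_entry, slap_schema.si_ad_entry,
                          NULL, ACL_MANAGE, NULL ) ) {
        send_ldap_error( op, rs, LDAP_INSUFFICIENT_ACCESS,
                         "relax control requires manage privileges" );
        return rs->sr_err;
    }
}
```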
Quoting ando in https://bugs.openldap.org/show_bug.cgi?id=5705#c4:
> Well, a user with "manage" privileges on related data could bypass
> constraints enforced by slapo-constraint(5) by using the "relax"
> control. The rationale is that a user with manage privileges could be
> able to repair an entry that needs to violate a constraint for good
> reasons. Note that the user:
>
> - must have enough privileges to do it (manage)
>
> - must inform the DSA that intends to violate the constraint (by using
> the control)
but such privileges are currently not being required.
https://bugs.openldap.org/show_bug.cgi?id=9291
Issue ID: 9291
Summary: Detection of corrupted database files
Product: LMDB
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: markus(a)objectbox.io
Target Milestone: ---
Let's assume we have to deal with a corrupted database for whatever reason
(e.g. broken hardware or file system). Current behavior seems to be mostly
undefined, which is understandable as it's not known what is broken (e.g. there
are no checksums).
For example, I'm seeing a SIGBUS in mdb_page_touch because the cursor's top
page (mp) is pointing to invalid memory (0x7f99cf004000) during a commit:
mdb_page_touch mdb.c:2772
mdb_page_search mdb.c:6595
mdb_freelist_save mdb.c:3575
mdb_txn_commit mdb.c:4060
Cursor data at that point: mc_snum = 1, mc_top = 0; mc_ki[0] = 0
A SIGBUS is troublesome as it crashes the process, and I wonder if there are
other ways to detect such inconsistencies. If that were possible, there could be
user-specific handling in place; e.g. a user might start a new database file.
This issue was reported by our users, who also provided DB files:
https://github.com/objectbox/objectbox-java/issues/859
I did not find many consistency checks besides MDB_PAGE_NOTFOUND and
MDB_CORRUPTED. Also, I think there is currently no way to thoroughly check a DB
file (i.e. nothing like fsck for the DB file)?
My first idea other than checksums was to walk the branch pages from the root
and check whether the referenced pages are within reasonable bounds, and also
check the page content (e.g. nodes, flags). Additionally (optionally?), it
should be possible to check that the keys are actually sorted.
So, it boils down to 3 points in summary:
1.) If there is no way to check the DB file for consistency yet(?), which
approach do you think would make sense? There might be two modes: a thorough
check through all data, and a quick check that does not take long and could
e.g. be done when opening the DB. The goal is to avoid process crashes and let
users handle the situation.
2.) In general, is it possible to add more consistency checks to regular DB
operations?
3.) Could the particular situation (for which I provided the stack trace) be
detected (e.g. is mc_ki[0] = 0 legal here?)
I'd be happy to provide a patch if you provide some direction where you want to
take that.
https://bugs.openldap.org/show_bug.cgi?id=9388
Issue ID: 9388
Summary: mdb_stat for DupSort DBI shows incorrect data
Product: LMDB
Version: 0.9.26
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: AskAlexSharov(a)gmail.com
Target Milestone: ---
It doesn't include the pages used for values.
https://bugs.openldap.org/show_bug.cgi?id=9223
Bug ID: 9223
Summary: Add support for incremental backup
Product: LMDB
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
For LMDB 1.0, add support for incremental backups
https://bugs.openldap.org/show_bug.cgi?id=9360
Issue ID: 9360
Summary: MDB_BAD_TXN: Transaction must abort, has a child, or
is invalid
Product: LMDB
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: spam(a)markandruth.co.uk
Target Milestone: ---
I have 2 Python scripts writing to a database (lmdb 0.9.26, py-lmdb 0.98) and
5-10 long-running Lua processes (using the lightningmdb module, which uses lmdb
0.9.22) serving queries from the database.
The database seems fine, not corrupted, and the Python writes keep working the
whole time. But periodically (perhaps 10-20% of the time), in a way I am unable
to reliably reproduce, when a Lua process starts up, every query it issues fails
in txn dbi_open with "MDB_BAD_TXN: Transaction must abort, has a child, or is
invalid". Directly restarting the processes does not fix the issue; however,
stopping Lua+Python and then starting again after a 5-20s wait usually does.
This has been reproduced on multiple servers, but I'm at a loss as to how to
debug it any further.
https://bugs.openldap.org/show_bug.cgi?id=9208
Bug ID: 9208
Summary: LMDB feature request: variant of mdb_env_copy{,fd2}
that takes transaction as parameter
Product: LMDB
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: github(a)nicwatson.org
Target Milestone: ---
The mdb_env_copy* functions create a read transaction themselves to run the
backup on. New variants of these functions (one for mdb_env_copy2 and one for
mdb_env_copyfd2) would have a transaction parameter. This transaction would be
used instead of creating a new transaction.
Application code could use these new functions to synchronize consistent live
backups across multiple LMDB instances (potentially across multiple hosts).
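The proposed variants might have signatures along these lines (the names
mdb_env_copy3/mdb_env_copyfd3 are hypothetical and not part of the current
LMDB API):

```c
/* Hypothetical prototypes -- not part of the current LMDB API. */
int mdb_env_copy3(MDB_env *env, const char *path,
                  unsigned int flags, MDB_txn *txn);
int mdb_env_copyfd3(MDB_env *env, mdb_filehandle_t fd,
                    unsigned int flags, MDB_txn *txn);
```

An application could then begin one read transaction per environment at the
same logical point in time and pass each to its copy call, yielding mutually
consistent backups.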
https://bugs.openldap.org/show_bug.cgi?id=9397
Issue ID: 9397
Summary: LMDB: A second process opening a file with
MDB_WRITEMAP can cause the first to SIGBUS
Product: LMDB
Version: 0.9.26
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: github(a)nicwatson.org
Target Milestone: ---
Created attachment 780
--> https://bugs.openldap.org/attachment.cgi?id=780&action=edit
Full reproduction of SIGBUS MDB_WRITEMAP issue (works on Linux only)
The fundamental problem is that a ftruncate() on Linux that makes a file
smaller will cause accesses past the new end of the file to SIGBUS (see the
mmap man page).
The sequence that causes a SIGBUS involves two processes.
1. The first process opens a new LMDB file with MDB_WRITEMAP.
2. The second process opens the same LMDB file with MDB_WRITEMAP and with an
explicit map_size smaller than the first process's map size.
* This causes an ftruncate that makes the underlying file *smaller*.
3. (Optional) The second process closes the environment and exits.
4. The first process opens a write transaction and writes a bunch of data.
5. The first process commits the transaction. This causes a memory read from
the mapped memory that's now past the end of the file. On Linux, this triggers
a SIGBUS.
Attached is code that fully reproduces the problem on Linux.
The most straightforward solution is to only allow ftruncate to *reduce* the
file size when the opening process is the environment's only user. Another
possibility is to check the file size, and ftruncate if necessary, every time a
write transaction is opened. A third possibility is to catch the SIGBUS signal.
Repro note: I used clone() to create the subprocess to most straightforwardly
demonstrate that the problem is not due to inherited file descriptors. The
problem still manifests when the processes are completely independent.
https://bugs.openldap.org/show_bug.cgi?id=9207
Bug ID: 9207
Summary: Remove Moznss compatibility layer
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
For the 2.5 release, remove the MozNSS compatibility layer.
https://bugs.openldap.org/show_bug.cgi?id=9316
Issue ID: 9316
Summary: performance issue when writing a high number of large
objects
Product: LMDB
Version: 0.9.24
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: JGabler(a)univa.com
Target Milestone: ---
Created attachment 755
--> https://bugs.openldap.org/attachment.cgi?id=755&action=edit
lmdb performance test reproducing the issue
When writing a high number of big objects, we see extreme variation in
performance, from very fast to extremely slow.
In the test scenario we write 10 chunks of 10,000 "jobs" (some 10 kB each) and
their corresponding "job scripts" (some 40 kB each), 200,000 objects in total.
Then we delete all objects.
We do 10 iterations of this scenario.
When running this scenario as part of Univa Grid Engine with LMDB as database
backend we get the following performance values (rows are the iteration,
columns the chunk of jobs):
Iteration       0        1        2        3        4        5        6        7        8        9
0          21.525   21.250   21.574   21.722   22.693   21.992   22.438   22.650   21.972   22.017
1          22.262   21.656   22.339   22.914   21.549   24.906   23.862 1531.189 1695.041 1491.255
2          36.071   21.619   22.074   22.927   23.455   27.239   22.640   22.802  633.956 1882.008
3          52.163   21.651   21.571   22.686   22.727   22.024   40.980   22.156   22.429  595.362
4          64.977   21.511   22.519   22.148   22.354   23.292   57.740   20.835   37.680  250.594
5          54.724   21.074   21.200   23.744   22.109   21.351   62.225   21.447   91.292  375.260
6          49.065   21.573   22.309   26.084   21.226   21.248   68.580   22.531   59.338  249.936
7          44.666   21.830   21.009   28.760   21.533   21.611   72.291   23.144   86.281  118.326
8          35.486   21.720   21.840   24.729   22.045   20.877   76.473   21.193  120.387  136.836
9          41.159   23.365   21.721   23.024   21.835   20.972   77.409   21.784  193.885  306.158
So writing 10,000 "jobs"+"job scripts" usually takes some 22 seconds, but after
some time performance breaks down massively.
With other database backends we do not see this behaviour; see the following
performance data for the same test with a PostgreSQL backend, which is slower
(as expected, going over the network) but provides constant throughput:
Iteration       0        1        2        3        4        5        6        7        8        9
0          36.937   37.110   36.952   37.279   37.580   37.364   37.950   37.390   37.682   37.439
1          37.464   38.110   37.679   38.366   37.576   37.624   37.476   37.412   37.265   37.727
2          36.394   37.635   37.347   37.603   37.402   37.515   37.802   37.898   37.355   37.939
3          37.213   37.539   36.771   37.706   37.055   37.780   37.283   37.488   36.955   37.460
4          36.554   37.557   37.368   37.960   37.070   37.892   37.459   37.857   37.228   37.833
5          37.047   38.164   37.167   37.885   37.268   37.676   37.355   37.572   37.347   37.569
6          37.118   37.735   36.857   37.602   36.717   37.716   37.444   37.685   37.085   38.151
7          36.787   37.647   36.844   37.601   36.934   37.440   37.632   37.291   37.174   37.926
8          36.884   37.560   37.117   37.239   37.034   37.748   37.289   37.635   36.822   37.693
9          37.178   37.496   36.849   37.799   37.289   37.644   37.461   37.622   37.022   37.670
We can reproduce the issue with a small C program (see attachment) which
performs essentially the same database operations as our database layer in
Univa Grid Engine but depends only on liblmdb.
It simulates the scenario described above and gives the following performance
data, showing the extreme performance variation:
Iteration       0        1        2        3        4        5        6        7        8        9
0           0.686    0.625    0.660    0.637    0.631    0.741    0.757    0.658    0.651    0.614
1           0.705    0.838    0.690    0.772    0.663    3.248    0.605  542.762 1114.374  898.477
2          13.336    1.299    0.659    0.637    0.626    0.712   11.172    0.663   29.833 1161.884
3          26.774    0.647    0.607    0.586    0.583    0.639   24.893    0.629    3.837  423.248
4          32.802    0.629    0.616    0.560    0.550    0.605   31.133    0.625    6.606  195.150
5          34.819    0.623    0.628    0.582    0.564    0.609   32.275    0.607    7.599  134.106
6          26.319    0.622    0.582    0.548    0.551    0.590   28.536    0.611   36.429  160.781
7          21.878    0.814    0.668    0.736    0.614    0.543   24.355    0.626   36.583  148.337
8           4.129    0.654    5.674    0.596    0.566    0.554    7.158    0.633    0.599   48.799
9          30.278    0.608    0.608    0.560    0.549    0.587   29.253    0.606    9.593  128.339
It can be compiled on Linux 64bit with
gcc -I <path to lmdb>/include -L <path to lmdb>/lib -o test_lmdb_perf
test_lmdb_perf.c -llmdb
To run the given scenario call it with the following parameters:
./test_lmdb_perf <path to database directory> 10 10 10000
We built and ran it on
- CentOS Linux release 7.7.1908 (Core)
- Linux biber 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019
x86_64 x86_64 x86_64 GNU/Linux
- it was built with gcc (GCC) 7.2.1 20170829 (Red Hat 7.2.1-1) from
devtoolset-7