https://bugs.openldap.org/show_bug.cgi?id=10344
Issue ID: 10344
Summary: Potential memory leak in functions
firstComponentNormalize and objectClassPretty.
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: alexguo1023(a)gmail.com
Target Milestone: ---
Created attachment 1071
--> https://bugs.openldap.org/attachment.cgi?id=1071&action=edit
Patch: Ensure the first argument passed to `ber_dupbv_x` is not `NULL`.
In `firstComponentNormalize`, the code calls `ber_dupbv_x` but ignores its
return value.
```c
ber_dupbv_x(normalized, val, ctx);
```
When `normalized` is `NULL`, `ber_dupbv_x` allocates a new `struct berval` and
returns it; failing to capture that pointer means we lose ownership and leak
the allocation. The same issue arises in `objectClassPretty`. We should follow
the pattern in function `hexNormalize`, which asserts that its `normalized`
argument is non-NULL before use:
```c
assert(normalized != NULL);
```
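A minimal sketch of that approach at the `ber_dupbv_x` call site (the attached patch may differ):
```c
/* Assert on the caller-supplied out-parameter, as hexNormalize does, so the
 * duplicated berval is always written into storage the caller owns rather
 * than into a fresh allocation whose pointer would be discarded. */
assert( normalized != NULL );
ber_dupbv_x( normalized, val, ctx );
```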
https://bugs.openldap.org/show_bug.cgi?id=10335
Issue ID: 10335
Summary: [PATCH] ldapsearch: fix handling of -LL in
print_reference()
Product: OpenLDAP
Version: 2.6.9
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: client tools
Assignee: bugs(a)openldap.org
Reporter: bolek(a)live.com
Target Milestone: ---
Created attachment 1066
--> https://bugs.openldap.org/attachment.cgi?id=1066&action=edit
[PATCH] ldapsearch: fix handling of -LL in print_reference()
I, Boleslaw Ciesielski, hereby place the following modifications to OpenLDAP
Software (and only these modifications) into the public domain. Hence, these
modifications may be freely used and/or redistributed for any purpose with or
without attribution and/or other notice.
The attached patch (against master) fixes a bug in ldapsearch where the -LL (or
-LLL) option is ignored when printing the reference comments in
print_reference().
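A minimal sketch of the kind of guard such a fix might add, assuming the global `ldif` counter (incremented once per `-L`) gates comments here as it does elsewhere in ldapsearch; the comment text is illustrative:
```c
/* Print the reference comment only when fewer than two -L options were
 * given, i.e. suppress it for -LL and -LLL. */
if ( ldif < 2 ) {
	printf( _("# search reference\n") );
}
```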
https://bugs.openldap.org/show_bug.cgi?id=10339
Issue ID: 10339
Summary: config_add_internal() use-after-free on failed
cn=config mod
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
If overlay startup/callback fails, bconfig.c:5593 uses the value of ca->argv[1]
despite it being freed already. I guess we can just log the entry's DN.
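A hypothetical illustration of that suggestion; the variable name `e` and the message text are assumptions, not the actual bconfig.c code:
```c
/* Log the Entry's DN instead of the already-freed ca->argv[1]. */
Debug( LDAP_DEBUG_ANY, "config_add_internal: "
	"post-processing failed for entry \"%s\"\n",
	e->e_name.bv_val );
```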
https://bugs.openldap.org/show_bug.cgi?id=7981
--- Comment #7 from Quanah Gibson-Mount <quanah(a)openldap.org> ---
• d0d07810
by Ondřej Kuzník at 2025-06-23T16:47:48+00:00
ITS#7981 Allow setting a default hash per policy
https://bugs.openldap.org/show_bug.cgi?id=7981
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What           |Removed        |Added
----------------------------------------------------------------------------
Resolution     |---            |TEST
Status         |IN_PROGRESS    |RESOLVED
--- Comment #6 from Quanah Gibson-Mount <quanah(a)openldap.org> ---
• dad90d66
by Ondřej Kuzník at 2025-06-23T16:47:48+00:00
ITS#7981 Move default hash selection to slap_passwd_hash_type
https://bugs.openldap.org/show_bug.cgi?id=10355
Issue ID: 10355
Summary: mplay doesn't compile on musl
Product: LMDB
Version: 0.9.33
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: tools
Assignee: bugs(a)openldap.org
Reporter: bgilbert(a)backtick.net
Target Milestone: ---
With the musl C library (e.g. Alpine Linux), mplay doesn't compile:
gcc -pthread -O2 -g -W -Wall -Wno-unused-parameter -Wbad-function-cast -Wuninitialized -c mplay.c
mplay.c: In function 'addpid':
mplay.c:490:23: error: assignment of read-only variable 'stdin'
490 | stdin = fdopen(0, "r");
| ^
mplay.c:491:24: error: assignment of read-only variable 'stdout'
491 | stdout = fdopen(1, "w");
| ^
make: *** [Makefile:99: mplay.o] Error 1
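One possible portable workaround, sketched under the assumption that the reopened streams can be referred to through dedicated FILE pointers rather than through the `stdin`/`stdout` objects themselves (musl defines those as read-only, and ISO C does not require them to be assignable); the names below are illustrative:
```c
#include <stdio.h>

static FILE *child_in, *child_out;

static int attach_child_streams(void)
{
	child_in  = fdopen(0, "r");	/* was: stdin  = fdopen(0, "r"); */
	child_out = fdopen(1, "w");	/* was: stdout = fdopen(1, "w"); */
	return (child_in && child_out) ? 0 : -1;
}
```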
https://bugs.openldap.org/show_bug.cgi?id=10346
Issue ID: 10346
Summary: mdb_env_copy2 on a database with a value larger than
(2GB-16) results in a corrupt copy
Product: LMDB
Version: 0.9.31
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: mike.moritz(a)vertex.link
Target Milestone: ---
Created attachment 1072
--> https://bugs.openldap.org/attachment.cgi?id=1072&action=edit
reproduction source code
Running mdb_env_copy2 with compaction on a database with a value larger than
(2GB-16) bytes appears to complete successfully, in that there are no errors, but
the copied database cannot be opened and throws an MDB_CORRUPTED error. Looking
at the copied database size, it appears that the value is either being skipped
or significantly truncated. Running mdb_env_copy2 without compaction also
completes successfully, and the copied database can be opened.
I initially encountered this while using py-lmdb with v0.9.31 of LMDB, but was
able to write up a simple script that uses the library directly. The source for
the script is attached, and the results below are from running it with the
latest from master.
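For reference, a minimal sketch of the call sequence the attached script presumably exercises; this is not the attachment itself, the names, flags, and map size are illustrative, and error checking is omitted for brevity:
```c
#include <stdlib.h>
#include "lmdb.h"

int main(void)
{
	MDB_env *env;
	MDB_txn *txn;
	MDB_dbi dbi;
	MDB_val key, data;
	size_t vsize = (size_t)2 * 1024 * 1024 * 1024 - 16 + 1;	/* 2GB - 16 + 1 */

	mdb_env_create(&env);
	mdb_env_set_mapsize(env, vsize * 10);		/* plenty of headroom */
	mdb_env_open(env, "./test.lmdb", MDB_NOSUBDIR, 0664);

	/* Store a single zero-filled value just over the (2GB - 16) boundary. */
	mdb_txn_begin(env, NULL, 0, &txn);
	mdb_dbi_open(txn, NULL, 0, &dbi);
	key.mv_size = 3;
	key.mv_data = "big";
	data.mv_size = vsize;
	data.mv_data = calloc(1, vsize);
	mdb_put(txn, dbi, &key, &data, 0);
	mdb_txn_commit(txn);

	/* Copying with compaction is the case that yields MDB_CORRUPTED when the
	 * copy is reopened; without MDB_CP_COMPACT the copy reads back fine. */
	mdb_env_copy2(env, "./testbak.lmdb", MDB_CP_COMPACT);

	mdb_env_close(env);
	return 0;
}
```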
Without compaction:
$ ./lmdb_repro test.lmdb $((2 * 1024 * 1024 * 1024 - 16 + 1)) testbak.lmdb
LMDB Version: LMDB 0.9.70: (December 19, 2015)
Set LMDB map size to 21474836330 bytes
Successfully inserted key with 2147483633 bytes of zero-filled data
Retrieved 2147483633 bytes of data
First 16 bytes (hex): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
Copying database to testbak.lmdb...
Database copy completed successfully.
Opening copied database and reading value...
Retrieved 2147483633 bytes of data from copied database
First 16 bytes from copy (hex): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
Data size matches between original and copy
With compaction:
$ ./lmdb_repro -c test.lmdb $((2 * 1024 * 1024 * 1024 - 16 + 1)) testbak.lmdb
LMDB Version: LMDB 0.9.70: (December 19, 2015)
Set LMDB map size to 21474836330 bytes
Successfully inserted key with 2147483633 bytes of zero-filled data
Retrieved 2147483633 bytes of data
First 16 bytes (hex): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
Copying database to testbak.lmdb (with compaction)...
Database copy completed successfully.
Opening copied database and reading value...
mdb_get (copy) failed: MDB_CORRUPTED: Located page was wrong type
Size difference on corrupt DB:
$ du -sh ./*
312K ./lmdb_repro
24K ./testbak.lmdb
2.1G ./test.lmdb
With compaction at the perceived max size:
$ ./lmdb_repro -c test.lmdb $((2 * 1024 * 1024 * 1024 - 16)) testbak.lmdb
LMDB Version: LMDB 0.9.70: (December 19, 2015)
Set LMDB map size to 21474836320 bytes
Successfully inserted key with 2147483632 bytes of zero-filled data
Retrieved 2147483632 bytes of data
First 16 bytes (hex): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
Copying database to testbak.lmdb (with compaction)...
Database copy completed successfully.
Opening copied database and reading value...
Retrieved 2147483632 bytes of data from copied database
First 16 bytes from copy (hex): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
Data size matches between original and copy
https://bugs.openldap.org/show_bug.cgi?id=10342
Issue ID: 10342
Summary: Potential Memory Leak in function mdb_txn_begin
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: alexguo1023(a)gmail.com
Target Milestone: ---
Created attachment 1069
--> https://bugs.openldap.org/attachment.cgi?id=1069&action=edit
Free txn->mt_u.dirty_list before freeing txn
The function `mdb_txn_begin` allocates the dirty list via
```c
txn->mt_u.dirty_list = malloc(sizeof(MDB_ID2) * MDB_IDL_UM_SIZE);
```
Later, when `txn != env->me_txn0`, it calls
```c
free(txn);
```
without first freeing `txn->mt_u.dirty_list`. This orphaned allocation leads to
a memory leak.
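A minimal sketch of the cleanup the attached patch presumably applies; the exact error path in `mdb_txn_begin` may differ:
```c
/* Release the dirty list before releasing the transaction itself. */
if (txn != env->me_txn0) {
	free(txn->mt_u.dirty_list);
	free(txn);
}
```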
https://bugs.openldap.org/show_bug.cgi?id=10274
Issue ID: 10274
Summary: Replication issue on MMR configuration
Product: OpenLDAP
Version: 2.5.14
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: falgon.comp(a)gmail.com
Target Milestone: ---
Created attachment 1036
--> https://bugs.openldap.org/attachment.cgi?id=1036&action=edit
In this attachment you will find 2 OpenLDAP configurations for 2 instances, a
SLAMD configuration example, 5 screenshots showing the issue, and one text file
explaining what you see
Hello, we are opening this issue as a follow-up to the initial post on the
openldap-technical list:
https://lists.openldap.org/hyperkitty/list/openldap-technical@openldap.org/…
Issue:
We are working on a project and have come across a replication issue after
performance testing:
*Configuration:*
RHEL 8.6
OpenLDAP 2.5.14
*MMR-delta* configuration on multiple servers (attached)
300,000 users configured and used for tests
*olcLastBind: TRUE*
Use of SLAMD (performance/load testing)
*Problem description:*
We are currently running performance and resilience tests on our infrastructure
using the SLAMD tool, configured to perform BINDs and MODs on a defined range of
accounts.
We use a load balancer (VIP) to poll all of our servers equally (it is also
possible to run the performance tests directly against each of the directories).
With our current infrastructure we can perform approximately 300 MOD/BIND
operations per second. Beyond that, we start to generate delays and can randomly
run into the issue described below.
When we run performance tests that exceed our write capacity, replication
between the servers can randomly break, leaving directories unable to catch up
on their replication delay.
The directories still update their contextCSNs, but extremely slowly (as if
frozen). From then on, it is impossible for the directories to catch up again,
even with no incoming traffic.
A restart of the instance is required to perform a full refresh and resolve the
incident.
We have enabled synchronization logging and see no error or refresh logs
indicating a problem (we can provide logs if necessary).
We suspect a write collision or a replication conflict, but this is never
written in our sync logs.
We've run a lot of tests.
For example, when we run a performance test against a single live server, we
don't reproduce the problem.
Another example: if we define different account ranges for each server in
SLAMD, we don't reproduce the problem either.
If we use only one account for the range, we don't reproduce the problem
either.
______________________________________________________________________
I have added some screenshots to the attachment to show the issue, with all the
explanations.
______________________________________________________________________
*Symptoms:*
One or more directories can no longer replicate normally after performance
testing ends.
No apparent error logs.
A restart of the instances is needed to solve the problem.
*How to reproduce the problem:*
Have at least two servers in MMR mode.
Set LastBind to TRUE.
Run a SLAMD test through a load balancer in bandwidth mode, OR start multiple
SLAMD tests at the same time against each server with the same account range.
Exceed the maximum write capacity of the servers.
*SLAMD configuration :*
authrate.sh --hostname ${HOSTNAME} --port ${PORTSSL} \
--useSSL --trustStorePath ${CACERTJKS} \
--trustStorePassword ${CACERTJKSPW} --bindDN "${BINDDN}" \
--bindPassword ${BINDPW} --baseDN "${BASEDN}" \
--filter "(uid=[${RANGE}])" --credentials ${USERPW} \
--warmUpIntervals ${WARMUP} \
--numThreads ${NTHREADS} ${ARGS}