https://bugs.openldap.org/show_bug.cgi?id=10168
Issue ID: 10168
Summary: olcDbIndex doesn't clean up cleanly
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
If you run the following modify against slapd (note that the olcDbMultival data is
wrong), slapd aborts in mdb_cf_cleanup->mdb_attr_dbs_open when cleaning up the
olcDbIndex changes:
dn: olcDatabase={1}mdb,cn=config
changetype: modify
add: olcDbIndex
olcDbIndex: member eq
olcDbIndex: memberof eq
-
add: olcDbMultival
olcDbMultival: member,memberOf 5,15
-
https://bugs.openldap.org/show_bug.cgi?id=10270
Issue ID: 10270
Summary: Issues with pcache when refresh/persistPcache used
Product: OpenLDAP
Version: 2.5.18
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: aweits(a)rit.edu
Target Milestone: ---
Greetings, OpenLDAP-folk.
We've been running with the pcache overlay in 2.5.18,
with both query refresh and pcachePersist, for a while now
and have observed some oddities:
1.) Negative queries don't get refreshed
2.) Queries don't seem to be persisted
Both behaviors are also exhibited by the current
git version - code/patches below for clarity
of communication:
Thanks!
Andy
commit c0b4fe92c8df746c0e6a777f93f1687135114eb9
Author: Andrew Elble <aweits(a)rit.edu>
Date: Fri Oct 11 08:43:47 2024 -0400
negative cache entries are not loaded when pcachePersist is on
diff --git a/servers/slapd/overlays/pcache.c b/servers/slapd/overlays/pcache.c
index 9ef78fd6bf43..9fd72e6d7261 100644
--- a/servers/slapd/overlays/pcache.c
+++ b/servers/slapd/overlays/pcache.c
@@ -802,7 +802,11 @@ url2query(
 		goto error;
 	}
 
-	cq = add_query( op, qm, &query, qt, PC_POSITIVE, 0 );
+	if (BER_BVISNULL( &uuid )) {
+		cq = add_query( op, qm, &query, qt, PC_NEGATIVE, 0 );
+	} else {
+		cq = add_query( op, qm, &query, qt, PC_POSITIVE, 0 );
+	}
 	if ( cq != NULL ) {
 		cq->expiry_time = expiry_time;
 		cq->refresh_time = refresh_time;
commit 8f7b50dfcec69fa01f8cf0a4b77f3dee8ef9f0f6
Author: Andrew Elble <aweits(a)rit.edu>
Date: Fri Oct 11 08:38:36 2024 -0400
queries with ttr/x-refresh are not loaded when pcachePersist is on
diff --git a/servers/slapd/overlays/pcache.c b/servers/slapd/overlays/pcache.c
index 40c1f9673776..9ef78fd6bf43 100644
--- a/servers/slapd/overlays/pcache.c
+++ b/servers/slapd/overlays/pcache.c
@@ -749,7 +749,7 @@ url2query(
 		}
 	}
 
-	if ( got != GOT_ALL ) {
+	if ( (got & GOT_ALL) != GOT_ALL) {
 		rc = 1;
 		goto error;
 	}
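To spell out what the patch above fixes: GOT_ALL is a bitmask of the required flags, and got can legitimately carry extra bits beyond it (such as one recorded when ttr/x-refresh data is parsed), so a strict equality test rejects otherwise-complete cache URLs. A standalone C illustration, with flag values invented for the example rather than taken from pcache.c:

#include <assert.h>

/* flag values invented for illustration; pcache.c's GOT_* differ */
#define GOT_DN	0x1
#define GOT_TTL	0x2
#define GOT_ALL	(GOT_DN | GOT_TTL)
#define GOT_TTR	0x4	/* optional extra bit */

int main( void )
{
	int got = GOT_DN | GOT_TTL | GOT_TTR;

	assert( got != GOT_ALL );		/* the old test spuriously fails here */
	assert( (got & GOT_ALL) == GOT_ALL );	/* the fixed test passes */
	return 0;
}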
commit c7e52c90192a43876d40b9776a58db951d27937c
Author: Andrew Elble <aweits(a)rit.edu>
Date: Fri Oct 11 08:37:13 2024 -0400
ttr was not being applied to negatively cached entries
diff --git a/servers/slapd/overlays/pcache.c b/servers/slapd/overlays/pcache.c
index 1d6e4ba4edcf..40c1f9673776 100644
--- a/servers/slapd/overlays/pcache.c
+++ b/servers/slapd/overlays/pcache.c
@@ -1580,6 +1580,8 @@ add_query(
 
 	case PC_NEGATIVE:
 		ttl = templ->negttl;
+		if ( templ->ttr )
+			ttr = now + templ->ttr;
 		break;
 
 	case PC_SIZELIMIT:
https://bugs.openldap.org/show_bug.cgi?id=10169
Issue ID: 10169
Summary: Add support for token-only authentication with otp overlay
Product: OpenLDAP
Version: 2.6.6
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Currently the OTP overlay is password + token. It would be nice to be able to
configure it to run in a token-only mode, similar to the slapo-totp overlay in
contrib. This would give us a project-supported solution and allow retiring
that contrib module.
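Purely to illustrate the request, a sketch of what enabling such a mode might look like in cn=config - the attribute name olcOTPTokenOnly is invented here for illustration and does not exist in the overlay today:

# hypothetical attribute, shown only to illustrate the feature request
dn: olcOverlay={0}otp,olcDatabase={1}mdb,cn=config
changetype: modify
add: olcOTPTokenOnly
olcOTPTokenOnly: TRUE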
https://bugs.openldap.org/show_bug.cgi?id=10020
Issue ID: 10020
Summary: dynlist's @groupOfUniqueNames is considered only for
the first configuration line
Product: OpenLDAP
Version: 2.5.13
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: msl(a)touk.pl
Target Milestone: ---
If we consider the following configuration of dynlist:
{0}toukPerson labeledURI uniqueMember+memberOf@groupOfUniqueNames
{1}groupOfURLs memberURL uniqueMember+dgMemberOf@groupOfUniqueNames
The {0} entry will correctly populate memberOf according to static group
membership.
The {1} entry will correctly produce dgMemberOf with dynamic group membership
(based on the memberURL query), but it will not populate static entries IF the
{0} entry is present in the configuration. IF I remove {0} from the dynlist
configuration - or remove the @groupOfUniqueNames part from that configuration
line - then both dynamic and static entries are populated correctly for {1}.
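For reproduction, the equivalent slapd.conf form of the two attrset lines above would be something like this sketch (directive name per slapo-dynlist(5); the cn=config form in the report uses the {0}/{1} ordering):

# sketch of the reported configuration in slapd.conf syntax
overlay dynlist
dynlist-attrset toukPerson labeledURI uniqueMember+memberOf@groupOfUniqueNames
dynlist-attrset groupOfURLs memberURL uniqueMember+dgMemberOf@groupOfUniqueNames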
So the effects on a given user entry are as follows:
If both {0} and {1} are present - {1} produces only dynamic groups:
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
If both {0} and {1} are present and @groupOfUniqueNames is removed from {0} -
{1} produces static+dynamic groups:
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
If only {1} is present - {1} produces static+dynamic groups:
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
For completeness - if only {0} is present:
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
I would expect the correct behavior for the first case ({0} and {1} both present) to be:
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
https://bugs.openldap.org/show_bug.cgi?id=9934
Issue ID: 9934
Summary: slapd-config(5) should document how to store
certificates for slapd usage
Product: OpenLDAP
Version: 2.5.13
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: documentation
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Commit 7b41feed83b expanded the ability of cn=config to store the certificates
used by slapd for TLS directly in the config database. However, the
documentation for the new parameters was never added to the slapd-config(5) man
page:
olcTLSCACertificate $ olcTLSCertificate $ olcTLSCertificateKey
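For illustration only, loading a CA certificate into cn=config might look like the following sketch - it assumes these attributes carry DER certificate data with the ;binary transfer option, which is exactly the kind of detail the man page should confirm:

# sketch; value encoding/transfer option to be confirmed by the documentation
dn: cn=config
changetype: modify
replace: olcTLSCACertificate
olcTLSCACertificate;binary:< file:///etc/openldap/ca.der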
https://bugs.openldap.org/show_bug.cgi?id=10258
Issue ID: 10258
Summary: test050 failure: connection_close race?
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Created attachment 1032
--> https://bugs.openldap.org/attachment.cgi?id=1032&action=edit
tail of slapd log
Running test050 repeatedly, slapd managed to get itself into an apparent
inconsistency in the connections structure. The logs suggest there might be a
race when closing the connection. Unfortunately the sanitiser didn't initiate a
core dump in this case.
https://bugs.openldap.org/show_bug.cgi?id=10274
Issue ID: 10274
Summary: Replication issue on MMR configuration
Product: OpenLDAP
Version: 2.5.14
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: falgon.comp(a)gmail.com
Target Milestone: ---
Created attachment 1036
--> https://bugs.openldap.org/attachment.cgi?id=1036&action=edit
In this attachment you will find 2 OpenLDAP configurations for 2 instances, a
SLAMD conf example, 5 screenshots showing the issue, and one text file
explaining what you see
Hello, we are opening this issue as a follow-up to the initial post on the openldap-technical list:
https://lists.openldap.org/hyperkitty/list/openldap-technical@openldap.org/…
Issue:
We are working on a project and have come across an issue with replication
after performance testing:
*Configuration:*
RHEL 8.6
OpenLDAP 2.5.14
*MMR-delta* configuration on multiple servers (configs attached)
300,000 users configured and used for tests
*olcLastBind: TRUE*
Use of SLAMD (performance load generator)
*Problem description:*
We are currently running performance and resilience tests on our infrastructure
using the SLAMD tool, configured to perform BINDs and MODs on a defined range
of accounts.
We use a load balancer (VIP) to poll all of our servers equally (though it is
also possible to run the performance tests directly against each of the
directories).
With our current infrastructure we're able to perform approximately 300
MOD/BIND/s. Beyond that, we start to generate delays and can randomly run into
one issue.
When we run performance tests that exceed our write capacity, the replication
between servers can randomly create an incident, with directories unable to
catch up on their replication delay.
The directories update their contextCSNs, but extremely slowly (as if frozen).
From then on, it's impossible for the directories to catch up again, even with
no incoming traffic.
A restart of the instance is required to perform a full refresh and resolve the
incident.
We have enabled synchronization logging and see no error or refresh logs
indicating a problem (we can provide logs if necessary).
We suspect a write collision or a replication conflict, but nothing of the sort
is ever written to our sync logs.
We've run a lot of tests.
For example, when we run a performance test against a single live server, we
don't reproduce the problem.
Another example: if we define different account ranges for each server in
SLAMD, we don't reproduce the problem either.
Nor do we reproduce it if we use only one account for the whole range.
______________________________________________________________________
I have added some screenshots to the attachment to show the issue, with full
explanations.
______________________________________________________________________
*Symptoms:*
One or more directories can no longer replicate normally after performance
testing ends.
No apparent error logs.
A restart of the instances is needed to solve the problem.
*How to reproduce the problem:*
Have at least two servers in MMR mode.
Set lastbind to TRUE (see the cn=config sketch below).
Run a SLAMD test through a load balancer in bandwidth mode, OR start multiple
SLAMD tests at the same time, one per server, with the same account range.
Exceed the maximum write capacity of the servers.
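For the lastbind step above, a minimal cn=config sketch - assuming the olcLastBind database attribute and an mdb database at {1}; adjust the DN to your layout:

# sketch; the DN depends on the actual database layout
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcLastBind
olcLastBind: TRUE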
*SLAMD configuration:*
authrate.sh --hostname ${HOSTNAME} --port ${PORTSSL} \
--useSSL --trustStorePath ${CACERTJKS} \
--trustStorePassword ${CACERTJKSPW} --bindDN "${BINDDN}" \
--bindPassword ${BINDPW} --baseDN "${BASEDN}" \
--filter "(uid=[${RANGE}])" --credentials ${USERPW} \
--warmUpIntervals ${WARMUP} \
--numThreads ${NTHREADS} ${ARGS}
https://bugs.openldap.org/show_bug.cgi?id=10026
Issue ID: 10026
Summary: Refresh handling can skip entries (si_dirty not
managed properly)
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Take MPR plain syncrepl with 3+ providers.
When a provider's own syncrepl session transitions to persist and it starts a
new parallel session towards another host, that session always has to start as
a refresh. If that refresh serves entries to us, our handling of si_dirty is
not consistent:
- if the existing persist session serves some of these entries to us, we can
"forget" to pass the others on to a newly connected consumer
- the same if the refresh is abandoned and we start refreshing from a different
provider that might be behind what we were being served (again, our consumers
could suffer)
- if we restart, si_dirty is forgotten and our consumers suffer even worse
We might need to be told (at the beginning of the refresh?) what the end state
we're going for is, so we can keep si_dirty on until then. And somehow persist
that knowledge in the DB...
https://bugs.openldap.org/show_bug.cgi?id=10205
Issue ID: 10205
Summary: SSL handshake blocks forever in async mode if server is unreachable
Product: OpenLDAP
Version: 2.5.17
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: regtube(a)hotmail.com
Target Milestone: ---
When the ldaps:// scheme is used to connect to a currently unreachable server
with the LDAP_OPT_CONNECT_ASYNC and LDAP_OPT_NETWORK_TIMEOUT options set, the
library blocks forever in SSL_connect.
Here is a trace:
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP winserv.test.net:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 192.168.56.2:636
ldap_pvt_connect: fd: 3 tm: 30 async: -1
ldap_ndelay_on: 3
attempting to connect:
connect errno: 115
ldap_int_poll: fd: 3 tm: 0
ldap_err2string
[2024-04-25 15:41:27.112] [error] [:1] bind(): Connecting (X)
[2024-04-25 15:41:27.112] [error] [:1] err: -18
ldap_sasl_bind
ldap_send_initial_request
ldap_int_poll: fd: 3 tm: 0
ldap_is_sock_ready: 3
ldap_ndelay_off: 3
TLS trace: SSL_connect:before SSL initialization
TLS trace: SSL_connect:SSLv3/TLS write client hello
It looks like this happens because non-blocking mode is cleared from the socket
(ldap_ndelay_off) after the first poll for write and is never restored before
the TLS connect is attempted, because of this check, which assumes that
non-blocking mode has already been set in async mode:
	if ( !async ) {
		/* if async, this has already been set */
		ber_sockbuf_ctrl( sb, LBER_SB_OPT_SET_NONBLOCK, (void*)1 );
	}
while in fact it has been cleared.
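One possible direction, sketched from this analysis rather than taken from a committed fix, is to re-assert non-blocking mode at this point regardless of the async flag, since ldap_ndelay_off() may have cleared it:

	/* sketch only: always (re)enable non-blocking mode before the
	 * TLS handshake; ldap_ndelay_off() above may have cleared it
	 * even when async was requested */
	ber_sockbuf_ctrl( sb, LBER_SB_OPT_SET_NONBLOCK, (void*)1 );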
https://bugs.openldap.org/show_bug.cgi?id=10162
Issue ID: 10162
Summary: Fix for binary attributes data corruption in back-sql
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: dex.tracers(a)gmail.com
Target Milestone: ---
Created attachment 1006
--> https://bugs.openldap.org/attachment.cgi?id=1006&action=edit
Fix for binary attributes corruption in back-sql
I've configured slapd to use back-sql (MariaDB through ODBC) and observed
issues retrieving BINARY data from the database. The attribute length was
reported correctly, but only the first 16384 bytes of each value were correct
data; everything past that point was junk (usually filled with "AAAAAAAA" and
fragments of other attributes' data from memory).
During debugging, I noticed that:
- MAX_ATTR_LEN (16384 bytes) is used as the data length for BINARY columns when
SQLBindCol is called inside the backsql_BindRowAsStrings_x function
- after SQLFetch, the data in row->cols[i] is only filled up to the specified
MAX_ATTR_LEN
- after SQLFetch, the correct data size (greater than MAX_ATTR_LEN) is reported
in row->value_len
I'm assuming that slapd allocates the buffer (row->cols[i]), fills it with the
specified amount of data (MAX_ATTR_LEN), but when forming the actual attribute
value uses the length from row->value_len, so everything from the 16384-byte
position up to row->value_len is just junk from memory (uninitialized
leftovers, data from other variables).
After investigating, I found out that:
- for BINARY or variable-length fields, SQLGetData should be used
- SQLGetData supports a chunked mode (if the length is unknown) or a full-read
mode if the length is known
- it can be used in combination with SQLBindCol, after SQLFetch (!)
Since we have the correct data length in row->value_len, I've added code to the
backsql_get_attr_vals() function that overwrites the corrupted data with the
correct data by issuing an SQLGetData request. And it worked - binary data was
properly retrieved and returned over LDAP!
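A condensed sketch of that approach (not the attached patch; names and error handling are illustrative, and it assumes the ODBC driver permits SQLGetData() on a column that was also bound with SQLBindCol() - the SQL_GD_BOUND extension - as my testing with the MariaDB driver suggests):

#include <sql.h>
#include <sqlext.h>
#include <stdlib.h>

/*
 * After SQLFetch() reports a value longer than the bound buffer,
 * re-read the whole column with SQLGetData() into a buffer sized
 * from the true length the driver reported.
 */
static SQLRETURN
refetch_binary_col( SQLHSTMT sth, SQLUSMALLINT colno,
	SQLLEN reported_len, void **valp )
{
	SQLLEN ind;
	SQLRETURN rc;
	void *buf = malloc( (size_t)reported_len );

	*valp = NULL;
	if ( buf == NULL )
		return SQL_ERROR;

	/* single full-size read; the driver copies the complete value
	 * because the buffer now matches the reported length */
	rc = SQLGetData( sth, colno, SQL_C_BINARY, buf, reported_len, &ind );
	if ( !SQL_SUCCEEDED( rc ) ) {
		free( buf );
		return rc;
	}
	*valp = buf;
	return rc;
}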
My current concern / where help is needed: I'm not very familiar with the
memory allocation/deallocation mechanisms involved, so I'm afraid the change
could lead to memory corruption (so far not observed).
Please review the attached patch (testing was done on OPENLDAP_REL_ENG_2_5_13,
and the patch was applied to the master branch for easier review/application).