https://bugs.openldap.org/show_bug.cgi?id=9393
Issue ID: 9393
Summary: Provide an LDAP filter validation function
Product: OpenLDAP
Version: 2.4.56
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: best(a)univention.de
Target Milestone: ---
In many situations I need to validate whether a user-submitted LDAP filter has
valid syntax.
It seems there is no official function to check this.
Could you provide one?
libraries/libldap/filter.c: ldap_pvt_put_filter() can be used as a basis.
--
My current workaround is to use an unconnected LDAP connection and perform a
search with that filter. This yields a FILTER_ERROR (invalid filter) or a
SERVER_DOWN error (the filter parsed, so it is syntactically valid).
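For illustration, a minimal sketch of that workaround using only public libldap
calls; the URL and the helper name filter_is_valid() are placeholders of mine,
not an existing API:

    #include <ldap.h>

    /* Try the filter in a search against an address nothing listens on.
     * LDAP_FILTER_ERROR means the filter failed to parse client-side;
     * LDAP_SERVER_DOWN means it parsed, and the library only failed to
     * contact the (intentionally dead) server. */
    static int
    filter_is_valid( const char *filter )
    {
        LDAP *ld = NULL;
        LDAPMessage *res = NULL;
        int rc;

        if ( ldap_initialize( &ld, "ldap://127.0.0.1:1" ) != LDAP_SUCCESS )
            return -1;

        rc = ldap_search_ext_s( ld, "", LDAP_SCOPE_BASE, filter,
                NULL, 0, NULL, NULL, NULL, 0, &res );

        if ( res ) ldap_msgfree( res );
        ldap_unbind_ext_s( ld, NULL, NULL );

        if ( rc == LDAP_FILTER_ERROR ) return 0;    /* invalid syntax */
        if ( rc == LDAP_SERVER_DOWN )  return 1;    /* syntactically valid */
        return -1;                                  /* anything else */
    }

A dedicated validation entry point in libldap would avoid that connection
round-trip entirely.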
See also:
https://github.com/python-ldap/python-ldap/pull/272
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9042
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Resolution|--- |FIXED
Status|IN_PROGRESS |RESOLVED
--- Comment #3 from Quanah Gibson-Mount <quanah(a)openldap.org> ---
• 4b8e60f8
by Ondřej Kuzník at 2024-10-25T20:02:19+00:00
ITS#9042 Log modify values under STATS2
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9914
Issue ID: 9914
Summary: Add OS pagesize to the back-mdb monitor information
Product: OpenLDAP
Version: 2.6.3
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
The pagesize that back-mdb is using for pages should be exposed via the
cn=monitor backend, as a remote client doing a query will not have that
information available to it.
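For context, a local tool can already read this value through LMDB's
mdb_env_stat(); a rough sketch, where /var/lib/ldap is only a placeholder for
the back-mdb directory:

    #include <stdio.h>
    #include <lmdb.h>

    /* Print the page size of an existing back-mdb environment locally;
     * a remote LDAP client has no equivalent, hence the request to
     * publish it under cn=monitor. */
    int main( void )
    {
        MDB_env *env;
        MDB_stat st;
        int rc;

        if ( ( rc = mdb_env_create( &env ) ) != 0 )
            return rc;
        if ( ( rc = mdb_env_open( env, "/var/lib/ldap", MDB_RDONLY, 0644 ) ) == 0
                && ( rc = mdb_env_stat( env, &st ) ) == 0 )
            printf( "page size: %u bytes\n", st.ms_psize );
        else
            fprintf( stderr, "%s\n", mdb_strerror( rc ) );

        mdb_env_close( env );
        return rc;
    }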
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10283
Issue ID: 10283
Summary: Search result internal error: mdb_opinfo_get:
thread_pool_setkey failed err
Product: OpenLDAP
Version: 2.5.13
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: asilva(a)wirelessmundi.com
Target Milestone: ---
Hi,
From time to time I get errors when performing searches and I need to restart
the server to restore the service.
This is the event log when it happens:
Nov 08 10:07:39 main slapd[3946463]: conn=1780 fd=182 ACCEPT from
IP=113.53.215.185:41404 (IP=0.0.0.0:389)
Nov 08 10:07:39 main slapd[3946463]: conn=1780 op=0 BIND
dn="cn=admin,ou=users,o=cptrabajosocial.local.com" method=128
Nov 08 10:07:39 main slapd[3946463]: mdb_opinfo_get: thread_pool_setkey failed
err (12)
Nov 08 10:07:39 main slapd[3946463]: conn=1780 op=0 RESULT tag=97 err=12
qtime=0.000027 etime=0.000159 text=internal error
Nov 08 10:07:39 main slapd[3946463]: conn=1780 op=1 UNBIND
Nov 08 10:07:39 main slapd[3946463]: conn=1780 fd=182 closed
In the code I found:
    if ( ( rc = ldap_pvt_thread_pool_setkey( ctx, mdb->mi_dbenv,
            data, mdb_reader_free, NULL, NULL ) ) ) {
        mdb_txn_abort( moi->moi_txn );
        moi->moi_txn = NULL;
        Debug( LDAP_DEBUG_ANY, "mdb_opinfo_get: thread_pool_setkey failed err (%d)\n", rc );
        return rc;
    }
and in libraries/libldap/tpool.c I see that error 12 (ENOMEM) is returned when
MAXKEYS is reached:
    if ( data || kfree ) {
        if ( i >= MAXKEYS )
            return ENOMEM;
        ctx->ltu_key[i].ltk_key = key;
        ctx->ltu_key[i].ltk_data = data;
        ctx->ltu_key[i].ltk_free = kfree;
    } else if ( found ) {
        clear_key_idx( ctx, i );
    }
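To make the failure mode easier to see, here is a toy standalone model of that
per-thread key table (not the real tpool code; tpool.c defines MAXKEYS as 32):
every MDB environment a worker thread touches claims one slot, and once the
slots are gone the next setkey call returns ENOMEM, i.e. err (12):

    #include <errno.h>
    #include <stdio.h>

    #define MAXKEYS 32                  /* fixed table size, as in tpool.c */

    static void *slots[MAXKEYS];        /* one table per worker thread */

    static int setkey( void *key )
    {
        int i;
        for ( i = 0; i < MAXKEYS; i++ ) {
            if ( slots[i] == NULL || slots[i] == key ) {
                slots[i] = key;
                return 0;
            }
        }
        return ENOMEM;                  /* the error 12 seen in the log */
    }

    int main( void )
    {
        static int dbenv[47];           /* stand-ins for 47 MDB databases */
        int i, rc;

        for ( i = 0; i < 47; i++ )
            if ( ( rc = setkey( &dbenv[i] ) ) != 0 )
                printf( "database %d: setkey failed err (%d)\n", i + 1, rc );
        return 0;
    }

If that reading is right, the limit is hit as soon as a single worker thread has
handled operations against more than MAXKEYS distinct databases.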
Found an old mail about this issue:
https://openldap-technical.openldap.narkive.com/Th0WLQYh/max-numbers-of-sub…
In my setup I've configured 47 MDB databases, for contacts only, one for each
company on the server, so they cannot see each other's data.
I can't understand how this happens or how to prevent it. Is there any
configuration I can set to avoid reaching MAXKEYS?
Thanks,
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10257
Issue ID: 10257
Summary: Documentation assessment
Product: OpenLDAP
Version: 2.6.8
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: documentation
Assignee: bugs(a)openldap.org
Reporter: hyc(a)openldap.org
Target Milestone: ---
Created attachment 1031
--> https://bugs.openldap.org/attachment.cgi?id=1031&action=edit
Admin guide assessment
Last month I commissioned a tech writer to review the current 2.6 Admin Guide
to see what needs to be worked on before the 2.7 release. I'm attaching the
report here so we can reference it. I should have distributed it sooner, but
life got in the way.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10278
Issue ID: 10278
Summary: Move away from python-ldap0 in the test suite
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: test suite
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
While python-ldap0 has a superior API, it seems the module is no longer
receiving any development. We should move our python test suite over to another
module that can be supported long-term.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10277
Issue ID: 10277
Summary: How to deal with desync between cn=config and
back-ldif DNs
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
If someone deletes a cn=config entry offline (or through bugs in cn=config;
they exist, and I will file them as I isolate them), the X-ORDERED RDNs will no
longer be contiguous.
cn=config papers over this internally, at the cost of never being able to
modify the affected entries.
Right now the only remedy is a slapcat+slapadd of the whole config DB. Is that
the best we can do? When we detect this (which doesn't always happen), should
we fix the on-disk copy on startup?
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10276
Issue ID: 10276
Summary: crash using pcache overlay on mdb backend
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: chris.paul(a)rexconsulting.net
Target Milestone: ---
Created attachment 1038
--> https://bugs.openldap.org/attachment.cgi?id=1038&action=edit
backtrace
When using the pcache overlay with a local mdb backend, slapd crashes. The
intention is to use pcache to reduce lookup times for dynamic (dynlist) groups
and memberOf values.
GDB backtrace attached. Config available upon request.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10268
Issue ID: 10268
Summary: Operation rate limiting
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: chris.paul(a)rexconsulting.net
Target Milestone: ---
Please consider this request for enhancement. It would be very useful for slapd
to have some basic rate limiting per connection or per IP. The
monitorConnectionsOpsCompleted counts are already available in cn=monitor, so a
dependency on cn=monitor seems reasonable.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9211
Bug ID: 9211
Summary: Relax control is not consistently access-restricted
Product: OpenLDAP
Version: 2.4.49
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ryan(a)openldap.org
Target Milestone: ---
The following operations can be performed by anyone having 'write' access (not
even 'manage') using the Relax control:
- modifying/replacing structural objectClass
- adding/modifying OBSOLETE attributes
Some operations are correctly restricted:
- adding/modifying NO-USER-MODIFICATION attributes marked as manageable
(Modification of non-conformant objects doesn't appear to be implemented at
all.)
In the absence of ACLs for controls, I'm of the opinion that all use of the
Relax control should require manage access. The Relax draft clearly and
repeatedly discusses its use cases in terms of directory _administrators_
temporarily relaxing constraints in order to accomplish a specific task.
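To make the scenario concrete, here is a hedged sketch of how a client holding
only 'write' access would submit such a modification; the DN and the new
objectClass are made up, and the OID is the Relax Rules control as I understand
it:

    #include <ldap.h>

    /* Replace the structural objectClass of an entry under the Relax
     * Rules control (criticality TRUE, no control value).  Per this
     * report, plain 'write' access to the attribute is enough for this
     * to succeed today. */
    int relax_replace_oc( LDAP *ld )
    {
        char *vals[] = { "inetOrgPerson", NULL };
        LDAPMod mod = { LDAP_MOD_REPLACE, "objectClass", { vals } };
        LDAPMod *mods[] = { &mod, NULL };
        LDAPControl relax = { "1.3.6.1.4.1.4203.666.5.12", { 0, NULL }, 1 };
        LDAPControl *sctrls[] = { &relax, NULL };

        return ldap_modify_ext_s( ld, "uid=example,dc=example,dc=com",
                mods, sctrls, NULL );
    }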
--
You are receiving this mail because:
You are on the CC list for the bug.
https://bugs.openldap.org/show_bug.cgi?id=9920
Issue ID: 9920
Summary: MDB_PAGE_FULL with master3 (encryption) because there
is no room for the authentication data (MAC)
Product: LMDB
Version: unspecified
Hardware: x86_64
OS: Mac OS
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: info(a)parlepeuple.fr
Target Milestone: ---
Created attachment 915
--> https://bugs.openldap.org/attachment.cgi?id=915&action=edit
proposed patch
Hello,
on master3, using the encryption-at-rest feature, I am testing as follows:
- On a new named database, I set the encryption function with
mdb_env_set_encrypt(env, encfunc, &enckey, 32).
- Note that I chose a size parameter ("The size of authentication data in
bytes, if any. Set this to zero for unauthenticated encryption mechanisms.")
of 32 bytes.
- I add 2 entries to the DB, trying to saturate the first page. I chose a key
of 33 bytes and a value of 1977 bytes, so the size of each node is 2010 bytes
(obviously the 2 keys are different).
- This passes and the DB has just one leaf page, no overflow pages, no branch
pages, and a depth of 1.
- If I add one byte to the values I insert (starting again from a blank DB),
then, instead of seeing 2 overflow pages, I get an error: MDB_PAGE_FULL.
- This clearly should not have happened.
- Here is some tracing:
add to leaf page 2 index 0, data size 48 key size 7 [74657374646200]
add to leaf page 3 index 0, data size 1978 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to branch page 5 index 0, data size 0 key size 0 [null]
add to branch page 5 index 1, data size 0 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to leaf page 4 index 0, data size 1978 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to leaf page 4 index 1, data size 1978 key size 33
[020202020202020202020202020202020202020202020202020202020202020202]
not enough room in page 4, got 1 ptrs
upper-lower = 2020 - 2 = 2016
node size = 2020
Looking at the code, I understand that there is a problem at line 9005:
    } else if (node_size + data->mv_size > mc->mc_txn->mt_env->me_nodemax) {
where me_nodemax is incorrect, as it does not take into account that some bytes
will be needed for the MAC (authentication code), whose size is in
env->me_esumsize.
me_nodemax is calculated at line 5349:
    env->me_nodemax = (((env->me_psize - PAGEHDRSZ) / MDB_MINKEYS) & -2)
        - sizeof(indx_t);
So I subtract me_esumsize with a "- env->me_esumsize" here:
    env->me_nodemax = (((env->me_psize - PAGEHDRSZ - env->me_esumsize)
        / MDB_MINKEYS) & -2)
        - sizeof(indx_t);
I also subtract it from me_maxfree_1pg in the line above, and from pmax at line
10435.
I do not know if my patch is correct, but it solves the issue.
Maybe there are other places in the code where me_esumsize should be subtracted
from the available size. For example, when calculating the number of overflow
pages in OVPAGES, it does not take me_esumsize into account, but I think that
is OK, because there is only one MAC for the entire set of OV pages, and there
is room for it in the first OV page.
See the attached proposed patch.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9596
Issue ID: 9596
Summary: Python test suite
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: build
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
The bash test suite is extremely limited, hard to write for, and slow. We can't
lose it, as it is also portable, but something should be introduced for
developers/CI on more modern systems to increase coverage.
A Python 3 seed for one is in development in MR!347.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8149
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs(a)openldap.org |hyc(a)openldap.org
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9786
Issue ID: 9786
Summary: liblber: missing export of ber_pvt_wsa_err2string
Product: OpenLDAP
Version: 2.6.1
Hardware: All
OS: Windows
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: tobias.junghans(a)veyon.io
Target Milestone: ---
When building (cross-compiling) OpenLDAP via GCC/mingw-w64, an undefined
reference to ber_pvt_wsa_err2string() is reported when libldap.dll is linked.
This can be fixed easily by adding ber_pvt_wsa_err2string() to
libraries/liblber/lber.map
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9982
Issue ID: 9982
Summary: Linker error when building with LDAP_CONNECTIONLESS
Product: OpenLDAP
Version: 2.6.3
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: build
Assignee: bugs(a)openldap.org
Reporter: invokesus+openldap(a)gmail.com
Target Milestone: ---
Created attachment 942
--> https://bugs.openldap.org/attachment.cgi?id=942&action=edit
Build log
I'm encountering the following linker error when building from the master
branch with LDAP_CONNECTIONLESS defined:
/nix/store/jbnmj9fljgnfyc1iswnrpfhlkpnnwiii-binutils-2.39/bin/ld:
./.libs/libldap.so: undefined reference to `ber_sockbuf_io_udp'
Seems to have been happening since
https://git.openldap.org/openldap/openldap/-/commit/4a87d7aad200aaa91cb0cb8….
I'm attaching the full build log, and will attach a patch to fix the error in a
follow-up update.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8070
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
See Also| |https://bugs.openldap.org/show_bug.cgi?id=9596
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8677
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs(a)openldap.org |hyc(a)openldap.org
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8677
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|IN_PROGRESS |RESOLVED
Resolution|--- |FIXED
--- Comment #4 from Quanah Gibson-Mount <quanah(a)openldap.org> ---
• 66edd345
by Howard Chu at 2023-11-14T17:02:18+00:00
ITS#8677 back-sock: return error for CONTINUE
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=5738
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bugs(a)openldap.org |hyc(a)openldap.org
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9909
Issue ID: 9909
Summary: slap* tools leak cn=config entries on shutdown
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
slap* tools set up their in-memory cn=config structures but cfb->cb_root is
never released on shutdown.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=10273
Issue ID: 10273
Summary: Unable to run multiple bitnami openldap containers
with common shared volume
Product: OpenLDAP
Version: 2.6.0
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: jvishwanath66(a)gmail.com
Target Milestone: ---
**Name and Version:**
openldap2.6
**What architecture are you using?:**
amd64
**What steps will reproduce the bug?**
- Add custom ldif files under the /ldifs directory and create another
container image named `localhost:32000/custom-openldap`
- create a common directory that will be mounted to all the ldap containers
(`/root/openldap`)
- Create multiple containers from that image, all mounted to the same directory
(`/root/openldap`), using the following command:
```
docker run -d -e BITNAMI_DEBUG="true" -e LDAP_ADMIN_USERNAME="superuser" -e
LDAP_BINDDN="cn=ldap_bind_user,ou=people,dc=example,dc=com" -e
LDAP_ENABLE_TLS="no" -e
LDAP_EXTRA_SCHEMAS="cosine,general-acl,my-permissions,my-roles,ppolicy,nis,inetorgperson"
-e LDAP_ROOT="dc=example,dc=com" -e LDAP_SKIP_DEFAULT_TREE="yes" -e
LDAP_URI="ldap://ldap-server-service.my-namespace.svc.cluster.local" -e
USER_DESCRIPTION_MAX_LEN="100" -e USER_FIRST_AND_LAST_NAME_MAX_LEN="100" -e
USER_NAME_MAX_LEN="100" -e LDAP_ADMIN_PASSWORD="admin123" -e
LDAP_READONLY_USER_PASSWORD="admin123" -e proxyBindPassword="" -v
/root/openldap:/bitnami/openldap localhost:32000/custom-openldap
```
- List the running containers using the docker ps command:
```
docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                NAMES
f77ef5455f5f   localhost:32000/custom-openldap   "/opt/bitnami/script…"   2 minutes ago    Up 2 minutes    1389/tcp, 1636/tcp   upbeat_raman
9cccd41f02d2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   17 minutes ago   Up 17 minutes   1389/tcp, 1636/tcp   nostalgic_antonelli
5434761c9281   localhost:32000/custom-openldap   "/opt/bitnami/script…"   23 minutes ago   Up 23 minutes   1389/tcp, 1636/tcp   objective_mayer
ca40ef1a68a2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   26 minutes ago   Up 26 minutes   1389/tcp, 1636/tcp   angry_margulis
```
- Execute the following ldapsearch command in all the containers
```
ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
```
**What is the expected behavior?**
The expected behavior is that ldapsearch works correctly in all the containers.
**What do you see instead?**
ldapsearch works in one container, whereas in the other containers we see the
following error:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
I wanted to know whether it is feasible/possible to use the same mount point
for multiple openldap containers.
**Additional information**
Following is the list of running containers:
```
$ docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                NAMES
f77ef5455f5f   localhost:32000/custom-openldap   "/opt/bitnami/script…"   2 minutes ago    Up 2 minutes    1389/tcp, 1636/tcp   upbeat_raman
9cccd41f02d2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   17 minutes ago   Up 17 minutes   1389/tcp, 1636/tcp   nostalgic_antonelli
5434761c9281   localhost:32000/custom-openldap   "/opt/bitnami/script…"   23 minutes ago   Up 23 minutes   1389/tcp, 1636/tcp   objective_mayer
ca40ef1a68a2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   26 minutes ago   Up 26 minutes   1389/tcp, 1636/tcp   angry_margulis
```
And following is the ldapsearch output on all the containers:
f77ef5455f5f
```
$ docker exec -it f77ef5455f5f bash
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
9cccd41f02d2:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
5434761c9281:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
# groups, example.com
dn: ou=groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: groups
```
- ca40ef1a68a2 (somehow the LDAP bind failed on this container; there seems to
be some environmental issue)
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D
"cn=superuser,dc=example,dc=com" -w admin123
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
```
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #13 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi Ondřej, your patch worked for me; the TLS handshake timeout was properly
applied in all (synchronous) combinations of:
* unreachable network / unresponsive service
* OpenSSL / GnuTLS
* ldaps:// / ldap:// + StartTLS
Thanks!
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9042
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|UNCONFIRMED |IN_PROGRESS
Ever confirmed|0 |1
--- Comment #2 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
https://git.openldap.org/openldap/openldap/-/merge_requests/728
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8047
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What |Removed |Added
----------------------------------------------------------------------------
Ever confirmed|0 |1
Status|UNCONFIRMED |IN_PROGRESS
Assignee|bugs(a)openldap.org |ondra(a)mistotebe.net
--- Comment #12 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
Hi Maxime/Allen, can you try MR!727 linked below and tell us if this fixes your
issues?
https://git.openldap.org/openldap/openldap/-/merge_requests/727
It would be helpful if people setting LDAP_BOOL_CONNECT_ASYNC could also
confirm this doesn't cause regressions for them.
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #11 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi,
I understand that fixing timeouts during the SSL handshake is not possible in
the current state of the SSL libraries, but I would like to report what seems
to me like a regression in OpenLDAP 2.5/2.6.
On my RHEL7 and RHEL8 systems (OpenLDAP 2.4 built with OpenSSL), I was somehow
not able to reproduce the issue mentioned here: NETWORK_TIMEOUT works for
ldaps:// URLs and TIMEOUT works for ldap:// URLs + StartTLS.
In OpenLDAP 2.4.49 + GnuTLS (Ubuntu Focal), NETWORK_TIMEOUT does not work for
ldaps:// URLs, but TIMEOUT works for StartTLS.
However, starting with 2.5 (Debian Bookworm, RHEL9, also reproduced with a
source build of OPENLDAP_REL_ENG_2_6), I observe a different behavior affecting
both GnuTLS and OpenSSL:
The following command enters the CPU loop reported in ITS#10141 and never times
out:
> ldapsearch -H ldaps://stuck-slapd -o network_timeout=5
The following command still times out as expected:
> ldapsearch -H ldap://stuck-slapd -Z -o timeout=5
To clarify, users of OpenLDAP + OpenSSL will start noticing a large CPU spike
that does not time out, instead of a clean timeout, when an LDAP server becomes
unreachable for non-TCP reasons. For instance, I discovered this issue while
investigating an outage following a storage issue.
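For reference, a rough libldap sketch of the two client-side knobs being
compared (the -o network_timeout and -o timeout options map onto these); the
host name is the same placeholder used in the commands above:

    #include <sys/time.h>
    #include <ldap.h>

    /* LDAP_OPT_NETWORK_TIMEOUT bounds the connect() phase (the ldaps://
     * case), LDAP_OPT_TIMEOUT bounds the synchronous operation (the
     * ldap:// + StartTLS case). */
    int main( void )
    {
        LDAP *ld;
        LDAPMessage *res = NULL;
        struct timeval tv = { 5, 0 };
        int rc;

        if ( ldap_initialize( &ld, "ldaps://stuck-slapd" ) != LDAP_SUCCESS )
            return 1;

        ldap_set_option( ld, LDAP_OPT_NETWORK_TIMEOUT, &tv );
        ldap_set_option( ld, LDAP_OPT_TIMEOUT, &tv );

        rc = ldap_search_ext_s( ld, "", LDAP_SCOPE_BASE, "(objectClass=*)",
                NULL, 0, NULL, NULL, NULL, 0, &res );
        if ( res ) ldap_msgfree( res );
        ldap_unbind_ext_s( ld, NULL, NULL );
        return rc;
    }

As reported above, in 2.5/2.6 the ldaps:// variant spins without ever hitting
the network timeout once the handshake gets stuck.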
--
You are receiving this mail because:
You are on the CC list for the issue.