https://bugs.openldap.org/show_bug.cgi?id=5738
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What     |Removed             |Added
---------|--------------------|--------------------
Assignee |bugs(a)openldap.org  |hyc(a)openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9909
Issue ID: 9909
Summary: slap* tools leak cn=config entries on shutdown
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
slap* tools set up their in-memory cn=config structures but cfb->cb_root is
never released on shutdown.
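Not part of the original report, just an illustration: a minimal, self-contained C sketch of the kind of recursive teardown the slap* tools would need to run over the tree rooted at cfb->cb_root before exiting. The struct and function names below (cfg_entry, cfg_entry_free_tree, cfg_shutdown) are hypothetical stand-ins, not the actual bconfig.c types.
```
/* Hypothetical illustration only: the real fix would walk the config entry
 * tree rooted at cfb->cb_root and release each entry during tool shutdown.
 * The names here are stand-ins, not slapd internals. */
#include <stdlib.h>

struct cfg_entry {
    struct cfg_entry *children;   /* first child entry */
    struct cfg_entry *sibling;    /* next sibling entry */
    void *attrs;                  /* entry payload */
};

/* Recursively free a config entry and everything below it. */
static void cfg_entry_free_tree(struct cfg_entry *e)
{
    while (e != NULL) {
        struct cfg_entry *next = e->sibling;
        cfg_entry_free_tree(e->children);
        free(e->attrs);
        free(e);
        e = next;
    }
}

/* Called once from the tool's shutdown path. */
void cfg_shutdown(struct cfg_entry **root)
{
    cfg_entry_free_tree(*root);
    *root = NULL;
}
```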
https://bugs.openldap.org/show_bug.cgi?id=10273
Issue ID: 10273
Summary: Unable to run multiple bitnami openldap containers
with common shared volume
Product: OpenLDAP
Version: 2.6.0
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: jvishwanath66(a)gmail.com
Target Milestone: ---
**Name and Version:**
openldap 2.6
**What architecture are you using?:**
amd64
**What steps will reproduce the bug?**
- Add custom ldif files under the /ldifs directory and build another container image named `localhost:32000/custom-openldap`
- Create a common directory (`/root/openldap`) that will be mounted into all the LDAP containers
- Start multiple containers mounting that same directory (`/root/openldap`) using the following command
```
docker run -d \
  -e BITNAMI_DEBUG="true" \
  -e LDAP_ADMIN_USERNAME="superuser" \
  -e LDAP_BINDDN="cn=ldap_bind_user,ou=people,dc=example,dc=com" \
  -e LDAP_ENABLE_TLS="no" \
  -e LDAP_EXTRA_SCHEMAS="cosine,general-acl,my-permissions,my-roles,ppolicy,nis,inetorgperson" \
  -e LDAP_ROOT="dc=example,dc=com" \
  -e LDAP_SKIP_DEFAULT_TREE="yes" \
  -e LDAP_URI="ldap://ldap-server-service.my-namespace.svc.cluster.local" \
  -e USER_DESCRIPTION_MAX_LEN="100" \
  -e USER_FIRST_AND_LAST_NAME_MAX_LEN="100" \
  -e USER_NAME_MAX_LEN="100" \
  -e LDAP_ADMIN_PASSWORD="admin123" \
  -e LDAP_READONLY_USER_PASSWORD="admin123" \
  -e proxyBindPassword="" \
  -v /root/openldap:/bitnami/openldap \
  localhost:32000/custom-openldap
```
- List the running containers using the `docker ps` command:
```
docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                NAMES
f77ef5455f5f   localhost:32000/custom-openldap   "/opt/bitnami/script…"   2 minutes ago    Up 2 minutes    1389/tcp, 1636/tcp   upbeat_raman
9cccd41f02d2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   17 minutes ago   Up 17 minutes   1389/tcp, 1636/tcp   nostalgic_antonelli
5434761c9281   localhost:32000/custom-openldap   "/opt/bitnami/script…"   23 minutes ago   Up 23 minutes   1389/tcp, 1636/tcp   objective_mayer
ca40ef1a68a2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   26 minutes ago   Up 26 minutes   1389/tcp, 1636/tcp   angry_margulis
```
- Execute the following ldapsearch command in all the containers
```
ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
```
**What is the expected behavior?**
ldapsearch should work correctly in all the containers.
**What do you see instead?**
ldapsearch works in one container, whereas in the other containers we see the following error:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
I would like to know whether it is feasible to use the same mount point for multiple OpenLDAP containers.
**Additional information**
Following is the ldapsearch output on each container:
f77ef5455f5f:
```
$ docker exec -it f77ef5455f5f bash
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
9cccd41f02d2:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
5434761c9281:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
# groups, example.com
dn: ou=groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: groups
```
ca40ef1a68a2 (LDAP bind failed on this container; there seems to be some environment issue):
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
```
https://bugs.openldap.org/show_bug.cgi?id=9952
Issue ID: 9952
Summary: Crash on exit with OpenSSL 3
Product: OpenLDAP
Version: 2.6.2
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: artur.zaprzala(a)gmail.com
Target Milestone: ---
A program using libldap will crash on exit after using an SSL connection.
How to reproduce on CentOS 9:
Uncomment the following lines in /etc/pki/tls/openssl.cnf:
[provider_sect]
legacy = legacy_sect
[legacy_sect]
activate = 1
Run the command (you must enter a valid LDAP server address):
python3 -c "import ldap; ldap.initialize('ldaps://<LDAP SERVER ADDRESS>').whoami_s()"
Another example (no server required):
python3 -c "import ctypes; ctypes.CDLL('libldap.so.2').ldap_pvt_tls_init_def_ctx(0)"
Results:
Segmentation fault (core dumped)
Backtrace from gdb:
Program received signal SIGSEGV, Segmentation fault.
#0  ___pthread_rwlock_rdlock (rwlock=0x0) at pthread_rwlock_rdlock.c:27
#1  0x00007ffff7c92f3d in CRYPTO_THREAD_read_lock (lock=<optimized out>) at crypto/threads_pthread.c:85
#2  0x00007ffff7c8b126 in ossl_lib_ctx_get_data (ctx=0x7ffff7eff540 <default_context_int.lto_priv>, index=1, meth=0x7ffff7eb8a00 <provider_store_method.lto_priv>) at crypto/context.c:398
#3  0x00007ffff7c98bea in get_provider_store (libctx=<optimized out>) at crypto/provider_core.c:334
#4  ossl_provider_deregister_child_cb (handle=0x5555555ed620) at crypto/provider_core.c:1752
#5  0x00007ffff7c8bf2f in ossl_provider_deinit_child (ctx=0x5555555d2650) at crypto/provider_child.c:279
#6  OSSL_LIB_CTX_free (ctx=0x5555555d2650) at crypto/context.c:283
#7  OSSL_LIB_CTX_free (ctx=0x5555555d2650) at crypto/context.c:276
#8  0x00007ffff7634af6 in legacy_teardown (provctx=0x5555555ee9f0) at providers/legacyprov.c:168
#9  0x00007ffff7c9901b in ossl_provider_teardown (prov=0x5555555ed620) at crypto/provider_core.c:1477
#10 ossl_provider_free (prov=0x5555555ed620) at crypto/provider_core.c:683
#11 0x00007ffff7c63956 in ossl_provider_free (prov=<optimized out>) at crypto/provider_core.c:668
#12 evp_cipher_free_int (cipher=0x555555916c10) at crypto/evp/evp_enc.c:1632
#13 EVP_CIPHER_free (cipher=0x555555916c10) at crypto/evp/evp_enc.c:1647
#14 0x00007ffff7a6bc1d in ssl_evp_cipher_free (cipher=0x555555916c10) at ssl/ssl_lib.c:5925
#15 ssl_evp_cipher_free (cipher=0x555555916c10) at ssl/ssl_lib.c:5915
#16 SSL_CTX_free (a=0x555555ec1020) at ssl/ssl_lib.c:3455
#17 SSL_CTX_free (a=0x555555ec1020) at ssl/ssl_lib.c:3392
#18 0x00007fffe95edb89 in ldap_int_tls_destroy (lo=0x7fffe9616000 <ldap_int_global_options>) at /usr/src/debug/openldap-2.6.2-1.el9_0.x86_64/openldap-2.6.2/libraries/libldap/tls2.c:104
#19 0x00007ffff7fd100b in _dl_fini () at dl-fini.c:138
#20 0x00007ffff7873475 in __run_exit_handlers (status=0, listp=0x7ffff7a11658 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:113
#21 0x00007ffff78735f0 in __GI_exit (status=<optimized out>) at exit.c:143
#22 0x00007ffff785be57 in __libc_start_call_main (main=main@entry=0x55555556aa20 <main>, argc=argc@entry=4, argv=argv@entry=0x7fffffffe2b8) at ../sysdeps/nptl/libc_start_call_main.h:74
#23 0x00007ffff785befc in __libc_start_main_impl (main=0x55555556aa20 <main>, argc=4, argv=0x7fffffffe2b8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe2a8) at ../csu/libc-start.c:409
#24 0x000055555556b575 in _start ()
The problem is that ldap_int_tls_destroy() is called after libssl has already been cleaned up.
On program exit, default_context_int is cleaned up first (OPENSSL_cleanup() was registered with atexit()):
#0  ossl_lib_ctx_default_deinit () at crypto/context.c:196
#1  OPENSSL_cleanup () at crypto/init.c:424
#2  OPENSSL_cleanup () at crypto/init.c:338
#3  0x00007ffff7873475 in __run_exit_handlers (status=0, listp=0x7ffff7a11658 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:113
#4  0x00007ffff78735f0 in __GI_exit (status=<optimized out>) at exit.c:143
#5  0x00007ffff785be57 in __libc_start_call_main (main=main@entry=0x55555556aa20 <main>, argc=argc@entry=4, argv=argv@entry=0x7fffffffe2c8) at ../sysdeps/nptl/libc_start_call_main.h:74
#6  0x00007ffff785befc in __libc_start_main_impl (main=0x55555556aa20 <main>, argc=4, argv=0x7fffffffe2c8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fffffffe2b8) at ../csu/libc-start.c:409
#7  0x000055555556b575 in _start ()
Then ossl_lib_ctx_get_data() tries to use default_context_int.lock, which is NULL. ldap_int_tls_destroy() is called by ldap_int_destroy_global_options(), which is registered with "__attribute__ ((destructor))".
It seems that shared library destructors are always called before functions registered with atexit().
A solution may be to modify libraries/libldap/init.c to use atexit() instead of "__attribute__ ((destructor))". The atexit() manual page says: "Since glibc 2.2.3, atexit() can be used within a shared library to establish functions that are called when the shared library is unloaded."
Functions registered with atexit() are called in the reverse order of their registration, so libssl must be initialized before libldap. If the order is wrong, libldap should detect it somehow and exit with abort().
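A rough sketch (an assumption on my part, not an actual patch) of what the suggested change in libraries/libldap/init.c could look like: register the existing cleanup entry point with atexit() from the one-time initialization path instead of relying on __attribute__((destructor)). Only ldap_int_destroy_global_options() is taken from the report; the surrounding function is a simplified stand-in.
```
/* Hypothetical sketch only: register libldap's global cleanup with atexit()
 * so it runs before OPENSSL_cleanup(), instead of from an ELF destructor
 * that _dl_fini() runs after the atexit() handlers. */
#include <stdlib.h>

extern void ldap_int_destroy_global_options(void);

static void ldap_int_atexit_cleanup(void)
{
    ldap_int_destroy_global_options();  /* ends up calling ldap_int_tls_destroy() */
}

void ldap_int_initialize_sketch(void)   /* simplified stand-in for the real init */
{
    static int registered;

    if (!registered) {
        /* atexit() handlers run in reverse order of registration, so this
         * cleanup runs before OPENSSL_cleanup() only if OpenSSL registered
         * its handler first (i.e. libssl initialized before libldap).
         * If the order is wrong, the library should detect it and abort(),
         * as suggested above. */
        if (atexit(ldap_int_atexit_cleanup) == 0)
            registered = 1;
    }
}
```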
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #13 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi Ondřej, your patch worked for me: the TLS handshake timeout was properly applied in all (synchronous) combinations of:
* unreachable network / unresponsive service
* OpenSSL / GnuTLS
* ldaps:// / ldap:// + StartTLS
Thanks!
https://bugs.openldap.org/show_bug.cgi?id=10271
Issue ID: 10271
Summary: EINTR is handled as LDAP_SERVER_DOWN in socket
operation in ldap client APIs
Product: OpenLDAP
Version: 2.5.18
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: volan.shu(a)nokia.com
Target Milestone: ---
If the OS delivers EINTR during a socket-related operation, the LDAP client API returns LDAP_SERVER_DOWN, which is not correct. In this case, I suppose the client library needs to retry the socket operation, as in the sketch below.
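To illustrate the requested behavior, here is a minimal, self-contained C sketch (mine, not taken from the libldap sources) of retrying a socket call when it is interrupted by a signal, rather than treating EINTR as a fatal "server down" condition.
```
/* Illustrative sketch only: retry a socket write when interrupted by a
 * signal instead of mapping EINTR to LDAP_SERVER_DOWN. */
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t send_retry_eintr(int sd, const void *buf, size_t len)
{
    ssize_t n;

    do {
        n = send(sd, buf, len, 0);
    } while (n < 0 && errno == EINTR);   /* interrupted: try again */

    return n;   /* the caller maps a real error (n < 0) to an LDAP result code */
}
```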
https://bugs.openldap.org/show_bug.cgi?id=9042
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What           |Removed      |Added
---------------|-------------|------------
Status         |UNCONFIRMED  |IN_PROGRESS
Ever confirmed |0            |1
--- Comment #2 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
https://git.openldap.org/openldap/openldap/-/merge_requests/728
https://bugs.openldap.org/show_bug.cgi?id=8047
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What           |Removed             |Added
---------------|--------------------|---------------------
Ever confirmed |0                   |1
Status         |UNCONFIRMED         |IN_PROGRESS
Assignee       |bugs(a)openldap.org  |ondra(a)mistotebe.net
--- Comment #12 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
Hi Maxime/Allen, can you try MR!727 linked below and tell us if it fixes your issues?
https://git.openldap.org/openldap/openldap/-/merge_requests/727
It would be helpful if people setting LDAP_BOOL_CONNECT_ASYNC could also
confirm this doesn't cause regressions for them.
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #11 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi,
I understand that fixing timeouts during the SSL handshake is not possible in the current state of the SSL libraries, but I would like to report what looks to me like a regression in OpenLDAP 2.5/2.6.
On my RHEL7 and RHEL8 systems (OpenLDAP 2.4 built with OpenSSL), I was not able to reproduce the issue mentioned here: NETWORK_TIMEOUT works for ldaps:// URLs and TIMEOUT works for ldap:// URLs + StartTLS.
In OpenLDAP 2.4.49 + GnuTLS (Ubuntu Focal), NETWORK_TIMEOUT does not work for ldaps:// URLs, but TIMEOUT works for StartTLS.
However, starting with 2.5 (Debian Bookworm, RHEL9, and also reproduced with a source build of OPENLDAP_REL_ENG_2_6), I observe a different behavior, affecting both GnuTLS and OpenSSL.
The following command enters the CPU loop reported in ITS#10141 and never times out:
> ldapsearch -H ldaps://stuck-slapd -o network_timeout=5
The following command still times out as expected:
> ldapsearch -H ldap://stuck-slapd -Z -o timeout=5
To clarify, users of OpenLDAP + OpenSSL will start noticing a large CPU spike that never times out, instead of a clean timeout, when an LDAP server becomes unreachable for non-TCP reasons. For instance, I discovered this issue while investigating an outage caused by a storage problem.
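For context on the two settings being compared, this is a minimal C sketch (mine, not from the report) of how a client sets them through the standard libldap API; the ldapsearch -o network_timeout/timeout options correspond to LDAP_OPT_NETWORK_TIMEOUT and LDAP_OPT_TIMEOUT.
```
/* Minimal sketch showing the two client-side timeout options discussed
 * above, set through the standard libldap API. */
#include <ldap.h>

int search_with_timeouts(const char *uri)
{
    LDAP *ld;
    struct timeval tv = { 5, 0 };   /* 5 seconds */
    int rc;

    rc = ldap_initialize(&ld, uri);
    if (rc != LDAP_SUCCESS)
        return rc;

    /* Applies to connect(); per this report, it no longer covers the TLS
     * handshake for ldaps:// URLs in 2.5/2.6. */
    ldap_set_option(ld, LDAP_OPT_NETWORK_TIMEOUT, &tv);

    /* Applies to waiting for results of synchronous operations. */
    ldap_set_option(ld, LDAP_OPT_TIMEOUT, &tv);

    rc = ldap_start_tls_s(ld, NULL, NULL);   /* for plain ldap:// URIs + StartTLS */

    ldap_unbind_ext_s(ld, NULL, NULL);
    return rc;
}
```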
https://bugs.openldap.org/show_bug.cgi?id=9881
Issue ID: 9881
Summary: Ability to track last authentication for database
objects
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
For simple binds, we have the ability to track the last successful authentication via the lastbind functionality (pwdLastSuccess attribute). However, this doesn't allow one to see when an object that exists in a database last authenticated via SASL.
It would be useful to add similar functionality for SASL binds.
This information makes it possible to tell whether an object (generally a user or system account) is still being actively used for authentication.
Obviously, if a SASL identity is mapped directly to an identity that doesn't exist in the underlying database, it cannot be tracked.