https://bugs.openldap.org/show_bug.cgi?id=9909
Issue ID: 9909
Summary: slap* tools leak cn=config entries on shutdown
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
slap* tools set up their in-memory cn=config structures but cfb->cb_root is
never released on shutdown.
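For orientation, here is a minimal sketch of what releasing that tree could look like, assuming the CfEntryInfo layout in bconfig.c (ce_kids, ce_sibs, ce_entry); the helper name is hypothetical and this is only an illustration, not the actual fix:
```
/* Hypothetical helper: recursively release the in-memory cn=config tree
 * rooted at cfb->cb_root when a slap* tool shuts down.  Field names assume
 * the CfEntryInfo layout in bconfig.c; illustration only. */
static void
config_tree_free( CfEntryInfo *ce )
{
    while ( ce ) {
        CfEntryInfo *next = ce->ce_sibs;

        config_tree_free( ce->ce_kids );   /* free children first */
        if ( ce->ce_entry )
            entry_free( ce->ce_entry );    /* release the cached Entry */
        ch_free( ce );
        ce = next;
    }
}

/* e.g. from the tool shutdown path:
 *     config_tree_free( cfb->cb_root );
 *     cfb->cb_root = NULL;
 */
```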
https://bugs.openldap.org/show_bug.cgi?id=10273
Issue ID: 10273
Summary: Unable to run multiple bitnami openldap containers
with common shared volume
Product: OpenLDAP
Version: 2.6.0
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: jvishwanath66(a)gmail.com
Target Milestone: ---
**Name and Version:**
openldap 2.6
**What architecture are you using?:**
amd64
**What steps will reproduce the bug?**
- Add custom LDIF files under the /ldifs directory and build another
container image named `localhost:32000/custom-openldap`
- Create a common directory (`/root/openldap`) that will be mounted into all
the LDAP containers
- Create multiple containers from that image, all mounting the same directory
(`/root/openldap`), using the following command:
```
docker run -d \
  -e BITNAMI_DEBUG="true" \
  -e LDAP_ADMIN_USERNAME="superuser" \
  -e LDAP_BINDDN="cn=ldap_bind_user,ou=people,dc=example,dc=com" \
  -e LDAP_ENABLE_TLS="no" \
  -e LDAP_EXTRA_SCHEMAS="cosine,general-acl,my-permissions,my-roles,ppolicy,nis,inetorgperson" \
  -e LDAP_ROOT="dc=example,dc=com" \
  -e LDAP_SKIP_DEFAULT_TREE="yes" \
  -e LDAP_URI="ldap://ldap-server-service.my-namespace.svc.cluster.local" \
  -e USER_DESCRIPTION_MAX_LEN="100" \
  -e USER_FIRST_AND_LAST_NAME_MAX_LEN="100" \
  -e USER_NAME_MAX_LEN="100" \
  -e LDAP_ADMIN_PASSWORD="admin123" \
  -e LDAP_READONLY_USER_PASSWORD="admin123" \
  -e proxyBindPassword="" \
  -v /root/openldap:/bitnami/openldap \
  localhost:32000/custom-openldap
```
- List the running containers using the `docker ps` command:
```
docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                NAMES
f77ef5455f5f   localhost:32000/custom-openldap   "/opt/bitnami/script…"   2 minutes ago    Up 2 minutes    1389/tcp, 1636/tcp   upbeat_raman
9cccd41f02d2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   17 minutes ago   Up 17 minutes   1389/tcp, 1636/tcp   nostalgic_antonelli
5434761c9281   localhost:32000/custom-openldap   "/opt/bitnami/script…"   23 minutes ago   Up 23 minutes   1389/tcp, 1636/tcp   objective_mayer
ca40ef1a68a2   localhost:32000/custom-openldap   "/opt/bitnami/script…"   26 minutes ago   Up 26 minutes   1389/tcp, 1636/tcp   angry_margulis
```
- Execute the following ldapsearch command in each container:
```
ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
```
**What is the expected behavior?**
ldapsearch should work correctly in all the containers.
**What do you see instead?**
ldapsearch works in one container, whereas in the other containers we see the
following error:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
I would like to know whether it is feasible to use the same mount point for
multiple OpenLDAP containers.
**Additional information**
The following is the ldapsearch output from each container:
f77ef5455f5f:
```
$ docker exec -it f77ef5455f5f bash
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
9cccd41f02d2:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 80 Other (e.g., implementation specific) error
text: internal error
# numResponses: 1
```
5434761c9281:
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
# extended LDIF
#
# LDAPv3
# base <dc=example, dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: top
objectClass: domain
dc: example
# groups, example.com
dn: ou=groups,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: groups
```
ca40ef1a68a2 (the LDAP bind failed on this container; there appears to be an
environmental issue):
```
$ ldapsearch -H ldap://localhost:1389 -b "dc=example, dc=com" -D "cn=superuser,dc=example,dc=com" -w admin123
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
```
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #13 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi Ondřej, your patch worked for me, TLS handshake timeout was properly applied
in all (synchronous) combinations of:
* unreachable network / unresponsive service
* OpenSSL / GnuTLS
* ldaps:// / ldap:// + StartTLS
Thanks!
https://bugs.openldap.org/show_bug.cgi?id=9042
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|UNCONFIRMED |IN_PROGRESS
Ever confirmed|0 |1
--- Comment #2 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
https://git.openldap.org/openldap/openldap/-/merge_requests/728
https://bugs.openldap.org/show_bug.cgi?id=8047
Ondřej Kuzník <ondra(a)mistotebe.net> changed:
What |Removed |Added
----------------------------------------------------------------------------
Ever confirmed|0 |1
Status|UNCONFIRMED |IN_PROGRESS
Assignee|bugs(a)openldap.org |ondra(a)mistotebe.net
--- Comment #12 from Ondřej Kuzník <ondra(a)mistotebe.net> ---
Hi Maxime/Allen, can you try MR !727, linked below, and tell us if this fixes
your issues?
https://git.openldap.org/openldap/openldap/-/merge_requests/727
It would be helpful if people setting LDAP_BOOL_CONNECT_ASYNC could also
confirm this doesn't cause regressions for them.
https://bugs.openldap.org/show_bug.cgi?id=8047
--- Comment #11 from maxime.besson(a)worteks.com <maxime.besson(a)worteks.com> ---
Hi,
I understand that fixing timeouts during the SSL handshake is not possible in
the current state of the SSL libraries, but I would like to report what seems
to me like a regression in OpenLDAP 2.5/2.6.
On my RHEL7 and RHEL8 systems (OpenLDAP 2.4 built with OpenSSL), I was somehow
not able to reproduce the issue mentioned here: NETWORK_TIMEOUT works for
ldaps:// URLs and TIMEOUT works for ldap:// URLs + StartTLS.
In OpenLDAP 2.4.49 + GnuTLS (Ubuntu Focal), NETWORK_TIMEOUT does not work for
ldaps:// URLs, but TIMEOUT works for StartTLS.
However, starting with 2.5 (Debian Bookworm, RHEL9, also reproduced with a
source build of OPENLDAP_REL_ENG_2_6), I observe a different behavior,
affecting both GnuTLS and OpenSSL:
The following command enters the CPU loop reported in ITS#10141 and never
times out:
> ldapsearch -H ldaps://stuck-slapd -o network_timeout=5
The following command still times out as expected:
> ldapsearch -H ldap://stuck-slapd -Z -o timeout=5
To clarify, users of OpenLDAP + OpenSSL will start noticing a large CPU spike
that does not time out, instead of a clean timeout, when an LDAP server
becomes unreachable for non-TCP reasons. For instance, I discovered this issue
while investigating an outage following a storage issue.
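For reference, a minimal libldap sketch of the two options the ldapsearch flags above correspond to; this is illustrative only and not part of the report:
```
/* Illustrative libldap snippet: the two timeouts exercised above via
 * ldapsearch -o network_timeout / -o timeout. */
#include <sys/time.h>
#include <ldap.h>

int
open_with_timeouts( LDAP **ldp, const char *uri )
{
    struct timeval tv = { 5, 0 };
    int rc = ldap_initialize( ldp, uri );

    if ( rc != LDAP_SUCCESS )
        return rc;

    /* connect-phase timeout: per the report, not honored once the TLS
     * handshake loop is entered with ldaps:// on 2.5/2.6 */
    ldap_set_option( *ldp, LDAP_OPT_NETWORK_TIMEOUT, &tv );

    /* per-operation timeout: still effective with ldap:// + StartTLS */
    ldap_set_option( *ldp, LDAP_OPT_TIMEOUT, &tv );

    return LDAP_SUCCESS;
}
```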
https://bugs.openldap.org/show_bug.cgi?id=9220
Bug ID: 9220
Summary: Rewrite Bind and Exop result handling
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Bind and Exop result handling needs a rewrite so it is no longer a special case
for overlays.
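For context, a rough sketch of the generic response-callback hook that overlays rely on for other operations; the point of this issue is that Bind and Exop results are not delivered through result handling this uniformly, so overlays must special-case them (illustration only):
```
/* The usual overlay response hook (slap_callback / SlapReply).  Bind and
 * Extended operation results currently need special-casing rather than
 * flowing through this path uniformly. */
static int
my_response( Operation *op, SlapReply *rs )
{
    if ( rs->sr_type == REP_RESULT ) {
        /* inspect or adjust rs->sr_err, rs->sr_text, ... */
    }
    return SLAP_CB_CONTINUE;    /* hand off to the next callback */
}
```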
https://bugs.openldap.org/show_bug.cgi?id=9881
Issue ID: 9881
Summary: Ability to track last authentication for database
objects
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
For simple binds, we have the ability to track the last successful
authentication via the lastbind functionality (the pwdLastSuccess attribute).
However, this doesn't allow one to see when an object that exists in a
database last authenticated via SASL.
It would be useful to add similar functionality for SASL binds.
This information can be useful for telling whether an object (generally users,
system accounts, etc.) is still being actively authenticated to.
Obviously, if something is directly mapped to an identity that doesn't exist
in the underlying DB, it cannot be tracked.
https://bugs.openldap.org/show_bug.cgi?id=9640
Issue ID: 9640
Summary: ACL privilege for MOD_INCREMENT
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: michael(a)stroeder.com
Target Milestone: ---
I'm using LDAP write operations with MOD_INCREMENT and the pre-read control
for uidNumber/gidNumber generation.
I'd like to limit write access to an Integer attribute "nextID" to
MOD_INCREMENT, ideally even restricting the de-/increment value.
(Uniqueness is achieved with slapo-unique anyway, but I'd still like to avoid
users messing with this attribute.)
IMHO the ideal solution would be a new privilege "i".
Example limiting write access to incrementing by one, while also granting read
access for use with the pre-read control:
access to attrs=nextID val=1
    by group=... =ri
Example allowing decrementing by two, without read access:
access to attrs=nextID val=-2
    by group=... =i
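For reference, a minimal libldap sketch of the increment write being described (RFC 4525 Modify-Increment); the DN is made up and the pre-read control is omitted for brevity:
```
/* Illustrative only: an RFC 4525 increment of nextID, the kind of write
 * the proposed "=i" privilege would permit while denying ordinary
 * modifications.  The DN is hypothetical. */
#include <ldap.h>

int
increment_next_id( LDAP *ld )
{
    char *vals[] = { "1", NULL };
    LDAPMod inc;
    LDAPMod *mods[2];

    inc.mod_op = LDAP_MOD_INCREMENT;
    inc.mod_type = "nextID";
    inc.mod_values = vals;
    mods[0] = &inc;
    mods[1] = NULL;

    return ldap_modify_ext_s( ld,
        "cn=nextID,ou=sequences,dc=example,dc=com",  /* hypothetical DN */
        mods, NULL, NULL );
}
```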
https://bugs.openldap.org/show_bug.cgi?id=9356
Issue ID: 9356
Summary: Add list of peerSIDs to consumer cookie to reduce
cross traffic
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
If we add a list of peerSIDs to the cookie, each consumer can tell the
providers who else it talks to, and the provider can then omit sending that
consumer updates originating from those peers.
Some special handling is needed if a connection dies.
If a consumer loses one of its peer connections and is still not connected
after N retries, it should send a new cookie to its remaining peers saying
"here's my new peer list", with the missing one removed. Likewise, if a retry
eventually connects again, it can send a new cookie again.
Make that peer-list reset configurable in the syncrepl config stanza. This
lets the admin account for the knowledge that some links may be more or less
stable than others.
The idea here is that if one of your other peers can still see the missing
peer, they can start routing its updates to you again.
When its peer list changes, the consumer should abandon all existing persist
sessions and send a new sync search with the new cookie to all remaining
peers.
On the consumer side, this also means adding the SID of a given provider to
the syncrepl stanza, to save having to discover the peer SID.
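To make the filtering idea concrete, a toy sketch (not OpenLDAP code) of the decision a provider could make once a consumer has advertised its peer-SID list:
```
/* Toy illustration of the proposal: given the peer-SID list a consumer
 * advertised in its cookie, the provider skips updates originating from
 * one of those peers, since the consumer already receives them directly. */
#include <stddef.h>

static int
should_forward( int originating_sid, const int *consumer_peer_sids, size_t n )
{
    size_t i;

    for ( i = 0; i < n; i++ ) {
        if ( consumer_peer_sids[i] == originating_sid )
            return 0;   /* covered by a direct peer link; omit */
    }
    return 1;   /* not covered; forward the update */
}
```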