https://bugs.openldap.org/show_bug.cgi?id=10205
Issue ID: 10205
Summary: SSL handshake blocks forever in async mode if server
inaccessible
Product: OpenLDAP
Version: 2.5.17
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: regtube(a)hotmail.com
Target Milestone: ---
When the ldaps:// scheme is used to connect to a currently inaccessible server
with the LDAP_OPT_CONNECT_ASYNC and LDAP_OPT_NETWORK_TIMEOUT options set, the
library blocks forever in SSL_connect.
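A minimal reproduction sketch (the bind DN, host, and single-retry pattern are
illustrative; error handling is omitted):

    #include <sys/time.h>
    #include <ldap.h>

    int main( void )
    {
        LDAP *ld = NULL;
        struct timeval tv = { 30, 0 };  /* matches "tm: 30" in the trace below */
        int version = LDAP_VERSION3;
        int rc;

        ldap_initialize( &ld, "ldaps://winserv.test.net:636" );  /* unreachable host */
        ldap_set_option( ld, LDAP_OPT_PROTOCOL_VERSION, &version );
        ldap_set_option( ld, LDAP_OPT_NETWORK_TIMEOUT, &tv );
        ldap_set_option( ld, LDAP_OPT_CONNECT_ASYNC, LDAP_OPT_ON );

        /* first attempt returns LDAP_X_CONNECTING (-18), as expected in async mode */
        rc = ldap_sasl_bind_s( ld, "cn=admin,dc=test,dc=net", LDAP_SASL_SIMPLE,
            NULL, NULL, NULL, NULL );

        /* the retry, issued once the socket is writable, never returns:
         * it hangs inside SSL_connect on the now-blocking socket */
        rc = ldap_sasl_bind_s( ld, "cn=admin,dc=test,dc=net", LDAP_SASL_SIMPLE,
            NULL, NULL, NULL, NULL );
        return rc;
    }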
Here is a trace:
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP winserv.test.net:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 192.168.56.2:636
ldap_pvt_connect: fd: 3 tm: 30 async: -1
ldap_ndelay_on: 3
attempting to connect:
connect errno: 115
ldap_int_poll: fd: 3 tm: 0
ldap_err2string
[2024-04-25 15:41:27.112] [error] [:1] bind(): Connecting (X)
[2024-04-25 15:41:27.112] [error] [:1] err: -18
ldap_sasl_bind
ldap_send_initial_request
ldap_int_poll: fd: 3 tm: 0
ldap_is_sock_ready: 3
ldap_ndelay_off: 3
TLS trace: SSL_connect:before SSL initialization
TLS trace: SSL_connect:SSLv3/TLS write client hello
This appears to happen because non-blocking mode is cleared from the socket
(ldap_ndelay_off) after the first poll for writability, and it is never
restored before the TLS connect is attempted, because of a check that assumes
non-blocking mode is still set in async mode:
    if ( !async ) {
        /* if async, this has already been set */
        ber_sockbuf_ctrl( sb, LBER_SB_OPT_SET_NONBLOCK, (void*)1 );
    }
while in fact it has already been cleared by that point.
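One possible adjustment, shown only as an untested sketch: re-assert
non-blocking mode before the handshake instead of relying on the async
assumption, since ldap_int_poll()/ldap_ndelay_off() may have cleared it by
then:

    /* sketch only: always (re)set non-blocking mode before the TLS
     * handshake rather than assuming async mode left it enabled */
    ber_sockbuf_ctrl( sb, LBER_SB_OPT_SET_NONBLOCK, (void*)1 );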
https://bugs.openldap.org/show_bug.cgi?id=10162
Issue ID: 10162
Summary: Fix for binary attributes data corruption in back-sql
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: dex.tracers(a)gmail.com
Target Milestone: ---
Created attachment 1006
--> https://bugs.openldap.org/attachment.cgi?id=1006&action=edit
Fix for binary attributes corruption in back-sql
I've configured slapd to use back-sql (MariaDB through ODBC) and observed
corruption when retrieving BINARY data from the database. The attribute length
was reported correctly, but only the first 16384 bytes of the value were
correct; beyond that point the value contained junk (usually filled with
AAAAAAAA and fragments of other attributes' data from memory).
While debugging, I noticed that:
- MAX_ATTR_LEN (16384 bytes) is used as the buffer length for BINARY columns
when SQLBindCol is called inside the backsql_BindRowAsStrings_x function
- after SQLFetch, the data in row->cols[i] is filled only up to the specified
MAX_ATTR_LEN
- after SQLFetch, the correct data size (greater than MAX_ATTR_LEN) is reported
in row->value_len
My assumption is that slapd allocates the buffer (row->cols[i]) and fills it
with at most MAX_ATTR_LEN bytes of data, but when forming the actual attribute
value it uses the length from row->value_len, so everything between offset
16384 and row->value_len is just junk from memory (uninitialized bytes or
leftovers from other variables).
After some investigation, I found out that:
- for BINARY or variable-length fields, SQLGetData should be used
- SQLGetData supports a chunked mode (if the length is unknown) or a full read
if the length is known
- it can be used together with SQLBindCol, after SQLFetch (!)
Since the correct data length is available in row->value_len, I added code to
the backsql_get_attr_vals() function that overwrites the corrupted buffer with
the correct data by issuing an SQLGetData request, and it worked: binary data
was retrieved and returned over LDAP correctly (see the sketch below).
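For review purposes, the shape of that re-read is roughly the following. This
is only a sketch, not the attached patch; the handle, column, and length
parameters are illustrative:

    /* Sketch only: after SQLFetch() has reported the true length of a
     * BINARY column, re-read the whole value with SQLGetData() instead of
     * trusting the MAX_ATTR_LEN-bound buffer that SQLBindCol() filled.
     * Requires <sql.h>/<sqlext.h> and <stdlib.h>, and assumes the driver
     * allows SQLGetData() on an already-bound column (SQL_GD_BOUND),
     * which is what the report relies on. */
    static char *
    fetch_full_binary( SQLHSTMT sth, SQLUSMALLINT colno /* 1-based */,
        SQLLEN full_len /* e.g. row->value_len[ i ] */ )
    {
        SQLLEN  ind = 0;
        char    *buf = malloc( (size_t)full_len );

        if ( buf == NULL )
            return NULL;
        if ( !SQL_SUCCEEDED( SQLGetData( sth, colno, SQL_C_BINARY,
                buf, full_len, &ind ) ) ) {
            free( buf );
            return NULL;
        }
        return buf;     /* used in place of row->cols[ i ] */
    }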
My current concerns / help needed - I'm not very familiar with the memory
allocation/deallocation mechanisms, so I'm afraid that mentioned change can
lead to memory corruption (so far not observed).
Please review attached patch (testing was done on OPENLDAP_REL_ENG_2_5_13, and
applied on the master branch for easier review/application).
https://bugs.openldap.org/show_bug.cgi?id=10020
Issue ID: 10020
Summary: dynlist's @groupOfUniqueNames is considered only for
the first configuration line
Product: OpenLDAP
Version: 2.5.13
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: msl(a)touk.pl
Target Milestone: ---
If we consider the following configuration of dynlist:
{0}toukPerson labeledURI uniqueMember+memberOf@groupOfUniqueNames
{1}groupOfURLs memberURL uniqueMember+dgMemberOf@groupOfUniqueNames
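For reference, the same attrset pairs expressed as cn=config LDIF would look
roughly like this (the overlay and database DNs are illustrative):

    dn: olcOverlay={0}dynlist,olcDatabase={1}mdb,cn=config
    objectClass: olcOverlayConfig
    objectClass: olcDynamicList
    olcOverlay: {0}dynlist
    olcDynListAttrSet: {0}toukPerson labeledURI uniqueMember+memberOf@groupOfUniqueNames
    olcDynListAttrSet: {1}groupOfURLs memberURL uniqueMember+dgMemberOf@groupOfUniqueNames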
The {0} entry correctly populates memberOf relative to static group
membership.
The {1} entry correctly produces dgMemberOf values for dynamic group membership
(based on the memberURL query), but it does not populate static membership IF
the {0} entry is present in the configuration. IF I remove {0} from the dynlist
configuration, or remove the @groupOfUniqueNames part from that configuration
line, then both dynamic and static memberships are populated correctly for {1}.
The effects on some user entry are as follows.
If both {0} and {1} are present, {1} produces only dynamic groups:
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
If both {0} and {1} are present and @groupOfUniqueNames is removed from {0},
{1} produces static+dynamic groups:
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
If only {1} is present, {1} produces static+dynamic groups:
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
For completeness, if only {0} is present:
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
This is the behavior I would expect for the first case ({0} and {1} both
present):
memberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
memberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=dyntouk,ou=dyntest,ou=group,dc=touk,dc=pl
dgMemberOf: cn=adm,ou=touk,ou=group,dc=touk,dc=pl
dgMemberOf: cn=touk,ou=touk,ou=group,dc=touk,dc=pl
https://bugs.openldap.org/show_bug.cgi?id=9934
Issue ID: 9934
Summary: slapd-config(5) should document how to store
certificates for slapd usage
Product: OpenLDAP
Version: 2.5.13
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: documentation
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Commit 7b41feed83b expanded the ability of cn=config to save the certificates
used for TLS by slapd directly in the config database. However, the
documentation for the new parameters was never added to the slapd-config(5) man
page:

    olcTLSCACertificate
    olcTLSCertificate
    olcTLSCertificateKey
https://bugs.openldap.org/show_bug.cgi?id=10106
Issue ID: 10106
Summary: Add organization to web list of OpenLDAP support
providers
Product: website
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: website
Assignee: bugs(a)openldap.org
Reporter: sudo(a)migrateq.io
Target Milestone: ---
Hello! This request is being opened as suggested by Quanah Gibson-Mount.
Could you please add Migrateq to your OpenLDAP Support page on
https://openldap.org/support
Company: Migrateq Inc.
Website: https://migrateq.io/support/tech/openldap
Migrateq provides migrations, integrations and advanced 24/7/365 technical
support for OpenLDAP and most Linux and Open Source Software.
Thank you =)
Richard
https://bugs.openldap.org/show_bug.cgi?id=10141
Issue ID: 10141
Summary: 100% CPU consumption with ldap_int_tls_connect
Product: OpenLDAP
Version: 2.6.3
Hardware: Other
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: vivekanand754(a)gmail.com
Target Milestone: ---
While making a secure LDAP connection, I'm seeing that the connection sometimes
gets stuck in a read loop when it is unable to reach Active Directory:
~ # strace -p 15049
strace: Process 15049 attached
read(3, 0x55ef720bda53, 5) = -1 EAGAIN (Resource temporarily unavailable)
read(3, 0x55ef720bda53, 5) = -1 EAGAIN (Resource temporarily unavailable)
.. ..
.. ..
After adding some logging, I can see that the ldap_int_tls_start function in
openldap-2.6.3/libraries/libldap/tls2.c calls ldap_int_tls_connect in a while
loop. The loop retries the connect continuously until it succeeds
(ti_session_connect returns 0), and thus consumes 100% CPU during that time.
Is there any known issue?
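For illustration only (this is not the libldap code): retrying a non-blocking
TLS handshake without waiting on the socket spins on EAGAIN exactly as in the
strace above, while polling for the direction OpenSSL reports avoids the busy
loop. A sketch of the latter pattern:

    #include <openssl/ssl.h>
    #include <poll.h>

    /* General pattern, not the libldap implementation: wait with poll()
     * for the readiness direction SSL_get_error() reports instead of
     * retrying SSL_connect() in a tight loop. */
    static int tls_handshake( SSL *ssl, int fd, int timeout_ms )
    {
        for ( ;; ) {
            int rc = SSL_connect( ssl );
            if ( rc == 1 )
                return 0;                       /* handshake done */

            switch ( SSL_get_error( ssl, rc ) ) {
            case SSL_ERROR_WANT_READ: {
                struct pollfd pfd = { fd, POLLIN, 0 };
                if ( poll( &pfd, 1, timeout_ms ) <= 0 )
                    return -1;                  /* timeout or error */
                break;
            }
            case SSL_ERROR_WANT_WRITE: {
                struct pollfd pfd = { fd, POLLOUT, 0 };
                if ( poll( &pfd, 1, timeout_ms ) <= 0 )
                    return -1;
                break;
            }
            default:
                return -1;                      /* fatal handshake error */
            }
        }
    }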
https://bugs.openldap.org/show_bug.cgi?id=9393
Issue ID: 9393
Summary: Provide an LDAP filter validation function
Product: OpenLDAP
Version: 2.4.56
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: best(a)univention.de
Target Milestone: ---
In many situations I need to validate whether a user-submitted LDAP filter has
valid syntax.
There seems to be no official function to check this.
Could you provide one?
libraries/libldap/filter.c: ldap_pvt_put_filter() can be used as a basis.
--
My current workaround is to use an unconnected LDAP connection and perform a
search with that filter. This yields a FILTER_ERROR (invalid filter) or a
SERVER_DOWN error (valid filter).
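Expressed against libldap, that workaround looks roughly like this (the
unroutable URI and the return convention are illustrative):

    #include <ldap.h>

    /* Returns non-zero if libldap rejects the filter, 0 otherwise.
     * Sketch only: relies on the search failing with LDAP_FILTER_ERROR
     * for a bad filter, and with a connection error (e.g. LDAP_SERVER_DOWN)
     * for a good one, since nothing listens on the target port. */
    static int filter_is_invalid( const char *filter )
    {
        LDAP *ld = NULL;
        LDAPMessage *res = NULL;
        int rc;

        ldap_initialize( &ld, "ldap://127.0.0.1:1" );   /* nothing listens here */
        rc = ldap_search_ext_s( ld, "", LDAP_SCOPE_BASE, filter,
            NULL, 0, NULL, NULL, NULL, 0, &res );
        if ( res != NULL )
            ldap_msgfree( res );
        ldap_unbind_ext_s( ld, NULL, NULL );
        return rc == LDAP_FILTER_ERROR;
    }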
See also:
https://github.com/python-ldap/python-ldap/pull/272
https://bugs.openldap.org/show_bug.cgi?id=9914
Issue ID: 9914
Summary: Add OS pagesize to the back-mdb monitor information
Product: OpenLDAP
Version: 2.6.3
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
The pagesize that back-mdb is using for pages should be exposed via the
cn=monitor backend, as a remote client doing a query will not have that
information available to it.
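For comparison, with local access the page size is already obtainable from the
LMDB environment statistics; a remote client has no equivalent until cn=monitor
exposes it. A minimal sketch against the C API (the environment path is
illustrative, error handling omitted):

    #include <stdio.h>
    #include <lmdb.h>

    /* Print the page size of an existing back-mdb environment; this only
     * works with local filesystem access to the database directory. */
    int main( void )
    {
        MDB_env *env;
        MDB_stat st;

        mdb_env_create( &env );
        mdb_env_open( env, "/var/lib/ldap", MDB_RDONLY, 0644 );
        mdb_env_stat( env, &st );
        printf( "pagesize: %u\n", st.ms_psize );
        mdb_env_close( env );
        return 0;
    }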
https://bugs.openldap.org/show_bug.cgi?id=9211
Bug ID: 9211
Summary: Relax control is not consistently access-restricted
Product: OpenLDAP
Version: 2.4.49
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ryan(a)openldap.org
Target Milestone: ---
The following operations can be performed by anyone having 'write' access (not
even 'manage') using the Relax control:
- modifying/replacing structural objectClass
- adding/modifying OBSOLETE attributes
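For instance, a user with only 'write' access to the entry can change its
structural objectClass chain by submitting LDIF like the following with
ldapmodify -e relax (the DN and classes are hypothetical):

    # replaces the structural class of an existing entry (previously, say, account)
    dn: uid=jdoe,ou=people,dc=example,dc=com
    changetype: modify
    replace: objectClass
    objectClass: top
    objectClass: person
    objectClass: organizationalPerson
    objectClass: inetOrgPerson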
Some operations are correctly restricted:
- adding/modifying NO-USER-MODIFICATION attributes marked as manageable
(Modification of non-conformant objects doesn't appear to be implemented at
all.)
In the absence of ACLs for controls, I'm of the opinion that all use of the
Relax control should require manage access. The Relax draft clearly and
repeatedly discusses its use cases in terms of directory _administrators_
temporarily relaxing constraints in order to accomplish a specific task.
https://bugs.openldap.org/show_bug.cgi?id=9920
Issue ID: 9920
Summary: MDB_PAGE_FULL with master3 (encryption) because there
is no room for the authentication data (MAC)
Product: LMDB
Version: unspecified
Hardware: x86_64
OS: Mac OS
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: info(a)parlepeuple.fr
Target Milestone: ---
Created attachment 915
--> https://bugs.openldap.org/attachment.cgi?id=915&action=edit
proposed patch
Hello,
on master3, using the encryption-at-rest feature,
I am testing as follows:
- On a new named database, I set the encryption function with
mdb_env_set_encrypt(env, encfunc, &enckey, 32).
- Note that I chose a size parameter ("The size of authentication data in
bytes, if any. Set this to zero for unauthenticated encryption mechanisms.") of
32 bytes.
- I add 2 entries to the DB, trying to saturate the first page. I chose a key
of 33 bytes and a value of 1977 bytes, so the size of each node is 2010 bytes
(obviously the 2 keys are different).
- This passes, and the DB has just one leaf page, no overflow pages, no branch
pages, and a depth of 1.
- If I add one byte to the values I insert (starting again from a blank DB),
then instead of seeing 2 overflow pages I get an error: MDB_PAGE_FULL.
- This clearly should not happen.
- Here is some tracing:
add to leaf page 2 index 0, data size 48 key size 7 [74657374646200]
add to leaf page 3 index 0, data size 1978 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to branch page 5 index 0, data size 0 key size 0 [null]
add to branch page 5 index 1, data size 0 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to leaf page 4 index 0, data size 1978 key size 33
[000000000000000000000000000000000000000000000000000000000000000000]
add to leaf page 4 index 1, data size 1978 key size 33
[020202020202020202020202020202020202020202020202020202020202020202]
not enough room in page 4, got 1 ptrs
upper-lower = 2020 - 2 = 2016
node size = 2020
Looking at the code, I understand there is a problem at line 9005:

    } else if (node_size + data->mv_size > mc->mc_txn->mt_env->me_nodemax) {

where me_nodemax is incorrect, as it does not take into account that some bytes
will be needed for the MAC authentication code, whose size is in
env->me_esumsize.
me_nodemax is calculated at line 5349:

    env->me_nodemax = (((env->me_psize - PAGEHDRSZ) / MDB_MINKEYS) & -2)
        - sizeof(indx_t);

So I subtract me_esumsize with a "- env->me_esumsize" here:

    env->me_nodemax = (((env->me_psize - PAGEHDRSZ - env->me_esumsize) /
        MDB_MINKEYS) & -2)
        - sizeof(indx_t);
I also subtract it from me_maxfree_1pg on the line above, and from pmax at line
10435.
I do not know if my patch is correct, but it solves the issue.
Maybe there are other places in the code where me_esumsize should be subtracted
from the available size. For example, the calculation of the number of overflow
pages in OVPAGES does not take me_esumsize into account, but I think that is
OK, because there is only one MAC for the entire set of OV pages and there is
room for it in the first OV page.
See the attached proposed patch.