https://bugs.openldap.org/show_bug.cgi?id=9788
Issue ID: 9788
Summary: make warns about disabling/resetting jobserver
Product: OpenLDAP
Version: 2.6.1
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: build
Assignee: bugs(a)openldap.org
Reporter: orgads(a)gmail.com
Target Milestone: ---
Running make -j8 issues the following warning for each directory with make 4.3:
make[2]: warning: -j8 forced in submake: resetting jobserver mode.
with make 4.2.1:
make[3]: warning: -jN forced in submake: disabling jobserver mode.
With make 3.82 there is no warning, but the jobserver flags are duplicated for
each nested directory, e.g.:
cd back-monitor && make -w --jobserver-fds=3,4 - --jobserver-fds=3,4 -
--jobserver-fds=3,4 - --jobserver-fds=3,4 -j all
In my environment this is fixed by removing all occurrences of $(MFLAGS) from
build/dir.mk. make already propagates its flags to sub-makes through the
environment, so there is no need to pass them explicitly.
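The recursion pattern being described can be sketched roughly as follows (a hypothetical fragment for illustration; the actual recipe and variable names in build/dir.mk are not quoted in this report):

```make
# Hypothetical sketch, not the real build/dir.mk.
# The problematic form passes $(MFLAGS) by hand:
#     cd $$i && $(MAKE) $(MFLAGS) all
# GNU make already hands its flags (including -j and the jobserver
# descriptors) down to sub-makes via the MAKEFLAGS environment
# variable, so invoking $(MAKE) alone is sufficient:
all-common:
	@for i in $(SUBDIRS); do \
		( cd $$i && $(MAKE) all ); \
	done
```

Passing the flags both ways is what accumulates the duplicate --jobserver-fds options under make 3.82 and trips the "forced in submake" warning under 4.2.1/4.3.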
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=9831
Issue ID: 9831
Summary: connection_next() can skip an active connection
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Uncovered by running test056 under interesting conditions repeatedly.
https://bugs.openldap.org/show_bug.cgi?id=9809
Issue ID: 9809
Summary: slapo-pcache: incorrect call to monitor
unregister_entry
Product: OpenLDAP
Version: 2.4.18
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: hyc(a)openldap.org
Target Milestone: ---
There is also an incorrect check for whether monitoring was initialized, which
leads to calling unregister_entry_callback when there is nothing to unregister.
The incorrect call causes a SEGV.
The incorrect call is also present in back-mdb, but is never invoked there
because that code correctly sees there is nothing to unregister.
https://bugs.openldap.org/show_bug.cgi?id=9803
Issue ID: 9803
Summary: liblber: assertion( ber->ber_buf == NULL ); failed
Product: OpenLDAP
Version: 2.4.46
Hardware: x86_64
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: jengelh(a)inai.de
Target Milestone: ---
libraries/liblber/io.c, function ber_get_next, contains the line
assert( ber->ber_buf == NULL );
and with a larger application that uses libldap-2.4.46 I am running into it
sporadically. I have no idea how that happens, but it seems probable that the
LDAP server (about which I also have no information) is sending something that
is interpreted as invalid, so ber_buf does not get freed and is still set on
the next invocation.
```
(gdb)
zcore: io.c:514: ber_get_next: Assertion `ber->ber_buf == NULL' failed.
Thread 40 "rpc/34" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffd6ff8700 (LWP 18485)]
(gdb) up
#1 0x00007ffff20fb585 in abort () from /lib64/libc.so.6
(gdb)
#2 0x00007ffff20f285a in __assert_fail_base () from /lib64/libc.so.6
(gdb)
#3 0x00007ffff20f28d2 in __assert_fail () from /lib64/libc.so.6
(gdb)
#4 0x00007fffee0f48a1 in ber_get_next (sb=0x6040000aa650,
len=len@entry=0x7fffd6ff61c8, ber=ber@entry=0x6070000b0360) at io.c:514
514 assert( ber->ber_buf == NULL );
(gdb) p ber
$1 = (BerElement *) 0x6070000b0360
(gdb) p *ber
$2 = {ber_opts = {lbo_valid = 2, lbo_options = 1, lbo_debug = 0}, ber_tag =
116, ber_len = 78, ber_usertag = 0, ber_buf = 0x6070000b03d0 "cP", ber_ptr =
0x6070000b03d0 "cP", ber_end = 0x6070000b041e "", ber_sos_ptr = 0x0, ber_rwptr
= 0x0, ber_memctx = 0x0}
(gdb) up
#5 0x00007fffee310c91 in try_read1msg (result=0x7fffd6ff6348,
lc=0x6080001182a0, all=1, msgid=18, ld=0x6040000aa610) at result.c:494
494 tag = ber_get_next( lc->lconn_sb, &len, ber );
(gdb) up
#6 wait4msg (result=0x7fffd6ff6348, timeout=<optimized out>, all=1,
msgid=<optimized out>, ld=0x6040000aa610) at result.c:365
365 rc = try_read1msg( ld,
msgid, all, lc, result );
(gdb)
#7 ldap_result (ld=ld@entry=0x6040000aa610, msgid=<optimized out>,
all=all@entry=1, timeout=timeout@entry=0x0, result=result@entry=0x7fffd6ff6348)
at result.c:120
120 rc = wait4msg( ld, msgid, all, timeout, result );
(gdb) p result
$3 = (LDAPMessage **) 0x7fffd6ff6348
(gdb) p result[0]
$4 = (LDAPMessage *) 0x0
(gdb) dow
#6 wait4msg (result=0x7fffd6ff6348, timeout=<optimized out>, all=1,
msgid=<optimized out>, ld=0x6040000aa610) at result.c:365
365 rc = try_read1msg( ld,
msgid, all, lc, result );
(gdb) dow
#5 0x00007fffee310c91 in try_read1msg (result=0x7fffd6ff6348,
lc=0x6080001182a0, all=1, msgid=18, ld=0x6040000aa610) at result.c:494
494 tag = ber_get_next( lc->lconn_sb, &len, ber );
(gdb) p ber
$5 = <optimized out>
(gdb) dow
#4 0x00007fffee0f48a1 in ber_get_next (sb=0x6040000aa650,
len=len@entry=0x7fffd6ff61c8, ber=ber@entry=0x6070000b0360) at io.c:514
514 assert( ber->ber_buf == NULL );
(gdb) l
509 *
510 * We expect tag and len to be at most 32 bits wide.
511 */
512
513 if (ber->ber_rwptr == NULL) {
514 assert( ber->ber_buf == NULL );
515 ber->ber_rwptr = (char *) &ber->ber_len-1;
516 ber->ber_ptr = ber->ber_rwptr;
517 ber->ber_tag = 0;
518 }
(gdb) p ber
$6 = (BerElement *) 0x6070000b0360
(gdb) p ber[0]
$7 = {ber_opts = {lbo_valid = 2, lbo_options = 1, lbo_debug = 0}, ber_tag =
116, ber_len = 78, ber_usertag = 0, ber_buf = 0x6070000b03d0 "cP", ber_ptr =
0x6070000b03d0 "cP", ber_end = 0x6070000b041e "", ber_sos_ptr = 0x0, ber_rwptr
= 0x0, ber_memctx = 0x0}
(gdb) p ber->ber_buf
$8 = 0x6070000b03d0 "cP"
(gdb) up
#5 0x00007fffee310c91 in try_read1msg (result=0x7fffd6ff6348,
lc=0x6080001182a0, all=1, msgid=18, ld=0x6040000aa610) at result.c:494
494 tag = ber_get_next( lc->lconn_sb, &len, ber );
(gdb) p len
$10 = 99
(gdb) p lc
$11 = (LDAPConn *) 0x6080001182a0
```
https://bugs.openldap.org/show_bug.cgi?id=9818
Issue ID: 9818
Summary: slapo-translucent overlay crashes during wildcard
search with subordinate
Product: OpenLDAP
Version: 2.5.11
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: jeremy.diaz(a)rexconsulting.net
Target Milestone: ---
I found that slapd 2.5.11 with slapo-translucent will crash when queried with a
wildcard search. It looks like any wildcard search on any attribute specified
in "translucent_local" will cause the SIGSEGV on the latest version of Symas
OpenLDAP slapd, 2.5.11, with MDB databases, running on CentOS 7.
The SIGSEGV does not occur with the 2.4.44 build from the RHEL 7 distribution,
so the problem may be a regression. I have not tested any other versions, but
I have verified that, with the exact same config, the problem does not happen
on the RHEL 7 2.4.44 version but does happen with Symas OpenLDAP 2.5.11.
The config here is interesting. There are two database+suffix definitions in
the instance. The first one (ou=someorg,dc=corp,dc=com) is subordinate to the
second one (dc=corp,dc=com), with the "subordinate" option set to "True". The
second database section loads the translucent overlay, which points at an
upstream Active Directory instance and has the same suffix as AD.
The problem is administrative: the group wants their admins who manage LDAP
data to be able to search using wildcard "cn=xyx*" filters. Besides the crash,
we have noticed that these searches work, but only when setting the base DN to
that of the subordinate database. I tried a few things in a test lab and was
able to reproduce the issue.
With "cn" in "translucent_local" and a sublevel search of the translucent
superior basedn dc=corp,dc=com:
    "(cn=jed)" filter returns the subordinate database entry
    "(cn=je*)" filter crashes slapd
With "cn" not in "translucent_local" and a sublevel search of the translucent
superior basedn dc=corp,dc=com:
    "(cn=jed)" filter returns referrals from upstream Active Directory
    "(cn=je*)" filter returns referrals from upstream Active Directory
With "cn" in "translucent_local" and a sublevel search of the subordinate
basedn ou=someorg,dc=corp,dc=com:
    "(cn=jed)" filter returns the subordinate database entry from ou=someorg
    "(cn=je*)" filter returns subordinate database entry(ies) from ou=someorg
With "cn" not in "translucent_local" and a sublevel search of the subordinate
basedn ou=someorg,dc=corp,dc=com:
    "(cn=jed)" filter returns the subordinate database entry from ou=someorg
    "(cn=je*)" filter returns subordinate database entry(ies) from ou=someorg
Here's what the crash looks like:
622eb2f7.0393c4d6 0x7fcf15e89880 slapd starting
622eb2fd.32371976 0x7fce8d9f3700 slap_listener_activate(8):
622eb2fd.323bb364 0x7fce8d1f2700 >>> slap_listener(ldap:///)
622eb2fd.3247761c 0x7fce8d1f2700 connection_get(15): got connid=1000
622eb2fd.3247962c 0x7fce8d1f2700 connection_read(15): checking for input on
id=1000
622eb2fd.3247b177 0x7fce8d1f2700 ber_get_next
622eb2fd.3247e0b8 0x7fce8d1f2700 ber_get_next: tag 0x30 len 12 contents:
622eb2fd.3247f6f1 0x7fce8d1f2700 op tag 0x60, time 1647227645
622eb2fd.32480615 0x7fce8d1f2700 ber_get_next
622eb2fd.32484eca 0x7fce8d1f2700 conn=1000 op=0 do_bind
622eb2fd.324861e8 0x7fce8d1f2700 ber_scanf fmt ({imt) ber:
622eb2fd.324871c1 0x7fce8d1f2700 ber_scanf fmt (m}) ber:
622eb2fd.32488890 0x7fce8d1f2700 >>> dnPrettyNormal: <>
622eb2fd.32489324 0x7fce8d1f2700 <<< dnPrettyNormal: <>, <>
622eb2fd.3248ce51 0x7fce8d1f2700 do_bind: version=3 dn="" method=128
622eb2fd.3248efc7 0x7fce8d1f2700 send_ldap_result: conn=1000 op=0 p=3
622eb2fd.324906be 0x7fce8d1f2700 send_ldap_response: msgid=1 tag=97 err=0
622eb2fd.32492605 0x7fce8d1f2700 ber_flush2: 14 bytes to sd 15
622eb2fd.324ab763 0x7fce8d1f2700 do_bind: v3 anonymous bind
622eb2fd.325dc3b4 0x7fce8d1f2700 connection_get(15): got connid=1000
622eb2fd.325de12a 0x7fce8d1f2700 connection_read(15): checking for input on
id=1000
622eb2fd.325dec44 0x7fce8d1f2700 ber_get_next
622eb2fd.325e0856 0x7fce8d1f2700 ber_get_next: tag 0x30 len 63 contents:
622eb2fd.325e1663 0x7fce8d1f2700 op tag 0x63, time 1647227645
622eb2fd.325e22e8 0x7fce8d1f2700 ber_get_next
622eb2fd.325e500c 0x7fce8d1f2700 conn=1000 op=1 do_search
622eb2fd.325e5ae0 0x7fce8d1f2700 ber_scanf fmt ({miiiib) ber:
622eb2fd.325e69ea 0x7fce8d1f2700 >>> dnPrettyNormal: <dc=corp,dc=com>
622eb2fd.325e98bd 0x7fce8d1f2700 <<< dnPrettyNormal: <dc=corp,dc=com>,
<dc=corp,dc=com>
622eb2fd.325eaad7 0x7fce8d1f2700 ber_scanf fmt ({m) ber:
622eb2fd.325ebce5 0x7fce8d1f2700 ber_scanf fmt (m) ber:
622eb2fd.325eddcb 0x7fce8d1f2700 ber_scanf fmt ({M}}) ber:
622eb2fd.325f334c 0x7fce8d1f2700 ==> limits_get: conn=1000 op=1
self="[anonymous]" this="dc=corp,dc=com"
622eb2fd.325f4dcc 0x7fce8d1f2700 ==> translucent_search: <dc=corp,dc=com>
(cn=jed*)
Segmentation fault
Thanks!
https://bugs.openldap.org/show_bug.cgi?id=9785
Issue ID: 9785
Summary: test050 deadlock
Product: OpenLDAP
Version: 2.5.11
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Running test050 in a loop sometimes results in a deadlock. It took 17
iterations on one system and reproduced every time on another.
https://bugs.openldap.org/show_bug.cgi?id=9789
Issue ID: 9789
Summary: syncprov uses thread-local counters for the detached
op
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Persistent searches routinely migrate across threads; however, they keep using
op->o_counters from the original search op, which is meant to be thread-local.
During shutdown, this counter can be destroyed as the original thread finishes
while the persistent search is still live somewhere else. At that point,
trying to acquire the destroyed sc_mutex fails and the thread usually stalls
forever.
slapd-asyncmeta is very likely to suffer from the same issue.
A representative backtrace of this happening:
Thread 3 (Thread 0x7f0b7d933640 (LWP 2928392) "slapd"):
#0 futex_wait (private=0, expected=2, futex_word=0x7f0b74000ff8) at
../sysdeps/nptl/futex-internal.h:146
#3 0x00007f0b7fd17a05 in ldap_pvt_thread_mutex_lock (mutex=Locked by LWP 0) at
thr_posix.c:313
#4 0x0000000000469564 in slap_send_search_entry (op=Search request conn=1003
op=1 = {...}, rs=Search entry = {...}) at result.c:1503
#5 0x00007f0b7f30561c in syncprov_sendresp (op=Search request conn=1003 op=1 =
{...}, ri=0x7f0b701eb8e0, so=0x7f0b74102b20, mode=1) at syncprov.c:976
#6 0x00007f0b7f305064 in syncprov_qplay (op=Search request conn=1003 op=1 =
{...}, so=0x7f0b74102b20) at syncprov.c:1028
#7 0x00007f0b7f304ecc in syncprov_qtask (ctx=0x7f0b7d932a58,
arg=0x7f0b74102b20) at syncprov.c:1086
https://bugs.openldap.org/show_bug.cgi?id=9804
Issue ID: 9804
Summary: slapd.conf(5) - remove comment from syncrepl about
sizelimit
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: documentation
Assignee: bugs(a)openldap.org
Reporter: michael(a)stroeder.com
Target Milestone: ---
slapd.conf(5) and slapd-config(5) contain the following really misleading
text:
"The sizelimit and timelimit parameters define a consumer requested limitation
on the number of entries that can be returned by the LDAP Content
Synchronization operation; as such, it is intended to implement partial
replication based on the size of the replicated database and on the time
required by the synchronization."
This is wrong. One cannot implement deterministic partial replication with
these limits.
=> This text should be removed.