[Issue 9388] New: mdb_stat for DupSort DBI shows incorrect data
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9388
Issue ID: 9388
Summary: mdb_stat for DupSort DBI shows incorrect data
Product: LMDB
Version: 0.9.26
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: AskAlexSharov(a)gmail.com
Target Milestone: ---
It doesn't include the pages used for values.
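For illustration, a minimal C sketch (hypothetical environment path and made-up
data, error handling omitted) that populates an MDB_DUPSORT database and prints
the mdb_stat figures in question:
--------------------
/* Sketch: inspect mdb_stat output for a DUPSORT DBI.
 * "./testdb" is a hypothetical, pre-existing directory. */
#include <stdio.h>
#include <string.h>
#include <lmdb.h>

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_stat st;
    char valbuf[64];

    mdb_env_create(&env);
    mdb_env_set_maxdbs(env, 2);
    mdb_env_open(env, "./testdb", 0, 0664);

    mdb_txn_begin(env, NULL, 0, &txn);
    mdb_dbi_open(txn, "dup", MDB_CREATE | MDB_DUPSORT, &dbi);

    /* One key with many values, so the duplicate data spills onto its
     * own pages rather than staying inside the key's leaf page. */
    for (int i = 0; i < 10000; i++) {
        MDB_val key = { 3, "key" };
        snprintf(valbuf, sizeof(valbuf), "value-%08d", i);
        MDB_val val = { strlen(valbuf), valbuf };
        mdb_put(txn, dbi, &key, &val, 0);
    }

    /* Reported issue: these page counts only cover the main key tree,
     * not the pages holding the duplicate values. */
    mdb_stat(txn, dbi, &st);
    printf("entries=%zu branch=%zu leaf=%zu overflow=%zu\n",
           st.ms_entries, st.ms_branch_pages, st.ms_leaf_pages,
           st.ms_overflow_pages);

    mdb_txn_commit(txn);
    mdb_env_close(env);
    return 0;
}
--------------------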
[Bug 9205] New: Openldap 2.4.49 with overlays syncrepl+ppolicy+chain+ldap
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9205
Bug ID: 9205
Summary: Openldap 2.4.49 with overlays
syncrepl+ppolicy+chain+ldap
Product: OpenLDAP
Version: 2.4.49
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: frederic.poisson(a)admin.gmessaging.net
Target Milestone: ---
Created attachment 700
--> https://bugs.openldap.org/attachment.cgi?id=700&action=edit
test script copied from test022-ppolicy and modified to show the trouble
Hello,
I'm running an OpenLDAP test with a master/slave replication configuration
that includes the ppolicy overlay. I would like to enable password changes from
the slave replica through the chain overlay, in order to validate setting the
ppolicy olcPPolicyForwardUpdates attribute to TRUE. I'm using LDAPS from the
slave to the master with SASL EXTERNAL authentication and a client certificate.
The client certificate corresponds to a user DN entry with "manage" rights on
the master server (the same one used for replication). This user DN has an
authzTo attribute so that the PROXYAUTHZ request from its DN to the target user
DN is authorized.
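For context, a minimal slapd.conf sketch of this kind of consumer-side setup,
with hypothetical host names and the TLS/certificate and database details
omitted (directive names per slapo-chain(5), slapd-ldap(5) and slapo-ppolicy(5);
the real configuration may well differ):
--------------------
# Consumer (slave) side -- sketch only, hypothetical names.

# Chain write operations to the provider over LDAPS, asserting the
# identity of the original user (PROXYAUTHZ) via SASL EXTERNAL.
overlay             chain
chain-uri           "ldaps://master.example.com"
chain-idassert-bind bindmethod=sasl saslmech=EXTERNAL mode=self
chain-return-error  TRUE

# In the database section: forward ppolicy state changes such as
# pwdFailureTime to the provider instead of writing them locally
# (the slapd.conf equivalent of olcPPolicyForwardUpdates: TRUE).
overlay ppolicy
ppolicy_forward_updates
--------------------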
All of this works on the replica when I first perform a failed authentication
(err=49) on the replica. The pwdFailureTime value is then updated normally on
the DN entry, forwarded from the replica to the master. After that I'm also
able to perform self entry updates of attributes such as the password from the
replica to the master.
But the weird behavior is that I need to run a failed authentication first;
otherwise, if I try to change an attribute on the slave server, it responds
with err=80 "Error: ldap_back_is_proxy_authz returned 0, misconfigured URI?".
The only way to restore the correct behavior is to restart slapd and redo one
failed authentication first. It seems that the chain overlay does not connect
to the master server at startup.
I've modified the test script test022-ppolicy into test022-policy-chain, which
uses the same LDIF source and shows that modifications on the consumer are not
"relayed" to the supplier unless a failed operation is done first.
Regards
[Issue 9340] New: Setting slapd.conf listener-threads > 1 causes Assertion failed: SLAP_SOCK_NOT_ACTIVE(id, s)
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9340
Issue ID: 9340
Summary: Setting slapd.conf listener-threads > 1 causes
Assertion failed: SLAP_SOCK_NOT_ACTIVE(id, s)
Product: OpenLDAP
Version: 2.4.52
Hardware: Other
OS: Solaris
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: stacey.marshall(a)gmail.com
Target Milestone: ---
Created attachment 761
--> https://bugs.openldap.org/attachment.cgi?id=761&action=edit
Output from -d 5 option
This is not a new issue; the same is observed on versions 2.4.45 through 2.4.53.
Setting listener-threads via the slapd.conf file on a Sun4v (SPARC T4-1) system
to a value other than 1 results in "Assertion failed:
SLAP_SOCK_NOT_ACTIVE(id, s), file daemon.c" when starting slapd.
The same does not occur when using the configuration directory (-F).
# grep '^listener-threads' /etc/openldap/slapd.conf
listener-threads 8
# /usr/lib/slapd -u openldap -g openldap \
> -f /etc/openldap/slapd.conf \
> -h 'ldap:/// ldapi:/// ldaps:///' -d 5 > slapd.error 2>&1
Abort (core dumped)
# tail slapd.error
5f578024 >>> dnNormalize: <cn=Current>
5f578024 <<< dnNormalize: <cn=current>
5f578024 >>> dnNormalize: <cn=Uptime>
5f578024 <<< dnNormalize: <cn=uptime>
5f578024 >>> dnNormalize: <cn=Read>
5f578024 <<< dnNormalize: <cn=read>
5f578024 >>> dnNormalize: <cn=Write>
5f578024 <<< dnNormalize: <cn=write>
5f578024 slapd starting
Assertion failed: SLAP_SOCK_NOT_ACTIVE(id, s), file daemon.c, line 903
# /usr/lib/slapd -VVV
@(#) $OpenLDAP: slapd 2.4.53 (Sep 8 2020 03:15:21) $
openldap
Included static overlays:
accesslog
auditlog
collect
constraint
dds
deref
dyngroup
dynlist
memberof
ppolicy
pcache
refint
retcode
rwm
seqmod
sssvlv
syncprov
translucent
unique
valsort
Included static backends:
config
ldif
monitor
ldap
mdb
meta
null
passwd
relay
shell
#
When running under a debugger, the issue is not observed.
The core file reveals the following threads:
# /opt/solarisstudio12.4/bin/dbx /usr/lib/slapd core
For information about new features see `help changes'
To remove this message, put `dbxenv suppress_startup_message 8.0' in your
.dbxrc
Reading slapd
core file header read successfully
Reading ld.so.1
Reading libc.so.1
Reading libldap_r-2.4.so.2.11.1
Reading liblber-2.4.so.2.11.1
Reading libsasl2.so.3.0.0
Reading liblogin.so.3.0.0
Reading libgssapiv2.so.3.0.0
Reading libgssapi_krb5.so.2.2
Reading libkrb5.so.3.3
Reading libcom_err.so.3.0
Reading libkrb5support.so.0.1
Reading libk5crypto.so.3.1
Reading libucrypto.so.1
Reading libbsm.so.1
Reading libtsol.so.2
Reading libinetutil.so.1
Reading libcryptoutil.so.1
Reading libelf.so.1
Reading libz.so.1
Reading libresolv.so.2
Reading libkwarn.so.1
Reading libplain.so.3.0.0
Reading libotp.so.3.0.0
Reading libcrypto.so.1.0.0
Reading libsasldb.so.3.0.0
Reading libdb-5.3.so
Reading libscram.so.3.0.0
Reading libltdl.so.7.3.1
Reading libssl.so.1.0.0
Reading libuuid.so.1
t@2 (l@2) terminated by signal ABRT (Abort)
0x0007fdd05cce17a8: __lwp_sigqueue+0x0008: bcc,a,pt
%icc,__lwp_sigqueue+0x18 ! 0x7fdd05cce17b8
(dbx) lwps
l@1 LWP suspended in uucopy()
o>l@2 signal SIGABRT in __lwp_sigqueue()
l@3 LWP suspended in __pollsys()
l@4 LWP suspended in __pollsys()
l@5 LWP suspended in __pollsys()
(dbx) where
current thread: t@2
=>[1] __lwp_sigqueue(0x0, 0xa00386a7d98, 0x6, 0x0, 0xffffffffffffffff,
0x0), at 0x7fdd05cce17a8
[2] raise(0x6, 0x7fdd0583ff060, 0x5, 0x5, 0x0, 0x0), at 0x7fdd05cc2af0c
[3] abort(0x1, 0x1210, 0x0, 0x1000, 0x7fdd05ce3c278, 0x1a278), at
0x7fdd05cbfa144
[4] _assert(0x10005d7a8, 0x10005d7c8, 0x387, 0x9, 0x8200,
0x7fdd05ce22000), at 0x7fdd05cbfb158
[5] slapd_daemon_task(0xc4, 0xffffffffffcdb7c8, 0x100382000, 0x324800,
0x61bf4f2270, 0x100406d3c), at 0x1000c118c
(dbx) lwp l@1
t@1 (l@1) stopped in uucopy at 0x7fdd05cce1d34
0x0007fdd05cce1d34: uucopy+0x0008: blu __cerror !
0x7fdd05cbdce80
(dbx) where
=>[1] uucopy(0x0, 0x7fdd0553fff10, 0xb0, 0xfffffffffffffff8, 0x0,
0xfffffe59ec25a990), at 0x7fdd05cce1d34
[2] setup_top_frame(0x7fdd054c00000, 0x7fffc0, 0x7fdd0553fffc0,
0x7fdd0553fffc0, 0x0, 0xfffffe59ec25adf0), at 0x7fdd05ccdc494
[3] setup_context(0xfffffe59ec25ab10, 0x7fdd05ccdc538, 0x7fdd058492240,
0x7fdd054c00000, 0x7fffc0, 0x1), at 0x7fdd05ccdc4dc
[4] _thrp_create(0x0, 0x800000, 0x1000becb0, 0x61bfa6bed0, 0x80,
0xfffffe59ec25aeb0), at 0x7fdd05ccd7f64
[5] pthread_create(0x61bfa6bed0, 0xfffffe59ec25af78, 0x1000becb0,
0x61bfa6bed0, 0x0, 0x0), at 0x7fdd05ccc80e4
[6] ldap_pvt_thread_create(0x61bfa6bed0, 0x0, 0x1000becb0, 0x61bfa6bed0,
0x0, 0x61bfa6bec0), at 0x7fdd05c81b324
[7] slapd_daemon(0x100406db8, 0x4, 0x10042710c, 0x100389490, 0x100406d38,
0x100382000), at 0x1000c1404
[8] main(0x8800, 0x10038aa88, 0x1, 0x100386038, 0x1, 0x10038aa84), at
0x10009ec34
(dbx) lwp l@3
t@3 (l@3) stopped in __pollsys at 0x7fdd05cce13fc
0x0007fdd05cce13fc: __pollsys+0x0008: blu __cerror !
0x7fdd05cbdce80
(dbx) where
=>[1] __pollsys(0x4, 0x2, 0x0, 0x0, 0x0, 0x0), at 0x7fdd05cce13fc
[2] _pollsys(0x7fdd0577ff2a0, 0x2, 0x0, 0x0, 0x0, 0x10), at
0x7fdd05cccd6a8
[3] pselect(0x10, 0x7fdd0577ffc30, 0x7fdd0577ff2a0, 0x7fdd05ce24c78, 0x0,
0x0), at 0x7fdd05cc2cb2c
[4] select(0x10, 0x7fdd0577ffc30, 0x0, 0x0, 0x0, 0x5), at 0x7fdd05cc2ced0
[5] slapd_daemon_task(0x61bf4e9890, 0x0, 0x100382000, 0x1, 0x1004010c8,
0x5), at 0x1000c014c
(dbx) lwp l@4
t@4 (l@4) stopped in __pollsys at 0x7fdd05cce13fc
0x0007fdd05cce13fc: __pollsys+0x0008: blu __cerror !
0x7fdd05cbdce80
(dbx) where
=>[1] __pollsys(0x4, 0x2, 0x0, 0x0, 0x0, 0x0), at 0x7fdd05cce13fc
[2] _pollsys(0x7fdd056bff370, 0x2, 0x0, 0x0, 0x0, 0x10), at
0x7fdd05cccd6a8
[3] pselect(0x12, 0x7fdd056bffd10, 0x7fdd056bff370, 0x7fdd05ce24c78, 0x0,
0x0), at 0x7fdd05cc2cb2c
[4] select(0x12, 0x7fdd056bffd10, 0x0, 0x0, 0x0, 0x5), at 0x7fdd05cc2ced0
[5] slapd_daemon_task(0x61bf4e9890, 0x0, 0x100382000, 0x2, 0x1004016f0,
0x5), at 0x1000c014c
(dbx) lwp l@5
t@5 (l@5) stopped in __pollsys at 0x7fdd05cce13fc
0x0007fdd05cce13fc: __pollsys+0x0008: blu __cerror !
0x7fdd05cbdce80
(dbx) where
=>[1] __pollsys(0x1, 0x2, 0x0, 0x0, 0x0, 0x0), at 0x7fdd05cce13fc
[2] _pollsys(0x7fdd055fff340, 0x2, 0x0, 0x0, 0x0, 0x10), at
0x7fdd05cccd6a8
[3] pselect(0x14, 0x7fdd055fffcf0, 0x7fdd055fff340, 0x7fdd05ce24c78, 0x0,
0x0), at 0x7fdd05cc2cb2c
[4] select(0x14, 0x7fdd055fffcf0, 0x0, 0x0, 0x0, 0x5), at 0x7fdd05cc2ced0
[5] slapd_daemon_task(0x61bf4e9890, 0x0, 0x100382000, 0x3, 0x100401d18,
0x5), at 0x1000c014c
(dbx)
[Issue 9362] New: slapd crashed with a segmentation fault in syncprov
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9362
Issue ID: 9362
Summary: slapd crashed with a segmentation fault in syncprov
Product: OpenLDAP
Version: 2.4.44
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: simon.pichugin(a)gmail.com
Target Milestone: ---
Description of problem:
slapd crashed twice with a segmentation fault, and core files were produced.
The following messages were written to /var/log/messages.
--------------------
Jun 23 19:42:51 GC001CVFC101 kernel: slapd[1985]: segfault at 2 ip
00005647e0543414 sp 00007f6c088b5de0 error 4 in slapd[5647e04df000+1c0000]
Jun 23 19:42:53 GC001CVFC101 systemd: slapd.service: main process exited,
code=killed, status=11/SEGV
Jun 23 19:42:53 GC001CVFC101 systemd: Unit slapd.service entered failed state.
Jun 23 19:42:53 GC001CVFC101 systemd: slapd.service failed.
.....
Jun 23 23:56:48 GC001CVFC101 kernel: slapd[8759]: segfault at 3 ip
000055d24efbc414 sp 00007fe9a8ffae80 error 4 in slapd[55d24ef58000+1c0000]
Jun 23 23:56:49 GC001CVFC101 systemd: slapd.service: main process exited,
code=killed, status=11/SEGV
Jun 23 23:56:50 GC001CVFC101 systemd: Unit slapd.service entered failed state.
Jun 23 23:56:50 GC001CVFC101 systemd: slapd.service failed.
--------------------
The following are the log entries written to /var/log/slapd.log-20200624.
Each segmentation fault occurred during a modify or delete request on an LDAP
entry.
-------------
Jun 23 19:42:50 GC001CVFC101 slapd[2281]: conn=7028108 op=391 MOD
dn="uid=user,ou=users,dc=example,dc=com"
.....
Jun 23 23:56:48 GC001CVFC101 slapd[32641]: conn=1552 op=144 DEL
dn="cn=group1,ou=groups,dc=example,dc=com"
-------------
The following are the backtraces of the two cores.
1) core-slapd.2281:
--------------------
(gdb) bt
#0 test_filter (op=op@entry=0x7f6c088b6130, e=0x7f6bee9e37c8, f=0x2) at
filterentry.c:69
^^^^^*1
#1 0x00007f6cb379e598 in syncprov_matchops (op=op@entry=0x7f6be01191d0,
opc=opc@entry=0x7f6bec001710, saveit=saveit@entry=1) at syncprov.c:1334
#2 0x00007f6cb379ec63 in syncprov_op_mod (op=0x7f6be01191d0, rs=<optimized
out>) at syncprov.c:2201
#3 0x00005647e0591e8a in overlay_op_walk (op=op@entry=0x7f6be01191d0,
rs=0x7f6c088b7960, which=op_modify, oi=0x5647e1633dd0, on=0x5647e1638d00)
at backover.c:661
#4 0x00005647e0592034 in over_op_func (op=0x7f6be01191d0, rs=<optimized out>,
which=<optimized out>) at backover.c:730
#5 0x00005647e053b2f9 in fe_op_modify (op=0x7f6be01191d0, rs=0x7f6c088b7960)
at modify.c:303
#6 0x00005647e053d2ed in do_modify (op=0x7f6be01191d0, rs=0x7f6c088b7960) at
modify.c:177
#7 0x00005647e0522e7c in connection_operation (ctx=ctx@entry=0x7f6c088b7bd0,
arg_v=arg_v@entry=0x7f6be01191d0) at connection.c:1158
#8 0x00005647e05231eb in connection_read_thread (ctx=0x7f6c088b7bd0,
argv=0x67) at connection.c:1294
#9 0x00007f6cbaad82fa in ldap_int_thread_pool_wrapper () from
debug/lib64/libldap_r-2.4.so.2
#10 0x00007f6cb9d9ae25 in start_thread (arg=0x7f6c088b8700) at
pthread_create.c:308
#11 0x00007f6cb925c34d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb)
--------------------
2) core-slapd.32641
--------------------
(gdb) bt
#0 test_filter (op=op@entry=0x7fe9a8ffb1d0, e=0x7fe9c7ea8888, f=0x3) at
filterentry.c:69
^^^^^*1
#1 0x00007feab3f22598 in syncprov_matchops (op=op@entry=0x7fe99c000950,
opc=opc@entry=0x7fe99c001808, saveit=saveit@entry=1) at syncprov.c:1334
#2 0x00007feab3f22c63 in syncprov_op_mod (op=0x7fe99c000950, rs=<optimized
out>) at syncprov.c:2201
#3 0x000055d24f00ae8a in overlay_op_walk (op=op@entry=0x7fe99c000950,
rs=0x7fe9a8ffb960, which=op_delete, oi=0x55d250c54dd0, on=0x55d250c59d00)
at backover.c:661
#4 0x000055d24f00b034 in over_op_func (op=0x7fe99c000950, rs=<optimized out>,
which=<optimized out>) at backover.c:730
#5 0x000055d24efb6bf6 in fe_op_delete (op=0x7fe99c000950, rs=0x7fe9a8ffb960)
at delete.c:174
#6 0x000055d24efb68d6 in do_delete (op=0x7fe99c000950, rs=0x7fe9a8ffb960) at
delete.c:95
#7 0x000055d24ef9be7c in connection_operation (ctx=ctx@entry=0x7fe9a8ffbbd0,
arg_v=arg_v@entry=0x7fe99c000950) at connection.c:1158
#8 0x000055d24ef9c1eb in connection_read_thread (ctx=0x7fe9a8ffbbd0,
argv=0x3a) at connection.c:1294
#9 0x00007feabb25c2fa in ldap_int_thread_pool_wrapper () from
debug/lib64/libldap_r-2.4.so.2
#10 0x00007feaba51ee25 in start_thread (arg=0x7fe9a8ffc700) at
pthread_create.c:308
#11 0x00007feab99e034d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb)
--------------------
The two cores appear to show the same crash; the only difference is a modify
versus a delete request.
In both, the segmentation fault occurs because the value of the third
parameter (*1) of test_filter is invalid.
The third parameter (*1) of test_filter should be a valid Filter pointer, as in
the source below.
Excerpt from filterentry.c:
--------------------
60 int
61 test_filter(
62 Operation *op,
63 Entry *e,
64 Filter *f )
** the third parameter of test_filter
65 {
66 int rc;
67 Debug( LDAP_DEBUG_FILTER, "=> test_filter\n", 0, 0, 0 );
68
69 if ( f->f_choice & SLAPD_FILTER_UNDEFINED ) {
** referenced here
70 Debug( LDAP_DEBUG_FILTER, " UNDEFINED\n", 0, 0, 0 );
--------------------
Version-Release number of selected component (if applicable):
How reproducible:
Two times
Steps to Reproduce:
Unknown
Actual results:
slapd crashes with a segmentation fault and a core file is produced.
Expected results:
slapd doesn't crash and no core file is produced.
Thanks!
Simon
[Bug 9222] New: Fix presence list to use a btree instead of an AVL tree
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9222
Bug ID: 9222
Summary: Fix presence list to use a btree instead of an AVL
tree
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
[23:34] <hyc> ok, so far heap profile shows that memory use during refresh is
normal
[23:35] <hyc> not wonderful, but normal. mem usage grows because we're
recording the present list while receiving entries in the refresh
[23:36] <hyc> I'm seeing for 1.2GB of data about 235MB of presentlist
[23:36] <hyc> which is pretty awful, considering presentlist is just a list of
UUIDs
[23:36] <hyc> being stored in an avl tree
[23:37] <hyc> a btree would have been better here, and we could just use an
unsorted segmented array
[23:42] <hyc> for the accumulation phase anyway. we need to be able to lookup
records during the delete phase
[00:05] <hyc> this stuff seriously needs a rewrite
[01:13] <hyc> 2.8M records x 16 bytes per uuid so this should be no more than
48MB of overhead
[01:13] <hyc> and instead it's 3-400MB
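For illustration, a minimal sketch of the kind of unsorted segmented array
mentioned above for the accumulation phase (not OpenLDAP code; names and the
segment size are made up):
--------------------
/* Unsorted segmented array of 16-byte UUIDs: append-only chunks,
 * roughly 16 bytes per entry plus a small amortized header cost. */
#include <stdlib.h>
#include <string.h>

#define UUID_LEN  16
#define SEG_SLOTS 4096              /* UUIDs per segment, 64 KiB of payload */

typedef struct uuid_segment {
    struct uuid_segment *next;
    size_t used;                    /* slots filled in this segment */
    unsigned char data[SEG_SLOTS][UUID_LEN];
} uuid_segment;

typedef struct uuid_list {
    uuid_segment *head, *tail;
    size_t count;                   /* total UUIDs stored */
} uuid_list;

/* Append one UUID; a new segment is allocated only when the tail is full. */
static int uuid_list_append(uuid_list *ul, const unsigned char uuid[UUID_LEN])
{
    if (ul->tail == NULL || ul->tail->used == SEG_SLOTS) {
        uuid_segment *seg = calloc(1, sizeof(*seg));
        if (seg == NULL)
            return -1;
        if (ul->tail)
            ul->tail->next = seg;
        else
            ul->head = seg;
        ul->tail = seg;
    }
    memcpy(ul->tail->data[ul->tail->used++], uuid, UUID_LEN);
    ul->count++;
    return 0;
}
--------------------
At 2.8M UUIDs this is roughly 700 segments of 64 KiB each, about 45 MB in
total, in line with the 48 MB estimate above, versus one AVL node allocation
per UUID.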
[Issue 9394] New: syncprov: Session log can end up with duplicate entries
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9394
Issue ID: 9394
Summary: syncprov: Session log can end up with duplicate
entries
Product: OpenLDAP
Version: 2.4.55
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Had an incident today where slapd stopped after the following assert was
triggered in syncprov.c:
1655 rc = tavl_insert( &sl->sl_entries, se, syncprov_sessionlog_cmp,
avl_dup_error );
1656 assert( rc == LDAP_SUCCESS );
This was due to the same entry being inserted into the sessionlog a second time
without the prior instance having been removed.
[Issue 9391] New: (entryUUID=foobar) -> seg fault
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9391
Issue ID: 9391
Summary: (entryUUID=foobar) -> seg fault
Product: OpenLDAP
Version: 2.4.56
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: michael(a)stroeder.com
Target Milestone: ---
Searching with filter (entryUUID=foobar) crashes slapd for Æ-DIR providers and
consumers.
Log message:
slapd: schema_init.c:2943: UUIDNormalize: Assertion `val->bv_len == 16' failed.
Let me know if you need more information to reproduce.
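For reference, a minimal client-side sketch that sends the offending filter,
with a hypothetical server URI and base DN (any LDAP client issuing this search
should do; the reported assertion fires on the server side):
--------------------
#include <stdio.h>
#include <ldap.h>

int main(void)
{
    LDAP *ld;
    LDAPMessage *res = NULL;
    int rc, version = LDAP_VERSION3;

    rc = ldap_initialize(&ld, "ldap://localhost:389");
    if (rc != LDAP_SUCCESS) {
        fprintf(stderr, "ldap_initialize: %s\n", ldap_err2string(rc));
        return 1;
    }
    ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

    /* Anonymous search with a syntactically invalid entryUUID value. */
    rc = ldap_search_ext_s(ld, "dc=example,dc=com", LDAP_SCOPE_SUBTREE,
                           "(entryUUID=foobar)", NULL, 0,
                           NULL, NULL, NULL, LDAP_NO_LIMIT, &res);
    fprintf(stderr, "search result: %s\n", ldap_err2string(rc));

    ldap_msgfree(res);
    ldap_unbind_ext_s(ld, NULL, NULL);
    return 0;
}
--------------------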
[Issue 9397] New: LMDB: A second process opening a file with MDB_WRITEMAP can cause the first to SIGBUS
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9397
Issue ID: 9397
Summary: LMDB: A second process opening a file with
MDB_WRITEMAP can cause the first to SIGBUS
Product: LMDB
Version: 0.9.26
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: liblmdb
Assignee: bugs(a)openldap.org
Reporter: github(a)nicwatson.org
Target Milestone: ---
Created attachment 780
--> https://bugs.openldap.org/attachment.cgi?id=780&action=edit
Full reproduction of SIGBUS MDB_WRITEMAP issue (works on Linux only)
The fundamental problem is that an ftruncate() on Linux that makes a file
smaller will cause accesses past the new end of the file to SIGBUS (see the
mmap man page).
The sequence that causes a SIGBUS involves two processes.
1. The first process opens a new LMDB file with MDB_WRITEMAP.
2. The second process opens the same LMDB file with MDB_WRITEMAP and with an
explicit map_size smaller than the first process's map size.
* This causes an ftruncate that makes the underlying file *smaller*.
3. (Optional) The second process closes the environment and exits.
4. The first process opens a write transaction and writes a bunch of data.
5. The first process commits the transaction. This causes a memory read from
the mapped memory that's now past the end of the file. On Linux, this triggers
a SIGBUS.
Attached is code that fully reproduces the problem on Linux.
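For reference (not the attached reproducer), a condensed sketch of the sequence
above, with a hypothetical database path and made-up sizes; fork() is used only
to keep the sketch short, and the "./testdb" directory must already exist:
--------------------
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>
#include <lmdb.h>

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    char buf[4096];

    /* 1. First process: open with MDB_WRITEMAP and a large map size. */
    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 64UL * 1024 * 1024);
    mdb_env_open(env, "./testdb", MDB_WRITEMAP, 0664);

    if (fork() == 0) {
        /* 2.-3. Second process: same file, MDB_WRITEMAP, smaller map size.
         * Per the report, this ftruncates the file below the first
         * process's mapping. Close and exit. */
        MDB_env *env2;
        mdb_env_create(&env2);
        mdb_env_set_mapsize(env2, 1UL * 1024 * 1024);
        mdb_env_open(env2, "./testdb", MDB_WRITEMAP, 0664);
        mdb_env_close(env2);
        _exit(0);
    }
    wait(NULL);

    /* 4.-5. First process: write more data than the truncated size and
     * commit. Per the report, this touches mapped memory past the new
     * end of file and raises SIGBUS on Linux. */
    memset(buf, 'x', sizeof(buf));
    mdb_txn_begin(env, NULL, 0, &txn);
    mdb_dbi_open(txn, NULL, 0, &dbi);
    for (int i = 0; i < 2000; i++) {
        MDB_val key = { sizeof(i), &i };
        MDB_val val = { sizeof(buf), buf };
        mdb_put(txn, dbi, &key, &val, 0);
    }
    mdb_txn_commit(txn);

    mdb_env_close(env);
    return 0;
}
--------------------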
The most straightforward solution is to only allow ftruncate to *reduce* the
file size when the opening process is the only reader. Another possibility is
to check the file size, and ftruncate if necessary, every time a write
transaction is opened. A third possibility is to catch the SIGBUS signal.
Repro note: I used clone() to create the subprocess to most straightforwardly
demonstrate that the problem is not due to inherited file descriptors. The
problem still manifests when the processes are completely independent.
[Issue 9401] New: Fix ldap_install_tls function name
by openldap-its@openldap.org
https://bugs.openldap.org/show_bug.cgi?id=9401
Issue ID: 9401
Summary: Fix ldap_install_tls function name
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: libraries
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
The ldap_install_tls function is really an internal-only method for slapd. It
should be renamed to ldap_int_install_tls to reflect this fact, and the
documentation updated accordingly.