https://bugs.openldap.org/show_bug.cgi?id=7080
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Resolution|TEST |FIXED
--
You are receiving this mail because:
You are on the CC list for the issue.
https://bugs.openldap.org/show_bug.cgi?id=6949
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Resolution|TEST |FIXED
https://bugs.openldap.org/show_bug.cgi?id=5344
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Resolution|TEST |FIXED
Status|RESOLVED |VERIFIED
https://bugs.openldap.org/show_bug.cgi?id=9615
Issue ID: 9615
Summary: ppolicy pwcheck module should be a configuration
setting
Product: OpenLDAP
Version: 2.5.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
With the current implementation, the external pwcheck module for ppolicy is
dlopen()ed every time a given password policy is checked during a password
modify operation. This appears to be problematic because eventually systems
start reporting:
"check_password_quality: lt_dlopen failed: (ppm.so) file not found."
There's really no reason for this functionality to be implemented this way.
Instead, an external password policy check module should be defined as a
password policy config item, with each policy controlling whether or not to use
it. The external module would then only need to be opened a single time.
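The proposed behavior can be sketched as follows: resolve the module handle once, when the policy configuration is parsed, and reuse it for every subsequent check. The `ppolicy_cfg` struct and `load_pwcheck_module` function below are hypothetical illustrations, not slapd-ppolicy's actual internals.

```c
#include <dlfcn.h>
#include <stdio.h>
#include <stddef.h>

typedef struct ppolicy_cfg {
    const char *pwcheck_module;  /* e.g. "ppm.so", set per policy */
    void *pwcheck_handle;        /* cached handle, opened once */
} ppolicy_cfg;

/* Called once when the policy entry is parsed, not per operation. */
static int load_pwcheck_module(ppolicy_cfg *cfg)
{
    if (cfg->pwcheck_handle != NULL)
        return 0;                /* already loaded, reuse the handle */
    cfg->pwcheck_handle = dlopen(cfg->pwcheck_module, RTLD_NOW);
    if (cfg->pwcheck_handle == NULL) {
        fprintf(stderr, "load_pwcheck_module: %s\n", dlerror());
        return -1;
    }
    return 0;
}
```

With this shape, a transient failure to find ppm.so affects only policy load time, and the per-operation path never touches the dynamic loader.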
https://bugs.openldap.org/show_bug.cgi?id=8775
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RESOLVED |VERIFIED
Resolution|TEST |FIXED
https://bugs.openldap.org/show_bug.cgi?id=6916
Quanah Gibson-Mount <quanah(a)openldap.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|RESOLVED |VERIFIED
Resolution|TEST |FIXED
https://bugs.openldap.org/show_bug.cgi?id=9650
Issue ID: 9650
Summary: lloadd segfault on startup on systems using musl
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: lloadd
Assignee: bugs(a)openldap.org
Reporter: git(a)freundtech.com
Target Milestone: ---
I came across this problem while testing a fix for #9648. It seems to be
unrelated, so I'm creating a separate issue.
After compiling OpenLDAP 2.5.7 with the fix from #9648 and
--enable-balancer=yes on an Alpine Linux system, lloadd segfaults on startup.
The backtrace from a debug build is:
#0 0x00007ffff7fba852 in tss_get () from /lib/ld-musl-x86_64.so.1
#1 0x00005555555a844d in ldap_pvt_thread_key_getdata (key=0,
data=0x7fffffffe820) at thr_posix.c:360
#2 0x00005555555a7f9f in ldap_pvt_thread_pool_context () at tpool.c:1442
#3 0x0000555555583122 in slap_sl_context (ptr=0x7ffff7f606e0) at
sl_malloc.c:673
#4 0x0000555555580709 in ch_realloc (block=0x7ffff7f606e0, size=24) at
ch_malloc.c:81
#5 0x0000555555572303 in lload_open_listener (url=0x7ffff7c43c30 "ldap:///",
lud=0x7ffff7f60550,
listeners=0x7fffffffeb1c, cur=0x7fffffffeb20) at daemon.c:465
#6 0x00005555555733bf in lloadd_listeners_init (urls=0x5555555d1958
"ldap:///") at daemon.c:749
#7 0x000055555557f7a9 in main (argc=1, argv=0x7fffffffed08) at main.c:632
The problem seems to be that ldap_pvt_thread_pool_context in tpool.c is called
before ldap_int_thread_pool_startup.
ldap_int_thread_pool_startup calls ldap_pvt_thread_key_create, which on posix
calls pthread_key_create.
ldap_pvt_thread_pool_context calls ldap_pvt_thread_key_getdata, which on posix
calls pthread_getspecific.
If ldap_int_thread_pool_startup hasn't been called, pthread_getspecific is
passed an uninitialized key, which is always 0 (because the variable is static).
According to man 3 pthread_getspecific "The effect of calling
pthread_getspecific() or pthread_setspecific() with a key value not obtained
from pthread_key_create() or after key has been deleted with
pthread_key_delete() is undefined."
This seems to work fine on glibc, but crash on musl.
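The fix the report implies is to guarantee that pthread_key_create runs before the first pthread_getspecific, for example by guarding key creation with pthread_once. A minimal sketch of that pattern (names are illustrative, not libldap's):

```c
#include <pthread.h>
#include <stddef.h>

static pthread_key_t context_key;
static pthread_once_t context_key_once = PTHREAD_ONCE_INIT;

static void context_key_create(void)
{
    (void) pthread_key_create(&context_key, NULL);
}

/* Safe on both glibc and musl: the key is guaranteed to have been
 * created before pthread_getspecific ever sees it. */
static void *get_thread_context(void)
{
    (void) pthread_once(&context_key_once, context_key_create);
    return pthread_getspecific(context_key);
}
```

glibc happens to tolerate a zero-valued, never-created key, but POSIX makes it undefined behavior, which is why the same code crashes under musl.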
https://bugs.openldap.org/show_bug.cgi?id=9599
Issue ID: 9599
Summary: Additional balancing strategies for lloadd
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: lloadd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
At the moment, lloadd picks a backend on a round-robin basis, taking the first
one that can deal with the request. This has several disadvantages:
- there is no way to implement a failover setup where certain (e.g. local)
servers should be contacted as a priority
- all balancing is implicit in the limits imposed on servers: a
connection/server remains a candidate until those limits are reached
Allowing a priority and/or a different strategy to be set should address these.
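A priority/failover strategy of the kind described could look like the sketch below: prefer the lowest-priority-number backend that still has capacity, falling back to higher tiers only when the preferred tier is saturated. The `lload_backend` fields here are invented for illustration and are not lloadd's actual structures.

```c
#include <stddef.h>

typedef struct lload_backend {
    int priority;    /* lower value = preferred (e.g. local server) */
    int active_ops;  /* operations currently in flight */
    int max_ops;     /* configured limit; candidate until reached */
} lload_backend;

static lload_backend *pick_backend(lload_backend *b, size_t n)
{
    lload_backend *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (b[i].active_ops >= b[i].max_ops)
            continue;                 /* at its limit, not a candidate */
        if (best == NULL || b[i].priority < best->priority)
            best = &b[i];             /* better (lower) tier wins */
    }
    return best;
}
```

Round-robin within a tier could then be layered on top by rotating the scan's starting index among backends of the chosen priority.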
https://bugs.openldap.org/show_bug.cgi?id=9598
Issue ID: 9598
Summary: Restricted operation routing in lloadd
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: lloadd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Lloadd is not supposed to understand the LDAP protocol and is happy to route
operations to whichever connection is available, but this can backfire in
certain ways:
- there are controls and extended operations that establish a shared context
on the connection (paged results, TXN, ...)
- it might take a measurable amount of time before a write operation is
propagated to other servers
There should be a way to force some of these to a chosen backend/upstream
connection temporarily or even permanently based on the OID of the
extop/control in question.
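One possible shape for such a restriction table is sketched below: look up the OID of a control or extended operation and decide whether the operation must be pinned to the connection's chosen upstream, temporarily or permanently. The two OIDs shown are the real paged results control (RFC 2696) and Start Transaction extended operation (RFC 5805); the table and `pin_mode` enum are hypothetical, not lloadd code.

```c
#include <string.h>
#include <stddef.h>

enum pin_mode { PIN_NONE, PIN_TEMPORARY, PIN_PERMANENT };

struct oid_restriction {
    const char *oid;
    enum pin_mode mode;
};

static const struct oid_restriction restrictions[] = {
    { "1.2.840.113556.1.4.319", PIN_TEMPORARY },  /* paged results */
    { "1.3.6.1.1.21.1",         PIN_PERMANENT },  /* Start Transaction */
};

/* Decide how an operation carrying this OID must be routed. */
static enum pin_mode oid_pin_mode(const char *oid)
{
    for (size_t i = 0;
         i < sizeof(restrictions) / sizeof(restrictions[0]); i++)
        if (strcmp(restrictions[i].oid, oid) == 0)
            return restrictions[i].mode;
    return PIN_NONE;   /* free to route to any available upstream */
}
```

Making the table configurable would let deployments add site-specific OIDs without lloadd needing to understand the semantics of each control.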
https://bugs.openldap.org/show_bug.cgi?id=9597
Issue ID: 9597
Summary: Send a Notice of Disconnection to clients
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: lloadd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
When closing client connections, lloadd should try to send a Notice of
Disconnection (NoD) response first.