https://bugs.openldap.org/show_bug.cgi?id=10254
Issue ID: 10254
Summary: Allow upgrading password hash on bind
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: me(a)floriswesterman.nl
Target Milestone: ---
Many OpenLDAP installations are likely to contain relatively old password
hashes such as SSHA and CRYPT, as modern alternatives such as Argon2 are only
recent additions. Due to the nature of password hashes, it is of course not
possible to "unhash" the old values and rehash them with a more modern
algorithm. The presence of these old hashes is a liability in the event of an
information leak or breach.
Currently, the only way to upgrade a password hash is to wait for the user to
change their password. This can be sped up by expiring passwords and forcing
users to change them, but that is slow, and frequent password rotation is no
longer considered a best practice.
It would be a very helpful addition to support upgrading a password hash on
bind. This is already implemented in the 389 Directory Server:
https://www.port389.org/docs/389ds/design/pwupgrade-on-bind.html
Essentially, when a user binds, the password is checked as normal. On a
successful bind, the proposed feature would check the hash algorithm used for
the stored password; if it does not match the current `olcPasswordHash` value,
the user-provided password would be rehashed with the new algorithm and
stored. This way, old hashes are phased out more quickly without disturbing
users.
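To make the proposed check concrete, here is a small standalone C sketch of the
scheme comparison involved (illustrative only: needs_rehash() is a hypothetical
helper and the example hash values are made up; this is not slapd overlay code):
```
/* Illustrative sketch: decide whether a stored userPassword value would need
 * rehashing, by comparing its {SCHEME} prefix against the configured default
 * scheme (e.g. the olcPasswordHash value). needs_rehash() is hypothetical. */
#include <stdio.h>
#include <string.h>
#include <strings.h>

static int needs_rehash(const char *stored, const char *default_scheme)
{
    size_t len = strlen(default_scheme);   /* e.g. "{ARGON2}" */
    /* Rehash when the stored value does not start with the default scheme. */
    return strncasecmp(stored, default_scheme, len) != 0;
}

int main(void)
{
    const char *default_scheme = "{ARGON2}";
    const char *old_hash = "{SSHA}4Zt...";           /* made-up legacy value */
    const char *new_hash = "{ARGON2}$argon2id$...";  /* made-up current value */

    printf("%s -> rehash needed: %s\n", old_hash,
           needs_rehash(old_hash, default_scheme) ? "yes" : "no");
    printf("%s -> rehash needed: %s\n", new_hash,
           needs_rehash(new_hash, default_scheme) ? "yes" : "no");
    return 0;
}
```
On a successful bind, the overlay would then rehash the plaintext credential
with the default scheme and update userPassword via an internal modify; that
write-back path is not shown here.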
https://bugs.openldap.org/show_bug.cgi?id=9343
Issue ID: 9343
Summary: Expand ppolicy policy configuration to allow URL
filter
Product: OpenLDAP
Version: 2.5
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Currently, ppolicy only supports a single global default policy; beyond that, a
policy must be manually added to a given user entry if that user is supposed to
have something other than the default policy.
Also, some sites want no default policy at all, with a policy applied only to a
specific subset of users.
For both of these cases, it would be helpful if it were possible to configure a
policy that applies to a set of users via an LDAP URL, similar to the way
dynlist creates groups of users.
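To illustrate what such a setting could look like, the sketch below uses
libldap's ldap_url_parse() to split a dynlist-style LDAP URL into the base,
scope and filter that would select the affected users (the URL itself and the
idea of a configuration attribute carrying it are assumptions, not an existing
ppolicy option):
```
/* Illustrative only: parse a dynlist-style LDAP URL of the kind a
 * per-policy "apply to these users" setting could carry.
 * Build with: cc url_demo.c -lldap */
#include <stdio.h>
#include <ldap.h>

int main(void)
{
    /* Hypothetical URL selecting the users a policy would apply to. */
    const char *url =
        "ldap:///ou=people,dc=example,dc=com??sub?(employeeType=contractor)";
    LDAPURLDesc *lud = NULL;

    if (ldap_url_parse(url, &lud) != LDAP_URL_SUCCESS) {
        fprintf(stderr, "not a valid LDAP URL\n");
        return 1;
    }
    printf("base:   %s\n", lud->lud_dn ? lud->lud_dn : "");
    printf("scope:  %d\n", lud->lud_scope);
    printf("filter: %s\n", lud->lud_filter ? lud->lud_filter : "");
    ldap_free_urldesc(lud);
    return 0;
}
```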
https://bugs.openldap.org/show_bug.cgi?id=10304
Issue ID: 10304
Summary: Unable to remove item from directory as part of
transaction if it is the last item in that directory
Product: OpenLDAP
Version: 2.5.13
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: sophie.elliott(a)arcticlake.com
Target Milestone: ---
I am running my LDAP server on Debian 11.3, with the mdb backend, using the
backported OpenLDAP version 2.5.13. I am not 100% certain whether this is an
issue with OpenLDAP or liblmdb, but I have been running the tests in the repo
and the liblmdb tests appear to pass, so I think it's in OpenLDAP itself.
I have been performing a transaction and deleting entries from a directory
during this transaction. This works fine if the entry that I am deleting isn't
the last one in its directory, but when it is, I get an MDB_NOTFOUND error on
the transaction commit call and the delete doesn't go through. Here is an
excerpt of the logs when this happens:
```
67a64334.14e1fc32 0x766ad2a00700 => index_entry_del( 108,
"accessGroupID=f23de82f-3a1c-4f88-86bb-bb07f9a0992d,o=[COMPANY],ou=accessGroups,dc=local,dc=[COMPANY],dc=com"
)
67a64334.14e21912 0x766ad2a00700 mdb_idl_delete_keys: 6c [62d34624]
67a64334.14e22812 0x766ad2a00700 <= index_entry_del( 108,
"accessGroupID=f23de82f-3a1c-4f88-86bb-bb07f9a0992d,o=[COMPANY],ou=accessGroups,dc=local,dc=[COMPANY],dc=com"
) success
67a64334.14e23a91 0x766ad2a00700 mdb_delete: txn_commit failed: MDB_NOTFOUND:
No matching key/data pair found (-30798)
```
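For reference, the bare liblmdb pattern in question (put a single key into a
named sub-database, then delete it and commit in a later write transaction)
looks roughly like the sketch below; this is illustrative only, with made-up
key and database names, and is not the back-mdb code path:
```
/* Illustrative liblmdb sketch: delete the only remaining key in a named
 * sub-database inside a write transaction, then commit.
 * "./testdb" must exist as an empty directory. Build: cc lmdb_demo.c -llmdb */
#include <stdio.h>
#include <string.h>
#include "lmdb.h"

#define CHECK(call, what) \
    do { int rc_ = (call); \
         if (rc_) { fprintf(stderr, "%s: %s\n", (what), mdb_strerror(rc_)); return 1; } \
    } while (0)

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_val key, data;

    CHECK(mdb_env_create(&env), "mdb_env_create");
    CHECK(mdb_env_set_maxdbs(env, 4), "mdb_env_set_maxdbs");
    CHECK(mdb_env_open(env, "./testdb", 0, 0664), "mdb_env_open");

    /* Insert a single key into a named sub-database. */
    CHECK(mdb_txn_begin(env, NULL, 0, &txn), "mdb_txn_begin");
    CHECK(mdb_dbi_open(txn, "groups", MDB_CREATE, &dbi), "mdb_dbi_open");
    key.mv_data = "only-entry"; key.mv_size = strlen("only-entry");
    data.mv_data = "value";     data.mv_size = strlen("value");
    CHECK(mdb_put(txn, dbi, &key, &data, 0), "mdb_put");
    CHECK(mdb_txn_commit(txn), "mdb_txn_commit (put)");

    /* Delete the last (and only) key in the sub-database, then commit. */
    CHECK(mdb_txn_begin(env, NULL, 0, &txn), "mdb_txn_begin");
    CHECK(mdb_del(txn, dbi, &key, NULL), "mdb_del");
    CHECK(mdb_txn_commit(txn), "mdb_txn_commit (del)");

    mdb_env_close(env);
    return 0;
}
```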
Please let me know if I should submit this issue elsewhere, or if this is
something that has already been fixed in a more recent version. I'm also happy
to provide more details if necessary. Thank you!
https://bugs.openldap.org/show_bug.cgi?id=10274
Issue ID: 10274
Summary: Replication issue on MMR configuration
Product: OpenLDAP
Version: 2.5.14
Hardware: All
OS: Linux
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: falgon.comp(a)gmail.com
Target Milestone: ---
Created attachment 1036
--> https://bugs.openldap.org/attachment.cgi?id=1036&action=edit
In this attachment you will find 2 OpenLDAP configurations for 2 instances, a
SLAMD configuration example, 5 screenshots showing the issue, and one text
file explaining what you see.
Hello, we are opening this issue as a follow-up to the initial post on the
openldap-technical list:
https://lists.openldap.org/hyperkitty/list/openldap-technical@openldap.org/…
Issue:
We are working on a project and have come across a replication issue after
performance testing.
*Configuration:*
RHEL 8.6
OpenLDAP 2.5.14
*MMR-delta* configuration on multiple servers (attached)
300,000 users configured and used for the tests
*olcLastBind: TRUE*
SLAMD used for load testing
*Problem description:*
We are currently running performance and resilience tests on our infrastructure
using the SLAMD tool, configured to perform BINDs and MODs on a defined range
of accounts.
We use a load balancer (VIP) to spread the load equally across all of our
servers (though it is also possible to run the performance tests directly
against each directory).
With our current infrastructure we're able to perform approximately 300
MOD/BIND operations per second. Beyond that, delays start to build up and we
can randomly run into the following issue.
When we run performance tests that exceed our write capacity, replication
between servers can randomly break, with directories unable to catch up on
their replication delay.
The affected directories keep updating their contextCSNs, but extremely slowly
(they appear to be frozen). From then on it is impossible for them to catch up
again, even with no incoming traffic.
A restart of the instance is required to perform a full refresh and resolve the
incident.
We have enabled synchronization logging and see no error or refresh messages
indicating a problem (we can provide logs if necessary).
We suspect a write collision or a replication conflict, but nothing of the sort
is ever written to our sync logs.
We've run a lot of tests. For example, when we run a performance test against a
single live server, we don't reproduce the problem.
Another example: if we define different account ranges for each server in
SLAMD, we don't reproduce the problem either.
If we use only one account for the whole range, we also don't reproduce the
problem.
______________________________________________________________________
I have added some screenshots to the attachment to show the issue, together
with explanations.
______________________________________________________________________
*Symptoms:*
One or more directories can no longer replicate normally after the performance
test ends.
No apparent error logs.
A restart of the instances is needed to resolve the problem.
*How to reproduce the problem:*
Have at least two servers in MMR mode.
Set LastBind to TRUE.
Run a SLAMD test through a load balancer (in bandwidth mode), OR start multiple
SLAMD tests at the same time, one per server, all using the same account range.
Exceed the maximum write capacity of the servers.
*SLAMD configuration:*
authrate.sh --hostname ${HOSTNAME} --port ${PORTSSL} \
--useSSL --trustStorePath ${CACERTJKS} \
--trustStorePassword ${CACERTJKSPW} --bindDN "${BINDDN}" \
--bindPassword ${BINDPW} --baseDN "${BASEDN}" \
--filter "(uid=[${RANGE}])" --credentials ${USERPW} \
--warmUpIntervals ${WARMUP} \
--numThreads ${NTHREADS} ${ARGS}
https://bugs.openldap.org/show_bug.cgi?id=10308
Issue ID: 10308
Summary: Implement cn=monitor for back-asyncmeta
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: nivanova(a)symas.com
Target Milestone: ---
Currently back-asyncmeta has no cn=monitor capabilities. It would be useful to
implement some, specifically for monitoring targets and target connection
states.
https://bugs.openldap.org/show_bug.cgi?id=10169
Issue ID: 10169
Summary: Add support for token only authentication with otp
overlay
Product: OpenLDAP
Version: 2.6.6
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: overlays
Assignee: bugs(a)openldap.org
Reporter: quanah(a)openldap.org
Target Milestone: ---
Currently the OTP overlay requires password + token. It would be nice to be
able to configure it to run in a token-only mode, similar to the slapo-totp
overlay in contrib. This would allow us to have a project-supported solution
and retire that contrib module.
https://bugs.openldap.org/show_bug.cgi?id=10301
Issue ID: 10301
Summary: Use assertion control in lastbind chaining
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: ondra(a)mistotebe.net
Target Milestone: ---
Take a setup with a bunch of consumers tracking lastbind information and
replicating it back from the provider. If a client sends a lot of successful
binds to a consumer in a very short window, the changes might not have had a
chance to replicate back down, so each of these binds has to trigger a new
modification to be forwarded.
This results in a lot of DB churn and replication traffic that is actually
meaningless (the pwdLastChange values before and after each of the mods will be
the same).
We probably can't avoid having to send something, but the change we send could
have an assertion control attached that lets the provider skip it if
pwdLastChange>=new_value, saving on all of the additional processing (and
additional useless replication traffic).
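As an illustration of the idea, a chained modify could carry an RFC 4528
assertion along these lines (a minimal libldap sketch; the DN, timestamp,
attribute usage and the omitted bind step are illustrative assumptions, not the
actual chaining code):
```
/* Illustrative sketch: send the lastbind modify with an assertion control so
 * the provider skips it when the stored value is already >= the new value.
 * Build with: cc assert_demo.c -lldap -llber */
#include <stdio.h>
#include <ldap.h>

int main(void)
{
    LDAP *ld;
    LDAPControl *assert_ctrl = NULL;
    LDAPControl *sctrls[2] = { NULL, NULL };
    char *vals[] = { "20240101120000Z", NULL };   /* hypothetical new value */
    LDAPMod mod;
    LDAPMod *mods[] = { &mod, NULL };
    int rc, version = LDAP_VERSION3;

    if (ldap_initialize(&ld, "ldap://provider.example.com") != LDAP_SUCCESS)
        return 1;
    ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);
    /* (bind step omitted for brevity) */

    mod.mod_op = LDAP_MOD_REPLACE;
    mod.mod_type = "pwdLastChange";       /* attribute as named in the issue */
    mod.mod_values = vals;

    /* Only apply the modify if the stored value is still older. */
    rc = ldap_create_assertion_control(ld,
            "(!(pwdLastChange>=20240101120000Z))", 1, &assert_ctrl);
    if (rc != LDAP_SUCCESS)
        return 1;
    sctrls[0] = assert_ctrl;

    rc = ldap_modify_ext_s(ld, "uid=someone,dc=example,dc=com",
                           mods, sctrls, NULL);
    if (rc == LDAP_ASSERTION_FAILED)
        printf("provider already has a newer value; modify skipped\n");
    else if (rc != LDAP_SUCCESS)
        printf("modify failed: %s\n", ldap_err2string(rc));

    ldap_control_free(assert_ctrl);
    ldap_unbind_ext_s(ld, NULL, NULL);
    return 0;
}
```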
https://bugs.openldap.org/show_bug.cgi?id=9186
Bug ID: 9186
Summary: RFE: More metrics in cn=monitor
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: ---
Component: backends
Assignee: bugs(a)openldap.org
Reporter: michael(a)stroeder.com
Target Milestone: ---
Currently I'm grepping metrics from syslog with mtail:
https://gitlab.com/ae-dir/ansible-ae-dir-server/-/blob/master/templates/mta…
With the new binary logging this is no longer possible. Thus it would be nice
if cn=monitor provided more metrics.
1. Overall connection count per listener, starting at 0 when slapd starts. This
would be a simple counter added to the entries under cn=Listeners,cn=Monitor
(e.g. cn=Listener 0,cn=Listeners,cn=Monitor).
2. Counters for the various "deferring" messages, separated by the reason for
deferring.
3. Counters for all possible result codes. In my mtail program I also label
these with the result type.
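For context, counters that back-monitor already exposes can be scraped with a
plain search; the sketch below reads monitorCounter from
cn=Total,cn=Connections,cn=Monitor with libldap (the hostname and anonymous
access are assumptions), which is the kind of read an exporter would also do
for the requested per-listener and result-code counters:
```
/* Illustrative sketch: read the monitorCounter attribute of
 * cn=Total,cn=Connections,cn=Monitor. Build: cc monitor_demo.c -lldap -llber */
#include <stdio.h>
#include <ldap.h>

int main(void)
{
    LDAP *ld;
    LDAPMessage *res = NULL, *e;
    char *attrs[] = { "monitorCounter", NULL };
    struct berval **vals;
    int rc, version = LDAP_VERSION3;

    if (ldap_initialize(&ld, "ldap://localhost") != LDAP_SUCCESS)
        return 1;
    ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

    rc = ldap_search_ext_s(ld, "cn=Total,cn=Connections,cn=Monitor",
                           LDAP_SCOPE_BASE, "(objectClass=*)",
                           attrs, 0, NULL, NULL, NULL, 0, &res);
    if (rc != LDAP_SUCCESS) {
        fprintf(stderr, "search failed: %s\n", ldap_err2string(rc));
        return 1;
    }

    for (e = ldap_first_entry(ld, res); e; e = ldap_next_entry(ld, e)) {
        vals = ldap_get_values_len(ld, e, "monitorCounter");
        if (vals && vals[0])
            printf("total connections: %s\n", vals[0]->bv_val);
        ldap_value_free_len(vals);
    }

    ldap_msgfree(res);
    ldap_unbind_ext_s(ld, NULL, NULL);
    return 0;
}
```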
https://bugs.openldap.org/show_bug.cgi?id=10293
Issue ID: 10293
Summary: Log operations generated by syncrepl at STATS level
Product: OpenLDAP
Version: unspecified
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: nivanova(a)symas.com
Target Milestone: ---
Similar to how incoming operations are logged at STATS level, operations
generated by syncrepl should be logged as well.
https://bugs.openldap.org/show_bug.cgi?id=10092
Issue ID: 10092
Summary: Local logging doesn't build on Windows
Product: OpenLDAP
Version: 2.6.6
Hardware: All
OS: All
Status: UNCONFIRMED
Keywords: needs_review
Severity: normal
Priority: ---
Component: slapd
Assignee: bugs(a)openldap.org
Reporter: hyc(a)openldap.org
Target Milestone: ---
slapd/logging.c uses writev() to write a prefix and the log message together in
one call. This function doesn't exist on Windows. The closest equivalent,
WriteFileGather, only works on page-sized, aligned writes. On Windows, the only
way to write as desired is to copy the message into a new buffer first.
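A minimal sketch of the copy-into-one-buffer fallback being described
(illustrative only; the buffer size, prefix format and function name are
assumptions, not the actual slapd/logging.c code):
```
/* Illustrative fallback: where writev() is unavailable (Windows), copy the
 * prefix and message into one buffer and issue a single write, so the two
 * parts cannot be interleaved with other writers. Not the slapd code. */
#include <stdio.h>
#include <string.h>

#ifdef _WIN32
#include <io.h>
#define log_write(fd, buf, len) _write((fd), (buf), (unsigned int)(len))
#else
#include <sys/uio.h>
#include <unistd.h>
#endif

static int log_line(int fd, const char *prefix, const char *msg)
{
#ifdef _WIN32
    char buf[4096];                       /* assumed maximum line length */
    int len = snprintf(buf, sizeof(buf), "%s%s\n", prefix, msg);
    if (len < 0 || (size_t)len >= sizeof(buf))
        return -1;
    return log_write(fd, buf, len) == len ? 0 : -1;
#else
    /* On POSIX, writev() emits all pieces in a single call. */
    struct iovec iov[3];
    ssize_t total = (ssize_t)(strlen(prefix) + strlen(msg) + 1);
    iov[0].iov_base = (void *)prefix; iov[0].iov_len = strlen(prefix);
    iov[1].iov_base = (void *)msg;    iov[1].iov_len = strlen(msg);
    iov[2].iov_base = "\n";           iov[2].iov_len = 1;
    return writev(fd, iov, 3) == total ? 0 : -1;
#endif
}

int main(void)
{
    return log_line(2, "slapd[1234]: ", "example log message");
}
```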