Full_Name: Emily Backes
Submission from: (NULL) (184.108.40.206)
Similar to the recent overlay fixes that prevent updating entryCSN/contextCSN on
local changes, delete operations can cause inappropriate CSN setting on remote
servers.
Given a multi-master setup (normal syncrepl tested) in which each server has a
serverID set and no overlays are loaded other than syncprov, run two or more
threads of delete operations; three or more threads seem to reproduce the
problem most reliably on the systems I've tested.
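A minimal sketch of the kind of configuration described above (serverID set, syncprov as the only overlay); the suffix, hostname, rid, and credentials are illustrative, not taken from the report:

```
# slapd.conf fragment for one master (use serverID 2 on the peer)
serverID  1
database  mdb
suffix    "dc=example,dc=com"
overlay   syncprov            # the only overlay loaded
syncrepl  rid=001
          provider=ldap://server2.example.com
          type=refreshAndPersist
          searchbase="dc=example,dc=com"
          bindmethod=simple
          binddn="cn=replicator,dc=example,dc=com"
          credentials=secret
mirrormode on
```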
As the deletes are happening, the server1 side should of course show its
contextCSN advancing:
This should be mirrored on the server2 side, with its contextCSN exactly
matching the set of CSNs from the server1 side. Instead, after enough
concurrent deletes to hit the race:
This happens even though server2 has never received any local write operations
(or indeed any connection other than the syncrepl search from server1 and my
searches to retrieve contextCSN). Again, no overlays are loaded.
This breaks syncrepl's assumptions, and the resulting CSN desync can lead to
further replication problems.
Working on tracing out exactly where it goes awry...
Full_Name: Aitor Carrera
Submission from: (NULL) (220.127.116.11)
When we use a non-root user to bind, with multiple threads and some concurrent
operations:
1.- In the meta_back_bind_op_result function, in back-meta's bind.c, the assertion
"assert( LDAP_BACK_CONN_BINDING( msc ) );" evaluates to false.
2.- Next, meta_back_cancel (bind.c) is called, then ldap_abandon_ext, and another
assertion evaluates to false:
slapd: sasl.c:74: ldap_sasl_bind: Assertion `ld != ((void *)0)' failed.
This crashes slapd.
> In addition, random seed has to be initialized, otherwise the behavior
> is predictable.
I didn't think it was too big a deal, since it doesn't have to be
cryptographically random. I can submit a patch to call srandom().
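A minimal sketch of what such a patch could look like; `seed_once` is a hypothetical helper name, and seeding from the time and PID is just one reasonable choice, since cryptographic quality isn't required:

```c
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical one-time seeding helper: seeds the libc PRNG the
 * first time it is called and is a no-op afterwards. */
static void
seed_once( void )
{
	static int seeded = 0;

	if ( !seeded ) {
		/* Doesn't need to be cryptographically random. */
		srandom( (unsigned int)( time( NULL ) ^ getpid() ) );
		seeded = 1;
	}
}
```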
> On the other side, I do not think that calling srand()
> in libldap is a good idea. The client application should do that. Or
> is there any other option?
The function ldap_domain2hostlist returns a list of strings; there isn't any
weight/priority information for the client to act on. Furthermore, since the
weight/priority section was commented out, I figured the library was the right
place to do it.
The algorithm specified in RFC 2782 is detailed and requires generating a
random number. I didn't want to implement it fully because it results in
sorting once by priority and once by weight, instead of one sort that does
both.
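For context, the RFC 2782 procedure under discussion can be sketched as follows; the struct and function names are illustrative, not the libldap API:

```c
#include <stdlib.h>

/* Illustrative SRV record; field names are made up for this sketch. */
struct srv_rec {
	int priority;
	int weight;
	const char *host;
};

static int
cmp_priority( const void *a, const void *b )
{
	return ((const struct srv_rec *)a)->priority
		- ((const struct srv_rec *)b)->priority;
}

/* Reorder recs[] per RFC 2782: ascending priority, then repeated
 * weighted random selection within each priority group. */
static void
rfc2782_order( struct srv_rec *recs, int n )
{
	int i, j, k;

	/* First sort: ascending priority. */
	qsort( recs, n, sizeof( *recs ), cmp_priority );

	for ( i = 0; i < n; ) {
		/* [i, j) is one priority group. */
		for ( j = i + 1; j < n && recs[j].priority == recs[i].priority; j++ )
			;
		/* Pick each slot by weighted random selection. */
		for ( ; i < j - 1; i++ ) {
			long total = 0, pick, run = 0;

			for ( k = i; k < j; k++ )
				total += recs[k].weight;
			pick = total ? random() % ( total + 1 ) : 0;
			for ( k = i; k < j; k++ ) {
				run += recs[k].weight;
				if ( run >= pick ) {
					struct srv_rec tmp = recs[i];
					recs[i] = recs[k];
					recs[k] = tmp;
					break;
				}
			}
		}
		i = j;
	}
}
```

Note the two passes per group (sum the weights, then walk the group again to find the pick): that, plus the separate priority sort, is the extra work the message refers to compared with a single combined sort.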
James M. Leddy
Technical Account Manager
Red Hat Inc.
Full_Name: Hugo Monteiro
OS: Debian Squeeze 64bits
Submission from: (NULL) (18.104.22.168)
Performing a substring query on a locally stored attribute of a translucent
database that has only an equality index will crash slapd.
We have a translucent database set up to handle Samba attributes, and we observed
that some client operations would crash slapd (such as performing user
enumeration while changing folder ACLs).
The last logged query was
As per the Samba documentation, we only had an equality index on the sambaSID
attribute. We then reconfigured that attribute to use eq,sub,pres indexes, ran
slapindex on the database, and slapd stopped crashing on that query.
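For reference, the workaround described above corresponds to changing the index directive for the attribute in slapd.conf (the attribute name is from the report; the surrounding database stanza is assumed), then re-running slapindex with slapd stopped:

```
# before, per the Samba documentation:
#index  sambaSID  eq
# after: add substring and presence indexing
index   sambaSID  eq,sub,pres
```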
On Wednesday 24 August 2011 21:28:03, James Leddy wrote:
> on incoming ftp
There is a typo in the patch:
+ strncpy(hostent_head[hostent_count].hostname, host,255);
In addition, random seed has to be initialized, otherwise the behavior is
predictable. On the other side, I do not think that calling srand() in libldap
is a good idea. The client application should do that. Or is there any other
option?
Full_Name: Nick Urbanik
Version: 2.3.43-12 and 2.4.23-15
OS: CentOS 5
Submission from: (NULL) (22.214.171.124)
To my great surprise, OpenLDAP logs nothing at info priority, but excessive
amounts at debug priority, even when the loglevel is set to stats.
Here are examples of the size of log files holding *nothing* but OpenLDAP
stats logging in a production server:
# ls -lSr | tail -n4
-rw------- 1 root root 7160148590 Jul 28 10:48 ldap
-rw------- 1 root root 24102619198 Jul 26 04:02 ldap.3
-rw------- 1 root root 25034865261 Jul 27 04:02 ldap.2
-rw------- 1 root root 25504838803 Jul 28 04:02 ldap.1
$ bc -ql
scale = 6
25504838803 / 2^30
23.753232
In other words, we were getting nearly 24 gigabytes of logging *each* *day*.
I raised this on the openldap-technical mailing list:
but found that this is by design:
OpenLDAP should log at info priority at least the following:
* when it is starting up
* when it is shutting down cleanly
* any errors, indicating issues that the sys admin should pay attention to
* (perhaps): one line for each connection.
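For reference, a typical configuration that produces the behavior described; slapd logs to the LOCAL4 syslog facility by default, and as noted above the stats messages arrive at debug priority, so a syslog rule like the second fragment below captures all of them:

```
# slapd.conf
loglevel stats

# /etc/syslog.conf
local4.debug    /var/log/ldap
```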