I use the following version:
- OpenLDAP (2.4.35), but I have tried 2.4.39 as well
- Cyrus SASL (2.1.26)
- OpenSSL (1.0.1h)
- Heimdal (1.5.2, I believe)
On Mon, Oct 6, 2014 at 1:30 PM, Quanah Gibson-Mount <quanah(a)zimbra.com>
> --On Monday, October 06, 2014 2:27 PM -0400 Kristof Takacs <
> kristof.takacs(a)gmail.com> wrote:
> I use the following open source libraries:
>> - OpenLDAP
>> - Cyrus SASL
>> - OpenSSL
>> - Heimdal
> It is always critical to list the versions of software you are using.
> Please do so.
> Quanah Gibson-Mount
> Server Architect
> Zimbra, Inc.
> Zimbra :: the leader in open source messaging and collaboration
Hi folks. I am having a problem with index creation and am wondering if
anyone here has seen this or knows what may be happening.
We recently deployed a new proxy server product in our environment and have
been working with the vendor to resolve some problems. This device connects
to our existing LDAP server environment through a new database we added, per
the vendor's requirements. One of the problems I am seeing is that, since
the addition of this device, we are getting a lot of "<=
bdb_equality_candidates: (uniqueMember) not indexed" messages showing up in
the slapd log files. To resolve this, I've attempted to create a
new equality index on this attribute; however, no matter what I do, slapindex
will not create an index! The following steps were done:
1. Shut down Slapd
2. Modify the slapd.conf file and add an "index uniqueMember eq" entry to
the current list of indexes.
3. Save the slapd.conf file.
4. Navigate to the directory containing the database files and delete all
existing index "bdb" files.
5. Run slapindex -f <path to slapd.conf> -b <suffix of database to reindex>
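As a concrete sketch of the steps above (the paths and suffix here are placeholders, not taken from the poster's actual setup):

```shell
# slapd.conf: add an equality index for uniqueMember to the existing list, e.g.
#   index uniqueMember eq

# With slapd stopped, rebuild the indexes offline.
# -f points at the slapd.conf in use; -b selects the database by suffix.
slapindex -f /etc/openldap/slapd.conf -b "dc=example,dc=com"

# slapindex runs as the invoking user, so make sure the rebuilt index
# files are owned by the slapd user before restarting the server.
chown ldap:ldap /var/lib/ldap/*.bdb
```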
I've done this several times, once without deleting the current index files
and then a few more times after I've deleted the current index files. The
slapindex tool will recreate all of the indexes except the "uniqueMember"
index. I do not see an index file for it, and of course starting slapd
back up results in the same messages in the log.
So, what could be causing this? Is it possible that the vendor is using a
filter improperly? If there is no data in the uniqueMember attribute for any
of the records, would that prevent slapindex from creating an index file?
Any help would be appreciated!
Just an FYI... As you may know from reading the LMDB design papers
( http://symas.com/mdb/#pubs ), LMDB is crash-proof by design. A Symas client
already confirmed this in their own crash testing last year
https://symas.com/carrier-grade-stability-and-performance/ and it has again
been verified by a research group at the University of Wisconsin. Their
findings are being presented at the USENIX OSDI conference this week, and you
can read the paper here.
They report a single "vulnerability" in LMDB: LMDB depends on the
atomicity of a single-sector 106-byte write for its transaction commit
semantics. Their claim is that not all storage devices may guarantee the
atomicity of such a write. While I myself filed an ITS on this very topic a
year ago, http://www.openldap.org/its/index.cgi/Incoming?id=7668
the reality is that all storage devices made in the past 20+ years do
guarantee atomicity of single-sector writes. You would have to go back at
least 30 years to find an HDD where this is not true.
The UWisc researchers' point is that we cannot say what behaviors will be
exported by up-and-coming nonvolatile RAM mechanisms (e.g. MRAM or PCRAM); if
they offer byte-addressability instead of sector-addressability then there's a
potential for these writes to become non-atomic in the future.
At any rate, this issue has zero relevance today, and we are monitoring all of
the upcoming NVRAM technologies closely for future developments.
The other takeaway from these reports is how critically unreliable many other
popular systems are. If you use any of the other projects that were included
in this research, you owe it to yourself to rethink that usage, or raise
discussions with their developers on how they plan to address their many
vulnerabilities.
As an interesting footnote, BerkeleyDB was included in the original testing,
and while it was in their preliminary results
http://wisdom.cs.wisc.edu/workshops/spring-14/talks/Thanu.pdf it is now
conspicuously absent from the final paper. Regardless, the OpenLDAP Project is
deprecating use of BerkeleyDB for multiple reasons. Again, if you're still
using BDB you need to take a moment to re-evaluate your project.
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
In our testing lab I've implemented 2 OpenLDAP master servers (both
configured in mirror mode); 1 HA server with HAProxy (to provide load
balancing and TCP proxying to both masters); and 1 OpenLDAP slave which
synchronizes its data from the masters through the HA server. All writes
sent to the slave (adding/editing OUs and users) are forwarded to the masters.
It works as it is supposed to.
I've also implemented the ppolicy overlay on this architecture, with
"olcPPolicyForwardUpdates" set to TRUE on the slave. All authentication
failures made on the slave are sent to the masters so they can manage
the policy. Here is the issue:
On the slave, after a few write operations, its database is missing its base
entry (objectClass: dcObject). On the master, an ldapsearch against its base
returns the following:
# admin, example.com
description: LDAP administrator
# Test, example.com
On the slave, an ldapsearch against its base returns the following:
# Test, example.com
I can still fetch objects and make modifications to them from the slave,
but tools like phpLDAPadmin, which construct the tree from "dcObject",
show the following message: "This base cannot be created with PLA."
All these servers are Debian sid, with slapd 2.4.39.
I've been searching the Internet for a way to solve this issue, without any
luck. Can someone point me in the right direction?
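For reference, the consumer-side ppolicy configuration described above looks roughly like the following cn=config LDIF; the database DN, policy DN, and suffix here are illustrative placeholders, not the poster's actual values:

```ldif
dn: olcOverlay={0}ppolicy,olcDatabase={1}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcPPolicyConfig
olcOverlay: {0}ppolicy
olcPPolicyDefault: cn=default,ou=policies,dc=example,dc=com
# Forward policy state updates (e.g. pwdFailureTime) from this
# consumer back to the provider instead of writing them locally.
olcPPolicyForwardUpdates: TRUE
```

Note that, per the slapo-ppolicy documentation, forwarding these updates relies on the chain overlay being configured on the consumer so the writes actually reach the provider.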
I have been working on extending an application that searches an LDAP server
with Kerberos support. I can now bind and then search using the following:
- Simple Bind
- Simple Bind with TLS
- Kerberos Bind
I am having issues when I have Kerberos bind and TLS turned on.
I can see the Kerberos ticket established and the SASL bind to the LDAP
server complete, but the LDAP search fails because the message cannot be
parsed by the server.
I use the following open source libraries:
- Cyrus SASL
In my debugging, I noticed that different writers are
installed in the chain. With debugging turned on, I see these
writers called in the order listed:
- simple with TLS: sb_debug_write() -> tlso_sb_write() -> sb_debug_write()
- Kerberos Bind: sb_debug_write() -> sb_sasl_generic_write() ->
sb_debug_write() -> sb_stream_write()
- Kerberos + TLS: sb_debug_write() -> sb_sasl_generic_write() ->
sb_debug_write() -> tlso_sb_write() -> sb_debug_write() -> sb_stream_write()
Is this a use case that is supposed to work? What could I be missing?
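One thing that may be worth checking (my assumption, not a confirmed diagnosis): in the Kerberos + TLS chain above, the SASL security layer wraps data that TLS then encrypts again, and that double wrapping trips up some deployments. A common sketch is to restrict GSSAPI to authentication only and let TLS provide confidentiality, e.g. in the client's ldap.conf:

```
# ldap.conf (client side): negotiate no SASL security layer,
# relying on TLS for confidentiality/integrity instead.
SASL_SECPROPS minssf=0,maxssf=0
```

The same can be requested programmatically via ldap_set_option() with LDAP_OPT_X_SASL_SECPROPS and the property string "minssf=0,maxssf=0".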
We have syncrepl working and recently ran into the following problem: when
renaming an object, the operation succeeds, but syncrepl gets no events for
the children of the renamed object.
How do I fix this? Is it an LDAP server configuration issue, or am I doing
something wrong with the syncrepl client? A quick Google search turned up nothing.
Thanks for your attention.
(This is about OpenLDAP 2.4.31.)
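For context, a typical consumer-side syncrepl stanza looks roughly like this; every value below is an illustrative placeholder, not the poster's configuration. Whether and how subtree renames propagate can depend on the refresh mode and the server version in use:

```
# slapd.conf on the consumer (illustrative values only)
syncrepl rid=001
         provider=ldap://master.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         scope=sub
         retry="30 10 300 +"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
```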
I'm running 2.4.39 with MirrorMode replication. I've noticed that
occasionally, when making schema changes, a change will hang indefinitely
for no apparent reason. I've also found (possibly as a related cause) that if
I try to restart one of the three servers in this replica set, it will
occasionally log the following, which causes it to hang indefinitely:
Oct 1 08:55:37 ldap-03 slapd: daemon: shutdown requested and
Oct 1 08:55:37 ldap-03 slapd: slapd shutdown: waiting for 2
operations/tasks to finish
The other hosts show the disconnection and try to reconnect; the only way
to fix it seems to be either force-killing this server or restarting all the
servers in the replica set.
Any ideas? It's a fairly standard setup with MirrorMode as per the docs.
I've seen that 2.4.40 has been released, but I haven't scheduled an upgrade
yet, as nothing in the changelog appeared to be related.