The Admin Guide has still not been updated with all of the relevant changes,
so here are some notes on the new features in the 2.4 release. I believe all of
the manpages are up to date, so you can get specifics from them.
More complete cn=config functionality:
There is a new slapd-config(5) manpage for the cn=config backend.
The original design called for auto-renaming of config entries when you
insert or delete entries with ordered names, but that was not implemented in
2.3. It is now in 2.4. This means, e.g., that if you already have a database
entry such as olcDatabase={1}bdb and you add another entry using that same
ordered name, the new BDB database will be inserted in slot 1 and all
following databases will be bumped down one, so the original BDB database
will now be named olcDatabase={2}bdb.
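As an illustrative sketch (the suffix and directory values are invented; the {n} naming follows slapd-config(5)), such an insertion is just an ordinary LDAP add against cn=config:

```ldif
# Assume the config tree already contains olcDatabase={1}bdb,cn=config.
# Adding another entry with the same ordered name inserts it at slot 1.
# Suffix and directory values below are hypothetical.
dn: olcDatabase={1}bdb,cn=config
changetype: add
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: {1}bdb
olcSuffix: dc=example,dc=com
olcDbDirectory: /var/lib/ldap/example
```

After the add, the database that previously occupied slot 1 is automatically renamed to olcDatabase={2}bdb,cn=config, and so on down the list.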
In 2.3 you were only able to add new schema elements, not delete or
modify existing elements. In 2.4 you can modify schema at will. (Except for
the hardcoded system schema, of course.)
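A hedged sketch of such a schema modification (the schema entry name, OID, and definition shown are illustrative assumptions, not taken from a real configuration):

```ldif
# Replace one attribute definition inside a loaded schema entry.
# The {4}misc entry name and the definition are hypothetical.
dn: cn={4}misc,cn=schema,cn=config
changetype: modify
delete: olcAttributeTypes
olcAttributeTypes: ( 1.2.3.4.5 NAME 'exampleAttr'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
-
add: olcAttributeTypes
olcAttributeTypes: ( 1.2.3.4.5 NAME 'exampleAttr'
  EQUALITY caseIgnoreMatch
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
```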
More sophisticated syncrepl configurations:
The original implementation of syncrepl in OpenLDAP 2.2 was intended to
support multiple consumers within the same database, but that feature never
worked and was removed from OpenLDAP 2.3. I.e., you could only configure a
single consumer in any database.
In 2.4 you can configure multiple consumers in a single database. The
configuration possibilities here are quite complex and numerous. You can
configure consumers over arbitrary subtrees of a database (disjoint or
overlapping). Any portion of the database may in turn be provided to other
consumers using the syncprov overlay. The syncprov overlay works with any
number of consumers over a single database or over arbitrarily many glued
databases.
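For instance, a hedged slapd.conf sketch (all suffixes, rids, and provider URLs are invented) of two consumers over disjoint subtrees of one database, with syncprov re-providing the result:

```
database  hdb
suffix    "dc=example,dc=com"
directory /var/lib/ldap/example

# Two consumers over disjoint subtrees (providers hypothetical)
syncrepl  rid=001 provider=ldap://provider1.example.com
          searchbase="ou=people,dc=example,dc=com"
          type=refreshAndPersist retry="5 +"
          bindmethod=simple
          binddn="cn=repl,dc=example,dc=com" credentials=secret
syncrepl  rid=002 provider=ldap://provider2.example.com
          searchbase="ou=groups,dc=example,dc=com"
          type=refreshAndPersist retry="5 +"
          bindmethod=simple
          binddn="cn=repl,dc=example,dc=com" credentials=secret

# Serve the combined database to further consumers
overlay   syncprov
```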
As a consequence of the work to support multiple consumer contexts, the
syncrepl system now supports full N-way multimaster replication with
entry-level conflict resolution. There are some important constraints, of
course: In order to maintain consistent results across all servers, you must
maintain tightly synchronized clocks across all participating servers (e.g.,
you must use NTP on all servers). The entryCSNs used for replication now
record timestamps with microsecond resolution, instead of just seconds. The
delta-syncrepl code has not been updated to support multimaster usage yet;
that will come later in the 2.4 cycle.
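A minimal hedged two-server sketch of such a multimaster setup (hostnames and credentials invented; each server gets a unique serverID and a consumer pointed at its peer):

```
# Server 1; server 2 is symmetric, with serverID 2 and the provider
# URL pointing back at server 1. All values are hypothetical.
serverID  1

database  hdb
suffix    "dc=example,dc=com"
directory /var/lib/ldap/example

syncrepl  rid=001 provider=ldap://ldap2.example.com
          searchbase="dc=example,dc=com"
          type=refreshAndPersist retry="5 +"
          bindmethod=simple
          binddn="cn=repl,dc=example,dc=com" credentials=secret

mirrormode on
overlay   syncprov
```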
On a related note, syncrepl was explicitly disabled on cn=config in 2.3.
It is now fully supported in 2.4; you can use syncrepl to replicate an entire
server configuration from one server to arbitrarily many other servers. It's
possible to clone an entire running slapd using just a small (less than 10
lines) seed configuration, or you can just replicate the schema subtrees,
etc. Tests 049 and 050 in the test suite provide working examples of these
configurations.
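A hedged sketch of what such a seed configuration might look like (the provider URL and credentials are assumptions; tests 049 and 050 are the authoritative examples):

```
# Minimal consumer seed: a config database that pulls its own
# content, and thus the whole server config, from the provider.
database  config
syncrepl  rid=001 provider=ldap://provider.example.com
          searchbase="cn=config"
          type=refreshAndPersist retry="5 +"
          bindmethod=simple
          binddn="cn=config" credentials=secret
```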
In 2.3 you could configure syncrepl as a full push-mode replicator by
using it in conjunction with a back-ldap pointed at the target server. But
because the back-ldap database needs to have a suffix corresponding to the
target's suffix, you could only configure one instance per slapd.
In 2.4 you can define a database to be "hidden" which means that its
suffix is ignored when checking for name collisions, and the database will
never be used to answer requests received by the frontend. Using this hidden
database feature allows you to configure multiple databases with the same
suffix, allowing you to set up multiple back-ldap instances for pushing
replication of a single database to multiple targets. There may be other uses
for hidden databases as well (e.g., using a syncrepl consumer to maintain a
*local* mirror of a database on a separate filesystem).
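A hedged sketch of the push setup (the hidden keyword is per slapd.conf(5); the names and URLs are invented). Each hidden back-ldap database shares the provider's suffix and carries a syncrepl consumer that pulls from the local provider and writes through to its target:

```
# Local provider
database  hdb
suffix    "dc=example,dc=com"
directory /var/lib/ldap/example
overlay   syncprov

# Hidden back-ldap instance pushing to target 1 (values hypothetical);
# a second, identical database would push to target 2.
database  ldap
hidden    on
suffix    "dc=example,dc=com"
uri       ldap://target1.example.com
lastmod   off
syncrepl  rid=010 provider=ldap://localhost
          searchbase="dc=example,dc=com"
          type=refreshAndPersist retry="5 +"
          bindmethod=simple
          binddn="cn=repl,dc=example,dc=com" credentials=secret
```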
More extensive TLS configuration control:
In 2.3, the TLS configuration in slapd was only used by the slapd
listeners. For outbound connections used by, e.g., back-ldap or syncrepl,
the TLS parameters came from the system's ldap.conf file.
In 2.4 all of these sessions inherit their settings from the main slapd
configuration but settings can be individually overridden on a
per-config-item basis. This is particularly helpful if you use
certificate-based authentication and need to use a different client
certificate for different destinations.
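For example (parameter names as in slapd.conf(5); the paths and URL are invented), a syncrepl consumer can override just its own TLS settings:

```
# Inherits the global TLS settings except where overridden below
syncrepl  rid=001 provider=ldap://provider.example.com
          searchbase="dc=example,dc=com"
          type=refreshAndPersist retry="5 +"
          bindmethod=sasl saslmech=EXTERNAL
          starttls=critical
          tls_cert=/etc/openldap/replica-cert.pem
          tls_key=/etc/openldap/replica-key.pem
          tls_cacert=/etc/openldap/ca.pem
```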
Various performance enhancements:
Too many to list. One notable change: ldapadd used to be a couple of
orders of magnitude slower than "slapadd -q"; it's now at worst only about
half the speed of slapadd -q. A few weeks ago I did some comparisons of all
the 2.x OpenLDAP releases; the results are in the slides from my SCALE
presentation and you can find a copy here:
That compared 2.0.27, 2.1.30, 2.2.30, 2.3.33, and HEAD (as of a couple
weeks ago). Toward the latter end of the "Cached Search Performance" chart it
gets hard to see the difference because the runtimes are so small, but the
new code is about 25% faster than 2.3, which was about 20% faster than 2.2,
which was about 100% faster than 2.1, which was about 100% faster than 2.0,
in that particular search scenario. That test basically searched a 1.3GB DB
of 380836 entries (all in the slapd entry cache) in under 1 second; i.e., on
a 2.4GHz CPU with DDR400 ECC/Registered RAM we can search over 500 thousand
entries per second. The search was on an unindexed attribute using a filter
that would not match any entry, forcing slapd to examine every entry in the
DB, testing the filter for a match.
Essentially the slapd entry cache in back-bdb/back-hdb is so efficient
the search processing time is almost invisible; the runtime is limited only
by the memory bandwidth of the machine. (The search data rate corresponds to
about 3.5GB/sec; the memory bandwidth on the machine is only about 4GB/sec
due to ECC and register latency.)
I think it goes without saying that no other Directory Server in the world is
this fast or this efficient. Couple that with the scalability, manageability,
flexibility, and just the sheer know-how behind this software, and nothing
else is even remotely comparable.
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
Chief Architect, OpenLDAP http://www.openldap.org/project/
I banged my head on OpenLDAP -> SASL -> PAM for two days. The state of
the documentation is really horrible. Until someone eventually fixes that,
here is, for future reference, what I had to do. (The NetBSD system parts
are off topic, but I added them for the sake of completeness.)
OpenLDAP-2.3.27 from NetBSD's pkgsrc
Cyrus-SASL-2.1.22 from NetBSD's pkgsrc
1) Install the software
1.1 Fix a pkgsrc bug
change --with-spasswd into --enable-spasswd
1.2 Install the following packages:
Set build options for pkgsrc: in /etc/mk.conf:
1.3 Install the following packages:
1.4 Fix another pkgsrc bug:
make && make install
2) Configure PAM
Create /etc/pam.ldap and populate it with your PAM configuration
3) Configure SASL
3.1 Enable saslauthd, by adding this to /etc/rc.conf:
saslauthd=YES saslauthd_flags="-a pam"
3.2 Then start it:
3.3 Configure the SASL library for slapd, by creating
/usr/pkg/lib/sasl2/slapd.conf, with the following content:
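(The file's content is elided above; a typical arrangement for saslauthd pass-through, given here as an assumption rather than the author's exact file, would be:)

```
# /usr/pkg/lib/sasl2/slapd.conf -- hypothetical content
pwcheck_method: saslauthd
mech_list: PLAIN
```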
3.4 Check SASL functionality
testsaslauthd -s ldap -u login -p password
Make sure a wrong password really fails...
4) Configure OpenLDAP (the nasty part)
4.1 Enable PLAIN mechanism (disabled by default) in
/usr/pkg/etc/openldap/slapd.conf, by adding:
You don't need sasl-regex or authz-regex.
4.2 Enable TLS:
Generate TLS certificate, and add certificate, key and CA to
4.3 Populate the directory, make sure that user
cn=jdoe,dc=example,dc=net has this:
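(The attribute value is elided above. With saslauthd pass-through via slapd's {SASL} password scheme, which is what the --enable-spasswd build option from step 1.1 provides, a hedged example would be:)

```ldif
dn: cn=jdoe,dc=example,dc=net
changetype: modify
replace: userPassword
userPassword: {SASL}jdoe
```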
4.4 Enable slapd, by adding to /etc/rc.conf:
4.5 Start slapd:
4.6 Check that slapd will accept PLAIN SASL authentication:
ldapsearch -x -b "" -s base supportedSASLMechanisms
You should get:
4.7 Configure the LDAP client, in /usr/pkg/etc/openldap/ldap.conf:
4.8 Check that the whole thing works:
ldapsearch -x -WZD cn=jdoe,dc=example,dc=net
Don't forget to make sure a wrong password fails...
NB1: saslauthd logs to /var/log/authlog; the error messages are useful.
NB2: slapd logs to /var/log/slapd.conf; the error messages are usually
meaningless, especially for ACL and SASL troubles.
NB3: Make sure your DN is right. I spent a lot of time running tests
with an invalid DN (i.e., dc=jdoe instead of cn=jdoe).
I want to start this message by saying, what I'm about to describe is
completely vague and I don't expect to get a solution response. ;)
Basically, I'm out of ideas and am looking for some suggestions as to how
to debug the issue I'm running into.
Starting about half a year ago, slapd started just dying out of the
blue. Not a thing in the logs shows up to indicate what might have caused
it. The last query that I see in the logs before a crash always seems to
be nothing special. I don't even see a core dump being generated yet, but
that may just be because I don't have the proper setup to get a core
dump at this time. We were running the last 2.2 release and upgraded to the
latest release of 2.3 to make sure it wasn't an "old version" issue.
Unfortunately, slapd still dies a fair amount on us. It appears to be
fairly unpredictable. I've seen it crash within 1 minute of starting up
slapd (then a subsequent startup 'takes' just fine). I've seen it crash
when there were a number of network issues going on. I've seen it crash
out of the blue when nothing appeared to be going on. I don't really have
the drive space to turn on max debug logging 24/7 until the problem recurs.
We're thinking about setting up something to watch all of the network
traffic going to one of the boxes until it dies. (assuming we can find
something with the resources to do that)
That all said... since I have nothing solid to present, do you all have
any suggestions of what would be the best way to track down what's going
on? I'm literally out of ideas unless my berkeley db config is somehow
causing the problem or something like that.
I apologize for the vagueness. =/ Any ideas/suggestions?
Some users of Solaris may use LDAP clients based on the library
/usr/lib/libsldap.so.1. One such example is the Sun-provided
/usr/lib/nss_ldap.so.1. These clients have historically been at best
partially compatible with OpenLDAP, for various reasons. While there has
been a lot of good progress towards standards compliance (or at least
standards accommodation), paged results have had a long-standing bug. The
results cookie was improperly handled by libsldap, causing any results in
excess of the page size (1,000 in the case of libsldap) to be lost.
Rutgers has significantly more than 1,000 entries and brought this issue
through Sun support channels. Patches were released for Solaris 9 last
week which fix this issue:
6278068 native ldap client: simple page mode broken in S9 and S10
I note this to openldap-software for users considering migration to
OpenLDAP slapd(8) who may have experienced this behavior and falsely
attributed it to OpenLDAP software. When properly patched, I can attest
that Solaris nss and OpenLDAP work "out of the box" together.
I want to migrate our OpenLDAP database backend, which is currently using
ldbm, to bdb. Matt has suggested the following steps; would you please be
kind enough to shed some more light on a few of them?
Migrate to BDB or HDB. Here's the rough idea:
slapcat -f /path/to/slapd.conf -l mydb.ldif
Change your database to a new directory.
Read about tuning, cache sizing, and other stuff.
Read the OpenLDAP man pages. (what should I read? ;)
Read Oracle's tuning docs for BDB. (we are not using any Oracle things)
Re-read the OpenLDAP man pages. (re-read for what? :-S)
slapadd -f /path/to/slapd.conf -l mydb.ldif
Fix permissions on /var/run so the slapd user can write there -- this
one is pretty easy
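The steps above, condensed into a hedged shell sketch (the rc script name and paths are assumptions; stopping slapd first keeps the dump consistent):

```sh
# All paths and the rc script name are hypothetical; adjust to taste.
/etc/init.d/ldap stop                                # stop the server
slapcat -f /path/to/slapd.conf -l mydb.ldif          # dump the ldbm data
cp -r /var/lib/ldap/mydomain.com /TosomeSafePlace    # fallback copy
# Edit slapd.conf: change "database ldbm" to "database bdb" and point
# "directory" at a new, empty directory (with a tuned DB_CONFIG in it).
slapadd -f /path/to/slapd.conf -l mydb.ldif          # load into bdb
chown -R ldap:ldap /var/lib/ldap /var/run/slapd      # fix permissions
/etc/init.d/ldap start
```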
Secondly, to be on the safe side if something goes wrong, is taking a backup with
cp -r /var/lib/ldap/mydomain.com /TosomeSafePlace
enough to fall back to ldbm?
I want minimum downtime; heh, frankly I can't afford my ass on fire :P
About three times in the last several days, OpenLDAP 2.3.33 (RHEL4) has just stopped.
No core or error messages in the log files. The last entry in the log file
is just a search entry, with nothing common in the searches from 'crash' to
'crash'. I have even tried the searches and they worked fine. When I restart it,
it notes that an unclean shutdown was detected, attempts recovery, and appears to succeed.
My question is what is the proper way to debug this. If I do -d, it will
not fork, but since it fails at random times I probably will not be there
to see the output. Is this the best way to try to see what is happening?
If so, what is the recommended debug level? It was built with:
I see that there is 2.3.34 out and that it can use BDB 4.5. Is the
best course of action to upgrade?
The 2.3.27 version we have been running has not had a problem like this.
The 2.3.27 runs on port 900, the 2.3.33 runs on port 389.
Thanks for any help!
Running OpenLDAP 2.3.32 with bdb 4.2 (and using syncrepl, if that's relevant).
I need to deal with the issue of how to safely delete old bdb log files on
our many replicas.
In a previous thread, Aaron Richton wrote:
> I'd recommend DB_LOG_AUTOREMOVE. Barring that, you can run db_archive
> manually. Check Sleepycat docs for details on either.
Well, I just ran db_archive and caused widespread chaos because most (all?)
of the replicas stopped responding to queries. (I have yet to perform a
full post-mortem.)
I know that there's a bug in bdb 4.2 that causes logs to be held open even
though they're no longer required. Upgrading bdb is not on the cards right
now so I need to work around that problem by stopping and starting openldap.
So the question I have just at the moment is, when I run db_archive, should
openldap be running or not running?
I've seen nothing in any docs suggesting that it should be stopped, and the bdb
docs simply imply that applications are expected to be running (but I'm not
a programmer, so I may have interpreted them wrongly). So I ran db_archive
just after starting OpenLDAP. Was that the wrong thing to do, and is it an
obvious cause for the meltdown?
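For reference, the DB_LOG_AUTOREMOVE approach mentioned earlier goes in the DB_CONFIG file in the database directory (flag spelling per BDB 4.2; the path is an assumption):

```
# /var/lib/ldap/DB_CONFIG -- hypothetical path
# BDB 4.2/4.3 spelling; later BDB releases renamed this flag.
set_flags DB_LOG_AUTOREMOVE
```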
Linux Systems Administrator
Opus International Consultants Ltd
Tel +64 4 471 7002, Fax +64 4 473 3017
Level 9 Majestic Centre, 100 Willis Street, PO Box 12 343
Wellington, New Zealand
A TLS connection between a client and a 2.3.30 slapd hangs while the
server is sending the certificate; but this does not happen if the
server is run with -d 2 or higher, or if the client is run on the server
itself.
(A seemingly similar issue has been reported before, without
satisfactory reply, 4 years ago:
My slapd is the Debian-etch-packaged 2.3.30.
If I run a ldapsearch -ZZ on the server, it runs fine.
If I try to run the same thing from a client through the network, then
the connection hangs (I tried this from three clients). After I hit
Enter, the client just sits there and appears to wait forever until interrupted.
If I use -d 1 on both the client and the server, then the client hangs
after it has displayed
TLS trace: SSL_connect:before/connect initialization
TLS trace: SSL_connect:SSLv2/v3 write client hello A
TLS trace: SSL_connect:SSLv3 read server hello A
TLS certificate verification: depth: 0, err: 0, subject: /C=GR/L=Athens/O=National Technical University of Athens/OU=ITIA Research Team/CN=www.itia.ntua.gr/emailAddress=sysadmins(a)itia.ntua.gr, issuer: /C=GR/L=Athens/O=National Technical University of Athens/OU=ITIA Research Team/CN=www.itia.ntua.gr/emailAddress=sysadmins(a)itia.ntua.gr
TLS trace: SSL_connect:SSLv3 read server certificate A
and the server has displayed
TLS trace: SSL_accept:before/accept initialization
TLS trace: SSL_accept:SSLv3 read client hello A
TLS trace: SSL_accept:SSLv3 write server hello A
TLS trace: SSL_accept:SSLv3 write certificate A
TLS trace: SSL_accept:SSLv3 write certificate request A
TLS trace: SSL_accept:error in SSLv3 flush data
TLS trace: SSL_accept:error in SSLv3 flush data
But if I use -d 2 or higher on the server, then the connection
succeeds. (Increasing -d on the client does not appear to affect it.)
Could you suggest what I should investigate next? Because I'm lost. I
thought it might have to do with networking hardware, but ping -f from
one client to the server runs fine, with no packet loss, and nfs
(which I've noticed to be most sensitive to networking hardware
issues) also runs fine. The three clients were connected through Gigabit,
100 Mbit, and 10 Mbit links. Two of them were running ubuntu-edgy-packaged
ldap-utils 2.2.26; one was running the debian-etch-packaged 2.3.30. All
had identical behaviour.
I have a back-sql portion of my ldap tree, I can search within the
back-sql part of the tree (and within the ldbm), but searches do not
cross from the ldbm tree into the back-sql part of the tree.
Should I make a referral in the ldbm tree at the point where the back-sql
tree is mounted, or is there a better way to do this?
I have installed OpenLDAP on my machine. In the slapd.conf file I have
included all the schema files. I now need to import our directory server data
into OpenLDAP, so I have done the following:
1) Created a schema file containing our data definitions and included it in the
slapd.conf file.
2) Created an LDIF file and tried importing our data using "ldapmodify".
I am facing a few problems.
It looks like my schema file isn't being read by OpenLDAP, because even when I
comment out my schema file and run, the service still starts. That shows my
schema file isn't having any impact there.
What I did is, I took some of our required objectClasses and included
them in OpenLDAP's "misc.schema" file just for a trial. A few of them
worked properly and my data from the LDIF got added into the server.
I still have a few attributes and objectClasses which I am neither able to add
into any of OpenLDAP's schema files nor able to put in a new schema file of my
own (as OpenLDAP isn't referring to it).
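For reference, a custom schema file is normally pulled in with an include directive in slapd.conf; a minimal hedged sketch (the OID arc and all names below are invented, not a real assignment):

```
# In slapd.conf, after the stock schema includes:
include /usr/local/etc/openldap/schema/mycompany.schema

# mycompany.schema -- hypothetical definitions
attributetype ( 1.3.6.1.4.1.99999.1.1
    NAME 'myCompanyId'
    EQUALITY caseIgnoreMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )

objectclass ( 1.3.6.1.4.1.99999.2.1
    NAME 'myCompanyPerson'
    SUP inetOrgPerson STRUCTURAL
    MAY ( myCompanyId ) )
```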
Can someone guide me on how to easily get my schema and data added to the
server?
If I have to create a schema file of my own, how do I make the server refer
to it as well?
Eagerly awaiting the reply.