I have followed the following link to configure LDAP with TLS:
but when I run the search command, i.e.:
*ldapsearch -x -b "dc=platalytics,dc=com" -H 'ldap://localhost:389' -ZZ*
I get the following error:
ldap_start_tls: Protocol error (2)
additional info: unsupported extended operation
Following is my *ldap.conf* file:
# LDAP Defaults
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
# TLS certificates (needed for GnuTLS)
Following is my *cn=config.ldif* file:
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 0cd16f20
Can anyone please help me figure out what the issue could be?
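For context, this is the kind of cn=config change I understand is needed for slapd to advertise the StartTLS extended operation (a minimal sketch; the certificate paths are placeholders, not my real ones):

```ldif
dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca.pem
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.pem
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key
```

I loaded it with `ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif` and restarted slapd before running the ldapsearch above.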
I'm doing some experiments with LMDB trying to emulate a columnar
storage database using roaring bitmaps and other tricks.
The initial results are promising, but I ask myself, is a row based
storage like LMDB appropriate for implementing a columnar database or
are there better, more efficient ways/formats?
When bulk-renaming entries in web2ldap I do *not* alter the RDN of the entry
but also send delold: 0 in the MODRDN operation. IMO this is the most minimal
approach.
This works OK in most setups.
But in a stricter setup (release 2.4.41) with slapo-constraint and
constraints on the RDN's characteristic attribute, those MODRDN requests
trigger a constraint and fail with 'Constraint violation', although the RDN
value is not changed. I can't tell whether this was different with older
releases.
Even stranger: it works with delold: 1.
So I could easily alter web2ldap's behaviour to send delold: 1. But I'm not
sure whether that's the right general approach especially when thinking about
all the other LDAP servers out there.
So the question is: is this overzealous misbehaviour of slapo-constraint,
and should it be fixed therein?
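For illustration, the operation I send looks like this (the DNs are made-up examples):

```ldif
dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: modrdn
newrdn: uid=jsmith
deleteoldrdn: 0
newsuperior: ou=staff,dc=example,dc=com
```

Since the new RDN equals the old one, deleteoldrdn: 0 should leave the attribute values completely untouched, which is why the 'Constraint violation' surprises me.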
I'm working on a project using the OpenLDAP C API (version 2.4.36) in an
asynchronous way. Everything has worked quite well over two years of
development cycles and product evolution. For about a year now, a few
clients have been successfully running our LDAP module on their servers.
Recently I received a core dump file from one of our clients, with this
backtrace:
... libc frames ...
#6 0x00007f68887d3068 in ldap_int_bisect_find (v=<value optimized out>,
n=<value optimized out>, id=<value optimized out>, idxp=<value optimized
out>) at abandon.c:334
#7 0x00007f68887d32d2 in do_abandon (ld=0x7f67dcb0bbe0, origid=-1,
msgid=-1, sctrls=<value optimized out>, sendabandon=1) at abandon.c:300
... my application frames ...
From my code, I'm calling openldap_ldap_abandon_ext(ld, msgid, NULL, NULL)
because a timeout was reached after calling
openldap_ldap_sasl_bind(...) and getting the LDAP_X_CONNECTING state, while
waiting for an LDAP_SUCCESS result.
As I've read in *abandon.c*, the assert( id >= 0 ) is executed only on
certain flows; I guess those are the ones involved in the communication
handshake with the server at advanced stages. So I think I'm calling
openldap_ldap_abandon_ext(...) at the wrong time.
My question is: can I use something from the API (ldap.h) to avoid
calling openldap_ldap_abandon_ext in this specific situation? I could
add code in my application to prevent the crash, but I also have to abort
the connection correctly, as I must respect my timeout policy.
BTW, the calls to the OpenLDAP API in my code are all protected with the
same boost::unique_lock<boost::mutex> to ensure thread safety. Logs show
that my module was under heavy load when the application crashed. I only
have this core information, and I haven't been able to reproduce the
situation in my integration tests, even when simulating a slowdown in
network communications and shrinking the timeouts.
Thanks in advance for your help!
Thanks, Quanah, for the prompt reply, as you've always provided on this
list.
Yes, just after I sent this, I realized the next question was going to
be which version of OpenLDAP, which is: "OpenLDAP: slapd 2.4.39".
Yes, I'm using hdb:
checkpoint 1024 15
So, with 2.4.39, what should I use: bdb, mdb, or something else?
And as mentioned regarding slapo-rwm, I'm using an LDAP proxy to Active
Directory with the following mappings:
rwm-map objectclass posixAccount user
rwm-map attribute uid sAMAccountName
rwm-map attribute cn cn
rwm-map attribute sn sn
rwm-map attribute uidNumber uidNumber
rwm-map attribute gidNumber gidNumber
rwm-map attribute homeDirectory unixHomeDirectory
rwm-map attribute loginShell loginShell
rwm-map attribute mail mail
rwm-map attribute *
Is there a better way instead?
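In case it helps, this is the migration path I have been considering for moving off hdb (a sketch only; the paths and database numbers are assumptions for my layout):

```shell
service slapd stop
slapcat -n 1 -l data.ldif            # export the current database
# in the slapd config: change "database hdb" to "database mdb" and
# replace the BDB cache settings with e.g. "maxsize 1073741824"
mv /var/lib/ldap /var/lib/ldap.hdb.bak && mkdir /var/lib/ldap
slapadd -n 1 -l data.ldif            # re-import into back-mdb
chown -R ldap:ldap /var/lib/ldap     # ownership depends on the distro
service slapd start
```

Does that look about right, or is there a recommended procedure?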
Thanks a bunch!
------ Original Message ------
From: "Quanah Gibson-Mount" <quanah(a)zimbra.com>
To: "Sterling Sahaydak" <sterling.sahaydak(a)pi-coral.com>;
Sent: 7/9/2015 5:18:44 PM
Subject: Re: Replication mode - glibc detected *** slapd: double free or
>--On Thursday, July 09, 2015 9:53 PM +0000 Sterling Sahaydak
>>I've had replication working for the last few months, and now I get
>>the following when I run:
>>slapd -h "ldapi:/// ldap:/// ldaps:///" -d 4
>>I installed git on the server, version 1.7.1, which is a couple of years
>>old, but was wondering whether that could have anything to do with
>>replication stopping.
>The openldap version is what is important, not the version of git.
>However, it looks like you have a corrupt database:
>559edda4 hdb_db_open: database "dc=pi-coral,dc=com": unclean shutdown
>detected; attempting recovery.
>back-hdb is deprecated in current OpenLDAP, and back-mdb is the
>supported backend. I also note you appear to be using slapo-rwm, which
>is known to have a variety of issues.
>Zimbra :: the leader in open source messaging and collaboration
At some point in the past, I wound up taking drastic measures and
rebuilt my two LDAP boxes after taking a backup of the data. I think my
process could use some fine-tuning and polishing, as a weird nuance has
found its way into my environment.
I am replicating both config and data between two servers using MMR.
The config and schemas replicate without issue, as does the data in
the mdb, but not any of the settings for the mdb. If I try, for
example, to add an ACL or an index to the mdb, I get the error
"ObjectClass modifications are not allowed".
I think the root of my issue is that I backed up one of the two boxes
and restored that one backup to both boxes while they were both offline.
I believe that because they both have the same backed-up data on them,
some of the internal attributes are identical and therefore conflict. I
have seen logs about contextCSNs being identical, but haven't had time
to investigate those messages until now. In any case, whatever I did
wrong now does not allow the mdb settings to be replicated between the
two servers.
What I am looking to understand is how to correct the situation. I am
looking to avoid recreating all of the data, so using backups, exports,
etc. is something I want to do, and do correctly.
Would I need to capture slapcat output to a file, or is ldapsearch the
correct way to export the data for backup/restore needs?
Do I need to follow a destructive path to correct this issue, or will
surgery on the mdb correct it?
I am running 2.4.39 on Fedora 20. Any pointers would be appreciated.
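To make the question concrete, this is the backup/seed procedure I assume is closer to correct (a sketch; paths and database numbers are assumptions):

```shell
slapcat -n 0 -l config.ldif    # cn=config database
slapcat -n 1 -l data.ldif      # main data database
# seed the second server from data.ldif, but keep a distinct
# olcServerID per server so the contextCSNs can diverge properly
slapadd -n 1 -F /etc/openldap/slapd.d -l data.ldif
```

My understanding is that slapcat, unlike ldapsearch, preserves operational attributes such as entryUUID and entryCSN that replication depends on; please correct me if that's wrong.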
I'm trying to do an LDIF import (via Apache Directory Studio) after a
replication issue. The goal is to correct a replication error.
The problem is that when I try to import the LDIF file, I receive the
following error:
> #!ERROR [LDAP: error code 53 - shadow context; no update referral]
I found that this is related to replication, and that the server is
probably acting as a shadow (read-only) replica.
I also saw several suggested solutions, such as changing olcMirrorMode to
FALSE or even deleting it from the slapd.conf file.
My problem is that I can't find olcMirrorMode in my slapd.conf file; I
just found it in the file
Here is my slapd.conf file (without the comments):
> include /usr/local/openldap/etc/openldap/schema/core.schema
> include /usr/local/openldap/etc/openldap/schema/cosine.schema
> include /usr/local/openldap/etc/openldap/schema/nis.schema
> pidfile /usr/local/openldap/var/run/slapd.pid
> argsfile /usr/local/openldap/var/run/slapd.args
> # MDB database definitions
> database mdb
> suffix "o=xxxxxxxxxx"
> rootdn "cn=admin,ou=programmes,o=xxxxxxxxxxxxx"
> rootpw secret
> directory /custom/data/openldap-data
> maxsize 5368709120
> index objectClass eq
> database monitor
> database config
> rootdn "cn=admin,cn=config"
> rootpw secret
And here is the LDIF file that I'm trying to import:
> version: 1
> dn: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> objectClass: top
> objectClass: organizationalUnit
> objectClass: organizationalentity
> ou: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> additionalmail: xxxxxxxxxxxxxx
> businessCategory: xxxxxxxxxxxxxx
> codecommune: xxxxxxx
> codespecialite: xxxxxxxxxxxxxxx
> codetypologie: xxxxxxxxxxx
> cta: xxxxxxxxxxxxxxxxx
> department: xxxxxxxxxxxxxxxxx
> departmentNumber: xxxxxxxxxxxxx
> destinationIndicator: xxxxxxxxxx
> facsimileTelephoneNumber: xxxxxxxxxxxxxx
> financialcode: xxxxxxxxxxxxxx
> l: xxxxxxxxxxxxxxx
> mail: xxxxxxxxxxxxxx
> postalAddress: xxxxxxxxx
> postalCode: xxxxxxxxxx
> seeAlso: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Is this information correct? Do you think I can try to log into ApacheDS
with the root account (cn=admin,cn=config / secret) to try the import?
If not, is there any way to see why my replication is not updating these
entries?
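For completeness, the alternative I am considering is applying the change directly on the provider instead of importing it on this server (a sketch; the hostname is a placeholder):

```shell
ldapmodify -x -H ldap://provider.example.com \
    -D "cn=admin,ou=programmes,o=xxxxxxxxxxxxx" -W -f fix.ldif
```

As I understand error 53, the consumer refuses the write because the entry lives in a shadow context, so the write has to go to the provider (or follow an update referral, which this server apparently does not return).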
Thanks in advance for all the help,
Looking for feedback on why this is not working, or if it is a bug.
The details of my configuration are here:
I discovered (and proved) that ldapsearch is not honouring TLS_CERT/TLS_KEY in /etc/openldap/ldap.conf. I'm running the query as “root” and SELinux is disabled.
If however, I put the TLS_CERT/TLS_KEY in my ~/ldaprc or ~/.ldaprc, then they are honoured.
Is this a bug?
What is stopping the “global default” of TLS_CERT/TLS_KEY from being read?
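For reference, the per-user configuration that does work for me looks like this (paths are placeholders). If I read ldap.conf(5) correctly, TLS_CERT and TLS_KEY are documented as user-only options, which would explain why the system-wide file is ignored:

```
# ~/.ldaprc
TLS_CERT /home/me/certs/client.crt
TLS_KEY  /home/me/certs/client.key
```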
It seems that some issues with relax rules control and delta-syncrepl-MMR have
been fixed in 2.4.41. But I vaguely remember that there was another issue with
MMR in case the relax rules control is used.
I did not find anything else related in the ITS in a non-closed state.
Should updating the attributes 'authTimestamp' and 'pwdFailureTime' work with
relax rules control also with MMR?
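For reference, this is how I am sending the control while testing from the command line (a sketch; the DN and timestamp value are made up):

```shell
ldapmodify -x -D "cn=admin,dc=example,dc=com" -W -e '!relax' <<'EOF'
dn: uid=jsmith,ou=people,dc=example,dc=com
changetype: modify
replace: pwdFailureTime
pwdFailureTime: 20150709121500.000000Z
EOF
```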