default config file
by Rallavagu Kon
Hello All,
I am noticing a difference in how the config file is located at startup between 2.4.48 and 2.4.58.
The 2.4.48 build is an Ubuntu-supplied package, while 2.4.58 was compiled with the following options.
./configure --prefix=/opt/openldap \
--sysconfdir=/etc/ldap \
--localstatedir=/opt/openldap/var \
--libexecdir=/opt/openldap/lib \
--disable-static \
--enable-debug \
--with-tls=openssl \
--with-cyrus-sasl \
--enable-dynamic \
--enable-crypt \
--enable-spasswd \
--enable-slapd \
--enable-modules \
--enable-rlookups \
--enable-backends=mod \
--disable-ndb \
--disable-sql \
--disable-shell \
--disable-bdb \
--disable-hdb \
--enable-overlays=mod
When starting slapd in debug mode, I see the following for 2.4.48:
6077b590 backend_startup_one: starting "cn=config"
6077b590 ldif_read_file: read entry file: "/etc/ldap/slapd.d/cn=config.ldif"
Essentially, it is looking for "cn=config". However, after replacing the binaries with the compiled 2.4.58 version, I see:
6077b37d could not stat config file "/etc/ldap/slapd.d/slapd.conf": No such file or directory (2)
Clearly the existing configuration file(s) are not considered by 2.4.58. What is the difference, and how can I use the existing configuration files?
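For comparison, distribution packages normally start slapd with an explicit config-directory flag. A hedged sketch of the equivalent invocation for a source build (the binary path is assumed from the configure options above; -F selects a slapd.d-style cn=config directory, -f a legacy slapd.conf file, and without either a source build falls back to its compiled-in sysconfdir defaults):

```shell
# -d 64 enables config-processing debug output
/opt/openldap/lib/slapd -F /etc/ldap/slapd.d -d 64
```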
Thanks
Re: performance tuning for n-way and heavy client load
by Zetan Drableg
>> >> Do you have a lot of large groups that you frequently update?
>> >
>> > Yes we have several groups with ~40k users from which we frequently
>> > add/remove users based on upstream user provisioning workflows.
>>
>> Are you replacing the entire group when you do that, or only
>> adding/deleting specific users?
>>
>> Either way, for 2.4 you definitely want to use sortvals. Likely what you
>> need is OpenLDAP 2.5's multival feature as well.
We incrementally insert users and group memberships instead of
replacing the entire group every time.
This mailing list helped me discover that "sortvals member" improved the
performance of single-record inserts, but didn't solve the overall
problem.
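For reference, sortvals is a global slapd.conf directive that keeps the named multi-valued attributes sorted so that value lookups become binary searches. A minimal sketch (attribute list illustrative):

```
# slapd.conf, global section: keep these multi-valued attributes sorted
sortvals member memberUid uniqueMember
```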
Why do excess free pages in MDB impact performance when inserting new data?
On Fri, Apr 16, 2021 at 11:05 AM Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
> --On Friday, April 16, 2021 12:01 PM -0700 Zetan Drableg
> <zetan.drableg(a)gmail.com> wrote:
>
> >> Do you have a lot of large groups that you frequently update?
> >
> > Yes we have several groups with ~40k users from which we frequently
> > add/remove users based on upstream user provisioning workflows.
>
> Are you replacing the entire group when you do that, or only
> adding/deleting specific users?
>
> Either way, for 2.4 you definitely want to use sortvals. Likely what you
> need is OpenLDAP 2.5's multival feature as well.
>
> Regards,
> Quanah
>
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
Re: performance tuning for n-way and heavy client load
by Zetan Drableg
> >> When I write LDIFs to one node like delete user or remove user from
> >> group, we see spikes in authentication latency metrics (what's normally
> >> .2 - .5 second response time goes up to 15-30 seconds) across all nodes
> >> in the cluster at the same time.
> >
> > I ran mdb_copy -c to compact the LDAP databases. The size went from
> > 2.9G to 140M and the latency problem during inserts went away.
> > I've noticed the LDAP data.mdb is growing about 25M per day. What
> > accounts for the growth of free pages?
>
> Do you have a lot of large groups that you frequently update?
Yes we have several groups with ~40k users from which we frequently
add/remove users based on upstream user provisioning workflows.
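The free-page growth asked about above can be watched with LMDB's mdb_stat tool (a hedged sketch; the database directory path is an assumption): -e prints environment information including the map size, and -f prints the freelist status, i.e. the pages released by past write transactions that are awaiting reuse.

```shell
mdb_stat -ef /var/lib/ldap
```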
performance tuning for n-way and heavy client load
by Zetan Drableg
OpenLDAP 2.4.57 on 16-core OracleLinux VMs with NVMe disks.
8 nodes in n-way multi master configuration, MDB backend, 50k unique DNs.
We see about 10,000 auths per minute per node.
Under heavy client load, the log shows many "deferring operation: binding"
messages in the same second. slapd is using only 400% cpu (of 1600
possible).
[2021-04-13 19:15:58] connection_input: conn=150474 deferring operation:
binding
When I write LDIFs to one node like delete user or remove user from group,
we see spikes in authentication latency metrics (what's normally .2 - .5
second response time goes up to 15-30 seconds) across all nodes in the
cluster at the same time.
What knobs can be adjusted to allow for more concurrency? It seems like
writes are impacting reads.
*slapd.conf: threads*
default is 32, tried 64 and 128 with little improvement
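Besides threads, slapd.conf also has a listener-threads directive, which spreads connection handling across multiple listener event loops and can matter more than worker threads under high connection rates. A hedged sketch (values illustrative, not tuned recommendations):

```
threads 64
listener-threads 4
```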
*slapd.conf: syncrepl*
Should I increase sessionlog size?
Should I increase checkpoint ops?
How to determine optimum values?
syncprov-checkpoint 100 5
syncprov-sessionlog 100
syncprov-reloadhint TRUE
*mdb*
maxsize 17179869184
*Indices*
index objectClass eq,pres
index cn,uid,mail,mobile eq,pres,sub
index o,ou,dc,preferredLanguage eq,pres
index member,memberUid eq,pres
index uidNumber,gidNumber eq,pres
index memberOf eq
index entryUUID eq
index entryCSN eq
index uniqueMember eq
index sAMAccountName eq
*ulimit*
bash-4.2$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 482391
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
*n-way config*
serverID 1 ldap://XXXX:12389
syncrepl rid=1
provider=ldap://XXXXXX:12389
bindmethod=simple
starttls=yes
tls_cert=/opt/slapd/conf/cert.pem
tls_cacert=/etc/pki/tls/cert.pem
tls_key=/opt/slapd/conf/key.pem
binddn="cn=replication_manager,dc=service-accounts,o=Root"
credentials="YYYYYY"
tls_reqcert=never
searchbase=""
schemachecking=on
type=refreshAndPersist
retry="60 +"
(and 7 more)
mirrormode on
Any ideas?
Thanks
-Zetan
Timeout values in search_ext(), ldap_result() and global
by varun mittal
I am using openldap-2.4.39 on CentOS 7 to query my AD server, with the
python-ldap wrapper.
I set the following scheme:
ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 30)
ldap.set_option(ldap.OPT_TIMEOUT, 120)
conn = ldap.initialize("ldap://server-ip")
I use 3 types of queries: synchronous search_s(), and asynchronous
search_ext() with and without paging.
I am not passing any timeout to the _ext methods or the result3() method.
One of my python client LDAP searches (asynchronous, with paging) took about
14 minutes to complete in the customer environment. Eventually, the search
was successful.
Looking at the documentation, I am not sure which timeout value would be
applicable here.
I thought setting OPT_TIMEOUT should suffice for all kinds of searches.
And the strange thing is that a similar but synchronous query
(ldap_search_ext_s) from my C client failed within 120 seconds, which is the
default AD server time limit. The C application didn't specify any timeouts.
What am I missing here?
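For context, an untested sketch against the python-ldap API (the server URI and base DN are placeholders): the global OPT_TIMEOUT only bounds synchronous *_s() calls, so an asynchronous search collected with result3() can wait indefinitely unless result3() itself is given a timeout.

```python
import ldap

conn = ldap.initialize("ldap://server-ip")
conn.set_option(ldap.OPT_NETWORK_TIMEOUT, 30)  # TCP connect / network waits

# Attach a time limit to the operation itself:
msgid = conn.search_ext(
    "dc=example,dc=com", ldap.SCOPE_SUBTREE, "(objectClass=*)",
    timeout=120,
)

# Bound the client-side wait for results explicitly; without this,
# result3() is not covered by the global OPT_TIMEOUT:
rtype, rdata, rmsgid, rctrls = conn.result3(msgid, all=1, timeout=120)
```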
Config files & env vars not read when geteuid() != getuid()
by Norm Green
Hello LDAP users and maintainers,
libraries/libldap/init.c has this code, which bypasses reading all LDAP
config env vars when the executable loading libldap is running in setuid
mode. This is causing problems for one of our customers, who routinely
runs our product's Linux executables (which load our libldap) in setuid
mode for legitimate purposes.
Since we have the source, we can and may change this code.
In our case, the customer wants to set the env var LDAPCONF to point at a
non-default conf file but is unable to do so. In fact, this code bypasses
almost all the ways an alternate config file can be read; even
$HOME/ldap.conf is not read.
My question here is: should this code be considered a bug and changed to
be less restrictive? I fully appreciate that there should be restrictions
in setuid mode, but the current code seems too restrictive.
init.c, around line 687:

    openldap_ldap_init_w_sysconf(LDAP_CONF_FILE);

    #ifdef HAVE_GETEUID
        if ( geteuid() != getuid() )
            goto done;
    #endif

    openldap_ldap_init_w_userconf(LDAP_USERRC_FILE);
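The check itself is the classic real-vs-effective UID comparison; a standalone illustration using Python's os module (same semantics as the C snippet, shown for clarity only):

```python
import os

# In a set-user-ID executable the effective UID differs from the real UID;
# libldap skips user-controlled config sources (LDAPCONF, $HOME/ldap.conf)
# exactly when this condition is true.
is_setuid = os.geteuid() != os.getuid()
print("setuid mode:", "yes" if is_setuid else "no")
```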
Norm Green
GemTalk Systems LLC
Problems setting up a proxy
by Hans van Zijst
Hi,
After more than a day of fiddling with it, I turn to you, the gurus ;)
I'm trying to create an OpenLDAP proxy that will talk to 2 OpenLDAP
servers, doing MirrorMode replication and using a floating IP so that I
can point all write queries to one and the same server. Those 2
MirrorMode servers are up and running and doing fine, but I can't figure
out how to make that proxy.
I'm running on Debian Bullseye (still "testing" at this moment), with
OpenLDAP 2.4.57, both on the backend servers and the proxy I'm trying to
make. I'm not using TLS yet, that's for later.
After installation, there's an (empty, of course) mdb database. I think
I should throw that away, but I'm not sure. The suffix in that database
is different from the one I need to proxy, so it's probably not a
problem to leave it there.
I have loaded the extra schemas that I use on the MirrorMode machines,
and loaded the backends ldap and meta, with LDIF files like this:
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: back_ldap.la
And fed that to slapd with
ldapmodify -Y EXTERNAL -h ldapi:/// -f <file>
I checked with ldapvi and saw both modules loaded. So far, so good.
Now I need to create the backend, and this is where I keep running into
problems. Although slapd.conf fell from grace a long time ago, every
example I can find online uses only that. So I tried creating one and
adding it to the configuration with slaptest. This is what I came up
with:
backend meta
database meta
suffix "dc=example,dc=com"
rootdn "cn=admin,dc=example,dc=com"
rootpw "super secret passwd"
uri "ldap://172.16.7.6/dc=example,dc=com"
readonly yes
acl-authcDN "cn=admin,dc=example,dc=com"
acl-passwd "super secret passwd"
uri "ldap://172.16.7.7/dc=example,dc=com"
readonly yes
acl-authcDN "cn=admin,dc=example,dc=com"
acl-passwd "super secret passwd"
uri "ldap://172.16.7.8/dc=example,dc=com"
readonly no
acl-authcDN "cn=admin,dc=example,dc=com"
acl-passwd "super secret passwd"
But when I try to convert that, I get an error:
# slaptest -f /root/proxybackend.conf -F /etc/ldap/slapd.d
6075bced /root/proxybackend.conf: line 1: <backend> failed init (meta)!
slaptest: bad configuration directory!
The information in the OpenLDAP Handbook is, well, lacking:
https://openldap.org/doc/admin24/backends.html#Metadirectory
I had hoped to find a way to create an LDIF file which I could add with
ldapadd, but I never came much further than this:
dn: olcDatabase=meta
objectClass: olcDatabaseConfig
objectClass: olcMetaConfig
olcDatabase: meta
olcSuffix: dc=example,dc=com
olcRootDN: cn=admin,dc=example,dc=com
olcRootPW: "super secret passwd"
which results in:
adding new entry "olcDatabase=meta"
ldap_add: Server is unwilling to perform (53)
additional info: no global superior knowledge
I'm pretty sure I need more lines in that, to begin with the URI lines
to point the proxy to the machines it needs to contact, but I couldn't
find the olcSomeThing syntax for them. I'm pretty good at searching, but
not so good at finding, unfortunately.
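For what it's worth, a hedged sketch of a cn=config entry for the simpler back-ldap proxy (attribute names taken from the standard olc schema; URIs reuse the addresses above). Note that the DN must sit under cn=config — the "no global superior knowledge" error above is what slapd reports when an added entry's parent does not exist — and that olcDbURI takes a space-separated list of servers tried in order:

```
dn: olcDatabase=ldap,cn=config
changetype: add
objectClass: olcDatabaseConfig
objectClass: olcLDAPConfig
olcDatabase: ldap
olcSuffix: dc=example,dc=com
olcRootDN: cn=admin,dc=example,dc=com
olcDbURI: "ldap://172.16.7.6 ldap://172.16.7.7"
olcReadOnly: TRUE
```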
Can somebody give me a few hints please? I'm pretty sure I'm missing
something small here, but I'm stuck.
Kind regards,
Hans
mdb_substring_candidates: (cn) not indexed
by Клеусов Владимир Сергеевич
Hi,
In the logs: mdb_substring_candidates: (cn) not indexed
But
slapcat -b cn=config | grep olcDbIndex
olcDbIndex: cn eq
Please tell me, what is this message about?
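For context: an eq index only serves equality filters, while substring filters such as (cn=*foo*) need a sub index, which is what the message reports as missing. A hedged ldapmodify sketch (the database DN {1}mdb is an assumption and may differ on your system):

```
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcDbIndex
olcDbIndex: cn eq,sub
```

Note that replace: rewrites the whole olcDbIndex attribute, so every existing index line must be included alongside the changed one.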
Getting MDB_MAP_RESIZED error, followed by slapd (appearing to) freeze
by Leonardo Bruno Lopes
Hello everyone.
I could find (very) few results while searching the internet for
MDB_MAP_RESIZED error in OpenLDAP, and they were of no help.
I'll describe my setup and the circumstances of the error.
I have OpenLDAP (version 2.4.47) and LMDB (version 0.9.22) installed
from the Debian 10 default packages on an amd64 box.
Along with some usual settings (my slapd.conf file is attached), I set
the main database up using the mdb backend with maxsize = 7516192768
(7GB). The on-disk base size (the data.mdb file) is 134MB. I also have
loglevel = stats.
Everything seems to work flawlessly and fast; then all of a sudden syslog
prints this message:
Apr 14 09:47:56 my-ldap-box slapd[20736]: mdb_opinfo_get: err
MDB_MAP_RESIZED: Database contents grew beyond environment
mapsize(-30785)
and seconds later the slapd daemon stops answering requests, although
the service appears to still be running. This has happened 7 times
since April 7, when I moved the data from an old server to this one.
I changed loglevel to 'stats trace args shell' and realized that when
the error occurs, there are always some ADD operations logged right
before.
I inspected the logs today, April 14, right after the occurrence of
the error and attached a file containing: 1) the error message, which
appears at 09:47:56, 2) the messages logged before, since 09:47:50, 3)
the messages logged after, until 09:48:01. From this time on, slapd
logged no more messages and appeared stuck from the clients' point of view.
At 09:53:41 I restarted the daemon. The log generated from this time
until the first request answered is also attached.
For the record, I initially suspected the replica server, which was
running alongside the main server and showed the same behavior, because
at first I associated the error with syncrepl operations. But even after
I stopped the replica, the error kept occurring in the main server.
Any clues?
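One hedged lead: LMDB returns MDB_MAP_RESIZED when the on-disk map has been grown past the mapsize configured in the current process by another environment opening the same data.mdb. So it is worth confirming that every process that opens this database (slapd, slapadd, slapcat, mdb_copy, monitoring tools) is configured with the same limit:

```
# slapd.conf sketch: the same maxsize must be in effect for every
# process that opens this database's data.mdb
database mdb
maxsize 7516192768
```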
Thanks for your consideration.
--
Best regards,
Leonardo Lopes
INFRA-TI / DTI
be_modify failed(16)
by Rallavagu Kon
Hello All,
OpenLDAP server 2.4.49
Setup with replication
Noticing the following error messages in the server logs:
Apr 14 19:56:19 openldap-service-0 slapd[51]: syncrepl_null_callback : error code 0x10
Apr 14 19:56:19 openldap-service-0 slapd[51]: syncrepl_entry: rid=103 be_modify failed (16)
As error code "16" suggested, we validated the schema across the participating nodes and found no discrepancies in the schema configuration.
However, we are noticing that one of the entries is missing from one node out of three participating replicas (multi master).
Any pointers on where to start debugging to understand the issue and fix it?
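For reading such logs, it may help that be_modify's 0x10 is a standard LDAP result code; a small, self-contained lookup (codes from RFC 4511; note that 16 is noSuchAttribute rather than a schema-violation code such as 65, objectClassViolation):

```python
# Minimal lookup for LDAP resultCode values seen in slapd logs (subset, RFC 4511).
LDAP_RESULT_CODES = {
    0: "success",
    16: "noSuchAttribute",
    17: "undefinedAttributeType",
    20: "attributeOrValueExists",
    21: "invalidAttributeSyntax",
    32: "noSuchObject",
    65: "objectClassViolation",
    68: "entryAlreadyExists",
}

def describe(code):
    """Map a numeric LDAP result code to its symbolic name."""
    return LDAP_RESULT_CODES.get(code, "unknown(%d)" % code)

print(describe(0x10))  # -> noSuchAttribute
```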
Thanks in advance.