Re: help with mdb database recovery after crash
by Andrei Mikhailovsky
Hi Howard,
Many thanks for your suggestions. I am about to try what you've suggested (download and compile the latest LMDB from git, using the mdb.master branch of https://github.com/LMDB/lmdb).
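For the record, the rough sequence I plan to follow is something like this (the paths are illustrative for my Zimbra layout, and the destination is an empty directory):

git clone https://github.com/LMDB/lmdb.git
cd lmdb/libraries/liblmdb
git checkout mdb.master
make
./mdb_copy -v /opt/zimbra/data/ldap/mdb/db /opt/zimbra/RECOVERY/mdb-prev

and then check whether that copy of the database is usable.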
However, just to note, I am running the latest version of Zimbra, which uses a fairly recent version of OpenLDAP:
ii zimbra-lmdb 2.4.46-1zimbra8.7b amd64 LMDB Binaries
ii zimbra-lmdb-lib:amd64 2.4.46-1zimbra8.7b amd64 LMDB Libraries
ii zimbra-openldap-client 2.4.46-1zimbra8.7b amd64 OpenLDAP Client Binaries
ii zimbra-openldap-lib:amd64 2.4.46-1zimbra8.7b amd64 OpenLDAP Libraries
ii zimbra-openldap-server 2.4.46-1zimbra8.7b amd64 OpenLDAP Server Binaries
The problem that I am describing occurred with the above version of openldap.
I will keep you posted with any updates.
Cheers
Andrei
----- Original Message -----
> From: "Howard Chu" <hyc(a)symas.com>
> To: "Andrei Mikhailovsky" <andrei(a)arhont.com>, "openldap-technical" <openldap-technical(a)openldap.org>
> Sent: Thursday, 7 February, 2019 02:42:40
> Subject: Re: help with mdb database recovery after crash
> Andrei Mikhailovsky wrote:
>> Hello everyone,
>>
>> I have a bit of an issue with my ldap database. I have a Zimbra community
>> edition which uses openldap. A server crashed and I am unable to start the ldap
>> services after the reboot. The description of my problem, after some digging
>> about is:
>>
>>
>> the initial error indicated problem with the ldap
>>
>> Starting ldap...Done.
>> Search error: Unable to determine enabled services from ldap.
>> Enabled services read from cache. Service list may be inaccurate.
>>
>> Having investigated the issue, I noticed the following errors in the zimbra.log
>>
>> *slapd[31281]: mdb_entry_decode: attribute index 560427631 not recognized*
>>
>> I also noticed that the /opt/zimbra/data/ldap/mdb/db/data.mdb is actually 81Gb
>> in size and had reached the limit imposed by the ldap_db_maxsize variable. so
>> over the weekend, the LDAP mdb file became no longer sparse.
>>
>> I tried following the steps described in
>> https://syslint.com/blog/tutorial/solved-critical-ldap-primary-mdb-databa...
>> but with no success, as the slapcat segfaults with the following message.
>>
>>
>> /opt/zimbra/common/sbin/slapcat -ccc -F /opt/zimbra/data/ldap/config -b "" -l
>> /opt/zimbra/RECOVERY/SLAPCAT/zimbra_mdb.ldiff
>> 5c583982 mdb_entry_decode: attribute index 560427631 not recognized
>> Segmentation fault (core dumped)
>>
>> the mdb_copy produces a file of 420 mb in size, but it still contains the
>> "mdb_entry_decode: attribute index 560427631 not recognized" error.
>> I've also tried mdb_dump, but had the same issues after using the mdb_load
>> command.
>>
>> I found a post ( http://www.openldap.org/its/index.cgi/Software%20Bugs?id=8360 )
>> in the openldap community that the mdb gets corrupted if it reaches the maximum
>> defined size. but no solution of how to fix it.
>
> That's from over 3 years ago and has subsequently been fixed. If you're
> running on such an old release, there's likely not much that can be done.
> Ordinarily it's possible to back up to the immediately preceding
> transaction, in case the last transaction is corrupted, but with that
> particular bug it's likely that the corruption occurred in an earlier
> transaction and has been carried forward in all subsequent ones.
>>
>> any advice on getting the database recovered and working again?
>
> You could try using the preceding transaction and see if it's in any better
> shape. The code for this is not released in LMDB 0.9. You can compile the
> mdb.master branch in git to obtain it. Then use the "-v" option with
> mdb_copy and see if that copy of the database is usable.
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
slapd memory usage
by A. Schulze
Hello,
A friend told me about his findings on slapd memory usage.
setup:
openldap-2.4.47
back_mdb
slapd running as PID 1 inside a docker container
docker host and docker container based on Debian 9 / 64 bit
finding:
with minimal / trivial data, slapd happily consumes about 20% of available physical memory:
# top -p $( pidof slapd)
top - 21:47:10 up 10 days, 10 min, 5 users, load average: 0,06, 0,08, 0,09
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0,9 us, 0,5 sy, 0,0 ni, 97,2 id, 1,3 wa, 0,0 hi, 0,1 si, 0,0 st
KiB Mem : 3926252 total, 142672 free, 1517516 used, 2266064 buff/cache
KiB Swap: 975868 total, 913316 free, 62552 used. 2065320 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2604 165534 20 0 1039308 732000 5628 S 0,0 18,6 0:00.38 slapd
workaround:
https://discuss.linuxcontainers.org/t/empty-openldap-slapd-consuming-800-...
-> limit open files to 1024
# top -p $( pidof slapd)
top - 21:49:16 up 10 days, 12 min, 5 users, load average: 0,07, 0,11, 0,10
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1,7 us, 0,6 sy, 0,0 ni, 90,6 id, 7,1 wa, 0,0 hi, 0,0 si, 0,0 st
KiB Mem : 3926252 total, 863500 free, 796492 used, 2266260 buff/cache
KiB Swap: 975868 total, 913320 free, 62548 used. 2786248 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2993 165534 20 0 48864 7820 5684 S 0,0 0,2 0:00.01 slapd
As far as I can tell from a short test, there are no functional drawbacks.
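For anyone who wants to reproduce the workaround, one way to apply such a limit to a containerized slapd is (the image name is a placeholder; the value is simply the 1024 from the link above):

docker run --ulimit nofile=1024:1024 ... <slapd-image>

or an equivalent "ulimit -n 1024" in the container's entrypoint before starting slapd.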
Any idea why the memory usage is so different?
Andreas
olcRootPW vs. userPassword of olcRootDN
by Zev Weiss
Hello,
I recently set about changing the rootdn password of my OpenLDAP 2.4
server.
I constructed an LDIF file looking something like this:
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}new_passwd_hash
and fed that into ldapmodify. The server then started accepting the new
password and I figured I was done.
What I noticed a few minutes later, however, was that the server was
*also* still accepting the *old* password.
After some peeking around, my guess is that this is due to the fact that
while my config database ended up containing, as expected:
dn: olcDatabase={1}mdb,cn=config
# etc...
olcSuffix: dc=mydomain,dc=tld
olcRootDN: cn=admin,dc=mydomain,dc=tld
olcRootPW:: [base64 of {SSHA}new_passwd_hash]
the "main" database entry for cn=admin,dc=mydomain,dc=tld still had a
userPassword attribute of [base64 of {SSHA}old_passwd_hash]. Prior to
the password change the same base64 hash had been present in both, but
my change of course only updated the config database.
So I'm left with a few questions:
Is it "normal" to have both olcRootPW and the rootdn's userPassword
stored redundantly like this? If not, is the fact that I do a sign that
I did something inappropriate when initially configuring the server?
(Unfortunately I no longer remember exactly what I did at the time.)
If so, I assume the recommended password update procedure would be to
update both in tandem, though I have to wonder what the point of the
redundancy (and resulting potential for inconsistency) is. And should
section 5.2.5.5 of the admin guide perhaps make some mention of this?
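For concreteness, I assume updating both in tandem would mean feeding ldapmodify something like the following (the admin DN and hash are placeholders for my real ones):

dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}new_passwd_hash

dn: cn=admin,dc=mydomain,dc=tld
changetype: modify
replace: userPassword
userPassword: {SSHA}new_passwd_hash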
Thanks,
Zev Weiss
Re: LMDB mdb_dbi_open mystery
by Sam Dave
Thanks for the reply.
That's what it says on its own, but the doc also says:
* The database handle will be private to the current transaction until
* the transaction is successfully committed. If the transaction is
* aborted the handle will be closed automatically.
* After a successful commit the handle will reside in the shared
* environment, and may be used by other transactions.
That suggests I have to wait for a transaction to finish before I can reuse the same db handle for other transactions.
How would you in practice perform multiple transactions at the same time? I don't see it yet. I'm missing something basic.
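Concretely, is the intended pattern something like the sketch below -- open the handle once in a short setup transaction, commit it, and only afterwards start the concurrent (read-only) transactions that reuse the handle? (Sketch only; the environment path and database name are made up, error handling is omitted, and MDB_NOTLS is used just so a single-threaded demo can hold two read transactions at once.)

#include <stdio.h>
#include "lmdb.h"

int main(void)
{
    MDB_env *env;
    MDB_dbi dbi;
    MDB_txn *txn, *r1, *r2;

    mdb_env_create(&env);
    mdb_env_set_maxdbs(env, 4);
    /* MDB_NOTLS so one thread may hold several read-only txns in this demo;
     * normally each reader would live in its own thread. */
    mdb_env_open(env, "./testdb", MDB_NOTLS, 0664);   /* directory must exist */

    /* One setup transaction: the only place mdb_dbi_open is called. */
    mdb_txn_begin(env, NULL, 0, &txn);
    mdb_dbi_open(txn, "mydb", MDB_CREATE, &dbi);
    mdb_txn_commit(txn);      /* after this the handle lives in the shared env */

    /* Any number of later transactions may now use the same handle,
     * including read-only ones running concurrently. */
    mdb_txn_begin(env, NULL, MDB_RDONLY, &r1);
    mdb_txn_begin(env, NULL, MDB_RDONLY, &r2);

    MDB_val key = { 3, "foo" }, data;
    printf("r1: %s\n", mdb_get(r1, dbi, &key, &data) ? "not found" : "found");
    printf("r2: %s\n", mdb_get(r2, dbi, &key, &data) ? "not found" : "found");

    mdb_txn_abort(r1);
    mdb_txn_abort(r2);
    mdb_env_close(env);
    return 0;
}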
Feb 11, 2019, 5:50 AM by hyc(a)symas.com:
> Sam Dave wrote:
>
>> Hello,
>>
>> The doc for mdb_dbi_open says:
>>
>> * This function must not be called from multiple concurrent
>> * transactions in the same process. A transaction that uses
>> * this function must finish (either commit or abort) before
>> * any other transaction in the process may use this function.
>>
>> This indicates that each process can only perform one transaction at the same time.
>>
>
> No, it only says you may only call mdb_dbi_open from one transaction at a time.
>
>> This makes sense for write transactions, but wasn't LMDB supposed to support multiple read transactions at the same time?
>>
>> I'm a bit confused now. Can you assist?
>>
>> Thanks,
>> Sam
>>
>
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
Issue with replication
by Jignesh Patel
We are running OpenLDAP in cluster mode with an MDB setup. We started the second cluster node after some time, and we observe that the data is not in sync between those 2 servers.
So how do we synchronize the data?
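Would the usual approach be to reload the stale node from the good one, roughly like the sketch below, and then let syncrepl continue from there? The paths and database number are guesses for our install:

# on the in-sync server
slapcat -n 1 -l backup.ldif

# on the out-of-sync server, with slapd stopped
rm /var/lib/ldap/data.mdb /var/lib/ldap/lock.mdb
slapadd -n 1 -q -l backup.ldif

Or is there a better way?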
> On Sep 7, 2018, at 8:00 AM, openldap-technical-request(a)openldap.org wrote:
>
> Send openldap-technical mailing list submissions to
> openldap-technical(a)openldap.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.openldap.org/lists/mm/listinfo/openldap-technical
> or, via email, send a message with subject or body 'help' to
> openldap-technical-request(a)openldap.org
>
> You can reach the person managing the list at
> openldap-technical-owner(a)openldap.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of openldap-technical digest..."
>
>
>
> Today's Topics:
>
> 1. Replication issue? Data is different between master and
> consumer with same entryCSNs (Dave Steiner)
> 2. olcSecurity: tls=1 and olcLocalSSF= : what value should I
> use? (Jean-Francois Malouin)
> 3. Re: olcSecurity: tls=1 and olcLocalSSF= : what value should I
> use? (Quanah Gibson-Mount)
> 4. Re: Replication issue? Data is different between master and
> consumer with same entryCSNs (Frank Swasey)
> 5. Re: Replication issue? Data is different between master and
> consumer with same entryCSNs (Quanah Gibson-Mount)
> 6. Re: olcSecurity: tls=1 and olcLocalSSF= : what value should I
> use? (Jean-Francois Malouin)
> 7. Re: Replication issue? Data is different between master and
> consumer with same entryCSNs (Dave Steiner)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 5 Sep 2018 16:49:44 -0400
> From: Dave Steiner <steiner(a)rutgers.edu>
> To: openldap-technical(a)openldap.org
> Subject: Replication issue? Data is different between master and
> consumer with same entryCSNs
> Message-ID: <129e3614-50fe-ba15-4d4b-5f94d14abcd9(a)oit.rutgers.edu>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
> I've been noticing various data discrepancies between our LDAP master and LDAP
> consumers.? We are running OpenLDAP v2.4.44.? We have two masters running
> "mirromode TRUE" and all updates go through a VIP that points to the first one
> unless it's not available (doesn't happen very often except for during patches
> and restarts). We have 13 consumers that replicate through that same VIP.
>
> Here's an example of our syncrepl for a client:
>
> syncrepl rid=221
>   type=refreshAndPersist
>   schemachecking=on
>   provider="ldap://ldapmastervip.rutgers.edu/"
>   bindmethod=sasl
>   saslmech=EXTERNAL
>   starttls=yes
>   tls_reqcert=demand
>   tls_protocol_min="3.1"
>   searchbase="dc=rutgers,dc=edu"
>   attrs="*,+"
>   retry="10 10 20 +"
>   logbase="cn=accesslog"
>   logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
>   syncdata=accesslog
>   network-timeout=30
>   keepalive=180:3:60
>
> I check the contextCSN attributes on all the instances every day and they are
> all in sync (except during any major changes, of course). But I occasionally
> notice discrepancies in the data.... even though the contextCSNs and entryCSNs
> are the same. For example (note hostnames have been changed):
>
> $ ldapsearch ... -H ldap://ldapmaster.rutgers.edu uid=XXXX postalAddress
> createTimestamp modifyTimestamp entryCSN
> dn: uid=XXXX,ou=People,dc=rutgers,dc=edu
> createTimestamp: 20121220100700Z
> postalAddress: Business And Science Bldg$227 Penn Street$Camden, NJ 081021656
> entryCSN: 20180505002024.083133Z#000000#001#000000
> modifyTimestamp: 20180505002024Z
>
> $ ldapsearch ... -H ldap://ldapconsumer3.rutgers.edu uid=XXXX postalAddress
> createTimestamp modifyTimestamp entryCSN
> dn: uid=XXXX,ou=People,dc=rutgers,dc=edu
> createTimestamp: 20121220100700Z
> postalAddress: BUSINESS AND SCIENCE BLDG$227 PENN STREET$CAMDEN, NJ 081021656
> entryCSN: 20180505002024.083133Z#000000#001#000000
> modifyTimestamp: 20180505002024Z
>
> So I'm trying to figure out why this happens (config issue, bug, ???) and
> second, if I can't use the contextCSN to report that everything is fine, what
> else can I do besides trying to compare ldif dumps.
>
> thanks,
> ds
> --
> Dave Steiner steiner(a)rutgers.edu
> IdM, Enterprise Application Services - ASB101; 848.445.5433
> Rutgers University, Office of Information Technology
>
>
OpenLDAP 2.4.45 possible denial of service vulnerability?
by Juergen.Sprenger@swisscom.com
Hi,
After upgrading to the latest release (Solaris 11.3 SRU35, OpenLDAP 2.4.45) we are experiencing massive workloads caused by single clients consuming all available threads and CPU resources. The service no longer responds to requests; even cn=monitor on the loopback interface stops responding properly.
OS:
# pkg info entire
Name: entire
Summary: entire incorporation including Support Repository Update (Oracle Solaris 11.3.35.6.0).
Description: This package constrains system package versions to the same
build. WARNING: Proper system update and correct package
selection depend on the presence of this incorporation.
Removing this package will result in an unsupported system.
For more information see:
https://support.oracle.com/rs?type=doc&id=2045311.1
Category: Meta Packages/Incorporations
State: Installed
Publisher: solaris
Version: 0.5.11 (Oracle Solaris 11.3.35.6.0)
Build Release: 5.11
Branch: 0.175.3.35.0.6.0
Packaging Date: August 10, 2018 03:22:59 PM
Size: 5.46 kB
FMRI: pkg://solaris/entire@0.5.11,5.11-0.175.3.35.0.6.0:20180810T152259Z
Part of slapd.conf:
loglevel none stats sync
sizelimit 15000
timelimit 30
threads 64
tool-threads 8
idletimeout 0
writetimeout 0
security tls=0
conn_max_pending 100
conn_max_pending_auth 1000
database mdb
suffix "dc=scom"
rootdn "cn=*****"
rootpw {SSHA}*****
maxsize 17179869184
maxreaders 4096
searchstack 64
checkpoint 0 1
dbnosync
Machine is a X6-2, 44 cores, 88 threads, 256GB RAM:
# prtdiag
System Configuration: Oracle Corporation ORACLE SERVER X6-2
BIOS Configuration: American Megatrends Inc. 38070000 12/16/2016
BMC Configuration: IPMI 2.0 (KCS: Keyboard Controller Style)
==== Processor Sockets ====================================
Version Location Tag
-------------------------------- --------------------------
Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz P0
Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz P1
Even monitoring (cn=monitor) is no longer accessible when this occurs.
So far we have experienced this behavior with clients on Oracle Enterprise Linux 6.x, Red Hat Enterprise Linux 6.x and AIX. Service requests have been opened with the vendors' support, but I'd prefer to have an installation that is less vulnerable and more resilient to issues of this kind.
No problems or issues with Solaris and HPUX clients.
Has anyone experienced similar problems or suggestions for configuration?
To avoid performance issues, loglevel is now "none stats sync", but it can be changed for a while to track down the cause.
Best regards
Jürgen Sprenger
LMDB mdb_dbi_open mystery
by Sam Dave
Hello,
The doc for mdb_dbi_open says:
* This function must not be called from multiple concurrent
* transactions in the same process. A transaction that uses
* this function must finish (either commit or abort) before
* any other transaction in the process may use this function.
This indicates that each process can only perform one transaction at the same time.
This makes sense for write transactions, but wasn't LMDB supposed to support multiple read transactions at the same time?
I'm a bit confused now. Can you assist?
Thanks,
Sam
Monitor configuration... also for the manual
by Arno Lehmann
Hi all,
I'm currently in the process of migrating LDAP data from a really
outdated system to something a bit fresher. So, I started with the OpenLDAP
provided by Debian, and learned that a good part of the system management
knowledge I had is outdated. So, I started reading TFM.
... and stumbled over the section where I wanted to learn how to set up
monitoring with the cn=config configuration scheme.
Naturally, that was exactly the incentive I needed to actually start
giving back something to the community. So, I ended up playing around a
bit and googling a bit and scratching my head a lot, taking notes, and
came up with something you might want to add into the manual -- see below.
Unfortunately, the whole thing did not exactly work out as I intended,
because the crucial step insists on failing. Namely, trying to create a
monitor database fails, indicating there already is one in existence.
However, I can't find any such beast:
root@host:~/ldap# ldapsearch -Q -LLL -H ldapi:/// -Y EXTERNAL -b
'cn=config' '(|(olcDatabase=monitor)(objectClass=olcMonitorConfig))'
root@host:~/ldap#
I have actually tried to add the above data using slapadd, with slapd
shut down, and got an even more confusing error message:
root@host:~/ldap# slapadd -n 0 -l addMonitorDB.ldif
slapadd: could not add entry dn="olcDatabase=Monitor,cn=config"
(line=1): autocreation of "olcDatabase={-1}frontend" failed
_######### 46.91% eta none elapsed none spd
1.3 M/s
Closing DB...
So, what do I need to do to get my manual suggestion into working condition?
(And also allow me to monitor my all fresh LDAP instance :-)
Cheers,
Arno
-----8<--------------cut here... manual text below ------------->8------
20.1. Monitor configuration via cn=config(5)
To enable monitoring of an OpenLDAP server, the "monitor" database needs
to be available and configured, and read access to it must be allowed.
20.1.1. Ensure the monitor backend is available
The first step is to ensure the monitor database is part of the
running slapd process. As database backends can be built into the main
binary or loaded dynamically, as configured, the initial step is to
check whether the module is built in or already loaded, and, if not, add it
to the configuration.
20.1.2. Check the binary for built-in modules
By running
root@host:~/ldap# slapd -VVV
@(#) $OpenLDAP: slapd (May 23 2018 04:25:19) $
Debian OpenLDAP Maintainers <pkg-openldap-devel(a)lists.alioth.debian.org>
Included static backends:
config
ldif
root@host:~/ldap#
it is easy to check whether the monitor backend database is built in; in
the above case, it is not. If it already is part of the slapd binary,
skip ahead to section 20.1.5 further below.
20.1.3. Check module loader configuration
*Note* that the below examples use one particular scheme to access
LDAP with administrative, i.e. full, privileges; things may be
configured differently in other environments. (The example is correct
for a stock Debian 9 installation, by the way.)
Checking if the monitor backend is already configured to be loaded
requires querying the LDAP configuration, in particular the module
loader configuration:
root@host:~/ldap# ldapsearch -LLL -H ldapi:/// -Y EXTERNAL -b
'cn=config' '(objectClass=olcModuleList)' olcModuleLoad
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: cn=module{0},cn=config
olcModuleLoad: {0}back_mdb
olcModuleLoad: {1}back_monitor
root@host:~/ldap#
In this example, we see that the back_monitor module is loaded. If a
line referencing this module is not shown, follow the next step described
below. Otherwise, proceed to section 20.1.5.
20.1.4. Add the monitor module to the module loader configuration
20.1.4.1. Verify the necessary module exists.
A careful administrator will ensure the needed module is actually
available to be loaded. This is done by checking that the needed file
exists in the module loader's path.
We assume the path to be correctly set up in the loader's configuration
in the first place, trusting that the package builder prepared this
step. Thus:
root@host:~/ldap# ldapsearch -LLL -H ldapi:/// -Y EXTERNAL -b
'cn=config' '(objectClass=olcModuleList)' olcModulePath
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: cn=module{0},cn=config
olcModulePath: /usr/lib/ldap
root@host:~/ldap# ls -l /usr/lib/ldap/*monitor*
lrwxrwxrwx 1 root root 26 May 23 2018
/usr/lib/ldap/back_monitor-2.4.so.2 -> back_monitor-2.4.so.2.10.7
-rw-r--r-- 1 root root 109408 May 23 2018
/usr/lib/ldap/back_monitor-2.4.so.2.10.7
-rw-r--r-- 1 root root 976 May 23 2018 /usr/lib/ldap/back_monitor.la
lrwxrwxrwx 1 root root 26 May 23 2018 /usr/lib/ldap/back_monitor.so
-> back_monitor-2.4.so.2.10.7
root@host:~/ldap#
and voilà, things look good: the module shared object file is
available, and things are prepared for different versions coexisting
in the usual Unix/Linux way. (The author has no idea what to expect on
a Windows system.)
If things did *not* look good, it would be time to check the
distribution repositories for packages with OpenLDAP modules, or
verify the build and installation process, both of which are out of
scope for this chapter.
20.1.4.2. Adding the monitor backend to the loader configuration
Assuming paths and binaries are all correct, it's now merely a matter
of adding some attributes to the module loader's configuration. This
can be done with just a few lines of LDIF:
root@host:~/ldap# cat loadMonitor.ldif
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: back_monitor
root@host:~/ldap# ldapmodify -H ldapi:/// -Y EXTERNAL -f loadMonitor.ldif
should do all that is necessary. Error messages hopefully provide an
indication of what went wrong in case there is a problem.
20.1.5. Verifying the monitor backend needs to be configured
Of course, we start out being extra careful, by checking that no
monitor backend is already configured:
root@host:~/ldap# ldapsearch -LLL -H ldapi:/// -Y EXTERNAL -b
'cn=config' '(objectClass=olcDataBaseConfig)' olcDatabase objectClass
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
dn: olcDatabase={1}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
root@host:~/ldap#
Carefully checking the output, it becomes clear no monitor database is
available as yet, so we need to continue.
20.1.6. Adding a "monitor" database
Again, a simple LDAP add operation will be sufficient:
root@host:~/ldap# cat addMonitorDB.ldif
dn: olcDatabase=monitor,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMonitorConfig
olcDatabase: monitor
olcAccess: to dn.subtree=cn=monitor by users read
root@host:~/ldap#
Naturally, the ACL(s) may be adjusted as needed.
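For example, to grant read access only to one administrative identity
instead of all authenticated users (the DN here is of course just an
illustration):
olcAccess: to dn.subtree="cn=monitor" by dn.exact="cn=admin,dc=example,dc=com" read by * none
The database entry is then added as usual: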
root@host:~/ldap# ldapadd -H ldapi:/// -Y EXTERNAL -f addMonitorDB.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "olcDatabase=monitor,cn=config"
ldap_add: Other (e.g., implementation specific) error (80)
additional info: only one monitor database allowed
(BETTER NOT PUBLISH THE FINAL TWO LINES...)
Re: help with mdb database recovery after crash
by Andrei Mikhailovsky
Thanks everyone for your contributions. I will follow the suggestions over the weekend and update the list on the progress.
Kind regards
----- Original Message -----
> From: "Enes Kıdık" <enes.kidik(a)pardus.org.tr>
> To: "Andrei Mikhailovsky" <andrei(a)arhont.com>
> Sent: Thursday, 7 February, 2019 10:01:00
> Subject: Re: help with mdb database recovery after crash
> Hello Andrei,
> I am not posting this to the openldap mailing list, but it may be useful to you.
>
> Three years ago I was working on Zimbra Collaboration and had a hard time with those
> sparse files.
> I found the links I read back then. Your situation seems different from mine, but give them a
> shot.
>
> You may want to search for how to copy an old backup of the Zimbra sparse file to a new
> location (after using mdb_copy the copied file is no longer a sparse file, so
> there may be another way, like slapcat/slapadd).
> https://wiki.zimbra.com/wiki/OpenLDAP_Performance_Tuning_8.0
> https://www.openldap.org/lists/openldap-technical/201305/msg00229.html
> https://linuxacademy.com/blog/linux/openldap-fixing-or-recovering-a-corru...
>
> Quanah is an LDAP guru. He may give you a specific answer to solve your problem.
>
> Have a good day. I hope you solve it quickly.
> Enes
>
> ----- Original Message -----
> From: "Howard Chu" <hyc(a)symas.com>
> To: "Andrei Mikhailovsky" <andrei(a)arhont.com>, openldap-technical(a)openldap.org
> Sent: Thursday, February 7, 2019 5:42:40 AM
> Subject: Re: help with mdb database recovery after crash
>
> Andrei Mikhailovsky wrote:
>> Hello everyone,
>>
>> I have a bit of an issue with my ldap database. I have a Zimbra community
>> edition which uses openldap. A server crashed and I am unable to start the ldap
>> services after the reboot. The description of my problem, after some digging
>> about is:
>>
>>
>> the initial error indicated problem with the ldap
>>
>> Starting ldap...Done.
>> Search error: Unable to determine enabled services from ldap.
>> Enabled services read from cache. Service list may be inaccurate.
>>
>> Having investigated the issue, I noticed the following errors in the zimbra.log
>>
>> *slapd[31281]: mdb_entry_decode: attribute index 560427631 not recognized*
>>
>> I also noticed that the /opt/zimbra/data/ldap/mdb/db/data.mdb is actually 81Gb
>> in size and had reached the limit imposed by the ldap_db_maxsize variable. so
>> over the weekend, the LDAP mdb file became no longer sparse.
>>
>> I tried following the steps described in
>> https://syslint.com/blog/tutorial/solved-critical-ldap-primary-mdb-databa...
>> but with no success, as the slapcat segfaults with the following message.
>>
>>
>> /opt/zimbra/common/sbin/slapcat -ccc -F /opt/zimbra/data/ldap/config -b "" -l
>> /opt/zimbra/RECOVERY/SLAPCAT/zimbra_mdb.ldiff
>> 5c583982 mdb_entry_decode: attribute index 560427631 not recognized
>> Segmentation fault (core dumped)
>>
>> the mdb_copy produces a file of 420 mb in size, but it still contains the
>> "mdb_entry_decode: attribute index 560427631 not recognized" error.
>> I've also tried mdb_dump, but had the same issues after using the mdb_load
>> command.
>>
>> I found a post ( http://www.openldap.org/its/index.cgi/Software%20Bugs?id=8360 )
>> in the openldap community that the mdb gets corrupted if it reaches the maximum
>> defined size. but no solution of how to fix it.
>
> That's from over 3 years ago and has subsequently been fixed. If you're
> running on such an old release, there's likely not much that can be done.
> Ordinarily it's possible to back up to the immediately preceding
> transaction, in case the last transaction is corrupted, but with that
> particular bug it's likely that the corruption occurred in an earlier
> transaction and has been carried forward in all subsequent ones.
>>
>> any advice on getting the database recovered and working again?
>
> You could try using the preceding transaction and see if it's in any better
> shape. The code for this is not released in LMDB 0.9. You can compile the
> mdb.master branch in git to obtain it. Then use the "-v" option with
> mdb_copy and see if that copy of the database is usable.
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
Locking down ciphers in OpenLDAP with GnuTLS
by Philip Colmer
I want to restrict the cipher suites used in OpenLDAP so that only TLS1.2
is supported.
Looking at https://openldap.org/doc/admin24/tls.html, I first tried setting
olcTLSCipherSuite to "HIGH", but the LDAP server gave an error 80 and then
stopped accepting further connections until I restarted it.
Since our OpenLDAP installation has been built with GnuTLS, I'm presuming
that I have to explicitly list out the GnuTLS cipher suites I want to use.
I've used gnutls-cli to list out the cipher suites that support PFS and
then extracted the ones that are TLS1.2.
So, just to confirm, do I need to provide a colon-separated list of each
and every cipher suite or is there a GnuTLS shorthand that I can use?
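For example, would a GnuTLS priority string along these lines be accepted
here, or does olcTLSCipherSuite only take explicit suite names when built
against GnuTLS? (Untested on my side.)
olcTLSCipherSuite: NORMAL:-VERS-ALL:+VERS-TLS1.2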
Regards
Philip