RE: ldap query performance issue
by Chris Card
----------------------------------------
> From: ctcard(a)hotmail.com
> To: quanah(a)zimbra.com
> Subject: RE: ldap query performance issue
> Date: Thu, 23 May 2013 17:37:18 +0000
>
> ----------------------------------------
>> Date: Thu, 23 May 2013 10:06:51 -0700
>> From: quanah(a)zimbra.com
>> To: ctcard(a)hotmail.com; openldap-technical(a)openldap.org
>> Subject: Re: ldap query performance issue
>>
>> --On Thursday, May 23, 2013 4:40 PM +0000 Chris Card <ctcard(a)hotmail.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> I have an openldap directory with about 7 million DNs, running openldap
>>> 2.4.31 with a BDB backend (4.6.21), running on CentOS 6.3.
>>>
>>> The structure of the directory is like this, with suffix dc=x,dc=y
>>>
>>> dc=x,dc=y
>>> account=a,dc=x,dc=y
>>> mail=m,account=a,dc=x,dc=y // Users
>>> ....
>>> licenceId=l,account=a,dc=x,dc=y // Licences, objectclass=licence
>>> ....
>>> group=g,account=a,dc=x,dc=y // Groups
>>> ....
>>> // etc.
>>>
>>> account=b,dc=x,dc=y
>>> ....
>>>
>>> Most of the DNs in the directory are users or groups, and the number of
>>> licences is small (<10) for each account.
>>>
>>> If I do a query with basedn account=a,dc=x,dc=y and filter
>>> (objectclass=licence) I see wildly different performance, depending on
>>> how many users are under account a. For an account with ~30000 users the
>>> query takes 2 seconds at most, but for an account with ~60000 users the
>>> query takes 1 minute.
>>>
>>> It only appears to be when I filter on objectclass=licence that I see
>>> that behaviour. If I filter on a different objectclass which matches a
>>> similar number of objects to the objectclass=licence filter, the
>>> performance doesn't seem to depend on the number of users.
>>>
>>> There is an index on objectclass (of course), but the behaviour I'm
>>> seeing seems to indicate that for this query, at some point slapd stops
>>> using the index and just scans all the objects under the account.
>>>
>>> Any ideas?
>>
>> Increase the IDL range. This is how I do it:
>>
>> --- openldap-2.4.35/servers/slapd/back-bdb/idl.h.orig 2011-02-17 16:32:02.598593211 -0800
>> +++ openldap-2.4.35/servers/slapd/back-bdb/idl.h 2011-02-17 16:32:08.937757993 -0800
>> @@ -20,7 +20,7 @@
>> /* IDL sizes - likely should be even bigger
>> * limiting factors: sizeof(ID), thread stack size
>> */
>> -#define BDB_IDL_LOGN 16 /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
>> +#define BDB_IDL_LOGN 17 /* DB_SIZE is 2^16, UM_SIZE is 2^17 */
>> #define BDB_IDL_DB_SIZE (1<<BDB_IDL_LOGN)
>> #define BDB_IDL_UM_SIZE (1<<(BDB_IDL_LOGN+1))
>> #define BDB_IDL_UM_SIZEOF (BDB_IDL_UM_SIZE * sizeof(ID))
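As a quick sanity check on the patch above (a sketch; the ~30000/~60000 figures come from the original report), the stock constants put the on-disk IDL limit right between those two account sizes:

```python
# IDL size constants from back-bdb/idl.h (stock values, before the patch)
BDB_IDL_LOGN = 16
BDB_IDL_DB_SIZE = 1 << BDB_IDL_LOGN        # IDs per on-disk IDL slot
BDB_IDL_UM_SIZE = 1 << (BDB_IDL_LOGN + 1)  # IDs in the in-memory IDL

print(BDB_IDL_DB_SIZE, BDB_IDL_UM_SIZE)    # 65536 131072

# An account subtree with ~60000 users plus groups etc. can cross the
# 65536-ID limit, at which point the IDL degrades to a range and slapd
# falls back to scanning candidates; ~30000 stays well below it.
print(30000 < BDB_IDL_DB_SIZE)             # True
```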
> Thanks, that looks like it might be the issue. Unfortunately I only see the issue in production, so patching it might be a pain.
I've tried this change, but it made no difference to the performance of the query.
Chris
9 years, 8 months
[lmdb] MDB_BAD_RSLOT for mdb_txn_get()
by Ben Johnson
I'm using the gomdb interface so I'll do my best to translate this to the C calls. I'm performing an mdb_txn_put(), committing the transaction and then later I'm opening a new read-only transaction where I do an mdb_txn_get() and I receive a "MDB_BAD_RSLOT: Invalid reuse of reader locktable slot" error.
When I switch the second transaction to not be read-only then the error goes away and it works fine. I checked the LMDB code and on line 1798, it's checking:
if (r->mr_pid != env->me_pid || r->mr_txnid != (txnid_t)-1)
return MDB_BAD_RSLOT;
The r->mr_pid == env->me_pid part passes (I'm only running one process), but r->mr_txnid is 0, so it does not equal (txnid_t)-1 and the check fails. I started going down the rabbit hole to figure this out further, but I don't understand the locktable setup entirely.
Am I doing something wrong with how I'm creating read-only transactions?
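One detail worth noting about that check: (txnid_t)-1 is not the signed value -1. txnid_t is an unsigned type, so the cast yields the all-ones maximum value, which LMDB uses as the "slot not in use" sentinel; a reader slot with mr_txnid == 0 therefore fails the test. A tiny illustration of the cast (assuming an unsigned 64-bit txnid_t, as on a typical 64-bit build):

```python
import ctypes

# (txnid_t)-1 with an unsigned 64-bit txnid_t wraps around to the maximum
# value -- the sentinel LMDB stores in a free reader slot
sentinel = ctypes.c_uint64(-1).value
print(sentinel)              # 18446744073709551615

mr_txnid = 0                 # the value observed in the reader slot
print(mr_txnid != sentinel)  # True -> the check returns MDB_BAD_RSLOT
```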
Ben Johnson
ben(a)skylandlabs.com
9 years, 8 months
Can I distribute salted-hashed passwords on different machines?
by Marco Pizzoli
Hi all,
I think I already know the answer, but I would like to be absolutely sure
about it.
Could I generate an {SSHA} hash of a password (to be used for the rootdn
account) with the slappasswd utility on one system and reuse that salted
hash for the very same purpose on a different operating system?
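For reference, an {SSHA} value is self-contained: it is the base64 encoding of SHA-1(password + salt) with the salt appended, so verifying it needs only the stored string and nothing machine-specific. A minimal sketch of the scheme layout (the standard format, not OpenLDAP's own code):

```python
import base64
import hashlib
import os

def ssha(password, salt=None):
    """{SSHA}: base64(SHA1(password + salt) + salt); the salt travels inside the value."""
    salt = salt if salt is not None else os.urandom(4)
    digest = hashlib.sha1(password + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

def check(password, stored):
    raw = base64.b64decode(stored[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]   # SHA-1 digests are 20 bytes
    return hashlib.sha1(password + salt).digest() == digest

h = ssha(b"secret")
print(check(b"secret", h), check(b"wrong", h))  # True False
```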
Thanks in advance again
Marco
9 years, 8 months
ldap query performance issue
by Chris Card
Hi all,
I have an openldap directory with about 7 million DNs, running openldap 2.4.31 with a BDB backend (4.6.21), running on CentOS 6.3.
The structure of the directory is like this, with suffix dc=x,dc=y
dc=x,dc=y
account=a,dc=x,dc=y
mail=m,account=a,dc=x,dc=y // Users
....
licenceId=l,account=a,dc=x,dc=y // Licences, objectclass=licence
....
group=g,account=a,dc=x,dc=y // Groups
....
// etc.
account=b,dc=x,dc=y
....
Most of the DNs in the directory are users or groups, and the number of licences is small (<10) for each account.
If I do a query with basedn account=a,dc=x,dc=y and filter (objectclass=licence) I see wildly different performance, depending on how many users are under account a. For an account with ~30000 users the query takes 2 seconds at most, but for an account with ~60000 users the query takes 1 minute.
It only appears to be when I filter on objectclass=licence that I see that behaviour. If I filter on a different objectclass which matches a similar number of objects to the objectclass=licence filter, the performance doesn't seem to depend on the number of users.
There is an index on objectclass (of course), but the behaviour I'm seeing seems to indicate that for this query, at some point slapd stops using the index and just scans all the objects under the account.
Any ideas?
Chris
9 years, 8 months
Invalid manager attribute when in form 1.3.6.1.4.1.1466.0=#04024869,O=Test,C=GB
by Soulier, Marcel
Hi,
I am trying to import the following ldif file into openldap and get the error message "manager: value #0 invalid per syntax".
test.ldif:
dn: cn=test,o=users,dc=example,dc=com
objectClass: top
objectClass: person
cn: test
manager: 1.3.6.1.4.1.1466.0=#04024869,O=Test,C=GB
Console output:
adding new entry "cn=test,o=users,dc=example,dc=com"
ldap_add: Invalid syntax (21)
additional info: manager: value #0 invalid per syntax
According to cosine.schema, the manager attribute has EQUALITY distinguishedNameMatch and SYNTAX 1.3.6.1.4.1.1466.115.121.1.12.
The value "1.3.6.1.4.1.1466.0=#04024869,O=Test,C=GB" is taken from the DN examples in RFC 2252 and works fine in OpenDS, so I would expect it to work in OpenLDAP as well.
What am I missing?
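For comparison (a sketch; the entry and names below are hypothetical): a manager value built only from attribute types slapd knows from its loaded schema passes DN validation, which suggests the numeric-OID attribute type in the failing value is the part slapd cannot resolve:

```
dn: cn=test2,o=users,dc=example,dc=com
objectClass: inetOrgPerson
cn: test2
sn: test2
manager: cn=boss,o=Test,c=GB
```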
Marcel
Marcel.Soulier(a)opitz-consulting.com
9 years, 8 months
Deadlock problem on objectClass.bdb
by Maxim Shaposhnik
Hi,
I'm facing an OpenLDAP freeze on concurrent item modification.
OS type/version is FC17, OpenLDAP 2.4.35. I tried both BerkeleyDB versions
5.2.36 and the latest 5.3.21. DB size is about 50K.
From my experiments, LDAP stops responding when the count of locks on
objectClass.bdb reaches 3 (when it is less than 3, it seems to resolve OK):
80000573 READ 3 HELD objectClass.bdb page 3
80000573 WRITE 7 HELD objectClass.bdb page 3
800001b6 READ 1 WAIT objectClass.bdb page 3
I also tried different locks detector schemes (different values for
set_lk_detect ) without success.
What might be the root cause of this situation?
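For reference, the deadlock detector and the lock limits live in the environment's DB_CONFIG file (slapd.conf's dbconfig directive writes the same settings). A sketch with an explicit detector and limits raised above the figures reported below (the values are illustrative, not tuned):

```
# DB_CONFIG in the BDB database directory (illustrative values)
set_lk_detect DB_LOCK_DEFAULT
set_lk_max_locks 10000
set_lk_max_lockers 5000
set_lk_max_objects 10000
```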
This is my full db_stat output:
db_stat -CA
Default locking region information:
19 Last allocated locker ID
0x7fffffff Current maximum unused locker ID
9 Number of lock modes
200 Initial number of locks allocated
0 Initial number of lockers allocated
200 Initial number of lock objects allocated
3000 Maximum number of locks possible
1500 Maximum number of lockers possible
1500 Maximum number of lock objects possible
200 Current number of locks allocated
15 Current number of lockers allocated
200 Current number of lock objects allocated
40 Number of lock object partitions
2053 Size of object hash table
46 Number of current locks
115 Maximum number of locks at any one time
6 Maximum number of locks in any one bucket
11 Maximum number of locks stolen by for an empty partition
4 Maximum number of locks stolen for any one partition
13 Number of current lockers
15 Maximum number of lockers at any one time
26 Number of current lock objects
74 Maximum number of lock objects at any one time
2 Maximum number of lock objects in any one bucket
0 Maximum number of objects stolen by for an empty partition
0 Maximum number of objects stolen for any one partition
88126 Total number of locks requested
87895 Total number of locks released
0 Total number of locks upgraded
16 Total number of locks downgraded
174 Lock requests not available due to conflicts, for which we waited
153 Lock requests not available due to conflicts, for which we did not
wait
11 Number of deadlocks
0 Lock timeout value
0 Number of locks that have timed out
0 Transaction timeout value
0 Number of transactions that have timed out
2MB 504KB Region size
16 The number of partition locks that required waiting (0%)
8 The maximum number of times any partition lock was waited for (0%)
0 The number of object queue operations that required waiting (0%)
1 The number of locker allocations that required waiting (0%)
2 The number of region locks that required waiting (0%)
2 Maximum hash bucket length
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
=-=-=-=-=-=-=-=-=-=
Lock REGINFO information:
Environment Region type
1 Region ID
__db.001 Region name
0x7fc6ffe7f000 Region address
0x7fc6ffe7f0a0 Region allocation head
0x7fc70007f5b0 Region primary address
0 Region maximum allocation
0 Region allocated
Region allocations: 2874 allocations, 0 failures, 2750 frees, 7 longest
Allocations by power-of-two sizes:
1KB 2869
2KB 0
4KB 1
8KB 0
16KB 0
32KB 0
64KB 2
128KB 0
256KB 1
512KB 0
1024KB 1
REGION_SHARED Region flags
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Lock region parameters:
2 Lock region region mutex [2/59655 0% 25161/140492677707584]
<wakeups 0/1>
2053 locker table size
2053 object table size
2099280 obj_off
2316456 locker_off
1 need_dd
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Lock conflict matrix:
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Locks grouped by lockers:
Locker Mode Count Status ----------------- Object ---------------
e dd=11 locks held 1 write locks 0 pid/thread
23242/140324977571648 flags 10 priority 100
e READ 1 HELD id2entry.bdb handle 0
f dd=10 locks held 1 write locks 0 pid/thread
23242/140324977571648 flags 10 priority 100
f READ 1 HELD dn2id.bdb handle 0
10 dd= 9 locks held 0 write locks 0 pid/thread
23242/140324977571648 flags 0 priority 100
11 dd= 6 locks held 1 write locks 0 pid/thread
23242/140324451448576 flags 10 priority 100
11 READ 1 HELD objectClass.bdb handle 0
12 dd= 5 locks held 1 write locks 0 pid/thread
23242/140324451448576 flags 10 priority 100
12 READ 1 HELD cloudIdeAliases.bdb handle 0
13 dd= 4 locks held 1 write locks 0 pid/thread
23242/140324451448576 flags 10 priority 100
13 READ 1 HELD ou.bdb handle 0
8000019c dd= 8 locks held 0 write locks 0 pid/thread
23242/140324977571648 flags 0 priority 100
8000019d dd= 7 locks held 0 write locks 0 pid/thread
23242/140324451448576 flags 0 priority 100
800001a1 dd= 3 locks held 0 write locks 0 pid/thread
23242/140324443055872 flags 0 priority 100
800001b6 dd= 2 locks held 0 write locks 0 pid/thread
23242/140324332623616 flags 0 priority 100
800001b6 READ 1 WAIT objectClass.bdb page 3
8000045f dd= 1 locks held 1 write locks 1 pid/thread
23242/140324164859648 flags 0 priority 100
8000045f WRITE 1 HELD cloudIdeAliases.bdb page 5337
80000572 dd= 0 locks held 2 write locks 0 pid/thread
23242/140324451448576 flags 0 priority 100
80000572 READ 1 HELD 0x23f140 len: 9 data: 020000000000000000
80000572 READ 1 HELD dn2id.bdb page 10752
80000573 dd= 0 locks held 36 write locks 19 pid/thread
23242/140324451448576 flags 0 priority 100
80000573 READ 1 WAIT cloudIdeAliases.bdb page 5337
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4604
80000573 READ 1 HELD cloudIdeAliases.bdb page 4604
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 6375
80000573 READ 1 HELD cloudIdeAliases.bdb page 6375
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 200
80000573 READ 1 HELD cloudIdeAliases.bdb page 200
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 1438
80000573 READ 1 HELD cloudIdeAliases.bdb page 1438
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 16
80000573 READ 1 HELD cloudIdeAliases.bdb page 16
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 286
80000573 READ 1 HELD cloudIdeAliases.bdb page 286
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 2308
80000573 READ 1 HELD cloudIdeAliases.bdb page 2308
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4708
80000573 READ 1 HELD cloudIdeAliases.bdb page 4708
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 123
80000573 READ 1 HELD cloudIdeAliases.bdb page 123
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 540
80000573 READ 1 HELD cloudIdeAliases.bdb page 540
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4737
80000573 READ 1 HELD cloudIdeAliases.bdb page 4737
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 2806
80000573 READ 1 HELD cloudIdeAliases.bdb page 2806
80000573 WRITE 1 HELD ou.bdb page 271
80000573 READ 1 HELD ou.bdb page 271
80000573 WRITE 7 HELD objectClass.bdb page 3
80000573 READ 3 HELD objectClass.bdb page 3
80000573 WRITE 3 HELD objectClass.bdb page 2
80000573 READ 1 HELD objectClass.bdb page 2
80000573 WRITE 1 HELD dn2id.bdb page 10234
80000573 READ 1 HELD dn2id.bdb page 10234
80000573 WRITE 1 HELD dn2id.bdb page 2
80000573 READ 1 HELD dn2id.bdb page 2
80000573 WRITE 1 HELD dn2id.bdb page 10225
80000573 WRITE 1 HELD dn2id.bdb page 10752
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Locks grouped by object:
Locker Mode Count Status ----------------- Object ---------------
80000573 READ 1 HELD cloudIdeAliases.bdb page 4604
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4604
80000573 READ 1 HELD cloudIdeAliases.bdb page 4708
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4708
80000573 READ 1 HELD cloudIdeAliases.bdb page 540
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 540
13 READ 1 HELD ou.bdb handle 0
80000573 READ 1 HELD cloudIdeAliases.bdb page 2806
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 2806
80000573 READ 1 HELD cloudIdeAliases.bdb page 4737
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 4737
80000573 READ 1 HELD ou.bdb page 271
80000573 WRITE 1 HELD ou.bdb page 271
80000572 READ 1 HELD dn2id.bdb page 10752
80000573 WRITE 1 HELD dn2id.bdb page 10752
e READ 1 HELD id2entry.bdb handle 0
8000045f WRITE 1 HELD cloudIdeAliases.bdb page 5337
80000573 READ 1 WAIT cloudIdeAliases.bdb page 5337
80000573 READ 1 HELD cloudIdeAliases.bdb page 1438
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 1438
f READ 1 HELD dn2id.bdb handle 0
80000573 READ 1 HELD dn2id.bdb page 2
80000573 WRITE 1 HELD dn2id.bdb page 2
80000572 READ 1 HELD 0x23f140 len: 9 data: 020000000000000000
80000573 READ 1 HELD dn2id.bdb page 10234
80000573 WRITE 1 HELD dn2id.bdb page 10234
80000573 WRITE 1 HELD dn2id.bdb page 10225
80000573 READ 1 HELD cloudIdeAliases.bdb page 123
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 123
12 READ 1 HELD cloudIdeAliases.bdb handle 0
80000573 READ 1 HELD cloudIdeAliases.bdb page 16
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 16
80000573 READ 1 HELD cloudIdeAliases.bdb page 200
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 200
80000573 READ 1 HELD cloudIdeAliases.bdb page 6375
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 6375
80000573 READ 3 HELD objectClass.bdb page 3
80000573 WRITE 7 HELD objectClass.bdb page 3
800001b6 READ 1 WAIT objectClass.bdb page 3
80000573 READ 1 HELD objectClass.bdb page 2
80000573 WRITE 3 HELD objectClass.bdb page 2
11 READ 1 HELD objectClass.bdb handle 0
80000573 READ 1 HELD cloudIdeAliases.bdb page 2308
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 2308
80000573 READ 1 HELD cloudIdeAliases.bdb page 286
80000573 WRITE 1 HELD cloudIdeAliases.bdb page 286
9 years, 8 months
Syncrepl and selected subtrees
by Marco Pizzoli
Hi all,
I would like a hint on how to syncreplicate only a group of subtrees from a
master DIT.
For example, if I have a BaseDN called "ou=root,dc=my_domain" with 4
subtrees at the first nesting level (ou=subtree1, ou=subtree2, and so
on), how can I configure a slave to syncrepl only subtree1 and subtree3?
Graphically:
ou=root,dc=my_domain
|- ou=subtree1
|- ou=subtree2
|- ou=subtree3
|- ou=subtree4
Of course, I could use a dedicated user whose view of the data is restricted by ACLs,
but what if I couldn't?
Is there a way to express this directly in the "replica" configuration?
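One pattern sometimes used for this (a sketch only; the rids, provider URL, and credentials are placeholders): give the consumer one syncrepl stanza per subtree, each with its own rid and searchbase. OpenLDAP accepts multiple syncrepl directives for a single database:

```
syncrepl rid=001
    provider=ldap://master.example.com:389
    type=refreshAndPersist
    searchbase="ou=subtree1,ou=root,dc=my_domain"
    scope=sub
    bindmethod=simple
    binddn="cn=replicator,ou=root,dc=my_domain"
    credentials=secret

syncrepl rid=003
    provider=ldap://master.example.com:389
    type=refreshAndPersist
    searchbase="ou=subtree3,ou=root,dc=my_domain"
    scope=sub
    bindmethod=simple
    binddn="cn=replicator,ou=root,dc=my_domain"
    credentials=secret
```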
Thanks in advance
Marco
9 years, 8 months
MaxDBs
by Ben Johnson
Is there an upper limit to mdb_env_set_maxdbs()? And what's the overhead for adding additional DBs? Can I change this number once it's set if I close and reopen the env?
Ben Johnson
ben(a)skylandlabs.com
9 years, 8 months
Different admin passwords on replica servers
by Thomas Macaigne
Hello,
I have two servers in a N-Way MultiMaster / MirrorMode setup.
Everything works fine, backend and cn=config is replicated.
But I would like to have a different password for the cn=admin,dc=xxx=dc,dc=fr account.
How would one do this?
Regards,
9 years, 8 months
Syncrepl unregularly stops on slaves, leaving DB in inconsistent state
by Karsten.Kroesch@swisscom.com
Hello OpenLDAP users,
I have a Syncrepl setup with one master server and seven slaves.
The slaves are mail servers running Postfix, SpamAssassin and Amavis as LDAP clients and have a relatively heavy load.
Every two weeks or so (not regularly), syncrepl stops on some of the slaves; there are no more syncrepl requests from those slaves.
Restarting slapd on the affected slaves fixes the problem in most cases, but sometimes some entries are not replicated until I modify them manually on the master. After that, it works fine again.
My OpenLDAP version is 2.4.23 running on SunOS 5.10 Generic_139555-08 sun4v sparc SUNW,Sun-Fire-T1000 Solaris. The servers that are affected more often are running in non-global zones.
Any ideas would be helpful.
Thanks in advance,
Karsten Kroesch
____________________________
Internet Application Engineer
Applications Operations
karsten.kroesch(a)swisscom.com
____________________________
Swisscom (Schweiz) AG
Corporate Business Unit
Müllerstrasse 16
8004 Zürich
____________________________
-------8<---------------------------------------
Affected entries, log files and configuration see below:
#
# On the master:
# ldapsearch mail=mthudianplackal(a)[domain-deleted].ch
# extended LDIF
#
# LDAPv3
# base <dc=ip-plus, dc=net> (default) with scope subtree
# filter: mail=mthudianplackal(a)[domain-deleted].ch
# requesting: ALL
#
# mthudianplackal(a)[domain-deleted].ch, [domain-deleted].ch, vsf, ip-plus.net
dn: mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,
dc=net
objectClass: top
objectClass: mailObject
objectClass: amavisAccount
mail: mthudianplackal(a)[domain-deleted].ch
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
# On some of the slaves:
$ ldapsearch mail=mthudianplackal(a)[domain-deleted].ch
# extended LDIF
#
# LDAPv3
# base <dc=ip-plus, dc=net> (default) with scope subtree
# filter: mail=mthudianplackal(a)[domain-deleted].ch
# requesting: ALL
#
# search result
search: 2
result: 0 Success
# numResponses: 1
Log files at the time, the entries were made:
May 16 11:56:20 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:20 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 16 11:56:31 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:31 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 16 11:56:31 v-vsf4 slapd[14302]: [ID 365351 local4.debug] do_syncrep2: rid=000 LDAP_RES_SEARCH_RESULT
# 15 Seconds no action -- unusual on a server with heavy load.
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 977386 local4.debug] syncrepl_entry: rid=000 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 580501 local4.debug] syncrepl_entry: rid=000 inserted UUID a36b3802-525a-1032-9442-17888436c71f
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 565591 local4.debug] syncrepl_entry: rid=000 be_search (0)
May 16 11:56:46 v-vsf4 slapd[14302]: [ID 709484 local4.debug] syncrepl_entry: rid=000 mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 601841 local4.debug] daemon: activity on 1 descriptor
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 300852 local4.debug] daemon: listen=8, new connection on 91
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 368480 local4.debug] daemon: added 91r (active) listener=0
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 848112 local4.debug] conn=35253 fd=91 ACCEPT from IP=192.168.1.4:45922 (IP=0.0.0.0:389)
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 601841 local4.debug] daemon: activity on 1 descriptor
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 609413 local4.debug] daemon: waked
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 601841 local4.debug] daemon: activity on 1 descriptor
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 802679 local4.debug] daemon: activity on:
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 522297 local4.debug] 91r
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 100000 local4.debug]
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 694296 local4.debug] daemon: read activity on 91
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=7 active_threads=0 tvp=zero
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 215403 local4.debug] conn=35253 op=0 BIND dn="" method=128
May 16 11:56:48 v-vsf4 slapd[14302]: [ID 538834 local4.debug] daemon: select: listen=8 active_threads=0 tvp=zero
May 17 08:43:18 v-vsf4 slapd[14302]: [ID 515743 local4.debug] syncrepl_entry: rid=000 be_add mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net (0)
May 17 08:43:34 v-vsf4 slapd[3312]: [ID 709484 local4.debug] syncrepl_entry: rid=000 mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net
May 17 08:43:34 v-vsf4 slapd[3312]: [ID 515743 local4.debug] syncrepl_entry: rid=000 be_add mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net (68)
May 17 08:43:34 v-vsf4 slapd[3312]: [ID 933660 local4.debug] syncrepl_entry: rid=000 be_modify mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net (0)
May 17 08:43:47 v-vsf4 slapd[3312]: [ID 338579 local4.debug] nonpresent_callback: rid=000 nonpresent UUID a36b3802-525a-1032-9442-17888436c71f, dn mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net
May 17 08:43:48 v-vsf4 slapd[3312]: [ID 905397 local4.debug] syncrepl_del_nonpresent: rid=000 be_delete mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net (0)
May 17 10:11:05 v-vsf4 slapd[3312]: [ID 469902 local4.debug] conn=1480 op=1 SRCH base="dc=ip-plus,dc=net" scope=2 deref=0 filter="(mail=mthudianplackal(a)[domain-deleted].ch)"
May 17 10:39:39 v-vsf4 slapd[3312]: [ID 469902 local4.debug] conn=1595 op=1 SRCH base="dc=ip-plus,dc=net" scope=2 deref=0 filter="(mail=mthudianplackal(a)[domain-deleted].ch)"
May 17 10:41:15 v-vsf4 slapd[3312]: [ID 469902 local4.debug] conn=1599 op=1 SRCH base="dc=ip-plus,dc=net" scope=2 deref=0 filter="(mail=mthudianplackal(a)[domain-deleted].ch)"
May 17 10:41:22 v-vsf4 slapd[3312]: [ID 709484 local4.debug] syncrepl_entry: rid=000 mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net
May 17 10:41:22 v-vsf4 slapd[3312]: [ID 515743 local4.debug] syncrepl_entry: rid=000 be_add mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net (0)
May 17 10:41:37 v-vsf4 slapd[3312]: [ID 469902 local4.debug] conn=1601 op=1 SRCH base="dc=ip-plus,dc=net" scope=2 deref=0 filter="(mail=mthudianplackal(a)[domain-deleted].ch)"
May 17 10:41:37 v-vsf4 slapd[3312]: [ID 580335 local4.debug] conn=1601 op=1 ENTRY dn="mail=mthudianplackal(a)[domain-deleted].ch,dc=[domain-deleted].ch,ou=vsf,dc=ip-plus,dc=net"
Master configuration:
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/amavisd-new.schema
include /etc/openldap/schema/ipplus.schema
pidfile /var/run/slapd.pid
argsfile /var/run/slapd.args
# allow ldap protocol v2
allow bind_v2
# debug level
loglevel 256
#######################################################################
# ldbm database definitions
#######################################################################
database bdb
suffix "dc=ip-plus,dc=net"
rootdn "cn=root,dc=ip-plus,dc=net"
# Cleartext passwords, especially for the rootdn, should
# be avoid. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
rootpw swisscom
# The database directory MUST exist prior to running slapd AND
# should only be accessible by the slapd/tools. Mode 700 recommended.
directory /var/openldap-data
# Indices to maintain
index objectclass,entryCSN,entryUUID eq
index dc,cn,mail eq
#######################################################################
# replication
#######################################################################
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
On the slaves, the config looks like:
[ ... same as above, except replication: ]
#######################################################################
# replication
#######################################################################
syncrepl rid=0
provider=ldap://v-ldapmaster-lan:389
type=refreshOnly
interval=00:00:00:15
searchbase="dc=ip-plus,dc=net"
filter="(objectClass=*)"
scope=sub
attrs="*"
bindmethod=simple
binddn="cn=root,dc=ip-plus,dc=net"
credentials=swisscom
schemachecking=off
retry="5 +"
9 years, 8 months