Missing contextCSN on LDAP cluster
by Abel FERNANDEZ
Hello,
I have a two-node active-active LDAP cluster with replication established and working properly. The problem is that when I try to check the replication status, no contextCSN is returned on either node.
This is the command executed to get the replication status; run on either node, it should return a contextCSN value, but nothing beyond the entry's dn comes back:
ldapsearch -x -LLL -H ldaps:// -s base -b 'dc=domain,dc=com' contextCSN
dn: dc=domain,dc=com
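For comparison, a node with a populated contextCSN would return something along these lines (the value here is purely illustrative, in the usual timestamp#changecount#serverID#modcount CSN format):
dn: dc=domain,dc=com
contextCSN: 20180312065856.371133Z#000000#001#000000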
This is the replication configuration on node 1 (it is the same on node 2, except for the rid and the hostname):
syncrepl rid=001
  provider=ldaps://HOSTNAME
  bindmethod=simple
  binddn="uid=user,ou=group,dc=domain,dc=com"
  credentials=PASSWORD
  searchbase="dc=domain,dc=com"
  attrs="*,+"
  type=refreshAndPersist
  interval=00:00:00:10
  retry="5 5 300 +"
mirrormode on
These are the values supposed to be indexed, configured in slapd.conf on both servers:
index objectClass,entryCSN,entryUUID eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub
And the synchronisation options (also in slapd.conf):
overlay syncprov
syncprov-checkpoint 50 1
syncprov-sessionlog 50
I'm using the bdb database backend, OpenLDAP 2.4.44 from the LTB project, and CentOS 7 as the OS.
Any clue as to what I'm missing?
Thank you in advance
Best regards
Abel
LMDB random writes really slow for large data
by Chuntao HONG
I am testing LMDB performance with the benchmark given at
http://www.lmdb.tech/bench/ondisk/, and I noticed that LMDB random writes
become really slow once the data grows beyond memory.
I am using a machine with 4GB of DRAM and an Intel PCIe SSD. The key size
is 10 bytes and the value size is 1KB. The benchmark code is the one from
the page above, and the command line I used is:
./db_bench_mdb --benchmarks=fillrandbatch --threads=1 --stats_interval=1024 --num=10000000 --value_size=1000 --use_existing_db=0
For the first 1GB of data written, the average write rate is 140MB/s. The
rate then drops sharply, to 40MB/s over the first 2GB. By the end of the
test, in which 10M values are written, the average rate is just 3MB/s and
the instantaneous rate is 1MB/s. I know LMDB is not optimized for writes,
but I didn't expect it to be this slow, given that I have a really
high-end Intel SSD.
I also noticed that the way LMDB accesses the SSD is strange. At the
beginning of the test it writes to the SSD at around 400MB/s and performs
no reads, which is expected. But as more and more data is written, LMDB
starts to read from the SSD, and as time goes on the read throughput rises
while the write throughput drops significantly. At the end of the test,
LMDB is constantly reading at around 190MB/s, while only occasionally
issuing ~100MB bursts of writes at roughly 10-20 second intervals.
1. Is it normal for LMDB to have such low write throughput (1MB/s at the
end of the test) for data stored on SSD?
2. Why is LMDB reading more data than it writes (about 20MB read per 1MB
written) at the end of the test?
To my understanding, although there is more data than the DRAM can hold,
the branch pages of the B-tree should still fit in DRAM. So for every
write, the only pages that need to be fetched from the SSD are the leaf
pages, and when a leaf page is written its parents may need to be written
as well. There should therefore be more writes than reads, yet LMDB turns
out to read much more than it writes. I suspect this is the reason it is
so slow at the end, but I cannot understand why it happens; a
back-of-the-envelope check of the sizes involved is sketched below.
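As a rough sanity check of the reasoning above, here is a small Python
sketch with my own numbers (not from the benchmark): I assume LMDB's
common 4KB page size and guess a branch fanout of ~100 for 10-byte keys.

PAGE = 4096                            # assumed page size in bytes
entries = 10_000_000
leaf_per_page = 3                      # ~(10B key + 1KB value + header) per entry
fanout = 100                           # guessed branch fanout

leaf_pages = entries / leaf_per_page   # ~3.3M pages
branch_pages = leaf_pages / fanout     # ~33K pages
print("leaf:   %.1f GiB" % (leaf_pages * PAGE / 2**30))    # ~12.7 GiB >> 4GB RAM
print("branch: %.0f MiB" % (branch_pages * PAGE / 2**20))  # ~130 MiB, fits in RAM

So the leaf level alone is roughly 3x larger than RAM while the branch
pages easily fit, which is consistent with expecting leaf-page reads, but
not with a 20:1 read/write ratio.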
For your reference, here is part of the log given by the benchmark:
--------------------------------------------------------
2018/03/12-10:36:30 ... thread 0: (1024,1024) ops and (54584.2,54584.2)
ops/second in (0.018760,0.018760) seconds
2018/03/12-10:36:30 ... thread 0: (1024,2048) ops and (111231.8,73231.8)
ops/second in (0.009206,0.027966) seconds
2018/03/12-10:36:30 ... thread 0: (1024,3072) ops and (125382.6,85019.2)
ops/second in (0.008167,0.036133) seconds
2018/03/12-10:36:30 ... thread 0: (1024,4096) ops and (206202.2,99661.8)
ops/second in (0.004966,0.041099) seconds
2018/03/12-10:36:30 ... thread 0: (1024,5120) ops and (259634.9,113669.2)
ops/second in (0.003944,0.045043) seconds
2018/03/12-10:36:30 ... thread 0: (1024,6144) ops and (306495.1,126984.1)
ops/second in (0.003341,0.048384) seconds
2018/03/12-10:36:30 ... thread 0: (1024,7168) ops and (339185.2,139447.1)
ops/second in (0.003019,0.051403) seconds
2018/03/12-10:36:30 ... thread 0: (1024,8192) ops and (384240.2,151512.9)
ops/second in (0.002665,0.054068) seconds
2018/03/12-10:36:30 ... thread 0: (1024,9216) ops and (385252.1,162465.2)
ops/second in (0.002658,0.056726) seconds
2018/03/12-10:36:30 ... thread 0: (1024,10240) ops and (371553.0,172152.9)
ops/second in (0.002756,0.059482) seconds
...
2018/03/12-10:36:37 ... thread 0: (1024,993280) ops and (70127.4,142518.0)
ops/second in (0.014602,6.969505) seconds
2018/03/12-10:36:37 ... thread 0: (1024,994304) ops and (199415.8,142559.9)
ops/second in (0.005135,6.974640) seconds
2018/03/12-10:36:37 ... thread 0: (1024,995328) ops and (75953.1,142431.4)
ops/second in (0.013482,6.988122) seconds
2018/03/12-10:36:37 ... thread 0: (1024,996352) ops and (200823.7,142474.0)
ops/second in (0.005099,6.993221) seconds
2018/03/12-10:36:37 ... thread 0: (1024,997376) ops and (71975.8,142330.8)
ops/second in (0.014227,7.007448) seconds
2018/03/12-10:36:37 ... thread 0: (1024,998400) ops and (62117.1,142142.6)
ops/second in (0.016485,7.023933) seconds
2018/03/12-10:36:37 ... thread 0: (1024,999424) ops and (36366.2,141720.2)
ops/second in (0.028158,7.052091) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1000448) ops and (61914.3,141533.5)
ops/second in (0.016539,7.068630) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1001472) ops and (60985.1,141342.6)
ops/second in (0.016791,7.085421) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1002496) ops and (60466.5,141149.8)
ops/second in (0.016935,7.102356) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1003520) ops and (60189.3,140956.3)
ops/second in (0.017013,7.119369) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1004544) ops and (61731.4,140772.1)
ops/second in (0.016588,7.135957) seconds
...
2018/03/12-10:40:15 ... thread 0: (1024,3236864) ops and (5620.5,14373.0)
ops/second in (0.182189,225.203790) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3237888) ops and (6098.5,14366.9)
ops/second in (0.167911,225.371701) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3238912) ops and (5469.5,14359.5)
ops/second in (0.187221,225.558922) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3239936) ops and (5593.9,14352.4)
ops/second in (0.183056,225.741978) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3240960) ops and (5806.9,14345.7)
ops/second in (0.176342,225.918320) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3241984) ops and (5332.9,14338.1)
ops/second in (0.192016,226.110336) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3243008) ops and (5532.3,14330.9)
ops/second in (0.185096,226.295432) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3244032) ops and (6108.8,14324.8)
ops/second in (0.167626,226.463058) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3245056) ops and (6074.7,14318.6)
ops/second in (0.168567,226.631625) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3246080) ops and (5615.2,14311.6)
ops/second in (0.182362,226.813987) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3247104) ops and (5529.3,14304.5)
ops/second in (0.185194,226.999181) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3248128) ops and (5846.2,14298.0)
ops/second in (0.175156,227.174337) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3249152) ops and (5741.5,14291.2)
ops/second in (0.178351,227.352688) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3250176) ops and (5640.2,14284.3)
ops/second in (0.181555,227.534243) seconds
...
2018/03/12-11:30:39 ... thread 0: (1024,9988096) ops and (1917.2,3074.3)
ops/second in (0.534112,3248.860552) seconds
2018/03/12-11:30:39 ... thread 0: (1024,9989120) ops and (1858.9,3074.1)
ops/second in (0.550851,3249.411403) seconds
2018/03/12-11:30:40 ... thread 0: (1024,9990144) ops and (1922.8,3073.9)
ops/second in (0.532557,3249.943960) seconds
2018/03/12-11:30:40 ... thread 0: (1024,9991168) ops and (1857.2,3073.7)
ops/second in (0.551382,3250.495342) seconds
2018/03/12-11:30:41 ... thread 0: (1024,9992192) ops and (1851.3,3073.5)
ops/second in (0.553130,3251.048472) seconds
2018/03/12-11:30:41 ... thread 0: (1024,9993216) ops and (1941.0,3073.3)
ops/second in (0.527568,3251.576040) seconds
2018/03/12-11:30:42 ... thread 0: (1024,9994240) ops and (1923.1,3073.2)
ops/second in (0.532461,3252.108501) seconds
2018/03/12-11:30:42 ... thread 0: (1024,9995264) ops and (1987.6,3073.0)
ops/second in (0.515200,3252.623701) seconds
2018/03/12-11:30:43 ... thread 0: (1024,9996288) ops and (1931.2,3072.8)
ops/second in (0.530233,3253.153934) seconds
2018/03/12-11:30:43 ... thread 0: (1024,9997312) ops and (1918.9,3072.6)
ops/second in (0.533633,3253.687567) seconds
2018/03/12-11:30:44 ... thread 0: (1024,9998336) ops and (1999.0,3072.4)
ops/second in (0.512246,3254.199813) seconds
2018/03/12-11:30:44 ... thread 0: (1024,9999360) ops and (1853.3,3072.2)
ops/second in (0.552533,3254.752346) seconds
fillrandbatch : 325.508 micros/op 3072 ops/sec; 3.0 MB/s
--------------------------------------------------------
And here is the read/write rate dumped from iostat:
--------------------------------------------------------
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sdb 73.00 0.12 25.52 0 25
sdb 531.00 0.00 495.21 0 495
sdb 15089.00 0.00 488.77 0 488
sdb 27431.00 0.01 463.55 0 463
sdb 13093.00 0.00 478.77 0 478
sdb 53676.00 0.00 413.79 0 413
sdb 16781.00 0.00 483.60 0 483
sdb 22267.00 0.00 323.32 0 323
sdb 23945.00 0.00 164.55 0 164
sdb 22867.00 0.00 152.25 0 152
sdb 22038.00 0.00 146.39 0 146
sdb 23825.00 0.00 263.61 0 263
...
sdb 20866.00 85.81 76.90 85 76
sdb 7684.00 101.75 115.19 101 115
sdb 3707.00 154.48 0.00 154 0
sdb 4349.00 181.41 0.00 181 0
sdb 4373.00 184.70 0.00 184 0
sdb 4329.00 185.04 0.00 185 0
sdb 4338.00 182.30 0.01 182 0
sdb 4364.00 184.27 0.00 184 0
sdb 5310.00 177.32 4.99 177 4
sdb 32130.00 99.07 119.70 99 119
sdb 27010.00 103.26 99.25 103 99
sdb 11109.00 67.18 99.96 67 99
sdb 3931.00 172.51 0.00 172 0
sdb 4112.00 171.28 0.00 171 0
sdb 4202.00 183.03 0.00 183 0
sdb 4119.00 183.79 0.00 183 0
sdb 4232.00 182.77 0.02 182 0
sdb 4224.00 185.90 0.00 185 0
sdb 4304.00 186.17 0.00 186 0
sdb 4279.00 188.83 0.00 188 0
sdb 4087.00 184.38 0.00 184 0
sdb 7758.00 163.86 16.70 163 16
sdb 21309.00 68.95 80.11 68 80
sdb 21166.00 81.66 78.42 81 78
sdb 19328.00 71.56 71.55 71 71
sdb 20836.00 89.08 76.52 89 76
sdb 3211.00 112.01 82.21 112 82
sdb 3939.00 173.40 0.00 173 0
sdb 3992.00 178.03 0.00 178 0
sdb 4251.00 181.49 0.00 181 0
sdb 4148.00 185.63 0.00 185 0
sdb 4094.00 184.12 0.01 184 0
sdb 4241.00 187.38 0.00 187 0
sdb 4044.00 186.60 0.00 186 0
sdb 4049.00 185.47 0.00 185 0
sdb 4247.00 189.17 0.00 189 0
...
sdb 17457.00 105.45 64.05 105 64
sdb 16736.00 82.12 62.35 82 62
sdb 12074.00 108.76 66.21 108 66
sdb 2232.00 194.44 0.00 194 0
sdb 2171.00 187.27 0.02 187 0
sdb 2322.00 197.91 0.00 197 0
sdb 2311.00 194.65 0.00 194 0
sdb 2240.00 187.93 0.00 187 0
sdb 2189.00 191.38 0.00 191 0
sdb 2266.00 192.33 0.01 192 0
sdb 2312.00 198.95 0.00 198 0
sdb 2310.00 199.84 0.00 199 0
sdb 2350.00 198.83 0.00 198 0
sdb 2275.00 198.31 0.00 198 0
sdb 3952.00 185.05 6.79 185 6
sdb 15842.00 59.89 59.67 59 59
sdb 16676.00 88.24 61.79 88 61
sdb 14768.00 75.94 55.00 75 54
sdb 5677.00 141.71 35.03 141 35
sdb 2135.00 184.78 0.04 184 0
sdb 2301.00 197.18 0.00 197 0
sdb 2334.00 198.81 0.00 198 0
sdb 2304.00 198.83 0.00 198 0
sdb 2348.00 198.67 0.00 198 0
sdb 2352.00 198.42 0.01 198 0
sdb 2373.00 199.32 0.00 199 0
sdb 2363.00 197.55 0.00 197 0
sdb 2289.00 198.71 0.00 198 0
sdb 2246.00 189.31 0.00 189 0
sdb 2357.00 198.64 0.01 198 0
sdb 2338.00 197.96 0.00 197 0
sdb 6292.00 177.60 16.56 177 16
sdb 19374.00 93.72 72.16 93 72
sdb 16873.00 101.38 62.01 101 62
sdb 16960.00 98.99 76.84 98 76
sdb 2299.00 189.32 6.16 189 6
sdb 2285.00 195.82 0.00 195 0
sdb 2346.00 198.25 0.00 198 0
sdb 2325.00 198.91 0.00 198 0
sdb 2353.00 197.72 0.02 197 0
sdb 2320.00 198.82 0.00 198 0
sdb 2327.00 200.05 0.00 200 0
sdb 2340.00 198.35 0.00 198 0
sdb 2322.00 199.29 0.00 199 0
sdb 2316.00 197.43 0.01 197 0
sdb 690.00 51.17 0.00 51 0
--------------------------------------------------------
LDAP Account expiry
by Brad Marshall
Hi all,
I'm running OpenLDAP 2.4.44 in Docker on Ubuntu, and have a requirement to
lock accounts after they've been idle for a certain amount of time.
As I understand it, there's no native way to do this, so I've written a
python script that loops over the accounts and checks the authTimestamp
maintained by the lastbind overlay, which all works fine. To lock an
account I set pwdAccountLockedTime to the current timestamp, which works
well with the ppolicy overlay in place. (A minimal sketch of that kind of
pass is below.)
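This is roughly what such a pass looks like; it is only a sketch, assuming
python-ldap, and the host, bind DN, search base and 90-day cutoff are all
made up:

import ldap
from datetime import datetime, timedelta, timezone

con = ldap.initialize("ldap://localhost")
con.simple_bind_s("cn=admin,dc=example,dc=com", "secret")

cutoff = datetime.now(timezone.utc) - timedelta(days=90)
results = con.search_s("ou=people,dc=example,dc=com", ldap.SCOPE_SUBTREE,
                       "(authTimestamp=*)", ["authTimestamp"])
for dn, attrs in results:
    # authTimestamp is generalized time, e.g. 20180312065856Z
    last = datetime.strptime(attrs["authTimestamp"][0].decode()[:14],
                             "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)
    if last < cutoff:
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%SZ")
        # ppolicy treats the entry as locked as of this timestamp
        con.modify_s(dn, [(ldap.MOD_REPLACE, "pwdAccountLockedTime",
                           [stamp.encode()])])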
The problem arises when we want to unlock the accounts and give the end
users a chance to authenticate, which clears the lock. My understanding
from reading the code was that I could set pwdAccountLockedTime to a
timestamp in the future, and the lock should expire once that time is
reached, giving users a grace period in which to authenticate.
However, when I do this the account still seems locked: authentication
still returns invalid credentials, but when I remove the
pwdAccountLockedTime attribute the same password works. I've tried with
pwdLockoutDuration set to both 0 and a non-zero value, and pwdLockout is
set to TRUE.
I also investigated using pwdEndTime and pwdStartTime as per the "Password
Policy for LDAP Directories" draft, but apparently these aren't
implemented.
Should any of this be working? Am I missing a piece of the puzzle here?
Does anyone have suggestions on how to solve this, either via the approach
I'm trying or an alternative? Please let me know if I've left out any
useful information.
Thanks,
Brad
--
Brad Marshall
brad.marshall(a)gmail.com
SAMBA PDC/BDC LDAP replication issues
by Praveen Ghimire
Hi,
We are having some replication issues between our PDC and BDC LDAP servers. Here are the details.
Servers:
Name: LIN-PDC1.LIN
Role: PDC
SLAPD: openldap-2.4.28
Samba: 3.6.25
Distro: Ubuntu 12.04
Name: LIN-PDC2.LIN
Role: BDC
SLAPD: 2.4.31
Samba: 4.3.11
Distro: Ubuntu 14.04
LDAP Method: cn=config with smbldap tools
Database: HDB
Management: phpLDAPadmin
Replication Method: refreshAndPersist
Replication:
After importing the LDIFs for provider and consumer, we found that on the PDC the olcDatabase={1}hdb config was converted from a file into a directory, the contents of which are below. On the BDC it remained a file.
BDC:
LDAP sync related bits from olcDatabase={1}hdb:
olcSyncrepl: {0}rid=0 provider=ldap://lin-pdc1.lin bindmethod=simple
 binddn="cn=admin,dc=lin" credentials=seceret searchbase="dc=lin"
 logbase="cn=accesslog" logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
 schemachecking=on type=refreshAndPersist retry="60 +" syncdata=accesslog
olcUpdateRef: ldap://lin-pdc1.lin
PDC:
root@lin-pdc1:/etc/ldap/slapd.d/cn=config/olcDatabase={1}hdb# cat olcOverlay\=\{0\}syncprov.ldif
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 59e49836
dn: olcOverlay={0}syncprov
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: {0}syncprov
olcSpNoPresent: TRUE
structuralObjectClass: olcSyncProvConfig
entryUUID: 977916ca-b8a5-1037-9fec-c19e1fce1248
creatorsName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
createTimestamp: 20180310115454Z
entryCSN: 20180310115454.449597Z#000000#000#000000
modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
modifyTimestamp: 20180310115454Z
root@lin-pdc1:/etc/ldap/slapd.d/cn=config/olcDatabase={1}hdb# cat olcOverlay\=\{1\}accesslog.ldif
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 98b496b3
dn: olcOverlay={1}accesslog
objectClass: olcOverlayConfig
objectClass: olcAccessLogConfig
olcOverlay: {1}accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogPurge: 07+00:00 01+00:00
olcAccessLogSuccess: TRUE
structuralObjectClass: olcAccessLogConfig
entryUUID: 97792548-b8a5-1037-9fed-c19e1fce1248
creatorsName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
createTimestamp: 20180310115454Z
entryCSN: 20180310115454.449968Z#000000#000#000000
modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
modifyTimestamp: 20180310115454Z
Results:
- When the sync was first set up, the LDAP data replicated from PDC to BDC.
- The following shows that replication is happening (not sure whether the contextCSN values are meant to differ):
root@lin-pdc2:/tmp/smbldap_files_lin-pdc2/ldifs# ldapsearch -z1 -LLLQY EXTERNAL -H ldapi:/// -s base -b dc=lin contextCSN
dn: dc=lin
contextCSN: 20180312013413.103495Z#000000#000#000000
root@lin-pdc1:/etc/ldap/slapd.d/cn=config/olcDatabase={1}hdb# ldapsearch -z1 -LLLQY EXTERNAL -H ldapi:/// -s base -b dc=lin contextCSN
dn: dc=lin
contextCSN: 20180312065856.371133Z#000000#000#000000
- Replication stopped working after the initial dump. Logs from the PDC and BDC below:
PDC
slapd[25513]: hdb_db_open: warning - no DB_CONFIG file found in directory /var/lib/ldap/accesslog: (2).#012Expect poor performance for suffix "cn=accesslog".
slapd starting
slapd[25513]: findbase failed! 32
BDC
slapd[9799]: do_syncrep2: rid=000 LDAP_RES_SEARCH_RESULT (32) No such object
slapd[9799]: do_syncrep2: rid=000 (32) No such object
slapd[9799]: do_syncrepl: rid=000 rc -2 retrying
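(LDAP result code 32 is noSuchObject, i.e. a search base could not be found. As a sanity check, something along these lines, with the DNs taken from the config above, should confirm whether the searchbase and logbase actually resolve on the provider:)
ldapsearch -x -H ldap://lin-pdc1.lin -D "cn=admin,dc=lin" -W -s base -b "dc=lin" 1.1
ldapsearch -x -H ldap://lin-pdc1.lin -D "cn=admin,dc=lin" -W -s base -b "cn=accesslog" 1.1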
Troubleshooting steps:
- Used IPs instead of hostnames
- Used the samba.ldif schema file from Samba 3 (BDC) for both PDC and BDC, to rule out issues caused by different schema versions
- Confirmed that the cn=admin,dc=lin password is the same on both DCs
Can anyone please advise as to where the issue could be?
Regards,
Praveen Ghimire
new attribute
by Alexander Schwarz
Hello,
I tried to create a new objectclass and a new attribute, to develop
scripts to use against an Active Directory:
objectclass=user
attribute=sAMAccountName
I have a new test.schema:
attributetype ( 1.2.840.113556.1.4.221
NAME 'sAMAccountName'
EQUALITY caseIgnoreMatch
SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
SINGLE-VALUE )
objectclass ( 1.2.840.113556.1.5.9
NAME 'user'
DESC 'a user'
SUP inetOrgPerson STRUCTURAL
MUST ( cn )
MAY ( sAMAccountName ) )
This is included in slapd.conf:
include ./schema/core.schema
include ./schema/cosine.schema
include ./schema/nis.schema
include ./schema/inetorgperson.schema
include ./schema/openldap.schema
include ./schema/pmi.schema
include ./schema/ppolicy.schema
include ./schema/dyngroup.schema
include ./schema/test.schema
I tried to modify a dummy user after restarting slapd.
modify.ldif:
dn: cn=test test,ou=Benutzer,ou=Netzwerk,dc=network,dc=de
changetype: modify
add: sAMAccountName
sAMAccountName: test
I used the ldapmodify tool:
ldapmodify -a -x -D "cn=admin,dc=network,dc=de" -w passwd -H ldap:// -f d:\modify.ldif
modifying entry cn=test test,ou=Benutzer,ou=Netzwerk,dc=network,dc=de
ldap_modify: Object class violation
ldap_modify: additional info: attribute 'sAMAccountName' not allowed
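(For context: sAMAccountName is only a MAY of the 'user' class defined above, so slapd rejects it on any entry whose objectClass values do not include 'user'. A modify that attached the class together with the attribute would look roughly like the following; note, though, that adding a new STRUCTURAL class to an existing entry is itself normally rejected, which is why such extension classes are often declared AUXILIARY instead:)
dn: cn=test test,ou=Benutzer,ou=Netzwerk,dc=network,dc=de
changetype: modify
add: objectClass
objectClass: user
-
add: sAMAccountName
sAMAccountName: test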
Can someone explain to me where the mistake is?
Regards,
Alex
cannot fetch base DNs from replica
by Eugene M. Zheganin
Hi,
previously I have succeeded several times in setting up replication, but
this time I'm stuck. The only difference is that the database is now in
mdb (since bdb is obsolete). ldapsearch is able to fetch all the entries
from both hosts (I've tried the replication with mirrormode and without
it, with the same result), and numResponses is 189 both times. But on the
"replica" (no matter whether mirrormode is on or off), Apache Directory
Studio is not able to browse the tree because it says "cannot fetch base
DNs" (so it only shows the Root DSE), so I'm sure something is wrong. Here
are my configs (I've skipped the accesslog and ppolicy overlays since
their configuration seems not to be the problem):
Host A
====
serverID 1
database mdb
suffix "dc=playkey,dc=net"
rootdn "cn=webmaster,dc=playkey,dc=net"
rootpw {SSHA}XXXXXXXXXXXXXXXXXXXXXXXX
directory /var/db/openldap-data
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
syncrepl rid=001
provider=ldap://foobar1:389
type=refreshAndPersist
interval=00:00:10:00
retry="60 10 300 +"
filter="(objectClass=*)"
searchbase="dc=playkey,dc=net"
attrs="*,+"
schemachecking=off
bindmethod=simple
binddn="uid=proxyagent,ou=accounts,ou=enaza,dc=playkey,dc=net"
credentials=ghjcnbvtyzvjzk.,jdm
mirrormode on
Host B
====
serverID 2
database mdb
suffix "dc=playkey,dc=net"
rootdn "cn=webmaster,dc=playkey,dc=net"
rootpw {SSHA}XXXXXXXXXXXXXXXXXXXXXXXXXX
directory /var/db/openldap-data
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
syncrepl rid=001
provider=ldap://foobar2:389
type=refreshAndPersist
interval=00:00:10:00
retry="60 10 300 +"
filter="(objectClass=*)"
searchbase="dc=playkey,dc=net"
attrs="*,+"
schemachecking=off
bindmethod=simple
binddn="uid=proxyagent,ou=accounts,ou=enaza,dc=playkey,dc=net"
credentials=ghjcnbvtyzvjzk.,jdm
mirrormode on
So, how do I investigate what's wrong with the base DNs on host B?
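(For what it's worth, the "fetch Base DNs" step in Directory Studio essentially reads the namingContexts attribute from the root DSE, so a query along these lines against the replica, hostname chosen here for illustration, reproduces what the tool does:)
ldapsearch -x -H ldap://foobar2:389 -s base -b "" namingContexts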
Thanks.
Eugene.
dynamic config replication
by Gerard Ranke
Hello list,
OpenLDAP 2.4.45 here, on 1 producer and 4 consumers. (I'll attach the
relevant parts of the configuration at the end of this message.)
up a cn=config backend for the consumers. This seems to work nicely at
first: When you start a consumer from a minimal config, it loads the
producers schemafiles and the cn=config, and replication of the main
database is fine. Also, when fi. changing the loglevel on the producers
cn=config,cn=slave, the consumers pick up this change in their cn=config.
However, when I modify an olcAccess line in the producer's
cn=config,cn=slave database, I get these errors on the consumer:
slapd[26324]: syncrepl_message_to_entry: rid=002 DN:
olcDatabase={1}mdb,cn=config,cn=slave, UUID:
7cff5ef6-90b1-1037-9d95-6dfd3149c2dc
slapd[26324]: syncrepl_entry: rid=002 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
slapd[26324]: syncrepl_entry: rid=002 inserted UUID
7cff5ef6-90b1-1037-9d95-6dfd3149c2dc
slapd[26324]: syncrepl_entry: rid=002 be_search (0)
slapd[26324]: syncrepl_entry: rid=002 olcDatabase={1}mdb,cn=config
slapd[26324]: null_callback : error code 0x43
slapd[26324]: syncrepl_entry: rid=002 be_modify
olcDatabase={1}mdb,cn=config (67)
slapd[26324]: syncrepl_entry: rid=002 be_modify failed (67)
slapd[26324]: do_syncrepl: rid=002 rc 67 retrying
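(For reference, the modification in question was an ordinary olcAccess replace on the replicated database entry, along these lines; the ACL value here is made up:)
dn: olcDatabase={1}mdb,cn=config,cn=slave
changetype: modify
replace: olcAccess
olcAccess: {0}to * by * read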
From the error code 0x43 (decimal 67, notAllowedOnRDN), it seems that the
replication is somehow trying to change the RDN, olcDatabase={1}mdb, on
the consumer, which makes no sense to me.
From the producer, cn=config,cn=slave
(this is identical to the consumer's cn=config):
dn: cn=config,cn=slave
objectClass: olcGlobal
objectClass: olcConfig
objectClass: top
cn: slaveconfig
cn: config
olcArgsFile: /var/run/slapd/slapd.args
olcAttributeOptions: lang-
olcAuthzPolicy: none
olcConcurrency: 0
olcConfigDir: slapd.d/
olcConnMaxPending: 100
olcConnMaxPendingAuth: 1000
olcGentleHUP: FALSE
olcIdleTimeout: 0
olcIndexIntLen: 4
olcIndexSubstrAnyLen: 4
olcIndexSubstrAnyStep: 2
olcIndexSubstrIfMaxLen: 4
olcIndexSubstrIfMinLen: 2
olcLocalSSF: 71
olcLogFile: none
olcLogLevel: none
olcPidFile: /var/run/slapd/slapd.pid
olcReadOnly: FALSE
olcSaslSecProps: noplain,noanonymous
olcSizeLimit: 20000
olcSockbufMaxIncoming: 262143
olcSockbufMaxIncomingAuth: 16777215
olcThreads: 16
olcTLSCACertificatePath: /etc/ssl/certs
olcTLSCertificateFile: /etc/ssl/certs/hkuwildcardcacert.cert
olcTLSCertificateKeyFile: /etc/ssl/private/hkuwildcardcacert.key
olcTLSCRLCheck: none
olcTLSVerifyClient: never
olcToolThreads: 2
I'll leave the rest PM, except for:
dn: olcDatabase={0}config,cn=config,cn=slave
objectClass: olcDatabaseConfig
objectClass: olcConfig
objectClass: top
olcDatabase: {0}config
olcRootDN: cn=root,cn=config
olcRootPW: xxxxxxxxxxxxxx
olcSyncrepl: {0}rid=002 provider=ldap://xxx.xx.xx bindmethod=simple
binddn="cn=config,cn=slave" credentials="xxxx"
tls_cert="/etc/ssl/certs/xxx.cert" tls_key="/etc/ssl/private/xxx.key"
tls_cacertdir="/etc/ssl/certs" tls_reqcert=demand tls_crlcheck=none
searchbase="cn=config,cn=slave" schemachecking=off
type=refreshAndPersist retry="5 5 10 +" suffixmassage="cn=config"
olcSyncUseSubentry: FALSE
This is identical to the consumers olcDatabase={0}config,cn=config entry.
Hopefully somebody can point me in the right direction!
Many thanks in advance,
gerard
RE24 testing call (2.4.46) / LMDB RE0.9 testing call (0.9.22)
by Quanah Gibson-Mount
This is expected to be the final testing call for 2.4.46, with an
anticipated release, depending on feedback, during the week of 2018/03/12.
For this release the primary focus was on addressing replication issues
found in OpenLDAP, affecting both syncrepl and delta-syncrepl, whether in
provider/consumer or MMR-based configurations.
Generally, get the code for RE24:
<http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=snapshot;h=refs...>
Configure & build.
Execute the test suite (via make test) after it is built. Optionally, run
cd tests && make its to go through the regression suite.
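(For anyone following along, the usual sequence per the INSTALL document
is roughly:)
./configure
make depend
make
make test              # full test suite
cd tests && make its   # optional: ITS regression suite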
Thanks!
OpenLDAP 2.4.46 Engineering
Fixed libldap connection delete callbacks when TLS fails to start
(ITS#8717)
Fixed libldap to not reuse tls_session if TLS hostname check fails
(ITS#7373)
Fixed libldap cross-compiling with OpenSSL 1.1 (ITS#8687)
Fixed libldap OpenSSL 1.1.1 compatibility with BIO_method (ITS#8791)
Fixed libldap MozNSS CA certificate hash matching (ITS#7374)
Fixed libldap MozNSS with PEM certs when also using an NSS cert db
(ITS#7389)
Fixed libldap MozNSS initialization (ITS#8484)
Fixed libldap GnuTLS with GNUTLS_E_AGAIN (ITS#8650)
Fixed libldap memory leak with cancel operations (ITS#8782)
Fixed slapd Eventlog registry key creation on 64-bit Windows (ITS#8705)
Fixed slapd to maintain SSF across SASL binds (ITS#8796)
Fixed slapd syncrepl deadlock when updating cookie (ITS#8752)
Fixed slapd syncrepl callback to always be last in the stack (ITS#8752)
Fixed slapd telephoneNumberNormalize when the value is spaces and
hyphens (ITS#8778)
Fixed slapd CSN queue processing (ITS#8801)
Fixed slapd-ldap TLS connection timeout with high latency connections
(ITS#8720)
Fixed slapd-ldap to ignore unknown schema when omit-unknown-schema is
set (ITS#7520)
Fixed slapd-mdb with an optimization for long lived read transactions
(ITS#8226)
Fixed slapd-meta assert when olcDbRewrite is modified (ITS#8404)
Fixed slapd-sock with LDAP_MOD_INCREMENT operations (ITS#8692)
Fixed slapo-accesslog cleanup to only occur on failed operations
(ITS#8752)
Fixed slapo-dds entryTTL to actually decrease as per RFC 2589 (ITS#7100)
Fixed slapo-syncprov memory leak with delete operations (ITS#8690)
Fixed slapo-syncprov to not clear pending operation when checkpointing
(ITS#8444)
Fixed slapo-syncprov to correctly record contextCSN values in the
accesslog (ITS#8100)
Fixed slapo-syncprov not to log checkpoints to accesslog db (ITS#8607)
Fixed slapo-syncprov to process changes from this SID on REFRESH
(ITS#8800)
Fixed slapo-syncprov session log parsing to not block other operations
(ITS#8486)
Build Environment
Fixed Windows build with newer MINGW version (ITS#8697)
Fixed compiler warnings and removed unused variables (ITS#8578)
Contrib
Fixed ldapc++ Control structure (ITS#8583)
Documentation
Delete stub manpage for back-ldbm (ITS#8713)
Fixed ldap_bind(3) to mention the LDAP_SASL_SIMPLE mechanism
(ITS#8121)
Fixed slapd-config(5) typo for olcTLSCipherSuite (ITS#8715)
Fixed slapo-syncprov(5) indexing requirements (ITS#5048)
LMDB 0.9.22 Engineering
Fix regression with new db from 0.9.19 (ITS#8760)
Fix liblmdb to build on Solaris (ITS#8612)
Fix delete behavior with DUPSORT DB (ITS#8622)
Fix mdb_cursor_get/mdb_cursor_del behavior (ITS#8722)
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>