Hi all, I'm testing multi-master replication between (at least 2) OpenLDAP nodes (2.4.45, on Ubuntu 18.04) and facing a problem with the replication account.
I set up the configuration for node1 and node2 (see below) and an rpuser account for replication (with the same hashed password on both nodes). I can connect to node1 and node2 with the rpuser account:

ldapsearch -H ldap://node1-vpn -W -D "uid=rpuser,dc=foo,dc=bar" -b "dc=foo,dc=bar"

Then I add a group or a user to one node to test replication:

ldapadd -H ldap://node1-vpn -W -D "cn=admin,dc=foo,dc=bar" -f /tmp/openldap/rep_test_groupadd.ldif
and rep_test_groupadd.ldif:
dn: cn=testgroup,dc=foo,dc=bar
objectClass: top
objectClass: posixGroup
cn: testgroup
gidNumber: 456
The new group or user is replicated to the other node, but then rpuser's password no longer works on that node. I can't connect any more with:

ldapsearch -H ldap://node2-vpn -W -D "uid=rpuser,dc=foo,dc=bar" -b "dc=foo,dc=bar"

and I get error messages for replication in /var/log/syslog:

slap_client_connect: URI=ldap://node2-vpn DN="uid=rpuser,dc=foo,dc=bar" ldap_sasl_bind_s failed (49)
rpuser's password is still valid on node1.
Any idea what could cause this problem? Thanks
Vincent
# config
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/slapd/slapd.args
olcDisallows: bind_anon
olcLogLevel: any
olcPidFile: /var/run/slapd/slapd.pid
olcRequires: authc
olcToolThreads: 1
olcServerID: 0 ldap:///
olcServerID: 1 ldap://node1-vpn
olcServerID: 2 ldap://node2-vpn

# module{0}, config
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_mdb

# module{1}, config
dn: cn=module{1},cn=config
objectClass: olcModuleList
cn: module{1}
olcModuleLoad: {0}syncprov.la

# {0}mdb, config
dn: olcBackend={0}mdb,cn=config
objectClass: olcBackendConfig
olcBackend: {0}mdb

# {-1}frontend, config
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break
olcAccess: {1}to dn.exact="" by * read
olcAccess: {2}to dn.base="cn=Subschema" by * read
olcSizeLimit: 500

# {0}config, config
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break

# {1}mdb, config
dn: olcDatabase={1}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {1}mdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=nodomain
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to attrs=shadowLastChange by self write by * read
olcAccess: {2}to * by * read
olcLastMod: TRUE
olcRequires: authc
olcRootDN: cn=admin,dc=nodomain
olcRootPW: {SSHA}HdZbPd66TxCjeYEIAASbAQTnvFh3GOTw
olcDbCheckpoint: 512 30
olcDbIndex: objectClass eq
olcDbIndex: cn,uid eq
olcDbIndex: uidNumber,gidNumber eq
olcDbIndex: member,memberUid eq
olcDbMaxSize: 1073741824

# {2}mdb, config
dn: olcDatabase={2}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /var/lab/ldap
olcSuffix: dc=foo,dc=bar
olcAccess: {0}to attrs=userPassword by self =xw by anonymous auth by * none
olcAccess: {1}to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
olcAccess: {2}to * by dn="uid=rpuser,dc=foo,dc=bar" read
olcAccess: {3}to * by dn="uid=rpuser,dc=foo,dc=bar" write
olcLastMod: TRUE
olcLimits: {0}dn.exact="uid=rpuser,dc=foo,dc=bar" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
olcRequires: authc
olcRootDN: cn=admin,dc=foo,dc=bar
olcRootPW: {SSHA}zL8CSrnkBacsebLUsJ+dzva6eQ7xcyZJ
olcSyncrepl: {0}rid=101 provider=ldap://node1-vpn binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd searchbase="dc=foo,dc=bar" type=refreshOnly interval=00:00:00:20 retry="5 10 20 10" timeout=1
olcSyncrepl: {1}rid=102 provider=ldap://node2-vpn binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd searchbase="dc=foo,dc=bar" type=refreshOnly interval=00:00:00:20 retry="5 10 20 10" timeout=1
olcMirrorMode: TRUE
olcDbCheckpoint: 512 30
olcDbIndex: objectClass eq
olcDbIndex: entryUUID eq
olcDbIndex: entryCSN eq
olcDbMaxSize: 1073741824

# {0}syncprov, {2}mdb, config
dn: olcOverlay={0}syncprov,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: {0}syncprov
On 08.01.20 at 16:16, Vincent Ducot wrote:
Hi all, I'm testing multi-master replication between (at least 2) OpenLDAP nodes (2.4.45, on Ubuntu 18.04) and facing a problem with the replication account.
At some point I decided to create a separate database to hold the replication account.
slapd.conf:

database ldif
directory /empty
suffix "dc=syncrepl"
access to dn.base="dc=syncrepl" by * auth
rootdn "dc=syncrepl"
rootpw "{PLAIN}secret"
This account exists purely by configuration, even on an "empty" syncrepl consumer, and is allowed to read/write the database to be replicated. It will not be replicated itself and avoids the issue you describe. N-way replication can start from zero.
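In cn=config terms the same database would look roughly like this (untested sketch; it assumes the back_ldif module is available and that {3} is the next free database index):

dn: olcDatabase={3}ldif,cn=config
objectClass: olcDatabaseConfig
objectClass: olcLdifConfig
olcDatabase: {3}ldif
olcDbDirectory: /empty
olcSuffix: dc=syncrepl
olcAccess: {0}to dn.base="dc=syncrepl" by * auth
olcRootDN: dc=syncrepl
olcRootPW: {PLAIN}secret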
If this should be insecure, I hope somebody will correct me (and the archive).
Andreas
--On Wednesday, January 8, 2020 4:16 PM +0100 Vincent Ducot vincent.ducot@rubycat.eu wrote:
Hi all, I'm testing multi-master replication between (at least 2) OpenLDAP nodes (2.4.45, on Ubuntu 18.04) and facing a problem with the replication account.
Any idea what could cause this problem?
# {1}mdb, config
dn: olcDatabase={1}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {1}mdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=nodomain
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to attrs=shadowLastChange by self write by * read
olcAccess: {2}to * by * read

# {2}mdb, config
dn: olcDatabase={2}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /var/lab/ldap
olcSuffix: dc=foo,dc=bar
olcAccess: {0}to attrs=userPassword by self =xw by anonymous auth by * none
olcAccess: {1}to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
olcAccess: {2}to * by dn="uid=rpuser,dc=foo,dc=bar" read
olcAccess: {3}to * by dn="uid=rpuser,dc=foo,dc=bar" write
I see multiple problems with your configuration.
a) You have two different databases storing their DBs in the same location (/var/lib/ldap). I can't even imagine the havoc and destruction that would cause.
b) Your ACLs are broken. The "rpuser" account has no ability to replicate userPassword, since it can't read it. Also, ACLs #2 and #3 here will never be evaluated, since everything is already covered by ACL #1 ("by users read"). Since it can't replicate userPassword, that value is getting lost from server #2, explaining why you can't bind to it after replication starts.
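You can see what the replication identity is actually able to read by binding as rpuser and explicitly requesting userPassword, e.g.:

ldapsearch -H ldap://node1-vpn -W -D "uid=rpuser,dc=foo,dc=bar" \
  -b "dc=foo,dc=bar" "(objectClass=*)" userPassword

With your current ACL #0 ("by self =xw ... by * none") no userPassword values will be returned.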
Regards, Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
http://www.symas.com
Hi,
thanks for your answer.
a) It's not the same location, it's /var/lib and /var/lab (yeah, tricky)
b) I tested several possibilities but I didn't manage to make it work. Either the problem stayed the same, or replication stopped working, or I couldn't access rpuser at all.
I understand that:
- rpuser should have read/write access to its password (to attrs=userPassword by dn="uid=rpuser,dc=foo,dc=bar" write)
- rpuser should have read/write access to all data (to * by dn="uid=rpuser,dc=foo,dc=bar" write)
- other users should have read access to their password (I don't want them to be able to change it themselves) and anonymous should be able to authenticate (to attrs=userPassword by self read by anonymous auth by * none)
Am I right?
Regards, Vincent
--On Friday, January 10, 2020 5:48 PM +0100 Vincent Ducot vincent.ducot@rubycat.eu wrote:
a) It's not the same location, it's /var/lib and /var/lab (yeah, tricky)
Ah, missed that.
b) I tested several possibilities but I didn't manage to make it work. Either the problem stayed the same, or replication stopped working, or I couldn't access rpuser at all.

I understand that:
- rpuser should have read/write access to its password (to attrs=userPassword by dn="uid=rpuser,dc=foo,dc=bar" write)
- rpuser should have read/write access to all data (to * by dn="uid=rpuser,dc=foo,dc=bar" write)
Sure, but ACLs stop processing at the first matching rule. Please review the slapd.access(5) man page. Your ACLs for the rpuser are never evaluated since prior rules prevent them from being reached.
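For example, with your rules:

olcAccess: {0}to attrs=userPassword by self =xw by anonymous auth by * none
olcAccess: {2}to * by dn="uid=rpuser,dc=foo,dc=bar" read

a read of userPassword by rpuser is decided entirely by rule {0}: the target matches, no "by" clause grants rpuser read ("self =xw" gives only auth and write permission on its own entry), so the final "by * none" applies and rule {2} is never consulted.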
--Quanah
Hi,
yes, I understand the processing order. So something like this should work, right?
olcAccess: to attrs=userPassword by anonymous auth
olcAccess: to * by dn="uid=rpuser,dc=foo,dc=bar" write
olcAccess: to attrs=userPassword by self write by * none
olcAccess: to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
Actually, after an object is replicated, rpuser is deleted (along with other objects in the same tree). Any idea why?
In the log I get:

Jan 13 16:26:33 node5 slapd[9976]: => access_allowed: delete access to "dc=foo,dc=bar" "children" requested
Jan 13 16:26:33 node5 slapd[9976]: <= root access granted
Jan 13 16:26:33 node5 slapd[9976]: => access_allowed: delete access granted by manage(=mwrscxd)
Jan 13 16:26:33 node5 slapd[9976]: => access_allowed: delete access to "uid=rpuser,dc=foo,dc=bar" "entry" requested
Jan 13 16:26:33 node5 slapd[9976]: <= root access granted
Jan 13 16:26:33 node5 slapd[9976]: => access_allowed: delete access granted by manage(=mwrscxd)
Jan 13 16:26:33 node5 slapd[9976]: => index_entry_del( 7, "uid=rpuser,dc=foo,dc=bar" )
Jan 13 16:26:33 node5 slapd[9976]: <= index_entry_del( 7, "uid=rpuser,dc=foo,dc=bar" ) success
Jan 13 16:26:33 node5 slapd[9976]: mdb_delete: deleted id=00000007 dn="uid=rpuser,dc=foo,dc=bar"
Thanks
--On Monday, January 13, 2020 4:53 PM +0100 Vincent Ducot vincent.ducot@rubycat.eu wrote:
Hi,
yes, I understand the processing order. So something like this should work, right?
No. All access to userPassword is stopped by your very first ACL; no further ACLs for it will apply, as I already stated. Again, ACL processing STOPS at the FIRST matching rule. Additionally, a replication user only needs read access to read data off the master. It does not need explicit write access to its local db.
olcAccess: to attrs=userPassword by anonymous auth
olcAccess: to * by dn="uid=rpuser,dc=foo,dc=bar" write
olcAccess: to attrs=userPassword by self write by * none
olcAccess: to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
So in the above, any and all access to userPassword STOPS at the "by anonymous auth" clause. Any other type of request for access to userPassword will be denied.
You most likely want something more like:
olcAccess: to attrs=userPassword by anonymous auth by self write by dn.exact="uid=rpuser,dc=foo,dc=bar" read
olcAccess: to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
This appears to encapsulate the permissions you're trying to set up in the above.
Note that a "user" is *any* identity that succesfully authenticated to the LDAP server, so the "rpuser" is already covered in the "to *" access line by the rule "by users read".
--Quanah
Ok, I thought the rule matched only if a "by" clause also matched. Thanks for clearing that up.
I applied the olcAccess rules you proposed.
I still have the problem of deletion of the "dc=foo,dc=bar" tree on node2, for example when I add a user on node1. Any idea why?
Thanks,
Regards,
Vincent
--On Monday, January 13, 2020 6:32 PM +0100 Vincent Ducot vincent.ducot@rubycat.eu wrote:
Ok, I thought the rule matched only if a "by" clause also matched. Thanks for clearing that up.

I applied the olcAccess rules you proposed.

I still have the problem of deletion of the "dc=foo,dc=bar" tree on node2, for example when I add a user on node1. Any idea why?
Not off the top of my head. Without full configs for both servers or an understanding of the state of the replicated databases on each server, it would all be random speculation.
--Quanah
Quanah Gibson-Mount quanah@symas.com wrote on 13.01.2020 at 20:31 in message <1FB1DDD3A574DC1DA2F2F517@[192.168.1.144]>:
Not off the top of my head. Without full configs for both servers or an understanding of the state of the replicated databases on each server, it would all be random speculation.
I'd recommend "sync" logging to see what's really going on.
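For example, something like this should switch it on (your current "olcLogLevel: any" also includes it, but is extremely noisy):

dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: sync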
Hi,
You can find below my full config.
To be more precise, my problem is:

- I add a user on node1; it is replicated to node2.
- I add a second user (or group) on node2; it is not replicated to the other node.

In the logs, I get:
Jan 15 16:11:21 node2 slapd[2465]: do_syncrep2: rid=102 LDAP_RES_SEARCH_RESULT
Jan 15 16:11:22 node2 slapd[2465]: do_syncrep2: rid=101 LDAP_RES_INTERMEDIATE - SYNC_ID_SET
Jan 15 16:11:22 node2 slapd[2465]: do_syncrep2: rid=101 LDAP_RES_SEARCH_RESULT
Jan 15 16:11:22 node2 slapd[2465]: do_syncrep2: rid=101 cookie=rid=101,csn=20200115102817.516155Z#000000#000#000000
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 90915624-c578-1039-97ac-bb4be13c2c82, dn dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 90952132-c578-1039-8aef-6f411f63000a, dn cn=admin,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 909a0760-c578-1039-8af0-6f411f63000a, dn ou=people,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 909b4666-c578-1039-8af1-6f411f63000a, dn ou=groups,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 9a1f5e84-c578-1039-918d-7129ec86f31a, dn uid=appadmin,ou=people,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 9a48db24-c578-1039-918e-7129ec86f31a, dn cn=admins-for-app,ou=groups,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: nonpresent_callback: rid=101 present UUID 3032f6b0-cbcd-1039-952e-fb0cd8c5af02, dn uid=testuser,dc=foo,dc=bar
Jan 15 16:11:22 node2 slapd[2465]: slap_queue_csn: queueing 0x7f4628103420 20200115102817.516155Z#000000#000#000000
Jan 15 16:11:22 node2 slapd[2465]: slap_graduate_commit_csn: removing 0x7f4628103420 20200115102817.516155Z#000000#000#000000
What does "nonpresent_callback" mean?
I also tested with the replication user in a separate database, as suggested on this mailing list, but the result is the same.
Regards,
Vincent
# config
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/slapd/slapd.args
olcDisallows: bind_anon
olcLogLevel: any
olcPidFile: /var/run/slapd/slapd.pid
olcRequires: authc
olcToolThreads: 1
olcServerID: 0 ldap:///
olcServerID: 1 ldap://node1-vpn
olcServerID: 2 ldap://node2-vpn

# module{0}, config
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_mdb

# module{1}, config
dn: cn=module{1},cn=config
objectClass: olcModuleList
cn: module{1}
olcModuleLoad: {0}syncprov.la

# {0}mdb, config
dn: olcBackend={0}mdb,cn=config
objectClass: olcBackendConfig
olcBackend: {0}mdb

# {-1}frontend, config
dn: olcDatabase={-1}frontend,cn=config
objectClass: olcDatabaseConfig
objectClass: olcFrontendConfig
olcDatabase: {-1}frontend
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break
olcAccess: {1}to dn.exact="" by * read
olcAccess: {2}to dn.base="cn=Subschema" by * read
olcSizeLimit: 500

# {0}config, config
dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcAccess: {0}to * by dn.exact=gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth manage by * break

# {1}mdb, config
dn: olcDatabase={1}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {1}mdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=nodomain
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
olcAccess: {1}to attrs=shadowLastChange by self write by * read
olcAccess: {2}to * by * read
olcLastMod: TRUE
olcRequires: authc
olcRootDN: cn=admin,dc=nodomain
olcRootPW: {SSHA}HdZbPd66TxCjeYEIAASbAQTnvFh3GOTw
olcDbCheckpoint: 512 30
olcDbIndex: objectClass eq
olcDbIndex: cn,uid eq
olcDbIndex: uidNumber,gidNumber eq
olcDbIndex: member,memberUid eq
olcDbMaxSize: 1073741824

# {2}mdb, config
dn: olcDatabase={2}mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcDbDirectory: /var/foobar/ldap
olcSuffix: dc=foo,dc=bar
olcAccess: {0}to attrs=userPassword by anonymous auth by self write by dn.exact="cn=rpuser,dc=foo,dc=bar" read
olcAccess: {1}to * by dn="cn=admin,dc=foo,dc=bar" write by self write by users read by * none
olcLastMod: TRUE
olcLimits: {0}dn.exact="uid=rpuser,dc=foo,dc=bar" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
olcRequires: authc
olcRootDN: cn=admin,dc=foo,dc=bar
olcRootPW: {SSHA}zL8CSrnkBacsebLUsJ+dzva6eQ7xcyZJ
olcSyncrepl: {0}rid=101 provider=ldap://node1-vpn binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd searchbase="dc=foo,dc=bar" type=refreshOnly interval=00:00:00:20 retry="5 10 20 10" timeout=1
olcSyncrepl: {1}rid=102 provider=ldap://node2-vpn binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd searchbase="dc=foo,dc=bar" type=refreshOnly interval=00:00:00:20 retry="5 10 20 10" timeout=1
olcMirrorMode: TRUE
olcDbCheckpoint: 512 30
olcDbIndex: objectClass eq
olcDbIndex: entryUUID eq
olcDbIndex: entryCSN eq
olcDbMaxSize: 1073741824

# {0}syncprov, {2}mdb, config
dn: olcOverlay={0}syncprov,olcDatabase={2}mdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: {0}syncprov
Okay, I changed the olcSyncrepl type to refreshAndPersist and removed the interval settings.
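The directives now look roughly like this (other parameters unchanged):

olcSyncrepl: {0}rid=101 provider=ldap://node1-vpn
  binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd
  searchbase="dc=foo,dc=bar" type=refreshAndPersist
  retry="5 10 20 10" timeout=1
olcSyncrepl: {1}rid=102 provider=ldap://node2-vpn
  binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd
  searchbase="dc=foo,dc=bar" type=refreshAndPersist
  retry="5 10 20 10" timeout=1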
It seems to work now, although I don't really understand why.
Thanks for your help with the ACLs.
Regards,
Vincent
--On Thursday, January 16, 2020 3:51 PM +0100 Vincent Ducot vincent.ducot@rubycat.eu wrote:
Okay, I changed the olcSyncrepl type to refreshAndPersist and removed the interval settings.
Hi Vincent,
I would additionally strongly advise you to update your configuration to use delta-syncrepl rather than standard syncrepl. I've never had any luck with refreshOnly, although in theory it's supposed to work.
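On the consumer side, delta-syncrepl mostly means adding the syncdata/logbase/logfilter parameters; a rough sketch (this assumes an accesslog database at cn=accesslog with the accesslog overlay configured on each provider; see the Admin Guide for the full setup):

olcSyncrepl: {0}rid=101 provider=ldap://node1-vpn
  binddn="uid=rpuser,dc=foo,dc=bar" bindmethod=simple credentials=rppwd
  searchbase="dc=foo,dc=bar" type=refreshAndPersist
  retry="5 10 20 10"
  syncdata=accesslog
  logbase="cn=accesslog"
  logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"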
Regards, Quanah