overlay unique
by A. Schulze
Hello,
I have an OpenLDAP master and numerous sync replica servers running. I suspect my master contains mail attributes that aren't unique.
My idea was to build another sync replica with the unique overlay enabled. The 'empty' sync replica would fetch data from the master and complain about values that aren't unique.
I would then discard that replica, correct the master database, and start replication again until it succeeds.
But then there was reality :-/
I placed a deliberately non-unique value in my database, but replication did not fail. The replica contained two DNs with "mail=none-unique(a)example.test".
syncrepl.conf:
moduleload mdb.la
moduleload unique.la
database mdb
suffix ou=test
...
overlay unique
unique_uri ldap:///ou=test?mail?sub?
index ...
limits ...
syncrepl rid=1 provider=ldap://master.example ...
access ...
Q: is this setup wrong?
Q: is replication the right way to enforce uniqueness? Looks like the answer is "no"
Q: what is "the" better way?
Andreas
3 years, 10 months
accesslog database: overflow or data rotation?
by Manuela Mandache
Hi all,
A directory is configured as a delta-syncrepl provider; the backends for the
main database and the accesslog database are both mdb. Everything works
fine. My question is: what happens if there are so many write ops that the
size of the accesslog database reaches the value of olcDbMaxSize defined
for this database? Thanks!
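For context, the relevant parts of the configuration look roughly like this (a
minimal sketch only; the directory path, maximum size, and purge values here
are placeholders, not the real settings):

dn: olcDatabase={2}mdb,cn=config
objectClass: olcMdbConfig
olcDatabase: {2}mdb
olcSuffix: cn=accesslog
olcDbDirectory: /var/lib/ldap/accesslog
olcDbMaxSize: 1073741824
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart eq

dn: olcOverlay={0}accesslog,olcDatabase={1}mdb,cn=config
objectClass: olcAccessLogConfig
olcOverlay: {0}accesslog
olcAccessLogDB: cn=accesslog
olcAccessLogOps: writes
olcAccessLogSuccess: TRUE
olcAccessLogPurge: 07+00:00 01+00:00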
Cheers,
Manuela
3 years, 10 months
Re: Problem with N-Way multimaster replication after an node fail and restore
by David Tello
Hi Quanah,
Thanks for your answer. I improved my configuration with your indications.
Unfortunately the behavior remained the same. Following your suggestions, I
will file an ITS. I will try to script the exact steps I took to reproduce
the current behavior, and I will reuse the logs that I posted here.
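For reference, adding the suggested sessionlog looked roughly like this (a
sketch only; the overlay index is the current {6} position, and the sessionlog
size shown is an assumed value rather than one sized to my database):

dn: olcOverlay={6}syncprov,olcDatabase={1}mdb,cn=config
changetype: modify
add: olcSpSessionlog
olcSpSessionlog: 10000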
Regards,
David
On Thu, May 9, 2019 at 20:16, Quanah Gibson-Mount (<quanah(a)symas.com>)
wrote:
> --On Thursday, May 09, 2019 11:21 AM +0200 David Tello
> <david.tello.wbsgo(a)gmail.com> wrote:
>
> >
> >
> > I have a problem with N-Way multimaster replication. My environment is
> > Debian Stretch with slapd (2.4.47+dfsg-3~bpo9+1) from Stretch
> > backports.
>
> Hi David,
>
> I notice your slapo-syncprov overlays are missing a sessionlog setting.
> It
> is critical that this value be large (approximately the same as the number
> of entries in your database). I'd also note that the order in which
> overlays (particularly syncprov) are instantiated on a database can
> matter.
> It's generally advised that syncprov be the first instantiated overlay (In
> your configuration, it's the 7th overlay on the mdb database with index
> {6}). I.e., it should be the overlay at index {0}.
>
> These changes may or may not address your issue, but I note them as they
> are items that should be addressed in your configuration.
>
> If you are able to reproduce the issue after fixing the above config
> items,
> I would ask that you file an ITS at <http://www.openldap.org/its>. Even
> better is if you're able to script the reproduction case (it seems rather
> straightforward) so we can reproduce it locally to work on a fix.
>
> Regards,
> Quanah
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
>
>
3 years, 10 months
Problem with N-Way multimaster replication after an node fail and restore
by David Tello
I have a problem with N-Way multimaster replication. My environment is
Debian Stretch with slapd (2.4.47+dfsg-3~bpo9+1) from Stretch backports.
I have configured a cluster with 3 nodes, replicating both the config
database and a normal mdb data database. I followed the documentation, and
the cluster syncs correctly when all nodes are started.
If I make a change on any node, the change is replicated to the others. My
problem appears when I make a change while one node is down. In this case,
the other two nodes replicate correctly, and when the downed node starts it
receives all pending changes correctly. And at this point the problem
occurs: after the downed node restarts, changes made on that node are no
longer sent to the other two nodes. On the other hand, changes made on the
other two nodes (the ones that were not shut down) are sent to all nodes
correctly.
I tried this configuration with a two-node cluster and the behavior was the
same: after a restart, slapd does not send any more updates.
I have checked the olcServerID values because I know that a mistake here
can produce similar problems. I really think they are correct.
The command lines of the slapd processes are:
/usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode1/ ldap://127.0.0.1/
ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
/usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode2/ ldap://127.0.0.1/
ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
/usr/sbin/slapd -h ldapi:/// ldap://wbsvisionNode3/ ldap://127.0.0.1/
ldaps:/// -g openldap -u openldap -F /etc/ldap/slapd.d
The config database configuration is below:
dn: cn=config
.....
olcServerID: 1 ldap://wbsvisionNode1/
olcServerID: 2 ldap://wbsvisionNode2/
olcServerID: 3 ldap://wbsvisionNode3/
dn: olcDatabase={0}config,cn=config
.......
olcMirrorMode: TRUE
olcSyncUseSubentry: FALSE
olcSyncrepl:
{0}rid=001
provider=ldap://wbsvisionNode1
bindmethod=simple
binddn="cn=admin,cn=config"
credentials="TuoDqAG7UbCJx8H8gfMO"
starttls=no
tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
tls_key=/etc/pki/private/wbsagnitio.local.es.key
tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
tls_reqcert=never
searchbase="cn=config"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
{1}rid=002
provider=ldap://wbsvisionNode2
bindmethod=simple
binddn="cn=admin,cn=config"
credentials="TuoDqAG7UbCJx8H8gfMO"
starttls=no
tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
tls_key=/etc/pki/private/wbsagnitio.local.es.key
tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
tls_reqcert=never
searchbase="cn=config"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
{2}rid=003
provider=ldap://wbsvisionNode3
bindmethod=simple
binddn="cn=admin,cn=config"
credentials="TuoDqAG7UbCJx8H8gfMO"
starttls=no
tls_cert=/etc/pki/certs/wbsagnitio.local.es.pem
tls_key=/etc/pki/private/wbsagnitio.local.es.key
tls_cacert=/etc/pki/certs/wbsagnitio-ca.pem
tls_reqcert=never
searchbase="cn=config"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
FALSE
------
dn: olcOverlay={0}syncprov,olcDatabase={0}config,cn=config
objectClass: olcSyncProvConfig
objectClass: olcOverlayConfig
olcOverlay: {0}syncprov
olcSpCheckpoint: 1000 1
The data database configuration is below:
dn: olcDatabase={1}mdb,cn=config
objectClass: olcMdbConfig
objectClass: olcDatabaseConfig
olcDatabase: {1}mdb
...
olcDbIndex: objectClass eq
olcDbIndex: entryCSN eq
...
olcLastMod: TRUE
olcSuffix: dc=local,dc=es
.........
olcMirrorMode: TRUE
olcSyncrepl: {0}rid=004
provider=ldap://wbsvisionNode1
bindmethod=simple
binddn="cn=manager,dc=local,dc=es"
credentials="test"
starttls=no
searchbase="dc=local,dc=es"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
olcSyncrepl: {1}rid=005
provider=ldap://wbsvisionNode2
bindmethod=simple
binddn="cn=manager,dc=local,dc=es"
credentials="test"
starttls=no
searchbase="dc=local,dc=es"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
olcSyncrepl: {2}rid=006
provider=ldap://wbsvisionNode3
bindmethod=simple
binddn="cn=manager,dc=local,dc=es"
credentials="test"
starttls=no
searchbase="dc=local,dc=es"
scope=sub
type=refreshAndPersist
retry="5 5 300 +"
olcSyncUseSubentry: FALSE
dn: olcOverlay={0}dynlist,olcDatabase={1}mdb,cn=config
objectClass: olcDynamicList
objectClass: olcOverlayConfig
olcOverlay: {0}dynlist
olcDlAttrSet: {0}virtualGroup memberDnURL uniqueMember
olcDlAttrSet: {1}virtualGroup memberUidURL memberUid:uid
olcDlAttrSet: {2}virtualGroup memberSidURL sambaSIDList:sambaSID
dn: olcOverlay={1}lastbind,olcDatabase={1}mdb,cn=config
objectClass: olcLastBindConfig
objectClass: olcOverlayConfig
olcOverlay: {1}lastbind
olcLastBindPrecision: 30
dn: olcOverlay={2}unique,olcDatabase={1}mdb,cn=config
objectClass: olcUniqueConfig
objectClass: olcOverlayConfig
olcOverlay: {2}unique
olcUniqueAttribute: uid
olcUniqueAttribute: uidNumber
olcUniqueAttribute: employeeNumber
olcUniqueBase: dc=local,dc=es
dn: olcOverlay={3}memberof,olcDatabase={1}mdb,cn=config
objectClass: olcMemberOf
objectClass: olcOverlayConfig
olcOverlay: {3}memberof
olcMemberOfDangling: ignore
olcMemberOfGroupOC: groupOfUniqueNames
olcMemberOfMemberAD: uniqueMember
olcMemberOfMemberOfAD: memberOf
olcMemberOfRefInt: TRUE
dn: olcOverlay={4}accesslog,olcDatabase={1}mdb,cn=config
objectClass: olcAccessLogConfig
objectClass: olcOverlayConfig
olcAccessLogDB: cn=accesslog
olcOverlay: {4}accesslog
olcAccessLogOld: (|(objectClass=wbsagnitioGroup)(objectClass=wbsagnitioAccount))
olcAccessLogOldAttr: roles
olcAccessLogOldAttr: objectClass
olcAccessLogOps: writes
olcAccessLogOps: bind
olcAccessLogPurge: 7+00:00 1+00:00
dn: olcOverlay={5}ppolicy,olcDatabase={1}mdb,cn=config
objectClass: olcPPolicyConfig
objectClass: olcOverlayConfig
olcOverlay: {5}ppolicy
olcPPolicyDefault: cn=0-default,ou=passwordpolicies,ou=configuration,dc=local,dc=es
olcPPolicyForwardUpdates: FALSE
olcPPolicyHashCleartext: FALSE
olcPPolicyUseLockout: FALSE
dn: olcOverlay={6}syncprov,olcDatabase={1}mdb,cn=config
objectClass: olcSyncProvConfig
objectClass: olcOverlayConfig
olcOverlay: {6}syncprov
olcSpCheckpoint: 1000 1
Below I show the sync logs of some operations to illustrate the behavior of
my cluster:
TEST 1: 3 nodes up. Update on node 1 (correct).
Log Node1:
May 7 17:03:22 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7e0100a90 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7e0108b40 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7e0108b40 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=002,
cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7e0100a90 20190507150322.383471Z#000000#001#000000
Log Node2:
May 7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=004
cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_message_to_entry:
rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid 23fff700
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
be_search (0)
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
0x7f80081057c0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_matchops: skipping
original sid 001
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
0x7f8008107700 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
removing 0x7f8008107700 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_matchops: skipping
original sid 001
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=003,
cookie=rid=005,sid=002,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
removing 0x7f80081057c0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: syncrepl_entry: rid=004
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
0x7f8008105ea0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
removing 0x7f8008105ea0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=006
cookie=rid=006,sid=003,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode2 slapd[1263]: do_syncrep2: rid=006 CSN too
old, ignoring 20190507150322.383471Z#000000#001#000000
(cn=Usuarios,ou=grupos,dc=local,dc=es)
Log Node3:
May 7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
cookie=rid=004,sid=001,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid adbb2700
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
be_search (0)
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f428c1045c0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 001
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f428c1069e0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f428c1069e0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 001
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f428c1045c0 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=002,
cookie=rid=006,sid=003,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f428c104c90 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f428c104c90 20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005
cookie=rid=005,sid=002,csn=20190507150322.383471Z#000000#001#000000
May 7 17:03:22 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005 CSN too
old, ignoring 20190507150322.383471Z#000000#001#000000
(cn=Usuarios,ou=grupos,dc=local,dc=es)
TEST 2: 3 nodes up. Update on node 2 (correct); later this will be the node
that goes down.
Log Node1:
May 7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=005
cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_message_to_entry:
rid=005 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid eaffd700
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
be_search (0)
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7d411fdd0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_matchops: skipping
original sid 002
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7d4105fa0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7d4105fa0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_matchops: skipping
original sid 002
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7d411fdd0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncrepl_entry: rid=005
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7d4121830 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
cookie=rid=004,sid=001,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7d4121830 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=006
cookie=rid=006,sid=003,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode1 slapd[14058]: do_syncrep2: rid=006 CSN too
old, ignoring 20190507150531.748816Z#000000#002#000000
(cn=Usuarios,ou=grupos,dc=local,dc=es)
Log Node2:
May 7 17:05:31 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
0x7f8008106ed0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode2 slapd[1263]: slap_queue_csn: queueing
0x7f80081050e0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
removing 0x7f80081050e0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=003,
cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode2 slapd[1263]: slap_graduate_commit_csn:
removing 0x7f8008106ed0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode2 slapd[1263]: syncprov_sendresp: to=001,
cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
Log Node3:
May 7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=005
cookie=rid=005,sid=002,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
rid=005 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid acbb0700
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
be_search (0)
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f4288107310 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 002
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f4288109c60 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f4288109c60 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 002
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f4288107310 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=001,
cookie=rid=006,sid=003,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=005
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f42881079f0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f42881079f0 20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
cookie=rid=004,sid=001,csn=20190507150531.748816Z#000000#002#000000
May 7 17:05:31 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004 CSN too
old, ignoring 20190507150531.748816Z#000000#002#000000
(cn=Usuarios,ou=grupos,dc=local,dc=es)
TEST 3: 2 nodes up. Node 2 is down. Update on node 1.
Log Node1:
May 7 17:10:16 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7dc125e50 20190507151016.178467Z#000000#001#000000
May 7 17:10:16 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7dc126f40 20190507151016.178467Z#000000#001#000000
May 7 17:10:16 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7dc126f40 20190507151016.178467Z#000000#001#000000
May 7 17:10:16 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=002,
cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
May 7 17:10:16 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7dc125e50 20190507151016.178467Z#000000#001#000000
May 7 17:10:16 wbsvisionNode1 slapd[14058]: syncprov_sendresp: to=003,
cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
Log Node3:
May 7 17:10:15 wbsvisionNode3 slapd[1100]: do_syncrep2: rid=004
cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_message_to_entry:
rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY) tid a6ffd700
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
be_search (0)
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f42a0124640 20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 001
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f42a01261a0 20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f42a01261a0 20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_matchops: skipping
original sid 001
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f42a0124640 20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncprov_sendresp: to=002,
cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: syncrepl_entry: rid=004
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f42a0124810 20190507151016.178467Z#000000#001#000000
May 7 17:10:15 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f42a0124810 20190507151016.178467Z#000000#001#000000
TEST 4: 3 nodes up. Logs of the sync produced when Node2 starts. Node2
correctly resyncs the change made in TEST 3 (while it was down).
Log Node1:
May 7 17:13:52 wbsvisionNode1 slapd[14058]: slap_queue_csn: queueing
0x7fc7dc122780 20190507151352.582436Z#000000#001#000000
May 7 17:13:52 wbsvisionNode1 slapd[14058]: slap_graduate_commit_csn:
removing 0x7fc7dc122780 20190507151352.582436Z#000000#001#000000
May 7 17:13:52 wbsvisionNode1 slapd[14058]: syncprov_search_response:
cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000;20190507150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
Log Node2:
May 7 17:13:52 wbsvisionNode2 slapd[591]: slapd starting
May 7 17:13:52 wbsvisionNode2 slapd[435]: Starting OpenLDAP: slapd.
May 7 17:13:52 wbsvisionNode2 systemd[1]: Started LSB: OpenLDAP standalone
server (Lightweight Directory Access Protocol).
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=001
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=003
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
LDAP_RES_INTERMEDIATE - SYNC_ID_SET
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
LDAP_RES_INTERMEDIATE - SYNC_ID_SET
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_message_to_entry:
rid=004 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD) tid f6021700
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
be_search (0)
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_message_to_entry:
rid=006 DN: cn=Usuarios,ou=grupos,dc=local,dc=es, UUID:
d6c282a0-00ff-1039-89ae-b1c1eec8fb13
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD) tid f501f700
May 7 17:13:52 wbsvisionNode2 slapd[591]: dn_callback : entries have
identical CSN cn=Usuarios,ou=grupos,dc=local,dc=es
20190507151016.178467Z#000000#001#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
be_search (0)
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006
cn=Usuarios,ou=grupos,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=006 entry
unchanged, ignored (cn=Usuarios,ou=grupos,dc=local,dc=es)
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
LDAP_RES_INTERMEDIATE - REFRESH_PRESENT
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=006
cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000;20190507150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b883e0-00ff-1039-898f-b1c1eec8fb13, dn dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b88520-00ff-1039-8990-b1c1eec8fb13, dn
ou=personas,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b8e51a-00ff-1039-8991-b1c1eec8fb13, dn
ou=grupos,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b9494c-00ff-1039-8992-b1c1eec8fb13, dn
ou=dominios,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b9b18e-00ff-1039-8993-b1c1eec8fb13, dn ou=dns,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6b9eaaa-00ff-1039-8994-b1c1eec8fb13, dn
ou=forward,ou=dns,dc=local,dc=es
.................................
Similar lines with other entries.
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=006
present UUID d6c282a0-00ff-1039-89ae-b1c1eec8fb13, dn
cn=Usuarios,ou=grupos,dc=local,dc=es
...............................
May 7 17:13:52 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
0x7f63d0123740 20190507151016.178467Z#000000#001#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: syncrepl_entry: rid=004
be_modify cn=Usuarios,ou=grupos,dc=local,dc=es (0)
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
LDAP_RES_INTERMEDIATE - REFRESH_PRESENT
May 7 17:13:52 wbsvisionNode2 slapd[591]: do_syncrep2: rid=004
cookie=rid=004,sid=001,csn=20190507151016.178467Z#000000#001#000000;20190507150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6b883e0-00ff-1039-898f-b1c1eec8fb13, dn dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6b88520-00ff-1039-8990-b1c1eec8fb13, dn
ou=personas,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6b8e51a-00ff-1039-8991-b1c1eec8fb13, dn
ou=grupos,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6b9494c-00ff-1039-8992-b1c1eec8fb13, dn
ou=dominios,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6b9b18e-00ff-1039-8993-b1c1eec8fb13, dn ou=dns,dc=local,dc=es
May 7 17:13:52 wbsvisionNode2 slapd[591]: nonpresent_callback: rid=004
present UUID d6c282a0-00ff-1039-89ae-b1c1eec8fb13, dn
cn=Usuarios,ou=grupos,dc=local,dc=es
.............................
Similar lines with other entries.
May 7 17:13:52 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
0x7f63d0121400 20190507151016.178467Z#000000#001#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
removing 0x7f63d0121400 20190507151016.178467Z#000000#001#000000
May 7 17:13:52 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
removing 0x7f63d0123740 20190507151016.178467Z#000000#001#000000
Log Node3:
May 7 17:13:52 wbsvisionNode3 slapd[1100]: slap_queue_csn: queueing
0x7f428c103b00 20190507151352.329505Z#000000#003#000000
May 7 17:13:52 wbsvisionNode3 slapd[1100]: slap_graduate_commit_csn:
removing 0x7f428c103b00 20190507151352.329505Z#000000#003#000000
May 7 17:13:52 wbsvisionNode3 slapd[1100]: syncprov_search_response:
cookie=rid=006,sid=003,csn=20190507151016.178467Z#000000#001#000000;20190507150531.748816Z#000000#002#000000;20190507150013.014798Z#000000#003#000000
TEST 5: 3 nodes up. A change is made on Node2 after the resync, and it is not
sent to the other nodes in the cluster.
Log Node1:
Empty.
Log Node2:
May 7 17:28:35 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
0x7f63cc121370 20190507152835.873382Z#000000#002#000000
May 7 17:28:35 wbsvisionNode2 slapd[591]: slap_queue_csn: queueing
0x7f63cc125120 20190507152835.873382Z#000000#002#000000
May 7 17:28:35 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
removing 0x7f63cc125120 20190507152835.873382Z#000000#002#000000
May 7 17:28:35 wbsvisionNode2 slapd[591]: slap_graduate_commit_csn:
removing 0x7f63cc121370 20190507152835.873382Z#000000#002#000000
Log Node3:
Empty.
3 years, 10 months
Re: Issue with OpenLDAP as a proxy to multiple Windows DCs backends
by David Sanchez Herrero
Hello Clement,
Thank you for your answer. I tried some of these parameters before with no success. I can't remember exactly which values I tried because I tested them a few weeks ago, so I checked them again with this configuration, and I get the same wrong behaviour as without them:
bind-timeout 1000000 (1 second)
network-timeout 2 (2 seconds)
...
...
#######################################################################
# MDB database definitions
#######################################################################
###Ad Principal
database meta
suffix "dc=ldapproxy-pre,dc=local"
rootdn "cn=manager,dc=ldapproxy-pre,dc=local"
rootpw ??????????????
chase-referrals no
nretries 0
bind-timeout 1000000
network-timeout 2
###################################
#
# LDAP entry for ONE
#
###################################
uri "ldap://1.2.3.1/ou=ONE,ou=Usuarios,dc=ldapproxy-pre,dc=local"
...
...
Regards, David.
3 years, 10 months
slapindex vs reimport debug messages (mdb_index_read: failed (-30798))
by John Holder
Greetings OpenLDAP List!
I have a quick question about the context and meaning of messages that
appear when running slapd in debug mode, and how/whether they indicate an
impact on indexes and performance.
I recently ran into an issue where an attribute was not indexed, so I
applied an index using: # slapindex -F (config) (attr)
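For context, the sequence was roughly as follows (a sketch; the config path,
suffix, and attribute name are placeholders rather than my real values):

# with slapd stopped, rebuild the index for the newly added olcDbIndex attribute
slapindex -F /etc/ldap/slapd.d -b "dc=example,dc=com" mail
# afterwards, make sure the index files are still owned by the slapd user,
# e.g. on Debian-style systems:
chown -R openldap:openldap /var/lib/ldap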
When starting the daemon in debug mode and doing a search, there are many
entries regarding mdb equality and index failure (see below)
However, if I re-import the data using slapadd, and run the same search,
all of those messages go away.
I was hoping someone could explain the meaning of the messages, and why
they are still printed after slapindex was run but not if the olcDbIndex is
added and the LDIF is re-imported.
In particular I'm seeing:
5ccb5ed0 <= mdb_index_read: failed (-30798)
and it seems to then search all attributes within the candidates. But this
doesn't occur if the data is reimported using slapadd.
Thank you!
jh
Debug output from before the re-import, but after slapindex on the attribute was run:
5ccb5ed0 connection_get(13): got connid=1004
5ccb5ed0 connection_read(13): checking for input on id=1004
ber_get_next
ber_get_next: tag 0x30 len 52 contents:
5ccb5ed0 op tag 0x60, time 1556831952 <callto:1556831952>
ber_get_next
5ccb5ed0 conn=1004 op=1 do_bind
ber_scanf fmt ({imt) ber:
ber_scanf fmt (m}) ber:
5ccb5ed0 >>> dnPrettyNormal: <uid=jbh,cn=esx,cn=net>
5ccb5ed0 <<< dnPrettyNormal: <uid=jbh,cn=esx,cn=net>,
<uid=jbh,cn=esx,cn=net>
5ccb5ed0 do_bind: version=3 dn="uid=jbh,cn=admins,cn=esx" method=128
5ccb5ed0 mdb_dn2entry("uid=jbh,cn=admins,cn=esx")
5ccb5ed0 => mdb_dn2id("uid=jbh,cn=admins,cn=esx")
5ccb5ed0 <= mdb_dn2id: got id=0x3
5ccb5ed0 => mdb_entry_decode:
5ccb5ed0 <= mdb_entry_decode
5ccb5ed0 do_bind: v3 bind: "uid=jbh,cn=admins,cn=esx" to
"uid=jbh,cn=admins,cn=esx"
5ccb5ed0 send_ldap_result: conn=1004 op=1 p=3
5ccb5ed0 send_ldap_response: msgid=5 tag=97 err=0
ber_flush2: 14 bytes to sd 13
5ccb5ed0 connection_get(13): got connid=1004
5ccb5ed0 connection_read(13): checking for input on id=1004
ber_get_next
ber_get_next: tag 0x30 len 72 contents:
5ccb5ed0 op tag 0x63, time 1556831952 <callto:1556831952>
ber_get_next
5ccb5ed0 conn=1004 op=2 do_search
ber_scanf fmt ({miiiib) ber:
5ccb5ed0 >>> dnPrettyNormal: <cn=servers,cn=esx>
5ccb5ed0 <<< dnPrettyNormal: <cn=servers,cn=esx>, <cn=servers,cn=esx>
ber_scanf fmt ({mm}) ber:
ber_scanf fmt ({M}}) ber:
5ccb5ed0 ==> limits_get: conn=1004 op=2 self="uid=jbh,cn=admins,cn=esx"
this="cn=servers,cn=esx"
5ccb5ed0 => mdb_search
5ccb5ed0 mdb_dn2entry("cn=servers,cn=esx")
5ccb5ed0 => mdb_dn2id("cn=servers,cn=esx")
5ccb5ed0 <= mdb_dn2id: got id=0xd
5ccb5ed0 => mdb_entry_decode:
5ccb5ed0 <= mdb_entry_decode
5ccb5ed0 search_candidates: base="cn=servers,cn=esx" (0x0000000d) scope=2
5ccb5ed0 => mdb_equality_candidates (objectClass)
5ccb5ed0 => key_read
5ccb5ed0 <= mdb_index_read: failed (-30798)
5ccb5ed0 <= mdb_equality_candidates: id=0, first=0, last=0
5ccb5ed0 => mdb_equality_candidates (cn)
5ccb5ed0 => key_read
5ccb5ed0 <= mdb_index_read 1 candidates
5ccb5ed0 <= mdb_equality_candidates: id=1, first=33, last=33
5ccb5ed0 mdb_search_candidates: id=1 first=33 last=33
5ccb5ed0 => mdb_entry_decode:
5ccb5ed0 <= mdb_entry_decode
5ccb5ed0 => send_search_entry: conn 1004 dn="cn=esxi-master.johnholder.net
,cn=servers,cn=esx"
ber_flush2: 5545 bytes to sd 13esx
5ccb5ed0 <= send_search_entry: conn 1004 exit.
5ccb5ed0 send_ldap_result: conn=1004 op=2 p=3
5ccb5ed0 send_ldap_response: msgid=6 tag=101 err=0
ber_flush2: 14 bytes to sd 13
5ccb5ed0 slap_listener_activate(7):
5ccb5ed0 >>> slap_listener(ldap://host.esx.local:389)
5ccb5ed0 connection_get(14): got connid=1005
5ccb5ed0 connection_read(14): checking for input on id=1005
ber_get_next
ber_get_next: tag 0x30 len 29 contents:
5ccb5ed0 op tag 0x77, time 1556831952 <callto:1556831952>
ber_get_next
5ccb5ed0 conn=1005 op=0 do_extended
ber_scanf fmt ({m) ber:
5ccb5ed0 send_ldap_extended: err=0 oid= len=0
5ccb5ed0 send_ldap_response: msgid=1 tag=120 err=0
ber_flush2: 14 bytes to sd 14
3 years, 10 months
Unix groups information in LDAP server
by JC
I am a complete rookie when it comes to LDAP, so my apologies if what I am about to ask is something obvious.
I have an LDIF file that contains entries like the following:
# someuser, individuals, mydomain.com
dn: uid=someuser,ou=individuals,dc=mydomain,dc=com
uid: someuser
cn: someuser
objectClass: account
objectClass: posixAccount
loginShell: /bin/bash
uidNumber: 1000
gidNumber: 100
homeDirectory: /home/someuser
When used in conjunction with NSS on a Linux box, this allows me to centralize a number of Linux attributes for users - a specific one here called 'someuser'. The next thing I would like to do is store information in the LDAP server about the other groups that someuser belongs to. For example, besides 'users' (GID 100), someuser might belong to 'power' (GID 84), 'mysql' (GID 27), and 'cdrom' (GID 19). Can anybody please point me in the right direction on how to pull this off?
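From what I've gathered so far, supplementary groups are usually represented as
posixGroup entries that list member uids, so I imagine something like the
following (the ou=groups branch and the group DNs are just my guesses):

dn: ou=groups,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=power,ou=groups,dc=mydomain,dc=com
objectClass: posixGroup
cn: power
gidNumber: 84
memberUid: someuser

dn: cn=cdrom,ou=groups,dc=mydomain,dc=com
objectClass: posixGroup
cn: cdrom
gidNumber: 19
memberUid: someuser

Is that the right direction, and do I then just point the NSS group map at that
branch?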
3 years, 10 months
Issue with OpenLDAP as a proxy to multiple Windows DCs backends
by David Sanchez Herrero
Hello all,
I'm having an issue with the configuration of an OpenLDAP server working as a proxy to various Active Directory backends. The OpenLDAP proxy is in our network,
and we have various VPNs connecting it to the remote Windows Domain Controllers (5 remote Domain Controllers of different customers, each one managing its own domain).
To configure the proxy, we use the meta database.
When all the Domain Controllers are up, everything works fine, but when one of them goes down (network problems, a machine reboot, etc.), the web app that uses the OpenLDAP proxy
stops authenticating all users of all domains. The slapd process even hangs, and when you try to stop or restart the service,
it takes a long time to respond. I can't find a way to force a short timeout so that the offline DC is ignored and the users of the other domains can continue working.
The server OS is CentOS Linux release 7.4.1708 (Core), and the OpenLDAP version 2.4.44.
To check whether this is an issue with this old version, I deployed another server with Fedora 30 and OpenLDAP 2.4.47, but the behaviour is the same, so it's probably a configuration problem.
Below is the slapd.conf file I'm using (with private data removed). Any ideas about what to change in the configuration file?
Thanks in advance and best regards, David.
#
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
#
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/corba.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/duaconf.schema
include /etc/openldap/schema/dyngroup.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/java.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/collective.schema
include /etc/openldap/schema/pmi.schema
include /etc/openldap/schema/ppolicy.schema
allow bind_v2
# Define global ACLs to disable default read access.
# Do not enable referrals until AFTER you have a working directory
# service AND an understanding of referrals.
#referral ldap://root.openldap.org
pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args
# Load dynamic backend modules:
modulepath /usr/lib64/openldap
moduleload rwm.la
moduleload back_meta.la
moduleload back_ldap.la
moduleload back_null.la
moduleload back_bdb.la
moduleload back_hdb.la
moduleload back_ldif.la
moduleload back_shell.la
moduleload back_perl.la
loglevel 4095
#######################################################################
# MDB database definitions
#######################################################################
###Ad Principal
database meta
suffix "dc=ldapproxy-pre,dc=local"
rootdn "cn=manager,dc=ldapproxy-pre,dc=local"
rootpw ??????????????
chase-referrals no
nretries 0
###################################
#
# LDAP entry for ONE
#
###################################
uri "ldap://1.2.3.1/ou=ONE,ou=Usuarios,dc=ldapproxy-pre,dc=local"
readonly yes
lastmod off
suffixmassage "ou=ONE,ou=Usuarios,dc=ldapproxy-pre,dc=local" "dc=ONE,dc=local"
idassert-bind bindmethod=simple
binddn="CN=USERONE,OU=Usuarios,DC=ONE,DC=local"
credentials="??????????????"
mode=none
flags=non-prescriptive
idassert-authzFrom "dn.exact:cn=manager,dc=ldapproxy-pre,dc=local"
overlay rwm
rwm-map attribute uid mail
###################################
#
# LDAP entry for TWO
#
###################################
uri "ldap://1.2.3.2/ou=TWO,ou=Usuarios,dc=ldapproxy-pre,dc=local"
readonly yes
lastmod off
suffixmassage "ou=TWO,ou=Usuarios,dc=ldapproxy-pre,dc=local" "ou=TWO,ou=people,ou=users,dc=TWO,dc=local"
idassert-bind bindmethod=simple
binddn="CN=USERTWO,CN=Users,DC=TWO,DC=local"
credentials="????????????"
mode=none
flags=non-prescriptive
idassert-authzFrom "dn.exact:cn=manager,dc=ldapproxy-pre,dc=local"
overlay rwm
###################################
#
# LDAP entry for THREE
#
###################################
uri "ldap://1.2.3.3/ou=THREE,ou=Usuarios,dc=ldapproxy-pre,dc=local"
readonly yes
lastmod off
suffixmassage "ou=THREE,ou=Usuarios,dc=ldapproxy-pre,dc=local" "dc=THREE,dc=red"
idassert-bind bindmethod=simple
binddn="CN=USERTHREE,CN=Users,DC=THREE,DC=red"
credentials="??????????????????????"
mode=none
flags=non-prescriptive
idassert-authzFrom "dn.exact:cn=manager,dc=ldapproxy-pre,dc=local"
overlay rwm
##########################################
#
# LDAP entry for FOUR
#
#########################################
uri "ldap://1.2.3.4/ou=FOUR,ou=Usuarios,dc=ldapproxy-pre,dc=local"
readonly yes
lastmod off
suffixmassage "ou=FOUR,ou=Usuarios,dc=ldapproxy-pre,dc=local" "dc=FOUR,dc=loc"
idassert-bind bindmethod=simple
binddn="CN=USERFOUR,CN=Users,DC=FOUR,DC=loc"
credentials="??????????????????????"
mode=none
flags=non-prescriptive
idassert-authzFrom "dn.exact:cn=manager,dc=ldapproxy-pre,dc=local"
overlay rwm
###################################
#
# LDAP entry for FIVE
#
###################################
uri "ldap://1.2.3.5/ou=FIVE,ou=Usuarios,dc=ldapproxy-pre,dc=local"
readonly yes
lastmod off
suffixmassage "ou=FIVE,ou=Usuarios,dc=ldapproxy-pre,dc=local" "dc=FIVE,dc=local"
idassert-bind bindmethod=simple
binddn="CN=USERFIVE,CN=Users,DC=FIVE,DC=local"
credentials="???????????????????"
mode=none
flags=non-prescriptive
idassert-authzFrom "dn.exact:cn=manager,dc=ldapproxy-pre,dc=local"
overlay rwm
3 years, 10 months
Help
by A. Yuesuen
Hello,
I'm trying to set up LDAP replication between two Ubuntu servers. I tried to
set up the replication using this guide:
https://systemausfall.org/wikis/howto/LDAP-Replikation
Using this command:
>>
ldapmodify -Y EXTERNAL -H ldapi:///
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: {0}rid=2 provider=ldap://192.168.XX.YY
type=refreshOnly
bindmethod=simple
binddn="cn=syncagent,dc=EXAMPLE,dc=COM"
credentials=PASSWORD
interval="00:00:03:00"
retry="30 10 300 +"
timeout=1
tls_reqcert=never
schemachecking=off
searchbase="dc=EXAMPLE,dc=COM"
<<
I got this message: ldapmodify: invalid format (line 5) entry:
'olcDatabase={1}hdb,cn=config'
I have this data in my slapd config directory:
/etc/ldap/slapd.d/cn=config
'cn=module{0}.ldif' 'olcDatabase={0}config.ldif'
'cn=schema' 'olcDatabase={-1}frontend.ldif'
'cn=schema.ldif' 'olcDatabase={1}hdb'
'olcBackend={0}hdb.ldif' 'olcDatabase={1}hdb.ldif'
'olcDatabase={0}config'
Thank you for your help.
Best wishes,
Ajdar Yüsün
3 years, 10 months
OpenLDAP - How to define an additional "uid"-like attribute equivalent to an RDBMS unique key index
by pascal.foulon@orange.com
Hi all
1) The context
My team is working on a corporate directory using OpenLDAP 2.4.38, managing about 200,000 employees.
Each employee is represented using a specific object class named ftperson, derived from the parent object class inetOrgPerson.
All the ftperson objects are stored in a branch named ou=people.
We use the standard uid attribute as the RDN of the ftperson object to identify an employee.
So far, the full DN of an employee is something like: uid=CUID,ou=people,dc=intrannuaire,dc=orange,dc=com
with CUID representing an alphanumeric string.
To be precise: the CUID value is precalculated and provisioned by another corporate identity management system
that checks and ensures the CUID value is unique.
We have a new requirement to add an additional "uid"-like attribute named xuid.
The value of xuid will also be precalculated and provisioned by the corporate identity management system,
which will check and ensure the xuid value is unique.
At first, we chose to simply add a new attribute to the ftperson object structure
using the following attribute definition:
olcAttributeTypes: {76}( ORANGE-AT:77 NAME 'xuid' EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
2) My question
We'd like to harden the xuid management policy on our OpenLDAP server by adding a uniqueness constraint for the xuid attribute, equivalent to an RDBMS unique key index.
I've found and read several LDAP documents, including:
=> uid attribute definition
https://ldapwiki.com/wiki/0.9.2342.19200300.100.1.1
=> extended flags
https://ldapwiki.com/wiki/Extended%20Flags
I've tried several configurations, such as:
- defining the xuid attribute using uid as a parent attribute type
olcAttributeTypes: {76}( ORANGE-AT:77 NAME 'xuid' SUP uid EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{256} SINGLE-VALUE )
- defining the xuid attribute using uid as a parent attribute type with additional extended flags
olcAttributeTypes: {76}( ORANGE-AT:77 NAME 'xuid' SUP uid EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{256} SINGLE-VALUE X-NDS_NAME 'uniqueID' X-NDS_LOWER_BOUND '1' X-NDS_UPPER_BOUND '64' X-NDS_PUBLIC_READ '0' X-NDS_NONREMOVABLE '0' )
When injecting the modified configurations, the OpenLDAP server seems to accept them (no error message).
When we add the xuid attribute to an existing ftperson object, it works.
But the same xuid value can be set on different ftperson objects, so the uniqueness constraint for xuid is not respected. :-(
Any idea that could help to fix this issue?
Regards
Pascal Foulon
3 years, 11 months