OK, when I configure it and capture the logs, one of my systems shows:
[ldap@dv1nitle05-ldap1]$ daemon: select: listen=7 active_threads=0 tvp=zero
do_syncrep2: rid=011 got search entry without Sync State control
do_syncrepl: rid=011 retrying (3 retries left)
daemon: activity on 1 descriptor
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
daemon: select: listen=7 active_threads=0 tvp=zero
do_syncrep2: rid=011 got search entry without Sync State control
do_syncrepl: rid=011 retrying (2 retries left)
daemon: activity on 1 descriptor
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
daemon: select: listen=7 active_threads=0 tvp=zero
do_syncrep2: rid=011 got search entry without Sync State control
do_syncrepl: rid=011 retrying (1 retries left)
daemon: activity on 1 descriptor
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
daemon: select: listen=7 active_threads=0 tvp=zero
do_syncrep2: rid=011 got search entry without Sync State control
do_syncrepl: rid=011 retrying
daemon: activity on 1 descriptor
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
daemon: select: listen=7 active_threads=0 tvp=zero
do_syncrep2: rid=011 got search entry without Sync State control
do_syncrepl: rid=011 retrying (4 retries left)
daemon: activity on 1 descriptor
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
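For what it's worth, "got search entry without Sync State control" usually means the provider answered the sync search as a plain search, which typically happens when the syncprov overlay isn't active on the database being pulled from. A quick check against the provider (hostname and bind DN below are placeholders, not values from this thread):

```shell
# Placeholder host and rootdn -- substitute your own.
# An empty result means no syncprov overlay is configured anywhere
# under cn=config, so sync searches get plain entries back.
ldapsearch -x -H ldap://provider.example.org \
  -D "cn=manager,cn=config" -W \
  -b "cn=config" "(olcOverlay=syncprov)" dn
```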
The other one is trying to do the sync, with logs like:
=> access_allowed: read access to "cn=manager,dc=nitle,dc=org" "hasSubordinates" requested
=> dn: [1]
=> dn: [2] cn=subschema
=> dn: [3] cn=log
=> dnpat: [4] ^([^,]*,)?ou=[^,]+,(dc=[^,]+(,dc=[^,]+)*)$ nsub: 3
=> acl_get: [5] attr hasSubordinates
=> slap_access_allowed: result not in cache (hasSubordinates)
=> acl_mask: access to entry "cn=manager,dc=nitle,dc=org", attr "hasSubordinates" requested
=> acl_mask: to value by "cn=manager,cn=config", (=0)
<= check a_dn_pat: cn=replslave,ou=replication,dc=nitle,dc=org
<= check a_dn_pat: cn=mirrormode,ou=replication,dc=nitle,dc=org
<= check a_dn_pat: uid=ldaprw,ou=staff,dc=nitle,dc=org
<= check a_dn_pat: uid=ldapro,ou=staff,dc=nitle,dc=org
<= check a_dn_pat: cn=manager,cn=config
<= acl_mask: [5] applying write(=wrscxd) (stop)
<= acl_mask: [5] mask: write(=wrscxd)
=> slap_access_allowed: read access granted by write(=wrscxd)
=> access_allowed: read access granted by write(=wrscxd)
daemon: activity on 1 descriptor
daemon: activity on: 11r
daemon: read activity on 11
daemon: select: listen=7 active_threads=0 tvp=zero
connection_read(11): input error=-2 id=5, closing.
daemon: activity on 1 descriptor
daemon: removing 11
daemon: waked
daemon: select: listen=7 active_threads=0 tvp=zero
It appears I'm also getting bind failures as the cn=manager,cn=config
user I have. I don't always get the errors, though: I get
successful binds as that user (that's how I gathered the info above),
but after it's been running for a short while it starts getting err 49
(auth err).
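A note on err 49: that's LDAP invalidCredentials. One thing worth ruling out, since cn=config is itself being replicated here, is a peer's configuration overwriting the local olcRootPW so that the password the consumer binds with no longer matches. A quick bind test (host is a placeholder):

```shell
# Placeholder host -- substitute your own. ldapwhoami prints the bound
# DN on success and "ldap_bind: Invalid credentials (49)" on failure.
ldapwhoami -x -H ldap://localhost \
  -D "cn=manager,cn=config" -W
```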
I'm investigating more. I wonder if I'm overwriting my databases...
On Jan 4, 2008, at 5:59 PM, Gavin Henry wrote:
Chris G. Sellers wrote:
> Hello all. I've read the news posting at
>
http://blog.suretecsystems.com/archives/40-OpenLDAP-Weekly-News-Issue-5.h...
> for multimaster N-Way sync. Very good stuff.
>
> I've configured the cn=config backend, and I can browse it with my
> LDAP browser on both my Masters. (I have two servers)
>
> I have created the replication agreements and am able to add them to
> the cn=config as documented in the URL above. No problem on both
> servers.
>
> When I add the data sync, I get a little confused.
>
> Below, for ${BACKEND} I assume I put something like bdb for the
> database backend, correct? If I do this it does not fail, but the
> sync does not happen. I don't see many errors from the sync,
> either.
bdb or hdb, depending on how you've built slapd.
Logs?
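If you're unsure which backend the server is actually using, you can list the configured databases directly (credentials below are placeholders):

```shell
# Placeholder credentials -- substitute your config rootdn/password.
# Shows each olcDatabase entry ({0}config, {1}bdb, etc.) and its suffix,
# which also makes any accidental duplicate database obvious.
ldapsearch -x -H ldap://localhost \
  -D "cn=manager,cn=config" -W \
  -b "cn=config" "(olcDatabase=*)" olcDatabase olcSuffix
```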
>
> dn: olcDatabase={1}$BACKEND,cn=config
> objectClass: olcDatabaseConfig
> objectClass: olc${BACKEND}Config
> olcDatabase: {1}$BACKEND
> olcSuffix: $BASEDN
> olcDbDirectory: ./db
> olcRootDN: $MANAGERDN
> olcRootPW: $PASSWD
> olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN"
> bindmethod=simple
> credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
> interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=005 provider=$URI2 binddn="$MANAGERDN"
> bindmethod=simple
> credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
> interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcSyncRepl: rid=006 provider=$URI3 binddn="$MANAGERDN"
> bindmethod=simple
> credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
> interval=00:00:00:10 retry="5 5 300 5" timeout=1
> olcMirrorMode: TRUE
>
> dn: olcOverlay=syncprov,olcDatabase={1}${BACKEND},cn=config
> changetype: add
> objectClass: olcOverlayConfig
> objectClass: olcSyncProvConfig
> olcOverlay: syncprov
>
>
> I end up with olcDatabase={1}bdb in there twice.
>
> What should the $BACKEND value be, if not bdb? (My db backend is
> bdb.)
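One likely cause of the duplicate: if a {1}bdb database already exists (for example from an earlier slapd.conf conversion), the LDIF above adds a second database entry instead of updating the first. A sketch of modifying the existing entry instead, using the same placeholder variables as above:

```ldif
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN" bindmethod=simple
 credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
 interval=00:00:00:10 retry="5 5 300 5" timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE
```

The syncprov overlay add from the original LDIF would then be applied on top of this existing database entry rather than a new one.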
>
> Thanks for any insight. For now, I am going to have to revert to
> Master+Slave via syncrepl and referrals.
>
> Sellers
>
>
> ----------------------------------------------------------------------
> Chris G. Sellers, MLS Lead Internet Engineer
> National Institute for Technology & Liberal Education
> 535 West William Street, Ann Arbor, Michigan 48103
> chris.sellers@nitle.org 734.661.2318
>
______________________________________________
Chris G. Sellers | NITLE Technology
734.661.2318 | chris.sellers@nitle.org
AIM: imthewherd | GTalk: cgseller@gmail.com