Initially, I ran the command below and received this error:
# slapadd -l /usr/local/openldap-2.4.23/etc/openldap/ldif/group42_ldap.ldif
=> bdb_tool_entry_put: id2entry_add failed: DB_KEYEXIST: Key/data pair already exists (-30995)
=> bdb_tool_entry_put: txn_aborted! DB_KEYEXIST: Key/data pair already exists (-30995)
slapadd: could not add entry dn="dc=group42,dc=ldap" (line=1): txn_aborted! DB_KEYEXIST: Key/data pair already exists (-30995)
slapadd shutdown: initiated
I saw a post of yours from a while back, so I backed up the consumer's
database, moved the directory out of the way, then ran slapadd and brought
slapd back up on the consumer.
Things now look more in sync with each other than they did.
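For anyone finding this in the archives, the reload I did was roughly the
following sketch. The database directory path is an assumption based on my
2.4.23 install prefix above, and the backup directory name is arbitrary:

```
# stop slapd on the consumer first (e.g. kill the PID from slapd.pid)
# move the old BDB database aside and create a fresh, empty directory
mv /usr/local/openldap-2.4.23/var/openldap-data \
   /usr/local/openldap-2.4.23/var/openldap-data.bak
mkdir /usr/local/openldap-2.4.23/var/openldap-data
# (copy DB_CONFIG into the new directory if you use one)
# reload from the LDIF, then start slapd again
slapadd -l /usr/local/openldap-2.4.23/etc/openldap/ldif/group42_ldap.ldif
```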
Question: I noticed that there are seven "reqStart" entries under
cn=accesslog on the provider machine. They were there while I was testing
and beating my head against the wall. Should they be deleted manually, since
I used slapcat/slapadd? Or does it matter?
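For reference, the entries in question can be listed with something like the
following; the bind DN and the suffix are placeholders for my setup:

```
# list reqStart values in the accesslog database (placeholder credentials)
ldapsearch -x -D "cn=admin,dc=group42,dc=ldap" -W \
    -b "cn=accesslog" "(objectClass=auditObject)" reqStart
```

My understanding is that if the accesslog overlay is configured with a
logpurge interval, old log entries are pruned automatically on that cycle.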
From: Quanah Gibson-Mount [mailto:email@example.com]
Sent: Wednesday, March 14, 2012 2:53 PM
To: Borresen, John - 0442 - MITLL
Subject: RE: OPENLDAP SYNCREPL
--On Wednesday, March 14, 2012 2:44 PM -0400 "Borresen, John - 0442 -
MITLL" <john.borresen(a)ll.mit.edu> wrote:
Personally, I use a specific replicator identity for replication that has
full read access to all data on the master, and make that the first ACL.
Then I don't have to worry about whether or not the other ACLs interfere. At
this point, if you haven't already, I'd wipe out the replica's DB and make
it do a fresh sync, so you can confirm it is working properly, OR slapcat
the master and use the appropriate flags to slapadd to reload it, and then
ensure the replica keeps up to date. If you're using the replica as-is,
after it's had an invalid configuration for days, I fully expect the data on
the replica to be in an odd state. Make sure nothing is using the replica if
you choose to reload it by either method.
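A minimal sketch of what that first ACL might look like in slapd.conf,
assuming a hypothetical replicator DN of cn=replicator,dc=group42,dc=ldap:

```
# slapd.conf on the provider -- replicator DN is a hypothetical example
access to *
    by dn.exact="cn=replicator,dc=group42,dc=ldap" read
    by * break
```

The "by * break" clause lets evaluation fall through to the remaining ACLs
for everyone other than the replicator, so the rest of the ACL list is
unaffected.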
Sr. Member of Technical Staff
A Division of VMware, Inc.
Zimbra :: the leader in open source messaging and collaboration