Monitor the SyncRepl replication status
by Bruno Lezoray EMSM
Hi all,
A few months ago, I developed a script to monitor slurpd replication by
checking the replication logs.
Now we want to implement SyncRepl replication, and it looks more
complex to determine the status of the replication.
Has someone already developed a tool to do that?
My first idea is to compare the contextCSN of the suffix entry between
the master and the slave.
I don't know whether there are specific log messages that indicate
that replication is broken.
Any help is welcome.
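For what it's worth, a minimal sketch of that contextCSN comparison could look like the following; the host names, base DN, and anonymous read access to the suffix entry are all placeholder assumptions, not part of any real setup:

```shell
#!/bin/sh
# Sketch: fetch contextCSN from the provider and the consumer, then compare.
# Host names and base DN below are placeholders.
csn() {
    ldapsearch -x -LLL -H "ldap://$1" -s base -b "dc=example,dc=com" contextCSN \
        | sed -n 's/^contextCSN: //p'
}

# Two non-empty, identical CSNs mean the consumer has caught up.
in_sync() {
    [ -n "$1" ] && [ "$1" = "$2" ]
}

master=$(csn master.example.com)
slave=$(csn slave.example.com)
if in_sync "$master" "$slave"; then
    echo "in sync: $master"
else
    echo "OUT OF SYNC: master='$master' slave='$slave'"
fi
```

This only detects "caught up or not"; it says nothing about *why* replication stopped, so it complements rather than replaces log checking.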
Rgds, Bruno.
16 years
chaining question
by Tony Earnshaw
I finally got chaining working on our OL 2.3.37 (I'll be updating) delta
syncrepl Samba consumer. It used to work before and stopped somewhere
around OL 2.3.24; unfortunately I don't know exactly which version.
The two 2.3.37/2.3.38 chaining tests, 018 and 032, pass on my build
machine. But when I put their configuration ad lib into slapd.conf on
the consumer, it doesn't work.
What doesn't work, after 'moduleload back_ldap.la':

overlay chain
chain-uri ldap://mercurius.intern/
chain-idassert-bind bindmethod=simple
    binddn="cn=proxy,dc=barlaeus,dc=nl"
    credentials=secret
    mode=self
chain-tls start
Apart from chain-tls, this is almost verbatim what the two tests use.
I finally noticed the following in the slapo-chain(5) man page, not
having seen the wood for the trees:
"Directives for configuring the underlying ldap database may also be
required, as shown in this example:"
So I tried the example, and this chaining config does work on the consumer:
overlay chain
chain-rebind-as-user FALSE
chain-uri ldap://mercurius.intern/
chain-rebind-as-user TRUE
chain-idassert-bind bindmethod=simple
    binddn="cn=proxy,dc=barlaeus,dc=nl"
    credentials=secret
    mode=self
chain-tls start
Could someone please explain why the two tests' configuration passes
there but not on my consumer, and why the config with the two
chain-rebind-as-user directives does work?
Best,
--Tonni
--
Tony Earnshaw
Email: tonni at hetnet dot nl
Re: Performance issue with In-Memory BDB
by Suhel Momin
Sorry, but I could not understand the meaning of back-bdb residing in HEAD.
I have attached a diff against openldap-2.3.35 with the BDB in-memory
changes. They mainly keep the logs, the environment, and the databases
in memory.
The BDB in-memory code is under #ifdef BDB_INMEMORY.
I am trying different environment cache sizes to check for any
performance difference.
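For comparison, recent BDB releases can already keep the environment's transaction logs in memory without patching slapd, via the environment's DB_CONFIG file; a sketch (the sizes are illustrative guesses, not tuned values):

```
# DB_CONFIG sketch (BDB 4.4+): in-memory transaction logs with a large
# log buffer, plus a bigger environment cache. Sizes are illustrative.
set_flags DB_LOG_INMEMORY
set_lg_bsize 67108864
set_cachesize 0 536870912 1
```

Keeping the databases themselves in memory still requires code changes (opening them with a NULL file name), which is presumably what the attached patch does.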
Thanks and Regards,
Suhel
On 8/31/07, Howard Chu <hyc(a)symas.com> wrote:
> Suhel Momin wrote:
> > Hi,
> >
> > I have made changes such that the BDB backend (ver 4.5) resides in
> > memory instead of on disk. This is with the default slapd configuration.
> > I was hoping to have a better performance with this.
>
> back-bdb as currently residing in HEAD already yields efficiencies of 95%
> of
> available system bandwidth. It is extremely unlikely that you are going to
> be
> able to improve the performance significantly, most changes you could make
> to
> the code at this point will only yield performance losses.
>
> > But what I now see
> > is that do_add takes much more time than it used to when BDB
> > was on disk. Indexing is done on objectclass in both cases.
> >
> > Any pointers on why this could be an issue?
>
> --
> -- Howard Chu
> Chief Architect, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
Multiple Schema
by Suhel Momin
Hi,
I am planning to have multiple DITs.
These DITs may use different customised schemas.
My problem is storing these schemas so that each DIT uses its own
schema definitions, because the attribute types or object classes in
the different schemas might overlap.
Hence I want to keep them separate and use each one only for a
particular DIT. Is this possible?
Or, if I have to make changes to the existing OpenLDAP code, how
should I go about it?
I have observed that attribute types, object classes, etc. are stored
in AVL trees. Do I need to maintain a separate AVL tree per schema for
this to work?
Any advice on how to proceed is appreciated.
If I am on the wrong mailing list, please redirect me.
Thanks and Regards,
Suhel
High availability
by Taymour A. El Erian
Hi,
I have been searching for a long time for a solution that gives me
high availability for writes. We have two LDAP servers running as
multi-master (I know it is not considered a good thing, and it is a
very old 2.0.x); this way, if one is down, the other still accepts
writes (add/modify/delete). With the normal single-master,
multiple-slaves setup we would get more performance and high
availability for reads, but no updates while the master is down.
Also, we cannot separate writes from reads, and we cannot use
referrals (not all of our applications can chase referrals). I thought
of having a standby master with heartbeat, but it doesn't look like a
stable solution. Any ideas? Maybe a shared disk?
--
Taymour A El Erian
System Division Manager
RHCE, LPIC, CCNA, MCSE, CNA
TE Data
E-mail: taymour.elerian(a)tedata.net
Web: www.tedata.net
Tel: +(202)-33320700
Fax: +(202)-33320800
Ext: 1101
openldap - logger
by Arunachalam Parthasarathy
Hello all,
Can we write the debug log messages directly to a file, without
using syslog?
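For reference, slapd.conf has a logfile directive that records debug output to a file; note that it only captures messages produced when slapd is run with a debug level (-d), so it complements rather than replaces syslog for normal operation. A sketch (the path is a placeholder):

```
# slapd.conf fragment: also record debug output in a file.
# Only effective when slapd is started with a debug level, e.g. slapd -d 256.
logfile /var/log/slapd.debug
```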
Thanks in advance,
Arunachalam.
Requesting advice for repairing a syncRepl issue
by Benjamin Lewis
Hello,
I have a pair of OpenLDAP servers that had been replicating flawlessly
with delta syncRepl for about 10 months. Just the other day, I saw
that modifications were no longer being replicated and these messages
were appearing in the syslog on the master server immediately after
the MOD line:
[ID 651871 local0.debug] => bdb_idl_insert_key: c_get next_dup failed:
DB_NOTFOUND: No matching key/data pair found (-30990)
[ID 809268 local0.debug] => bdb_dn2id_add: parent (cn=log) insert failed: -30990
I assume that something has become corrupted in the BDB database for
cn=log on the master. Does that seem correct? I'm definitely not
seeing any new entries in the cn=log database since those messages
began appearing.
If it is a corrupted index, I think that running "slapindex -b cn=log
-f .... " after stopping the slapd process will fix it. After that
completes, I should be able to restart slapd and verify that writes
to entries under the baseDN cause new entries to appear in the
cn=log database.
If it's not an index, I have no idea how to repair this. I found the
error message in the sources (servers/slapd/back-bdb/idl.c:789 in
version 2.3.30) but honestly, I have no idea what that code is doing.
Once (if) I can repair things, I can begin worrying about getting
changes to the replica again. Since there are changes missing from
the cn=log database on the master, I assume that I'll need to cause a
complete re-sync. Is there a better way to accomplish that than
removing the entire database on the replica, using slapadd to import a
recent backup of the master, and restarting the replica?
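As a sketch, the whole repair-and-reload sequence might look like the following; the paths come from my config, and the DRYRUN guard is just a convention of this sketch that prints each command instead of running it:

```shell
#!/bin/sh
# Sketch of the repair and re-sync plan. DRYRUN=1 (the default) only
# prints each command; set DRYRUN=0 to actually execute. Paths are taken
# from the posted slapd.conf and may need adjusting.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1. On the master, with slapd stopped: rebuild the cn=log indices.
run slapindex -f /etc/openldap/slapd.conf -b "cn=log"

# 2. Still on the master: dump the main database for the reload.
run slapcat -f /etc/openldap/slapd.conf -b "dc=our,dc=domain" -l /tmp/master.ldif

# 3. On the replica, with slapd stopped: wipe, reload, then restart slapd.
run rm -rf /var/openldap/data/prod/db/*
run slapadd -f /etc/openldap/slapd.conf -b "dc=our,dc=domain" -l /tmp/master.ldif
```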
Some specifics in case they matter:
Master:
Solaris10 amd64
BDB 4.2.52 + 5 patches
OpenLDAP 2.3.30
Replica:
Solaris10 amd64
BDB 4.2.52 + 5 patches
OpenLDAP 2.3.38 (upgraded from 2.3.33 the day before the problem began
on the Master)
(What I believe to be the) Relevant portions of slapd.conf file from
the Master (slightly obfuscated) are included at the end of this
message.
Thank you for any help,
-Ben
# access log database (used by syncprov-delta replication)
database bdb
suffix "cn=log"
directory /var/openldap/data/prod/logdb
rootdn "cn=Manager,dc=our,dc=domain"
mode 0660
shm_key 142
index default eq
index objectClass,entryUUID,entryCSN eq
index reqStart,reqEnd,reqResult,reqType eq
access to dn.subtree="cn=log"
    by group.exact="cn=DirectoryAdmins,cn=administrators,dc=our,dc=domain" write
    by dn.onelevel="cn=SyncUsers,cn=administrators,dc=our,dc=domain" read
    by * none
overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE
# This is all one line
limits dn.onelevel="cn=SyncUsers,cn=administrators,dc=our,dc=domain" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
database hdb
suffix "dc=our,dc=domain"
rootdn "cn=manager,dc=our,dc=domain"
rootpw {SHA}[XXX REMOVED XXX]
directory /var/openldap/data/prod/db
checkpoint 100000 30
mode 0660
shm_key 42
cachesize 500000
idlcacheSize 1500000
index default pres,eq
index givenName,description,uid,cn,sn pres,eq,sub
index objectClass,uniqueMember,member eq
index employeeNumber eq,sub
index entryCSN,entryUUID eq
overlay ppolicy
ppolicy_default cn=standard,cn=policies,dc=our,dc=domain
overlay dynlist
dynlist-attrset groupOfURLs memberURL member
overlay syncprov
syncprov-checkpoint 100000 30
syncprov-sessionlog 300000
overlay accesslog
logdb cn=log
logops writes
logsuccess TRUE
logold (objectClass=inetOrgPerson)
logpurge 28+00:00 01+00:00
# This is all one line
limits dn.onelevel="cn=SyncUsers,cn=administrators,dc=our,dc=domain" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited