Buchan Milne writes:
> On Wednesday 20 February 2008 21:00:42 Thomas Ledbetter wrote:
>> but would it make more sense to do this more frequently over the
>> course of the day to keep the checkpoint process less expensive per
>> iteration?
> IMHO, yes.
That also makes startup faster after an unclean shutdown, since slapd
needs to recover the database in order to start.
>> What kinds of metrics should be used to determine how frequently
>> this should be done?
> This would depend on the frequency of changes, and how long you can
> afford to have database recovery run for. I usually go with something
> around 5-10 minutes.
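(In slapd.conf terms that would be something like the following - the
kbyte figure below is just an example, pick whatever suits your write
volume:

checkpoint 1024 5

i.e. checkpoint once 1024 KB of log data have been written or 5 minutes
have passed since the last checkpoint, whichever comes first.)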
At our site we usually run read-only and then once in a while apply a
lot of changes at once - we care about read and startup speed, but not
much about update speed. So we have tried an even shorter interval, 2
minutes, which seems to work fine. We still need to experiment with
taking slapd up and down to see what happens, though. Our settings:
checkpoint 1024 2
dbconfig set_flags DB_LOG_AUTOREMOVE
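(DB_LOG_AUTOREMOVE makes Berkeley DB remove log files as soon as they
are no longer needed for normal recovery. Note that this gives up
catastrophic recovery from the removed logs, so archive them first if
you want to keep that option.)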
>> If I have two master servers keeping, say, a week's worth of
>> transaction logs, a sound archival process, and a backup LDIF, would it
>> make sense to just disable transaction logging altogether across the
>> replicas?
> If you can afford to take a slave down for a re-import or re-sync, maybe.
> However, I would rather sacrifice a bit of performance to avoid this myself.
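(A middle ground, if I have the flag semantics right, is to keep the
transaction log but relax durability with

dbconfig set_flags DB_TXN_NOSYNC

so commits don't wait for the log to be flushed to disk. A crash can
then lose the last few committed changes, but the database stays
consistent, normal recovery still works, and a syncrepl slave should
catch up the difference from the master afterwards.)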
To translate that into Berkeley DB parlance, if I've got it right: you
don't need _catastrophic_ recovery for the slaves - that implies manual
intervention anyway, and then you can just as well use the backup of the
master slapd. However, if you disable _normal_ recovery on the slaves,
then they'll also need manual help to start after an otherwise harmless
system or application failure.
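(For reference, with the command-line tools that's roughly the
difference between - paths are just examples:

db_recover -h /var/lib/ldap
db_recover -c -h /var/lib/ldap

The first is normal recovery, which slapd can run for itself at
startup; the second, with -c, is catastrophic recovery, which needs the
archived log files and a human to drive it.)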
--
Hallvard