On Wednesday, 7 July 2010 11:25:26 Arjan Filius wrote:
Hello Buchan, et al.
On Tue, 6 Jul 2010, Buchan Milne wrote:
On Tuesday, 6 July 2010 07:08:05 Arjan Filius wrote:
Hello openldap-technical,
new on the list, Arjan Filius is my Name.
Having set up openldap 2.4.21 with one master and six slaves/consumers in a delta-syncrepl configuration, and testing an upgrade from an older openldap version.
Please specify the version you are upgrading from, it *is* relevant.
Apologies, it didn't occur to me to mention it: the older LDAP version is 2.3.38 for 32-bit i386, on another machine (which uses slurpd for replication). The upgrade is done by exporting on the old server (slapcat > export-file.ldif) and importing on the new machine (2.4.21): slapadd -l /tmp/export-file.ldif -F ./slapd.d/
Exporting (slapcat > export-file) and importing (slapadd -l export-file) on the (empty/pristine) master, then attaching empty/pristine slaves, works just fine, except that it takes more than one hour to complete.
Well, maybe you should consider appropriate tuning/changes to your import process to speed things up, rather than risk data integrity. You don't specify how large your database is, what tuning you applied, or which slapadd flags you used, so it is difficult to know whether 1 hour is good or bad.
slapadd (on the master) can be done in 12 minutes with a tuned config. The major ingredients ('#' marks the regular value, without the special import tunables):

#checkpoint 10 3
checkpoint 2000 60

#dbnosync
dbnosync

# syncprov-checkpoint 100 10
syncprov-checkpoint 1000 100
After the import, start the master with just the regular parameters (without dbnosync, and with the stricter checkpointing of 100 10).
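To make the two profiles above concrete, here is a minimal slapd.conf sketch; the suffix and directory are placeholders, and the directive values are the ones quoted above (checkpoint and dbnosync are database directives, syncprov-checkpoint belongs to the syncprov overlay):

```
database        hdb
suffix          "dc=example,dc=com"      # placeholder
directory       /var/lib/ldap            # placeholder

# import profile: fast but unsafe, use only while running slapadd
checkpoint      2000 60
dbnosync

overlay         syncprov
syncprov-checkpoint 1000 100

# regular profile: restore these (and drop dbnosync) before
# starting slapd for production use
#checkpoint     10 3
#syncprov-checkpoint 100 10
```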
The -q flag can be used in place of dbnosync.
machines all have 8G RAM, 1 CPU.
I'm also not sure what is good or bad in terms of performance. I'm just looking for the quickest migration path, without resorting to exotic "tricks". I thought that importing the same data (slapadd) on the master and all slaves in parallel was not exotic, and would shorten the migration time.
Well, you weren't explicit in how you were loading the slaves.
You should probably do as follows:
1) slapcat on the old master (let's say to old.ldif)
2) slapadd old.ldif on the new master
3) slapcat on the new master (let's say to new.ldif)
4) slapadd new.ldif on the consumers
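On the command line, the four steps might look like this (paths are illustrative; -q skips consistency checks and disables syncing for the duration of the load, serving the same purpose as the dbnosync tunable):

```
# 1) on the old (2.3.x) master: export the database
slapcat -l old.ldif

# 2) on the new (2.4.x) master: import the export
slapadd -q -l old.ldif -F /etc/openldap/slapd.d

# 3) on the new master: re-export, so the ldif carries the
#    entryCSN/contextCSN values the provider actually holds
slapcat -F /etc/openldap/slapd.d -l new.ldif

# 4) on each consumer: load the provider's export
slapadd -q -l new.ldif -F /etc/openldap/slapd.d
```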
Starting with different data on providers and consumers is sure to result in broken replication.
I used exactly the same import file on the master and the slaves (source 2.3.39). The imported ldif is about 580MB and contains no contextCSN, so I think (re)generating it may lead to the situation (cookie issue) encountered.
Only contextCSN difference would not lead to replication failures, but entryCSN differences would ...
If the entryCSN after import on the new master is different to the ldif you imported (which is likely when doing migration over major versions), then your data isn't consistent, even if you are using the same original ldif ...
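One way to check for this before enabling replication is to diff the entryCSN values between a provider export and a consumer export. A minimal sketch (the LDIF parsing is deliberately simplified: it assumes unwrapped, non-base64 dn: and entryCSN: lines, which is typical for slapcat output of these attributes):

```python
def entrycsn_map(ldif_text):
    """Map each dn to its entryCSN in a slapcat-style LDIF dump."""
    csns, dn = {}, None
    for line in ldif_text.splitlines():
        if line.startswith("dn: "):
            dn = line[4:]
        elif line.startswith("entryCSN: ") and dn is not None:
            csns[dn] = line[10:]
    return csns

def csn_mismatches(provider_ldif, consumer_ldif):
    """Return dns whose entryCSN differs (or is missing) between dumps."""
    a = entrycsn_map(provider_ldif)
    b = entrycsn_map(consumer_ldif)
    return sorted(dn for dn in a.keys() | b.keys() if a.get(dn) != b.get(dn))

# hypothetical example: two tiny dumps where one entry drifted
master = """dn: dc=example,dc=com
entryCSN: 20100707112526.000000Z#000000#000#000000

dn: ou=people,dc=example,dc=com
entryCSN: 20100707112527.000000Z#000000#000#000000
"""
slave = """dn: dc=example,dc=com
entryCSN: 20100707112526.000000Z#000000#000#000000

dn: ou=people,dc=example,dc=com
entryCSN: 20100706080805.000000Z#000000#000#000000
"""
print(csn_mismatches(master, slave))  # -> ['ou=people,dc=example,dc=com']
```

An empty result means every entry carries the same entryCSN in both dumps; any dn listed is an entry the consumer would consider out of sync.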
Regards, Buchan