Rodrigo Costa wrote:
Buchan,

I did exactly what you said, even using the -q flag in the slapadd
command. So in summary I did:

1) Load the master DB from the LDIF file through slapadd (-q flag and
DB_CACHESIZE set to 1GB; see the command sketch below);
2) Load the slave DB from the same LDIF file through slapadd (same
flags);
3) Then have the slapd.conf files appropriately configured;
4) Start the master and then, after some minutes, start the slave.
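
For concreteness, a minimal sketch of the load in steps 1 and 2,
assuming the config lives in /etc/openldap/slapd.conf and the export is
data.ldif (both paths are just examples):

    # with slapd stopped, quick-mode import without transactions
    slapadd -q -f /etc/openldap/slapd.conf -l data.ldif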

So the slave is not starting blank; it is fully loaded, and in terms of
data there isn't any difference between the provider (master) and the
consumer (slave). When the consumer slapd process starts on the slave
machine (slapd started with the -d 256 flag), I just see the connection
from the consumer to the provider slapd process.

Then all the behavior I described starts to happen. Please see the
provider and consumer configuration files attached.

Something appears not to be following the expected behavior. Also, the
memory consumed by the consumer grows too fast and doesn't appear to
really follow the cache directives.

Regards,

Rodrigo.

I do not know what attrs="*,+" is supposed to mean.  But with OpenLDAP 2.4.11, if you do a search with attrs="*,+" as the attributes to search for, the search will not return any attributes, so your slave database will never be synchronized with the master.  I'd comment out that line and try a sync again to determine if this is the cause.
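
For illustration, a minimal consumer syncrepl stanza with that line
commented out (the provider URL, searchbase, and credentials are
placeholders, not values from the attached configs):

    syncrepl rid=001
        provider=ldap://master.example.com
        type=refreshAndPersist
        retry="60 +"
        searchbase="dc=example,dc=com"
        # attrs="*,+"
        bindmethod=simple
        binddn="cn=replicator,dc=example,dc=com"
        credentials=secret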

I used an LDIF file to load 4 OpenLDAP servers, and synchronization has worked perfectly among the 4 servers since the LDIF file was loaded onto all of them.

Buchan Milne wrote:
On Sunday 03 May 2009 04:15:47 Rodrigo Costa wrote:
openldap software,

Some time ago I opened ITS#5860 about some memory cache limits not
being respected by the config files. Even though this issue was solved,
when I tried to configure OpenLDAP to use replication (syncrepl) the
system never entered into sync, and the behavior appears similar to the
ITS#5860 bug.

The system starts to sync, and on the provider (master) I see the query
for the DB sync. But the consumer (slave) memory consumption starts to
grow very fast, forcing me to constrain dncachesize much further, to
1/10 of the provider's (master) value, so that at least the system
doesn't crash on the consumer.
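
For reference, these are the back-bdb cache directives in question, as
they would appear in slapd.conf (the numbers are illustrative only, not
my actual values):

    database     bdb
    # entry cache, in entries
    cachesize    10000
    # DN cache, in DNs; the directive I reduced to 1/10 on the consumer
    dncachesize  100000
    # IDL cache, in slots
    idlcachesize 30000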

Since changes were made in OpenLDAP 2.4.16, I downloaded that version
and tested with it. I ran into the same behavior, with the consumer
(slave) never getting in sync with the provider (master).

The behaviors are:

1) The consumer (slave) starts querying the provider (master) DB;
2) Memory allocation and the number of threads on the provider (master)
start to increase, as expected;
3) The dncachesize directive on the provider (master) controls, as
expected, the maximum memory allocated by the slapd process on the
provider (master);
4) The consumer (slave) consumes memory at a much faster pace;
dncachesize was configured to 1/10 of the provider's (master) value to
avoid memory allocation problems;
5) After some time, the consumer (slave) CPU usage stays at 200%, while
the provider (master) stays at low CPU usage, around 1 to 3%;
6) A new provisioning on the provider (master) isn't propagated to the
consumer (slave);
7) The bases never get in sync and CPU usage on the consumer stays
high. Queries to the provider (master) are answered very fast, and even
multiple individual queries to the consumer (slave) are also answered
in reasonable time.

It looks like there could be an issue in the replication logic where
the consumer (slave) replication logic gets stuck in some processing
dead loop.

The newest OpenLDAP version and Berkeley DB 4.7 with all patches were
compiled on the platform running the code.

Any idea about this behavior?
I have seen behaviour like this when there was something preventing
synchronisation, and the consumer would spawn more consumer threads until
the box ran out of memory. I fixed the real issue, and haven't seen it
since (and haven't had time to try to reproduce it).

However, on a large database you may have better success initialising the
consumer via 'slapadd' with an export from a provider, instead of using
syncrepl to do it. Since slapadd can run multiple threads (syncrepl only
runs one thread), doesn't need to bother serving client requests, and can
run without transactions (see the -q flag), it is much more efficient.
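
As a sketch, assuming a slapd.conf-based setup (the paths here are
placeholders):

    # on the provider: dump the database to LDIF
    slapcat -f /etc/openldap/slapd.conf -l export.ldif

    # on the consumer, with slapd stopped: quick-mode import
    slapadd -q -f /etc/openldap/slapd.conf -l export.ldif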

Note that you could consider different tuning for import vs run-time, e.g. I 
usually increase the BDB cache_size for imports with slapadd, and decrease it 
for runtime.
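
For instance, assuming the BDB cache is set via set_cachesize in
DB_CONFIG (arguments are gigabytes, bytes, and number of segments; the
values below are only examples):

    # DB_CONFIG for the slapadd import: large cache, e.g. 1 GB
    set_cachesize 1 0 1

    # DB_CONFIG for runtime: smaller cache, e.g. 256 MB
    set_cachesize 0 268435456 1

Note that, as far as I recall, BDB only picks up a changed set_cachesize
once the environment is recreated (e.g. after running db_recover).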

Regards,
Buchan