Re: slow or inconsistent syncrepl
by Amol Kulkarni
Dear Quanah,
Thanks a lot for these 2 pointers. I'll check out the 2.4.30 version.
We had used delta-syncrepl earlier, but our accesslog database would sometimes grow suddenly and fill the disk, crashing or hanging the LDAP service on the provider. At that time we had set the maximum age for accesslog entries to 7 days. I'll reduce it and give it a try again.
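For reference, the accesslog setup on the provider was roughly the following (slapd.conf style); the logpurge values below are what I intend to try this time, not what we had before:
# accesslog overlay on the main database of the provider
overlay accesslog
logdb cn=accesslog
logops writes
logsuccess TRUE
# keep about 2 days of log instead of 7, purging every 4 hours
logpurge 02+00:00 00+04:00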
Also, it would be helpful if you could throw some light on:
2. On a really busy LDAP server, can replication slow down drastically? i.e. do read operations affect replication in any way?
4. We currently have about 60 consumers - is this too many? What would be the maximum number of consumers?
5. Sometimes we urgently need a particular node to be present on a consumer and cannot wait for replication; in that case we take an LDIF of that node from the provider and do an ldapadd on the consumer (mirrormode is ON on the consumers). Is this safe and correct, or could it cause side effects? Is there a better way to handle it?
Thanks and Regards,
Amol Kulkarni.
----- Original Message -----
From: Quanah Gibson-Mount
Sent: 03/09/12 11:57 PM
To: Amol Kulkarni, openldap-technical(a)openldap.org
Subject: Re: slow or inconsistent syncrepl
--On Friday, March 09, 2012 2:20 PM +0100 Amol Kulkarni <amolkulkarni(a)gmx.com> wrote:
> I have a following openldap setup with syncrepl :
> - openldap version 2.4.23
This is your #1 issue.
> - 1 provider and about 10 consumers in lan and 50 consumers on wan
This is your #2 issue.
Upgrade to a stable release. Use delta-syncrepl, which uses significantly less bandwidth than syncrepl.
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
slow or inconsistent syncrepl
by Amol Kulkarni
I have the following OpenLDAP setup with syncrepl:
- OpenLDAP version 2.4.23
- 1 provider and about 10 consumers on the LAN and 50 consumers over the WAN
- The replication type is push - refreshAndPersist.
- There are about 0.5 million nodes.
- The physical size of the LDAP database directory is about 5 GB.
- All the LAN servers have identical hardware configuration.
I have following problems/questions.
1. We observe that LDAP replication works better on one server than on another in the same LAN, even though they have the same configuration. Is LDAP replication affected by read operations on the consumers? If two consumers receive different amounts of LDAP reads, will their replication speed differ?
2. On a really busy LDAP server, can replication slow down drastically?
3. We find that even though the contextCSN of the provider and a consumer is the same, there are actually nodes which differ and nodes which were not added or deleted. So we have created a custom script which compares the entryCSN of each and every node (roughly as sketched after question 5 below). Is this OK?
4. We currently have about 60 consumers - is this too many? What would be the maximum number of consumers?
5. Sometimes we urgently need a particular node to be present on a consumer and cannot wait for replication; in that case we take an LDIF of that node from the provider and do an ldapadd on the consumer (mirrormode is ON on the consumers). Is this safe and correct, or could it cause side effects?
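(Regarding question 3, the script effectively does the equivalent of the following on both servers and diffs the output; the hostnames and search base are placeholders and bind options are omitted:)
# compare the contextCSN of provider and consumer
ldapsearch -x -H ldap://provider.example.com -s base -b "dc=example,dc=com" contextCSN
ldapsearch -x -H ldap://consumer.example.com -s base -b "dc=example,dc=com" contextCSN
# dump the entryCSN of every node on both sides and diff
ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" entryCSN > provider.csn
ldapsearch -x -H ldap://consumer.example.com -b "dc=example,dc=com" entryCSN > consumer.csn
diff provider.csn consumer.csn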
Kindly help me in troubleshooting these issues.
Right now I can't think of which server configuration details would be useful, so I didn't include them in this mail; please tell me if anyone needs a particular piece of configuration.
Thanks and Regards,
Amol Kulkarni.
Re: configure options
by Brett @Google
It turned out there was an LD_LIBRARY_PATH used for the already running openldap which was still set in my environment, so the "make test" run prior to installation was run against the installed libraries instead of the newly compiled (but as yet uninstalled) ones in the build directory.
Arguably the test could "unset LD_LIBRARY_PATH", but that variable might be used to find Berkeley DB or other third-party libraries, so maybe prepend LD_LIBRARY_PATH="`pwd`/libraries/liblber/.libs:`pwd`/libraries/libldap/.libs" in the script that launches the tests?
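Something along these lines, run from the top of the build tree, is what I had in mind (an untested sketch):
# put the freshly built libraries ahead of anything already installed
LD_LIBRARY_PATH="`pwd`/libraries/liblber/.libs:`pwd`/libraries/libldap/.libs:$LD_LIBRARY_PATH" \
    make test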
Has there ever been any thought of having a --with-bdb=<prefix> option, which would imply adding CPPFLAGS="-I<prefix>/include" and LDFLAGS="-L<prefix>/lib", and similarly for things like --with-ssl=<another prefix>?
Personally I always have to add -R<prefix>/lib as well, but this varies by platform; e.g. Linux has dropped -R in favour of a gcc option that passes flags through to the linker, so it's probably hard to do in a platform-agnostic manner.
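What I currently do by hand, and what a --with-bdb option could abbreviate, is roughly this (the prefixes are just examples from my own builds):
CPPFLAGS="-I/usr/local/BerkeleyDB.4.8/include" \
LDFLAGS="-L/usr/local/BerkeleyDB.4.8/lib -R/usr/local/BerkeleyDB.4.8/lib" \
    ./configure --prefix=/usr/local/openldap
# on Linux with gcc the run path has to go through the linker instead, e.g.:
#   LDFLAGS="-L/usr/local/BerkeleyDB.4.8/lib -Wl,-rpath,/usr/local/BerkeleyDB.4.8/lib"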
On Fri, Mar 9, 2012 at 12:07 PM, Howard Chu <hyc(a)symas.com> wrote:
> Quanah Gibson-Mount wrote:
>
>> --On Friday, March 09, 2012 10:49 AM +1000 "Brett @Google"
>> <brett.maxfield(a)gmail.com> wrote:
>>
>> Hello,
>>>
>>> Are the tests in test050 (multimaster concurrency) for release 2.4.30,
>>> still likely or expected to fail on solaris 10 with bdb 4.8.30 ?
>>>
>>> We dont use multimaster, but at one point in recent history the tests in
>>> test050 (multimaster concurrency) were expected to fail.
>>>
>>> I'll log an ITS if it is now unexpected for this test to fail..
>>>
>>> Cheers
>>> Brett
>>>
>>> Waiting 5 seconds for slapd to start...
>>> Waiting 5 seconds for slapd to start...
>>> 29490 Segmentation Fault - core dumped
>>>
>>
>> segmentation faults are never expected. You should file an ITS with the
>> backtrace on the core file.
>>
>
> And the last few lines of slapd.1.log.
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
--
"The only thing that interferes with my learning is my education."
Albert Einstein
fw: multimaster
by Brett @Google
Hello,
Are the tests in test050 (multimaster concurrency) for release 2.4.30 still likely or expected to fail on Solaris 10 with BDB 4.8.30?
We don't use multimaster, but at one point in recent history the tests in test050 (multimaster concurrency) were expected to fail.
I'll log an ITS if it is now unexpected for this test to fail.
Cheers
Brett
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
29490 Segmentation Fault - core dumped
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
ldapsearch failed (255)!
./scripts/test050-syncrepl-multimaster: kill: no such process
>>>>> test050-syncrepl-multimaster failed for hdb
(exit 255)
*** Error code 255
The following command caused the error:
./run -b hdb all
make: Fatal error: Command failed for target `hdb-yes'
Current working directory /home/govops/build/openldap/openldap-2.4.30/tests
*** Error code 1
The following command caused the error:
make hdb
make: Fatal error: Command failed for target `test'
Current working directory /home/govops/build/openldap/openldap-2.4.30/tests
gmake: *** [test] Error 1
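If I do log an ITS, I'll include a backtrace from the core; I expect to get it with something like the following (paths are just how my build tree happens to be laid out):
# quick stack from the core file on Solaris
pstack core
# or a full backtrace with gdb against the slapd binary that dumped core
gdb ../servers/slapd/slapd core
(gdb) bt full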
--
"The only thing that interferes with my learning is my education."
Albert Einstein
multi-master syncrepl with sasl/gssapi authentication
by travis.bean@assuretech.net
I am having trouble getting multi-master syncrepl to sync when using "bindmethod=sasl" and "saslmech=gssapi". I had success when I tried "bindmethod=simple", so at least I know the problem has been narrowed down to SASL/GSSAPI authentication (an incorrect or missing SASL olcAuthzRegexp, or perhaps an incorrect or missing slapd ACL?).
My syncrepl config is as follows (do I need to specify an authcid/authzid, or is this identity obtained automatically from GSSAPI?):
olcMirrorMode: TRUE
olcSyncRepl:
rid=001
provider=ldap://or-dc1-db.example.corp
retry="5 10 30 +"
bindmethod=sasl
saslmech=gssapi
type=refreshAndPersist
searchbase="cn=config"
olcSyncRepl:
rid=002
provider=ldap://or-dc2-db.example.corp
retry="5 10 30 +"
bindmethod=sasl
saslmech=gssapi
type=refreshAndPersist
searchbase="cn=config"
# Syncprov overlay
dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcMirrorMode: TRUE
olcSyncRepl:
rid=003
provider=ldap://or-dc1-db.example.corp
retry="5 10 30 +"
bindmethod=sasl
saslmech=gssapi
type=refreshAndPersist
searchbase="dc=example,dc=corp"
olcSyncRepl:
rid=004
provider=ldap://or-dc2-db.example.corp
retry="5 10 30 +"
bindmethod=sasl
saslmech=gssapi
type=refreshAndPersist
searchbase="dc=example,dc=corp"
# Syncprov overlay
dn: olcOverlay=syncprov,olcDatabase={1}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
My access control:
olcAccess: to attrs=userPassword,shadowLastChange
by dn="uid=ldap-admin,ou=people,dc=example,dc=corp" write
by dn="uid=ldap/or-dc1-db.example.corp,cn=example.corp,cn=gssapi,cn=auth" write
by dn="uid=ldap/or-dc2-db.example.corp,cn=example.corp,cn=gssapi,cn=auth" write
by anonymous auth
by self write
by * none
olcAccess: to dn.subtree="ou=krb5,dc=example,dc=corp"
by dn="cn=kdc-srv,ou=krb5,dc=example,dc=corp" read
by dn="cn=adm-srv,ou=krb5,dc=example,dc=corp" write
by * none
olcAccess: to *
by dn="uid=ldap-admin,ou=people,dc=example,dc=corp" write
by dn="uid=ldap/or-dc1-db.example.corp,cn=example.corp,cn=gssapi,cn=auth" write
by dn="uid=ldap/or-dc2-db.example.corp,cn=example.corp,cn=gssapi,cn=auth" write
by peername.ip="192.168.0.0%255.255.255.0" read
My sasl AuthzRegexp:
olcAuthzRegexp: uid=([^,]+),cn=example.corp,cn=gssapi,cn=auth
uid=$1,ou=people,dc=example,dc=corp
I know SASL/GSSAPI is working, since ldapwhoami on or-dc1-db returns:
SASL/GSSAPI authentication started
SASL username: ldap/or-dc1-db.example.corp(a)EXAMPLE.CORP
SASL SSF: 56
SASL data security layer installed.
dn:uid=ldap/or-dc1-db.example.corp,ou=people,dc=example,dc=corp
ldapwhoami on or-dc2-db returns:
SASL/GSSAPI authentication started
SASL username: ldap/or-dc2-db.example.corp(a)EXAMPLE.CORP
SASL SSF: 56
SASL data security layer installed.
dn:uid=ldap/or-dc2-db.example.corp,ou=people,dc=example,dc=corp
I get the following /var/log/syslog errors on or-dc1-db:
OR-DC1-DB slapd[5446]: slap_client_connect:
URI=ldap://or-dc2-db.example.corp ldap_sasl_interactive_bind_s failed (-2)
OR-DC1-DB slapd[5446]: do_syncrepl: rid=004 rc -2 retrying
OR-DC1-DB slapd[5446]: slap_client_connect:
URI=ldap://or-dc2-db.example.corp ldap_sasl_interactive_bind_s failed (-2)
OR-DC1-DB slapd[5446]: do_syncrepl: rid=002 rc -2 retrying
/var/log/syslog errors on or-dc2-db:
OR-DC2-DB slapd[5455]: slap_client_connect:
URI=ldap://or-dc1-db.example.corp ldap_sasl_interactive_bind_s failed (-2)
OR-DC2-DB slapd[5455]: do_syncrepl: rid=003 rc -2 retrying
OR-DC2-DB slapd[5455]: slap_client_connect:
URI=ldap://or-dc1-db.example.corp ldap_sasl_interactive_bind_s failed (-2)
OR-DC2-DB slapd[5455]: do_syncrepl: rid=001 rc -2 retrying
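One thing I still intend to check is whether slapd itself has Kerberos credentials to bind with, since rc (-2) looks like a local error rather than a rejection from the other server. A sketch of what I have in mind (the keytab path, cache location and the use of k5start are my own guesses, not something I have confirmed for this setup):
# keep a ticket cache renewed from the host's keytab; k5start is from the kstart package
k5start -b -K 60 -f /etc/krb5.keytab -U -o openldap -k /var/run/slapd/krb5cc
# and make slapd see that cache, e.g. via /etc/default/slapd:
export KRB5CCNAME=/var/run/slapd/krb5cc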
replication without making the consumer database readonly
by Wuensche Michael
I have an OpenLDAP 2.4 environment with 2 servers, one serving as provider for 2 databases and one as consumer.
On one of the databases I only want to replicate certain entries, filtered by objectClass; I use syncrepl for replication. Now I would like to be able to write entries on the consumer server which are not covered by the filter and so are not replicated. But OpenLDAP sends me a referral to the master on write attempts if I use the updateref directive, and if I don't use this directive I get error 53: unwilling to perform.
Is there a way to have part of a database's entries replicated while others can be written locally?
Alternatively I am considering splitting the suffix into several databases, as sketched below.
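If I go the split route, I imagine something like this on the consumer (slapd.conf style; suffixes, paths and the filter are placeholders):
# replicated part of the tree: consumer of the provider's filtered content
database   hdb
suffix     "ou=replicated,dc=example,dc=com"
directory  /var/lib/ldap/replicated
syncrepl   rid=010
           provider=ldap://provider.example.com
           type=refreshAndPersist
           searchbase="ou=replicated,dc=example,dc=com"
           filter="(objectClass=inetOrgPerson)"
           bindmethod=simple
           binddn="cn=replicator,dc=example,dc=com"
           credentials="secret"
updateref  ldap://provider.example.com

# local part of the tree: no syncrepl, so entries here remain writable on this server
database   hdb
suffix     "ou=local,dc=example,dc=com"
directory  /var/lib/ldap/local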
Kind regards,
Michael
[SOLVED] RE: OpenLDAP 2.4 : replication doesn't work when consumer is stopped
by PROST Frédéric
After many days of testing I finally found the solution, which works on both the latest version built from source and the Debian version:
I found it thanks to this post :
http://www.openldap.org/lists/openldap-technical/201008/msg00274.html
When we check the CSN value, it appears that the serverID is not transmitted:
csn=20120308091919.539118Z#000000#000#000000 (the string #000# should be the serverID)
To correct this problem, I simply changed the slapd -h option in my startup script (/etc/default/slapd on Debian)
from :
SLAPD_SERVICES="ldap:/// ldapi:///"
To :
SLAPD_SERVICES="ldap://<ip or hostname of the server (same as oclServerID)>"
This has to be done on both nodes with the correct IP.
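For reference, the corresponding serverID declarations in cn=config look like this on my nodes (the hostnames here are placeholders; each URL must match what slapd is started with via -h):
dn: cn=config
olcServerID: 1 ldap://node1.example.com
olcServerID: 2 ldap://node2.example.com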
Then my CSN is now something like:
csn=20120308111043.060040Z#000000#001#000000 (note the #001# corresponding to the serverID)
Then restart the server and everything runs fine.
Regards,
--
Frédéric PROST
-----Original Message-----
From: Quanah Gibson-Mount [mailto:quanah@zimbra.com]
Sent: Wednesday, 7 March 2012 18:02
To: PROST Frédéric; openldap-technical(a)openldap.org
Subject: RE: OpenLDAP 2.4 : replication doesn't work when consumer is stopped
--On Wednesday, March 07, 2012 8:06 AM +0100 PROST Frédéric <f.prost(a)mb-line.com> wrote:
> Hello,
>
> My OpenLDAP version is 2.4.23 (installed with apt-get install slapd on
> Debian Squeeze).
Using 2.4.23 from Debian is a bad decision, for numerous reasons, which have been discussed multiple times on the list.
Please see: <http://www.openldap.org/faq/data/cache/1456.html>
for just a beginning of the reasons as to why this is a bad idea.
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Migration of Openldap server from Solaris 8 to 10
by jay.chouhan@indiatimes.com
Dear Admin,
We are running Openldap server on Solaris 9 (Kernel version: SunOS 5.9 Generic 122300-30 Jul 2008)
Could you please guide me on how to migrate the OpenLDAP server from Solaris 9 to Solaris 10?
We are planning to upgrade the OS from Solaris 9 to Solaris 10 by re-installing the OS.
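My current plan is to dump the directory to LDIF on the old server and reload it on the new one, roughly as below (the config paths are placeholders) - please correct me if there is a better approach:
# on the Solaris 9 server, with slapd stopped
slapcat -f /etc/openldap/slapd.conf -l backup.ldif
# on the Solaris 10 server, after installing OpenLDAP and copying over slapd.conf and the schema files
slapadd -f /etc/openldap/slapd.conf -l backup.ldif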
I would appreciate your kind reply on this. Thanks.
Kind regards,
Jay
Help tweaking settings so slapd is not writing to disk so much
by Marc Roos
Hi
I am running Dovecot and Sendmail on a VM, with authentication through PAM against LDAP. I am seeing strange spikes in the load, and I think slapd is writing too much to disk; I want to reduce disk I/O.
Does anybody have an idea why slapd is writing to disk so often instead of reading? The slapd process keeps writing to disk although it is set up as a consumer and is not receiving any new data. There are only 2200 records, and olcDbDNcacheSize and olcDbIDLcacheSize are both set to 3000.
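In case it is relevant, this is the BDB tuning I was thinking of trying (the values are guesses for a database of this size):
# DB_CONFIG in the database directory
set_cachesize 0 67108864 1
set_lg_bsize 2097152
# and in cn=config, spacing out checkpoints (and their writes):
# olcDbCheckpoint: 1024 15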
Thanks in advance for suggestions.
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
10297 be/4 ldap 0.00 B/s 42.66 K/s 0.00 % 0.00 % slapd -h
ldap:///~/ ldapi:/// -u ldap
10298 be/4 ldap 0.00 B/s 11.64 K/s 0.00 % 0.00 % slapd -h
ldap:///~/ ldapi:/// -u ldap
10299 be/4 ldap 0.00 B/s 11.64 K/s 0.00 % 0.00 % slapd -h
ldap:///~/ ldapi:/// -u ldap
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
4 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
6 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
7 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/1]
8 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/1]
syncrepl consumer retry and sync questions
by Nick Milas
Hi,
Concluding from the documentation (I think it is not clear on this), retry="60 +" should mean that the consumer retries indefinitely every 60 seconds.
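(For comparison, my reading of the <interval> <# of retries> pairs from slapd.conf(5): a value such as retry="60 10 300 +" would mean retry every 60 seconds 10 times, then every 300 seconds indefinitely - please correct me if I have that wrong.)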
However, I am observing that consumers with the config:
syncrepl rid=111
provider=ldaps://ldap.example.com
tls_reqcert=never
type=refreshAndPersist
retry="60 +"
searchbase="dc=example,dc=com"
schemachecking=off
bindmethod=simple
binddn="cn=Manager,dc=example,dc=com"
credentials="secret"
retry the first time after 60 seconds, but then retry again only after 2 hours.
(Note: the consumers lose their connection to the provider at 18:47:59, as the provider is stopped for a few minutes to be upgraded to 2.4.pre30.)
Feb 25 18:47:59 consumer111 slapd[2482]: do_syncrep2: rid=111 (-1) Can't
contact LDAP server
Feb 25 18:47:59 consumer111 slapd[2482]: do_syncrepl: rid=111 rc -1 retrying
Feb 25 18:48:59 consumer111 slapd[2482]: do_syncrep2: rid=111
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
Feb 25 20:48:59 consumer111 slapd[2482]: do_syncrep2: rid=111 (-1) Can't
contact LDAP server
Feb 25 20:48:59 consumer111 slapd[2482]: do_syncrepl: rid=111 rc -1 retrying
Feb 25 20:48:59 consumer111 slapd[2482]: connection_read(35): no connection!
Feb 25 20:48:59 consumer111 slapd[2482]: connection_read(35): no connection!
Feb 25 20:49:59 consumer111 slapd[2482]: syncrepl_entry: rid=111
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
...sync continues...
Note that there were a bunch of edits on the provider between 19:47 and 19:55, but they were not propagated to this consumer until 20:49.
About my other two consumers (which have identical configuration, but bind with a non-root user), about which I wrote in my earlier email: it seems they were also stuck in some kind of delay (for two hours too?), but I didn't wait and restarted slapd:
The first:
Feb 25 18:47:59 consumer222 slapd2.4[2140]: do_syncrep2: rid=222 (-1)
Can't contact LDAP server
Feb 25 18:47:59 consumer222 slapd2.4[2140]: do_syncrepl: rid=222 rc -1
retrying
Feb 25 18:48:59 consumer222 slapd2.4[2140]: do_syncrep2: rid=222
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
Feb 25 20:18:04 consumer222 slapd2.4[2141]: slapd starting
Feb 25 20:18:04 consumer222 slapd2.4[2141]: syncrepl_entry: rid=222
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
...sync continues...
and the second:
This one was stopped and upgraded to 2.4.30b3 (as I named it - it was pre-30).
At 18:54:45 (and again at 19:29) it was started after the upgrade; then, as it was not syncing, it was restarted at 20:02:55, after which it synced.
Feb 25 18:47:59 consumer333 slapd[2357]: do_syncrep2: rid=333 (-1) Can't
contact LDAP server
Feb 25 18:47:59 consumer333 slapd[2357]: do_syncrepl: rid=333 rc -1 retrying
Feb 25 18:48:59 consumer333 slapd[2357]: do_syncrep2: rid=333
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
Feb 25 18:52:39 consumer333 slapd[2357]: daemon: shutdown requested and
initiated.
Feb 25 18:52:39 consumer333 slapd[2357]: slapd shutdown: waiting for 1
operations/tasks to finish
Feb 25 18:52:40 consumer333 slapd[2357]: slapd stopped.
Feb 25 18:52:40 consumer333 slapd[23371]: [OK] OpenLDAP stopped after 1
seconds
Feb 25 18:52:40 consumer333 slapd[23372]: [INFO] no data backup done
Feb 25 18:52:40 consumer333 slapd[23373]: [INFO] Halting OpenLDAP
replication...
Feb 25 18:52:40 consumer333 slapd[23374]: [INFO] no replica found in
configuration, aborting stopping slurpd
Feb 25 18:54:27 consumer333 slapd[23425]: [INFO] Using
/etc/default/slapd for configuration
Feb 25 18:54:27 consumer333 slapd[23430]: [INFO] Halting OpenLDAP...
Feb 25 18:54:27 consumer333 slapd[23431]: [INFO] can't read PID file, to
stop slapd try: /etc/init.d/slapd forcestop
Feb 25 18:54:27 consumer333 slapd[23432]: [INFO] Halting OpenLDAP
replication...
Feb 25 18:54:27 consumer333 slapd[23433]: [INFO] no replica found in
configuration, aborting stopping slurpd
Feb 25 18:54:44 consumer333 slapd[23458]: [INFO] Using
/etc/default/slapd for configuration
Feb 25 18:54:44 consumer333 slapd[23463]: [INFO] Launching OpenLDAP
configuration test...
Feb 25 18:54:45 consumer333 slapd[23486]: [OK] OpenLDAP configuration
test successful
Feb 25 18:54:45 consumer333 slapd[23496]: [INFO] No db_recover done
Feb 25 18:54:45 consumer333 slapd[23497]: [INFO] Launching OpenLDAP...
Feb 25 18:54:45 consumer333 slapd[23498]: [OK] File descriptor limit set
to 1024
Feb 25 18:54:45 consumer333 slapd[23499]: @(#) $OpenLDAP: slapd 2.4.X
(Feb 25 2012 18:38:31) $
swbuilder@vdev.example.com:/home/swbuilder/rpmbuild/BUILD/openldap-2.4.30b3/servers/slapd
Feb 25 18:54:45 consumer333 slapd[23500]: slapd starting
Feb 25 18:54:45 consumer333 slapd[23500]: do_syncrep2: rid=333
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
Feb 25 18:54:46 consumer333 slapd[23505]: [OK] OpenLDAP started
Feb 25 19:29:14 consumer333 slapd[2318]: [INFO] Using /etc/default/slapd
for configuration
Feb 25 19:29:14 consumer333 slapd[2323]: [INFO] Launching OpenLDAP
configuration test...
Feb 25 19:29:15 consumer333 slapd[2346]: [OK] OpenLDAP configuration
test successful
Feb 25 19:29:15 consumer333 slapd[2356]: [INFO] No db_recover done
Feb 25 19:29:15 consumer333 slapd[2357]: [INFO] Launching OpenLDAP...
Feb 25 19:29:15 consumer333 slapd[2358]: [OK] File descriptor limit set
to 1024
Feb 25 19:29:15 consumer333 slapd[2359]: @(#) $OpenLDAP: slapd 2.4.X
(Feb 25 2012 18:38:31) $
swbuilder@vdev.example.com:/home/swbuilder/rpmbuild/BUILD/openldap-2.4.30b3/servers/slapd
Feb 25 19:29:15 consumer333 slapd[2360]: slapd starting
Feb 25 19:29:15 consumer333 slapd[2360]: do_syncrep2: rid=333
LDAP_RES_INTERMEDIATE - REFRESH_DELETE
Feb 25 19:29:16 consumer333 slapd[2365]: [OK] OpenLDAP started
Feb 25 20:02:42 consumer333 slapd[2943]: [INFO] Using /etc/default/slapd
for configuration
Feb 25 20:02:42 consumer333 slapd[2948]: [INFO] Halting OpenLDAP...
Feb 25 20:02:42 consumer333 slapd[2360]: daemon: shutdown requested and
initiated.
Feb 25 20:02:42 consumer333 slapd[2360]: slapd shutdown: waiting for 1
operations/tasks to finish
Feb 25 20:02:42 consumer333 slapd[2360]: slapd stopped.
Feb 25 20:02:43 consumer333 slapd[2952]: [OK] OpenLDAP stopped after 1
seconds
Feb 25 20:02:43 consumer333 slapd[2953]: [INFO] No data backup done
Feb 25 20:02:55 consumer333 slapd[2983]: [INFO] Using /etc/default/slapd
for configuration
Feb 25 20:02:55 consumer333 slapd[2988]: [INFO] Launching OpenLDAP
configuration test...
Feb 25 20:02:55 consumer333 slapd[3011]: [OK] OpenLDAP configuration
test successful
Feb 25 20:02:55 consumer333 slapd[3021]: [INFO] No db_recover done
Feb 25 20:02:55 consumer333 slapd[3022]: [INFO] Launching OpenLDAP...
Feb 25 20:02:55 consumer333 slapd[3023]: [OK] File descriptor limit set
to 1024
Feb 25 20:02:55 consumer333 slapd[3024]: @(#) $OpenLDAP: slapd 2.4.X
(Feb 25 2012 18:38:31) $
swbuilder@vdev.example.com:/home/swbuilder/rpmbuild/BUILD/openldap-2.4.30b3/servers/slapd
Feb 25 20:02:55 consumer333 slapd[3025]: slapd starting
Feb 25 20:02:55 consumer333 slapd[3025]: syncrepl_message_to_entry:
rid=333 DN: dc=bridge-o.admin,dc=example.com,ou=dns1,dc=example,dc=com,
UUID: a483715a-56bd-102f-9a9d-87b8bcc59e1e
Feb 25 20:02:55 consumer333 slapd[3025]: syncrepl_entry: rid=333
LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
...sync continues...
Questions:
1. What does the message "connection_read(35): no connection!" signify?
(On this consumer -only- I also get occasional messages, like the
following:
Feb 27 22:25:56 consumer111 slapd[2939]: connection_input:
conn=1001 deferring operation: binding
)
2. Why did retrying stop for two hours on the running consumers (111, 222)?
3. Why did the upgraded consumer (333) not sync? It was started after the provider and was running when the changes were made on the provider.
Thanks,
Nick