N-way multi-master replication
by Adrien Futschik
I am testing multi-master replication with two masters.
If I stop one of them and don't restart it before the retry interval has run out,
replication stops working (it looks like it has simply given up).
Example:
- M1 & M2 are synchronised.
- stop M2
- add entries to M1
- wait for retry to fail (retry="5 5 300 5")
- restart M2
- previously added entries to M1 are replicated to M2
- modifications (attribute changes) made on M2 are not replicated to M1
It looks like M1 -> M2 is OK and M2 -> M1 is broken.
Is there a way to restart replication once the configured retries have been
exhausted? The only solution I found was to restart M1; after that everything
seems to work fine.
I could configure the number of retries to be very high, but I was
wondering if there is a way to restart synchronisation online.
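For reference, by "very high" I mean something like the following in the
consumer's syncrepl stanza (the provider URL is a placeholder; if I read
slapd.conf(5) correctly, a "+" as the last retry count means retry indefinitely):
syncrepl rid=001
    provider=ldap://m2.example.com
    ...
    retry="5 5 300 +"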
Thanks in advance,
Adrien
12 years, 1 month
Cache/Proxy/Replicating a distant, slow LDAP server
by Morten Mikkelsen
Hi.
I work at a rather large company whose rather slow LDAP server
hurts the performance of a wiki server I am using.
I am playing with the idea of setting up a local cache or replica of the
company LDAP server to reduce the time spent performing LDAP
lookups, but as I am quite new to the world of LDAP and OpenLDAP, I
am having a hard time getting the setup right.
I would like to set up a server that does not impose any requirements
on the existing (slow) server at all. I only need a read-only server -
updates are made on the slow 'master' - and only a few percent of the
records are interesting to our wiki.
Having looked at caching and proxying, I have settled on
replication. As the master is off-limits except for LDAP queries
(no access to slurpd-style replication logs), syncrepl seems to be the way to go.
I just can't get my head around the configuration.
The master LDAP has the following structure (as I see it)
o=company.com -> ou=commondirectory -> c=xx
Under commondirectory, all countries (such as 'dk', 'us' and a whole
bunch of others) are represented with the employees residing in them
listed below.
o=company.com -> ou=companygroups -> ou=groupmembers contains the groups
that are used for controlling access to the wiki pages.
So what I need to have on my replicated server is: The groups and
people in the countries 'us' and 'dk'.
To start off easy, I am trying to replicate just c=dk first - I've tried
adding the following to /etc/ldap/slapd.conf:
syncrepl rid=111
provider=ldap://ldap.company.com:389
type=refreshOnly
interval=00:12:00:00
searchbase="c=dk,ou=commondirectory,o=company.com"
scope=one
updatedn="c=dk,ou=commondirectory,o=company.com"
This makes the server start without error messages, but when I query with
ldapsearch -x -h 127.0.0.1 -b "c=dk,ou=commondirectory,o=company.com"
'(objectClass=*)' I get no result, only "result: 32 No such object".
What am I doing wrong?
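For comparison, my guess at what a fuller consumer setup should look like is
below - a database whose suffix covers the search base, plus bind credentials
for the provider (binddn, credentials and paths here are placeholders, not my
real values):
database    bdb
suffix      "c=dk,ou=commondirectory,o=company.com"
rootdn      "cn=admin,c=dk,ou=commondirectory,o=company.com"
directory   /var/lib/ldap
index       objectClass eq
syncrepl    rid=111
            provider=ldap://ldap.company.com:389
            type=refreshOnly
            interval=00:12:00:00
            searchbase="c=dk,ou=commondirectory,o=company.com"
            scope=sub
            bindmethod=simple
            binddn="cn=reader,o=company.com"
            credentials=secret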
--
/Morten
12 years, 1 month
N-way replication "dn_callback : entries have identical CSN"
by Adrien Futschik
Hi!
I am still working on N-way multi-master replication. I am using
refreshAndPersist mode with two masters.
My first test was to inject, in parallel, 2 x 500 entries into the first master and
another 2 x 500 entries into the second master.
The goal is to see how replication behaves when massive adds are performed
on both masters.
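Roughly, the kind of load I mean looks like this (the bind DN and password are
placeholders; each LDIF file holds 500 entries):
ldapadd -x -H ldap://m1.example.com -D "cn=admin,o=edf,c=fr" -w secret -f m1-batch1.ldif &
ldapadd -x -H ldap://m1.example.com -D "cn=admin,o=edf,c=fr" -w secret -f m1-batch2.ldif &
ldapadd -x -H ldap://m2.example.com -D "cn=admin,o=edf,c=fr" -w secret -f m2-batch1.ldif &
ldapadd -x -H ldap://m2.example.com -D "cn=admin,o=edf,c=fr" -w secret -f m2-batch2.ldif &
wait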
When I used OpenLDAP 2.4.11, I lost entries. The 1000 entries were
successfully added on each master, but a few entries were lost in the
replication process. I didn't find any "errors" in the log, even with debug
on.
The only suspect messages I had were like this:
=> bdb_idl_insert_key: c_put id failed: DB_LOCK_DEADLOCK: Locker killed to
resolve a deadlock (-30995)
=> bdb_dn2id_add 0x543: parent (ou=clients,o=edf,c=fr) insert failed: -30995
I then switched to OpenLDAP 2.4.14 and no longer had the problem (same
configuration). I guess this was a bug in 2.4.11.
Still, I had a few of these messages in the log:
conn=9 op=1 => bdb_dn2id_add
dn="cn=M2client2(a)laposte.net,ou=clients,o=edf,c=fr" ID=0xb: put failed:
DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock -30995
but I also got a few of these:
dn_callback : entries have identical CSN
cn=M2client20(a)laposte.net,ou=clients,o=edf,c=fr
20090217155418.854085Z#000000#002#000000
syncrepl_entry: rid=004 be_search (0)
syncrepl_entry: rid=004 cn=M2client20(a)laposte.net,ou=clients,o=edf,c=fr
syncrepl_entry: rid=004 entry unchanged, ignored
(cn=M2client20(a)laposte.net,ou=clients,o=edf,c=fr)
do_syncrep2: cookie=rid=004,sid=002
syncrepl_entry: rid=004 LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_ADD)
Can anyone explain why? Is this normal?
The dn_callback message comes from the log file of the master on which
"cn=M2client20(a)laposte.net,ou=clients,o=edf,c=fr" was inserted.
Thanks in advance,
Adrien Futschik
12 years, 1 month
slapd connection_read: no connection; tcp time_wait state
by James Bagley
Hi!
This is an interesting one... I have an OpenLDAP 2.4.12 server as a
consumer in a two-node cluster. Its sole function is to answer queries
from our mail hub for recipient validation. We see about 50-300 queries
per second and occasional spikes.
Unfortunately, our mail hub appliances (vendor name left out to protect
the guilty) are somewhat inefficient in LDAP connection handling and
open a new TCP connection for every single LDAP query. They do this
even when there are multiple recipients in one SMTP session (boggles the
mind!). A percentage of these connections don't get closed properly, and
I get the following error in the syslog:
slapd[23108]: connection_read(18): no connection!
The reason is that the connections are in a time_wait state because they
were not closed properly. They go away in 60 seconds, but with the load
this server gets we continuously have several hundred TCP connections in
a time_wait state and a system log full of the above errors.
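For reference, a rough count of those sockets at any given moment can be taken
with something like:
netstat -an | grep TIME_WAIT | wc -l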
I'm attaching two packet captures:
time_wait.cap - filtered a single complete tcp session that ended with
the port in a time_wait condition.
no_time_wait.cap - control capture for reference. This session closed
properly.
I can't claim to have the greatest understanding of the 3-way/4-way TCP
open/close handshakes, but one thing I did notice, consistently among
the sessions that end in time_wait, is that the FIN-ACK is initiated by
the server. Possibly I'm reading it wrong, but doesn't the client
normally initiate the close, with the server doing a passive close? In
theory, then, the server should never have to wait for the client.
Could someone more knowledgeable than me tell me why the server might
initiate the active close?
thanks,
-james
12 years, 1 month
LDAP Group
by Paul bob
I have created a group called readonly1 on the LDAP server and added the LDAP
user "jimmy" as a member of that group. When he tries to access the system, he
does not get the LDAP group's access. When I instead create a local group
readonly1 and add "jimmy" to it, everything works fine.
dn: cn=readonly1,ou=Group,dc=ndmacb,dc=local
objectClass: posixGroup
objectClass: top
cn: readonly1
gidNumber: 550
memberUid: jimmy
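For what it's worth, a quick client-side sanity check (assuming nss_ldap or
similar is doing the group lookups) would be:
getent group readonly1
id jimmy
If the LDAP group is visible to NSS, readonly1 should show up in both.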
Am I missing any configuration on the LDAP side? Thanks for your help!
-
PB
12 years, 2 months
connection.c segfault with new sasl mechanism
by Francesco Grossi ITQL
Dear team
We hope you can help us discover where or how we are going wrong.
We are writing a new SASL mechanism for third-party strong authentication,
and we are in the final delivery step.
We are currently on OpenLDAP 2.3.38.
We use a strong-authentication server to carry out the authorization part of
the mech, and our back end (SASL client) is OpenLDAP 2.3.38.
On the client side we have pam_ldap + ssh.
The SASL libraries are Cyrus 2.1.22.
The OS is Linux, Red Hat / CentOS 4.7 final.
During our functional and stress tests everything works fine, but once in a
while OpenLDAP crashes,
and this happens only, but not always, under a certain condition.
Before explaining that condition we would like to define some terms:
1) straight authentication session: a simple (non-interactive)
authentication, either successful or not
(right or wrong passcode)
2) interactive authentication session: a complex (interactive)
authentication with iterated TCP/IP requests such as
next-token or new-PIN
Crash scenario:
one or more straight authentication scripts are running in parallel,
emulating parallel ssh authentication sessions
with either right or wrong passcodes;
plus one interactive authentication session:
in the error-originating session the whole user back-and-forth reaches
the authentication server, that is to say the next-token or new-PIN procedure
runs to completion:
the user supplies the next token or the new PIN (twice), and the strong
authentication is fully accomplished and reported by the authentication manager.
What is wrong is that this makes OpenLDAP crash (on average 2 attempts out of 3).
SASL-wise, our server-step mechanism closes fine, and we expect its
dispose step to be called by OpenLDAP (via sasl_server_end).
That does not happen, because OpenLDAP has crashed in the meantime.
Again, this happens only, but not always, when the stress scripts are running.
Does it smell of thread concurrency and safety?
Our various use-case tests almost always run successfully when done
individually, or in parallel but manually (real people operating).
Conversely, when the stress tests are running, the problem arises very often,
but not always.
We have had the OpenLDAP crash on both Linux and Solaris installations, though
on Linux the crash happens some steps earlier than on Solaris.
On Linux we have had the opportunity to rebuild OpenLDAP with some debug
instrumentation inside.
What came out is that connection.c is
crashing at:
op->o_tmpfree( op->orb_edn.bv_val, op->o_tmpmemctx );
In both the successful and the failing cases the values passed and the function
address look alike.
op->orb_edn.bv_val holds the distinguished name of the user who has just logged
in (op->orb_edn.bv_len is correctly set to its length).
Based on the function name we guess slapd is about to free the storage
allocated for the DN.
On Solaris 9 we had the same segfault symptom, but we have a few more lines in
the log, one of which shows an almost empty DN (!!!) consisting of just uid="".
GDB has the following from the linux core dump:
gdb -c core.5785
GNU gdb Red Hat Linux (6.3.0.0-1.159.el4rh) Copyright 2004 Free Software
Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu".
Core was generated by `/usr/sbin/slapd -h ldap:///'.
Program terminated with signal 11, Segmentation fault.
#0 0x00268934 in ?? ()
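A more symbolic trace could presumably be obtained by loading the slapd binary
together with the core (assuming a non-stripped build), e.g.:
gdb /usr/sbin/slapd core.5785
(gdb) bt
(gdb) thread apply all bt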
THE FOLLOWING CODE IS AN EXCERPT OF THE FINAL SERVER_MECH_STEP FUNCTION OF
THE MECHANISM, WHICH RUNS BEHIND SASL_SERVER_STEP ONLY WHEN THE CRASH HAPPENS:
static int mymech_server_mech_step(void *conn_context __attribute__((unused)),
                                   sasl_server_params_t *params,
                                   const char *clientin,
                                   unsigned clientinlen,
                                   const char **serverout,
                                   unsigned *serveroutlen,
                                   sasl_out_params_t *oparams)
{
    server_context_t *text = (server_context_t *) conn_context;
    *serverout = NULL;
    *serveroutlen = 0;
    oparams->doneflag = 1;
    oparams->mech_ssf = 0;
    oparams->maxoutbuf = 0;
    oparams->encode_context = NULL;
    oparams->encode = NULL;
    oparams->decode_context = NULL;
    oparams->decode = NULL;
    oparams->param_version = 0;
    oparams->authid = text->authid;
    return SASL_OK;
}
SLAPD.LOG OF THE CRASH CASE (WITH OUR ADDED SYSLOGS):
Feb 4 12:09:15 yavcentos slapd: SASL [conn=48] Debug
MMYMECH====================== SERVER mymech_server_mech_step END ==
Feb 4 12:09:15 yavcentos mymech: sasl.c label slap_sasl_authorize
Feb 4 12:09:15 yavcentos slapd: SASL proxy authorize [conn=48]:
authcid="alfredo@yavcentos" authzid="alfredo@yavcentos"
Feb 4 12:09:15 yavcentos slapd: conn=48 op=4 BIND
authcid="alfredo@yavcentos" authzid="alfredo@yavcentos"
Feb 4 12:09:15 yavcentos slapd: SASL Authorize [conn=48]: proxy
authorization allowed authzDN=""
Feb 4 12:09:15 yavcentos mymech: sasl.c label1
Feb 4 12:09:15 yavcentos mymech: result.c label2
Feb 4 12:09:15 yavcentos slapd: send_ldap_sasl: err=0 len=-1
Feb 4 12:09:15 yavcentos mymech: result.c label6a
Feb 4 12:09:15 yavcentos mymech: result.c label61
Feb 4 12:09:15 yavcentos mymech: result.c label61a (tra qui)
Feb 4 12:09:15 yavcentos mymech: result.c label61b
Feb 4 12:09:15 yavcentos mymech: result.c label61b2
op->o_callback->sc_response=0x805e1fc op=0x8926cb0 rs=0x17551a0
Feb 4 12:09:15 yavcentos mymech: connection.c label1
Feb 4 12:09:15 yavcentos mymech: connection.c label1a
Feb 4 12:09:15 yavcentos mymech: connection.c label11a
Feb 4 12:09:15 yavcentos mymech: connection.c label111a
Feb 4 12:09:15 yavcentos mymech: connection.c label12a
Feb 4 12:09:15 yavcentos mymech: connection.c label12a
op->o_tmpfree=0x809c3e8 op->orb_edn.bv_val=uid=alfredo@yavcentos,ou=
people,dc=my-domain,dc=com op->orb_edn.bv_len=51 op->o_tmpmemctx=0x8926c10
(slab malloc context)
END
SLAPD.LOG OF A SUCCESSFUL LOGIN CASE (WITH OUR ADDED SYSLOGS):
Feb 4 12:13:13 yavcentos mymech: sasl.c label slap_sasl_authorize
Feb 4 12:13:13 yavcentos slapd: SASL proxy authorize [conn=3]:
authcid="alfredo@yavcentos" authzid="alfredo@yavcentos"
Feb 4 12:13:14 yavcentos slapd: conn=3 op=4 BIND
authcid="alfredo@yavcentos" authzid="alfredo@yavcentos"
Feb 4 12:13:14 yavcentos slapd: SASL Authorize [conn=3]: proxy
authorization allowed authzDN=""
Feb 4 12:13:14 yavcentos mymech: sasl.c label1
Feb 4 12:13:14 yavcentos mymech: result.c label2
Feb 4 12:13:14 yavcentos slapd: send_ldap_sasl: err=0 len=-1
Feb 4 12:13:14 yavcentos mymech: result.c label6a
Feb 4 12:13:14 yavcentos mymech: result.c label61
Feb 4 12:13:14 yavcentos mymech: result.c label61a (tra qui)
Feb 4 12:13:14 yavcentos mymech: result.c label61b
Feb 4 12:13:14 yavcentos mymech: result.c label61b2
op->o_callback->sc_response=0x805e1fc op=0x86667b8 rs=0x51271a0
Feb 4 12:13:14 yavcentos mymech: connection.c label1
Feb 4 12:13:14 yavcentos mymech: connection.c label1a
Feb 4 12:13:14 yavcentos mymech: connection.c label11a
Feb 4 12:13:14 yavcentos mymech: connection.c label111a
Feb 4 12:13:14 yavcentos mymech: connection.c label12a
Feb 4 12:13:14 yavcentos mymech: connection.c label12a
op->o_tmpfree=0x809c3e8 op->orb_edn.bv_val=uid=alfredo@yavcentos,ou=
people,dc=my-domain,dc=com op->orb_edn.bv_len=51 op->o_tmpmemctx=0x86660c0
(slab malloc context)
Feb 4 12:13:14 yavcentos mymech: connection.c label13a
Feb 4 12:13:14 yavcentos mymech: connection.c label14a
Feb 4 12:13:14 yavcentos mymech: connection.c label15a
Feb 4 12:13:14 yavcentos mymech: connection.c label151a
Feb 4 12:13:14 yavcentos mymech: connection.c label151b
Feb 4 12:13:14 yavcentos mymech: connection.c label2
Feb 4 12:13:14 yavcentos slapd: conn=3 op=4 BIND
dn="uid=alfredo@yavcentos,ou=people,dc=my-domain,dc=com" mech=MYMECH sasl_
ssf=0 ssf=0
Feb 4 12:13:14 yavcentos slapd: do_bind: SASL/MYMECH bind:
dn="uid=alfredo@yavcentos,ou=people,dc=my-domain,dc=com" sasl_ss
f=0
Feb 4 12:13:14 yavcentos mymech: connection.c label3
Feb 4 12:13:14 yavcentos mymech: result.c label61c
Feb 4 12:13:14 yavcentos mymech: result.c label61d
Feb 4 12:13:14 yavcentos mymech: result.c label61d2
Feb 4 12:13:14 yavcentos mymech: result.c label61e
Feb 4 12:13:14 yavcentos slapd: send_ldap_response: msgid=5 tag=97 err=0
Feb 4 12:13:14 yavcentos mymech: result.c label1
Feb 4 12:13:14 yavcentos mymech: result.c label11
Feb 4 12:13:14 yavcentos mymech: result.c label12
Feb 4 12:13:14 yavcentos mymech: result.c label13
Feb 4 12:13:14 yavcentos mymech: result.c label14
Feb 4 12:13:14 yavcentos mymech: result.c label14e
Feb 4 12:13:14 yavcentos mymech: result.c label14f
Feb 4 12:13:14 yavcentos mymech: result.c label14g
Feb 4 12:13:14 yavcentos mymech: result.c label14h
Feb 4 12:13:14 yavcentos mymech: result.c label15
Feb 4 12:13:14 yavcentos mymech: result.c label16
Feb 4 12:13:14 yavcentos mymech: result.c label18
Feb 4 12:13:14 yavcentos mymech: result.c label19
Feb 4 12:13:14 yavcentos mymech: result.c label20
Feb 4 12:13:14 yavcentos mymech: result.c label21
Feb 4 12:13:14 yavcentos mymech: result.c label6b e qui2
Feb 4 12:13:14 yavcentos mymech: daemon: activity on 1 descriptor
Feb 4 12:13:15 yavcentos slapd: daemon: activity on:
Feb 4 12:13:15 yavcentos slapd: 13r
Feb 4 12:13:15 yavcentos slapd:
Feb 4 12:13:15 yavcentos slapd: daemon: read active on 13
Feb 4 12:13:15 yavcentos slapd: connection_get(13)
Feb 4 12:13:15 yavcentos slapd: connection_get(13): got connid=3
Feb 4 12:13:15 yavcentos slapd: connection_read(13): checking for input on
id=3
Feb 4 12:13:15 yavcentos slapd: daemon: epoll: listen=7 active_threads=0
tvp=NULL
Feb 4 12:13:15 yavcentos slapd: daemon: epoll: listen=8 active_threads=0
tvp=NULL
Feb 4 12:13:15 yavcentos slapd: conn=3 op=4 RESULT tag=97 err=0 text=
Feb 4 12:13:15 yavcentos mymech: sasl.c label4
Feb 4 12:13:15 yavcentos slapd: <== slap_sasl_bind: rc=0
Feb 4 12:13:15 yavcentos mymech: bind.c do_bind label1
Feb 4 12:13:15 yavcentos slapd: do_bind
Feb 4 12:13:15 yavcentos slapd: conn=3 op=5 BIND anonymous mech=implicit
ssf=0
Feb 4 12:13:15 yavcentos slapd: >>> dnPrettyNormal: <>
Feb 4 12:13:15 yavcentos slapd: <<< dnPrettyNormal: <>, <>
Feb 4 12:13:15 yavcentos slapd: do_bind: version=3 dn="" method=128
Feb 4 12:13:15 yavcentos slapd: conn=3 op=5 BIND dn="" method=128
Feb 4 12:13:15 yavcentos mymech: sasl.c label slap_sasl_reset
Feb 4 12:13:15 yavcentos slapd: send_ldap_result: conn=3 op=5 p=3
Feb 4 12:13:15 yavcentos slapd: send_ldap_result: err=0 matched="" text=""
Feb 4 12:13:15 yavcentos mymech: result.c label22a
Feb 4 12:13:15 yavcentos mymech: result.c label61
Feb 4 12:13:15 yavcentos mymech: result.c label61a (tra qui)
Feb 4 12:13:15 yavcentos mymech: result.c label61b
Feb 4 12:13:15 yavcentos mymech: result.c label61b2
op->o_callback->sc_response=0x805e1fc op=0x8668158 rs=0x2afc1a0
Feb 4 12:13:15 yavcentos mymech: connection.c label1
Feb 4 12:13:15 yavcentos mymech: connection.c label3
Feb 4 12:13:15 yavcentos mymech: result.c label61c
Feb 4 12:13:15 yavcentos mymech: result.c label61d
Feb 4 12:13:15 yavcentos mymech: result.c label61d2
Feb 4 12:13:15 yavcentos mymech: result.c label61e
Feb 4 12:13:15 yavcentos slapd: send_ldap_response: msgid=6 tag=97 err=0
Feb 4 12:13:15 yavcentos mymech: result.c label1
Feb 4 12:13:15 yavcentos mymech: result.c label11
Feb 4 12:13:15 yavcentos mymech: result.c label12
Feb 4 12:13:15 yavcentos mymech: result.c label13
Feb 4 12:13:15 yavcentos mymech: result.c label14
Feb 4 12:13:15 yavcentos mymech: result.c label14e
Feb 4 12:13:15 yavcentos mymech: result.c label14f
Feb 4 12:13:15 yavcentos mymech: result.c label14g
Feb 4 12:13:15 yavcentos mymech: result.c label14h
Feb 4 12:13:15 yavcentos mymech: result.c label15
Feb 4 12:13:15 yavcentos mymech: result.c label16
Feb 4 12:13:15 yavcentos mymech: result.c label18
Feb 4 12:13:15 yavcentos mymech: result.c label19
Feb 4 12:13:16 yavcentos mymech: result.c label20
Feb 4 12:13:16 yavcentos mymech: result.c label21
Feb 4 12:13:16 yavcentos mymech: result.c label22b
Feb 4 12:13:16 yavcentos mymech: result.c label5 e qui1
Feb 4 12:13:16 yavcentos slapd: conn=3 op=5 RESULT tag=97 err=0 text=
Feb 4 12:13:16 yavcentos slapd: do_bind: v3 anonymous bind
Feb 4 12:13:15 yavcentos mymech: daemon: activity on 1 descriptor
Feb 4 12:13:16 yavcentos slapd: daemon: activity on:
Feb 4 12:13:16 yavcentos slapd: 13r
Feb 4 12:13:16 yavcentos slapd:
Feb 4 12:13:16 yavcentos slapd: daemon: read active on 13
Feb 4 12:13:16 yavcentos slapd: connection_get(13)
Feb 4 12:13:16 yavcentos slapd: connection_get(13): got connid=3
Feb 4 12:13:16 yavcentos slapd: connection_read(13): checking for input on
id=3
Feb 4 12:13:16 yavcentos slapd: ber_get_next on fd 13 failed errno=0
(Success)
Feb 4 12:13:16 yavcentos slapd: connection_read(13): input error=-2 id=3,
closing.
Feb 4 12:13:16 yavcentos slapd: connection_closing: readying conn=3 sd=13
for close
Feb 4 12:13:16 yavcentos slapd: connection_close: conn=3 sd=-1
Feb 4 12:13:16 yavcentos mymech: sasl.c label slap_sasl_close
Feb 4 12:13:16 yavcentos mymech: sasl.c label slap_sasl_log
Feb 4 12:13:16 yavcentos slapd: SASL [conn=-1] Debug: MMYMECHSERVER
mymech_server_mech_dispose
Feb 4 12:13:16 yavcentos mymech: sasl.c label slap_sasl_log
Feb 4 12:13:16 yavcentos slapd: SASL [conn=-1] Error: MMYMECHSERVER
mymech_server_mech_dispose: start - receives *conn_co
ntext=0x8666a10 *utils=0x8666188
Feb 4 12:13:16 yavcentos mymech: sasl.c label slap_sasl_log
Feb 4 12:13:17 yavcentos slapd: SASL [conn=-1] Error: MMYMECHSERVER
mymech_server_mech_dispose
Feb 4 12:13:17 yavcentos slapd: daemon: removing 13
Feb 4 12:13:17 yavcentos slapd: conn=3 fd=13 closed (connection lost)
Feb 4 12:13:17 yavcentos slapd: daemon: epoll: listen=7 active_threads=0
tvp=NULL
Feb 4 12:13:17 yavcentos slapd: daemon: epoll: listen=8 active_threads=0
tvp=NULL
Feb 4 12:13:17 yavcentos slapd: daemon: activity on 1 descriptor
Feb 4 12:13:17 yavcentos slapd: daemon: activity on:
Feb 4 12:13:17 yavcentos slapd:
Feb 4 12:13:17 yavcentos slapd: daemon: epoll: listen=7 active_threads=0
tvp=NULL
Feb 4 12:13:17 yavcentos slapd: daemon: epoll: listen=8 active_threads=0
tvp=NULL
END
Compile and link options of mymech:
gcc -I ..... -v -Wall -W -O2 -MT mymech.lo -MD -MP -MF .deps/mymech.Tpo -c
mymech.c -fPIC -DPIC -o mymech.lo
gcc -shared mymech.lo mymech_init.lo plugin_common.lo -lpthread -lcrypt
-lresolv -lc -Wl,-soname -Wl,libmymech.so.2 -o .libs/libmymech.so.2.0.22
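If it helps, we can also rebuild the plugin with debug info and without
optimisation so that the core dumps are easier to read - same flags otherwise,
just swapping -O2 for -g -O0, e.g.:
gcc -g -O0 -I ..... -v -Wall -W -MT mymech.lo -MD -MP -MF .deps/mymech.Tpo -c
mymech.c -fPIC -DPIC -o mymech.lo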
Would you have a clue or a thought for us about some further checks to do on
either our code or our compile options
(maybe particularly regarding thread safety)?
Would you suggest a thread-safe programming strategy?
Many, many thanks
Francesco Grossi
12 years, 2 months
Question about subtree renaming
by Philipp Foeckeler
Hi,
Could someone please explain to me the prerequisites for subtree rename/move
support in slapd (LDAP ModifyDN on non-leaf objects)?
Currently I use slapd 2.2.6-37.19 with the bdb backend, which does not support this.
What backend do I need to support subtree renames/moves?
Which slapd versions support it?
Do I need any special configuration for this?
I understand that the slapo-refint overlay would be a good idea; any other
overlays I should use then? (An example of the operation I mean is sketched below.)
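Just to be concrete (the names here are made up), by a subtree move I mean
feeding ldapmodify an LDIF like this, which moves ou=sales and everything
below it under a new parent:
dn: ou=sales,o=oldparent,dc=example,dc=com
changetype: modrdn
newrdn: ou=sales
deleteoldrdn: 1
newsuperior: o=newparent,dc=example,dc=com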
Thank you,
Philipp
12 years, 2 months
OpenLDAP coredumps on startup (Solaris 10)
by Daniel Hoffend
Hello
I'm setting up an OpenLDAP directory server (2.4.13), including a second one
as a backup/failover partner. After compiling, installing and
configuring everything (database, sync, schema, etc.) and importing the
basic LDAP layout (ou=Users, ou=Groups, etc.), I wanted to use this
directory for user authentication over LDAP.
I switched user/group lookups over using the 'ldapclient' command and
modified /etc/nsswitch.conf so that passwd and group refer to "files
ldap".
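The relevant nsswitch.conf entries now look roughly like this:
passwd: files ldap
group:  files ldap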
Everything seems to work: 'getent passwd' and 'getent group' list
my LDAP users and groups. But when I try to restart the slapd server, it
sometimes crashes with a core dump.
----------------------------------------------------------------------
# /usr/local/libexec/slapd -d 65535 -u openldap -g openldap
@(#) $OpenLDAP: slapd 2.4.13 (Jan 30 2009 12:02:48) $
root@ldapserver:/usr/local/src/openldap-2.4.13/servers/slapd
ldap_pvt_gethostbyname_a: host=ldapserver, r=0
daemon_init: <null>
daemon_init: listen on ldap:///
daemon_init: 1 listeners to open...
ldap_url_parse_ext(ldap:///)
daemon: listener initialized ldap:///
daemon_init: 2 listeners opened
ldap_create
Bus Error (core dumped)
----------------------------------------------------------------------
Not every time - sometimes several times in a row, sometimes only after a
second start. I have no clue what it could be.
In the LDAP log file I found the following two lines:
----------------------------------------------------------------------
Jan 30 16:26:56 ldapserver slapd[9494]: [ID 555073 local4.error] tid= 1:
multiple threads per connection not supported
Jan 30 16:26:56 ldapserver slapd[9494]: [ID 555073 local4.error] tid= 1:
multiple threads per connection not supported
----------------------------------------------------------------------
I ran the slapd server under "truss" to see where the server
starts to core dump.
----------------------------------------------------------------------
# truss /usr/local/libexec/slapd -d 65535 -u openldap -g openldap
[...]
open("/etc/nsswitch.conf", O_RDONLY|O_LARGEFILE) = 9
fcntl(9, F_DUPFD, 0x00000100) Err#22 EINVAL
read(9, " #\n # C o p y r i g h".., 1024) = 1024
read(9, " g u r e i t o u t ".., 1024) = 245
read(9, 0xFF092400, 1024) = 0
close(9) = 0
fstat(3, 0xFFBFCAE8) = 0
time() = 1233329411
getpid() = 9512 [9511]
putmsg(3, 0xFFBFC1A0, 0xFFBFC194, 0) = 0
open("/var/run/syslog_door", O_RDONLY) = 9
door_info(9, 0xFFBFC0D8) = 0
getpid() = 9512 [9511]
door_call(9, 0xFFBFC0C0) = 0
close(9) = 0
fstat(3, 0xFFBFCB88) = 0
time() = 1233329411
getpid() = 9512 [9511]
putmsg(3, 0xFFBFC240, 0xFFBFC234, 0) = 0
open("/var/run/syslog_door", O_RDONLY) = 9
door_info(9, 0xFFBFC178) = 0
getpid() = 9512 [9511]
door_call(9, 0xFFBFC160) = 0
close(9) = 0
Incurred fault #5, FLTACCESS %pc = 0x0008E1FC
siginfo: SIGBUS BUS_ADRALN addr=0x00000191
Received signal #10, SIGBUS [default]
siginfo: SIGBUS BUS_ADRALN addr=0x00000191
----------------------------------------------------------------------
It looks like the server starts to crash right after reading nsswitch.conf. I
changed the following line in nsswitch.conf, and the server now starts
fine without any further problems (even 20 times in a row).
----------------------------------------------------------------------
Before: group files ldap
After: group files
----------------------------------------------------------------------
Another thing: if the server manages to start up without problems, it never
crashes again. It only happens sometimes during the initial startup.
I would be happy if anyone could help me or point out what I could
adjust. If needed, I can provide more information.
--
Best regards
Daniel Hoffend
12 years, 2 months
run time link options
by Brett @Google
Is there any likelihood of -R${PREFIX}/lib making it into the default build
options, to help mitigate system library mismatches?
Likewise, a --with-bdb-home=/my/berkeley/home option, which would result in
-L/my/berkeley/home/lib and/or -R/my/berkeley/home/lib being added to
the LDFLAGS variable (-L in the case of a static build, -L and -R in the
case of a dynamic build) and -I/my/berkeley/home/include being added to
CPPFLAGS?
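At the moment the equivalent has to be passed by hand at configure time,
something along these lines (paths are illustrative):
env CPPFLAGS="-I/my/berkeley/home/include" \
    LDFLAGS="-L/my/berkeley/home/lib -R/my/berkeley/home/lib" \
    ./configure --prefix=${PREFIX}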
I guess -R may not be available on every platform, but maybe the presence of
-R is detectable?
12 years, 2 months