dITStructureRules/nameForms in subschema subentry for informational purposes
by Michael Ströder
Hi!
Discussed this very briefly with Howard at LDAPcon 2007 based on an idea
of Steve:
Support for dITStructureRules and nameForms is still on OpenLDAP's TODO
list. In the meantime, slapd could accept definitions for both in
slapd.conf and simply pass them on to schema-aware LDAP clients for
informational purposes, without enforcing them. This would serve the
same function as the rootDSE <file> directive in slapd.conf.
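As a sketch of what such unenforced definitions might look like in
slapd.conf, assuming hypothetical nameform and ditstructurerule
directives that take RFC 4512 description syntax (the directive names,
OID and element names below are invented for illustration):

```
# Hypothetical, purely informational schema elements (not enforced)
nameform ( 1.3.6.1.4.1.4203.666.11.999 NAME 'exampleNameForm'
    OC inetOrgPerson MUST ( uid ) )
ditstructurerule ( 1 NAME 'exampleStructureRule'
    FORM exampleNameForm )
```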
Opinions?
Ciao, Michael.
--
Michael Ströder
E-Mail: michael(a)stroeder.com
http://www.stroeder.com
14 years, 2 months
managing OpenLDAP / back-config
by Ralf Haferkamp
With the great features that back-config provides for configuring
OpenLDAP servers at runtime, it seems logical to start thinking about
tools that could help leverage those features.
Currently, to manage an OpenLDAP server through back-config, you have
the option of using either a generic LDAP browser (JXplorer, Apache LDAP
Studio, web2ldap), the OpenLDAP command line tools (ldapsearch,
ldapmodify, ...), or homegrown software built on one of the available
LDAP APIs. I think it would be helpful to have some more sophisticated
management tools (command line and/or GUI).
In order to get there, I think it could be helpful to create an API
dedicated to providing an easy way to access the OpenLDAP configuration
(databases, overlays, schema, access control, ...). This API could then be used to create
different flavors of management tools.
I have not yet spent much time thinking about the design of such an API,
nor about the programming language I'd use to implement something like
this (Python, C, C++, ?). I'd first like to get a feeling for how others
think about this, and whether anybody is interested in collaborating on
such an API. So please feel free to reply with your comments and
suggestions :)
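As a rough illustration of one small building block such an API might
offer, here is a sketch (names are invented, not an existing API) that
splits an olcDatabase RDN value, as found under cn=config, into its
ordering index and backend type:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper for a config-access API: split an olcDatabase
 * RDN value such as "{1}bdb" into its ordering index and backend type.
 * Returns 0 on success, -1 on malformed input. */
static int parse_olc_database(const char *rdn_value, int *index,
                              char *backend, size_t blen)
{
    const char *close;

    if (rdn_value == NULL || rdn_value[0] != '{')
        return -1;
    close = strchr(rdn_value, '}');
    if (close == NULL)
        return -1;
    *index = atoi(rdn_value + 1);             /* "{1}bdb" -> 1 */
    snprintf(backend, blen, "%s", close + 1); /* -> "bdb" */
    return 0;
}
```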
--
regards,
Ralf
14 years, 11 months
multiple server certificates
by Hallvard B Furuseth
Would it be hard to make different listener addresses present
different server certificates, signed by different CA certificates?
(I'm sure I've asked someone about that before, but I don't remember if
it was on this list.)
--
Hallvard
15 years, 1 month
How to get rid of sys_errlist and sys_nerr?
by Michael B Allen
Hello,
I'm trying to run something linked against libldap on a glibc-2.5
system, but on a glibc-2.3 Linux system the loader complains about not
having the right versions of sys_errlist and sys_nerr.
Why does libldap need these symbols?
$ objdump -T libldap-2.3.so.0 | grep sys_
0000000000000000 DO *UND* 0000000000000004 GLIBC_2.4 sys_nerr
0000000000000000 DO *UND* 0000000000000420 GLIBC_2.4 sys_errlist
From looking at config.log, configure finds strerror just fine.
Is there any way to get rid of these symbols?
Of all the symbols in all the libraries that make up my project these
are the only two GLIBC_2.4 or above symbols. If I can get rid of them,
my code should run on the glibc-2.3 system.
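For reference, strerror() is the portable way to get the same error text
without touching the versioned glibc data symbols at all. A minimal
sketch (which code path libldap actually takes is configure's decision):

```c
#include <string.h>

/* Look up error text via strerror() instead of indexing
 * sys_errlist[]/checking sys_nerr; this avoids any dependency on the
 * GLIBC_2.4-versioned data symbols. */
static const char *err_text(int err)
{
    const char *s = strerror(err);
    return s ? s : "unknown error";
}
```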
Mike
15 years, 1 month
Re: commit: ldap/servers/slapd/back-monitor cache.c
by Hallvard B Furuseth
ando(a)OpenLDAP.org writes:
> Modified Files:
> cache.c 1.32 -> 1.33
> avoid potential deadlock?
I've never been sure of the back-monitor locking code...
Do volatile entries get cached?
--
Hallvard
15 years, 1 month
OpenLDAP booth at OpenExpo, 25/26 May 2008, Karlsruhe, Germany
by Michael Ströder
Hi!
It seems my application to run an OpenLDAP booth at http://openexpo.de/
in Karlsruhe, Germany, on 25/26 May 2008 was accepted. As requested,
I'll send them the OpenLDAP worm logo to be put on their web page.
Volunteers are welcome to help at the booth. It's right before Linuxtag
in Berlin, Germany.
Ciao, Michael.
15 years, 1 month
[Fwd: Re: deadlocks in OpenLDAP]
by Howard Chu
I haven't looked at this part of back-monitor. Someone else care to respond?
-------- Original Message --------
Subject: Re: deadlocks in OpenLDAP
Date: Mon, 28 Apr 2008 04:30:23 -0400
From: Yin Wang <yinw(a)umich.edu>
To: Howard Chu <hyc(a)symas.com>
Hi Howard,
Our study shows a possible deadlock in OpenLDAP 2.4.8.
Hope you could help explain.
- monitor_cache_get at servers/slapd/back-monitor/cache.c:163
waiting for mp_mutex while holding mi_cache_mutex
- monitor_cache_release at servers/slapd/back-monitor/cache.c:366
waiting for mi_cache_mutex while holding mp_mutex
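The reported pattern is the classic AB-BA cycle: two code paths taking
the same pair of mutexes in opposite orders. A minimal sketch of that
shape (the mutex names mirror the report, but these functions are
illustrative stand-ins, not the actual back-monitor code), along with
the usual fix of imposing one fixed order:

```c
#include <pthread.h>

/* Illustrative stand-ins only -- not the real back-monitor code. */
static pthread_mutex_t mi_cache_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mp_mutex       = PTHREAD_MUTEX_INITIALIZER;

/* Reported monitor_cache_get order: cache lock, then entry lock. */
static void get_path(void)
{
    pthread_mutex_lock(&mi_cache_mutex);
    pthread_mutex_lock(&mp_mutex);        /* A then B */
    pthread_mutex_unlock(&mp_mutex);
    pthread_mutex_unlock(&mi_cache_mutex);
}

/* monitor_cache_release reportedly takes them as B then A, which
 * closes the cycle. Re-ordering it to match get_path removes the
 * possibility of deadlock: */
static void release_path_fixed(void)
{
    pthread_mutex_lock(&mi_cache_mutex);  /* A then B again */
    pthread_mutex_lock(&mp_mutex);
    pthread_mutex_unlock(&mp_mutex);
    pthread_mutex_unlock(&mi_cache_mutex);
}
```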
We have not been able to verify this in a real running
environment, which could be difficult, if not impossible.
Therefore your comments would be extremely valuable.
Any help would be greatly appreciated.
Yin
======= At 2008-01-12, 19:48:38 you wrote: =======
>Yin Wang wrote:
>> Hi Howard,
>>
>> Sorry I am replying to a very old email below that
>> you sent last April. Terence is a colleague
>> of mine and we are still working on the project.
>> I hope to understand the problem better.
>>
>> When you said "While we can control the order of
>> lock acquisition in the OpenLDAP code, we have no
>> control over it in the BerkeleyDB layer", do you
>> mean the (possible) deadlock comes from BerkeleyDB
>> or it is because of the interaction of OpenLDAP
>> and BerkeleyDB? If it is the latter case and I
>> assume BerkeleyDB is deadlock-free, I don't understand
>> why using such a library could cause deadlocks.
>>
>> Your help would be greatly appreciated.
>
>Since you say you're working on a research project, you shouldn't assume
>anything. You should do some actual research. The BerkeleyDB lock system is
>fully described in their documentation. Read it.
>
>If you have questions that aren't addressed by the BerkeleyDB docs you can ask
>those, but I don't have time to answer questions about things that are already
>well documented.
>>
>> Yin Wang
>> Research Assistant
>> EECS Department, University of Michigan
>>
>>
>>> -----Original Message-----
>>> From: Howard Chu [mailto:hyc@openldap.org]
>>> Sent: Thursday, April 19, 2007 5:32 PM
>>> To: Kelly, Terence P
>>> Cc: Project(a)openldap.org
>>> Subject: Re: deadlocks in OpenLDAP
>>>
>>> Kelly, Terence P wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm a researcher with interests
>>>> in concurrent programming issues. I'm
>>>> writing with a question about deadlocks
>>>> in OpenLDAP code.
>>>>
>>>> Based on the OpenLDAP issue tracking system,
>>>> I gather that deadlocks involving circular
>>>> wait for locks have occurred or have been
>>>> suspected in slapd.
>>>>
>>>> In principle it's possible to avoid deadlock
>>>> by consistently acquiring locks in a defined
>>>> order, but in practice this can be inconvenient
>>>> or impossible.
>>>>
>>>> Can you give me some intuition for why it's
>>>> hard to prevent deadlocks in slapd? Has
>>>> your experience with deadlocks in OpenLDAP
>>>> software given you any generic insights
>>>> into deadlock and how (not) to avoid it?
>>>> Would your insights apply to other
>>>> software in addition to slapd?
>>>>
>>>> Many thanks in advance for any wisdom you
>>>> can share! Long editorials and brain dumps
>>>> are particularly welcome.
>>>>
>>>> -- Terence
>>> For OpenLDAP the problem is that there are two layers of locking systems
>>> in use - the OpenLDAP code and the BerkeleyDB code. While we can control
>>> the order of lock acquisition in the OpenLDAP code, we have no control
>>> over it in the BerkeleyDB layer. As such, the usual approach of strictly
>>> ordering locks doesn't work here.
>
>--
> -- Howard Chu
> Chief Architect, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
= = = = = = = = = = = = = = = = = = = =
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
15 years, 1 month
LDAP transactions
by Howard Chu
Just thinking aloud right now about what's needed here.
Basically all incoming requests participating in a transaction are queued
until the Commit is received. Aside from basic parsing/validation, no other
processing is performed until then.
If an Abort is received, the queued requests are simply discarded.
When the Commit request is received, all of the queued requests are performed
sequentially by a single thread and the results are gathered. If any
individual request fails, any preceding completed requests must be rolled back.
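The commit step described above can be sketched roughly like this (an
illustration of the scheme, not slapd code; the types and function names
are invented): apply queued operations in order, and on the first
failure undo the already-applied ones in reverse order.

```c
#include <stddef.h>

/* Opaque queued operation; a real implementation would carry the
 * parsed request. */
typedef struct QueuedOp QueuedOp;

typedef int  (*op_apply_fn)(QueuedOp *op);
typedef void (*op_rollback_fn)(QueuedOp *op);

/* Apply ops[0..n-1] sequentially. On the first failure, roll back the
 * completed ones, newest first, and report failure. */
static int commit_queue(QueuedOp **ops, size_t n,
                        op_apply_fn apply, op_rollback_fn rollback)
{
    size_t i, j;

    for (i = 0; i < n; i++) {
        if (apply(ops[i]) != 0) {
            for (j = i; j-- > 0; )
                rollback(ops[j]);
            return -1;   /* transaction failed */
        }
    }
    return 0;            /* all applied; ready to commit */
}
```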
Since I'm considering this for both back-bdb and back-ndb, it appears we're
going to need some transaction-specific hooks to be added to the BackendInfo
structure.
1) start a TXN, get a TXN handle
2) end a TXN: Commit or Abort
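In hypothetical form, those two hooks might look something like the
following (the type and field names here are invented for illustration,
not an actual slapd API; the handle is opaque so each backend can wrap
its native transaction object, e.g. a DB_TXN for back-bdb):

```c
#include <stddef.h>

/* Stand-in for slapd's BackendDB; illustrative only. */
typedef struct BackendDB BackendDB;

/* Opaque per-transaction handle owned by the backend. */
typedef void *BackendTxnHandle;

/* 1) start a TXN and return a handle;
 * 2) end a TXN with Commit (commit != 0) or Abort (commit == 0). */
typedef int (*BI_op_txn_begin)(BackendDB *bd, BackendTxnHandle *out);
typedef int (*BI_op_txn_end)(BackendDB *bd, BackendTxnHandle h, int commit);

/* Hooks as they might be added to the BackendInfo structure. */
typedef struct BackendTxnHooks {
    BI_op_txn_begin bi_txn_begin;
    BI_op_txn_end   bi_txn_end;
} BackendTxnHooks;
```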
For back-bdb the tricky part is exposing the updates atomically in the caches.
I think the fact that entry caching uses BDB locks helps somewhat; we can keep
entry cache items locked until the overall transaction completes. But for
Aborts we'll either need to keep a copy of the previous cache info, or just
discard it all. For now, discarding it all seems simpler.
For back-ndb things are currently easy since this backend does no caching of
its own. As such, once the backend issues a Commit or Abort request, there's
no further (backend) work to be done.
It's tempting to think about this for backglue, but we'd need a cross-database
lock manager of some kind for detecting deadlocks. That implies that we really
need an LDAP-level lock request, to handle distributed locking, and that the
Transaction handling ought to be built on top of that. Currently the
Transaction layer says nothing at all about locking, and it's up to the
underlying LDAP database to take care of it.
I guess another approach would just be to have backglue fully serialize all
transactions; if only one is outstanding at any time there can be no deadlocks.
This brings up a question about whether slapd in general should fully
serialize them. I was thinking, at the least, that we should only allow one
active transaction per connection, though that was mainly a matter of
convenience. Thoughts?
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
15 years, 1 month