OpenLDAP 2.5
by Howard Chu
With 2.4.21 out, and hopefully stable enough to promote to the next Stable
release, it's time to feature-freeze 2.4 and prepare for the 2.5 branch. As I
already announced to the OpenLDAP-Committers, we're also planning to switch
from CVS to GIT in mid-January. Commits for 2.5 will begin after we've settled
into GIT.
We haven't really laid out a formal roadmap for 2.5 yet, but I think most of
it has been discussed here or in Development ITSs already.
I would like to be able to resolve all outstanding Development ITSs - we will
either implement them or reject/close them. There are 42 outstanding at the
moment.
Likewise for all outstanding ITSs in Software Bugs - many of them have been
deferred because a proper fix would require invasive changes to large parts of
the code base. There are 26 outstanding. With 2.5 beginning we are free to
make these large scale changes.
We should also walk thru the Software Enhancement requests and decide which to
accept and which to reject. Currently there are 37 outstanding.
I also have a number of specific areas I want to see worked on; some of these
are included in the above ITSs but I'll outline them here:
syncrepl
config - this is pretty unwieldy already; syncrepl needs to be moved
out of the slapd core and into an overlay. That will give us a lot more
flexibility in configuration while also eliminating a lot of redundant parsing code.
suffixmassage - we at the very least need to be able to point a consumer
at some non-homogeneous suffix in the provider. We may go for complete
librewrite support as well, although at this point I don't see as strong a need.
config
TLS certs and keys should be stored as LDAP attributes, not pointers to
filesystem locations. This is a prereq to using some of the dynamic cert
generation features of the CA overlay. (This can be troublesome as there may
not be clean APIs for reading certs from memory in all of the TLS APIs we
support.)
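For illustration, OpenSSL can parse a certificate held in an attribute value
directly from memory via a BIO; a minimal sketch, assuming a PEM-encoded
value. GnuTLS has comparable in-memory import routines; whether every TLS
library we support does is exactly the concern noted above.

/* Sketch: parse a certificate from an LDAP attribute value held in
 * memory, rather than from a filesystem path, using OpenSSL. */
#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

X509 *cert_from_value(const char *val, int len)
{
    BIO *mem = BIO_new_mem_buf((void *)val, len);
    X509 *cert;

    if (mem == NULL)
        return NULL;
    /* NULLs: no preallocated X509, no password callback */
    cert = PEM_read_bio_X509(mem, NULL, NULL, NULL);
    BIO_free(mem);
    return cert;
}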
Disabling individual config attribute values and entries. At the moment
I'm thinking of adding an ";x-disabled" tag to those values.
back-mdb
Using a single-level store for Entries will impact all of the schema
engine as well. I think the simplest solution here is going to be using an
mmap'd file for all of the schema elements.
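As a rough sketch of the mapping itself, assuming a flat schema file whose
name and internal layout are still to be designed:

/* Sketch: map a schema file into memory so schema elements can live
 * at stable addresses alongside the single-level entry store. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void *map_schema(const char *path, size_t *lenp)
{
    struct stat st;
    void *base;
    int fd = open(path, O_RDWR);

    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }
    base = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
    close(fd);
    if (base == MAP_FAILED)
        return NULL;
    *lenp = st.st_size;
    return base;
}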
The actual design of back-mdb still needs to be defined in several areas.
The single-level store approach exposes us to some new failure modes that the
current multi-level backends don't have. (E.g., corruptions due to bad RAM /
wild pointer writes are very likely to get persisted on disk, implicitly.)
The solution I'm considering is based on a mirroring strategy. Every
database will be stored twice on disk: once in the file that is actively
mmap'd into the process, and once in a write-only file. On every intentional
update of a memory page, we will also store a checksum of the page, and
manually write the page to the mirror. If we detect a checksum failure on any
in-memory page we can still retrieve a valid copy from the mirror file. This
of course doubles our potential I/O load, but I don't believe it's any worse
than the load from performing write-ahead logging on a traditional database.
(And yes, mirroring will take the place of writing transaction log files.)
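A minimal sketch of the update and recovery paths under this scheme; the
page size, the checksum (FNV-1a here as a stand-in for a real CRC), and the
metadata layout are all placeholders:

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

#define PGSIZE 4096

/* hypothetical per-page metadata kept alongside the mapped file */
typedef struct page_meta {
    uint32_t checksum;
} page_meta;

static uint32_t page_checksum(const void *pg)
{
    const uint8_t *p = pg;
    uint32_t h = 2166136261u;   /* FNV-1a; a real CRC would do better */
    size_t i;
    for (i = 0; i < PGSIZE; i++)
        h = (h ^ p[i]) * 16777619u;
    return h;
}

/* on every intentional update: checksum the page, then mirror it;
 * this write replaces a write-ahead log record */
int page_update(int mirror_fd, const void *pg, off_t off, page_meta *meta)
{
    meta->checksum = page_checksum(pg);
    return pwrite(mirror_fd, pg, PGSIZE, off) == PGSIZE ? 0 : -1;
}

/* on a checksum mismatch (bad RAM, wild pointer write), recover the
 * page from the mirror file */
int page_verify(int mirror_fd, void *pg, off_t off, const page_meta *meta)
{
    if (page_checksum(pg) == meta->checksum)
        return 0;               /* page is intact */
    if (pread(mirror_fd, pg, PGSIZE, off) != PGSIZE)
        return -1;              /* mirror unreadable */
    return page_checksum(pg) == meta->checksum ? 1 : -1;
}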
Some of these same considerations apply to the schema storage, but not
entirely. At runtime, the schema is effectively read-only. When we do dynamic
schema changes thru cn=config, all other threads are suspended. For the mmap
purposes, we can mark all of the schema pages as read-only during runtime, and
only make them read-write when cn=config is actually trying to perform an
update. As such, the only sticky issue is dealing with changes made to the
back-config internal files by plain text editors and such.
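The mmap side of this is simple; a sketch, assuming the schema region from
the mapping above:

/* Sketch: keep schema pages read-only at runtime; flip them writable
 * only while cn=config applies an update (all other threads are
 * suspended at that point), then seal them again. */
#include <sys/mman.h>

int schema_seal(void *base, size_t len)
{
    return mprotect(base, len, PROT_READ);
}

int schema_unseal(void *base, size_t len)
{
    return mprotect(base, len, PROT_READ | PROT_WRITE);
}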
These are the things I'm interested in. But as always, this Project is driven
forward by the particular interests of each individual contributor. If you
have other ideas you want to pursue, speak up.
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
back-config delete support (syncprov overlay)
by Ralf Haferkamp
Hi,
while taking up some loose ends in my work on delete support for back-
config (enabled with -DSLAP_CONFIG_DELETE) I wondered how we should deal
with the deletion of the syncprov overlay when there are active
refreshAndPersist sessions. What error code should we send when closing
such a connection? To me LDAP_UNAVAILABLE sounds like the best choice.
LDAP_UNWILLING_TO_PERFORM might also be ok.
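For illustration, a minimal sketch against slapd's internal reply API (the
function name is hypothetical; Operation, SlapReply, and send_ldap_result()
are the existing internals, so this is not standalone code): teardown would
complete each active persist operation with a final result carrying the
chosen code before dropping its connection.

/* sketch, not standalone: assumes slapd internal headers */
static void
syncprov_abort_psearch( Operation *op, SlapReply *rs )
{
    rs->sr_type = REP_RESULT;
    rs->sr_err = LDAP_UNAVAILABLE;      /* 52 */
    rs->sr_text = "syncprov overlay deleted";
    send_ldap_result( op, rs );
}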
--
Ralf
SASL OTP and syncrepl
by manu@netbsd.org
Hello
After exchanging a few private messages with Pierangelo Masarati, I just
posted ITS#6475:
> When binding using SASL OTP to a replica, the bind works, but the
> cmusaslsecretOTP attribute is modified on the replica and fails to be
> propagated to the master. On the next modification, the master will
> overwrite the replica's updated cmusaslsecretOTP value.
>
> Here is a script that exhibits the behaviour:
> ftp://ftp.openldap.org/incoming/ldapotp.tgz
> That requires a SASL-enabled OpenLDAP, with the OTP plugin installed. The
> PATH in run.sh will probably need to be adjusted.
The problem is in sasl_auxprop_store(), which bypasses the replication
process. The easiest fix, it seems to me, is to send a referral to the
master on any SASL OTP bind. Any other ideas?
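To illustrate the referral idea, a sketch against slapd/liblber internals;
the function name and provider URL are hypothetical, and hooking this into
the SASL bind path ahead of sasl_auxprop_store() is the real work:

/* sketch, not standalone: assumes slapd internal headers */
static void
otp_bind_refer( Operation *op, SlapReply *rs )
{
    /* ber_bvarray_add copies the berval (pointer + length), so a
     * static string is safe here */
    struct berval ref = BER_BVC( "ldap://master.example.com/" );
    BerVarray refs = NULL;

    ber_bvarray_add( &refs, &ref );
    rs->sr_ref = refs;
    rs->sr_err = LDAP_REFERRAL;
    send_ldap_result( op, rs );
    rs->sr_ref = NULL;
}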
--
Emmanuel Dreyfus
http://hcpnet.free.fr/pubz
manu@netbsd.org
syncprov mmr propagation
by Howard Chu
Currently we still send too many updates in a fully connected mesh. The
provider knows not to resend an update to the originating server, and
not to resend an update to the server that just relayed the update to
it. We should also filter out updates that we know another server is likely to
receive from an alternate path.
E.g. in a 4-way mesh:
1 - 2
|\ /|
| X |
|/ \|
3 - 4
When server 1 receives an update from a client, it will propagate it to each
of 2, 3, and 4. These servers will also attempt to propagate it to each other:
2 to 3,4; 3 to 2,4; and 4 to 2,3. If these further updates are not quite
simultaneous, it is possible that one or two redundant updates will be pruned
out by the existing code. But that's just based on luck and variations in
server and network load.
Once any node has received any updates from a given sid, it should be able to
remember which neighbors have sent it identical updates, and filter them out
from further propagation attempts.
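A sketch of the bookkeeping this would take, with all names hypothetical:
per originating sid, remember which neighbors have already delivered the
current CSN, and consult that mask before propagating.

#include <stdint.h>
#include <string.h>

#define MAX_SIDS 4096   /* syncrepl sids are 12-bit */

typedef struct seen_entry {
    char     csn[64];   /* last CSN seen from this sid */
    uint32_t from_mask; /* neighbors (by rid) that delivered it */
} seen_entry;

static seen_entry seen[MAX_SIDS];

/* record that neighbor 'rid' (< 32 in this sketch) delivered 'csn'
 * originated by 'sid'; returns nonzero if that neighbor already sent it */
int update_seen( int sid, int rid, const char *csn )
{
    seen_entry *se = &seen[sid];

    if ( strcmp( se->csn, csn ) != 0 ) {
        /* new CSN from this sid: reset the delivery mask */
        strncpy( se->csn, csn, sizeof(se->csn) - 1 );
        se->from_mask = 0;
    }
    if ( se->from_mask & (1u << rid) )
        return 1;
    se->from_mask |= 1u << rid;
    return 0;
}

/* skip propagation back to any neighbor that already sent this CSN */
int should_send( int sid, int rid )
{
    return !( seen[sid].from_mask & (1u << rid) );
}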
Of course, this may turn out to be a bad idea; if one of the nodes loses one
connection, it might stop receiving updates altogether even though other
servers could forward to it. I guess we can't protect against this case
without at least some redundant messages still being generated.
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/