contextCSN of subordinate syncrepl DBs
by Rein Tollevik
I've been trying to figure out why syncrepl used on a backend that is
subordinate to a glue database with the syncprov overlay should save the
contextCSN in the suffix of the glue database rather than the suffix of
the backend where syncrepl is used. But all I come up with are reasons
why this should not be the case. So, unless anyone can enlighten me as
to what I'm missing, I suggest that this be changed.
The problem with the current design is that it makes it impossible to
reliably replicate more than one subordinate db from the same remote
server, as there are now race conditions where one of the subordinate
backends could save an updated contextCSN value that is picked up by the
other before it has finished its synchronization. An example of a
configuration where more than one subordinate db replicated from the
same server might be necessary is the central master described in my
previous posting in
http://www.openldap.org/lists/openldap-devel/200806/msg00041.html
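For reference, here is a minimal sketch of the kind of configuration I
mean (suffixes, rids, hostnames and the database type are made up for
illustration, and ordinary housekeeping directives such as directory,
rootdn and bind credentials are omitted): two subordinate backends each
replicate their own subtree from the same remote provider, glued under a
superior database carrying syncprov.

# subordinate backends, each with its own syncrepl against the same provider
database    hdb
suffix      "ou=people,dc=example,dc=com"
subordinate
syncrepl    rid=001
            provider=ldap://master.example.com
            searchbase="ou=people,dc=example,dc=com"
            type=refreshAndPersist

database    hdb
suffix      "ou=groups,dc=example,dc=com"
subordinate
syncrepl    rid=002
            provider=ldap://master.example.com
            searchbase="ou=groups,dc=example,dc=com"
            type=refreshAndPersist

# superior (glue) database; with the current design both syncrepl
# instances above store their contextCSN here, in dc=example,dc=com
database    hdb
suffix      "dc=example,dc=com"
overlay     syncprov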
My idea as to how this race condition could be verified was to add
enough entries to one of the backends (while the consumer was stopped)
to make it possible to restart the consumer after the first backend had
saved the updated contextCSN but before the second had finished its
synchronization. But I was able to reproduce it by simply adding or
deleting an entry in one of the backends before starting the consumer.
Far too often, the backend without any changes was able to pick up and
save the updated contextCSN from the producer before syncrepl on the
second backend had fetched its initial value. I.e., it started with an
updated contextCSN and didn't receive the changes that had taken place
on the producer. If each syncrepl instance stored the values in the
suffix of its own database, they wouldn't interfere with each other
like this.
There is a similar problem in syncprov, as it must use the lowest
contextCSN value (with a given SID) saved by the syncrepl backends
configured within the subtree where syncprov is used. But to do that it
also needs to distinguish the contextCSN values of each syncrepl
backend, which it can't do when they all save them in the glue suffix.
This also implies that syncprov must ignore contextCSN updates from
syncrepl until all syncrepl backends have saved a value, and that
syncprov on the provider must send newCookie sync info messages when it
updates its contextCSN value and the changed entry isn't being
replicated to a consumer, i.e., as outlined in the message referred to
above.
Neither of these changes should interfere with ordinary multi-master
configurations where syncrepl and syncprov are both used on the same
(glue) database.
I'll volunteer to implement and test the necessary changes if this is
the right solution. But to know whether my analysis is correct or not I
need feedback. So, comments please?
--
Rein Tollevik
Basefarm AS
contextCSN interaction between syncrepl and syncprov
by Rein Tollevik
The remaining errors and race condition that test058 demonstrates cannot
be solved unless syncrepl is changed to always store the contextCSN in
the suffix of the database where it is configured, not the suffix of its
glue database as it does today.
Assuming serverID 0 is reserved for the single-master case, syncrepl and
syncprov can in that case only be configured within the same database
context if syncprov is a pure forwarding server. I.e., it will not
update any CSN value, and syncrepl has no need to fetch any values from it.
In the multi-master case it is only the contextCSN whose SID matches the
current serverID that syncprov maintains; the others are all received by
syncrepl. So, the only time syncrepl should need an updated CSN from
syncprov is when it is about to present it to its peer, i.e., when it
initiates a refresh phase. Actually, a race condition that would render
the state of the database undetermined could occur if syncrepl fetched
an updated CSN from syncprov during the initial refresh phase. So, it
should be sufficient to read the contextCSN values from the database
before a new refresh phase is initiated, independent of whether syncprov
is in use or not.
Syncrepl will receive updates to the contextCSN value with its own SID
from its peers, at least with ITS#5972 and ITS#5973 in place. I.e., the
normal ignoring of updates tagged with a too-old contextCSN value will
continue to work. It should also be safe to ignore all updates tagged
with a contextCSN or entryCSN value whose SID is the current server's
non-zero serverID, provided a complete refresh cycle is known to have
taken place. I.e., when a contextCSN value with the current non-zero
serverID was read from the database before the refresh phase started, or
after the persistent phase has been entered.
The state of the database will be undetermined unless an initial refresh
(i.e., starting from an empty database or CSN set) has been run to
completion. I cannot see how this can be avoided, and as far as I know
that is the case today as well. It might be worth mentioning in the
documentation, though (unless it already is).
Syncprov must continue to monitor the contextCSN updates from syncrepl.
When it receives updates destined for the suffix of the database where
it itself is configured, it must replace any CSN value whose SID matches
its own non-zero serverID with the value it manages itself (which should
be greater than or equal to the value syncrepl tried to store, unless
something is seriously wrong). Updates to "foreign" contextCSN values
(i.e., those with a SID not matching the current non-zero serverID)
should be imported into the set of contextCSN values syncprov itself
maintains. Syncprov could also short-circuit the contextCSN update and
delay it to its own checkpoint. I'm not sure what effect the checkpoint
feature has today, when syncrepl constantly updates the contextCSN.
Syncprov must, when syncrepl updates the contextCSN in the suffix of a
subordinate DB, update its own knowledge of the "foreign" CSNs to be the
*lowest* CSN with any given SID stored in all the subordinate DBs (where
syncrepl is configured). And no update must take place unless a
contextCSN value has been stored in *all* the syncrepl-enabled
subordinate DBs. Any values matching the current non-zero serverID
should be updated in this case too, but a new value should probably not
be inserted.
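To make the aggregation rule concrete, here is a rough sketch
(illustrative only, not slapd code; the names are made up):

/* Illustrative sketch only, not slapd code. contextCSN values are
 * fixed-width strings, so strcmp() orders them chronologically. For a
 * given SID, syncprov should publish the *lowest* value stored by the
 * syncrepl-enabled subordinate DBs, and nothing at all until every such
 * DB has stored a value for that SID.
 */
#include <string.h>

/* db_csn[i] is the contextCSN subordinate DB i has stored for this SID,
 * or NULL if it hasn't stored one yet. Returns the value to publish,
 * or NULL if no update must take place yet. */
static const char *
aggregate_csn_for_sid( const char *db_csn[], int ndbs )
{
    const char *min = NULL;
    int i;

    for ( i = 0; i < ndbs; i++ ) {
        if ( db_csn[i] == NULL )
            return NULL;        /* some subordinate DB has no value yet */
        if ( min == NULL || strcmp( db_csn[i], min ) < 0 )
            min = db_csn[i];    /* keep the lowest (oldest) CSN */
    }
    return min;
}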
These changes should (unless I'm completely lost, that is) create a
cleaner interface between syncrepl and syncprov without harming the
current multi-master configurations, and make asymmetric multi-master
configurations like the one in test058 work. Comments please?
Rein
slapd tcp buffers support (ITS#6234)
by Quanah Gibson-Mount
This was added to the OpenLDAP 2.4.18 release; however, the DEVEL ifdef
was never removed. The intent was to make it usable with 2.4.18 and later.
I'm guessing the ifdef should be removed now in HEAD, at which point I'll
sync it to RE24?
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Fwd: LLVMdev Digest, Vol 64, Issue 54
by Howard Chu
I was just thinking about trying out LLVM; looks like someone already got there.
-------- Original Message --------
Date: Fri, 30 Oct 2009 17:40:43 +0800
From: Nan Zhu <zhunansjtu(a)gmail.com>
Subject: [LLVMdev] I have built a whole-program bitcode file for
openldap-2.19
To: llvmdev(a)cs.uiuc.edu
Hi all,
I have written a wrapper which wraps gcc/g++, ld and their LLVM
counterparts; it invokes either the native or the LLVM compiler and
linker according to the options it receives. After replacing the native
tools with my wrapper in the libtool script, I just typed
make CC=wrapper AC_CFLAGS=-emit-llvm
and got a .bc file of slapd and the other tools in the clients directory.
The attachment is the whole-program bitcode file of slapd.
(My platform is Fedora 11, x86_64 + E6300.)
The method can also be applied to many other GNU projects, but there may
be some other problems when I apply it to BIND; I'm checking that.
Thank you
Re: commit: ldap/libraries/libldap sasl.c
by Hallvard B Furuseth
Still seems wrong: Caller passes len=500, generic_sb_sasl_write()
returns 0 or -1 with LDAP_PVT_SASL_PARTIAL_WRITE, caller collects 300
more bytes and passes len=800, then generic_sb_sasl_write() ignores
not 500 but 800 bytes and tells caller that they were all written.
Either make p->flags a ber_slen_t holding the number of incoming bytes to
ignore, or, after p->ops->encode() = success and write() = <partial write
or non-fatal error>, return len-1 with LDAP_PVT_SASL_PARTIAL_WRITE.
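A rough sketch of the first variant, with invented names (this is not
the actual sasl.c code, just the bookkeeping I mean):

/* Sketch only, not the libldap code; names invented. The point: remember
 * exactly how many caller bytes a previous call consumed with encode(),
 * and skip only those on retry, even if the caller's buffer has since
 * grown from 500 to 800 bytes.
 */
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

struct sb_sasl_sketch {
    long   bytes_to_ignore; /* role proposed for p->flags as a ber_slen_t */
    char   enc[65536];      /* encoded output not yet fully written */
    size_t enc_len, enc_off;
};

/* Try to write out pending encoded data; 0 if all flushed, -1 otherwise. */
static int
flush_pending( struct sb_sasl_sketch *p, int fd )
{
    while ( p->enc_off < p->enc_len ) {
        ssize_t n = write( fd, p->enc + p->enc_off, p->enc_len - p->enc_off );
        if ( n <= 0 )
            return -1;
        p->enc_off += (size_t) n;
    }
    p->enc_len = p->enc_off = 0;
    return 0;
}

static ssize_t
sasl_write_sketch( struct sb_sasl_sketch *p, int fd,
                   const char *buf, size_t len )
{
    size_t skipped = 0;

    /* Flush what an earlier call already encoded; nothing new is
     * consumed until that has gone out. */
    if ( flush_pending( p, fd ) < 0 ) {
        errno = EAGAIN;     /* stand-in for LDAP_PVT_SASL_PARTIAL_WRITE */
        return -1;
    }

    /* Skip exactly the caller bytes already consumed by encode() --
     * 500, not the 800 the buffer may have grown to. */
    if ( p->bytes_to_ignore > 0 ) {
        skipped = (size_t) p->bytes_to_ignore;
        p->bytes_to_ignore = 0;
        buf += skipped;
        len -= skipped;
    }

    /* "Encode" the new bytes; stand-in for p->ops->encode(). */
    if ( len > sizeof( p->enc ) )
        len = sizeof( p->enc );
    memcpy( p->enc, buf, len );
    p->enc_len = len;
    p->enc_off = 0;

    if ( flush_pending( p, fd ) < 0 ) {
        /* Encoded but only partially written: remember how much of the
         * caller's data is now committed and report a partial write. */
        p->bytes_to_ignore = (long) len;
        errno = EAGAIN;
        return -1;
    }
    return (ssize_t)( skipped + len );
}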
--
Hallvard
Re: small project
by Dmitry Kolesov
Hello.
Could you tell me more about adding LDIFv2 (XML) support to ldapadd/modify and slapadd?
-Dmitry
> I would suggest starting by adding import support to ldapadd/modify
> and slapadd, then maybe adding export support to ldapsearch and slapdump
> (using some sort of content detection to determine whether or not to
> use xml-value-spec).
>
> -- Kurt
Distributed ppolicy state
by Howard Chu
One of the major concerns I still have with password policy is the issue of
the overhead involved in maintaining so many policy state variables for
authentication failure / lockout tracking. It turns what would otherwise be
pure read operations into writes, which is already troublesome for some cases.
But in the context of replication, the problem can be multiplied by the number
of replicas in use. Avoiding this write magnification effect is one of the
reasons the initial versions of the ppolicy overlay explicitly prevented its
state updates from being replicated. Replicating these state updates for every
authentication request simply won't scale.
Unfortunately the braindead account lockout policy really doesn't work well
without this sort of state information.
The problem is not much different from the scaling issues we have to deal with
in making code run well on multiprocessor / multicore machines. Having
developed effective solutions to those problems, we ought to be able to apply
the same thinking to this as well.
The key to excellent scaling is the so-called "shared-nothing" approach, where
every processor just uses its own local resources and never has to synchronize
with ( == wait for) any other processor, but for the most part it's a design
ideal, not something you can do perfectly in practice. However, we have some
recent examples in the slapd code where we've been able to use this approach
to good effect.
In the connection manager, we used to handle monitoring/counter information
(number of ops, type of ops, etc) in a single counter, which required a lot of
locking overhead to update. We now use an array of counters per thread, and
each thread can update its own counters for free, completely eliminating the
locking overhead. The trick is in recognizing that this type of info is
written far more often than it is read, so optimizing the update case is far
more important than optimizing the query case. When someone reads the
counters exposed in back-monitor, we simply iterate across the arrays
and tally them up at that point. Since there's no particular requirement
that all the counters be read in the same instant in time, all of these
reads/updates can be performed without locking, so again we get it for free,
no synchronization overhead at all.
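As a generic illustration of the pattern (a C11-flavored sketch, not the
actual connection-manager code; the names are made up):

/* Each thread bumps only its own slot, so updates need no locking; a
 * reader walks the array and sums the slots, accepting that the total
 * is not a single-instant snapshot.
 */
#include <stdatomic.h>
#include <stdint.h>

#define MAX_THREADS 64

struct op_counter_slot {
    _Alignas(64) atomic_uint_fast64_t count; /* padded to avoid false sharing */
};

static struct op_counter_slot op_counters[MAX_THREADS];

/* Hot path, called by worker thread 'tid' for each operation: no lock. */
static inline void
op_counter_bump( int tid )
{
    atomic_fetch_add_explicit( &op_counters[tid].count, 1,
                               memory_order_relaxed );
}

/* Cold path, e.g. when back-monitor is queried: tally all the slots. */
static uint64_t
op_counter_total( void )
{
    uint64_t total = 0;
    int i;
    for ( i = 0; i < MAX_THREADS; i++ )
        total += atomic_load_explicit( &op_counters[i].count,
                                       memory_order_relaxed );
    return total;
}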
So, it should now be obvious where we should go with the replication issue...
Ideally, you want password policy enforcement rules that don't even need
global state at all. IMO, the best approach is still to keep policy state
private to each DSA, and this still makes sense for DSAs that are
topologically remote. E.g., assume you have a pair of servers in two
separate cities. It's unlikely that a login attempt on one server will be in
any way connected to a simultaneous login attempt on the other server. And in
the face of a bot attack, the rate of logins will probably be high enough to
swamp the channel between the two servers, resulting in queueing delays that
ultimately aggregate several of the updates on the attacked server into just a
single update on the remote server. (E.g., N separate failure updates on one
server will coalesce into a single update on the remote server.)
Therefore, most of the time it's pointless for each server to try to
immediately update the other with login failure info.
In the case of a local, load-balanced cluster of replicas, where the network
latency between DSAs is very low, the natural coalescing of updates may not
occur as often. Still, it would be better if the updates didn't happen at all.
And in such an environment, where the DSAs are so close together that latency
is low, distributing reads is still cheaper than distributing writes. So, the
correct way to implement this global state is to keep it distributed
separately during writes, and collect it during reads.
I'm looking for a way to express this in the schema and in the ppolicy draft,
but I'm not sure how just yet. It strikes me that X.500 probably already has a
type of distributed/collective attribute but I haven't looked yet.
Also I think we can take this a step further, but haven't thought it through
all the way yet. If you typically have login failures coming from a single
client, it should be sufficient to always route that client's requests to the
same DSA, and have all of its failure tracking done locally/privately on that DSA.
At the other end, if you have an attack mounted by a number of separate
machines, it's not clear that you must necessarily collect the state from
every DSA on every authentication request. E.g., if you're setting a lockout
based on the number of login failures, once the failure counter on a single
DSA reaches the lockout threshold, it doesn't matter any more what the failure
counter is on any other DSA, so that DSA no longer needs to look for the
values on any other node.
If a client comes along and does a search to retrieve the policy state, e.g.
looking for the last successful login or the last failure, then you want
whatever DSA receives the request to broadcast the search to all the other
DSAs and collate the results for the client by default. (Note that simple
aggregation only works for multivalued attributes; for single-valued
attributes like pwdLastSuccess you have to know to pick the most recent
value.) And probably you should be able to specify a control (like
ManageDSAit) to disable this automatic broadcast and only retrieve the value
from a single DSA.
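For the single-valued case the collation rule itself is trivial,
something like this (illustrative sketch only, not real code):

/* GeneralizedTime values in the same fixed format compare
 * chronologically with strcmp(), so "most recent" is just the lexically
 * greatest of the values gathered from the DSAs (NULL where a DSA had
 * no value).
 */
#include <string.h>

static const char *
most_recent_value( const char *vals[], int nvals )
{
    const char *best = NULL;
    int i;
    for ( i = 0; i < nvals; i++ ) {
        if ( vals[i] != NULL &&
             ( best == NULL || strcmp( vals[i], best ) > 0 ) )
            best = vals[i];
    }
    return best;
}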
I realize that the points listed above about login attacks miss several attack
scenarios. I think more of the scenarios need to be outlined and analyzed
before moving forward with any recommendations on lockout behavior; the
internet today is pretty different from when these lockout mechanisms were
first designed.
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
small project
by Dmitry Kolesov
Hello.
I would like to work on a small project from the TO DO list:
- Add LDIFv2 (XML) support to command line tools.
Could you tell me more about what I need to do?
Best regards.
- Dmitry
response to bind
by masarati@aero.polimi.it
ITS#6337 seems to indicate that it's time to move the bind response inside
the backends. Unfortunately, right now the need to send the bind response
from outside the backends is deeply entangled with slapd, so the whole
thing might not be trivial.
p.