syncrepl
by Howard Chu
ITS#4618, 4623, 4626 and 4703 all basically have to do with trying to
use multiple replication contexts with a single provider. This is a
behavior that the 2.3 syncprov implementation just wasn't designed for;
it was meant to handle only a single context.
Looking at the ideas in 2.2's syncrepl, it might have gone in the
direction of solving these problems if it weren't weighed down by so
many insurmountable design and implementation flaws. 2.2 probably tried
to do too much too soon, and got waylaid by the devil in the details.
At this point, the following solution for multiple contexts presents itself:
1) We assign distinct searchbases to each context.
2) Every distinct source of changes must have its own unique rid. E.g.,
if a database is a provider for a context, it must have a rid. Every
consumer within its namingContext must have its own rid, just as
before. (The new requirement here is assigning rids to providers that
are masters of their data.)
3) Currently the provider hands a consumer a cookie consisting of the
rid that the consumer supplied, plus a single contextCSN from the
provider. This single contextCSN is inadequate for accurately capturing
all of the changes that may come from multiple sources in a
namingContext. Instead, the provider will send out a cookie consisting
of multiple rid,CSN pairs - one for every rid of the provider's that
resides in the consumer's search space. This is the only reliable way to
make sure that all changes are tracked and propagated.
This implies that, in general, rids should not need to be configured on
consumers - they should be dictated solely by the providers. It may be a
good idea to allow them to be configured on consumers as an override,
but for now that seems unimportant.
So:
1) the provider must have its own unique rid configured
2) the consumer's rid is optional
3) the provider must be told about all of the consumers living under it
4) the provider must aggregate all of the consumer cookies under it with
its own context info when generating a cookie for its own consumers
(a rough sketch of this follows below)
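For concreteness, here is a minimal sketch of what item 4 could look like,
assuming a purely hypothetical cookie syntax of semicolon-separated
rid=NNN,csn=<CSN> pairs. The structure names, the cookie format and the CSN
values are all illustrative, not the actual syncprov code:

#include <stdio.h>

/* Hypothetical per-source replication state: one entry per rid that
 * lives in the consumer's search space (the provider's own rid plus
 * the rids of the consumers configured under its namingContext). */
struct repl_state {
	int         rid;
	const char *csn;	/* latest contextCSN seen for this rid */
};

/* Build an aggregated cookie of the form
 *	rid=001,csn=<CSN1>;rid=002,csn=<CSN2>;...
 * The format is only an illustration of "multiple rid,CSN pairs",
 * not what syncprov actually emits. */
static int
build_cookie( char *buf, size_t buflen,
	const struct repl_state *states, int nstates )
{
	size_t used = 0;
	int i;

	if ( buflen == 0 )
		return -1;
	buf[0] = '\0';
	for ( i = 0; i < nstates; i++ ) {
		int n = snprintf( buf + used, buflen - used, "%srid=%03d,csn=%s",
			i ? ";" : "", states[i].rid, states[i].csn );
		if ( n < 0 || (size_t)n >= buflen - used )
			return -1;	/* cookie would not fit */
		used += (size_t)n;
	}
	return 0;
}

int
main( void )
{
	/* the provider's own rid plus one consumer under its namingContext */
	struct repl_state states[] = {
		{ 1, "20070123123456Z#000000#00#000000" },
		{ 2, "20070123123459Z#000000#00#000000" },
	};
	char cookie[512];

	if ( build_cookie( cookie, sizeof(cookie), states, 2 ) == 0 )
		printf( "%s\n", cookie );
	return 0;
}

The point is simply that the cookie carries one (rid,CSN) pair per change
source under the consumer's searchbase, so no source's state is lost in the
aggregation.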
Currently slapd treats an entire database as read-only when it has a
consumer configured on it. This raises the question of how to allow
multiple consumers in a single context - should we allow multiple
consumers per DB, as 2.2 tried (and failed) to do, or should we continue
with the current approach of one consumer per DB, and use glue to
collect multiple consumers under one roof?
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
OpenLDAP Core Team http://www.openldap.org/project/
slapo-dynlist design question(s)
by Quanah Gibson-Mount
Stanford is looking at implementing groups in our LDAP servers, and in
particular at using slapo-dynlist. However, it does not behave as
I expected it to.
Basically, it uses the credentials of whoever bound to determine the
membership list. This means I would have to give access to a privileged
attribute to those who wished to use groups, which is exactly what I'm
trying to avoid. What I wanted to do was specifically control access
to the group objects themselves. If an entity has access to the group
object, it would then be able to see all current members of the group.
I believe this would mean adding functionality to slapo-dynlist so that it
uses the rootdn to perform the internal search instead of the bound user's
credentials. Would it be possible to have this sort of addition?
--Quanah
--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html
GSS-SPNEGO Protocol Details
by Michael B Allen
Hello,
I've implemented SASL binds for GSSAPI and GSS-SPNEGO using a
Sockbuf_IO_Desc handler instead of libsasl. Everything works great
but I've noticed some behavior from the server I'm using that
is not consistent with the available documentation (RFC 2222 and
draft-ietf-sasl-gssapi-03 by Melnikov). Would anyone happen to know
where I might ask about GSS-SPNEGO protocol details? Is there an IETF
mailing list somewhere?
There are three issues:
1) GSS-SPNEGO search replies are sealed even though the request was
not, and a capture of another client talking to the same server shows
replies as integ-only. An examination of the captures of my code and
the other client shows the packets are identical (minus BER encoding
differences and encrypted krb5 bits).
2) GSS-SPNEGO does not appear to use the additional bind exchange to
negotiate the security-layer bit mask like GSSAPI does.
3) GSSAPI can use what is apparently the DN of an account called the
"authorization identity". The actual values for this field do not
appear to be documented anywhere.
I don't suppose I should care since the code works fine but I do. Any
pointers are appreciated.
Mike
sb_sasl_write short-count bug?
by Michael B Allen
Hi,
Consider the sb_sasl_write function:
static ber_slen_t
sb_sasl_write( Sockbuf_IO_Desc *sbiod, void *buf, ber_len_t len)
{
<snip>
	/* Are there anything left in the buffer? */
	if ( p->buf_out.buf_ptr != p->buf_out.buf_end ) {
		ret = ber_pvt_sb_do_write( sbiod, &p->buf_out );
<snip>
	ret = ber_pvt_sb_do_write( sbiod, &p->buf_out );

	/* return number of bytes encoded, not written, to ensure
	 * no byte is encoded twice (even if only sent once).
	 */
	return len;
}
This optimistically returns the len supplied. If ber_pvt_sb_do_write
returns a short count, then data will be left in p->buf_out. The
remaining data will not be written until the next sb_sasl_write call,
which may never happen. I have not observed a problem, but just from
examining the logic I thought I should say something.
Is this a bug?
Would putting ber_pvt_sb_do_write in a loop do any good?
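For what it's worth, the loop I have in mind looks like the sketch below.
It's self-contained and uses plain write(2) on a file descriptor as a
stand-in for ber_pvt_sb_do_write() on the Sockbuf, so it only illustrates
the short-count handling; it is not a patch against the actual code:

#include <errno.h>
#include <unistd.h>

/* Keep writing until the whole buffer has been flushed or a hard
 * error occurs.  In sb_sasl_write the call would be
 * ber_pvt_sb_do_write() on &p->buf_out instead of write(). */
static ssize_t
write_all( int fd, const char *buf, size_t len )
{
	size_t off = 0;

	while ( off < len ) {
		ssize_t n = write( fd, buf + off, len - off );
		if ( n < 0 ) {
			if ( errno == EINTR )
				continue;	/* interrupted: retry */
			return -1;		/* hard error: caller decides */
		}
		off += (size_t)n;		/* short count: send the rest */
	}
	return (ssize_t)off;
}

One caveat: if the Sockbuf can be non-blocking, a loop like this would spin
on EAGAIN, so the real fix may instead need to keep the leftover buf_out
state around and drain it, as the current code already attempts at the start
of the next call.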
Mike
Re: commit: ldap/tests/scripts test049-sync-config
by Howard Chu
ando@OpenLDAP.org wrote:
> Update of /repo/OpenLDAP/pkg/ldap/tests/scripts
>
> Modified Files:
> test049-sync-config 1.3 -> 1.4
>
> Log Message:
> make sure replication finished before comparing data (under valgrind, replication may take ages)
I debated adding such a check before. I think the first check you added
should be removed. In my initial testing there were timing-dependent errors
that cropped up when refreshing was occurring while the ldapadd was running.
I think it's important that we continue to test for this case, however
crudely it's done. Too bad we don't have a reliable means to notify the test
script when the consumer has actually started its work. We could query
back-monitor but normally (not under valgrind) the refresh could complete
before we got the back-monitor search result.
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
OpenLDAP Core Team http://www.openldap.org/project/
Re: cn=include
by Eric Irrgang
Since
1) the behavior is different
2) the slapd.conf to cn=config conversion sucks in the relevant
information anyway
and
3) use of includes is inappropriate under cn=config
is it maybe time for the slap* tools to stop creating cn=Includes?
On Tue, 23 Jan 2007, Howard Chu wrote:
> Eric Irrgang wrote:
>> Are olcInclude attributes in cn=config honored as per the Admin Guide
>> section 5.2.2 or is that documentation misleading?
>
> Good question. The short answer is - the use of include files is not
> recommended for cn=config. They really only work correctly when slapd is using
> slapd.conf.
>
> Keep in mind - in slapd.conf, you can insert include statements anywhere at
> all in the config file, you can order them completely arbitrarily,
> interleaving them with any other config statements. Under cn=config, all of
> the cn=Includes are grouped under one place, they can't have anything else
> inserted between them, so if they needed to have other intervening directives
> processed first, they would fail.
>
> Also, the point of using cn=config is to make every part of the configuration
> accessible/modifiable using LDAP. slapd.conf-formatted files (e.g. include
> files) are not accessible or modifiable using LDAP.
> --
> -- Howard Chu
> Chief Architect, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc
> OpenLDAP Core Team http://www.openldap.org/project/
>
--
Eric Irrgang - UT Austin ITS Unix Systems - (512)475-9342
Search timeout?
by Pierangelo Masarati
I'm facing an issue: when a proxy is searching a remote server, and the
server does not respond, in the sense that it happily accepts a search
request but never returns any type of response (a "silly" way of
reproducing this issue consists in sending a SIGSTOP to the remote
server: it accepts requests but does nothing), right now the proxy
honors the timelimit. However, if the search is performed as the rootdn
of the proxy, it will last forever.

So I'd like to introduce the concept of a "search timeout". This differs
from the already existing "timelimit" in that it is not a client-requested
(or server-imposed) limit on the overall duration of a search operation,
but rather a server-imposed limit on how long the proxy can wait
consecutively for any sort of response from a remote server. This should,
optionally, apply to the rootdn as well, since it's more about the sanity
of the connection than about the properties of the operation.

This type of limitation has already been introduced for other operations,
including compare and all write operations, for a similar purpose. Comments?
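To make the distinction concrete, here's a rough client-side sketch of the
idea using ldap_result()'s timeout argument. The function name and the way
the error is reported are made up, and this is not the proposed
back-ldap/back-meta implementation; it just shows a bound on each individual
wait, as opposed to a limit on the whole operation:

#include <stdio.h>
#include <ldap.h>

/* Wait for the next response to "msgid", but never block for more
 * than "secs" seconds in a single wait.  This bounds each consecutive
 * wait (the proposed "search timeout"), independently of any overall
 * timelimit on the operation. */
static int
wait_one_response( LDAP *ld, int msgid, int secs, LDAPMessage **res )
{
	struct timeval tv = { secs, 0 };
	int rc = ldap_result( ld, msgid, LDAP_MSG_ONE, &tv, res );

	if ( rc == 0 ) {
		/* the remote server sent nothing at all within "secs";
		 * give up on the connection rather than wait forever */
		fprintf( stderr, "no response within %d seconds\n", secs );
		return LDAP_TIMEOUT;
	}
	if ( rc < 0 )
		return LDAP_SERVER_DOWN;

	return LDAP_SUCCESS;	/* *res holds one entry/reference/result */
}

The proposal is essentially the same bound applied per-response inside the
proxy backend, optionally enforced even when the operation runs as the rootdn.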
p.
Re: commit: ldap/tests/scripts test049-sync-config defines.sh
by Howard Chu
hyc@OpenLDAP.org wrote:
> Update of /repo/OpenLDAP/pkg/ldap/tests/scripts
>
> Modified Files:
> defines.sh 1.147 -> 1.148
> Added Files:
> test049-sync-config NONE -> 1.1
>
> Log Message:
> Test slave bootstrapping via syncrepl
For anyone curious, this test shows how to use back-config and syncrepl to
dynamically configure and populate a server.
Two stub entries (cn=config and olcdatabase=config,cn=config) are slapadded
to initialize the master and slave configurations. Then ldapadd/ldapmodify
are used to load the syncprov module, add a syncrepl consumer to the config
database, and add the syncprov overlay to the config database on the master.
Then ldapmodify is used to start a syncrepl consumer on the slave's config
database.
The rest of the script adds schema, a database backend and content for the
backend on the master, all of which get replicated to the slave.
The dilemma of how not to wipe out the consumer configuration once the
complete master configuration is replicated onto it is solved with this
trick: the master has both a provider and a consumer configured on it, and
the consumer points at the master. The syncrepl config handler checks to see
if its providerURI matches any of the current server's listenerURIs. If
there's a match, the config is a no-op. (It gets parsed but no consumer task
is triggered.)
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
OpenLDAP Core Team http://www.openldap.org/project/
Modrdn replication
by Pierangelo Masarati
ITS#4809 reports that when replicating modrdn via slurpd, operational
attributes don't get replicated. This appears to be intrinsic to the
definition of the modrdn operation, which, unlike the modify and add
operations, provides no way to add or modify attributes other than the
naming ones. So add and modify can easily be exploited for replication by
adding/modifying the write-related operational attributes during
replication, while modrdn can't.
Assuming there's any intention to fix slurpd replication before slurpd is
retired, we need to find a means to attach modifications of the
write-related operational attributes to a modrdn operation, to
complement the modrdn operation itself.
The proposed solution consists of explicitly modifying the necessary
(operational) attributes by means of an additional modify operation that
is attached to the modrdn. This may occasionally be useful regardless of
slurpd replication, which makes it more appealing for OpenLDAP
developers, since the required effort wouldn't simply be wasted when
slurpd is dropped.
The additional modify could be wrapped into a control's value, and that
control might be the "relax" control itself, so that the original
operation, augmented by the optional modify, would need to succeed as a
whole or abort. This would extend the capabilities of the "relax" control
by allowing extra modifications to be added in order to preserve the
integrity of the object after the operation.
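As a rough illustration of what "a modify attached to a modrdn" means from a
client's point of view, the sketch below shows the two operations that today
must be issued separately, and hence non-atomically. Under the proposal, the
second one would travel inside the control's value so that the pair succeeds
or fails as a whole. The DNs and the attribute choice are made up, and in
real life modifying modifiersName would itself require the relax control:

#include <ldap.h>

/* Sketch only: rename an entry, then patch one write-related
 * operational attribute.  Today these are two independent operations;
 * the proposal would attach the modify to the modrdn itself. */
static int
replicate_modrdn( LDAP *ld )
{
	char *vals[] = { "cn=replicator,dc=example,dc=com", NULL };
	LDAPMod mod = { LDAP_MOD_REPLACE, "modifiersName", { vals } };
	LDAPMod *mods[] = { &mod, NULL };
	int rc;

	rc = ldap_rename_s( ld, "cn=old,dc=example,dc=com",
		"cn=new",	/* new RDN */
		NULL,		/* same superior */
		1,		/* delete the old RDN */
		NULL, NULL );
	if ( rc != LDAP_SUCCESS )
		return rc;

	/* second, separate step: the part that a plain modrdn
	 * cannot express */
	return ldap_modify_ext_s( ld, "cn=new,dc=example,dc=com",
		mods, NULL, NULL );
}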
Comments? p.