What are the plans for dyngroup.c since dynlist.c does everything and more?
Leave as is?
With back-relay to back-ldif/back-null, Password Modify fails with
"operation not supported within naming context". However it works
with back-ldif without back-relay, and with back-relay to back-bdb.
The difference is that back-relay:relay_back_op_extended() does
send_ldap_error( op, rs, LDAP_UNWILLING_TO_PERFORM,
"operation not supported within naming context" );
while back-bdb:bdb_extended() does
rs->sr_text = "not supported within naming context";
and back-ldif/back-null have no be_extended.
It works with back-relay if I change relay_back_op_extended() to do the
same as back-bdb.
back-ldif also lacks a compare, so I tried the same change to
relay_back_op_compare(), but then slapd did not respond to Compare.
So, which backend is right? Should a backend's or overlay's
be_extended() leave it to the caller to send results, just like
be_bind() on success? Or should that be done only in some cases?
Is it documented somewhere?
Back-relay also logs two results; I haven't checked whether it sends two:
conn=0 op=1 EXT oid=1.3.6.1.4.1.4203.1.11.1
conn=0 op=1 PASSMOD id="cn=urgle,cn=db" old new
conn=0 op=1 RESULT tag=120 err=53 text=operation not supported within naming context
conn=0 op=1 RESULT oid= err=1 text=operation not supported within naming context
There are again tests that fail with back-ldif... I'll file an ITS
(or reuse ITS#5265?), but first I'm wondering:
When a test with back-ldif fails because data gets sorted differently
than with bdb/hdb, so the comparison with the expected data fails, what's
the best way to fix it?
We can change the data, add -S "" to ldapsearch, or vary between the
two. The searches are slowly growing -S arguments all over the place;
I'm not sure whether that is a good or a bad thing. It's not always easy
to see whether some test data was carefully built or not. OTOH maybe
there are tests where the order is important, and that's not easy to see
either.
E.g. test042-valsort can be fixed by renaming Dave to John,
so he comes after George in data/valsort3.out. Looks harmless,
I'll do that unless someone says it's a bad idea.
test011-glue-slapadd, test012-glue-populate, test029-ldapglue
look like they can most easily be fixed with -S.
> operation.c 1.81 -> 1.82
> rename ldap_pvt_thread_pool_setkey_x() to
> ldap_pvt_thread_pool_setkey() (as part of ITS#5309)
One nitpick - in this code in operation.c:
> ldap_pvt_thread_pool_getkey( ctx, (void *)slap_op_free, &otmp, NULL );
> op2 = otmp;
> LDAP_STAILQ_NEXT( op, o_next ) = op2;
> ldap_pvt_thread_pool_setkey( ctx, (void *)slap_op_free,
> (void *)op, slap_op_q_destroy, NULL, NULL );
can it be a problem if 'op' is stored to the context before its o_next
gets updated? If not, we can save a getkey call - move the setkey up
instead. I _think_ it's all right since only the current thread should
be accessing the key (except during pauses), and the pool can't pause
> index.c 1.70 -> 1.71
> ITS#4112 temporarily disable broken code
>+#if 0 /* ifdef LDAP_COMP_MATCH */
There are other '#ifdef LDAP_COMP_MATCH's for component indexing in bdb,
should they stay? (Can kill at least the one above, to shut gcc up
about now-unused variables.)
slapd often assumes "member" and "groupOfNames" are always defined (e.g.
in ACLs, but in many other places). However, this attribute and
objectClass are only defined in core.schema (core.ldif). For
consistency, I believe they should instead be hardcoded in
schema_prep.c.
Ing. Pierangelo Masarati
OpenLDAP Core Team
via Dossi, 8 - 27100 Pavia - ITALIA
Office: +39 02 23998309
Mobile: +39 333 4963172
Has anyone got a dual or quad socket Intel Xeon based server for testing? I've
been testing on two AMD systems, one quad socket dual core and one dual socket
quad core. There are a lot of different ways to tune these systems...
slapd currently uses a single listener thread and a pool of some number of
worker threads. I've found that performance improves significantly when the
listener thread is pinned to a single core, and no other threads are allowed
to run there. I've also found that performance improves somewhat when all
worker threads are pinned to specific cores, instead of being free to run on
any of the remaining cores. This has made testing a bit more complicated
than I originally planned; at first I was just pinning the entire
process to a set number of cores (first 1, then 2, incrementing up to 8)
to see how performance changed with additional cores. But due to the
motherboard layout and the fact that the I/O bridges are directly
attached to particular sockets, it makes a big difference exactly which
cores you use.
Another item I noticed is that while we scale perfectly linearly from 1 core
to 2 cores in a socket (with a dual-core processor), as we start spreading
across multiple sockets the scaling tapers off drastically. That makes sense
given the constraints of the HyperTransport connections between the sockets.
On the quad-core system we scale pretty linearly from 1 to 4 cores (in one
socket) but again the improvement tapers off drastically when the 2nd socket
is added in.
I don't have any Xeon systems to test on at the moment, but I'm curious to see
how they do given that all CPUs should have equal access to the northbridge.
(Of course, given that both memory and I/O traffic go over the bus, I'm not
expecting any miracles...)
The quad-core system I'm using is a Supermicro AS-2021M-UR+B; it's based on an
Nvidia MCP55 chipset. The gigabit ethernet is integrated in this chipset.
Using back-null we can drive this machine to over 54,000
authentications/second, at which point 100% of a core is consumed by interrupt
processing in the ethernet driver. The driver doesn't support interrupt
coalescing, unfortunately. (By the way, that represents somewhere between
324,000pps and 432,000pps. While there are only 5 LDAP packets per
transaction, some of the client machines send separate TCP ACKs while
others don't, which puts the count somewhere between 5-8 packets per
transaction. I hadn't taken those ACKs into account when I discussed these
figures before. At these packet sizes (80-140 bytes), I think the network
would be 100% saturated at around 900,000pps.)
Interestingly, while 2 cores can get over 13,000 auths/second, and 4 cores can
get around 25,000 auths/second (using back-hdb), with all 8 cores it's only
peaking at 29,000 auths/second. This tells me it's better to run two separate
slapds in a mirrormode configuration on this box (4 cores per process) than to
run a single process across all of the cores. Then I'd expect to hit 50,000
auths/second total, pretty close to the limits of the ethernet device/driver.
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
I've been working with current CVS OpenLDAP and the memberof plugin, for
Following your suggestion, I'm trying to load multiple memberof
instances, but the syntax doesn't seem to work for me. Attached is how
I'm currently configuring the overlay. It causes this when loading:
overlay_config(): overlay "memberof" already in list
overlay_config(): overlay "memberof" already in list
It also only appears to work for the first entry (happily that is
member/memberof, and this seems to have worked).
Is the syntax I'm using correct, or does the module need to be reworked
for this operation?
Finally, I'm wondering if the error returns can be adjusted:
When I add an invalid member to a group, OpenLDAP returns
LDAP_CONSTRAINT_VIOLATION <adding non-existing object as group member>,
but AD returns error 32, LDAP_NO_SUCH_OBJECT, for this situation. Would
it be reasonable to change this, or could it be made configurable?
Having the LDAP server give me the error the client expects would avoid
the need for a translation layer. (It might be that nobody ever looks at
this, but I don't like to make that assumption.)
Andrew Bartlett http://samba.org/~abartlet/
Authentication Developer, Samba Team http://samba.org
Samba Developer, Red Hat Inc. http://redhat.com