ldap_int_sasl_bind() and canonical Kerberos names
by Geert Jansen
Hi,
At the moment, ldap_int_sasl_bind() uses ldap_host_connected_to() to get
a fully qualified host name that is then passed to the SASL client as
the server fqdn. This fqdn is acquired by ldap_host_connected_to()
through a reverse DNS lookup. The code explains why this is done:
/*
* do a reverse lookup on the addr to get the official hostname.
* this is necessary for kerberos to work right, since the official
* hostname is used as the kerberos instance.
*/
Using reverse DNS names has, however, always been problematic. The
following comment is from the MIT Kerberos code (lib/krb5/os/sn2princ.c):
/* XXX: This is *so* bogus. There are several cases where
this won't get us the canonical name of the host, but
this is what we've trained people to expect. We'll
probably fix it at some point, but let's try to
preserve the current behavior and only shake things up
once when it comes time to fix this lossage. */
To address this issue, a draft RFC
(draft-ietf-krb-wg-kerberos-referrals-09) has been written that adds
server-side name canonicalisation to Kerberos and thereby removes the
need to use reverse DNS for this. The draft has been implemented in MIT
Kerberos 1.6. The feature is enabled by default; if you want to use it,
you probably want to set "rdns = false" in [libdefaults] to disable
canonicalisation based on reverse DNS.
As explained above, however, disabling these reverse DNS lookups is
currently not possible with the OpenLDAP client. I did a quick patch to
have ldap_int_sasl_bind() use a value based on the LDAP option
LDAP_OPT_HOST_NAME instead, and that worked as expected.
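To sketch the idea (this is only an illustration, not the actual patch;
the helper name is made up, and a real version would keep the reverse
lookup as the default behaviour):

    #include <ldap.h>
    #include <string.h>

    /* Illustrative helper: derive the SASL server fqdn from the host
     * name the application connected to, rather than from reverse DNS. */
    static char *
    sasl_fqdn_from_option( LDAP *ld )
    {
        char *host = NULL;

        if ( ldap_get_option( ld, LDAP_OPT_HOST_NAME, &host ) != LDAP_OPT_SUCCESS
            || host == NULL )
            return NULL;    /* fall back to ldap_host_connected_to() */

        if ( strchr( host, ' ' ) != NULL ) {
            /* a URI with multiple hosts yields a space-separated list;
             * a real patch would have to pick the host actually
             * connected to */
            ldap_memfree( host );
            return NULL;
        }

        return host;    /* caller releases it with ldap_memfree() */
    }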
Would you guys be interested in a patch that allows disabling hostname
canonicalisation based on reverse DNS? The patch would need to make this
behaviour optional and off by default, since some real workloads may
break, and it would also somehow need to handle LDAP URIs with multiple
hosts.
Regards,
Geert Jansen
syncrepl fails on RE_2_4
by Dieter Kluenter
Hi,
I'm testing REL_ENG_2_4. The provider is loaded with slapadd -w -F -f
-l, while the consumer is started with an empty database. The initial
database is replicated by the consumer, but no further synchronisation
occurs. I tested read access to all databases with ldapsearch, so there
is no hidden access rule that prevents reading.
On the consumer I see many of these messages:

  do_syncrep2: rid=003 got search entry without Sync State control
  do_syncrepl: rid=003 retrying (4 retries left)

I don't know whether this is important.
These are my configuration files:
,----[ provider slapd.conf ]
| database config
| rootdn cn=config
| rootpw secret
| access to dn.subtree="cn=config" by dn.exact="cn=replicator,o=avci,c=de" read
| overlay syncprov
|
| database bdb
| suffix "o=avci,c=de"
| rootdn "cn=admin,o=avci,c=de"
| rootpw secret
| ...
|
| overlay accesslog
| logdb cn=log
| logops writes
| logpurge 3+00:00 1+00:00
|
| overlay syncprov
| syncprov-checkpoint 5 10
|
| database bdb
| suffix cn=log
| directory /tmp/slapd1/log
| rootdn cn=log
| index reqStart eq
| access to dn.subtree="cn=log" by dn.exact="cn=replicator,o=avci,c=de" read
| database monitor
`----
,----[ consumer slapd.conf ]
| database config
| rootdn cn=config
| rootpw hhdy01
| access to dn.subtree="cn=config" by dn.exact="cn=replicator,o=avci,c=de" read
|
| syncrepl rid=01
| provider=ldap://localhost:1007
| bindmethod=sasl
| saslmech=digest-md5
| authcid=replicator
| credentials=xxxxxx
| searchbase="cn=config"
| scope=sub
| attrs="*","+"
| type=refreshAndPersist
| retry="5 5 300 5"
| MirrorMode off
|
| database bdb
| suffix "o=avci,c=de"
| rootdn "cn=admin,o=avci,c=de"
| rootpw secret
| syncrepl rid=03
| provider="ldap://localhost:9007"
| bindmethod=sasl
| saslmech=digest-md5
| authcid=replicator
| credentials=replicator
| searchbase="o=avci,c=de"
| scope=sub
| attrs="*","+"
| type=refreshAndPersist
| retry="5 5 300 5"
| logbase="cn=log"
| syncdata=accesslog
|
| updateref ldap://localhost:9007
| MirrorMode off
|
| overlay accesslog
| logdb cn=log
| logops writes
| logpurge 3+00:00 1+00:00
| index reqStart eq
|
| database bdb
| suffix cn=log
| directory /tmp/slapd2/log
| rootdn cn=log
| index reqStart eq
| access to dn.subtree="cn=log" by dn.exact="cn=replicator,o=avci,c=de" read
|
| database monitor
`----
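For reference, this is how I compare the contextCSN on provider and
consumer (provider port as in the config above; the consumer port is a
placeholder for whatever the consumer listens on):

  # anonymous base search for the suffix entry's contextCSN
  ldapsearch -x -H ldap://localhost:9007 -s base -b "o=avci,c=de" contextCSN
  ldapsearch -x -H ldap://localhost:<consumer-port> -s base -b "o=avci,c=de" contextCSN

If replication were working, the two values would converge.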
-Dieter
--
Dieter Klünter | Systemberatung
http://www.dkluenter.de
GPG Key ID:8EF7B6C6
last call for testing 2.3, 2.4
by Quanah Gibson-Mount
Please test current tip for 2.3 and 2.4. All known issues are believed
fixed at this time. If there are no problem reports by noon tomorrow,
my time, I plan to tag 2.3.39 and 2.4.6 for release.
Thanks,
Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Re: thread pools, performance
by Howard Chu
Rick Jones wrote:
> There are definitely interrupt coalescing settings available with tg3-driven
> cards, as well as bnx2 driven ones:
>
> ftp://ftp.cup.hp.com/dist/networking/briefs/nic_latency_vs_tput.txt
Really nice work there, thanks.
> Also, if the platform and the I/O card support it, and it isn't the default, MSI
> or MSI-X interrupts are often lower overhead than legacy INTA irq's. They can
> also allow - on NICs which have the support - the interrupts to be spread
> intelligently (well, semi-intelligently at least :) across multiple cores.
For reference, the machine is a Celestica A8440.
http://www.amd.com.cn/CHCN/assets/content_type/DownloadableAssets/A8440_D...
The ethernet controllers are on a hub attached by HyperTransport to a
single processor; I don't think you can usefully distribute the
interrupts to anything beyond that socket.
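(Pinning itself is easy enough on Linux; it's the spreading that buys
nothing here. The IRQ number below is hypothetical:

  # route IRQ 24 to CPUs 0-3, i.e. the socket the hub hangs off of
  echo f > /proc/irq/24/smp_affinity

but all of that still lands on the one socket.)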
> Although, if there is still 10% idle, that probably needs to go next :)
Heh heh. The 80/20 rule hits this with a vengeance. That "10%" is out of
"800%" total, so it is really only about 1.25% of the machine's
capacity, which is almost totally indistinguishable from measurement
error in the oprofile results. This is all intuition (guesswork) now; no
more obvious hot spots are left to attack. Maybe if I'm really bored
over the holidays I'll spend some time on it. (Not likely.)
>> Reminds me of the old leapfrogging games with Excelan ethernet cards and
>> their onboard TCP engines (15+ years ago), allowing machines of that
>> time to hit a whopping 250KB/sec on 10Mbit ethernet. A couple years
>> later the main CPUs got fast enough to do 500KB/sec without using the
>> cards' "accelerators." It's been many years since I saw another NIC with
>> onboard TCP engine after that, but they're on the market now...
>
> Don't forget the "NFS accelerators" from ca 1990 and where they are today :)
I think I have a few in the parts bin...
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Please test RE24
by Howard Chu
I don't remember if we posted this already or not. We're prepping 2.4.6 for
release; please test RE24 and submit any problems you encounter to the ITS.
Likewise, RE23 looks about ready. If you're interested in 2.3.39, please
test and report results for RE23 as well.
Currently all tests in RE24 pass for me on OpenSUSE 10.2 x86_64 and
FedoraCore6 x86_64.
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
ITS#4943 tpool.c, thread pools
by Howard Chu
I presume we can close this ITS now.
I've been running some tests on a quad-processor AMD system and seeing a
lot of mutex contention in the frontend. It looks like the current
thread pool and connection manager architecture is a bad fit for a NUMA
system like this. I'm planning to add support for multiple thread pools
(one per CPU would be the idea) and multiple listener threads to slapd.
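To illustrate the direction (a toy sketch with made-up names, not the
actual slapd change):

    #include <pthread.h>

    #define NUM_POOLS 4    /* e.g. one pool per socket */

    typedef struct pool {
        pthread_mutex_t p_mutex;
        /* work queue, worker threads, ... */
    } pool_t;

    static pool_t pools[NUM_POOLS];

    /* Keep each connection on one pool for its lifetime, so its
     * operations stay near one socket's memory and contention on any
     * single pool mutex drops by roughly a factor of NUM_POOLS. */
    static pool_t *
    pool_for_conn( int fd )
    {
        return &pools[ fd % NUM_POOLS ];
    }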
As a first step, after 2.4.6 is released, I'm going to unifdef the
SLAPD_LIGHTWEIGHT_DISPATCHER symbol and delete the old dispatcher code.
Based on some experimental changes I've already made, I see about 25K
auths/sec with the current code vs. 39K auths/sec using separate thread
pools.
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Re: commit: ldap/doc/man/man8 slapadd.8
by Howard Chu
quanah@OpenLDAP.org wrote:
> Update of /repo/OpenLDAP/pkg/ldap/doc/man/man8
>
> Modified Files:
> slapadd.8 1.46 -> 1.47
>
> Log Message:
> ITS#5189 add note about db_stat and slapd needing to be run when using quick mode.
This note is incorrect and inappropriate. It is inappropriate because
db_stat is a BerkeleyDB-specific command; it has no relevance to the
generic features of slapadd. It is incorrect because most other db_stat
options still work. Nor is db_stat -c "broken" here: it attempts to
return lock status information, and it correctly tells you that there is
no lock information to return. That is not an error, nor does it need
fixing.
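(For reference, the invocation in question is just BerkeleyDB's lock
statistics query against the database directory; the path here is only
an example:

  # directory path is illustrative; use your slapd database directory
  db_stat -c -h /var/lib/ldap

and after a quick-mode slapadd it reports that there is no lock
information, which is exactly right.)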
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Setting up slapo-pcache with back-config
by Ralf Haferkamp
Hello,
is slapo-pcache supposed to work with back-config in current HEAD? I
have not been able to create a working configuration yet.
As soon as the olcPcacheConfig entry is added as a child below a
back-ldap database entry, slapd tries to open the corresponding bdb/hdb
database for the pcache overlay, which of course does not exist yet,
since it has to be a child object of the olcPcacheConfig entry.
Any suggestions on how this could be fixed?
--
Ralf