We are trying to create an LDAP proxy that hides two distinct AD servers
behind a "single LDAP view". The goal is to authenticate and authorize
extranet and internal users through a single LDAP server, so that LDAP
clients (e.g. Apache) only talk to one LDAP server and are not aware
of the multiple AD servers behind the proxy.
Our understanding is that we can create a meta database with two
back-ends, each with its own uri/suffix/etc.
What works:
- using an AD user to bind to the proxy, which is then re-used by the
proxy to talk to the back-end.
What does not work:
- one "front-end" simple-bind LDAP user, known only to the proxy, used
to access the LDAP proxy;
- one back-end user per back-end (known in AD).
So we want to first search for where a user is, using a front-end account,
and then retry the bind with the user's effective username and password
against the correct DN.
suffixmassage "OU=O3,dc=meta,dc=x1,dc=ch" "OU=O3,dc=ad,dc=x1,dc=ch"
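The target definition we are trying looks roughly like the following sketch (the URI, binddn and credentials are placeholders here, not the real values):

```
database        meta
suffix          "dc=meta,dc=x1,dc=ch"
uri             "ldap://ad.x1.ch/OU=O3,dc=meta,dc=x1,dc=ch"
suffixmassage   "OU=O3,dc=meta,dc=x1,dc=ch" "OU=O3,dc=ad,dc=x1,dc=ch"
idassert-bind   bindmethod=simple
                binddn="cn=proxy,OU=O3,dc=ad,dc=x1,dc=ch"
                credentials="secret"
                mode=none
```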
When we try to use idassert-bind above, we always get the following
error in the log:
535a1f25 conn=1000 op=1 <<< meta_search_dobind_init=4
535a1f25 conn=1000 op=1 <<< meta_back_search_start=4
535a1f25 conn=1000 op=1 meta_back_search: ncandidates=1 cnd="*"
535a1f25 conn=1000 op=1 >>> meta_search_dobind_init
535a1f25 conn=1000 op=1 meta_search_dobind_init mc=0x7f17fc008ef0:
non-empty dn with empty cred; binding anonymously
so it looks like our identity is never used beyond the proxy when talking to the AD.
Can anybody explain what the "rid", "sid", and "to" IDs refer to in the syncprov_sendresp message? Example:
slapd: syncprov_sendresp: to=002, cookie=rid=006,sid=003,csn=20140430111351.287889Z#000000#001#000000
I guess the original is from SID==1 and the local SID==003. Does it send to SID==2? If so, what does rid==6 refer to?
I have a branch "ou=people" where the RDNs are of the form "X1234" and NEVER
change for a given person.
Ex.: uid=X1234,ou=people,dc=example,dc=org
In this node, I have the login under the "eduPersonPrincipalName" attribute,
which MAY change.
Some applications don't allow us to define which login attribute to use and so
take the "uid" attribute by default, which is not so cool.
Is there any way in OpenLDAP to dynamically duplicate an OU under
another RDN, to have for example:
I found a previous post from someone else who faced
the same problem I'm encountering, but I did not see a posted
answer.
In /etc/openldap/ldap.conf, TLS_REQCERT is set to 'allow'.
I would like to keep this setting but override it for a
specific invocation of ldapsearch. I have attempted to do so by
setting TLS_REQCERT in ~/.ldaprc and by setting the LDAPTLS_REQCERT
environment variable. Neither has worked.
Interestingly, I _HAVE_ found that I can override TLS_CACERTDIR
in either of those locations.
Is this a bug?
Andrew D. Arenson | aarenson (@) iu.edu
Looking at the test source code for the ppolicy script in 2.4.39, I can see
that ldapsearch is invoked with a '-e ppolicy' option. The man page for
ldapsearch lists 'general extensions' under the -e and -E options, but I
cannot figure out what these extensions are.
What is '-e ppolicy', and when do you need it?
We had a nasty incident with OpenLDAP 2.4.33 (and DB 4.8.30): slapd started
to forget about some entries. There were no errors in the logs, but we
could see that SRCH operations on some objects that existed randomly
returned nentries=0 instead of nentries=1.
We fixed it by destroying the DB and resyncing from the master.
Is this a known problem? I upgraded to 2.4.39 after the incident, but I
would like to be sure I will not encounter it again.
Reviewing the current time-handling code: while lutil_parsetime understands
and can parse a generalized time that includes fractions of a second,
there doesn't seem to be any code that can generate a generalized time
string including fractions of a second, in particular at microsecond
resolution (to match a struct timeval).
I'd like to enhance the current password policy module to use
microsecond resolution for the pwdFailureTime attribute, as the current
1-second resolution makes it less than ideal for account lockouts.
Currently it uses slap_timestamp to generate the generalized time
to store, which only provides 1-second granularity. On initial review,
it looks like simply storing a generalized time with microsecond
resolution in the pwdFailureTime attribute is all that is required to
improve the ppolicy module's lockout support, because, as previously
mentioned, lutil_parsetime already understands and can parse fractional
seconds. So far I don't see any other code that would need to be modified.
The question is how to generate the needed format. One option would be
to extend the existing generic support, perhaps by adding a
slap_timestamp_usec function. Another would be to add a call to
gettimeofday() next to the current call to time() in the ppolicy
code, generate the generalized time string with slap_timestamp, and then
splice the fractional seconds into it.
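As a rough sketch of the second option, the value could be produced like this (gentime_usec is a hypothetical helper name, not existing slapd code; it only illustrates the target format):

```c
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* Hypothetical helper: format the current UTC time as an LDAP
 * generalized time with microsecond resolution, e.g.
 * "20140430111351.287889Z" -- the format a slap_timestamp_usec
 * could produce for pwdFailureTime. */
static void gentime_usec(char *buf, size_t len)
{
	struct timeval tv;
	struct tm tm;

	gettimeofday(&tv, NULL);
	gmtime_r(&tv.tv_sec, &tm);
	snprintf(buf, len, "%04d%02d%02d%02d%02d%02d.%06ldZ",
	    tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
	    tm.tm_hour, tm.tm_min, tm.tm_sec, (long)tv.tv_usec);
}
```

Parsing such a value back would, as noted, already be covered by lutil_parsetime.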
Ideally I'd like to get this enhancement to ppolicy accepted into the
code base, so I'd appreciate some feedback as to what implementation
would be preferred for this.
I've just spent about two days chasing down a bug in my application using
LMDB, where if I ran the code on an ARM (and ARM only) platform, data
returned from mdb_get() would appear to be corrupted.
The symptoms were quite bizarre; I'm including them here in case someone
else sees something similar. You can skip to the conclusion if you're in a hurry.
a) data written by an application process using mdb_put() appeared correct
b) the same data read by a different application process using
mdb_cursor_get() would appear consistent when examined with a live GDB on
c) however, when actually running the application code, even under GDB,
some data would mysteriously be corrupted
d) single-stepping through one case of corruption led me to a single LDR
instruction which appeared to load bogus data from memory. GDB showed the
same data (at the exact same memory address) as correct, i.e. not corrupted.
e) the problem only occurred on ARM and was always reproducible.
Much experimenting and googling later, I found that on some ARM platforms
unaligned word accesses to data using LDR
produce *bogus* data. No exception, nothing, just silent data corruption.
My question: given that I'm storing C structs directly in LMDB, I need
to get at least word-aligned pointers back from LMDB in order to be able to
access the data safely.
I've not found anything in the documentation or the source about being able
to tweak the alignment of key and data values; I *think* that all I need is an
option where LMDB would guarantee a minimum word (4 bytes in this case)
alignment of data.
How hard would this be to implement? Would you consider such a feature?
The only workaround I can think of is explicitly copying all the data I get
back from LMDB into aligned structs; for obvious performance
reasons I'd rather avoid that.
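For illustration, the copy workaround amounts to something like this (the struct and helper names are hypothetical, not from my actual application); memcpy is byte-wise, so it is safe regardless of the source pointer's alignment:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative record type stored in the database. */
struct record {
	uint32_t id;
	uint32_t flags;
};

/* Instead of casting a possibly unaligned mv_data pointer straight to
 * a struct pointer, copy the bytes into an aligned local first. */
static struct record read_record(const void *unaligned, size_t len)
{
	struct record r;

	assert(len >= sizeof r);
	memcpy(&r, unaligned, sizeof r);
	return r;
}
```

The cost is one extra copy per lookup, which is exactly what I would like to avoid.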
See the second answer, and the link to the ARM documentation, for the details.
We're testing the ppolicy module for the purpose of enabling account
lockout on our LDAP infrastructure. During initial testing, I noticed
that it didn't seem to be catching all of the failed logins, and then
realized that the pwdFailureTime attribute in which they are stored
seems to have a granularity of only 1 second?
So, if there are 100 failed logins within 1 second, the password policy
module records them all as a single failed login for the purposes of
account lockout? In that case, with pwdMaxFailure set to 100, an
intruder could actually make 10,000 password guess attempts (100
seconds' worth of 100 guesses each) before the account was actually
locked out?
Am I misunderstanding something here? Is there any way to get
pwdFailureTime to use microsecond granularity like entryCSN?