I am an LDAP newbie. I am trying to set up an LDAP producer and consumer. The
producer cannot start when I include the syncprov overlay. I get the
following when I debug, and the server stops. Can anyone say what is going
wrong? I see that there is a module, syncprov.la, in the path given by "modulepath".
Thanks in advance,
slapd init: initiated server.
bdb_back_initialize: initialize BDB backend
bdb_back_initialize: Berkeley DB 4.6.21: (September 27, 2007)
hdb_back_initialize: initialize HDB backend
hdb_back_initialize: Berkeley DB 4.6.21: (September 27, 2007)
bdb_db_init: Initializing BDB database
>>> dnPrettyNormal: <o=example>
<<< dnPrettyNormal: <o=example>, <o=example>
>>> dnPrettyNormal: <cn=root, o=example>
<<< dnPrettyNormal: <cn=root,o=example>, <cn=root,o=example>
overlay "syncprov" not found
slapd destroy: freeing system resources.
access to dn.base="" by * read
access to dn.base="cn=Subschema" by * read
access to *
by self write
by users read
by anonymous auth
by dn="cn=replica,o=example" read
rootdn "cn=root, o=example"
index objectClass,entryCSN,entryUUID eq
syncprov-checkpoint 10 5
rootdn "cn=replica, o=example"
index default pres,eq
index objectClass,entryCSN,entryUUID eq
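The "overlay "syncprov" not found" error usually means slapd never loaded the module, even though the file exists under modulepath. A minimal sketch of what the global section of slapd.conf would need (the modulepath shown is an assumption; use the one already in your config):

```
# Global section of slapd.conf, before any database definitions
modulepath  /usr/lib/ldap      # assumption: adjust to your installation
moduleload  syncprov.la       # must be loaded before "overlay syncprov"
```

With the module loaded, the `overlay syncprov` line in the database section should then resolve.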
Who is this Quanah guy who keeps coming out with the wrong info? ;)
On 2/9/08, Quanah Gibson-Mount <quanah(a)zimbra.com> wrote:
> --On Saturday, February 09, 2008 9:27 AM -0800 James Hartley
> <james.hartley(a)gmail.com> wrote:
> > ah very good.... The documentation I read seems to imply the number was
> > tied to the replica,
> > ie if the slave one used rid 001 then slave2 would use rid002.
> > According to your email
> > I would need rid001 for database 1 and rid002 for database 2 in the
> > conf file for slave 1 and if I had another slave, say slave2 I would
> > need to have an rid003 and rid004.
> > Thank you for clarifying this point.
> Keep replies on the list. And no, you are incorrect still. The RIDs need
> to be unique per database, on a given replica. So rid001 and rid002 on
> replica1, rid001 and rid002 on replica2, is just fine.
> Quanah Gibson-Mount
> Principal Software Engineer
> Zimbra, Inc
> Zimbra :: the leader in open source messaging and collaboration
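The rule described above (RIDs unique per database within one replica, reusable across replicas) can be sketched like this in each replica's slapd.conf. The suffixes and provider URI below are placeholders, not taken from the thread:

```
# replica1 (and, identically, replica2) -- rids only need to be
# unique among the databases of this one slapd instance
database  bdb
suffix    "dc=db1,dc=example,dc=com"
syncrepl  rid=001
          provider=ldap://master.example.com
          searchbase="dc=db1,dc=example,dc=com"
          # ... remaining syncrepl parameters omitted

database  bdb
suffix    "dc=db2,dc=example,dc=com"
syncrepl  rid=002
          provider=ldap://master.example.com
          searchbase="dc=db2,dc=example,dc=com"
          # ... remaining syncrepl parameters omitted
```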
I set up OpenLDAP & MIT Kerberos successfully. I created a self-signed certificate for OpenLDAP and configured the server to work only over ldaps. I migrated all existing users and groups to OpenLDAP. Everything was working just fine until I added a new group object using ldapadd and then deleted it using ldapdelete; since then, ldapsearch takes a very long time to complete. It returns the correct results, but only after a very long time. I tried ldapsearch -d8 to see what is going on, and here are the errors I got:
TLS certificate verification: Error, self signed certificate
TLS certificate verification: depth: 0, err: 18, subject: [SOME INFORMATION HERE]
TLS trace: SSL_connect:SSLv3 read server certificate A
TLS trace: SSL_connect:SSLv3 read server done A
TLS trace: SSL_connect:SSLv3 write client key exchange A
TLS trace: SSL_connect:SSLv3 write change cipher spec A
TLS trace: SSL_connect:SSLv3 write finished A
TLS trace: SSL_connect:SSLv3 flush data
TLS trace: SSL_connect:SSLv3 read finished A
TLS trace: SSL3 alert write:warning:bad certificate
TLS: unable to get peer certificate.
Do you think the delay is related to the above? What is wrong with OpenLDAP? I did not touch any configuration, only ldapadd and ldapdelete! This piece of software is very unstable :( Please help.
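For what it's worth, the "self signed certificate" verification failure in the trace above is a client-side symptom: the client's ldap.conf does not trust the server certificate. A sketch of the client-side fix (the file path and certificate name are assumptions):

```
# /etc/ldap/ldap.conf (or /etc/openldap/ldap.conf, depending on the build)
# point the client at the self-signed certificate (or its CA) so
# verification succeeds instead of aborting the TLS handshake
TLS_CACERT  /etc/ssl/certs/myldap-ca.pem
```

Whether this also explains the slow searches is a separate question; repeated failed TLS handshakes could plausibly account for part of the delay.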
Oren Laadan wrote:
> Howard Chu wrote:
>> You haven't provided any information to explain why you cannot structure
>> your additional entries as a distinct subtree. You're still just
>> handwaving when we ask for concrete examples of the entries involved.
> Clearly I'm new to LDAP. Please indicate what information is missing,
> I'll be happy to provide, even the local database (my .ldif file) and
> sample queries from the remote server. Just name it.
> Taking a step back: we have a departmental LDAP server for user auth,
> (posix) groups, autofs maps and so on. In my group, we add to the DB
> groups and autofs maps that do not exist on the remote server, so a
> user on our machines can belong to additional groups.
> I am not arguing that I cannot structure it differently. I simply do
> not know if I can structure it differently. Ideally I could add entries
> to the remote database, but that is impossible. The remote server
> gives DN dc=MAIN,dc=EXAMPLE,dc=COM, which is what I made the local
> server give (via the meta backend) and which is what the clients are
> using as their base DN.
Since it appears that you just need to make your data work with
pam_ldap/nss_ldap I suggest you (1) keep your local data in a distinct subtree
and (2) read the pam/nss_ldap documentation regarding the use of multiple
service search descriptors. There's no reason to be using suffixmassage here.
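A sketch of what the multiple-search-descriptor approach can look like in the pam_ldap/nss_ldap client configuration, for nss_ldap versions that support repeating a descriptor. The subtree names are invented for illustration:

```
# /etc/ldap.conf for nss_ldap -- one extra descriptor per map, so
# lookups consult both the remote subtree and the local one
nss_base_group      ou=Groups,dc=main,dc=example,dc=com?one
nss_base_group      ou=Groups,dc=local,dc=example,dc=com?one
nss_base_automount  ou=automount,dc=local,dc=example,dc=com?one
```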
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
I want to set up a local ldap server for my team that will extend a remote
ldap server (whose database is inaccessible to me and I cannot simply
replicate) with a small number (less than 100) of new (local) entries.
For example, the local server may add entries for new users only in my
team, but also support authentication of all users in the remote server.
I tried to use back-meta, which seems most suitable for merging data
from multiple targets. Assume the DN base is "dc=EXAMPLE,dc=COM",
which is what the clients use.
To set it up, I used the following config snippets:
# bdb backend, with a "local" DN base different from the main one
# not intended to serve clients, but to serve the meta backend only
# meta backend, with the right DN base, serving the clients
suffixmassage "dc=EXAMPLE,dc=COM" "dc=TMP,dc=EXAMPLE,dc=COM"
There is a local database for subtree dc=TMP,dc=EXAMPLE,dc=COM (which
isn't used by the clients). This database holds the additional entries.
(Clearly, I cannot have used the same DN base).
The main database (used by the clients), dc=EXAMPLE,dc=COM is a meta-
backend, which forwards queries to both the remote server and the local
database. With the latter, it uses suffixmassage to convert from the
real DN to the local database DN and back.
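Pieced together, the configuration described above might look roughly like this (the hostnames and port are placeholders, not the real ones):

```
# local bdb database, only reached through the meta backend
database  bdb
suffix    "dc=TMP,dc=EXAMPLE,dc=COM"

# meta backend seen by the clients; one target per uri line, and
# suffixmassage applies to the most recently listed target
database  meta
suffix    "dc=EXAMPLE,dc=COM"
uri       "ldap://remote.example.com/dc=EXAMPLE,dc=COM"
uri       "ldap://localhost/dc=EXAMPLE,dc=COM"
suffixmassage "dc=EXAMPLE,dc=COM" "dc=TMP,dc=EXAMPLE,dc=COM"
```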
There are two problems with this configuration: first, it is suboptimal
because it requires multiple threads to handle the self-referral to the
local database. More importantly, due to a problem in the server, this
leads to random lockups of the server.
The discussion in the -bugs list (the full thread is there) suggested three alternatives:
(1) use overlay translucent
(2) use back-relay
(3) use back-ldap to the remote server and subordinate glue for local db
(1) I tried the translucent overlay (see config below), and the local
database entries didn't show up. When the overlay was turned off, they
did show up (but without the remote entries ...). Indeed, the man page
says it should be used to override and/or modify attributes of entries
coming from the remote server; it doesn't say anything about being able
to add new entries.
# bdb backend, local database, same DN base
(2) back-relay does not merge two databases; instead, it makes the job
of relaying to the same server internal and therefore much more efficient.
(3) I tried the config below, but it wouldn't run ... so I'm not sure what
the right config should be. Again, I think the issue here is that a
subordinate database has to be a subtree of the remote server, unlike
the simple merge that I require (both remote and local at the same level).
The only other solution I can think of is to run two (!) separate ldap
instances on the local server machine to avoid the lockup problem I've
been experiencing. Ugly ...
(Note: I tried both 2.3.39 and 2.4.7)
Hopefully *someone* will know how to successfully get my setup to work.
I have a person object with the following entry in LDIF:
I then create another posix object for the above person using alias:
I have another ou then use alias to refer to the posixAccount:
But when I search the entry, I get nothing for the posixAccount:
ldapsearch -a always -x -b "ou=deer,dc=estream,dc=com,dc=my"
I expect the alias dereference to return the correct result for me. It seems
like the alias object class cannot be mixed with other object classes in OpenLDAP.
If I mix all the object classes into one object, the ldapsearch returns the expected result.
The reason I refactored it to that level of detail is that I wish to create more than one
posixAccount for the same person on different servers. Please advise on how to
achieve that, or whether it is discouraged to construct the DIT in such a manner.
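For reference, since the original entries were not included, a hypothetical version of the alias entry described above might look like this in LDIF (all DNs and attribute values are invented for illustration):

```
dn: uid=chee,ou=deer,dc=estream,dc=com,dc=my
objectClass: alias
objectClass: extensibleObject
uid: chee
aliasedObjectName: uid=chee,ou=people,dc=estream,dc=com,dc=my
```

Per the standard schema, an alias entry essentially carries only its naming attribute and aliasedObjectName; combining alias with a structural class like posixAccount in one entry is what tends to fail.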
Thank you very much
Chau Chee Yang
E Stream Software Sdn Bhd
SQL Financial Accounting
I'm trying to figure out what my ACL should be in slapd.conf. What I
want is that a user can change his/her password, but they won't be able
to read any other user's password. Right now what I have is not
restrictive enough. I've read the OpenLDAP admin guide on ACLs but it
was not clear to me what I should use. What I have currently is below.
What do I need to change it to in order to get the results I want?
access to attrs=userPassword,sambaLMPassword,sambaNTPassword
by self write
by anonymous auth
by * read
by * none
access to *
by * read
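A sketch of an ACL that matches the stated goal (the owner may change the password, nobody else may read it), reusing the attribute list from the config above:

```
# password attributes: owner may write, anyone may use them to
# authenticate (bind), nobody may read them
access to attrs=userPassword,sambaLMPassword,sambaNTPassword
        by self write
        by anonymous auth
        by * none

access to *
        by * read
```

The "by * read" clause in the original is what exposes other users' password hashes: slapd stops at the first matching <who> clause, so the "by * none" after it never matched.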
I've got a situation that may be unique to our site and I'm wondering if
anyone might have an idea on how to accomplish something. I'll have to
set the ground work with a bit of an explanation first.
A long time ago we chose to allow/deny authentication on the whole by
allowing/denying access to the userPassword attribute. This is done by
an accountActive attribute set to "Y" or "N" which is then used in an
ACL filter statement for auth access to the userPassword attribute. This
gave us the capability to allow/deny access across the board from a single attribute.
As time went on we added application based attributes set to "Y" or "N"
that would allow a DN to authenticate even though the accountActive
attribute was set to "N". This again was accomplished by an ACL filter
statement for auth access to the userPassword attribute when the
requesting source was a known server (DNS) running/hosting that application.
This has worked well for us.
But now we need to be able to authenticate a DN that has the
accountActive attribute set to "N". We can't use the above method with
an application based attribute in conjunction with a known server so
we're looking for an alternative.
Using a privileged admin type DN that is allowed auth access to the
userPassword attribute along with an ACL filter statement seems like the
way to go. But implementing this technique appears easier said than done.
The original thought was to bind as the privileged admin DN and then do
a, for lack of a better term, sub-bind as the user's DN, in hopes that the
original bind as the privileged admin DN would then allow this
restricted authentication to succeed. Well, we have not been able to
accomplish this for probably one of two reasons. We're either doing
something wrong, or it's just not possible.
So, if anyone out there knows of a possible way to accomplish this type
of authentication or, for those who like a challenge, I'd like to hear
any and all ideas on how to possibly accomplish this.
It's already been proposed, and a successful proof of concept was done,
where we encrypt the user's password and then pass it to LDAP via an
ldapcompare, using the result to determine successful/failed
authentication, but that method was not warmly received, mainly due to
having to do it in code. So I guess one of the constraints to accomplish
this is to use only established LDAP calls and/or functions, i.e. pass
the DN and password without any manipulation of the password.
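For completeness, the compare-based proof of concept mentioned above boils down to something like this (the DNs and the hash value are invented for illustration):

```
# bind as the privileged admin DN and compare a pre-hashed value
# against the user's stored userPassword
ldapcompare -x -D "cn=admin,o=example" -W \
    "uid=jdoe,ou=people,o=example" \
    "userPassword:{SSHA}base64hashgoeshere"
```

An exit status of 6 (compare true) versus 5 (compare false) then stands in for a successful/failed authentication, which is indeed the part that has to be handled in code.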
Thanks for taking the time to read this and considering this.