I have two LDAP instances in mirror mode, each with its own consumer.
For a few days now, even though the contextCSN attributes indicate that all instances are in sync (as shown below), this is not actually the case!
I ran ldapsearch against the two providers for one branch and found a difference in the number of entries.
I also ran it against the consumers and saw a difference between the consumer and provider for sys1, but not between the consumer and provider for sys2, which were in sync. Very strange...
sys1 - 389 (consumer)
sys1 - 3892 (provider)
sys2 - 389 (consumer)
sys2 - 3892 (provider)
I also stopped and restarted the providers to force a replication cycle to catch up on the differences, but even though they did replicate with each other, the difference in that branch did not disappear.
Can you help me find the root cause of these out-of-sync mirrored systems?
How can I check this more deeply?
Where should I look to understand why I get this difference?
Could it be related to system performance (memory, etc.)?
Thanks in advance for your help.
PS. We use OpenLDAP version 2.4.35 under Solaris 10 (zoned systems).
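For reference, contextCSN alone can look consistent while entries still diverge, so comparing per-replica entry counts is a useful cross-check. A minimal sketch, assuming the host:port pairs and the base DN are placeholders for your setup (add -D/-w as your ACLs require):

```shell
# Count entries under one branch on each replica; hosts, ports and the
# base DN below are placeholders.
count_entries() {
    # Reads LDIF on stdin and counts entries by their "dn:" lines.
    grep -c '^dn:'
}

for HOST in sys1:389 sys1:3892 sys2:389 sys2:3892; do
    echo "$HOST: $(ldapsearch -x -H "ldap://$HOST" \
        -b 'ou=branch,dc=example,dc=com' -LLL dn 2>/dev/null | count_entries) entries"
done
```

If the counts differ, comparing the sorted DN lists of two replicas (for example with comm) shows exactly which entries are missing.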
We're currently going through all of our SSL/TLS-using apps to disable
SSLv3 and update the accepted cipher lists, along with other current
best practices. I don't see any way to disable SSL compression in
OpenLDAP. Doesn't SSL compression with LDAP traffic lead to the same
issue as it does with web traffic?
Also, are there any plans to support ECDHE ciphers in OpenLDAP? I see
there's an ITS ticket about it; it's rather old, and the last update
questioned whether those ciphers should be avoided due to potential NSA
meddling in their design.
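As an external cross-check (not an OpenLDAP setting), you can see whether a TLS endpoint actually negotiates compression by parsing the output of openssl s_client; the hostname below is a placeholder:

```shell
# Print the compression method a TLS endpoint negotiated, taken from
# `openssl s_client` output ("NONE" means compression is off).
tls_compression() {
    awk -F': *' '/^Compression:/ { print $2; exit }'
}

# Real probe (placeholder host; 636 = ldaps):
#   echo | openssl s_client -connect ldap.example.com:636 2>/dev/null | tls_compression
```

Many distributions also build OpenSSL with compression disabled globally, in which case every endpoint will report NONE regardless of application settings.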
I'm having trouble binding through the chain overlay. I have two servers;
server1 has a referral OU pointing to another server (server2). Server1
has the following configuration:
olcDbStartTLS: none starttls=no
olcDbIDAssertBind: mode=self flags=prescriptive,proxy-authz-non-critical
bindmethod=simple timeout=0 network-timeout=0
binddn="cn=admin,dc=example,dc=ar" credentials="password" keepalive=0:0:0
From server1 I can make changes and searches against entries on server2
without problems (the chaining works fine for those), but when I try to
bind, I get invalid credentials.
mboscovich@mambo-tango:~$ ldapwhoami -vvv -h server1 -x -D
ldap_initialize( ldap://server1:389 )
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
If I run the same query directly against server2, where the entry is hosted
(so chaining is not used), the bind succeeds:
mboscovich@mambo-tango:~$ ldapwhoami -vvv -h server2 -x -D
ldap_initialize( ldap://server2:389 )
Enter LDAP Password:
Result: Success (0)
The logs on server1 when it fails show this:
Dec 8 19:19:55 server1 slapd: conn=1014 fd=20 ACCEPT from IP=
Dec 8 19:19:55 server1 slapd: conn=1014 op=0 BIND
Dec 8 19:19:55 server1 slapd: conn=1014 op=0 RESULT tag=97 err=49
Dec 8 19:19:55 server1 slapd: conn=1014 op=1 UNBIND
Dec 8 19:19:55 server1 slapd: conn=1014 fd=20 closed
and on server2 I couldn't see any log entries in this case.
What am I doing wrong?
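One way to see what server1 actually does with the bind internally (and whether it ever contacts server2) is to raise the log level temporarily. A hedged sketch, assuming cn=config is managed over ldapi with the usual SASL EXTERNAL access:

```ldif
# Raise slapd logging while debugging, applied e.g. via:
#   ldapmodify -Y EXTERNAL -H ldapi:/// -f raise-loglevel.ldif
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats stats2 trace
```

The trace output is very verbose, so revert to your normal olcLogLevel once you have captured one failing bind.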
I have a replicated LDAP directory and a few Windows PCs that want to
authenticate using Samba. Normally I use "smbpasswd -w" to store the LDAP
admin DN's password, but because this is a replica there is no LDAP admin!
Is there a way to authenticate using a replicated LDAP?
The users are created on the master.
I am using the "refreshAndPersist" type of replication.
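For what it's worth, one common setup is to point Samba's passdb at the master (where writes and the admin DN exist), even when NSS/PAM use the local replica. A hedged smb.conf sketch with placeholder host and DNs:

```
[global]
    # Samba writes (password changes, SAM updates) go to the master:
    passdb backend = ldapsam:ldap://master.example.com
    ldap admin dn = cn=admin,dc=example,dc=com
    ldap suffix = dc=example,dc=com
    ldap ssl = start tls
```

With that in place, "smbpasswd -w" stores the admin DN's password in secrets.tdb and Samba no longer needs write access to the replica.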
Paul van der Vlis.
Paul van der Vlis Linux systeembeheer Groningen
A number of the mirror sites have gone offline over the years. Anyone
interested in running a new mirror for us?
-------- Forwarded Message --------
Subject: (ITS#8331) Download mirror list needs updating
Date: Fri, 04 Dec 2015 10:01:12 +0000
Full_Name: Andrew Findlay
Submission from: (NULL) (2001:8b0:8d0:f7e1::94)
The Netherlands mirror has vanished: ftp.nl.uu.net does not resolve in DNS.
I have a question regarding growing an LMDB database when a write transaction hits MDB_MAP_FULL.
I would like to avoid defining a high mapsize value because my application will contain many MDB_envs, and because I have Windows users (Windows allocates the whole file on the disk).
Based on the intuition that MDB_MAP_FULL should not leave the database in a weird state, I ran the following little experiment. When MDB_MAP_FULL is encountered, I tried to:
* copy the current env (mdb_env_copy) into another directory (fine: it does not seem to contain uncommitted data)
* reset the transaction "error bit" (modified the LMDB code to add a "txn->mt_flags &= ~MDB_TXN_ERROR;" somewhere)
* commit the transaction
* close the database
* close the env
* reopen it with a higher mapsize value
* reopen the database
* create another transaction
* continue writing
... and it seems to be working pretty well.
Assuming I am ready to relax some of the ACID requirements, does it sound reasonable to think that MDB_MAP_FULL does not leave LMDB in a weird state? And that the trick described above should always work? By "working" I mean: the copied environment will never contain uncommitted data (so I can rely on it to implement a kind of rollback), and the reopened environment will always be valid and contain the expected data (everything written before hitting MDB_MAP_FULL)?
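For comparison, a sketch of the documented way to grow the map without patching LMDB or reopening the environment: abort the failed transaction, call mdb_env_set_mapsize() (which may be called when no transactions are active), and retry the write in a fresh transaction. Names and the grow_to size are illustrative:

```c
#include <stddef.h>
#include "lmdb.h"

/* Try a put; on MDB_MAP_FULL, abort the txn, grow the map, and retry once.
 * A txn that returned an error must be aborted before anything else. */
static int put_with_resize(MDB_env *env, MDB_dbi dbi,
                           MDB_val *key, MDB_val *val, size_t grow_to)
{
    MDB_txn *txn;
    int rc = mdb_txn_begin(env, NULL, 0, &txn);
    if (rc) return rc;
    rc = mdb_put(txn, dbi, key, val, 0);
    if (rc == MDB_MAP_FULL) {
        mdb_txn_abort(txn);                      /* discard the failed txn */
        rc = mdb_env_set_mapsize(env, grow_to);  /* grow; no reopen needed */
        if (rc) return rc;
        rc = mdb_txn_begin(env, NULL, 0, &txn);  /* retry in a fresh txn   */
        if (rc) return rc;
        rc = mdb_put(txn, dbi, key, val, 0);
    }
    if (rc) { mdb_txn_abort(txn); return rc; }
    return mdb_txn_commit(txn);
}
```

This sidesteps the question of clearing MDB_TXN_ERROR entirely: the aborted transaction's pages were never committed, so nothing from it reaches the file.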
Thanks in advance for any insight,
I may soon need to implement "computed" attributes in LDAP, to
accommodate dumb clients that are unable to properly update the database.
For example, an attribute masterAttr may have values like "A:B" (its
value updated by the dumb client), but other clients need the A or B
part separately. So whenever masterAttr is updated with the value "A:B",
firstPartAttr has to be updated with "A" and secondPartAttr with "B".
What are my options for achieving this?
- Is there an overlay like slapo-rwm but for attribute values? I
searched but did not find anything, so I guess the answer is no.
- Using a combination of back-perl, back-relay and slapo-translucent? Is
that even possible?
- Using back-sock as an overlay to monitor modifications and update the
modified objects accordingly?
- A script that monitors the accesslog database and updates the modified
entries?
- Biting the bullet and writing an overlay myself?
Before diving too deep into this issue (and possibly drowning), I
figured I would ask. Any thoughts? Ideas?
Thanks in advance, best regards,
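If the accesslog-watching route is chosen, the value-splitting part itself is trivial; a minimal sketch that turns one masterAttr value into the LDIF update for the two derived attributes (the attribute names and DN are the hypothetical ones from above):

```shell
# Emit LDIF deriving firstPartAttr/secondPartAttr from an "A:B"-style
# masterAttr value; pipe the output into ldapmodify.
split_master() {
    dn=$1 val=$2
    printf 'dn: %s\nchangetype: modify\nreplace: firstPartAttr\nfirstPartAttr: %s\n-\nreplace: secondPartAttr\nsecondPartAttr: %s\n' \
        "$dn" "${val%%:*}" "${val#*:}"
}

# Usage (placeholder DN and credentials):
#   split_master 'uid=x,dc=example,dc=com' 'A:B' | ldapmodify -x -D cn=admin,dc=example,dc=com -W
```

Note that doing this outside slapd (rather than in an overlay) means the derived attributes lag the master attribute briefly and are not updated atomically with it.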
*Jephté CLAIN | Developer, Application Integrator*
Service Systèmes d'Information
Direction des Systèmes d'Information <http://dsi.univ-reunion.fr>
Tél: +262 262 93 86 31 <tel:+262262938631> || Gsm: +262 692 29 58 24
www.univ-reunion.fr <http://www.univ-reunion.fr> || Facebook
|| Twitter <http://twitter.com/univ_reunion>
If you know how to build OpenLDAP manually, and would like to participate
in testing the next set of code for the 2.4.43 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
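The steps above can be sketched as follows; the repository URL and branch name here are assumptions based on the usual OpenLDAP git layout, so check the release announcement for the official source:

```shell
# Fetch, build, and test the RE24 branch (URL and branch name assumed).
git clone -b OPENLDAP_REL_ENG_2_4 https://git.openldap.org/openldap/openldap.git
cd openldap
./configure
make depend
make
make test
```

The test suite takes a while; failures are logged under tests/testrun/ for inspection.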
Zimbra :: the leader in open source messaging and collaboration
I see very strange searches in my slapd log, and wonder what I may have misconfigured.
On every SSH connection (with an SSH key, not a password):
Search for the TTY:
slapd: conn=1000 op=307 SRCH base="dc=mydomain,dc=lan" scope=2 deref=0
slapd: conn=1000 op=307 SRCH attr=uid userPassword uidNumber gidNumber cn
homeDirectory loginShell gecos description objectClass
For the date:
slapd: conn=1000 op=308 SRCH base="dc=mydomain,dc=lan" scope=2 deref=0
slapd: conn=1000 op=309 SRCH base="dc=mydomain,dc=lan" scope=2 deref=0
slapd: conn=1000 op=310 SRCH base="dc=mydomain,dc=lan" scope=2 deref=0
(But I don't see "uid=root" when logging in over SSH with a key.)
I wouldn't expect to see a search for "root", since it's a system account, and I use
a key, so I would expect LDAP to be completely out of the picture.
However, I do see many searches in the logs for other system accounts:
Most seem to be triggered by the standard system cron jobs or service restarts etc.
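For what it's worth, one common source of LDAP searches for local system accounts is the initgroups() lookup done at session setup and by cron; with "compat ldap" in nsswitch.conf those fall through to the LDAP module even for accounts that exist only in /etc/passwd. If your NSS module supports it, listing local accounts in nss_initgroups_ignoreusers avoids those queries; a hedged sketch (the option exists in nslcd and in recent libnss-ldap, and the exact file depends on your packages):

```
# /etc/nslcd.conf (with nslcd) or /etc/libnss-ldap.conf, depending on
# which NSS LDAP module is installed -- skip LDAP group-membership
# (initgroups) lookups for purely local accounts:
nss_initgroups_ignoreusers root,daemon,bin,sys,man,lp,mail,news
```

This would not explain plain passwd-style searches for "root", but it typically eliminates most of the cron-triggered traffic.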
The system is Debian 8.2 "Jessie". The following packages related to LDAP or PAM are installed:
At this point it's difficult for me to know what may be relevant, so I'm afraid I
have to paste a lot of stuff here in the hope that it includes a clue for someone...
# egrep 'cache|check' /etc/nscd.conf
enable-cache passwd yes
check-files passwd yes
enable-cache group yes
check-files group yes
enable-cache hosts yes
check-files hosts yes
enable-cache services yes
check-files services yes
enable-cache netgroup yes
check-files netgroup yes
# grep ldap /etc/nsswitch.conf
passwd: compat ldap
group: compat ldap
shadow: compat ldap
# listconf /etc/pam_ldap.conf
# listconf /etc/pam.d/common-auth
auth [success=2 default=ignore] pam_unix.so nullok_secure
auth [success=1 default=ignore] pam_ldap.so use_first_pass
auth requisite pam_deny.so
auth required pam_permit.so
# listconf /etc/pam.d/common-account
account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so
account [success=1 default=ignore] pam_ldap.so
account requisite pam_deny.so
account required pam_permit.so
# listconf /etc/pam.d/common-password
password [success=2 default=ignore] pam_unix.so obscure sha512
password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok
password requisite pam_deny.so
password required pam_permit.so
# listconf /etc/pam.d/common-session
session [default=1] pam_permit.so
session requisite pam_deny.so
session required pam_permit.so
session required pam_unix.so
session optional pam_ldap.so
My LDAP olcLogLevel is "filter stats sync". Please let me know if the other lines of
that log might be useful, or if other log levels should be enabled (I tried, but didn't
notice anything interesting).
Well, if you have read this far, now is the time to tell me that this is all useless
and that I should have posted that other essential config file which I missed... :-)
Thanks for any help in solving this mystery,