OpenLDAP with SSL connection and search only with wildcard
by a.leurs@consense-gmbh.de
Hello,
I have successfully set up my SSL connection to the OpenLDAP proxy, and from OpenLDAP to the two different Active Directories.
But when I perform a search with only a wildcard (e.g. (sn=*)), I don't get any results.
A search with the filter (sn=l*) works fine: I get all users whose last name starts with the letter 'l'.
When I switch back to LDAP instead of LDAPS it works fine.
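For context, (sn=*) is a presence filter while (sn=l*) is a substring filter: they are distinct filter types in RFC 4515 and can take different code paths in a proxy backend. A minimal Python sketch of the difference in matching semantics (the entries are made up; real evaluation happens server-side, this is not OpenLDAP code):

```python
# Illustrative only: presence vs. initial-substring filter semantics.
def matches_presence(entry, attr):
    """(attr=*): true when the attribute exists with any value."""
    return bool(entry.get(attr))

def matches_substring_initial(entry, attr, prefix):
    """(attr=prefix*): true when some value starts with the prefix."""
    return any(v.lower().startswith(prefix.lower()) for v in entry.get(attr, []))

entries = [
    {"sn": ["Leurs"]},
    {"sn": ["Miller"]},
    {},  # entry without sn at all
]

print(sum(matches_presence(e, "sn") for e in entries))                # 2
print(sum(matches_substring_initial(e, "sn", "l") for e in entries))  # 1
```

Since the two filter types are evaluated differently, a backend that answers (sn=l*) can still fail on (sn=*), e.g. when the proxied server restricts presence searches.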
Here is my slapd.conf:
#LDAP Backend configuration file
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
ucdata-path ./ucdata
include ./schema/core.schema
include ./schema/cosine.schema
include ./schema/nis.schema
include ./schema/inetorgperson.schema
pidfile ./run/slapd.pid
argsfile ./run/slapd.args
# Full log level
loglevel 32768 16384 2048 1024 512 256 128 64 32 16 8 4 2 1
sizelimit unlimited
timelimit unlimited
# Enable TLS if port is defined for ldaps (to openldap)
TLSVerifyClient never
TLSCipherSuite HIGH:MEDIUM:-SSLv2:-SSLv3
TLSProtocolMin 3.3
TLSCertificateFile ./secure/certs/maxcrc.cert.pem
TLSCertificateKeyFile ./secure/certs/maxcrc.key.pem
TLSCACertificateFile ./secure/certs/maxcrc.cert.pem
# Configuration for Connection to example.com
database meta
suffix "DC=example,DC=com"
rootdn "DC=example,DC=com"
rebind-as-user yes
uri ldaps://example.com:636/dc=example,DC=com
lastmod off
chase-referrals no
idassert-bind bindmethod=simple
binddn="cn=CN=username,OU=Users,OU=Orga,DC=example,DC=com"
credentials="XXXX"
tls_reqcert=never
tls_cacert=./secure/certs/example.pem
tls ldaps tls_reqcert=allow tls_cacert=./secure/certs/example.pem
# Configuration for Connection to Test-LDAP
uri ldap://ldap.andrew.cmu.edu/dc=test,dc=example,dc=com
suffixmassage "dc=test,dc=example,dc=com" "dc=edu,dc=meta,dc=com"
overlay rwm
rwm-map attribute uid samaccountname
rwm-map attribute member memberOf
rwm-map objectclass inetOrgPerson user
3 years, 3 months
Info needed on OpenLDAP support / compliance with FIPS 140-2
by Vijay Kumar
Hi Team,
We are using OpenLDAP version 2.4.48. We would like to know which
versions of OpenLDAP built against OpenSSL are compliant with the FIPS
140-2 standard.
Please let us know the details.
Thank you.
--
Thanks & Regards,
Vijay Kumar
*+91-94944 44009*
Best practices in storing user device data
by Nick Milas
Hello everyone,
In our (non-profit, research) organization we have been using OpenLDAP
for many years, storing people data and DNS records (for an LDAP-based DNS server).
We are now looking into how we could organize our LDAP DIT in order to
store device data (descriptions, MAC addresses, IP Addresses).
The idea is to be able to use the DIT for combined and/or independent
user- and device- based authentication throughout the network (e.g.
using TACACS, Radius pulling data from LDAP DIT or elsewhere).
Currently we are storing device data (IP and MAC addresses) using the
phpIPAM and NetDisco open source software, so the data lives in
relational databases (PostgreSQL for NetDisco, MySQL for phpIPAM), yet
network-related data is not directly associated to users in the db
schemas (except in descriptions).
In phpIPAM we are organizing our IP Spaces (public and private).
NetDisco uses SNMP to scan the network and automatically associate
end-devices ("nodes") to switches ("devices") and MAC addresses to IP
addresses.
We are currently investigating whether we should:
1. Store device data in the DIT as part of user records. Thus, each
user entry would also include info about the devices the user is
responsible for, most importantly IP Addresses assigned to them and
MAC addresses. Is this approach considered sane? If so, which Object
Class(es) would serve this need?
2. Store data in a separate branch, for example:
dn: cn=devicexxx,ou=Nodes,dc=example,dc=com
objectClass: device
objectClass: ieee802Device
objectClass: radiusprofile
objectClass: simpleSecurityObject
objectClass: top
cn: devicexxx
description: Main Server at Net Lab
l: Main Campus
macAddress: 00:24:8c:3c:xx:xx
ou: tech
owner: cn=TechAdmins,ou=Groups,dc=example,dc=com
radiusArapSecurity: 195.xxx.xxx.1
radiusArapZoneAccess: 255.255.255.128
radiusFramedIPAddress: 195.xxx.xxx.63
radiusHint: 50004
radiusNASIpAddress: 195.xxx.xxx.125
radiusTerminationAction: 33
radiusTunnelMediumType: IEEE-802
radiusTunnelPrivateGroupId: 1
radiusTunnelType: VLAN
userPassword:: ****************
We have successfully tried this approach using FreeRadius and Cisco
2960 switches, but I didn't find this solution ideal/intuitive,
especially because devices are totally disassociated from users.
It seems to be more natural to authenticate users based on their
personal (ldap-based) credentials and devices based on their MAC
addresses alone.
But of course, I may be wrong...
3. Use a non-LDAP store, e.g. MySQL.
I would be grateful to people here who have already dealt with this
issue and would be eager to share their experience.
Any reference(s) to relevant documents regarding the above will be
valuable too!
Thanks in advance.
Cheers,
Nick
Restoring from prod into qa
by John C. Pfeifer
I have both a production cluster and a qa cluster of servers. Each cluster is set up with multi-master (mirror-mode) delta-sync replication.
On a weekly basis, I need to reload the data in qa from production. My problem is that, after successfully loading the dump, there is an epic flurry of replication events which tend to exhaust my burst balances in AWS. While I could request more resources (at a greater cost), I first want to verify that I have a reasonable process.
On one of the production servers, I generate a dump:
/usr/sbin/slapcat -F /etc/openldap/slapd.d -b dc=umd,dc=edu -l dump.ldif
On each of the qa servers (simultaneously):
1) fetch the dump
2) delete the dc=umd,dc=edu and cn=accesslog LMDB files
3) /usr/sbin/slapadd -F /etc/openldap/slapd.d -b dc=umd,dc=edu -q -w -S 0 -l dump.ldif
Is this a reasonable approach?
Is the use of the ‘-S’ flag correct?
Should I be modifying the dump in any manner (e.g. deleting the entryCSN attributes)?
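If the replication-state attributes do turn out to need removing, the dump can be filtered before slapadd. A rough Python sketch, assuming only standard LDIF folding (slapcat marks continuation lines with a leading space); attribute matching is case-insensitive, and which attributes to strip is for you to decide:

```python
def strip_ldif_attrs(ldif_text, attrs=("entrycsn", "contextcsn")):
    """Remove the given attributes (and their folded continuation
    lines, which LDIF marks with a leading space) from a dump."""
    out, skipping = [], False
    for line in ldif_text.splitlines(keepends=True):
        if line.startswith(" ") and skipping:
            continue  # continuation of a line we are dropping
        skipping = any(line.lower().startswith(a + ":") for a in attrs)
        if not skipping:
            out.append(line)
    return "".join(out)

sample = (
    "dn: dc=umd,dc=edu\n"
    "objectClass: domain\n"
    "entryCSN: 20200601120000.000000Z#000000#001#000000\n"
    "dc: umd\n\n"
)
print(strip_ldif_attrs(sample))
```

The same filter applied over the whole dump file keeps dn, objectClass and data attributes intact while dropping only the named operational attributes.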
Thanks for any advice.
//
John Pfeifer
Division of Information Technology
University of Maryland, College Park
Rewriting attribute.
by Jan Hugo Prins
Hello,
I'm trying to do a rewrite using the rwm overlay:
I'm trying to rewrite uid: user1-branch1 to uid: user1
Some context:
We have the following situation:
We have a central OpenLDAP with several OUs. In these OUs we have user
sub-OUs, and a user has a UID that is a combination of their CN, a dash,
and an abbreviation of the OU they live in.
For example:
OU=Branch1,DC=Example,DC=ORG
User 1:
dn=User1,OU=Branch1,DC=Example,DC=ORG
cn=User1
uid=User1-Branch1
OU=Branch2,DC=Example,DC=ORG
User 1:
dn=User1,OU=Branch2,DC=Example,DC=ORG
cn=User1
uid=User1-Branch2
The reason this was done in the past (15 or 20 years ago) was that they
wanted to have multiple branches, and people could authenticate with the
cn within their own branch. All very complicated history, but I have to
work with it now.
Someone setup a new Samba server a while back and wanted to normalize
this Samba config a little so he created a LDAP proxy on this server
where he proxied only one OU and did a rwm map from cn to uid. Part of
this config:
overlay rwm
rwm-map attribute uid cn
This works fine to some extent. One of the problems I found just now is
that I no longer have a cn in the DNs that I get from this LDAP
proxy. Besides that, if the proxy has too much access and you search for
uid=User1, it will return User1 from both Branch1 and Branch2, and this
could result in some security issues.
For this reason I'm currently doing a little redesign of this setup.
I would like to change the rwm-map to a rewrite of the uid that
simply strips the dash and everything after it. Besides that, I'm
going to limit access for this proxy by using a proxy user with limited
access to only the OU it needs.
The access limitation works just fine.
I only need a little help with the rewrite.
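As a sanity check for whatever rwm rewrite rule ends up being used, the intended transformation itself can be expressed as a regular expression first. A small Python sketch (the assumption here is that the branch abbreviation never contains a dash, so everything from the last dash onward is stripped):

```python
import re

def strip_branch(uid):
    """Drop the trailing '-<branch>' suffix from a uid, keeping
    everything before the last dash; uids without a dash pass through."""
    return re.sub(r"-[^-]*$", "", uid)

print(strip_branch("User1-Branch1"))  # User1
print(strip_branch("User1-Branch2"))  # User1
print(strip_branch("plainuser"))      # plainuser
```

Note that anchoring on the last dash keeps a CN like "Ann-Marie" intact only as long as branch abbreviations themselves contain no dash; if that assumption doesn't hold, the pattern needs to match the known branch suffixes explicitly.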
Thanks,
Jan Hugo Prins
Configuring multiple proxies for Mirror-Mode
by wayne.mcnaught@landregistry.gov.uk
I am looking for some advice and information. I have configured multiple LDAP servers in a Mirror-Mode configuration, fronted by OpenLDAP in proxy mode.
I understand that the list contained in the DbURI attribute is used to define the backends, and all the proxies are configured with the same list. The first URI in the DbURI attribute will be used unless it fails, in which case the proxy falls back to the second URI and keeps using it until that one fails in turn. This seems fine for most failure cases, where all proxies recognise the same failure. But if communication fails between one proxy and one backend LDAP server without affecting the other proxies, writes will be directed to different backends from different proxies. Is there some way to keep the proxies in line, or to recognise a failure on one proxy and force the others to change?
Thanks in advance
Wayne McNaught
slapadd gives confusing output - str2entry
by Scott Classen
Hello,
I’m setting up a new OpenLDAP (2.4.50) server and am running into this odd error message when importing my initial slapd.ldif file:
# slapadd -v -n 0 -F slapd.d -l slapd.ldif
added: "cn=config" (00000001)
added: "cn=module{0},cn=config" (00000001)
added: "cn=schema,cn=config" (00000001)
added: "cn={0}core,cn=schema,cn=config" (00000001)
added: "cn={1}cosine,cn=schema,cn=config" (00000001)
added: "cn={2}inetorgperson,cn=schema,cn=config" (00000001)
added: "cn={3}rfc2307bis,cn=schema,cn=config" (00000001)
added: "cn={4}openldap,cn=schema,cn=config" (00000001)
added: "cn={5}ppolicy,cn=schema,cn=config" (00000001)
added: "cn={6}misc,cn=schema,cn=config" (00000001)
added: "olcDatabase={-1}frontend,cn=config" (00000001)
added: "olcDatabase={1}mdb,cn=config" (00000001)
added: "olcOverlay={0}ppolicy,olcDatabase={1}mdb,cn=config" (00000001)
added: "olcOverlay={1}memberof,olcDatabase={1}mdb,cn=config" (00000001)
added: "olcOverlay={2}refint,olcDatabase={1}mdb,cn=config" (00000001)
5ee172af str2entry: entry -1 has no dn
slapadd: could not parse entry (line=1500)
_#################### 100.00% eta none elapsed none fast!
Closing DB…
Slapadd claims to have completed 100%, but also complains about entry -1 having no dn.
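A malformed record near the reported line would be consistent with this, e.g. a stray blank line splitting an entry so that the fragment after it begins with an attribute instead of dn:. A quick Python sketch to locate such records (a diagnostic aid under that assumption, not slapadd's own parser; the file name below is a placeholder):

```python
def records_without_dn(path):
    """Yield (line_number, first_line) for LDIF records lacking a dn:,
    treating blank lines as record separators as LDIF does."""
    with open(path) as f:
        record, start = [], 1
        for n, line in enumerate(f, 1):
            if line.strip() == "":
                if record and not record[0].lower().startswith("dn:"):
                    yield start, record[0].rstrip()
                record = []
            elif line.startswith("#"):
                continue  # comments are not part of a record
            else:
                if not record:
                    start = n
                record.append(line)
        if record and not record[0].lower().startswith("dn:"):
            yield start, record[0].rstrip()

# usage (path is a placeholder):
# for lineno, first in records_without_dn("slapd.ldif"):
#     print(f"record at line {lineno} has no dn: {first}")
```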
Any advice?
Cheers,
Scott
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Scott Classen, Ph.D.
ALS-ENABLE
TomAlberTron Beamline 8.3.1
SIBYLS Beamline 12.3.1
Advanced Light Source
Lawrence Berkeley National Laboratory
1 Cyclotron Rd
MS6R2100
Berkeley, CA 94720
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SSL Multi Master setup issue
by Aric Wilisch
Hey everyone.
Just set up a multi-master configuration on two OpenLDAP 2.4 systems on CentOS 7. Replication seems to be working, and I can do ldapsearches with ldap or ldaps while I'm ON the boxes.
I'm finding that when I try to do an ldapsearch using ldaps from an external box, I get the following error:
Jun 09 18:36:29 prod-openldap-01 slapd[20102]: conn=1301 fd=19 TLS established tls_ssf=256 ssf=256
Jun 09 18:36:29 prod-openldap-01 slapd[20102]: conn=1301 fd=19 closed (connection lost)
Example search :
ldapsearch -x -LLL -W -D "cn=ldapadm,dc=<domain redacted>,dc=com" -H ldaps://public-ldap-01.<domain redacted> -b 'dc=<domain redacted>,dc=com' -s sub "(objectclass=uid)" *
in /etc/sysconfig/slapd I have the following:
SLAPD_URLS="ldapi:/// ldap://stage-openldap-01.<domain redacted> ldaps:///"
The ldap:// address reflects what was set up for the olcServerID during installation. However, if I check with slaptest -f /etc/sysconfig/slapd -v I get:
5ee10c18 /etc/sysconfig/slapd: line 10: unknown directive <SLAPD_URLS=ldapi:/// ldap://stage-openldap-01.<domain redacted>.com ldaps:///> outside backend info and database definitions.
slaptest: bad configuration file!
I haven't set up an LDAP server in years, so I'm not sure where my problem is. If I can get external auth and searches working via ldaps, the build will be complete.
Appreciate any help anyone can give.
Regards,
Aric
Re: Aw: Re: Antw: [EXT] Re: Unexpected LMDB RSS /performance difference on similar machines
by Howard Chu
Ulrich Windl wrote:
> Hi!
>
> (Sorry this mail frontend is unable to quote properly, so I top-post)
>
> In https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b... the comment basically says "Note that we don't currently support Huge pages."
The comment says huge pages are not pageable. Which is still true in all
current versions of Linux, as well as every other operating system.
>
> In https://www.openldap.org/lists/openldap-technical/201401/msg00213.html I had asked whether pages will be swapped when loading a 20GB database into 4GB of RAM and you said "No". I doubted that and in your unique way (not to call it "insulting") you said "You've already demonstrated multiple times that "as far as you know" is not far at all."
>
> Howard I think there is no need to be that rude.
I have zero tolerance for bullshit, which is what you post. You make stupid guesses
when the facts are already clearly documented, but you're too lazy to read them
yourself. Your guesses are worthless, and the actual facts are readily available.
Guesses contribute nothing but noise.
>
> Regarding the comment in https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b..., it seems the comment contradicts what you claimed in https://www.openldap.org/lists/openldap-technical/201401/msg00213.html, namely "
> We rely on the OS
> * demand-pager to read our data and page it out when memory
> * pressure from other processes is high.
> ". In mail you doubted that pages would be "swapped".
I did not "doubt" - I *know*. Because again, these are readily verifiable facts.
Which you are still ignorant of, and you continue to neglect educating yourself
about them.
The mmap'd pages that LMDB uses are pageable. They never get swapped. These are
two similar but distinct operations. If you would bother to read and educate
yourself you would understand that. Instead you continue to spout unsubstantiated
nonsense.
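The demand-paging behaviour described above can be observed with any file-backed mapping: the mapping is created up front, the kernel faults pages in only when they are touched, and clean file-backed pages can later be discarded and re-read from the file rather than written to swap. A small Python sketch of a read-only mapping, in the same spirit as LMDB's single read-only mmap (the file contents here are illustrative, not an LMDB data file):

```python
import mmap, os, tempfile

# Create a two-page file and map it read-only, the way LMDB maps its data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * 4096 + b"B" * 4096)
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Touching a byte faults its page in on demand; until then the
    # mapping reserves address space, not resident memory.
    first = mm[0:1]
    second = mm[4096:4097]
    # Clean file-backed pages like these can simply be dropped under
    # memory pressure and re-read later -- paged out, never swapped.
    mm.close()

os.unlink(path)
print(first, second)
```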
>
> Regards,
> Ulrich
>
>>>> Howard Chu 08.06.2020, 14:02 >>>
> Ulrich Windl wrote:
>>>>> Howard Chu <hyc(a)symas.com> schrieb am 07.06.2020 um 22:44 in Nachricht
>> <14412_1591562670_5EDD51AD_14412_294_1_79b319e0-fa23-a622-893b-b1b558a9385c@syma
>> .com>:
>>> Alec Matusis wrote:
>>>> 2. dd reads the entire environment file into system file buffers (93GB).
>>> Then when the entire environment is cached, I run the binary with
>>> MDB_NORDAHEAD, but now it reads 80GB into shared memory, like when
>>> MDB_NORDAHEAD is not set. Is this expected? Can it be prevented?
>>>
>>> It's not reading anything, since the data is already cached in memory.
>>>
>>> Is this expected? Yes - the data is already present, and LMDB always
>>> requests a single mmap for the entire size of the environment. Since
>>> the physical memory is already assigned, the mmap contains it all.
>>>
>>> Can it be prevented - why does it matter? If any other process needs
>>> to use the RAM, it will get it automatically.
>>
>> While reading this: The amount of memory could suggest that using the
>> hugepages feature could speed up things, especially if most of the mmapped data
>> is expected to reside in RAM. Hugepages need to be enabled using
>> vm.nr_hugepages=... in /etc/sysctl.conf (or corresponding). However I don't
>> know whether LMDB can use them.
>>
>> A current AMD CPU offers these page sizes: 4k, 2M, and 1G, but some VMs (like
>> Xen) can't use it. On the system I see, hugepages are 2MB in size. I don't know
>> what the internal block size of LMDB is, but likely it would benefit to match
>> the hugepage size if using it...
> https://git.openldap.org/openldap/openldap/-/blob/13f3bcd59c2055d53e4759b...
> We have already had this discussion, and your suggestion was irrelevant back then too.
> https://www.openldap.org/lists/openldap-technical/201401/msg00213.html
> Please stop posting disinformation.
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
multi-threaded argon2 hashing
by Manuela Mandache
Hello all,
I compiled pw-argon2 for OpenLDAP 2.4.44 (running on CentOS 7) and configured the directory to use the {ARGON2} password scheme. Everything works fine, except that the parallelism seems to remain 1 no matter what parameter I give when I load the module. Memory usage and number of iterations do follow the values I give at module load.
Here's cn=module,cn=config:
dn: cn=module{0},cn=config
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib64/openldap
olcModuleLoad: {0}ppolicy
olcModuleLoad: {1}syncprov
olcModuleLoad: {2}accesslog
olcModuleLoad: {3}pw-argon2 m=4096 t=8 p=8
And here's (the beginning of) a password which has been changed using ldappasswd (base64 decoded value obtained with ldapsearch):
{ARGON2}$argon2id$v=19$m=4096,t=8,p=1$7KxBUtls1NNPDK4Q4f+.......
What am I missing?
I don't know if this is relevant, libsodium version is 1.0.18 and I compiled pw-argon2 using the libraries provided by openldap-2.4.44-21.el7_6.src.rpm. Let me know if I need to provide other configuration elements.
Two more points:
- the pw-argon2 man page (and the module's README file) examples seem to have been obtained using argon2i, while the module uses argon2id;
- what salt length is used?
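For what it's worth, the parameters that were actually used can be read straight out of the stored value. A small Python sketch that parses the {ARGON2} PHC-format string shown above (the salt here is a placeholder):

```python
def argon2_params(stored):
    """Parse m/t/p out of an {ARGON2}$argon2id$v=19$m=..,t=..,p=..$... value."""
    phc = stored.removeprefix("{ARGON2}")
    # Fields: '', variant, version, comma-separated params, salt, [hash]
    fields = phc.split("$")
    params = dict(kv.split("=") for kv in fields[3].split(","))
    return {k: int(v) for k, v in params.items()}

print(argon2_params("{ARGON2}$argon2id$v=19$m=4096,t=8,p=1$c2FsdA"))
```

One possible explanation for p staying at 1, offered as a guess to verify: libsodium's crypto_pwhash API exposes no parallelism parameter, so a pw-argon2 built against libsodium rather than the Argon2 reference library may be unable to honour p>1.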
Thanks for your help, best regards,
Manuela