Re: Antw: Re: ssf Security Question
by Quanah Gibson-Mount
--On Monday, November 20, 2017 8:43 AM +0100 Ulrich Windl
<Ulrich.Windl(a)rz.uni-regensburg.de> wrote:
> Hi!
>
> BTW: Does anyone know the background of SUSE Linux Enterprise Server
> (SLES) moving from OpenLDAP to Red Hat's directory server in its next
> release?
Do you have a relevant link?
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
Openldap/authconfig authenticating multiple times
by Dave Macias
Not sure if I sent this right the first time....
I had posted this on the CentOS forums but got no help :(
https://www.centos.org/forums/viewtopic.php?f=48&t=65041&hilit=authconfig
Basic background:
3 OpenLDAP servers with multi-master replication and ppolicy pwdMaxFailure: 6.
When I try to authenticate to the Linux box, authconfig authenticates against all
3 master servers, which each return a failure, so a single bad password leaves 3
pwdFailureTime attributes on the account. As a result, after typing the password
incorrectly just twice, the user gets locked out.
I'm trying to understand why this is happening.
When I configured another clean box I don't see this behavior (one
pwdFailureTime per incorrect password attempt). I've also reinstalled the
related packages, but nothing changed. The behavior is seen against all three
master LDAP servers.
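For reference, this is roughly how I'm counting the failures on each master
(the host name and DNs below are placeholders for my real ones; pwdFailureTime
and pwdAccountLockedTime are operational attributes, so they have to be
requested by name):
ldapsearch -x -H ldap://master1.example.com \
    -D "cn=admin,dc=example,dc=com" -W \
    -b "uid=someuser,ou=people,dc=example,dc=com" -s base \
    pwdFailureTime pwdAccountLockedTime
On the affected boxes a single bad password adds three pwdFailureTime values;
on the clean box it adds one.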
Please see the link for details
Any input is appreciated.
thank you,
-dave
ldap + meta "Proxy operation retry failed" when re-binding as retrieved user for Active Directory account authentication
by Boyd, John K.
According to my slapd debug logs, after the account is not found locally, the search continues into Active Directory. I can bind with the admin query-only account and successfully retrieve the user's account information from Active Directory (I have verified this); however, I then can't rebind as that user with the user's password in order to authenticate.
I get a "Proxy operation retry failed" error:
slapd[22555]: conn=1000 fd=8 ACCEPT from IP=127.0.0.1:35848 (IP=127.0.0.1:389)
slapd[22555]: conn=1001 fd=9 ACCEPT from IP=127.0.0.1:35850 (IP=127.0.0.1:389)
slapd[22555]: conn=1000 op=0 BIND dn="cn=xxxx,ou=local" method=128
slapd[22555]: conn=1000 op=0 BIND dn="cn=xxxx,ou=local" mech=SIMPLE ssf=0
slapd[22555]: conn=1000 op=0 RESULT tag=97 err=0 text=
slapd[22555]: conn=1000 op=1 SRCH base="dc=example,dc=com" scope=2 deref=0 filter="(uid=xxxx0029)"
slapd[22555]: conn=1002 fd=11 ACCEPT from IP=127.0.0.1:35852 (IP=127.0.0.1:389)
slapd[22555]: conn=1002 op=0 BIND dn="cn=xxxx,ou=local" method=128
slapd[22555]: conn=1002 op=0 BIND dn="cn=xxxx,ou=local" mech=SIMPLE ssf=0
slapd[22555]: conn=1002 op=0 RESULT tag=97 err=0 text=
slapd[22555]: conn=1003 fd=13 ACCEPT from IP=127.0.0.1:35854 (IP=127.0.0.1:389)
slapd[22555]: conn=1003 op=0 BIND dn="cn=xxxx,ou=local" method=128
slapd[22555]: conn=1003 op=0 BIND dn="cn=xxxx,ou=local" mech=SIMPLE ssf=0
slapd[22555]: conn=1003 op=0 RESULT tag=97 err=0 text=
slapd[22555]: conn=1002 op=1 SRCH base="ou=xxxx,dc=sooner,dc=net,dc=ou,dc=edu" scope=2 deref=0 filter="(uid=xxxx0029)"
slapd[22555]: conn=1003 op=1 SRCH base="ou=local" scope=2 deref=0 filter="(uid=xxxx0029)"
slapd[22555]: conn=1003 op=1 SEARCH RESULT tag=101 err=32 nentries=0 text=
slapd[22555]: conn=1000 op=1 meta_back_search[1] match="" err=32 (No such object) text="".
slapd[22555]: conn=1002 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
slapd[22555]: conn=1000 op=1 SEARCH RESULT tag=101 err=0 nentries=1 text=
slapd[22555]: conn=1001 op=0 BIND dn="cn=11079,ou=xxxx,dc=a,dc=example,dc=com" method=128
slapd[22555]: conn=1004 fd=16 ACCEPT from IP=127.0.0.1:35858 (IP=127.0.0.1:389)
slapd[22555]: conn=1004 op=0 BIND dn="cn=11079,ou=General,ou=xxxx,dc=sooner,dc=net,dc=ou,dc=edu" method=128
slapd[22555]: conn=1004 op=0 ldap_back_retry: retrying URI="ldaps://active.directory" DN=""
slapd[22555]: conn=1004 op=0 RESULT tag=97 err=52 text=Proxy operation retry failed
slapd[22555]: conn=1004 op=1 UNBIND
slapd[22555]: conn=1001 op=0 RESULT tag=97 err=52 text=
slapd[22555]: conn=1004 fd=16 closed
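To narrow this down, the next thing I plan to try is binding straight to AD with the
rewritten DN, outside of slapd (the DN and URI below are the ones from the log and
config in this mail):
ldapwhoami -x -H ldaps://active.directory \
    -D "cn=11079,ou=General,ou=xxxx,dc=sooner,dc=net,dc=ou,dc=edu" -W
If that succeeds while the proxied bind still fails with err=52, the problem would
seem to be in the proxy's connection handling rather than in AD itself.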
Here is my meta configuration:
database meta
suffix dc=example,dc=com
# The last rwm-map line maps all other attributes to nothing.
overlay rwm
rwm-map attribute uid sAMAccountname
rwm-map attribute *
#rwm-map objectclass posixGroup group
#rwm-map objectclass posixAccount person
#rwm-map objectclass memberUid member
##
uri "ldap://127.0.0.1/dc=a,dc=example,dc=com"
suffixmassage "dc=a,dc=example,dc=com" "ou=xxxx,dc=sooner,dc=net,dc=ou,dc=edu"
rebind-as-user true
idassert-bind
bindmethod=simple
binddn="cn=XXXX,ou=local"
credentials=XXXX
mode=none
idassert-authzFrom "dn.regex:.*"
##
uri "ldap://127.0.0.1/dc=b,dc=example,dc=com"
suffixmassage "dc=b,dc=example,dc=com" "ou=local"
rebind-as-user true
idassert-bind
bindmethod=simple
binddn="cn=XXXX,ou=local"
credentials=XXXX
mode=none
idassert-authzFrom "dn.regex:.*"
##
database ldap
uri ldaps://active.directory
suffix ou=xxxx,dc=sooner,dc=net,dc=ou,dc=edu
rebind-as-user true
idassert-bind
bindmethod=simple
binddn="cn=XXXX,ou=it,ou=services,ou=accounts,dc=sooner,dc=net,dc=ou,dc=edu"
credentials=XXXX
tls_reqcert=allow
tls_cacert=/etc/letsencrypt/live/lmamr-lims.rccc.ou.edu/fullchain.pem
tls_cert=/etc/letsencrypt/live/lmamr-lims.rccc.ou.edu/cert.pem
tls_key=/etc/letsencrypt/live/lmamr-lims.rccc.ou.edu/privkey.pem
mode=none
idassert-authzFrom "dn.regex:.*"
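And this is how I reproduce the failing rebind through the proxy, using the DN that
the search returned (127.0.0.1 is the slapd running the meta backend):
ldapwhoami -x -H ldap://127.0.0.1 \
    -D "cn=11079,ou=xxxx,dc=a,dc=example,dc=com" -W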
Re: ssf Security Question
by Howard Chu
William Brown wrote:
> On Fri, 2017-11-17 at 08:34 +0100, Michael Ströder wrote:
>> William Brown wrote:
>>> Just want to point out there are some security risks with ssf
>>> settings.
>>> I have documented these here:
>>>
>>> https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_trap.html
>>
>> Nice writeup. I always considered SSF values to be naive and somewhat
>> overrated. People expect too much when looking at these numbers -
>> especially regarding the "strength" of cryptographic algorithms which
>> changes over time anyway with new cryptanalysis results coming up.
>>
>> Personally I always try to implement a TLS-is-must policy and prefer
>> LDAPS (with correct protocol and ciphersuites configured) over
>> LDAP/StartTLS to avoid this kind of pre-TLS leakage.
>> Yes, I deliberately ignore "LDAPS is deprecated". ;-]
>
> I agree. If only there was a standards working group that could
> deprecate startTLS in favour of TLS .... :)
I have to agree as well. On my own servers I also use TLS on other "plaintext"
ports too (such as pop3 and others) that no one has any business connecting to
in plaintext.
>> Furthermore some LDAP server implementations (IIRC e.g. MS AD) refuse to
>> accept SASL/GSSAPI bind requests sent over a TLS-secured channel. Which is
>> IMO also somewhat questionable.
>
> Yes, I really agree. While a plain text port exists, data leaks are
> possible. We should really improve this situation, where we have TLS
> for all data to prevent these mistakes.
>
> I think a big part of the issue is that GSSAPI forces the encryption
> layer, and can't work via an already encrypted channel. The people I
> know involved in this space are really resistant to changing this due
> to the "kerberos centric" nature of the products.
Interesting. Our libldap/liblber works fine with GSSAPI's encryption layered
over TLS and vice versa.
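For example, a GSSAPI bind layered over an ldaps:// connection with the OpenLDAP
client tools is just (the host name here is only a placeholder):
ldapsearch -Y GSSAPI -H ldaps://ldap.example.com -b "" -s base supportedSASLMechanisms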
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
[LMDB] Large transactions
by Jürgen Baier
Hi,
I have a question about LMDB (I hope this is the right mailing list for
such a question).
I'm running a benchmark (which is similar to my intended use case) which
does not behave as I hoped. I store 1 billion key/value pairs in a
single LMDB database. _In a single transaction._ The keys are MD5 hash
codes from random data (16 bytes) and the value is the string "test".
I'm using lmdbjava which currently uses LMDB 0.9.19.
The benchmark is executed on Linux (Ubuntu 17.04 with a 4.10 kernel and
a ext4 filesystem).
In the beginning the data is inserted relatively fast:
1M/1000M took:3317 ms (3s 317ms)
Then the insert performance deteriorates gradually. After inserting 642M
entries inserting 1M entries takes more than 5 minutes:
642M/1000M took:305734 ms (5m 5s 734ms)
At this point the database size (data.mdb) is about 27 GiB. The filesystem
buffer cache is about the same size, so I assume most pages are cached. Linux
still reports 28 GiB of free memory.
A short analysis with perf seems to indicate that most of the time is spent in
mdb_page_spill:
 Children    Self
  96,45%   0,00%  lmdbjava-native-library.so            [.] mdb_cursor_put
  96,45%   0,00%  lmdbjava-native-library.so            [.] mdb_put
  96,45%   0,00%  jffi8421248145368054745.so (deleted)  [.] 0xffff80428d388b3f
  60,43%   2,61%  lmdbjava-native-library.so            [.] mdb_page_spill.isra.16
  47,39%  47,39%  lmdbjava-native-library.so            [.] mdb_midl_sort
  26,07%   0,24%  lmdbjava-native-library.so            [.] mdb_page_touch
  26,07%   0,00%  lmdbjava-native-library.so            [.] mdb_cursor_touch
  25,83%   0,00%  lmdbjava-native-library.so            [.] mdb_page_unspill
  23,22%   0,24%  lmdbjava-native-library.so            [.] mdb_page_dirty
  22,99%  22,27%  lmdbjava-native-library.so            [.] mdb_mid2l_insert
  11,14%   0,24%  [kernel.kallsyms]                     [k] entry_SYSCALL_64_fastpath
  10,43%   0,47%  lmdbjava-native-library.so            [.] mdb_page_flush
   9,95%   0,00%  libpthread-2.24.so                    [.] __GI___libc_pwrite
   9,72%   0,00%  [kernel.kallsyms]                     [k] vfs_write
   9,72%   0,00%  [kernel.kallsyms]                     [k] sys_pwrite64
   9,48%   0,00%  [kernel.kallsyms]                     [k] generic_perform_write
   9,48%   0,00%  [kernel.kallsyms]                     [k] __generic_file_write_iter
   9,48%   0,00%  [kernel.kallsyms]                     [k] ext4_file_write_iter
   9,48%   0,00%  [kernel.kallsyms]                     [k] new_sync_write
   9,48%   0,00%  [kernel.kallsyms]                     [k] __vfs_write
   9,24%   0,00%  lmdbjava-native-library.so            [.] mdb_cursor_set
   8,06%   0,47%  lmdbjava-native-library.so            [.] mdb_page_search
   7,35%   0,95%  lmdbjava-native-library.so            [.] mdb_page_search_root
   4,98%   0,24%  lmdbjava-native-library.so            [.] mdb_page_get.isra.13
The documentation about mdb_page_spill says (as far as I understand)
that this function is called to prevent MDB_TXN_FULL situations. Does
this mean that my transaction is simply too large to be handled
efficiently by LMDB?
Note that a similar benchmark with 4 byte integer keys took only 2h34m
for 1000M entries (the integer keys were sorted, but I did not use
MDB_APPEND).
I understand LMDB is not write-optimized and maybe my transactions are
simply too large. However, I hope I'm just doing something wrong and I
can still use LMDB for my use case.
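In case batching is the answer, this is roughly the loop I would switch to: one
write transaction per batch instead of a single transaction for all 1000M entries.
This is only a sketch against the LMDB C API (not my actual lmdbjava code), and
the batch size of 1M is just a guess:

#include <stddef.h>
#include <lmdb.h>

#define BATCH 1000000UL              /* assumed batch size, tune as needed */

/* next_key() fills in the next 16-byte MD5 key; total is e.g. 1000000000UL */
int bulk_load(MDB_env *env, MDB_dbi dbi,
              void (*next_key)(unsigned char out[16]), unsigned long total)
{
    MDB_txn *txn = NULL;
    unsigned char kbuf[16];
    MDB_val key = { sizeof(kbuf), kbuf };
    MDB_val val = { 4, "test" };
    int rc;

    for (unsigned long i = 0; i < total; i++) {
        if (i % BATCH == 0) {        /* commit the previous batch, start a new txn */
            if (txn && (rc = mdb_txn_commit(txn)) != MDB_SUCCESS)
                return rc;
            if ((rc = mdb_txn_begin(env, NULL, 0, &txn)) != MDB_SUCCESS)
                return rc;
        }
        next_key(kbuf);
        if ((rc = mdb_put(txn, dbi, &key, &val, 0)) != MDB_SUCCESS) {
            mdb_txn_abort(txn);
            return rc;
        }
    }
    return txn ? mdb_txn_commit(txn) : MDB_SUCCESS;
}

With keys that arrive in sorted order I would also pass MDB_APPEND to mdb_put,
but that does not apply to random MD5 keys.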
Any ideas?
Thank you,
Juergen
--
Juergen Baier
ssf Security Question
by Kaya Saman
Hi,
I am a little confused by this. Basically I have a client connecting to the
database, a DECT IP phone base station which doesn't support STARTTLS, while
my slapd config has settings requiring clients to use certificates to connect.
What would be the best way to set this up so that the DECT IP client only
accesses the particular place it needs, the AddressBook section, while other
clients still have to use STARTTLS for everything else?
Currently I am looking at:
https://www.openldap.org/doc/admin24/security.html
https://www.openldap.org/doc/admin24/access-control.html
and have currently put this in my slapd.conf:
#Removed the Global? security clause
#security ssf=128
#Added generic ACL for all access to require ssf of 128
access to *
by ssf=128 self write
by ssf=128 anonymous auth
by ssf=128 users read
#Added ACL for open access to AddressBook in Read and Search only mode
access to dn.children="ou=AddressBook,dc=domain,dc=com"
by * search
by * read
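The test I intend to run is an unencrypted, anonymous search of the AddressBook,
which is roughly what the DECT base station does (the host name is a placeholder,
the base DN is the one from the ACL above):
ldapsearch -x -H ldap://ldap.domain.com -b "ou=AddressBook,dc=domain,dc=com" "(objectClass=*)"
and then checking that the same unencrypted search against the rest of the tree
returns nothing.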
Is this correct, or do I need to engage the global "security" section too?
Though the documentation suggests otherwise: "For fine-grained control, SSFs
may be used in access controls. See the Access Control section
<https://www.openldap.org/doc/admin24/access-control.html> for more
information."
Thanks.
Kaya
Re: ssf Security Question
by Quanah Gibson-Mount
--On Friday, November 17, 2017 12:53 PM +1000 William Brown
<wibrown(a)redhat.com> wrote:
Hi William,
> Hey mate,
>
> Just want to point out there are some security risks with ssf settings.
> I have documented these here:
>
> https://fy.blackhats.net.au/blog/html/2016/11/23/the_minssf_trap.html
>
> This is a flaw in the ldap protocol and can never be resolved without
> breaking the standard. The issue is that by the time the ssf check is
> done, you have already cleartexted sensitive material.
I think what you mean is: There is no way with startTLS to prevent possible
leakage of credentials when using simple binds. ;) Your blog certainly
covers this concept well, but just wanted to be very clear on what the
actual issue is. ;) I've been rather unhappy about this for a long time as
well, and have had a discussion going on the openldap-devel list about
LDAPv4 and breaking backwards compatibility to fix this protocol bug.
Another note -- The reason GSSAPI shows up as an SSF of 56 is because it
has been hard coded that way in cyrus-sasl. Starting with cyrus-sasl
version 2.1.27, which is near release, the actual SASL SSF is finally
passed back into the caller. It may be worthwhile noting this in your blog
post. ;)
Warm regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
Re: Is existing documentation kind of vague?
by Howard Chu
MJ J wrote:
> Certainly, I will make a better list tomorrow or so and send them to
> you. Generally, it relates to the areas of cn=config which are not
> runtime configurable and the lack of inline ACLs being first-class
> citizens.
>
> Basically, I feel that anything which is exposed via cn=config should
> not require an offline slapadd in order to take effect. These are not
> really huge problems for LDAP experts, but quite challenging when
> trying to train a bit less skilled people to handle operations. A
> detailed list of cn=config params which can and cannot be modified
> during runtime would help matters greatly, instead of the current
> situation of knowledge via trial and error ;-)
The intention has always been 100% runtime modifiable. Any that aren't should
be considered as bugs in 2.5.
> On Sun, Nov 19, 2017 at 8:28 PM, Howard Chu <hyc(a)symas.com> wrote:
>> MJ J wrote:
>>>
>>> I actually like 389 a lot and I have used Netscape DS extensively in
>>> managing international telecom networks about 15 years ago. There are
>>> quite many management features that are superior to OpenLDAP still to
>>> this day, but I simply cannot use it anymore because of the lack of
>>> scalability. I know the original Netscape DS devs quite extensively...
>>
>>
>> There are certainly some obvious deficiencies remaining in cn=config, and
>> we're working on addressing as many as possible for 2.5. But I'd be curious
>> to see your list of issues.
>>
>>
>>>
>>> -mike
>>>
>>> On Fri, Nov 17, 2017 at 8:34 AM, William Brown <wibrown(a)redhat.com> wrote:
>>>>
>>>> On Fri, 2017-11-17 at 08:27 +0200, MJ J wrote:
>>>>>
>>>>> No matter how you wrap poll() and select(), they will always be
>>>>> poll()
>>>>> and select() - you will always run loops around an ever increasing
>>>>> stack of file descriptors while doing I/O. BDB is always going to
>>>>> have
>>>>> the same old problems... That's what I'm talking about - sacrificing
>>>>> performance for platform portability (NSPR).
>>>>>
>>> FreeIPA could be multi-tenant, i.e. support top-level and subordinate
>>>>> kerberos realms if it supported a more sensible DIT layout. I know
>>>>> because I have built such a system (based on OpenLDAP) and deployed
>>>>> it
>>>>> internationally. Probably the best piece of code to come out of the
>>>>> project is bind-dyndb-ldap.
>>>>
>>>>
>>>> Whoa mate - I'm not here to claim that 389 is a better ldap server - we
>>>> just do some different things. We acknowledge our limitations and are
>>>> really working on them and paying down our tech debt. We want to remove
>>>> parts of nspr, replace bdb and more. :)
>>>>
>>>> I'm here to follow the progress of the openldap project, who have a
>>>> team of people I respect greatly and want to learn from, and here to
>>>> help discussions and provide input from a different perspective.
>>>>
>>>> There are things that today openldap does much better than us for
>>>> certain - and there are also some things that we do differently too
>>>> like DNA plugin uid allocation, replication etc,
>>>>
>>>> There are also project focuses and decisions made to improve
>>>> supportability in systems like FreeIPA - we can discuss them forever,
>>>> but reality is today, FreeIPA is not targeting multi-tenant
>>>> environments because the majority of our consumers don't want that
>>>> functionality. We made a design decision and have to live with it. I'm
>>>> providing this information to help give the ability for people to
>>>> construct an informed opinion.
>>>>
>>>>
>>>> As mentioned, I'm not here to throw insults and criticisms, I'm here to
>>>> have positive, respectful discussions about technology, to provide
>>>> different ideas, and to learn from others :)
>>>>
>>>> Thanks,
>>>>
>>>>>
>>>>> On Fri, Nov 17, 2017 at 4:49 AM, William Brown <wibrown(a)redhat.com>
>>>>> wrote:
>>>>>>
>>>>>> On Thu, 2017-11-16 at 05:54 +0200, MJ J wrote:
>>>>>>>
>>>>>>> Sure, it can be improved to become invulnerable to the
>>>>>>> academically
>>>>>>> imaginative race conditions that are not going to happen in real
>>>>>>> life.
>>>>>>> That will go to the very bottom of my list of things to do now,
>>>>>>> thanks.
>>>>>>>
>>>>>>> FreeIPA is a cool concept, too bad it's not scalable or multi-
>>>>>>> tenant
>>>>>>> capable.
>>>>>>
>>>>>>
>>>>>> It's a lot more scalable depending on which features you
>>>>>> enable/disable. It won't even be multi-tenant due to the design
>>>>>> with
>>>>>> gssapi/krb.
>>>>>>
>>>>>> At the end of the day, the atomic UID/GID alloc in FreeIPA is from
>>>>>> the
>>>>>> DNA plugin from 389-ds-base (which you can multi-instance on a
>>>>>> server
>>>>>> or multi-tenant with many backends). We use a similar method to AD
>>>>>> in
>>>>>> that each master has a pool of ids to alloc from, and they can
>>>>>> atomically request pools. This prevents the race issues you are
>>>>>> describing here with openldap.
>>>>>>
>>>>>> So that's an option for you, because those race conditions *do* and
>>>>>> *will* happen, and it will be a bad day for you when they do.
>>>>>>
>>>>>>
>>>>>> Another option is an external IDM system that allocs the uid's and
>>>>>> feeds them to your LDAP environment instead,
>>>>>>
>>>>>> Full disclosure: I'm a core dev of 389 directory server, so that's
>>>>>> why
>>>>>> I'm speaking in this context. Not here to say bad about openldap or
>>>>>> try
>>>>>> to poach you, they are a great project, just want to offer
>>>>>> objective
>>>>>> insight from "the other (dark?) side". :)
>>>>>>
>>>>>>>
>>>>>>> On Wed, Nov 15, 2017 at 11:09 PM, Michael Ströder <michael@stroeder.com> wrote:
>>>>>>>>
>>>>>>>> MJ J wrote:
>>>>>>>>>
>>>>>>>>> TLDR; in a split-brain situation, you could run into trouble.
>>>>>>>>> But
>>>>>>>>> this
>>>>>>>>> isn't the only place. Effective systems monitoring is the
>>>>>>>>> key
>>>>>>>>> here.
>>>>>>>>>
>>>>>>>>> Long answer;
>>>>>>>>> [..]
>>>>>>>>> The solution I posted has been in production in a large,
>>>>>>>>> dynamic
>>>>>>>>> company for several years and never encountered a problem.
>>>>>>>>
>>>>>>>>
>>>>>>>> Maybe it works for you. But I still don't understand why you
>>>>>>>> post
>>>>>>>> such a
>>>>>>>> lengthy justification insisting on your MOD_INCREMENT / read-
>>>>>>>> after-
>>>>>>>> write
>>>>>>>> approach with possible race condition even in a single master
>>>>>>>> deployment
>>>>>>>> while there are two proper solutions with just a few lines code
>>>>>>>> more:
>>>>>>>>
>>>>>>>> 1. delete-by-value to provoke a conflict like the original
>>>>>>>> poster
>>>>>>>> mentioned by pointing to
>>>>>>>> http://www.rexconsulting.net/ldap-protocol-uidNumber.html
>>>>>>>>
>>>>>>>> 2. MOD_INCREMENT with pre-read control
>>>>>>>>
>>>>>>>> Of course none of the solutions work when hitting multiple
>>>>>>>> providers
>>>>>>>> hard in a MMR setup or in a split-brain situation. One has to
>>>>>>>> choose a
>>>>>>>> "primary" provider then.
>>>>>>>> BTW: AFAIK with FreeIPA each provider has its own ID range to
>>>>>>>> prevent that.
>>>>>>>>
>>>>>>>> Ciao, Michael.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Sincerely,
>>>>>>
>>>>>> William Brown
>>>>>> Software Engineer
>>>>>> Red Hat, Australia/Brisbane
>>>>>>
>>>>>
>>>>>
>>>> --
>>>> Sincerely,
>>>>
>>>> William Brown
>>>> Software Engineer
>>>> Red Hat, Australia/Brisbane
>>>>
>>>
>>>
>>
>>
>> --
>> -- Howard Chu
>> CTO, Symas Corp. http://www.symas.com
>> Director, Highland Sun http://highlandsun.com/hyc/
>> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Re: Is existing documentation kind of vague?
by Howard Chu
MJ J wrote:
> I actually like 389 a lot and I have used Netscape DS extensively in
> managing international telecom networks about 15 years ago. There are
> quite many management features that are superior to OpenLDAP still to
> this day, but I simply cannot use it anymore because of the lack of
> scalability. I know the original Netscape DS devs quite extensively...
There are certainly some obvious deficiencies remaining in cn=config, and we're
working on addressing as many as possible for 2.5. But I'd be curious to see
your list of issues.
>
> -mike
>
> On Fri, Nov 17, 2017 at 8:34 AM, William Brown <wibrown(a)redhat.com> wrote:
>> On Fri, 2017-11-17 at 08:27 +0200, MJ J wrote:
>>> No matter how you wrap poll() and select(), they will always be
>>> poll()
>>> and select() - you will always run loops around an ever increasing
>>> stack of file descriptors while doing I/O. BDB is always going to
>>> have
>>> the same old problems... That's what I'm talking about - sacrificing
>>> performance for platform portability (NSPR).
>>>
>>> FreeIPA could be multi-tenant, i.e. support top-level and subordinate
>>> kerberos realms if it supported a more sensible DIT layout. I know
>>> because I have built such a system (based on OpenLDAP) and deployed
>>> it
>>> internationally. Probably the best piece of code to come out of the
>>> project is bind-dyndb-ldap.
>>
>> Whoa mate - I'm not here to claim that 389 is a better ldap server - we
>> just do some different things. We acknowledge our limitations and are
>> really working on them and paying down our tech debt. We want to remove
>> parts of nspr, replace bdb and more. :)
>>
>> I'm here to follow the progress of the openldap project, who have a
>> team of people I respect greatly and want to learn from, and here to
>> help discussions and provide input from a different perspective.
>>
>> There are things that today openldap does much better than us for
>> certain - and there are also some things that we do differently too
>> like DNA plugin uid allocation, replication etc,
>>
>> There are also project focuses and decisions made to improve
>> supportability in systems like FreeIPA - we can discuss them forever,
>> but reality is today, FreeIPA is not targeting multi-tenant
>> environments because the majority of our consumers don't want that
>> functionality. We made a design decision and have to live with it. I'm
>> providing this information to help give the ability for people to
>> construct an informed opinion.
>>
>>
>> As mentioned, I'm not here to throw insults and criticisms, I'm here to
>> have positive, respectful discussions about technology, to provide
>> different ideas, and to learn from others :)
>>
>> Thanks,
>>
>>>
>>> On Fri, Nov 17, 2017 at 4:49 AM, William Brown <wibrown(a)redhat.com>
>>> wrote:
>>>> On Thu, 2017-11-16 at 05:54 +0200, MJ J wrote:
>>>>> Sure, it can be improved to become invulnerable to the
>>>>> academically
>>>>> imaginative race conditions that are not going to happen in real
>>>>> life.
>>>>> That will go to the very bottom of my list of things to do now,
>>>>> thanks.
>>>>>
>>>>> FreeIPA is a cool concept, too bad it's not scalable or multi-
>>>>> tenant
>>>>> capable.
>>>>
>>>> It's a lot more scalable depending on which features you
>>>> enable/disable. It won't even be multi-tenant due to the design
>>>> with
>>>> gssapi/krb.
>>>>
>>>> At the end of the day, the atomic UID/GID alloc in FreeIPA is from
>>>> the
>>>> DNA plugin from 389-ds-base (which you can multi-instance on a
>>>> server
>>>> or multi-tenant with many backends). We use a similar method to AD
>>>> in
>>>> that each master has a pool of ids to alloc from, and they can
>>>> atomically request pools. This prevents the race issues you are
>>>> describing here with openldap.
>>>>
>>>> So that's an option for you, because those race conditions *do* and
>>>> *will* happen, and it will be a bad day for you when they do.
>>>>
>>>>
>>>> Another option is an external IDM system that allocs the uid's and
>>>> feeds them to your LDAP environment instead,
>>>>
>>>> Full disclosure: I'm a core dev of 389 directory server, so that's
>>>> why
>>>> I'm speaking in this context. Not here to say bad about openldap or
>>>> try
>>>> to poach you, they are a great project, just want to offer
>>>> objective
>>>> insight from "the other (dark?) side". :)
>>>>
>>>>>
>>>>> On Wed, Nov 15, 2017 at 11:09 PM, Michael Ströder <michael@stroeder.com> wrote:
>>>>>> MJ J wrote:
>>>>>>> TLDR; in a split-brain situation, you could run into trouble.
>>>>>>> But
>>>>>>> this
>>>>>>> isn't the only place. Effective systems monitoring is the
>>>>>>> key
>>>>>>> here.
>>>>>>>
>>>>>>> Long answer;
>>>>>>> [..]
>>>>>>> The solution I posted has been in production in a large,
>>>>>>> dynamic
>>>>>>> company for several years and never encountered a problem.
>>>>>>
>>>>>> Maybe it works for you. But I still don't understand why you
>>>>>> post
>>>>>> such a
>>>>>> lengthy justification insisting on your MOD_INCREMENT / read-
>>>>>> after-
>>>>>> write
>>>>>> approach with possible race condition even in a single master
>>>>>> deployment
>>>>>> while there are two proper solutions with just a few lines code
>>>>>> more:
>>>>>>
>>>>>> 1. delete-by-value to provoke a conflict like the original
>>>>>> poster
>>>>>> mentioned by pointing to
>>>>>> http://www.rexconsulting.net/ldap-protocol-uidNumber.html
>>>>>>
>>>>>> 2. MOD_INCREMENT with pre-read control
>>>>>>
>>>>>> Of course none of the solutions work when hitting multiple
>>>>>> providers
>>>>>> hard in a MMR setup or in a split-brain situation. One has to
>>>>>> choose a
>>>>>> "primary" provider then.
>>>>>> BTW: AFAIK with FreeIPA each provider has its own ID range to
>>>>>> prevent that.
>>>>>>
>>>>>> Ciao, Michael.
>>>>>
>>>>>
>>>>
>>>> --
>>>> Sincerely,
>>>>
>>>> William Brown
>>>> Software Engineer
>>>> Red Hat, Australia/Brisbane
>>>>
>>>
>>>
>> --
>> Sincerely,
>>
>> William Brown
>> Software Engineer
>> Red Hat, Australia/Brisbane
>>
>
>
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/