Hello list.
People from SSSD would like better information when a TLS operation in the OpenLDAP library fails, instead of a general LDAP_CONNECT_ERROR. I already mentioned it on this list some time ago: http://www.openldap.org/lists/openldap-devel/201105/msg00011.html
I can write a patch for this, but I would like to discuss it with you beforehand.
I already tried something. I added LDAP_TLS_INITIALIZATION_ERROR (-19) and LDAP_TLS_NEGOTIATION_ERROR (-20) API error codes and slightly modified the TLS code in OpenLDAP to propagate the errors. These two new error codes are sufficient for SSSD.
Currently I have covered only the code for the Mozilla NSS backend, and it still needs some tuning. I would like to know whether adding the error codes this way is acceptable. Should I proceed? Or should it be done a different way?
Thanks & regards,
Jan
Jan Včelák wrote:
People from SSSD would like better information when a TLS operation in the OpenLDAP library fails, instead of a general LDAP_CONNECT_ERROR. I already mentioned it on this list some time ago: http://www.openldap.org/lists/openldap-devel/201105/msg00011.html
I can write a patch for this, but I would like to discuss it with you beforehand.
I already tried something. I added LDAP_TLS_INITIALIZATION_ERROR (-19) and LDAP_TLS_NEGOTIATION_ERROR (-20) API error codes and slightly modified the TLS code in OpenLDAP to propagate the errors. These two new error codes are sufficient for SSSD.
Currently I have covered only the code for the Mozilla NSS backend, and it still needs some tuning. I would like to know whether adding the error codes this way is acceptable. Should I proceed? Or should it be done a different way?
This suggestion makes me itchy in so many ways.
About the -Z option (attempted TLS without verification):
If I understand your 2011 message correctly (http://www.openldap.org/lists/openldap-devel/201105/msg00011.html), the client may decide to talk the straight LDAP protocol (without TLS) to a server which expects TLS. Clearly that's wrong - the client's valid options at that point are to talk TLS, disconnect, or tear down the TLS connection before talking plain LDAP.
However. Do not use -Z. Use -ZZ. The single -Z option should be axed. Also TLS_REQCERT needs a big red warning.
-Z is a request for a security hole. Users should not be required to know that. There may be rare valid reasons to use -Z, but an option for that should not be the most obvious way to request TLS.
When the client requests TLS, it normally wants to do something which needs a trusted connection. Like sending a password with Simple Bind. If the client happily proceeds after TLS failure, it'll be sending that password over an unencrypted connection.
The server is happy - it doesn't get a bad TLS connection, and maybe it'll reject the unprotected Bind. But that's too late: the unencrypted password has already been sent. Similarly, turning off TLS_REQCERT opens the door to connection hijacking/man-in-the-middle attacks. (You get a perfectly good TLS connection - to the attacker.)
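For reference, the safe client-side counterpart to what is being warned against - a hedged ldap.conf fragment (the paths here are placeholders) that demands certificate verification instead of disabling it:

```
# /etc/openldap/ldap.conf -- illustrative values only
TLS_CACERT   /etc/openldap/certs/ca.pem    # placeholder path
TLS_REQCERT  demand                        # reject unverifiable server certs
```

Combined with -ZZ on the command line, a failed StartTLS then aborts the operation instead of silently continuing in cleartext.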
About changing result codes, assuming you still want that:
I don't know TLS programming. Can you get the info you want from ldap_get_option LDAP_OPT_X_TLS_CTX or LDAP_OPT_X_TLS_SSL_CTX? If not, maybe libldap could save some extra TLS info for the client.
I'd like to see API error codes improved. My favorite is to split LDAP_SERVER_DOWN in at least "could not connect" and "connection lost".
However, we need to be reluctant to change the result code for a given situation. Changes can break existing code which cares for that situation and handles the current result code. Such changes belong in the switch to OpenLDAP 2.5 or 3.0.
For 2.4 I suggest ldap_set_option(ld, LDAP_CODE_POLICY, &policy_version) or something like that, to request different result codes. Either that, or in your case libldap could store the TLS state somewhere so some API call could get at it. Like LDAP_OPT_X_TLS* mentioned above.
The LDAP_CODE_POLICY approach needs a general brainstorm about result codes, so we don't end up with too many different policies to support. Just changing the two codes which one application needs doesn't sound good. Then next time we add one more, then two more, each time adding a new policy version -- and soon enough we have a dozen different policies and an utter mess in libldap as it tries to keep track of them all.
Hallvard
Hello,
I already tried something. I added LDAP_TLS_INITIALIZATION_ERROR (-19) and LDAP_TLS_NEGOTIATION_ERROR (-20) API error codes and slightly modified the TLS code in OpenLDAP to propagate the errors. These two new error codes are sufficient for SSSD.
This suggestion makes me itchy in so many ways.
Why?
About the -Z option (attempted TLS without verification):
Actually this is not about -Z behavior. I just mentioned in the mail that it would be great if the library provided more information about the type of the failure.
About changing result codes, assuming you still want that:
I don't know TLS programming. Can you get the info you want from ldap_get_option LDAP_OPT_X_TLS_CTX or LDAP_OPT_X_TLS_SSL_CTX? If not, maybe libldap could save some extra TLS info for the client.
No, you cannot, because both structures are implementation-specific - and OpenLDAP supports OpenSSL, GnuTLS, and Mozilla NSS. Besides, the manual pages say: 'Applications generally should not use this option.'
I'd like to see API error codes improved. My favorite is to split LDAP_SERVER_DOWN in at least "could not connect" and "connection lost".
I believe this already works: ldap_get_option with LDAP_OPT_RESULT_CODE will return LDAP_TIMEOUT.
However, we need to be reluctant to change the result code for a given situation. Changes can break existing code which cares for that situation and handles the current result code. Such changes belong in the switch to OpenLDAP 2.5 or 3.0.
Yes, it might break some existing applications. Therefore I'm asking for upstream point of view. But I would like to have a solution for this in 2.4.
For 2.4 I suggest ldap_set_option(ld, LDAP_CODE_POLICY, &policy_version) [...] The LDAP_CODE_POLICY approach needs a general brainstorm about result codes, so we don't end up with too many different policies to support.
I do not like this approach. API versioning might complicate it too much. And we will get stuck with another ldap_set_option flag.
Jan
Jan Vcelak wrote:
About the -Z option (attempted TLS without verification):
Actually this is not about -Z behavior. I just mentioned in the mail that it would be great if the library provided more information about the type of the failure.
Perhaps more context about where this perceived need is coming from would have helped the public discussion. Dmitri Pal @ Red Hat pointed me to a bug report that seems to have been the catalyst for this request. We exchanged a few responses and I thought it would be useful to re-join the public conversation.
Dmitri Pal wrote:
On 04/17/2012 01:43 PM, Howard Chu wrote:
Dmitri Pal wrote:
I did not say it is a major problem, but we have seen, multiple times on our community lists, people trying to set up TLS for SSSD (OpenSSL or NSS) manually and getting certificate problems that are hard to diagnose. Here is one of them, filed by our QE as we followed up on one of the community threads: https://bugzilla.redhat.com/show_bug.cgi?id=640393 As you can see, it is not at the NSS or OpenSSL level. If the paths are not configured properly (a typo in the path, for example) you will get a certificate error, but the actual problem is a wrong path. Unfortunately, the lowest layer that knows about the issue is OpenLDAP, not the underlying crypto module. This is the kind of issue that we want to fix.
This is exactly the kind of issue that NSS makes messy to fix. Normally we know that cacertdir and cacert must point to a directory and a file, respectively. It would be feasible to check access(path, R_OK) or something at the time that an app calls ldap_set_option() on them. But with NSS, these parameters might be something else entirely - a DB path and a cert name within the DB, and such pathname-based checks would give spurious failures.
Because of NSS, nobody but the underlying crypto module knows what these parameters actually mean.
I.e., it is not an OpenLDAP level issue, it is precisely an NSS issue.
Path not found / no permission is certainly a common failure condition, but running in debug mode makes that obvious, because the explicit error text is logged on stderr.
If I configure slapd.conf with TLSCACertificateFile /some/bogus/path
and try to start it, I get:
TLS: could not load verify locations (file:`/some/bogus/path',dir:`').
TLS: error:02001002:system library:fopen:No such file or directory bss_file.c:169
TLS: error:2006D080:BIO routines:BIO_new_file:no such file bss_file.c:172
TLS: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib by_file.c:274
4f8daa38 main: TLS init def ctx failed: -1
4f8daa38 slapd destroy: freeing system resources.
4f8daa38 slapd stopped.
4f8daa38 connections_destroy: nothing to destroy.
It's quite obvious "No such file or directory".
If the cause of failure isn't as obvious with NSS, then again I have to say, it seems to me that you're looking in the wrong place for a solution.
It strikes me that it would benefit the community for this conversation to be public, e.g. on the openldap-devel mailing list.
We can do it there. I sent a subscription request. Do you want to start this conversation over there from the beginning? There is already a thread that I wanted to draw your attention to, and it turned into this discussion. Maybe you can just cut and paste your comments from this thread into the public thread and we take it from there?
That's the on-topic bit, but I felt it's important to also address this side-note:
On the side note about NSS: I am a bit surprised by your view. The move to pluggable crypto modules is an industry trend, not something the NSS team invented. And there is a big push for ECC, for example, which NSS does not support. So there will be other solutions and other crypto modules. Creating a clean abstraction around it is a logical next step, so blaming NSS for destroying a perfect world dominated by OpenSSL is a bit strange. I agree that it is work, and annoying, but it seems like the right thing to do from the interoperability and flexibility point of view. Isn't that the strength of the open source and community development model?
This is a much larger philosophical question; I'm pretty sure we've disagreed on this before ;) I am certainly not arguing against developing clean, modular abstractions.
My perspective: life is short. I don't just write software for the sake of writing software. I write to create something Good and Free, because I believe it's important that the sum total of human knowledge is increased, and that people are able to learn from the work that has been done, and that the work is worthy of learning from.
Coincidentally, in the landscape that existed when the OpenSSL and OpenLDAP Projects began, they were the only open source games in town. But my perspective didn't change when Mozilla became open source. (And that's not just mother duck syndrome either; I've been through the Mozilla code base inside and out.)
Good *and* free are my constraints. Simply being free is not interesting; there are many free projects out there that are drivel. GnuTLS is garbage.
Like GnuTLS, MozNSS is inferior by design, and by designer mindset. OpenSSL may not be perfect, but it at least takes the correct attitude of being a toolkit, not a polished solution. Security is not a simple thing to acquire or deliver, it cannot be neatly shrink-wrapped.
NSS takes the (demonstrably false) attitude that security *is* a neatly shrink-wrapped package. This is evident even from the outside, where its proponents brag that NSS is FIPS-validated, and that any app that uses it is also automatically FIPS-validated, whereas OpenSSL is claimed to be inferior because apps that rely on it must be individually FIPS-validated: http://fedoraproject.org/wiki/FedoraCryptoConsolidation
Anyone who knows anything about computer security can see the obvious fallacy here - security is not magic fairy dust that you can sprinkle on any application (i.e. link with NSS) and suddenly be secure. It must be carefully considered and taken as a whole.
NSS takes the attitude that the NSS developers know everything and that app developers know nothing. And that secure computing can be isolated inside a single trusted base and applied uniformly to all applications. This might have been true when there was only one app to deal with, the Netscape browser, but it is not true in the Brave New World that Red Hat has thrust it into.
In the real world, different apps operate in different trust domains and have different requirements. I don't want my LDAP server to use the same trusted certificate store as my HTTP client. I don't want my LDAP client to trust the same certificates as my SMTP client.
NSS developers started from the position of believing they know everything about how security must operate, and have created a library that only works one way. The OpenSSL developers started from the position of believing they know little, and have created a code base that can be used in myriad ways.
Open source is not just an intellectual exercise, it's not just a vehicle for novice programmers to learn their craft. It is not just a vehicle for commercial enterprises to acquire new products at zero cost. We want to do useful work, as quickly as possible, so that we can leverage as much as possible and advance society as far as possible within our lifetimes. You don't get there by fragmenting the community and constantly re-treading the same path. You don't get there by constantly flogging the same horse that already died years ago. You pick the best technology and push it further, expanding the boundaries of human knowledge. The work we've done to support MozNSS and GnuTLS in OpenLDAP has been IMO a waste of time and energy, forcing us to re-tread well established paths instead of focusing on new capabilities.
There's nothing gained by having two different ways to do exactly the same thing. There's something lost by having to support multiple ways of doing almost the same thing. Losses in time, mental energy, and overall efficiency, at least.
On 04/17/2012 05:21 PM, Howard Chu wrote:
Open source is not just an intellectual exercise, it's not just a vehicle for novice programmers to learn their craft. It is not just a vehicle for commercial enterprises to acquire new products at zero cost. We want to do useful work, as quickly as possible, so that we can leverage as much as possible and advance society as far as possible within our lifetimes. You don't get there by fragmenting the community and constantly re-treading the same path. You don't get there by constantly flogging the same horse that already died years ago. You pick the best technology and push it further, expanding the boundaries of human knowledge. The work we've done to support MozNSS and GnuTLS in OpenLDAP has been IMO a waste of time and energy, forcing us to re-tread well established paths instead of focusing on new capabilities.
There's nothing gained by having two different ways to do exactly the same thing. There's something lost by having to support multiple ways of doing almost the same thing. Losses in time, mental energy, and overall efficiency, at least.
Oh well... This is a completely philosophical exercise. I am not necessarily in favor of NSS or OpenSSL or some other crypto solution; they just exist and are a given. We would not have had so many kinds of cars if there had not been a need - and there is. It does not make sense to deny reality on the premise that in the past everything was OK. Things change, life goes on.
I value everybody's time too, and I understand that creating a good abstraction is a cost, especially when a single solution worked in the past. So, following the rules of meritocracy, it is completely reasonable to expect that whoever has the need does the work - and that is the case here. But we want to do the work in the least intrusive way and to address as many concerns as possible. So the question was, and is: "can you please let us know how we should implement it to make things work for everybody?"
Dmitri Pal wrote:
I value everybody's time too, and I understand that creating a good abstraction is a cost, especially when a single solution worked in the past. So, following the rules of meritocracy, it is completely reasonable to expect that whoever has the need does the work - and that is the case here. But we want to do the work in the least intrusive way and to address as many concerns as possible. So the question was, and is: "can you please let us know how we should implement it to make things work for everybody?"
OK. But at the moment I still don't understand why providing the debug output (as we already do) isn't sufficient to allow administrators to identify their misconfiguration issues.
On 04/17/2012 06:15 PM, Howard Chu wrote:
OK. But at the moment I still don't understand why providing the debug output (as we already do) isn't sufficient to allow administrators to identify their misconfiguration issues.
We need some way for developers writing applications against the OpenLDAP API to get more detailed information about TLS/SSL connection failures and other errors.
Rich Megginson wrote:
On 04/17/2012 06:15 PM, Howard Chu wrote:
OK. But at the moment I still don't understand why providing the debug output (as we already do) isn't sufficient to allow administrators to identify their misconfiguration issues.
We need some way for developers writing applications against the OpenLDAP API to get more detailed information about TLS/SSL connection failures and other errors.
Jan's original proposal is for LDAP_TLS_INITIALIZATION_ERROR to allow it to be distinguished from a session negotiation error. The bugzilla bug quoted previously complains that TLS settings aren't checked at startup time. Sounds to me like your actual problem is that you should be forcing the context initialization to occur earlier, to catch these cases. Unfortunately, ever since ITS#5696, you'll still be unable to catch all possible NSS internal errors this way.
For your https://bugzilla.redhat.com/show_bug.cgi?id=640393 I suggest you add a call to ldap_set_option(ld, LDAP_OPT_X_TLS_NEWCTX, &flag) in your app startup sequence to force libldap to perform context initialization, and do your pathname/dbname/certname validation at that time. That will give you an opportunity to detect misconfiguration/initialization errors. Or at least, as much as is possible since your real initialization is still deferred.
This may seem less precise compared to the original proposal, but it has the virtue of failing early, rather than waiting until the first session attempt to report a config error.
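A minimal sketch of the early-initialization check suggested above, assuming it is compiled and linked against libldap (`-lldap`); the CA certificate path is illustrative, not one named in this thread:

```c
#include <stdio.h>
#include <ldap.h>

/* Force libldap's deferred TLS context initialization to happen at
 * application startup, so a bad CA path or certificate database is
 * reported here instead of on the first connection attempt. */
int main(void)
{
    int is_server = 0;  /* 0 = client-side TLS context */

    /* Illustrative path; substitute your real configuration. */
    const char *cacertfile = "/etc/ssl/ca.pem";

    if (ldap_set_option(NULL, LDAP_OPT_X_TLS_CACERTFILE,
                        (void *)cacertfile) != LDAP_OPT_SUCCESS) {
        fprintf(stderr, "cannot set cacertfile\n");
        return 1;
    }

    /* Creating a new global context triggers the backend (OpenSSL,
     * NSS, GnuTLS) initialization immediately. */
    if (ldap_set_option(NULL, LDAP_OPT_X_TLS_NEWCTX,
                        &is_server) != LDAP_OPT_SUCCESS) {
        fprintf(stderr, "TLS context initialization failed"
                        " - check TLS configuration\n");
        return 1;
    }
    return 0;
}
```

As noted above, this fails early on misconfiguration but cannot catch every NSS internal error, since some initialization is still deferred.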
On 04/17/2012 08:32 PM, Howard Chu wrote:
Rich Megginson wrote:
On 04/17/2012 06:15 PM, Howard Chu wrote:
Dmitri Pal wrote:
On 04/17/2012 05:21 PM, Howard Chu wrote:
If the cause of failure isn't as obvious with NSS, then again I have to say, it seems to me that you're looking in the wrong place for a solution.
I value everybody's time too and understand that creating a good abstraction is a cost, especially if a single solution worked in the past. So, following the rules of meritocracy, it is completely reasonable to expect that whoever has the need does the work, and that is the case here. But we want to do the work in the least intrusive way and to address as many concerns as possible. So the question was and is: "can you please let us know how we should implement it to make things work for everybody?"
OK. But at the moment I still don't understand why providing the debug output (as we already do) isn't sufficient to allow administrators to identify their misconfiguration issues.
We need some way for developers writing applications against the OpenLDAP API to get more detailed information about TLS/SSL connection failures and other errors.
Jan's original proposal is for LDAP_TLS_INITIALIZATION_ERROR to allow it to be distinguished from a session negotiation error. The bugzilla bug quoted previously complains that TLS settings aren't checked at startup time. Sounds to me like your actual problem is that you should be forcing the context initialization to occur earlier, to catch these cases. Unfortunately, ever since ITS#5696, you'll still be unable to catch all possible NSS internal errors this way.
For your https://bugzilla.redhat.com/show_bug.cgi?id=640393 I suggest you add a call to ldap_set_option(ld, LDAP_OPT_X_TLS_NEWCTX, &flag) in your app startup sequence to force libldap to perform context initialization, and do your pathname/dbname/certname validation at that time. That will give you an opportunity to detect misconfiguration/initialization errors. Or at least, as much as is possible since your real initialization is still deferred.
This may seem less precise compared to the original proposal, but it has the virtue of failing early, rather than waiting until the first session attempt to report a config error.
Is this how you would recommend doing this when using the OpenSSL crypto implementation with OpenLDAP? Because, AFAIK, using OL with ossl "suffers" the same problem of deferred initialization when the first TLS/SSL context is required.
Rich Megginson wrote:
On 04/17/2012 08:32 PM, Howard Chu wrote:
Rich Megginson wrote:
On 04/17/2012 06:15 PM, Howard Chu wrote:
Dmitri Pal wrote:
On 04/17/2012 05:21 PM, Howard Chu wrote:
If the cause of failure isn't as obvious with NSS, then again I have to say, it seems to me that you're looking in the wrong place for a solution.
I value everybody's time too and understand that creating a good abstraction is a cost, especially if a single solution worked in the past. So, following the rules of meritocracy, it is completely reasonable to expect that whoever has the need does the work, and that is the case here. But we want to do the work in the least intrusive way and to address as many concerns as possible. So the question was and is: "can you please let us know how we should implement it to make things work for everybody?"
OK. But at the moment I still don't understand why providing the debug output (as we already do) isn't sufficient to allow administrators to identify their misconfiguration issues.
We need some way for developers writing applications against the OpenLDAP API to get more detailed information about TLS/SSL connection failures and other errors.
Jan's original proposal is for LDAP_TLS_INITIALIZATION_ERROR to allow it to be distinguished from a session negotiation error. The bugzilla bug quoted previously complains that TLS settings aren't checked at startup time. Sounds to me like your actual problem is that you should be forcing the context initialization to occur earlier, to catch these cases. Unfortunately, ever since ITS#5696, you'll still be unable to catch all possible NSS internal errors this way.
For your https://bugzilla.redhat.com/show_bug.cgi?id=640393 I suggest you add a call to ldap_set_option(ld, LDAP_OPT_X_TLS_NEWCTX, &flag) in your app startup sequence to force libldap to perform context initialization, and do your pathname/dbname/certname validation at that time. That will give you an opportunity to detect misconfiguration/initialization errors. Or at least, as much as is possible since your real initialization is still deferred.
This may seem less precise compared to the original proposal, but it has the virtue of failing early, rather than waiting until the first session attempt to report a config error.
Is this how you would recommend doing this when using the OpenSSL crypto implementation with OpenLDAP? Because, AFAIK, using OL with ossl "suffers" the same problem of deferred initialization when the first TLS/SSL context is required.
Yes, it would also work for OpenSSL (and GnuTLS). You probably want to set the global context, so use something like:
int is_listener = 0;
ldap_set_option(NULL, LDAP_OPT_X_TLS_NEWCTX, &is_listener);
You will probably want to add some more checks in tlsm_ctx_init() to validate that cacertdir and cacertfile are usable.