We are planning to deploy the password policy module to satisfy our security group's requirement for account lockouts (a.k.a., intentionally provided DoS attack vectors <sigh>). I had a couple of questions regarding the deployment that I was hoping someone might be kind enough to answer.
Does the password policy module need to be loaded on all of the servers simultaneously, even if there are no password policies defined? We typically stage configuration changes, pulling servers out of the load balancer, updating them, testing them, and then putting them back, such that at no time is service unavailable. The password policy module extends the schema though, and I don't want a server with it loaded potentially trying to replicate unknown attributes to one without it loaded. It's not clear whether simply loading the module would potentially cause this, or if password policy attributes would only be replicated if the module was actually configured with a default policy or if a user had a specifically defined policy. So, would it be safe to stage the initial configuration change loading the module as long as no policies are in place or used (until all of the servers have been updated), or is it required to shut down all of the servers simultaneously to make the change?
We are only planning to avail of account lockouts, not any of the other functionality of the module. As such, unless I misunderstand, the following policy should enable lockouts but not apply any of the other restrictions:
dn: cn=default,ou=policies,dc=example,dc=com
cn: default
objectClass: pwdPolicy
pwdAttribute: userPassword
pwdLockout: TRUE
pwdLockoutDuration: 1800
pwdMaxFailure: 100
pwdFailureCountInterval: 300
This would be the default policy. We also have a number of service accounts that we would not want subject to lockouts. If I understand correctly, configuring those accounts with an explicit pwdPolicySubentry pointing at a policy like this:
dn: cn=serviceaccount,ou=policies,dc=example,dc=com
cn: serviceaccount
objectClass: pwdPolicy
pwdAttribute: userPassword
Should leave them with no restrictions?
Finally, there is a requirement for the helpdesk to be able to manually unlock a locked out account. For an account that is currently locked out, would deleting the pwdAccountLockedTime and pwdFailureTime attributes reset it to a normal state?
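Something like the following ldapmodify input is what I have in mind (the account DN here is just a placeholder):

```ldif
# hypothetical unlock of a locked-out account; the DN is a placeholder
dn: uid=jdoe,ou=people,dc=example,dc=com
changetype: modify
delete: pwdAccountLockedTime
-
delete: pwdFailureTime
```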
Thanks much.
I didn't see any replies to this, but for the archives: after doing some testing, evidently there's no way to deploy the password policy module without shutting down everything, updating the configuration, and then bringing it back online.
Even without any active policies defined, the ppolicy overlay starts generating and replicating pwdFailureTime entries, and any replication consumer without the module also loaded breaks and stops replicating. I'm not sure what use it is to maintain pwdFailureTime entries for objects with no actual password policy in place, other than I suppose to retroactively apply a policy that might be added in the future based on historical authentication failures.
On Wed, Apr 23, 2014 at 05:23:28PM -0700, Paul B. Henson wrote:
Paul B. Henson wrote:
Even without any active policies defined, the ppolicy overlay starts generating and replicating pwdFailureTime entries, and any replication consumer without the module also loaded breaks and stops replicating. I'm not sure what use it is to maintain pwdFailureTime entries for objects with no actual password policy in place, other than I suppose to retroactively apply a policy that might be added in the future based on historical authentication failures.
Sometimes it's handy to see when people had failed logins even if you don't apply lockout policy.
You simply should not load slapo-ppolicy without also loading its schema.
Ciao, Michael.
From: Michael Ströder Sent: Sunday, April 27, 2014 11:27 PM
Sometimes it's handy to see when people had failed logins even if you don't apply lockout policy.
It would be even more handy to be able to roll out password policy support without having to shut down your entire LDAP infrastructure ;).
You simply should not load slapo-ppolicy without also loading its schema.
On a given server, obviously. However, ideally, you should be able to load the module on a given server but not have it actually do anything until password policies are actually applied, allowing you to stage the rollout across your servers until the module is loaded everywhere (with no instance where every single server was unavailable).
Paul B. Henson wrote:
1. If HA is important you surely have more than one replica and a decent fail-over mechanism.
2. Loading slapo-ppolicy and the schema file in one restart is trivial.
3. If you like more complex things you can add the module and the schema file without restarting the server by using back-config.
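For example, loading the module at runtime might look roughly like this (the cn=module entry name and the .la filename depend on the installation, so treat this as a sketch):

```ldif
# sketch: load slapo-ppolicy at runtime via back-config
# (entry name and module filename are installation-dependent)
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: ppolicy.la
```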
Sorry. I don't see the problem.
Ciao, Michael.
From: Michael Ströder Sent: Monday, April 28, 2014 11:50 PM
1. If HA is important you surely have more than one replica and a decent fail-over mechanism.
Absolutely.
2. Loading slapo-ppolicy and the schema file in one restart is trivial.
Agreed.
Sorry. I don't see the problem.
The problem is that in an environment where all of the servers are masters (which is more or less required for a sane account lockout implementation, unless you're using the chaining mechanism to forward the failed-authentication attributes), as soon as *one* of those servers has been updated to load the ppolicy module, it starts trying to replicate pwdFailureTime whenever an authentication fails. All of the other servers, which have not yet been updated to load the password policy module, fail replication due to an unknown attribute.

So unless you want your infrastructure to start failing to replicate, you have to update all of the servers at the same time, such that there is never a scenario where some systems have the password policy module loaded and others don't. And the only way to do that is to have all of them shut down for at least some amount of time.
I guess alternatively you could start updating them one at a time without shutting them all down, and the ones you haven't gotten to yet would simply fail replications until you got to them.
But it would be a lot simpler if you could load the password policy module and have it not actually try to replicate anything until it's actually configured with a policy.
Paul B. Henson wrote:
But it would be a lot simpler if you could load the password policy module and have it not actually try to replicate anything until it's actually configured with a policy.
AFAICS nothing prevents you from loading the schema first on all replicas. And after that load the overlay.
Ciao, Michael.
From: Michael Ströder Sent: Tuesday, April 29, 2014 12:50 PM
AFAICS nothing prevents you from loading the schema first on all replicas. And after that load the overlay.
The attribute in question is not defined in the external schema file; in fact, it is commented out:
#5.3.4 pwdFailureTime
#
# This attribute holds the timestamps of the consecutive authentication
# failures.
#
# ( 1.3.6.1.4.1.42.2.27.8.1.19
#    NAME 'pwdFailureTime'
#    DESC 'The timestamps of the last consecutive authentication
#        failures'
#    EQUALITY generalizedTimeMatch
#    ORDERING generalizedTimeOrderingMatch
#    SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
#    USAGE directoryOperation )
The actual definition used by openldap is embedded in the schema_info within the ppolicy module itself. So, having the external schema loaded on one replica, and the module itself in use on another, still results in failed replication.
Paul B. Henson wrote:
From: Michael Ströder Sent: Tuesday, April 29, 2014 12:50 PM
AFAICS nothing prevents you from loading the schema first on all replicas. And after that load the overlay.
The attribute in question is not defined in the external schema, in fact, it is commented out:
Ah, sorry. Missed that.
It was always my opinion that overlays should not define schema elements within the C code. Overlays should only check whether the required schema elements are present in the subschema. This is a topic for openldap-devel though.
Ciao, Michael.
Michael Ströder wrote:
Ah, sorry. Missed that.
Nope. I did not miss anything. I tried to reproduce your problem in a simple MMR testbed.
If you just add "moduleload ppolicy" to your slapd.conf (or take the equivalent action for back-config), the subschema will contain the attribute type description for 'pwdFailureTime' from slapo-ppolicy. So you should do this first on all replicas, one after the other, without adding "overlay ppolicy" to the database section.
In a second step you have to add "overlay ppolicy" to the database section on all replicas, also one after the other. The replication will catch up even though some prior modifications might have failed in the past.
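In slapd.conf terms, the two steps might look roughly like this (the database type and policy DN are taken from the earlier example, not prescriptive):

```
# Step 1 (on each replica in turn): load the module only,
# so the subschema gains the ppolicy attribute types
moduleload ppolicy

# Step 2 (on each replica in turn, after step 1 is done everywhere):
# enable the overlay in the database section
database mdb
...
overlay ppolicy
ppolicy_default "cn=default,ou=policies,dc=example,dc=com"
```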
Ciao, Michael.
From: Michael Ströder Sent: Friday, May 02, 2014 4:21 AM
If you just add "moduleload ppolicy" to your slapd.conf (or similar action for back-config) [...]
In a second step you have to add "overlay ppolicy" to the database section
Sweet, I never considered loading the module but not using it. Thanks much for the follow-up and suggestion, I think that will allow me to stage the deployment as I wanted without any downtime.
on all replicas, also one after the other. The replication will catch up even though some prior modifications might have failed in the past.
Will there be any failures? After step one, all of the replicas should have the schema loaded. As you start migrating servers to step two, they will start generating and trying to replicate the attribute. As the other servers already know about the attribute, wouldn't the replication succeed, although at that point there would be nothing on that particular server paying any attention to it?
Thanks again, I really appreciate your help.
Paul B. Henson wrote:
In my test setup the first provider has slapo-ppolicy fully configured in the database section but the second provider does not. The attribute is replicated.
BTW: AFAIK slapo-ppolicy's own write operations to 'pwdFailureTime' are normally not replicated. But in normal syncrepl mode, 'pwdFailureTime' will get replicated whenever there's a change to another attribute. Maybe one could mitigate this by setting the 'attrs' parameter to not include all operational attributes, but I would not recommend doing so.

It would be nice if one could explicitly exclude attributes with the 'attrs' parameter though. That would make it possible to work around an issue with slapo-allowed in an MMR setup...
Ciao, Michael.
Michael Ströder wrote:
It would be nice if one could explicitly exclude attributes with parameter 'attrs' though. This would allow to work around an issue with slapo-allowed in a MMR setup...
With example: http://www.openldap.org/its/index.cgi?findid=7847
Ciao, Michael.
2014-05-03 13:22 GMT+02:00 Michael Ströder michael@stroeder.com:
Hi,
I opened an ITS for a similar thing: http://www.openldap.org/its/index.cgi/Incoming?id=7766
Any feedback about it?
Clément.
From: Michael Ströder Sent: Saturday, May 03, 2014 4:22 AM
BTW: AFAIK write operations to 'pwdFailureTime' are normally not replicated.
Hmm, in my initial testing, it seemed to be. Account lockout wouldn't be nearly as useful if the failures were not synchronized across all of the servers and the settings were applied separately on each one. (Well, arguably account lockout is not useful in general :), but as a checkbox on an audit form it would be less useful if the failures weren't synchronized).
Paul B. Henson wrote:
From: Michael Ströder
BTW: AFAIK write operations to 'pwdFailureTime' are normally not replicated.
Hmm, in my initial testing, it seemed to be.
The attribute is replicated when the entry is replicated as a whole (e.g. during initial phase). I'd rather consider this to be a bug though. Use exattrs in your syncrepl statement.
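A syncrepl statement using exattrs might look roughly like this (the rid, provider URL, and bind credentials are placeholders):

```
# sketch: exclude pwdFailureTime from replication via exattrs
syncrepl rid=001
  provider=ldap://provider.example.com
  type=refreshAndPersist
  searchbase="dc=example,dc=com"
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=secret
  exattrs=pwdFailureTime
```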
But AFAICS slapo-ppolicy's write operation on this attribute does not trigger the replication.
Account lockout wouldn't be nearly as useful if the failures were not synchronized across all of the servers and the settings were applied separately on each one. (Well, arguably account lockout is not useful in general :),
Glad you already remarked that yourself. ;-)
but as a checkbox on an audit form it would be less useful if the failures weren't synchronized).
I have quite some experience discussing that with security folks. Most of them are open to good arguments. But personally I wonder why I have to tell security folks about this DoS attack vector. Anyway...
Ciao, Michael.