Buchan Milne wrote:
On Tuesday 16 June 2009 10:45:00 Jordi Espasa Clofent wrote:
Hi,
According to http://www.openldap.org/lists/openldap-software/200701/msg00149.html: "In general, ppolicy related state values are not replicated; each replica is on its own as far as state-related attributes in enforcing password policy."
Does this mean that, if I have one provider and two consumers, changes made to ppolicy state on the provider are not synced to the consumers the way other kinds of entries/attributes are?
I need to know because I've changed my userPassword on the provider and:
- I can use without problems the new password using the provider
- I cannot use the new password against the two consumers.
userPassword is *not* a "state-related attribute"; please see 'man slapo-ppolicy'.
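For context, here is a minimal sketch of how the ppolicy overlay and a lockout policy are typically set up (the suffix, policy DN, and limits below are illustrative examples, not taken from this thread):

```
# slapd.conf fragment: load the overlay and point at a default policy entry
overlay ppolicy
ppolicy_default "cn=default,ou=policies,dc=example,dc=com"
```

```
# LDIF for the policy entry itself (example values)
dn: cn=default,ou=policies,dc=example,dc=com
objectClass: person
objectClass: pwdPolicy
cn: default
sn: default
pwdAttribute: userPassword
pwdLockout: TRUE
pwdMaxFailure: 5
pwdLockoutDuration: 900
pwdFailureCountInterval: 300
```

With pwdLockout enabled, each server maintains its own pwdFailureTime/pwdAccountLockedTime state, which is exactly the replication issue discussed below.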
Note that what this does mean is that you may be locked out on one slave but not the others (and perhaps not on the master). Simply resetting the password on the master may not be sufficient to unlock the account on the slaves, and the pwdFailureTime attributes may not be cleared there, meaning one more failed authentication may lock the account on a slave (especially in a load-balanced environment).
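Concretely, clearing the lockout state on an affected replica means deleting the ppolicy operational attributes on that server directly (the DN below is an example; since these are operational attributes you need rootdn or manage access, and relaxed-rules support on the server may be required):

```
dn: uid=jdoe,ou=people,dc=example,dc=com
changetype: modify
delete: pwdAccountLockedTime
-
delete: pwdFailureTime
```

Apply this with ldapmodify against each replica where the account is locked, since the state is per-server.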
As I've noted before, the ppolicy draft specification doesn't address how ppolicy state should behave in a replicated environment. The spec is also still only a draft, not finalized. Experience with this version of the draft implementation will probably be useful in shaping the final spec.
As others have also commented, locking out an account due to X number of incorrect logins is generally a bad idea; it offers a trivial avenue for denial-of-service and doesn't actually deter brute force password attacks. In my opinion, this feature should be removed from the spec and replaced with an incremental delay instead. I.e., when any login attempt fails, start adding delays before processing subsequent attempts from the same client (or for the same user).
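To illustrate the incremental-delay idea, here is a minimal sketch in Python (not part of OpenLDAP; the class name, base delay, and cap are all my own choices). Each consecutive failure for a key, such as a client IP or a username, doubles the wait to be imposed before the next bind attempt is processed, up to a cap; a successful bind resets the counter.

```python
from collections import defaultdict

class FailureDelay:
    """Hypothetical per-key incremental delay for failed logins.

    Unlike hard lockout, a capped, growing delay slows brute-force
    attempts without letting an attacker permanently lock anyone out.
    """

    def __init__(self, base=0.5, cap=30.0):
        self.base = base          # delay after the first failure (seconds)
        self.cap = cap            # maximum delay (seconds)
        self.failures = defaultdict(int)

    def on_failure(self, key):
        """Record a failed attempt for this client/user key."""
        self.failures[key] += 1

    def on_success(self, key):
        """A successful bind clears the failure history."""
        self.failures.pop(key, None)

    def delay_for(self, key):
        """Seconds to wait before processing the next attempt: 0 after
        no failures, then base, 2*base, 4*base, ... capped at `cap`."""
        n = self.failures[key]
        if n == 0:
            return 0.0
        return min(self.cap, self.base * (2 ** (n - 1)))
```

A server would call delay_for() before processing a bind and sleep (or defer the request) for that long; because the state is just an in-memory counter, it is naturally node-local, which fits the point about not replicating failure state.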
In a widely distributed environment, it makes little sense to replicate a password failure incident to servers located halfway around the world. Indeed, attempting to do so will likely create more opportunities for DoS throughout the infrastructure. IMO failure tracking should be node-local; logs of such events can be forwarded to a central security auditing point, but that should be a separate mechanism from the general-purpose directory.
I suppose in a load-balancing environment you need all of the servers in the pool to be kept in sync, but then you're losing the benefit of doing load balancing. (I.e., instead of dividing the workload amongst the servers, you're requiring all of the workload to be duplicated on all the servers.)