On Tuesday 22 July 2008 18:53:56 Kevin Elliott wrote:
Folks,
With all this talk about multimaster, could someone point me to some resources that describe industry-standard implementations and best practices of OpenLDAP in multimaster mode for the purposes of high availability and robustness? I have yet to see comprehensive documents that describe solutions for most small-to-medium businesses, and would love to see something you recommend.
In my opinion (I may have missed some scenarios):
1) If you need failover reads, have sufficient slaves, and ensure that all client software is configured (and able) to fail over; see the ldap.conf sketch after this list. In my case, that means I probably need to build sudo against OpenLDAP on Solaris instead of against the Sun LDAP SDK, and I might need to find a solution for bind_sdb-ldap (which doesn't seem to be able to take multiple hostnames in the LDAP URI).
2) If you need a site that only has a slave to be able to propagate changes, ensure that your client software is configured to chase the referrals the slave returns on updates (e.g. samba and pam_ldap can do this).
3) If you have a site that only has a slave, but changes need to be propagated from clients of this slave running software that does not chase referrals, use the chain overlay on the slave (a sketch follows this list).
If you have users using the OpenLDAP command-line utilities (which won't chase referrals with authentication), teach the users to send changes to your master (an example follows the list). If they can't do that, they shouldn't be using these utilities.
4) If you need consistent but highly available writes, use cluster middleware. If you have shared storage available (e.g. a SAN), use it; if you don't, use a software shared-storage implementation (e.g. DRBD).
5) If you need more write throughput (and tuning will not help you further), split your DIT, or scale up (get faster disks, more disks, a SAN etc.). Scaling out won't help, since every replica still has to apply every write anyway.
6) If you need to be able to write to the same portion of the DIT on different servers simultaneously, you should consider whether the possible data synchronisation conflicts could pose a problem. If they can't, multi-master may be for you (a configuration sketch follows).
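For item 1: with software built against the OpenLDAP libraries, client-side failover is usually just a matter of listing more than one server in the URI. A minimal /etc/openldap/ldap.conf sketch (the hostnames are placeholders for your own):

    # try ldap1 first, fall back to ldap2 if it is unreachable
    URI     ldap://ldap1.example.com/ ldap://ldap2.example.com/
    BASE    dc=example,dc=com

Software built against other SDKs (the Sun LDAP SDK, bind_sdb-ldap) may not accept a list of URIs like this, which is exactly the problem I mention in item 1.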
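For item 3, something like the following slapo-chain sketch in the slave's slapd.conf should do it (the URI, DN and credentials are placeholders, and the exact directives vary between releases, so check slapo-chain(5) for your version):

    overlay             chain
    chain-uri           "ldap://master.example.com/"
    chain-idassert-bind bindmethod=simple
                        binddn="cn=proxy,dc=example,dc=com"
                        credentials="secret"
                        mode=self
    chain-return-error  TRUE

Writes arriving at the slave are then forwarded to the master over the chained connection, instead of being bounced back to the client as a referral.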
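As for the command-line utilities, pointing them at the master explicitly is simple enough (hostname and DN are placeholders):

    ldapmodify -H ldap://master.example.com/ \
        -D "uid=jsmith,ou=people,dc=example,dc=com" -W -f change.ldif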
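And for item 6: with OpenLDAP 2.4, N-way multi-master is syncrepl with a serverID per master and mirrormode enabled. An untested two-master sketch, with placeholder hostnames, DNs and credentials (see the 2.4 Admin Guide for the real thing); the same configuration, including both syncrepl stanzas, goes on both servers:

    serverID    1   ldap://ldap1.example.com/
    serverID    2   ldap://ldap2.example.com/

    database    bdb
    suffix      "dc=example,dc=com"
    directory   /var/lib/ldap

    syncrepl    rid=001
                provider=ldap://ldap1.example.com/
                type=refreshAndPersist
                retry="5 5 300 +"
                searchbase="dc=example,dc=com"
                bindmethod=simple
                binddn="cn=replicator,dc=example,dc=com"
                credentials=secret
    syncrepl    rid=002
                provider=ldap://ldap2.example.com/
                type=refreshAndPersist
                retry="5 5 300 +"
                searchbase="dc=example,dc=com"
                bindmethod=simple
                binddn="cn=replicator,dc=example,dc=com"
                credentials=secret

    mirrormode  TRUE

The serverID lines let each slapd work out which server it is from its listener URL. None of this removes the caveat in item 6: conflicting writes are still reconciled more or less last-writer-wins.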
I have seen people on this list wanting multi-master to solve most of the items above, of which only one (6) may be a valid reason.
BTW, I use multi-master on my "personal" infrastructure, which consists of a desktop machine at home, a laptop that is used at home, at work and other places, and a desktop at work. Both desktops are domain controllers backed by LDAP, and I have multi-master configured between these 3 machines to ensure that password changes by domain members (at home, or at work) are propagated to all LDAP servers. However, I think this is probably an abuse of multi-master, and I don't think I will be logging any ITSs in the event that I lose any changes ...
In production, I have one HA cluster (RHEL3 with Red Hat Cluster Suite on an EMC SAN for shared storage) acting as the master for one environment (with 2 slaves in the production site, and one "failover" master and one slave in the DR site). The other environment (which is actually bigger) has a standalone master and load-balanced slaves for the "production" site, and standalone slaves for the satellite sites. I won't be risking data consistency on more than 1 million entries with multi-master.
Regards,
Buchan