Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Taymour,
This was recently covered in Suretec's blog (provided I understand what you're saying correctly): http://blog.suretecsystems.com/archives/50-OpenLDAP-2.4-with-MirrorMode-Acti...
Regards,
Andy See
Multimaster support is present in OpenLDAP 2.4.
On Thu, 6 Dec 2007, Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
--
Taymour A El Erian
System Division Manager
RHCE, LPIC, CCNA, MCSE, CNA
TE Data
E-mail: taymour.elerian@tedata.net
Web: www.tedata.net
Tel: +(202)-33320700
Fax: +(202)-33320800
Ext: 1101
Aaron Richton wrote:
Multimaster support is present in OpenLDAP 2.4.
That's not quite the complete answer though. He's also talking about two servers sharing the same storage. In general, that is not supported in BerkeleyDB and is certainly not supported by back-bdb or back-hdb.
On Thu, 6 Dec 2007, Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Howard Chu wrote:
Aaron Richton wrote: Multimaster support is present in OpenLDAP 2.4.
That's not quite the complete answer though. He's also talking about two servers sharing the same storage. In general, that is not supported in BerkeleyDB and is certainly not supported by back-bdb or back-hdb.
What are you trying to accomplish?
If you want high availability for LDAP writes, create two master servers (each with its own storage/db files) in multimaster mode (2.4) or mirror mode, and set up the load balancer so that all connections to the VIP go to one master, failing over to the second master if the first one is down (active/hot standby). This provides better reliability because there are no single points of failure (a disk failure/SAN issue or db corruption on one won't generally affect the other, so you can fail over from these kinds of problems), and it minimizes write conflicts (since only one master is being written to at any given time). Additionally, create a bunch of read-only replicas behind a separate load-balanced VIP for the majority of your traffic (most LDAP clients are generally just doing auth and/or lookups, so they are read-only).
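Roughly, the mirror mode setup on one of those two masters looks something like the following slapd.conf fragment (just a sketch based on the 2.4 syncrepl/mirrormode directives; the hostname, suffix, and credentials are placeholders, and the second master would use serverID 2 with its provider pointed back at the first):

serverID 1

database    hdb
suffix      "dc=example,dc=com"
rootdn      "cn=admin,dc=example,dc=com"
directory   /var/lib/ldap

# serve changes to the other master
overlay syncprov

# pull changes from the other master
syncrepl rid=001
    provider=ldap://master2.example.com
    type=refreshAndPersist
    retry="5 5 300 +"
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret

mirrormode on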
If you are trying to do this to scale up write performance, multiple masters (in any form) are not really the answer (check the archives for the many times this has been discussed). Basically, it comes down to this: every master still has to write the same data, so adding masters doesn't increase write performance. Even with them sharing the db files, the disk I/O is probably the bottleneck, so this wouldn't really help. In general, your ratio of writes to reads in LDAP should be very small, so a read-only replica cluster (which can be expanded out to, for all practical purposes, an unlimited number of servers) will take most of the traffic off your masters, which are limited in scalability (under this model) to as big a box as you can build for one server (but this should be fine if you offload most clients to the R/O cluster and just have writes go to the masters).
On Thu, 6 Dec 2007, Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Clowser, Jeff (Contractor) wrote:
Howard Chu wrote:
Aaron Richton wrote: Multimaster support is present in OpenLDAP 2.4.
That's not quite the complete answer though. He's also talking about two servers sharing the same storage. In general, that is not supported in BerkeleyDB and is certainly not supported by back-bdb or back-hdb.
What are you trying to accomplish?
Add high availability to my master servers, avoiding replication.
If you want high availability for ldap writes, create two master servers (each with their own storage/db files) in multimaster mode (2.4) or mirror mode, and set up the load balancer such that all connections to the VIP go to one master, failing over to the second master if the first one is down.
What happens when that one master comes back again? Will the previous master replicate the data to it? What about conflicts?
(Active/hot standby.) This provides better reliability because there are no single points of failure (a disk failure/SAN issue or db corruption on one won't generally affect the other, so you can fail over from these kinds of problems), and it minimizes write conflicts (since only one master is being written to at any given time). Additionally, create a bunch of read-only replicas behind a separate load-balanced VIP for the majority of your traffic (most LDAP clients are generally just doing auth and/or lookups, so they are read-only).
I need the master/replica to be transparent to the clients, so should I use chaining?
Master 1    Master 2         Replica 1    Replica 2   (Chain)
    |___________|                |____________|
          |                            |
        VIP1                         VIP2
Now, do I use VIP2 on the clients and VIP1 in the chain configuration?
If you are trying to do this to scale up write performance, multiple masters (in any form) are not really the answer (check the archives for the many times this has been discussed). Basically, it comes down to this: every master still has to write the same data, so adding masters doesn't increase write performance. Even with them sharing the db files, the disk I/O is probably the bottleneck, so this wouldn't really help. In general, your ratio of writes to reads in LDAP should be very small, so a read-only replica cluster (which can be expanded out to, for all practical purposes, an unlimited number of servers) will take most of the traffic off your masters, which are limited in scalability (under this model) to as big a box as you can build for one server (but this should be fine if you offload most clients to the R/O cluster and just have writes go to the masters).
On Thu, 6 Dec 2007, Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
What are you trying to accomplish?
Add high availability to my master servers, avoiding replication.
Why avoid replication? Multimastering is not necessarily bad, if done right. If you have two masters, but always write to one, with the other as a hot standby, you have the high availability of multimastering without much risk of conflicts (conflicts will only occur if someone skips the VIP and writes conflicting data to each individual server). The key to avoiding this is to make only one master available to clients at any given time.
If you want high availability for ldap writes, create two master servers (each with their own storage/db files) in multimaster mode (2.4) or mirror mode, and set up the load balancer such that all connections to the VIP go to one master, failing over to the second master if the first one is down.
What happens when that one master comes back again? Will the previous master replicate the data to it? What about conflicts?
That's the idea. The failed master (even if you had to rebuild it from scratch) should just sync up with the primary master - if you have to rebuild from scratch (i.e. an empty db), leave the secondary master in place until the primary is fully synced up before making it primary again.
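(As a rough sketch of the rebuild-from-scratch case, with a placeholder suffix and file name: syncrepl will refresh an empty master from its provider on startup, but seeding a large database from a dump of the surviving master first is usually faster.)

# on the surviving master: dump the current database
slapcat -b "dc=example,dc=com" -l backup.ldif

# on the rebuilt master: load the dump before starting slapd
slapadd -q -b "dc=example,dc=com" -l backup.ldif

# then start slapd; syncrepl catches up on anything written since the dump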
I need the master/replica to be transparent to the clients, so should I use chaining?
Master 1    Master 2         Replica 1    Replica 2   (Chain)
    |___________|                |____________|
          |                            |
        VIP1                         VIP2
If you really need that, it will probably work (conceptually it's fine, but I've never actually used the OL chaining overlay, so there may be gotchas in what it can do). Doing this will spread the connections across many read-only servers, and only writes will "get through" to the master. Just be sure the read-only replicas point to the master VIP for their chaining. What I've found, though, is that most clients really only need read access, and the ones that need write access and can't follow referrals tend to be small enough in usage that they can be pointed at the master. I always consider a client badly designed if it can't split its read-only usage from its administrative/write usage, or follow referrals to do writes, and instead only works when pointed at a writable master. (For example, a mail server should only read LDAP for sending/receiving/reading mail; administrative purposes - anything from creating/deleting accounts to changing your password or email preferences - should be separated out so that part can point to the master. In particular, doing something like logging into webmail to read mail should not cause writes to the LDAP server - something the Sun webmail product, for example, is guilty of.)
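For what it's worth, a rough sketch of how that chaining might look on each replica (untested; the directive names come from slapo-chain(5) and slapd.conf(5), and the host and DNs are placeholders): the replica refers writes to the master VIP via updateref, and the chain overlay chases that referral on the client's behalf.

# global/frontend section of the replica's slapd.conf
overlay             chain
chain-uri           "ldap://vip1.example.com"
chain-idassert-bind bindmethod=simple
                    binddn="cn=proxy,dc=example,dc=com"
                    credentials=secret
                    mode=self
chain-return-error  TRUE

# in the replicated database section, refer writes to the master VIP
updateref           "ldap://vip1.example.com"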
Now, do I use VIP2 on the clients and VIP1 in the chain configuration?
Sounds right (conceptually).
- Jeff
Clowser, Jeff (Contractor) wrote:
What are you trying to accomplish?
Add high availability to my master servers, avoiding replication.
Why avoid replication? Multimastering is not necessarily bad, if done right. If you have two masters, but always write to one, with the other as a hot standby, you have the high availability of multimastering without much risk of conflicts (conflicts will only occur if someone skips the VIP and writes conflicting data to each individual server). The key to avoiding this is to make only one master available to clients at any given time.
So in this case you would use multimaster instead of doing master/slave and upgrading the slave to a master in case of master failure? How different is this from mirror mode? I think a load balancer can be configured in NAT mode, and this way no one can skip the VIP.
If you want high availability for ldap writes, create two master servers (each with their own storage/db files) in multimaster mode (2.4) or mirror mode, and set up the load balancer such that all connections to the VIP go to one master, failing over to the second master if the first one is down.
What happens when that one master comes back again? Will the previous master replicate the data to it? What about conflicts?
That's the idea. The failed master (even if you had to rebuild it from scratch) should just sync up with the primary master - if you have to rebuild from scratch (i.e. an empty db), leave the secondary master in place until the primary is fully synced up before making it primary again.
I need the master/replica to be transparent to the clients, so should I use chaining?
Master 1    Master 2         Replica 1    Replica 2   (Chain)
    |___________|                |____________|
          |                            |
        VIP1                         VIP2
If you really need that, it will probably work (conceptually it's fine, but I've never actually used the OL chaining overlay, so there may be gotchas in what it can do). Doing this will spread the connections across many read-only servers, and only writes will "get through" to the master. Just be sure the read-only replicas point to the master VIP for their chaining. What I've found, though, is that most clients really only need read access, and the ones that need write access and can't follow referrals tend to be small enough in usage that they can be pointed at the master. I always consider a client badly designed if it can't split its read-only usage from its administrative/write usage, or follow referrals to do writes, and instead only works when pointed at a writable master. (For example, a mail server should only read LDAP for sending/receiving/reading mail; administrative purposes - anything from creating/deleting accounts to changing your password or email preferences - should be separated out so that part can point to the master. In particular, doing something like logging into webmail to read mail should not cause writes to the LDAP server - something the Sun webmail product, for example, is guilty of.)
Can anyone from the developers tell me if this would work?
Now, do I use VIP2 on the clients and VIP1 in the chain configuration?
Sounds right (conceptually).
- Jeff
On Thursday 06 December 2007 13:46:28 Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Sharing the same storage is typically what one would do with an Active/Passive HA cluster, using a cluster middleware to avoid both nodes accessing the storage simultaneously (and moving the resource IP etc. with it).
The only real way to do active/active for writes is multimaster replication in 2.4. But, do note that all it buys you is a cheaper solution (e.g., no SAN required), not a faster one.
Regards, Buchan
Buchan Milne wrote:
On Thursday 06 December 2007 13:46:28 Taymour A. El Erian wrote:
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?
Sharing the same storage is typically what one would do with an Active/Passive HA cluster, using a cluster middleware to avoid both nodes accessing the storage simultaneously (and moving the resource IP etc. with it).
The only real way to do active/active for writes is multimaster replication in 2.4. But, do note that all it buys you is a cheaper solution (e.g., no SAN required), not a faster one.
Indeed. I wonder why people even think that "load balancing" with shared storage ever makes sense. The biggest bottleneck in server performance is disk throughput. Putting a bunch of fast CPU frontends in front of the same Bunch Of Disks isn't going to do squat for write rates, and write rates are the only important metric in a replication scenario.
Indeed. I wonder why people even think that "load balancing" with shared storage ever makes sense. The biggest bottleneck in server performance is disk throughput. Putting a bunch of fast CPU frontends in front of the same Bunch Of Disks isn't going to do squat for write rates, and write rates are the only important metric in a replication scenario.
I think perhaps the OP is asking the wrong question, sure, but I see a great need for the ability to provide a "load balanced" read cluster not for the performance gains (and there certainly could be some on reads) but for the HA. I'd certainly like to be able to quiesce a node for maintenance, take it out of the cluster, patch it, bring it back up and have it re-sync, then bring it back into the cluster without any interruption in service to the users. It looks like mirror mode in an active/active cluster (behind a load balancer) would allow me to do that.
John
-----Original Message-----
From: openldap-software-bounces+jeff_clowser=fanniemae.com@openldap.org [mailto:openldap-software-bounces+jeff_clowser=fanniemae.com@openldap.org] On Behalf Of John Madden
Sent: Thursday, December 06, 2007 1:39 PM
To: Howard Chu
Cc: Buchan Milne; openldap-software@openldap.org; Taymour A. El Erian
Subject: Re: Active/Active servers
Indeed. I wonder why people even think that "load balancing" with shared storage ever makes sense. The biggest bottleneck in server performance is disk throughput. Putting a bunch of fast CPU frontends in front of the same Bunch Of Disks isn't going to do squat for write rates, and write rates are the only important metric in a replication scenario.
I think perhaps the OP is asking the wrong question, sure, but I see a great need for the ability to provide a "load balanced" read cluster not for the performance gains (and there certainly could be some on reads) but for the HA. I'd certainly like to be able to quiesce a node for maintenance, take it out of the cluster, patch it, bring it back up and have it re-sync, then bring it back into the cluster without any interruption in service to the users. It looks like mirror mode in an active/active cluster (behind a load balancer) would allow me to do that.
John
The only change I would make to this is: rather than an active/active master cluster, I'd have it active/hot standby (i.e. the VIP on the load balancer directs connections to only one master, and fails over to the other master if that one is unavailable, rather than balancing connections between the two masters all the time), to avoid/minimize write conflicts.
- Jeff
The only change I would make to this is: rather than an active/active master cluster, I'd have it active/hot standby (i.e. the VIP on the load balancer directs connections to only one master, and fails over to the other master if that one is unavailable, rather than balancing connections between the two masters all the time), to avoid/minimize write conflicts.
Good point, I hadn't considered write conflicts. Active/passive of course won't provide you the read performance of active/active/LB, but I doubt that's really the concern here anyway.
John
--On December 6, 2007 5:03:01 PM -0500 John Madden jmadden@ivytech.edu wrote:
The only change I would make to this is: rather than an active/active master cluster, I'd have it active/hot standby (i.e. the VIP on the load balancer directs connections to only one master, and fails over to the other master if that one is unavailable, rather than balancing connections between the two masters all the time), to avoid/minimize write conflicts.
Good point, I hadn't considered write conflicts. Active/passive of course won't provide you the read performance of active/active/LB, but I doubt that's really the concern here anyway.
If people want READ performance, then set up a couple of replicas in an LB pool, and point the clients that just need to read at them. Just like one does now. Masters for writing, replicas for reading.
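(For illustration, a minimal consumer sketch for one such read-only replica, with placeholder names: it pulls from the master VIP and refers any write attempts back to it.)

syncrepl rid=101
    provider=ldap://vip1.example.com
    type=refreshAndPersist
    retry="5 5 300 +"
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret

updateref "ldap://vip1.example.com"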
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Aaron Richton is right, I think -- but replication of what? There is only the one LDAP database...
-----Original Message-----
From: "Taymour A. El Erian" taymour.elerian@tedata.net
To: openldap-software@openldap.org
Date: Thu, 06 Dec 2007 13:46:28 +0200
Subject: Active/Active servers
Hi,
I am not sure if this is the right place to ask this or not. If I install 2 nodes of OpenLDAP and they both share the same SAN storage, is it possible for both of them to work active/active, i.e. behind a load balancer (doing reads and writes)?