We have two OpenLDAP 2.4.7 servers configured in MirrorMode, and we are planning to put a load balancer in front of both servers in the production environment. We don't want to run into the conflict issues that were described here before as a messy process.
 ---------     ---------
 .  Srv1 .     .  Srv2 .
 ---------     ---------
       \          /
      ------------
      .  LoadB   .
      ------------
As I understand it, the load balancer (in failover mode) directs all traffic to the active server (srv1); if the active server goes down, the traffic is redirected to the standby server (srv2). When srv1 comes back online, the load balancer redirects all traffic to srv1, even while srv1 is still in the process of syncing with srv2. The load balancer does not consider the sync process; it just redirects the traffic.
It was previously stated on the mailing list that there should be only one write point at a time. Will any conflicts occur while a server is receiving a bulk sync and, at the same time, receiving (attribute-level) update and add requests?
What happens if there is an attribute-level conflict? How can it be avoided? Suggestions are highly welcome.
-- Diaa Radwan
Diaa Radwan wrote:
[...]
I would recommend the LB only redirects when the server it is *currently* pointing to goes down.
That way, if svr1 goes down, svr2 becomes active. When svr1 comes back, it resynchronises and is then ready and waiting to take over should svr2 go down.
If you switch back immediately and some hardware failure has caused corruption or other problems on svr1, it would be unwise for it to become the active node until you have fully investigated why it went down in the first place.
Gavin.
Diaa Radwan wrote:
[...]
It was previously stated on the mailing list that there should be only one write point at a time. Will any conflicts occur while a server is receiving a bulk sync and, at the same time, receiving (attribute-level) update and add requests?
Yes, this is a possibility. At Symas we do not advise our customers to immediately switch back to a failed server when it comes back online. Your mirrormode servers should be peers in every sense of the word: They should have the same disk, memory, network, and processor configuration. Therefore it won't matter which server is fielding write requests. When your first server goes offline, your load balancer should switch to the second and continue in that configuration until that one goes offline. Presumably by then you will have gotten your first server back online and it will have synchronized itself. If your second server goes offline, then the load balancer can switch back to the first. The synchronization status can be checked by looking at the operational attribute 'contextCSN' in the root object of the replicated naming context (remember to use '+' or call the attribute name out explicitly when using ldapsearch).
What happens if there is an attribute-level conflict? How can it be avoided? Suggestions are highly welcome.
Best to follow the procedure from the previous paragraph. If you absolutely _must_ switch back to the first server as soon as possible, wait until the contextCSN attributes in the mirror pair are equal to one another, or at least reasonably close. Note that in a system with a heavy write load they may never stay equal long enough to make a perfectly clean switch, so 'reasonably close' has to be good enough.
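A minimal sketch of the check Matt describes. The hostnames, suffix, and sample CSN values below are illustrative assumptions, not taken from this thread:

```shell
# contextCSN is an operational attribute, so it must be requested
# explicitly (or via '+') against the root of the replicated
# naming context, e.g.:
#
#   ldapsearch -x -H ldap://srv1.example.com \
#       -b "dc=example,dc=com" -s base contextCSN
#   ldapsearch -x -H ldap://srv2.example.com \
#       -b "dc=example,dc=com" -s base contextCSN
#
# Suppose the two queries returned these (hypothetical) values:
csn_srv1="20080122223015.123456Z#000000#001#000000"
csn_srv2="20080122223015.123456Z#000000#001#000000"

# Switch back only once the values match (or are reasonably close):
if [ "$csn_srv1" = "$csn_srv2" ]; then
    echo "mirrors in sync; safe to switch back"
else
    echo "mirrors differ; keep waiting"
fi
```

In practice you would loop on this comparison rather than check once, since the CSNs only converge once the returning mirror has caught up.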
Hope this helps,
-Matt
--
Matthew Hardin
Symas Corporation - The LDAP Guys
http://www.symas.com
I think, too, the idea is that you treat the second master server as a slave in practice, meaning you never send updates to it unless the primary master is down.
Effectively, the difference from a Master/Slave setup is that you will not have to promote the Slave to a Master and adjust any replication agreement settings in the event of a failed server.
Is that a fair analysis?
Sellers
On Jan 22, 2008, at 5:23 PM, Matthew Hardin wrote:
[...]
______________________________________________
Chris G. Sellers | NITLE Technology
734.661.2318 | chris.sellers@nitle.org
AIM: imthewherd | GTalk: cgseller@gmail.com
<quote who="Chris G. Sellers">
[...]
Is that a fair analysis?
Pretty much, and also that the configurations are exactly the same, apart from where the syncrepl points and the ServerID.
Gavin Henry wrote:
[...]
Pretty much, and also that the configurations are exactly the same, apart from where the syncrepl points and the ServerID.
In fact, using the ServerID, the configurations can be exactly the same, period. (Put both syncrepl configurations on both servers; the ServerID is used to prevent a server from redundantly connecting to itself.) So you don't have to adjust any settings at all for automatic failover and recovery.
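A hedged sketch of what this looks like in slapd.conf terms, with both syncrepl stanzas present in the identical config on both servers and serverID given with a URL so each server can recognise itself. The hostnames, suffix, and credentials are placeholders, not from this thread:

```
serverID   1  ldap://srv1.example.com
serverID   2  ldap://srv2.example.com

database   bdb
suffix     "dc=example,dc=com"

syncrepl   rid=001
           provider=ldap://srv1.example.com
           type=refreshAndPersist
           searchbase="dc=example,dc=com"
           bindmethod=simple
           binddn="cn=replicator,dc=example,dc=com"
           credentials=secret
           retry="5 5 300 +"

syncrepl   rid=002
           provider=ldap://srv2.example.com
           type=refreshAndPersist
           searchbase="dc=example,dc=com"
           bindmethod=simple
           binddn="cn=replicator,dc=example,dc=com"
           credentials=secret
           retry="5 5 300 +"

mirrormode on
```

Each server matches its own listener URL against the serverID lines, so srv1 effectively ignores the rid=001 stanza and srv2 the rid=002 stanza; the same file can be deployed to both nodes unchanged.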
<quote who="Howard Chu">
[...]
Ah, OK. So a unique ServerID is the only requirement. I'll update the MirrorMode docs in the Guide.
Thanks.
--
Howard Chu
Chief Architect, Symas Corp.  http://www.symas.com
Director, Highland Sun  http://highlandsun.com/hyc/
Chief Architect, OpenLDAP  http://www.openldap.org/project/
On Monday 21 January 2008 16:49:39 Diaa Radwan wrote:
[...]
As I understand it, the load balancer (in failover mode) directs all traffic to the active server (srv1);
Actually, I would consider having two separate "server farms" configured, with two "virtual servers", one running on each. There would be one virtual server
openldap-software@openldap.org