Hi,
I have been searching for a long time for a solution that gives me high availability on writes. We have 2 LDAP servers running as multi-master (I know it is not considered a good thing, and it is a very old 2.0.x); this way, if one is down, the other will accept writes (add/modify/delete). If we do the normal single master with multiple slaves, we get more performance and high availability for reads, but if the master is down, no updates. Also, we can not separate writes from reads, and we can not use referrals (not all applications we have can chase referrals). I thought of having a standby master and using heartbeat, but it doesn't look like a stable solution. Any ideas? Maybe shared disk?
Taymour A. El Erian wrote:
[...]
For OpenLDAP 2.{2,3}.x you can use ggated+gmirror+carp (on FreeBSD) or heartbeat + a mirror with DRBD (on Linux). It should work.
For OpenLDAP 2.4 a two-node multi-master solution has been announced, AFAIK, but 2.4 is currently alpha.
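To illustrate the Linux variant: with heartbeat v1, the whole failover group can be declared in one haresources line, roughly as below (node name, address, DRBD resource and mount point are invented for the example):

# /etc/ha.d/haresources (heartbeat v1), illustrative only:
# on failover, the active node takes the virtual IP, activates the DRBD
# resource, mounts the mirrored filesystem, and starts slapd via the
# "ldap" init script, all as one resource group.
ldap1 IPaddr::192.168.0.10/24 drbddisk::r0 \
      Filesystem::/dev/drbd0::/var/lib/ldap::ext3 ldap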
WBR. Dmitriy
Dmitriy Kirhlarov wrote:
[...]
Thanks Dmitriy. Would you happen to have any documentation regarding Linux? And what about the case I explained: we have clients that do both reads and writes. How do we send the writes to the masters and the reads to the slaves without having the clients chase referrals?
Taymour A. El Erian wrote:
Dmitriy Kirhlarov wrote:
[...]
For OpenLDAP 2.4 a two-node multi-master solution has been announced, AFAIK, but 2.4 is currently alpha.
Two-node high availability support was in older 2.4 alpha releases. Current 2.4 releases support full N-way multimaster.
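As a rough sketch of what N-way multimaster looks like in a 2.4 slapd.conf (directives per the 2.4 admin guide; hostnames, suffix and credentials below are invented):

# each server gets an ID; the URL form lets all masters share one config
serverID  1 ldap://ldap1.example.com
serverID  2 ldap://ldap2.example.com

database  bdb
suffix    "dc=example,dc=com"
# one syncrepl consumer stanza per master, so each pulls from the others
syncrepl  rid=001 provider=ldap://ldap1.example.com
          type=refreshAndPersist retry="5 5 300 +"
          searchbase="dc=example,dc=com"
          bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
          credentials=secret
syncrepl  rid=002 provider=ldap://ldap2.example.com
          type=refreshAndPersist retry="5 5 300 +"
          searchbase="dc=example,dc=com"
          bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
          credentials=secret
mirrormode on
overlay   syncprov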
Thanks Dmitriy. Would you happen to have any documentation regarding Linux? And what about the case I explained: we have clients that do both reads and writes. How do we send the writes to the masters and the reads to the slaves without having the clients chase referrals?
Use the chaining overlay. See test017 for an example configuration.
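Very roughly, the idea on a slave is the following (a sketch only, with invented names; see slapo-chain(5) and the test configs for the authoritative version): the chain overlay intercepts the referral generated by updateref and forwards the write to the master itself, so the client never sees a referral.

# global section of the slave's slapd.conf:
overlay             chain
chain-uri           ldap://master.example.com/
chain-idassert-bind bindmethod=simple
                    binddn="cn=proxy,dc=example,dc=com"
                    credentials=secret
                    mode=self

database            bdb
suffix              "dc=example,dc=com"
# the referral that the chain overlay intercepts:
updateref           ldap://master.example.com/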
Howard Chu wrote:
Two-node high availability support was in older 2.4 alpha releases. Current 2.4 releases support full N-way multimaster.
Very nice.
Howard,
Could you please tell me how stable openldap-2.4 is and when the "alpha" suffix is going to be removed from the 2.4 branch? Thank you in advance.
WBR Dmitriy
Dmitriy Kirhlarov wrote:
[...]
Could you please tell me how stable openldap-2.4 is and when the "alpha" suffix is going to be removed from the 2.4 branch? Thank you in advance.
There is a beta release planned soon.
Gavin.
Gavin Henry wrote:
[...]
I would like to make my request clearer:
I want to have 2 master servers in a 2-node cluster (active/standby) using shared storage (an irrelevant topic here) and 1 slave server. The 3 servers should be behind the same IP address (using a layer-4 switch or IP load balancer). This IP address will be used by all LDAP-aware applications. If an application tries to do a write operation and it hits the slave, the slave should redirect it to the master (which will have another IP address or URI, different from the group IP address).
Is this possible? Does it make any sense?
On Thursday 16 August 2007 16:09:45 Taymour A. El Erian wrote:
I would like to make my request clearer:
I want to have 2 master servers in a 2-node cluster (active/standby) using shared storage (an irrelevant topic here) and 1 slave server. The 3 servers should be behind the same IP address (using a layer-4 switch or IP load balancer).
This is not active/standby then.
This IP address will be used by all LDAP-aware applications. If an application tries to do a write operation and it hits the slave, the slave should redirect it to the master (which will have another IP address or URI, different from the group IP address).
Is this possible? Does it make any sense?
No, this does not make sense.
What may make sense would be:
- active/standby cluster with one published IP address
- load balancer using cluster IP and slave
- slave set up sanely with updateref set to hostname of clustered IP
Or, even better: 2 slaves load-balanced, an active/standby cluster, and updateref on both slaves pointing to the hostname of the clustered IP.
Or, load-balanced multi-master with 2.4.
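For the updateref variants above, the slave-side fragment might look like this (a sketch only; the clustered hostname, suffix and credentials are invented):

database   bdb
suffix     "dc=example,dc=com"
# replicate from whichever master node currently holds the clustered IP
syncrepl   rid=001 provider=ldap://ldap-cluster.example.com
           type=refreshAndPersist retry="5 5 300 +"
           searchbase="dc=example,dc=com"
           bindmethod=simple binddn="cn=replicator,dc=example,dc=com"
           credentials=secret
# refer write operations to the hostname of the clustered IP
updateref  ldap://ldap-cluster.example.com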
Regards, Buchan
Buchan Milne wrote:
[...]
This is not active/standby then.
One master will be running and the other will not be accessing the storage.
[...]
I guess this is what I actually meant, but maybe I did not say it right. The main problem is that I do not want the clients to chase referrals; I would like to use the chain overlay to have them redirected automatically.
I am currently using multi-master with 2.0.x.
Howard Chu wrote:
[...]
Use the chaining overlay. See test017 for an example configuration.
Hi,
I am not sure which conf files belong to this example.
On Sunday 12 August 2007 19:51:39 Dmitriy Kirhlarov wrote:
[...]
For OpenLDAP 2.{2,3}.x you can use ggated+gmirror+carp (on FreeBSD) or heartbeat + a mirror with DRBD (on Linux). It should work.
Or any clustering system using shared storage. For example, I have a cluster running RHEL3 with Red Hat Cluster Suite on (shared) Fibre-attached storage.
In the end, there isn't that much that is specific to OpenLDAP (e.g. most clustering solutions provide support for MySQL; the only difference is which init script the clustering middleware uses to start/stop the service).
For OpenLDAP 2.4 a two-node multi-master solution has been announced, AFAIK, but 2.4 is currently alpha.
But, for some applications (which just like to assume that one IP address will always have the master), you may still need a load balancer or clustering software to manage a virtual IP address to get the HA part ...
Regards, Buchan
Buchan Milne wrote:
[...]
Have you tried RH cluster suite with OpenLDAP? I couldn't find any documentation for that.
On Tuesday 14 August 2007 08:42:38 Taymour A. El Erian wrote:
[...]
Have you tried RH cluster suite with OpenLDAP? I couldn't find any documentation for that.
I haven't "tried" it, I currently run it in production on one cluster on RHEL3 (and previously have done so on RHEL 2.1 AS).
The documentation for RHCS covers sufficiently the requirements for running any service under RHCS (unfortunately the init script in the RH openldap packages doesn't use the right exit codes for best operation with RHCS ... and that was one of the original reasons I built my own packages - see http://staff.telkomsa.net/packages/ ).
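For context: cluster middleware decides whether a service needs a restart or a failover from the exit code of the init script's status action, which is expected to follow the LSB convention (0 = running, 3 = stopped). A minimal sketch of a conforming status branch (not the actual Red Hat script; the pidfile path is a guess):

status)
        if [ -f /var/run/openldap/slapd.pid ] && \
           kill -0 "$(cat /var/run/openldap/slapd.pid)" 2>/dev/null ; then
                echo "slapd is running"
                exit 0          # LSB: service running
        else
                echo "slapd is stopped"
                exit 3          # LSB: service not running
        fi
        ;;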
If you have *specific* questions, please ask, but this isn't the right forum to discuss the use of clustering middleware.
Regards, Buchan
Buchan Milne wrote:
[...]
Would you happen to have any documentation on how to implement chaining?
Taymour A. El Erian skrev, on 16-08-2007 15:24:
[...]
Would you happen to have any documentation on how to implement chaining?
If you've built from source (Buchan's srpm or whatever) and have a build directory from a successful build, go to $DIR/tests and run './run test032-chain'; when it's done, go to ./testrun and look at both slapd.?.conf files; that should give you a basic idea, together with the rest of what's there.
However, I broke my own chain slapd config file on the relevant (Samba) slave server, and the shell scripts I use for master updates etc. kept getting referral errors, even with the above configs. I spent "a good time" on this and finally got it working again with the following:
overlay chain
chain-uri           ldaps://ldap.master/
chain-idassert-bind bindmethod=simple
                    binddn="cn=proxy,dc=school,dc=nl"
                    credentials=Wh4t3v3r
                    mode=self
                    flags=non-prescriptive
I was using digest-md5 SASL binding and ldap with StartTLS, which I always use for all replication etc., but that was what was breaking chaining - no idea why.
This is OL 2.3.37, BTW.
HTH,
--Tonni
Tony Earnshaw wrote:
[...]
If you've built from source (Buchan's srpm or whatever) and have a build directory from a successful build, go to $DIR/tests and run './run test032-chain'; when it's done, go to ./testrun and look at both slapd.?.conf files; that should give you a basic idea, together with the rest of what's there.
I always get this error:
Could not locate slapd(8)
I checked the code, and this happens because of the following check:
if test ! -x /usr/sbin/.3 ; then
Taymour A. El Erian skrev, on 20-08-2007 16:04:
[...]
I always get this error
Could not locate slapd(8)
What "always"?
When do you get this error? What are you doing at the time?
[...]
I'd seriously forget doing *anything whatever* (let alone "High availability") with OpenLDAP until you can get this little thing working.
Best,
--Tonni
--
Tony Earnshaw Email: tonni at hetnet dot nl
Tony Earnshaw wrote:
[...]
What "always"?
When do you get this error? What are you doing at the time?
I get the error when I try to run the test, as you noted above.
Taymour A. El Erian skrev, on 21-08-2007 08:21:
[...]
(In Thunderbird 2.0.0.6 I attempt to remove incorrectly inserted quotes and *try* to correct anathema-waking coding and formatting).
I checked the code and this happens because of the following check:
if test ! -x /usr/sbin/.3 ; then
Nowhere in my test environment can I reproduce such a path (or anything like it). If you're in $tests after a successful build, whatever it is that's testing for slapd will find it in ../servers/slapd/slapd. There's never any '.3' in /usr/sbin/, whatever happens, and never will be.
I get the error when I try to run the test, as you noted above.
AFAICS you never produced a successful build, but I could be wrong.
Anyone else care to help Taymour on his way? I give up ...
--Tonni
Tony Earnshaw wrote:
[...]
Here is an excerpt of the code from the run script:
AC_THREADS=threadsyes
export AC_bdb AC_hdb AC_ldap AC_ldbm AC_meta AC_monitor AC_relay AC_sql \
       AC_accesslog AC_dynlist AC_pcache AC_ppolicy AC_refint AC_retcode \
       AC_rwm AC_unique AC_syncprov AC_translucent AC_valsort \
       AC_WITH_SASL AC_WITH_TLS AC_WITH_MODULES_ENABLED AC_ACI_ENABLED \
       AC_THREADS
if test ! -x /usr/sbin/.3 ; then
        echo "Could not locate slapd(8)"
        exit 1
fi
BACKEND=
This test is what's giving me the problem. I rebuilt openldap using the src rpm.
Taymour A. El Erian skrev, on 22-08-2007 08:35:
[...]
Strange, my run scripts says:
AC_THREADS=threadsyes
export AC_bdb AC_hdb AC_ldap AC_ldbm AC_meta AC_monitor AC_relay AC_sql \
       AC_accesslog AC_dynlist AC_pcache AC_ppolicy AC_refint AC_retcode \
       AC_rwm AC_unique AC_syncprov AC_translucent AC_valsort \
       AC_WITH_SASL AC_WITH_TLS AC_WITH_MODULES_ENABLED AC_ACI_ENABLED \
       AC_THREADS
if test ! -x ../servers/slapd/slapd ; then
        echo "Could not locate slapd(8)"
        exit 1
fi
BACKEND=
Is this Buchan's srpm? I see the latest versions produce two extra rpms, testprogs and tests. I'd never installed these - until just now, to look. And, indeed, his run script has .3 instead of what it should be, slapd2.3; and even if one corrects this, it's everywhere in the test configs, too.
Simply go to your BUILD/openldap-whatever/tests dir and run the tests there ...
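For example (the BUILD path below is just what an RPM rebuild typically uses; adjust to your setup):

cd /usr/src/redhat/BUILD/openldap-2.3.37/tests
./run test032-chain
ls testrun/          # the generated slapd.?.conf files land here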
Best,
--Tonni
On Wednesday 22 August 2007 08:35:37 Taymour A. El Erian wrote:
if test ! -x /usr/sbin/.3 ; then
        echo "Could not locate slapd(8)"
        exit 1
fi
This is a victim of trying to get the tests working in a separate package. It's quite obvious what the fix is ... make it /usr/sbin/slapd2.3 instead of /etc/sbin/.3 (and in scripts/defines.sh too).
If it's not fixed yet, I'll try and fix it soon ...
Regards, Buchan
Buchan Milne wrote:
[...]
I have built the RPMs from your src RPM on RHEL 3 U8. I had already tried this before I sent the email to the list, and I get the following error:
Running slapadd to build slapd database...
./scripts/test032-chain: line 31: /usr/sbin/.3: No such file or directory
On Wednesday 22 August 2007 11:29:52 Taymour A. El Erian wrote:
[...]
Following the instructions directly above (taking into account I mistyped /usr/sbin/.3 as /etc/sbin/.3) would have fixed this (I tested it myself), but I have fixed the issue in the 2.3.38 packages which are now available (from http://staff.telkomsa.net/packages/), and it is now possible to run all the tests using only the binary packages, by:
1) sudo yum install openldap2.3-tests
2) export TMPDIR=/tmp
3) cd /usr/share/openldap2.3/tests
4) make tests
(This packaging of the tests was initially done for Mandriva, which at present doesn't need the versioned suffix for 2.3)
Regards, Buchan
Tony Earnshaw wrote:
[...]
If you've built from source (Buchan's srpm or whatever) and have a build directory from a successful build, go to $DIR/tests and run './run test032-chain'; when it's done, go to ./testrun and look at both slapd.?.conf files; that should give you a basic idea, together with the rest of what's there.
I can't seem to find this ./testrun directory; I have ./testdata.
Taymour A. El Erian skrev, on 03-09-2007 15:26:
[...]
I can't seem to find this ./testrun directory; I have ./testdata.
You'll only have it when you've run the test, and it will have the timestamp of the time you ran the test.
I will be posting a new question about chaining; it explains what works for me and what doesn't, and asks why.
--Tonni
Buchan Milne wrote:
[...]
I want to make this more available. I have a DR site, and I would like to have an LDAP server there receiving all the updates from the primary site. This DR LDAP will become active in case the primary site is down and will accept read/write operations.
What do you think?
On Tuesday 21 August 2007 08:57:07 Taymour A. El Erian wrote:
I want to make this more available. I have a DR site, and I would like to have an LDAP server there receiving all the updates from the primary site. This DR LDAP will become active in case the primary site is down and will accept read/write operations.
In my company, DR means a disaster has struck, e.g. a building has been destroyed. So, you typically don't:
1) Do a DR failover for any trivial reason
2) Do a DR failover automatically
So, just run a normal replica in the DR site, and have a config file ready to be able to "promote" it to a master. If you need slaves there too, just have them replicate off the "DR master", so you have fewer config changes to make in the case of a real disaster.
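As a sketch of such a "promotion" (filenames and paths invented; the two configs differ only in the consumer directives):

# slapd.conf.replica carries the consumer bits:
#     syncrepl  rid=001 provider=ldap://master.primary.example.com ...
#     updateref ldap://master.primary.example.com
# slapd.conf.master is the same database with syncrepl/updateref removed.
# In a real disaster, on the DR box:
service ldap stop
cp /etc/openldap/slapd.conf.master /etc/openldap/slapd.conf
service ldap start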
Regards, Buchan
Buchan Milne wrote:
[...]
Well, maybe I am not explaining it correctly: we have a problem in the main site which could make it unreachable for a couple of hours or so, and in that event I want everything to run on the backup site. If the main site is down, I need the backup to take over, even for writes and updates.