Please stay on-list.
On Tuesday 22 July 2008 10:41:18 Liutauras Adomaitis wrote:
On Tue, Jul 22, 2008 at 10:02 AM, Buchan Milne <bgmilne@staff.telkomsa.net> wrote:
On Monday 21 July 2008 14:48:23 Liutauras Adomaitis wrote:
On Mon, Jul 21, 2008 at 12:49 PM, Buchan Milne <bgmilne@staff.telkomsa.net> wrote:
On Sunday 20 July 2008 23:34:03 Liutauras Adomaitis wrote:
[...]
It shows that it is adding MirrorMode TRUE. So why?
The configuration directive may have been overloaded when multi-master was added (after mirrormode). AFAIK it allows the database in question to both have a syncrepl directive, yet take updates from a DN besides the updatedn (see the description on the slapd.conf man page).
Are you saying that in a multimaster configuration I have to have an updatedn directive to be able to do writes?
No. Without mirrormode or multi-master, a slave would only accept updates from the updatedn. In multi-master, the master is also a slave, so it needs to accept updates from any DN, while being configured as a slave (having replication configuration).
Sorry, but I still don't get it. As you say, a node in a multimaster configuration is master and slave at the same time. As a slave, it can accept writes only when it has an updatedn directive. Right? I'm really sorry if I don't understand, but it seems that "No" contradicts the other sentences.
No means: your interpretation that multimaster requires updatedn to be set is incorrect.
In thread "explain diff between multimaster and mirror mode" I found out, that mirrormode is kind of high availability implementation for openldap. In my case I want to have multimaster replication, which could allow me
to
do writes to different master servers at a time.
You may want to think very carefully about why you want this, and not mirrormode, or a single master.
Yes, maybe it is not necessary. I have read some posts saying the same - people do multimaster when they really don't need it at all. This is my first acquaintance with syncrepl and LDAP replication, so I don't really know what is best for me. I tried master-slave, but it had some undesired side effects,
If you are that unspecific about the issues, we can't comment on your decision.
so I decided to switch to a configuration where I could do writes on "slaves", and turned those slaves into masters. I didn't know about the updatedn directive, which maybe is suitable for me, but that needs to be tested. I decided not to do mirrormode, because according to Dieter Kluenter: "Mirror mode is a sort of backup and standby system. Only one ldap server should be visible and available, thus allowing write operations, while the second ldap-server is in hot standby position, and only available to clients". But I need to do writes to both masters at the same time.
You haven't explained why, so we can't give you feedback on your decision to use multimaster.
Are you sure it is not some other aspect of your configuration? Have you posted details? Have you posted the error message?
Most probably it is my configuration and I'm missing something. The error is "shadow context; no update referral at ...", which I posted in my first message. I receive this error on any master I try to write to. If I add mirrormode true to the end, I can do writes.
Exactly. Enabling "mirrormode" is a prerequisite for multimaster.
The difference between mirrormode and multimaster (as far as I know) these days (post 2.4.6) is really your architecture (whether you allow writes to one master, or more), not the configuration (though more than two syncrepl statements does mean that it can't be mirrormode).
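For illustration only, a minimal two-node fragment of this kind of configuration might look as follows; all hostnames, DNs and credentials here are placeholders, not taken from this thread, and the second node would carry the mirror image with serverID 2 and provider=ldap://node1.example.com:

# slapd.conf fragment on node 1 -- hypothetical values throughout
serverID	1
syncrepl	rid=001
		provider=ldap://node2.example.com
		type=refreshAndPersist
		searchbase="dc=example,dc=com"
		bindmethod=simple
		binddn="cn=replicator,dc=example,dc=com"
		credentials=secret
		retry="60 +"
mirrormode	TRUE

The point being that each node carries both a syncrepl statement and mirrormode TRUE; whether you call it mirrormode or multimaster is then down to where you send writes.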
I have one central master, and the other masters are replicated from this one.
This is not the definition of multimaster. As far as I know, the recommended architecture for multimaster is that all masters can see (replicate from) each other. Otherwise, they are locally writable slaves, with no guarantee of convergence.
The configuration of the other masters is the same, except the serverIDs are 2 and 3 respectively, and they have only one syncrepl directive pointing to the central master. So there is no replication between master002 and master001.
Regards, Buchan
This is how I view multi-master as well. It's an implementation differentiation more so than a technology difference.
Sellers
On Jul 22, 2008, at 6:40 AM, Buchan Milne wrote:
Exactly. Enabling "mirrormode" is a prerequisite for multimaster.
The difference between mirrormode and multimaster (as far as I know) these days (post 2.4.6) is really your architecture (whether you allow writes to one master, or more), not the configuration (though more than two syncrepl statements does mean that it can't be mirrormode).
Folks,
With all this talk about multimaster, could someone point me to some resources that describe industry standard implementations and best practices of OpenLDAP in multimaster mode for the purposes of high availability and robustness? I have yet to see comprehensive documents that describe solutions for most small to medium businesses, and would love to see something you recommend.
Thanks, Kevin
Chris G. Sellers wrote:
<snip>
On Tuesday 22 July 2008 18:53:56 Kevin Elliott wrote:
<snip>
In my opinion (I may have missed some scenarios):
1) If you need failover reads, have sufficient slaves, and ensure that all software and configurations are able/configured to fail over. In my case, that means I probably need to build sudo against OpenLDAP on Solaris instead of against the Sun LDAP SDK, and I might need to find a solution for bind_sdb-ldap (which doesn't seem to be able to take multiple hostnames in the LDAP URI). See the client-side sketch after this list.
2) If you need a site that only has a slave to be able to propagate changes, ensure that your software is configured to chase referrals on updates (e.g. samba can, pam_ldap can, etc.).
3) If you have a site that only has a slave, but changes need to be propagated from clients of this slave from software that does not chase referrals, use the chain overlay.
If you have users using the OpenLDAP commandline utilities (which won't chase referrals with authentication), teach the users to send changes to your master. If they can't do that, they shouldn't be using these utilities.
4) If you need consistent but highly available writes, use cluster middleware. If you have shared storage available (e.g. a SAN), use it. If you don't, use a shared-storage software implementation (e.g. DRBD).
5) If you need more write throughput (and tuning will not help you further), split your DIT, or scale up (get faster disks, more disks, SAN etc.). Scaling out won't help.
6) If you need to be able to write to the same DIT portion on different servers simultaneously, you should consider whether the possible data synchronisation issues could pose a problem. If they don't, multi-master may be for you.
I have seen people on this list wanting multi-master to solve most of the items above, where only one of them (6) may be a valid reason.
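To illustrate (1): with most clients, failover is just a matter of listing more than one server in the client's LDAP URI. A hypothetical /etc/ldap.conf fragment for pam_ldap/nss_ldap, with placeholder hostnames, might be:

# clients try the first server, and fall back to the second
uri ldap://master.example.com ldap://slave.example.com
bind_policy soft

bind_sdb-ldap, as noted above, is one of the exceptions that apparently cannot take such a list.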
BTW, I use multi-master on my "personal" infrastructure, which consists of a desktop machine at home, a laptop that is used at home and at work and other places, and a desktop at work. Both desktops are domain controllers backed by LDAP, and I have multi-master configured between these 3 machines to ensure that password changes by domain members (at home, or at work) will be propagated to all LDAP servers. However, I think this is probably an abuse of multi-master, and I don't think I will be logging any ITSs in the event that I lose any changes ....
In production, I have one HA cluster (RHEL3 with Red Hat Cluster Suite on EMC SAN for shared storage) for a master for one environment (with 2 slaves in the production site, and one "failover" master and one slave in the DR site). The other environment (which is actually bigger) has a standalone master and load-balanced slaves for the "production" site, and standalone slaves for the other sites. I don't think I will be risking data consistency on > 1 million entries with multi-master.
Regards, Buchan
<quote who="Buchan Milne">
In my opinion (I may have missed some scenarios):
<snip>
Buchan,
You may want to firm these up and submit an ITS for:
http://www.openldap.org/doc/admin24/replication.html
and
http://www.openldap.org/doc/admin24/replication.html#MirrorMode
Thanks.
Buchan,
Thank you very much for taking the time to iterate over some scenarios and include your suggestions. I resonate with most of what you have suggested, and I have a follow-up question to see if you would recommend something different for our particular scenario.
We have a small LDAP database, but we have had several LDAP outages (mostly due to BDB corruptions whose cause we've yet to diagnose, other than possibly that all the versions are several years old). This ends up taking out all of our Unix, OS X, and Windows systems (we're running Samba on LDAP). Our slave LDAP seems to be in a good state during these outages, but most of our systems do not have the ability, for one reason or another, to communicate with the slave --- even if we were to point all the systems to it, we would not be able to write to it while the master LDAP is down, which is a deal breaker for us! Changes need to occur, and we need to feel confident that the diffs will make it back into the master when it is revived.
What is your suggestion for our specific scenario?
Thanks in advance, Kevin
Buchan Milne wrote:
<snip>
Kevin Elliott wrote:
<snip>
What version of OpenLDAP are you running at the moment? The BDB corruptions are very concerning.
Hi,
I am using the openldap-2.3.39 stable version. I am able to encode using etest.c in the libraries/liblber directory, but I am not able to decode the encoded data.
I encode the data, and the data is written to a file in binary format. While decoding, this file is opened and the descriptor is passed as a parameter to the function "ber_sockbuf_add_io". Then I do ber_get_next, and then ber_scanf.
If I have encoded one ber element, it decodes. If I decode 2 elements, ber_get_next fails. I am not sure why this is happening.
This is the output I get when I run ./dtest:

[root@Linux liblber]# ./dtest
===fd1 = 3
ber_get_next
ber_get_next: Numerical result out of range
[root@Linux liblber]#
Kindly help me .....
With regards,
ShashiKumar
My programs are as follows:
------------------------------- etest.c -------------------------------
#include "portable.h"
#include "lber-int.h"	/* for direct access to ber->ber_buf / ber->ber_len */

#include <stdio.h>
#include <ac/stdlib.h>
#include <ac/socket.h>
#include <ac/string.h>
#include <ac/unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#ifdef HAVE_CONSOLE_H
#include <console.h>
#endif /* HAVE_CONSOLE_H */

#include "lber.h"

static void usage( const char *name )
{
	fprintf( stderr, "usage: %s fmtstring\n", name );
}

static char *getbuf( void )
{
	char *p;
	static char buf[1024];

	if ( fgets( buf, sizeof(buf), stdin ) == NULL ) return NULL;
	if ( (p = strchr( buf, '\n' )) != NULL ) *p = '\0';
	return buf;
}

int main( int argc, char **argv )
{
	char arr[20];
	int i;
	int fd, fd1;
	BerElement *ber;
	Sockbuf *sb;

	/* enable debugging */
	int ival = -1;
	ber_set_option( NULL, LBER_OPT_DEBUG_LEVEL, &ival );

	/* the Sockbuf flushes the encoding to stdout; a copy is also
	 * written to ./abc below for dtest to read back */
	fd = fileno( stdout );
	sb = ber_sockbuf_alloc();
	if ( sb == NULL ) {
		perror( "ber_sockbuf_alloc" );
		return( EXIT_FAILURE );
	}
	ber_sockbuf_add_io( sb, &ber_sockbuf_io_fd, LBER_SBIOD_LEVEL_PROVIDER,
		(void *)&fd );

	if ( (ber = ber_alloc_t( LBER_USE_DER )) == NULL ) {
		perror( "ber_alloc" );
		return( EXIT_FAILURE );
	}

	fprintf( stderr, "encode: start\n" );

	/* NB: this writes three independent top-level INTEGER elements */
	ber_printf( ber, "iii", 107, 108, 109 );

	if ( ber_flush( sb, ber, 0 ) == -1 ) {
		perror( "ber_flush" );
		return( EXIT_FAILURE );
	}

	memcpy( (void *)arr, (void *)ber->ber_buf, ber->ber_len );
	printf( "ber_len = %ld\n", (long) ber->ber_len );
	fd1 = open( "./abc", O_WRONLY|O_CREAT, 0644 );	/* O_CREAT needs a mode */
	printf( "====fd = %d\n", fd1 );
	write( fd1, ber->ber_buf, ber->ber_len );
	for ( i = 0; i < ber->ber_len; i++ )
		printf( "0x%02x ", (unsigned char) ber->ber_buf[i] );
	printf( "\n" );

	ber_sockbuf_free( sb );
	ber_free( ber, 1 );
	return( EXIT_SUCCESS );
}
------------------------------- dtest.c -------------------------------
#include "portable.h"

#include <stdio.h>
#include <ac/stdlib.h>
#include <ac/string.h>
#include <ac/socket.h>
#include <ac/unistd.h>
#include <ac/errno.h>

#ifdef HAVE_CONSOLE_H
#include <console.h>
#endif

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include "lber.h"

static void usage( const char *name )
{
	fprintf( stderr, "usage: %s fmt\n", name );
}

int main( int argc, char **argv )
{
	ber_tag_t tag;
	ber_len_t len;
	BerElement *ber;
	Sockbuf *sb;
	int fd1;
	ber_int_t a, b, c;

	/* enable debugging */
	int ival = -1;
	ber_set_option( NULL, LBER_OPT_DEBUG_LEVEL, &ival );

#ifdef HAVE_CONSOLE_H
	ccommand( &argv );
	cshow( stdout );
#endif

	sb = ber_sockbuf_alloc();
	fd1 = open( "./abc", O_RDWR );
	printf( "===fd1 = %d\n", fd1 );
	ber_sockbuf_add_io( sb, &ber_sockbuf_io_fd, LBER_SBIOD_LEVEL_TRANSPORT,
		(void *)&fd1 );

	ber = ber_alloc_t( LBER_USE_DER );
	if ( ber == NULL ) {
		perror( "ber_alloc_t" );
		return( EXIT_FAILURE );
	}

	/* fetch one BER element from the file */
	for (;;) {
		tag = ber_get_next( sb, &len, ber );
		if ( tag != LBER_ERROR ) break;
		if ( errno == EWOULDBLOCK ) continue;
		if ( errno == EAGAIN ) continue;
		perror( "ber_get_next" );
		return( EXIT_FAILURE );
	}

	printf( "decode: message tag 0x%lx and length %ld\n",
		(unsigned long) tag, (long) len );

	/* NB: loops until ber_scanf fails */
	for (;;) {
		tag = ber_scanf( ber, "iii", &a, &b, &c );
		if ( tag == LBER_ERROR ) {
			perror( "ber_scanf" );
			return( EXIT_FAILURE );
		}
	}

	ber_sockbuf_free( sb );
	return( EXIT_SUCCESS );
}
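For what it's worth: ber_printf( ber, "iii", 107, 108, 109 ) emits three independent top-level INTEGER elements, not one element containing three integers, so a single ber_get_next() only fetches the first of them, and the matching ber_scanf() format is "i" per element. A minimal, untested sketch of that decode pattern, reusing the ./abc file and the I/O setup from the post above, might be:

#include <stdio.h>
#include <fcntl.h>
#include "lber.h"

int main( void )
{
	Sockbuf *sb = ber_sockbuf_alloc();
	int fd = open( "./abc", O_RDONLY );
	ber_tag_t tag;
	ber_len_t len;
	ber_int_t val;

	ber_sockbuf_add_io( sb, &ber_sockbuf_io_fd,
		LBER_SBIOD_LEVEL_PROVIDER, (void *)&fd );

	/* one ber_get_next()/ber_scanf() pair per top-level element */
	for (;;) {
		BerElement *ber = ber_alloc_t( LBER_USE_DER );

		tag = ber_get_next( sb, &len, ber );
		if ( tag == LBER_ERROR ) {	/* end of data, or a real error */
			ber_free( ber, 1 );
			break;
		}

		/* each element holds a single INTEGER, so scan with "i" */
		if ( ber_scanf( ber, "i", &val ) == LBER_ERROR ) {
			perror( "ber_scanf" );
			ber_free( ber, 1 );
			break;
		}
		printf( "decoded %d\n", (int) val );
		ber_free( ber, 1 );
	}

	ber_sockbuf_free( sb );
	return 0;
}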
On Tue, Jul 22, 2008 at 1:40 PM, Buchan Milne <bgmilne@staff.telkomsa.net> wrote:
<snip>
If you are that unspecific about the issues, we can't comment on your decision.
I was thinking of starting a new thread, because it is a slightly different question.
<snip>
OK, I see now. MultiMaster and MirrorMode configurations are the same; the difference is just how I use them. Having the mirrormode true directive in the configuration file just enables write operations. The "load balancer" in MirrorMode, for redirecting writes if the first master fails, is a matter for third-party software.
If that is correct, then thanks a lot for explaining that to me. Liutauras