Re: Mirror mode replication
by Philip Colmer
> Just curious, why would you do "mirror mode" MMR vs just plain MMR? Do
> you feel you have a specific requirement that only one master ever
> receive the write traffic?
No specific requirement, but the documentation made various points
suggesting that "mirror mode" MMR would be easier to support. For example,
the arguments against plain MMR included:
- If connectivity with a provider is lost because of a network partition,
  then "automatic failover" can just compound the problem.
- Typically, a particular machine cannot distinguish between losing contact
  with a peer because that peer crashed and losing contact because the
  network link has failed.
- If a network is partitioned and multiple clients start writing to each of
  the "masters", then reconciliation will be a pain; it may be best to
  simply deny writes to the clients that are partitioned from the single
  provider.
The arguments against mirror mode, on the other hand, were more about
semantics (e.g. "MirrorMode is not what is termed as a Multi-Master
solution" and "MirrorMode can be termed as Active-Active Hot-Standby")
than any real negatives.
I'm essentially looking to have two LDAP servers and keep them in sync.
LDAP consumers will be configured to query both, and the web interfaces
will be configured to talk to their "local" instance, with DNS pointing at
a preferred instance.
My biggest concern about implementing MMR, plain or mirror mode, is the
challenge of recovering from a problem. Mirror mode seems simpler in that
respect: because only one node receives the writes, reconciliation should
be straightforward.
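For what it's worth, the per-node slapd configuration I have in mind is
just the standard mirror mode recipe from the Admin Guide, roughly the
following (host names and credentials are placeholders; the other node
would use serverID 2 with the provider pointing back at this one):

  serverID    1

  syncrepl    rid=001
              provider=ldap://ldap2.example.org
              bindmethod=simple
              binddn="cn=replicator,dc=example,dc=org"
              credentials=secret
              searchbase="dc=example,dc=org"
              schemachecking=on
              type=refreshAndPersist
              retry="60 +"

  mirrormode  on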
Philip
On 2 July 2013 16:27, Quanah Gibson-Mount <quanah(a)zimbra.com> wrote:
> --On Tuesday, July 02, 2013 10:25 AM +0100 Philip Colmer
> <philip.colmer(a)linaro.org> wrote:
>
>
>> At the moment, we have a single LDAP server which we are using with LDAP
>> Account Manager for web-based object management and Atlassian Crowd for
>> authentication. The LDAP server is queried directly by other servers for
>> UNIX-level authentication, i.e. SSH and group membership.
>>
>>
>> I'm looking at introducing a second LDAP server and I'm leaning towards
>> choosing mirror mode as the replication methodology. Since the only
>> writes to LDAP come via LAM or Crowd, and these are both web-based, I
>> think I could set up an almost identical server to the one I have at the
>> moment and use a system like Amazon's Route 53 DNS service with health
>> checks to allow me to redirect users off to the second server if the
>> first server fails.
>>
>
> Just curious, why would you do "mirror mode" MMR vs just plain MMR? Do
> you feel you have a specific requirement that only one master ever receive
> the write traffic?
>
> --Quanah
Corrupted cn=config on multimaster configuration
by paolo penzo
Hi all,
recently I upgraded some OpenLDAP installations running multi-master
replication from version 2.4.23 to version 2.4.34 (both compiled and
packaged locally on RHEL6). At first everything worked fine, but then on
one cluster the cn=config database became corrupted after some
modifications were made. When this happened, on all the nodes but one the
cn=config database was missing all of the other database definitions.
I investigated this a little with no luck, but I was able to reproduce the
behaviour, whenever replication is configured, with the simple script that
is attached. In my tests both nodes are configured with the monitor
database, but after replication is activated this database is removed on
the second node and the log reports "be_delete
olcDatabase={1}monitor,cn=config".
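By "the replication is configured" I mean a cn=config syncrepl setup along
the lines of the Admin Guide's N-Way Multi-Master example; roughly the
following (host names and credentials are placeholders, not my real
values):

  dn: cn=config
  changetype: modify
  replace: olcServerID
  olcServerID: 1 ldap://node1.example.com
  olcServerID: 2 ldap://node2.example.com

  dn: olcDatabase={0}config,cn=config
  changetype: modify
  add: olcSyncRepl
  olcSyncRepl: rid=001 provider=ldap://node1.example.com binddn="cn=config"
    bindmethod=simple credentials=secret searchbase="cn=config"
    type=refreshAndPersist retry="5 5 300 5" timeout=1
  olcSyncRepl: rid=002 provider=ldap://node2.example.com binddn="cn=config"
    bindmethod=simple credentials=secret searchbase="cn=config"
    type=refreshAndPersist retry="5 5 300 5" timeout=1
  -
  add: olcMirrorMode
  olcMirrorMode: TRUE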
Did I miss something?
Regards,
Paolo.
Mirror mode replication
by Philip Colmer
At the moment, we have a single LDAP server which we are using with LDAP
Account Manager for web-based object management and Atlassian Crowd for
authentication. The LDAP server is queried directly by other servers for
UNIX-level authentication, i.e. SSH and group membership.
I'm looking at introducing a second LDAP server, and I'm leaning towards
choosing mirror mode as the replication methodology. Since the only writes
to LDAP come via LAM or Crowd, and these are both web-based, I think I
could set up an almost identical server to the one I have at the moment
and use a system like Amazon's Route 53 DNS service with health checks to
redirect users to the second server if the first one fails.
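Concretely, the DNS side I have in mind is a Route 53 failover record pair
with a health check attached to the primary, something like this via the
AWS CLI (the zone ID, health check ID and addresses are invented for
illustration):

  aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch '{
      "Changes": [
        { "Action": "CREATE",
          "ResourceRecordSet": {
            "Name": "ldap.example.org", "Type": "A",
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "TTL": 60,
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [ { "Value": "10.0.0.1" } ] } },
        { "Action": "CREATE",
          "ResourceRecordSet": {
            "Name": "ldap.example.org", "Type": "A",
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "TTL": 60,
            "ResourceRecords": [ { "Value": "10.0.0.2" } ] } }
      ]
    }'

With a low TTL, clients should move to the secondary address reasonably
quickly once the health check on the primary fails.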
It occurs to me, though, that either LAM or Crowd could have failed while
leaving the LDAP service itself quite healthy. In that situation, all of
the writes would still be directed to one of the LDAP servers, so is that
a problem?
The documentation says that "the two providers are set up to replicate from
each other but an external frontend is employed to direct all writes to
only one of the two servers. The second provider will only be used for
writes if the first provider crashes, at which point the frontend will
switch to directing all writes to the second provider. When a crashed
provider is repaired and restarted it will automatically catch up to any
changes on the running provider and resync."
So the only requirement of mirror mode *seems* to be that all writes just
go to one provider at a time, i.e. the replication model must be as close
to single-master as possible, presumably because of consistency
requirements?
Or do I need to introduce yet another layer of "something" to direct the
writes? The documentation suggests slapd in proxy mode or a hardware load
balancer, but would my scenario as described above meet the needs?
Thanks.
Philip
OpenLDAP multimaster
by 25Dollar Tech
Hello Team,
I have a few concerns about OpenLDAP multi-master restoration and migration.
Scenario
I have OpenLDAP multi-master configured on Ubuntu 9.04, with OpenLDAP
version 2.4.15, i.e. NODE1 and NODE2.
Unfortunately NODE2 has crashed due to a server hardware failure, and I
have been running my OpenLDAP infrastructure for a while without the
second master. The multi-master configuration is still in place on NODE1,
which is working without any issues.
Question
1) Is it possible to join a new node, with a newer OpenLDAP version on
Ubuntu 12.04, to the existing OpenLDAP multi-master setup? The existing
NODE1 server has the config DB and HDB already configured and contains
more than 10000 entries, and NODE2 has already crashed.
2) If yes, what is the procedure to replicate the config DB, and how do I
bring the databases back in sync?
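For instance, is something along these lines the right approach? This is
just my guess; the paths, database numbers and service names below assume
a stock Debian/Ubuntu layout:

  # On NODE1 (stop slapd, or at least quiesce writes, first):
  slapcat -n 0 -l config.ldif     # dump cn=config
  slapcat -n 1 -l data.ldif       # dump the hdb data

  # Copy both files to the new node, edit the host-specific values in
  # config.ldif (olcServerID, syncrepl provider URLs, TLS paths), then:
  slapadd -n 0 -F /etc/ldap/slapd.d -l config.ldif
  slapadd -n 1 -q -l data.ldif
  chown -R openldap:openldap /etc/ldap/slapd.d /var/lib/ldap
  service slapd start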
--
Thanks & Regards,
25dollarTech Team
https://sites.google.com/site/25dollartech/
Email: 25dollartechhelp(a)gmail.com
pure_ftpd & LDAP
by maral
Hey!
I installed Pure-FTPd + LDAP and added a user in LDAP, but Pure-FTPd
doesn't recognize that user.
How can I fix this issue?
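In case it helps, my pure-ftpd LDAP configuration is roughly the following
(the server, base DN and password are anonymized, and I am assuming the
stock pureftpd-ldap.conf directives):

  LDAPServer   ldap.example.com
  LDAPPort     389
  LDAPBaseDN   ou=users,dc=example,dc=com
  LDAPBindDN   cn=admin,dc=example,dc=com
  LDAPBindPW   secret
  # If I understand the defaults correctly, pure-ftpd looks the user up
  # by uid and expects a posixAccount-style entry (uid, userPassword,
  # uidNumber, gidNumber, homeDirectory) or the PureFTPdUser auxiliary
  # class from pureftpd.schema.

Does the user entry need anything beyond that?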
OpenLDAP Proxy using PKCS#11/SmartCard client authentication
by Stefan Scheidewig
Hello,
we have two LDAP instances. LDAP A acts as a proxy for LDAP B using the
ldap backend. We have now configured LDAP B to require client
authentication. We successfully established a connection to LDAP B using
OpenSSL s_client and the PKCS#11 engine (OpenSSL engine library). Now we
want the LDAP proxy to establish its connection using this PKCS#11 engine
too (we compiled the LDAP proxy to use OpenSSL as its TLS implementation).
Is there a way to tell the LDAP proxy to use the certificate and key from
the smartcard (e.g. something like pkcs11:slot_1-id_42)?
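For reference, the s_client invocation that works for us is along these
lines (the host and file names below are placeholders, not our real
values):

  openssl s_client -connect ldap-b.example.com:636 \
    -engine pkcs11 -keyform engine \
    -key "pkcs11:slot_1-id_42" \
    -cert client-cert.pem -CAfile ca.pem

We would like the ldap backend to make an equivalent TLS handshake.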
Thank you in advance,
Stefan Scheidewig