Andrew,
Thank you for replying to my question.
> You can equip them with caches if you want to, or just set them up to
> pass the queries through to the appropriate backend server.

Is a non-caching proxy the same as a chain overlay?
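For clarity, this is roughly how I picture the two variants in slapd.conf; the suffix and hostnames are placeholders, so please read it as a sketch rather than a tested configuration:

  # Variant 1: plain non-caching proxy, an ldap-backend database that
  # forwards everything under the suffix to a backend server.
  database    ldap
  suffix      "dc=example,dc=com"
  uri         "ldap://backend1.example.com/"

  # Variant 2: chain overlay, where the local server follows referrals
  # returned by its own databases instead of handing them to the client.
  overlay              chain
  chain-uri            "ldap://backend1.example.com/"
  chain-return-error   TRUE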
In the LDAP proxy solution you suggested, could I then simply put a generic load balancer like Linux Virtual Server in front of the 'team' of LDAP proxies? I'd like to have a single IP / hostname I can use for the LDAP clients.
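To make that concrete, something like the following ipvsadm setup is what I have in mind (the addresses are invented, and the round-robin scheduler and NAT forwarding are only first guesses, not a recommendation):

  # Virtual service on the single address/port the LDAP clients will use
  ipvsadm -A -t 192.0.2.10:389 -s rr
  # Real servers: the individual LDAP proxies, reached via NAT
  ipvsadm -a -t 192.0.2.10:389 -r 10.0.0.11:389 -m
  ipvsadm -a -t 192.0.2.10:389 -r 10.0.0.12:389 -m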
I like the idea of the caching proxies. The major advantage is that I expect only a relatively small subset of the users to be active at any one time, so the caches could be very small and therefore very fast.
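If I read slapo-pcache correctly, each proxy could then carry a small local cache along these lines (the cache database type, attribute set, template and sizes are only placeholders to illustrate the idea):

  overlay          pcache
  # backend type, max cached entries, number of attribute sets,
  # entries per query, consistency check period
  pcache           hdb 10000 1 50 100
  # attributes to cache for queries matching the template below
  pcacheAttrset    0 cn sn mail uid
  pcacheTemplate   (uid=) 0 3600
  # storage for the cache database itself
  directory        /var/lib/ldap/pcache
  index            objectClass eq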
Best Regards,

Germ van Eck

-----Original Message-----
From: Andrew Findlay [mailto:andrew.findlay@skills-1st.co.uk]
Sent: Thursday, 24 February 2011 17:28
To: Germ van Eck
Cc: openldap-technical@openldap.org
Subject: Re: Advise on distributed directory service
On Tue, Feb 22, 2011 at 05:07:27PM +0100, Germ van Ek wrote:
> Note: the use of referrals to construct a Distributed Directory Service
> is extremely clumsy and not well supported by common clients.

Very true...

> If an existing installation has already been built using referrals, the
> use of the chain overlay to hide the referrals will greatly improve the
> usability of the Directory system. A better approach would be to use
> explicitly defined local and proxy databases in subordinate
> configurations to provide a seamless view of the Distributed Directory.
>
> The use of a single proxy cache server seems to 'ease the pain' a bit,
> but does not seem like a very scalable approach. The use of proxy
> overlays would make the server the client connects to function as a
> kind of non-caching proxy, and in general 'be involved' in all of the
> requests, which again doesn't seem very desirable, and is very much a
> single point of failure.
As your proxy server will not hold any databases there is no reason why you cannot have many copies of it. They can all be identical. You can equip them with caches if you want to, or just set them up to pass the queries through to the appropriate backend server. This removes the single point of failure, and (with caches) improves the overall throughput of the system.
Andrew