Hello,
Thanks for replying.
You mentioned: Anyway, if you do reach those limits, I guess you must currently split up your LDAP directory. Put different subtrees in different servers. Then set up referrals between them. Tie them together with the chain overlay or ldap backend if you don't want the clients to have to deal with referrals, though that increases the server load.
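If I understand you correctly, on the server that keeps the main suffix this would look roughly like the snippet below. The host names, DNs and values are only placeholders I made up for illustration, and the directives are taken from the slapo-chain(5) man page, so please correct me if I have misread it:

    # slapd.conf sketch for the server holding the main suffix
    # (host names, DNs and values are placeholders, not our real setup)

    # frontend/global section: chain overlay, so referrals returned by
    # the local database are chased server-side instead of by the clients
    overlay            chain
    chain-uri          "ldap://server-b.example.com"
    chain-return-error TRUE

    # the normal local database follows as before
    database   mdb
    suffix     "dc=example,dc=com"
    rootdn     "cn=admin,dc=example,dc=com"
    directory  /var/lib/ldap/example

I also see that slapo-chain(5) says the overlay is built on top of the ldap backend (slapd-ldap), which could presumably also be used directly to glue the remote subtree in, as you say; that is partly why I ask about compile options below.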
So, my *query* - while compiling OpenLDAP, do we have to enable any special parameter if we are planning to go for this approach?
In our LDAP we store information on the basis of consumers, who assign themselves to various products. Because of that we have only one subtree (consumer information). Now, in the current working scenario, I plan to make subtrees on the basis of product instead. *Query* - how do I proceed from the current working setup? A rough sketch of what I mean follows the attribute list below.
*Consumer Information:* ConsumerId, LoginId, Password, Status, Phone, PrivateKey, ProductCode, and some 5-6 other attributes.
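Roughly, today the data looks like the first entry below, and what I have in mind is the second layout. The DNs, attribute names and the referral target are only made-up placeholders to show the idea:

    # current: everything under one consumer subtree
    dn: consumerId=C1001,ou=consumers,dc=example,dc=com
    consumerId: C1001
    productCode: P01
    status: active

    # proposed: one subtree per product, possibly on different servers
    dn: consumerId=C1001,ou=P01,ou=products,dc=example,dc=com
    consumerId: C1001
    status: active

    # on a server that does not hold a given product subtree, a referral
    # entry would point at the server that does
    dn: ou=P02,ou=products,dc=example,dc=com
    objectClass: referral
    objectClass: extensibleObject
    ou: P02
    ref: ldap://server-b.example.com/ou=P02,ou=products,dc=example,dc=com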
Kindly guide.
Thanks and Regards, Gaurav Gugnani
On Tue, Mar 20, 2012 at 12:20 PM, Hallvard B Furuseth < h.b.furuseth@usit.uio.no> wrote:
Gaurav Gugnani wrote:
Actually, I want to know how to "scale out" once you reach the limits of running OpenLDAP in one single box?
You said "some million of records". That's nowhere near OpenLDAP's limits, nor near the multi-terabyte databases you mention, unless your LDAP entries are quite large - e.g. lots of JPEG photos and the like.
Your scenario just sounds like a database which does not all fit in RAM. The Tuning section of the Admin Guide describes which parameters to give priority in that case. But as Howard mentions, that tuning will become unnecessary: the MDB backend leaves the caching to the OS.
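For back-bdb/hdb the parameters in question are roughly the ones below; the values are arbitrary illustrations, the right sizes depend on your data and RAM:

    # back-bdb/hdb tuning knobs (illustrative values only)
    cachesize     50000                     # entries in slapd's entry cache
    idlcachesize  150000                    # index slot (IDL) cache
    dbconfig set_cachesize 0 536870912 1    # 512 MB BerkeleyDB buffer cache

    # back-mdb needs none of these; it relies on the OS page cache,
    # you only give it a large enough memory map:
    # maxsize  10737418240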
Anyway, if you do reach those limits, I guess you must currently split up your LDAP directory. Put different subtrees in different servers. Then set up referrals between them. Tie them together with the chain overlay or ldap backend if you don't want the clients to have to deal with referrals, though that increases the server load.
-- Hallvard