Hello Gibson,
Thanks for replying. However, my concern is this:
In our production system we have 8 GB of RAM and a 100 GB hard disk. Up to what point will read/write operations against LDAP keep running smoothly on that, assuming one entry is roughly 10 KB?
Please bear with me, but we are planning for the "afterwards" case: what to do when LDAP slows down on this configuration.
For example, in MySQL we could first partition the tables and then, as a last resort, shard them.
So my question is: can we do anything other than upgrading hardware or the OS if I/O against LDAP becomes slow?
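For context, my rough back-of-envelope math, assuming about 10 KB per entry and a few million entries (index and replication overhead are my guesses, not measurements):

  1,000,000 entries x ~10 KB/entry  ~= 10 GB of raw entry data
  with indexes and BDB overhead, perhaps 2-3x that on disk, i.e. 20-30 GB
  against 8 GB of RAM, so only part of the working set can stay cached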
Thanks and Regards, Gaurav Gugnani
On Thu, Mar 15, 2012 at 12:20 AM, Quanah Gibson-Mount <quanah@zimbra.com> wrote:
--On Wednesday, March 14, 2012 11:08 AM +0530 Gaurav Gugnani <gugnanigaurav@gmail.com> wrote:
Hi All,
First of all, thanks for helping me out with my earlier OpenLDAP issues. Today my question is fairly generic, and many people working with LDAP probably run into it.
We are using OpenLDAP 2.4.26 with BDB as the backend, installed on a 64-bit Linux machine with 4 GB of RAM. We currently have about 10K records in it and it is working perfectly fine. The system is replicated with syncrepl.
However, in the near future we foresee a few million records turning up. So can anyone please advise: what are the different scaling options available with LDAP?
I'm not sure what you mean by scaling options. OpenLDAP scales, and that has been shown numerous times. How well/far it scales depends entirely on your hardware and operating system and the size of the DB in relation to those things.
--Quanah
--
Quanah Gibson-Mount Sr. Member of Technical Staff Zimbra, Inc A Division of VMware, Inc.
Zimbra :: the leader in open source messaging and collaboration
Well, you could put a bunch of slave nodes behind a VIP, or even multi-master behind a VIP.
Both mean 'more' hardware, but without stepping up to a new class of server, which would be orders of magnitude more expensive.
There's also DB tuning, I suspect, which I'm entirely clueless on.
I've also seen references to alternate system memory allocators on this list.
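As a very rough sketch of the consumer side (slapd.conf syntax; the hostname, suffix and credentials below are placeholders), each read-only replica behind the VIP would carry something like:

# pull changes from the provider; clients read from the replicas via the VIP
syncrepl rid=001
        provider=ldap://provider.example.com
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        bindmethod=simple
        binddn="cn=replicator,dc=example,dc=com"
        credentials=secret
        retry="30 +"
# refer writes back to the provider instead of accepting them locally
updateref ldap://provider.example.com

The VIP/load balancer itself is outside slapd; it just spreads client connections across the replicas.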
- chris
Chris Jacobs Systems Administrator, Technology Services Group
Apollo Group | Apollo Marketing & Product Development | Aptimus, Inc. 1501 4th Ave | Suite 2500 | Seattle, WA 98101 direct 206.839.8245 | cell 206.601.3256 | Fax 206.644.0628 email: chris.jacobs@apollogrp.edu
Chris Jacobs wrote:
There's also DB tuning I suspect, which I'm entirely clueless on.
All of that is going to be tossed out the window. back-mdb fits 4x as many entries as back-bdb/hdb in the same amount of RAM, runs queries 2x as fast, and has no tuning parameters.
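For reference, a back-mdb database stanza is about this small; a sketch with placeholder suffix, path and size (maxsize is just an upper bound on how large the database file may grow, not a performance knob):

database        mdb
suffix          "dc=example,dc=com"
rootdn          "cn=admin,dc=example,dc=com"
directory       /var/lib/ldap/example
# maximum database size in bytes (10 GB here); set it comfortably larger than you expect to need
maxsize         10737418240
index           objectClass eq
index           uid         eq

No cache sizing, no DB_CONFIG; the OS page cache does the work.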
Hello,
I may not have been clear in my earlier query. Let me try again.
What I actually want to know is: how do we "scale out" once we reach the limits of running OpenLDAP on a single box?
In one of the presentations by Howard Chu, he mentions:
The Road Ahead...
• Work on scale-out, vs scale-up
– allow multi-terabyte DBs to be served without requiring a single giant server
So that's what I want to know: what are the ways to scale out once we hit the limit of a single box?
I hope that is clearer this time.
Thanks and Regards,
Gaurav Gugnani
Gaurav Gugnani wrote:
What I actually want to know is: how do we "scale out" once we reach the limits of running OpenLDAP on a single box?
You said "some million of records". That's nowhere near OpenLDAP's limits, nor near the multi-terabyte databases you mention, unless your LDAP entries are quite large - e.g. lots of JPEG photos and the like.
Your scenario just sounds like a database which does not all fit in RAM. The Tuning section of the Admin Guide describes which parameters to give priority in that case. But as Howard mentions, that'll become unnecessary. The MDB backend will leave that to the OS.
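Concretely, for back-bdb/hdb the parameters that usually matter most are the BDB cache, set in a DB_CONFIG file in the database directory, and slapd's own entry/IDL caches in slapd.conf. A sketch with made-up sizes (tune to your actual data and RAM):

In DB_CONFIG (a 2 GB BDB cache, in one segment):

set_cachesize 2 0 1

In slapd.conf (entry counts, not bytes):

cachesize     50000
idlcachesize  150000

The Tuning section of the Admin Guide covers these in more detail.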
Anyway, if you do reach those limits, I guess you must currently split up your LDAP directory. Put different subtrees in different servers. Then set up referrals between them. Tie them together with the chain overlay or ldap backend if you don't want the clients to have to deal with referrals, though that increases the server load.
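A minimal sketch of that split, with made-up hostnames and suffix: server A keeps dc=example,dc=com and holds a referral object for the subtree that actually lives on server B. The referral entry on server A (manage it with the ManageDsaIT control, e.g. ldapadd/ldapmodify -M):

dn: ou=products,dc=example,dc=com
objectClass: referral
objectClass: extensibleObject
ou: products
ref: ldap://server-b.example.com/ou=products,dc=example,dc=com

And, if you want server A to chase the referral on behalf of clients, roughly this in server A's slapd.conf:

overlay            chain
chain-uri          "ldap://server-b.example.com"
chain-return-error TRUE

As said, chaining moves the extra round trips onto server A, so it costs server load there.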
Hallvard B Furuseth wrote:
Gaurav Gugnani wrote:
What I actually want to know is: how do we "scale out" once we reach the limits of running OpenLDAP on a single box?
Back when I wrote about that, I was speaking of back-ndb. Since it uses MySQL Cluster, you can simply add more cluster nodes if you want to scale further.
Going back to the original question - once you reach the limits of a single box, you obviously need either a larger box, or more boxes.
Unfortunately back-ndb (and the NDB API) needs a bit more work before it can be generally useful. And in the time since Oracle acquired Sun (and therefore MySQL), most people who were interested in the NDB OpenLDAP code have walked away from it. If you know of any developers who'd like to pick up back-ndb and push it further, send them over...
Hello,
Thanks for replying.
You mentioned splitting the directory into different subtrees on different servers, with referrals between them, and the chain overlay or ldap backend to hide the referrals from clients.
So my question: do we have to enable any special option when compiling OpenLDAP if we plan to go that route?
In our LDAP we store information keyed by consumer, and consumers assign themselves to various products. Because of that we currently have only one subtree (consumer information). If I now plan to create subtrees per product instead, how do I proceed on the current working system? A rough sketch of the kind of layout I mean is below.
Consumer Information: ConsumerId, LoginId, Password, Status, Phone, PrivateKey, ProductCode, plus 5-6 other attributes.
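For example, a hypothetical product-based layout might look like this in LDIF (the object classes and attribute mappings here are purely illustrative, not our real schema):

# one organizationalUnit per product
dn: ou=ProductA,dc=example,dc=com
objectClass: organizationalUnit
ou: ProductA

# a consumer entry relocated under its product
# (uid ~ LoginId, userPassword ~ Password, telephoneNumber ~ Phone)
dn: uid=consumer1001,ou=ProductA,dc=example,dc=com
objectClass: inetOrgPerson
uid: consumer1001
cn: Consumer 1001
sn: 1001
userPassword: {SSHA}xxxxxxxxxxxx
telephoneNumber: +91-9000000000

The concern is how to get from the existing single consumer subtree to something like this without breaking the running setup.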
Kindly guide.
Thanks and Regards, Gaurav Gugnani