Hello,
Some time ago, we installed a Linux guest with an OpenLDAP server (DB size approx. 650 MB) in an ESXi environment. Perhaps because of a read/write ratio of 100:1, the hard disks were heavily used by writes to the BDB backend's memory-mapped files. The CPU in that Linux system showed an iowait (in top) between 80% and 100%, and the other VMs on the ESXi host slowed down.
After switching to shared memory (shm_key), all the disk I/O problems were gone.
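(For context, the change amounts to something like the following in slapd.conf; the suffix, directory, and key value here are only examples. With back-bdb/hdb, leaving shm_key unset means BDB uses memory-mapped files, while a non-zero key puts the environment into a shared memory region:)

    database   hdb
    suffix     "dc=example,dc=com"   # example suffix
    directory  /var/lib/ldap         # example database directory
    shm_key    42                    # non-zero key: shared memory instead of mmap'd files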
I read on this mailing list and in the "OpenLDAP performance tuning" guide that it does not matter whether you use memory-mapped files or shared memory until the database is over 8 GB. So why did we have such problems?
Please note that OpenLDAP was operating very fast with the memory-mapped files, thanks to indexes and proper caching.
Now I want to install more than one OpenLDAP server on one Linux system (this time real hardware). Each OpenLDAP server will be bound to a separate IP address and DNS host name.
In this scenario it is hard to size the shared memory and to assign each LDAP server the right shared memory region (key).
Therefore I want to go back to memory-mapped files. Are there any recommendations for tuning the Linux system, such as:
- type of file system (ext3, ext4, xfs, ...)
- file system parameters (syncing -> commit=nrsec, data=*, ...)
- swap behavior (swappiness, dirty_background_ratio)
- ???
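(For reference, these knobs are set roughly like this; the values below are placeholders for illustration, not recommendations:)

    # ext3/ext4 journal options on the database volume (example values)
    mount -o remount,commit=30,data=writeback /var/lib/ldap
    # VM writeback and swap tuning via sysctl (example values)
    sysctl -w vm.swappiness=10
    sysctl -w vm.dirty_background_ratio=5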
Thanks for any help. Meike
Meike Stone wrote:
[...]
In this scenario it is hard to size the shared memory and to assign each LDAP server the right shared memory region (key).
?? Just pick some key numbers that are spread out "enough" to not overlap. 10, 20, 30, 40, etc.
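(A sketch, assuming one slapd.conf per instance; only the shm_key needs to differ so that each BDB environment gets its own region:)

    # slapd.conf of instance 1
    shm_key  10
    # slapd.conf of instance 2
    shm_key  20
    # slapd.conf of instance 3
    shm_key  30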
Therefore I want to go back to memory-mapped files. Are there any recommendations for tuning the Linux system, such as:
- type of file system (ext3, ext4, xfs, ...)
- file system parameters (syncing -> commit=nrsec, data=*, ...)
- swap behavior (swappiness, dirty_background_ratio)
- ???
Probably the most important setting is to mount with noatime or relatime.
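(For example, as an /etc/fstab entry, assuming the databases live on a dedicated ext4 volume; device and mount point are placeholders:)

    /dev/sdb1  /var/lib/ldap  ext4  noatime  0  2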
On Tue, Nov 1, 2011 at 6:43 PM, Howard Chu hyc@symas.com wrote:
[...]
Probably the most important setting is to mount with noatime or relatime.
Disabling write barriers is also a big win, and ext4 or perhaps XFS is a good choice (http://www.ep.ph.bham.ac.uk/general/support/raid/raidperf11.html).
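(For example, extending the fstab entry above; note that disabling barriers is only safe with a battery-backed or otherwise persistent write cache, so treat this as a sketch rather than a recommendation:)

    /dev/sdb1  /var/lib/ldap  ext4  noatime,barrier=0  0  2
    # the XFS equivalent is the nobarrier mount option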
hth
Meike Stone wrote:
[...]
Also, back-mdb (in git master) will behave much better in a VM deployment. (Actually, back-mdb behaves better than back-bdb/hdb in all environments.)
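(For the curious, a minimal back-mdb section looks roughly like this; suffix, directory, and size are only examples. Unlike back-bdb/hdb there are no cache directives to tune, just a maximum map size:)

    database   mdb
    suffix     "dc=example,dc=com"   # example suffix
    directory  /var/lib/ldap         # example database directory
    maxsize    1073741824            # max DB size in bytes (here: 1 GB)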
Hello Howard,
Thanks for the helpful information! Everything about back-mdb sounds so good! Will the new back-mdb be included in the next release? Is it recommended to use this backend in a production environment?
Thanks for your hard work on the great OpenLDAP!
Meike
2011/11/1 Howard Chu hyc@symas.com:
[...]
Meike Stone wrote:
Thanks for the helpful information! Everything about back-mdb sounds so good! Will the new back-mdb be included in the next release? Is it recommended to use this backend in a production environment?
I was thinking we should hold it off until OpenLDAP 2.5. But it actually is working perfectly fine already; we may include it in 2.4 as an Experimental feature.
It has passed every test I've thrown at it with no problems, but since the code is so new, I would tell anyone considering using it to test it heavily in their own dev/test environment before even thinking of pushing it into production.
From a practical perspective, nearly all of the back-mdb code is quite mature, being a direct copy/paste from back-bdb/hdb. But there are also portions that are quite new, and it would be wise to expect bugs lurking there somewhere.
Howard Chu wrote:
Meike Stone wrote:
Thanks for the helpful information! Everything about back-mdb sounds so good! Will the new back-mdb be included in the next release? Is it recommended to use this backend in a production environment?
I was thinking we should hold it off until OpenLDAP 2.5. But it actually is working perfectly fine already; we may include it in 2.4 as an Experimental feature.
I'm testing back-mdb in a local environment. No problems so far. I think it could be added in 2.4.27, announced as being for public testing. Otherwise no one else will test it thoroughly.
Ciao, Michael.
On 03/11/2011 10:22, Michael Ströder wrote:
I'm testing back-mdb in a local environment. No problems so far. I think it could be added in 2.4.27, announced as being for public testing. Otherwise no one else will test it thoroughly.
Is there an ETA for 2.4.27, by the way?
Seb
Sébastien Bernard wrote:
[...]
Is there an ETA for 2.4.27, by the way?
Not yet. We are blocked waiting for a fix to ITS#7025. I haven't yet got a test environment that can reproduce the issue. Help creating a bare minimum test case that demonstrates the problem would certainly be useful.
Michael Ströder wrote:
I'm testing back-mdb in a local environment. No problems so far. I think it could be added in 2.4.27, announced as being for public testing. Otherwise no one else will test it thoroughly.
Hello Michael,
How do you test? Can you share your tests, environment, findings, and experiences?
Thanks, Meike
Meike Stone wrote:
How do you test? Can you share your tests, environment, findings, and experiences?
1. Compiled HEAD from source (see www.openldap.org for how to access the git repository)
2. Exported my data with slapcat using the old config
3. Changed my slapd.conf to use "database mdb" instead of "database hdb" and removed all the back-hdb-specific configuration directives (mostly caching)
4. Imported my data with slapadd
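(Spelled out, steps 2-4 amount to something like the following; the file names are placeholders:)

    slapcat -f slapd.conf.old -l data.ldif   # 2. export with the old config
    # 3. in slapd.conf: change "database hdb" to "database mdb" and drop
    #    back-hdb tuning such as cachesize/idlcachesize/dncachesize
    slapadd -f slapd.conf -l data.ldif       # 4. re-import with the new config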
Feels quicker even with rather small data sets.
Ciao, Michael.