Is it safe to use a 'clone' of an OpenLDAP server's database to rebuild another server in a cluster?
In my tests I followed a procedure where I shut two servers down, copied the backend database from one to the other, and restarted, and everything seems to indicate that the 'cloned' server is valid. Replication works, adds/deletes work, etc.
Is there any danger in using this procedure? Is there anything 'instance specific' stored in the directory that could cause an issue?
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
I've done this before, but only in testing. I've also cloned to make a snapshot (cp -rp) and then changed the startup scripts, conf files, certificates, etc. to match the new server.
I'm curious to hear about others' success or failure with this too.
Sellers
On Jan 23, 2008, at 9:35 AM, Thomas Ledbetter wrote:
Is it safe to use a 'clone' of an OpenLDAP server's database to rebuild another server in a cluster?
In my tests I followed a procedure where I shut two servers down, copied the backend database from one to the other, and restarted, and everything seems to indicate that the 'cloned' server is valid. Replication works, adds/deletes work, etc.
Is there any danger in using this procedure? Is there anything 'instance specific' stored in the directory that could cause an issue?
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
______________________________________________
Chris G. Sellers | NITLE Technology
734.661.2318 | chris.sellers@nitle.org
AIM: imthewherd | GTalk: cgseller@gmail.com
On Wednesday 23 January 2008 18:18:08 Chris G. Sellers wrote:
I've done this before, but only in testing. I've also cloned to make a snapshot (cp -rp) and then changed the startup scripts, conf files, certificates, etc. to match the new server.
I'm curious to hear about others' success or failure with this too.
There are obviously ways this can fail, e.g. if the servers run on different architectures (x86 vs. x86_64), or run different (incompatible) major versions of OpenLDAP.
However, you can copy the database files, even if slapd was running at the time, if you are careful about how you copy the data files and transaction logs. I wouldn't copy certificates around myself, though ...
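For the archives, a sketch of what "careful" might look like, following the BerkeleyDB hot-backup ordering (data files first, then transaction logs, then recovery on the copy). The paths here are assumptions; match them to the 'directory' setting in your slapd.conf. BDB also ships a db_hotbackup utility that automates this.

```shell
#!/bin/sh
# Hot copy of a bdb/hdb backend, per the BerkeleyDB hot backup procedure.
# SRC/DST are illustrative -- adjust to your slapd.conf 'directory'.
SRC=/var/lib/ldap
DST=/var/lib/ldap-clone

mkdir -p "$DST"
cp -p "$SRC"/*.bdb "$DST"/    # 1. data files first (slapd may be live)
cp -p "$SRC"/log.* "$DST"/    # 2. transaction logs, strictly afterwards
db_recover -c -h "$DST"       # 3. catastrophic recovery on the copy
```

The ordering matters: logs copied after the data files let db_recover roll the copy forward to a consistent state.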
Regards, Buchan
Thomas Ledbetter skrev, on 23-01-2008 15:35:
Is it safe to use a 'clone' of an OpenLDAP server's database to rebuild another server in a cluster?
In my tests I followed a procedure where I shut two servers down, copied the backend database from one to the other, and restarted, and everything seems to indicate that the 'cloned' server is valid. Replication works, adds/deletes work, etc.
Is there any danger in using this procedure? Is there anything 'instance specific' stored in the directory that could cause an issue?
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
You make no mention of your OS/distro for any of the servers, nor the OpenLDAP version, nor to what extent the OS/distro and OpenLDAP versions match across the servers.
You might as well pose the question: "Is there a life after death? And what may I do to gain it?" without stating your hypothesis for there being such.
--Tonni
--On Wednesday, January 23, 2008 9:35 AM -0500 Thomas Ledbetter tledbett@revelstone.net wrote:
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
This would generally indicate that you've failed to properly tune DB_CONFIG.
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Quanah Gibson-Mount wrote:
--On Wednesday, January 23, 2008 9:35 AM -0500 Thomas Ledbetter tledbett@revelstone.net wrote:
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
This would generally indicate that you've failed to properly tune DB_CONFIG.
Or he's got a ~35GB database...
--On Wednesday, January 23, 2008 9:24 AM -0800 Howard Chu hyc@symas.com wrote:
This would generally indicate that you've failed to properly tune DB_CONFIG.
Or he's got a ~35GB database...
That's why I said "generally". ;) The post leaves out much of the information needed to really determine why slapadd is taking so long. But, "generally", an untuned DB_CONFIG is the culprit the majority of the time. ;)
--Quanah
Thomas-
Is it safe to use a 'clone' of an OpenLDAP server's database to rebuild another server in a cluster?
In my tests I followed a procedure where I shut two servers down, copied the backend database from one to the other, and restarted, and everything seems to indicate that the 'cloned' server is valid. Replication works, adds/deletes work, etc.
Is there any danger in using this procedure? Is there anything 'instance specific' stored in the directory that could cause an issue?
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
While I'm sure someone else will say that it's not advisable, I've cloned the disk of a Solaris 10 (x86-64) machine running OpenLDAP and haven't run into any issues with it yet. (Knocking on wood)
However- OpenLDAP was not running at the time.
Best of luck- -chris
Christopher Orr wrote:
Thomas-
Is it safe to use a 'clone' of an OpenLDAP server's database to rebuild another server in a cluster?
In my tests I followed a procedure where I shut two servers down, copied the backend database from one to the other, and restarted, and everything seems to indicate that the 'cloned' server is valid. Replication works, adds/deletes work, etc.
Is there any danger in using this procedure? Is there anything 'instance specific' stored in the directory that could cause an issue?
There is nothing 'instance specific' in the data files for current releases. If no other processes are using the files, it's generally safe to clone them. There are exceptions of course, which is why none of our documentation ever tells you that this is a safe thing to do.
I've found that even using slapadd's 'quick' flag it can still take 4 hours to import an LDIF, and if I can rely on this procedure to rebuild an LDAP read server in a crisis, I'd like to continue using it.
While I'm sure someone else will say that it's not advisable, I've cloned the disk of a Solaris 10 (x86-64) machine running OpenLDAP and haven't run into any issues with it yet. (Knocking on wood)
However- OpenLDAP was not running at the time.
We went to some effort to make sure that it's safe to run slapcat while slapd is running, to allow hot backups to be performed. Ignoring this feature is pretty counterproductive. BerkeleyDB itself also provides documentation for how to perform a hot backup of the raw DB files. Both of these options exist and are already documented; anything else you do at your own risk.
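A minimal sketch of that documented slapcat/slapadd cycle. The paths, filenames, and ownership here are illustrative assumptions, not the only way to do it:

```shell
#!/bin/sh
# Hot backup: slapcat is safe to run against a live slapd, so the dump
# can be taken without downtime.
slapcat -l /backup/ldap-$(date +%Y%m%d).ldif

# On the server being rebuilt (with slapd stopped there):
#   slapadd -q -l /backup/ldap-YYYYMMDD.ldif
#   chown -R ldap:ldap /var/lib/ldap   # fix ownership before starting
#   /etc/init.d/ldap start
```

The LDIF dump is also portable across architectures and OpenLDAP versions in a way that raw database files are not.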
Thanks for all the feedback on this guys.
This would generally indicate that you've failed to properly tune DB_CONFIG.
Completely possible! :) I worked through the material in the OpenLDAP FAQ-o-Matic in the past, but it was complicated, and we've added a lot of users since then.
Is there a better tuning guide out there? One that relates more to the actual data structures being used?
Or he's got a ~35GB database...
The backend database files alone weigh in at ~4.4 GB after running 'db_archive -d' to clean up the old 'log.*' files.
We went to some effort to make sure that it's safe to run slapcat while slapd is running, to allow hot backups to be performed. Ignoring this feature is pretty counterproductive. BerkeleyDB itself also provides documentation for how to perform a hot backup of the raw DB files. Both of these options exist and are already documented; anything else you do at your own risk.
I'd much prefer to use the slapcat method, but as I mentioned, the import time has grown substantially in the past half year as we've added so much data, to the point that it now takes 4 hours to do an import!
We're talking reasonably fast hardware too: it's a PowerEdge 1850, dual 2.8 GHz Xeons with 4 GB of memory and a RAID 1 array dedicated to the backend database.
The schema we use is highly customized for our application, and I'm working on a project to 'trim the fat', as there is definitely room for improvement there.
But what can I do to learn more about this fine art of tuning DB_CONFIG? :)
--On Thursday, January 24, 2008 10:13 PM -0500 Thomas Ledbetter tledbett@revelstone.net wrote:
Thanks for all the feedback on this guys.
This would generally indicate that you've failed to properly tune DB_CONFIG.
Completely possible! :) I worked through the material in the OpenLDAP FAQ-o-Matic in the past, but it was complicated, and we've added a lot of users since then.
Is there a better tuning guide out there? One that relates more to the actual data structures being used?
There's the FAQ entry at:
http://www.openldap.org/faq/index.cgi?_highlightWords=db_config&file=1075
but it looks like it needs some updating. Here's the general overview:
The cachesize to set in the DB_CONFIG file for loading via slapadd should, if at all possible, be at least the size reported by "du -c -h *.bdb" in the data directory. That is how much space your DB will occupy while loading via slapadd.
Other things you can do to decrease load time are to use the -q flag when bulk loading, and to set the tool-threads parameter in slapd.conf equal to the number of real cores (i.e., not hyper-threaded ones) your system has.
Since I don't know the specifics of your indexing, it's hard to say exactly how long loading your DB should take, but if the size of *.bdb is greater than the 4 GB of memory on your system, it is time to upgrade your system's RAM. You also don't say whether you are running a 32- or 64-bit kernel, and that's vital information to have.
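To make that concrete, a DB_CONFIG along these lines would match the sizing advice above for a ~4.4 GB database on a 64-bit host. The numbers are illustrative assumptions to show the knobs, not a recommendation for any particular system:

```
# ~4.5 GB BDB cache (gbytes, bytes, ncache); needs a 64-bit build
set_cachesize    4 500000000 1
set_lg_regionmax 262144
set_lg_bsize     2097152
# Let BDB remove old log files itself, instead of periodic db_archive -d
set_flags        DB_LOG_AUTOREMOVE
```

Note that cache-size changes only take effect when the BDB environment is recreated, so make them while slapd is stopped.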
--Quanah
Quanah Gibson-Mount wrote:
--On Thursday, January 24, 2008 10:13 PM -0500 Thomas Ledbetter tledbett@revelstone.net wrote:
Thanks for all the feedback on this guys.
This would generally indicate that you've failed to properly tune DB_CONFIG.
Completely possible! :) I worked through the material in the OpenLDAP FAQ-o-Matic in the past, but it was complicated, and we've added a lot of users since then.
Is there a better tuning guide out there? One that relates more to the actual data structures being used?
There's the FAQ entry at:
http://www.openldap.org/faq/index.cgi?_highlightWords=db_config&file=1075
but it looks like it needs some updating. Here's the general overview:
The cachesize to set in the DB_CONFIG file for loading via slapadd should, if at all possible, be at least the size reported by "du -c -h *.bdb" in the data directory. That is how much space your DB will occupy while loading via slapadd.
Other things you can do to decrease load time are to use the -q flag when bulk loading, and to set the tool-threads parameter in slapd.conf equal to the number of real cores (i.e., not hyper-threaded ones) your system has.
Since I don't know the specifics of your indexing, it's hard to say exactly how long loading your DB should take, but if the size of *.bdb is greater than the 4 GB of memory on your system, it is time to upgrade your system's RAM. You also don't say whether you are running a 32- or 64-bit kernel, and that's vital information to have.
tuning.sdf is still a straight import of that FAQ. Feel free to submit a patch ;-)