If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
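Roughly, that means something like the following (the repository URL and branch name here are assumptions; check the OpenLDAP download page for the current locations):

  # fetch the RE24 branch (URL and branch name assumed; verify first)
  git clone git://git.openldap.org/openldap.git
  cd openldap
  git checkout OPENLDAP_REL_ENG_2_4

  # configure, build, and run the test suite
  ./configure
  make depend
  make
  make test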
Thanks!
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Hi Quanah,
On 02/22/2013 12:24 AM, Quanah Gibson-Mount wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
Here's an undefined symbol error I got with RE24 rev 1e38e77 from a few minutes ago. I built it without any GnuTLS stuff but with the Red Hat RPM_OPT_FLAGS, which are "-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -Wl,--as-needed -DLDAP_CONNECTIONLESS".
Initiating LDAP tests for BDB...
Running ./scripts/all for bdb...
Executing all LDAP tests for bdb
Starting test000-rootdse for bdb...
running defines.sh
Starting slapd on TCP/IP port 9011...
Using ldapsearch to retrieve the root DSE...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
./scripts/test000-rootdse: line 66: kill: (21814) - No such process
/home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: symbol lookup error: /home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: undefined symbol: ldif_debug
Test failed
test000-rootdse failed for bdb
Regards, Patrick
--On Friday, February 22, 2013 2:02 AM +0100 Patrick Lists <openldap-list@puzzled.xs4all.nl> wrote:
Hi Quanah,
On 02/22/2013 12:24 AM, Quanah Gibson-Mount wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
Here's an undefined symbol error I got with RE24 rev 1e38e77 from a few minutes ago. I built it without any GnuTLS stuff but with the Red Hat RPM_OPT_FLAGS, which are "-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC -Wl,--as-needed -DLDAP_CONNECTIONLESS".
Initiating LDAP tests for BDB...
Running ./scripts/all for bdb...
Executing all LDAP tests for bdb
Starting test000-rootdse for bdb...
running defines.sh
Starting slapd on TCP/IP port 9011...
Using ldapsearch to retrieve the root DSE...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
Waiting 5 seconds for slapd to start...
./scripts/test000-rootdse: line 66: kill: (21814) - No such process
/home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: symbol lookup error: /home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: undefined symbol: ldif_debug
Sounds like your build linked to the system libraries (/usr/lib/). ldif_debug has been in RE24 for quite some time.
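A quick way to confirm is to check which liblber/libldap the binary actually resolves; a generic sketch, using the path from your log:

  # if these resolve to /usr/lib64 or /lib64 rather than the build tree,
  # the binary is picking up the system OpenLDAP libraries
  ldd clients/tools/.libs/lt-ldapsearch | grep -E 'liblber|libldap'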
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
On 02/22/2013 02:23 AM, Quanah Gibson-Mount wrote: [snip]
/home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: symbol lookup error: /home/mockbuild/rpmbuild/BUILD/openldap-1e38e77/clients/tools/.libs/lt-ldapsearch: undefined symbol: ldif_debug
Sounds like your build linked to the system libraries (/usr/lib/). ldif_debug has been in RE24 for quite some time.
I wiped the old RPM build, unpacked the tarball again, and just did a ./configure, make depend, make, and make test; it has now been running test060-mt-hot for a while. Seems there is something off in the rpmbuild process. That's beyond my knowledge, so suggestions where to look are most welcome.
Regards, Patrick
Hi Quanah,
On 02/22/2013 02:23 AM, Quanah Gibson-Mount wrote:
Sounds like your build linked to the system libraries (/usr/lib/). ldif_debug has been in RE24 for quite some time.
I found the cause. The Fedora Packaging Guidelines don't allow rpath:

http://fedoraproject.org/wiki/Packaging:Guidelines#Beware_of_Rpath

The snippet below in the spec file causes make test to fail:
%configure
sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' libtool
sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=DIE_RPATH_DIE|g' libtool
RHEL6/CentOS6's provided openldap RPM is required by a zillion packages, so it cannot be removed. To prevent the RE24 build from ever using the system openldap libs in /lib64, the solution seems to be to allow rpath in the RE24 build so the RE24 apps always use the RE24 libs in the $rpath location. Better solutions always welcome :-)
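A quick way to check whether rpath survived the build; a sketch, with the binary path as an example from my build tree:

  # an RPATH/RUNPATH entry pointing at the RE24 library directory means the
  # RE24 libs win; no entry means the system /lib64 copies are used at runtime
  readelf -d clients/tools/.libs/lt-ldapsearch | grep -E 'RPATH|RUNPATH'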
Hope this helps someone bumping into the same issue.
Regards, Patrick
--On Friday, February 22, 2013 1:39 PM +0100 Patrick Lists <openldap-list@puzzled.xs4all.nl> wrote:
RHEL6/CentOS6's provided openldap RPM is required by a zillion packages, so it cannot be removed. To prevent the RE24 build from ever using the system openldap libs in /lib64, the solution seems to be to allow rpath in the RE24 build so the RE24 apps always use the RE24 libs in the $rpath location. Better solutions always welcome :-)
Hi Patrick,
It is generally a bad idea to use Linux distro-provided OpenLDAP packages, for a variety of reasons. The best approach is to build OpenLDAP into your own location for server & client packages, so you are isolated from the general junk shipped by the distro.
A good example of doing this the right way is the ltb project:
http://ltb-project.org/wiki/download#openldap
You can of course use their spec files for your local build. ;)
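If you roll your own instead, a minimal sketch of an isolated build (the prefix is an example, not the ltb project's layout):

  # install under a private prefix so nothing collides with the distro RPMs
  ./configure --prefix=/usr/local/openldap24
  make depend
  make
  make install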
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
Hi Quanah,
On 02/22/2013 11:56 PM, Quanah Gibson-Mount wrote: [snip]
Hi Patrick,
It is generally a bad idea to use Linux distro-provided OpenLDAP packages, for a variety of reasons.
Heh, I figured that out the hard way after I could not get my cn=config setup going with the distro-provided openldap 2.4.23 RPM.
The best approach is to build OpenLDAP into your own location for server & client packages, so you are isolated from the general junk shipped by the distro.
Yup, that's what I have done now. The RE24 package is installed into /usr/local and everything has "24" appended to its name, so it's clearly distinct and does not interfere with the distro-provided openldap packages.
A good example of doing this the right way is the ltb project:
Thanks for the tip.
You can of course use their spec files for your local build. ;)
I had a look, and there are quite extensive spec files for multiple packages. To avoid biting off more than I can chew, I chose to create a simpler package and start with that. My simple setup seems to work fine now, thanks to some much-appreciated help from list members.
If I can be of further assistance with testing I'll be happy to help where my limited LDAP knowledge allows.
Regards, Patrick
--On Thursday, February 21, 2013 03:24:01 PM -0800 Quanah Gibson-Mount <quanah@zimbra.com> wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
Thanks!
--Quanah
Built Debian packages, installed on a master, and loaded the database. This is on Debian testing (wheezy). The master looks fine. This is using back-mdb. Loading with mdb is much faster than with the old hdb backend. I will get some real numbers in a day or two.
I am having issues with the slave. Packages install fine. When I start to load data with slapadd, the eta starts at 5 minutes and just keeps increasing. I killed it after about 15 minutes, when the eta hit 25 minutes and was still climbing. I thought it must be something in my configuration and tried pulling the exact configuration I used to load the master onto the slave hardware, but got the same results. More debugging to do to figure out what is up, but it does not look like an OpenLDAP problem. When the load starts to slow down, the progress display freezes for 30-60 seconds (a guess, but a while anyway) and then picks up again.
Bill
--On Thursday, February 21, 2013 07:01:14 PM -0800 Bill MacAllister <whm@stanford.edu> wrote:
--On Thursday, February 21, 2013 03:24:01 PM -0800 Quanah Gibson-Mount <quanah@zimbra.com> wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
Thanks!
--Quanah
Built Debian packages, installed on a master, and loaded the database. This is on Debian testing (wheezy). The master looks fine. This is using back-mdb. Loading with mdb is much faster than with the old hdb backend. I will get some real numbers in a day or two.
I am having issues with the slave. Packages install fine. When I start to load data with slapadd, the eta starts at 5 minutes and just keeps increasing. I killed it after about 15 minutes, when the eta hit 25 minutes and was still climbing. I thought it must be something in my configuration and tried pulling the exact configuration I used to load the master onto the slave hardware, but got the same results. More debugging to do to figure out what is up, but it does not look like an OpenLDAP problem. When the load starts to slow down, the progress display freezes for 30-60 seconds (a guess, but a while anyway) and then picks up again.
The partition holding the mdb database was initialized as ext4 and mounted with ACLs turned on. I reinitialized the partition as ext3 and mounted it without ACL support. The load on the slave looks more reasonable now. Unfortunately, it did not finish.
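For reference, the reinitialization amounts to roughly the following (device and mount point are examples, not my actual names):

  # recreate the database partition as ext3, mounted without ACL support
  umount /var/lib/ldap
  mkfs -t ext3 /dev/sdb1
  mount -t ext3 -o noacl /dev/sdb1 /var/lib/ldap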
slapadd -q -F /etc/ldap/slapd.d -b dc=stanford,dc=edu -l ./db-ldif.0
######### 46.37% eta 07m12s elapsed 06m13s spd 1.9 M/s
51272d39 => mdb_idl_insert_keys: c_put id failed: MDB_TXN_FULL: Transaction has too many dirty pages - transaction too big (-30788)
51272d39 => mdb_tool_entry_put: index_entry_add failed: err=80
51272d39 => mdb_tool_entry_put: txn_aborted! Internal error (80)
slapadd: could not add entry dn="suRegID=a9a6183ae77011d183712436000baa77,cn=people,dc=stanford,dc=edu" (line=17741844): txn_aborted! Internal error (80)
######### 46.43% eta 07m11s elapsed 06m13s spd 1.9 M/s
Closing DB...
Not sure what to do about that. Any suggestions?
Bill
Bill MacAllister wrote:
--On Thursday, February 21, 2013 07:01:14 PM -0800 Bill MacAllister <whm@stanford.edu> wrote:
--On Thursday, February 21, 2013 03:24:01 PM -0800 Quanah Gibson-Mount <quanah@zimbra.com> wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
Thanks!
--Quanah
Built Debian packages, installed on a master, and loaded the database. This is on Debian testing (wheezy). The master looks fine. This is using back-mdb. Loading with mdb is much faster than with the old hdb backend. I will get some real numbers in a day or two.
I am having issues with the slave. Packages install fine. When I start to load data with slapadd, the eta starts at 5 minutes and just keeps increasing. I killed it after about 15 minutes, when the eta hit 25 minutes and was still climbing. I thought it must be something in my configuration and tried pulling the exact configuration I used to load the master onto the slave hardware, but got the same results. More debugging to do to figure out what is up, but it does not look like an OpenLDAP problem. When the load starts to slow down, the progress display freezes for 30-60 seconds (a guess, but a while anyway) and then picks up again.
The partition holding the mdb database was initialized as ext4 and mounted with ACLs turned on. I reinitialized the partition as ext3 and mounted it without ACL support. The load on the slave looks more reasonable now. Unfortunately, it did not finish.
slapadd -q -F /etc/ldap/slapd.d -b dc=stanford,dc=edu -l ./db-ldif.0
######### 46.37% eta 07m12s elapsed 06m13s spd 1.9 M/s
51272d39 => mdb_idl_insert_keys: c_put id failed: MDB_TXN_FULL: Transaction has too many dirty pages - transaction too big (-30788)
51272d39 => mdb_tool_entry_put: index_entry_add failed: err=80
51272d39 => mdb_tool_entry_put: txn_aborted! Internal error (80)
slapadd: could not add entry dn="suRegID=a9a6183ae77011d183712436000baa77,cn=people,dc=stanford,dc=edu" (line=17741844): txn_aborted! Internal error (80)
######### 46.43% eta 07m11s elapsed 06m13s spd 1.9 M/s
Closing DB...
Not sure what to do about that. Any suggestions?
That's pretty odd, if this same LDIF already loaded successfully on another machine. Generally I'd interpret this error to mean you have a lot of indexing defined and some entry is generating a lot of index values. But that doesn't seem true if you're using the same config as another machine.
You can probably get past this error by tweaking back-mdb/tools.c, but again, it makes no sense that you run into this problem on one machine but not on another with identical config. (In tool_entry_open, change writes_per_commit from its current value of 1000 to 500.)
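For reference, that tweak amounts to a one-liner before rebuilding; this assumes the assignment literally reads "writes_per_commit = 1000" in tool_entry_open, so check the source first:

  # halve the per-transaction write batch in back-mdb's slapadd path
  sed -i 's/writes_per_commit = 1000/writes_per_commit = 500/' \
      servers/slapd/back-mdb/tools.c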
--On Friday, February 22, 2013 4:01 AM -0800 Howard Chu <hyc@symas.com> wrote:
That's pretty odd, if this same LDIF already loaded successfully on another machine. Generally I'd interpret this error to mean you have a lot of indexing defined and some entry is generating a lot of index values. But that doesn't seem true if you're using the same config as another machine.
Stanford's replicas have a few hundred more indices than the master.
--Quanah
--
Quanah Gibson-Mount
Sr. Member of Technical Staff
Zimbra, Inc
A Division of VMware, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
--On February 22, 2013 4:02:24 AM -0800 Quanah Gibson-Mount <quanah@zimbra.com> wrote:
--On Friday, February 22, 2013 4:01 AM -0800 Howard Chu <hyc@symas.com> wrote:
That's pretty odd, if this same LDIF already loaded successfully on another machine. Generally I'd interpret this error to mean you have a lot of indexing defined and some entry is generating a lot of index values. But that doesn't seem true if you're using the same config as another machine.
Stanford's replicas have a few hundred more indices than the master.
--Quanah
The master is not end user accessible and has enough indexes to support the processes that update the directory. Currently the master has 20 indexes. The slaves support a wide variety of applications and have 119 indexes.
Note: in one of my iterations I also tested loading into back-hdb, which worked just fine.
Bill
--On February 22, 2013 4:01:00 AM -0800 Howard Chu <hyc@symas.com> wrote:
slapadd -q -F /etc/ldap/slapd.d -b dc=stanford,dc=edu -l ./db-ldif.0
######### 46.37% eta 07m12s elapsed 06m13s spd 1.9 M/s
51272d39 => mdb_idl_insert_keys: c_put id failed: MDB_TXN_FULL: Transaction has too many dirty pages - transaction too big (-30788)
51272d39 => mdb_tool_entry_put: index_entry_add failed: err=80
51272d39 => mdb_tool_entry_put: txn_aborted! Internal error (80)
slapadd: could not add entry dn="suRegID=a9a6183ae77011d183712436000baa77,cn=people,dc=stanford,dc=edu" (line=17741844): txn_aborted! Internal error (80)
######### 46.43% eta 07m11s elapsed 06m13s spd 1.9 M/s
Closing DB...
Not sure what to do about that. Any suggestions?
That's pretty odd, if this same LDIF already loaded successfully on another machine. Generally I'd interpret this error to mean you have a lot of indexing defined and some entry is generating a lot of index values. But that doesn't seem true if you're using the same config as another machine.
You can probably get past this error by tweaking back-mdb/tools.c, but again, it makes no sense that you run into this problem on one machine but not on another with identical config. (In tool_entry_open, change writes_per_commit from its current value of 1000 to 500.)
Rebuilt the Debian packages and the load now completes. I am able to start the server and query it. Maybe writes_per_commit should be exposed as a slapadd command-line option?
The server shutdown is really, really slow. I mentioned it to Quanah and he thought it was related to the number of indexes. I am still having some issues with the init script, but I need to do some more testing to understand what is going on.
Bill
--On Friday, February 22, 2013 02:44:33 PM -0800 Bill MacAllister <whm@stanford.edu> wrote:
The server shutdown is really, really slow. I mentioned it to Quanah and he thought it was related to the number of indexes. I am still having some issues with the init script, but I need to do some more testing to understand what is going on.
Turns out the startup issue was caused by the directory that ldapi:/// writes its socket file into having disappeared. Creating the directory again appears to have cleared up the shutdown slowness as well.
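For anyone hitting the same thing, recreating the directory looks roughly like this (the path and ownership are examples; match the ldapi:/// URL in your slapd arguments and the user slapd runs as):

  # recreate the directory slapd writes its ldapi socket into
  install -d -m 0755 -o openldap -g openldap /var/run/slapd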
Bill
On 02/22/2013 12:24 AM, Quanah Gibson-Mount wrote:
If you know how to build OpenLDAP manually, and would like to participate in testing the next set of code for the 2.4.34 release, please do so.
Generally, get the code for RE24:
Configure & build.
Execute the test suite (via make test) after it is built.
AFAICT, all tests were successful on an up-to-date CentOS 6.3 x86_64 box when RE24 was built manually with ./configure && make depend && make && make test.
Regards, Patrick