For a project that requires a large user authentication database, we are currently using OpenLDAP with a BDB backend. We have about 150K users in the tree and all works well. Authentication and new user creation are fast, and we are happy.
But when we try to get statistical data from the tree, we run into the limitations of LDAP: trying to find all users that registered last month, using a filter with two dates, is just too slow. It takes minutes to come back with a result.
To get around this limitation, we want to experiment with a PSQL backend so we can do some comparative testing.
(If any of you have a way of allowing us to interrogate our BDB backend with SQL-like queries that are relatively fast, then please let me know.)
Our test environment:
OpenLDAP 2.4.16 with a Postgres backend. I have loaded the core schema in slapd.conf, as well as our custom schema for our users.
The only ACL in the conf is: access to * by * write
Our tree looks like this and I have loaded the data tables and meta-data tables:
dc=example,dc=com
ou=people,dc=example,dc=com
cn=user1,dc=example,dc=com
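For context, the back-sql part of slapd.conf in this kind of setup looks roughly like the sketch below; the rootdn, ODBC data source name and credentials are placeholders, not taken from this setup, and slapd-sql(5) documents the remaining directives:

database  sql
suffix    "dc=example,dc=com"
rootdn    "cn=admin,dc=example,dc=com"
rootpw    secret
# dbname is the ODBC data source name (DSN), not the Postgres database name
dbname    PostgreSQL
dbuser    ldap
dbpasswd  ldappw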
The setup is working about 60%.
With openLdapAdmin, I can see the tree and I can add users.
What I cannot do is add an OU. It gives me:
LDAP said: Server is unwilling to perform
Error number: 0x35 (LDAP_UNWILLING_TO_PERFORM)
Description: The LDAP server refused to perform the operation.
If I got this on our custom schema, I could explain it by not having the right meta-data and procedures loaded. But as organizationalUnit is part of the core schema, am I right in only adding the meta-data for OU to ldap_attr_mappings, without add or delete procedures?
I have looked at the log files and outputs, but I cannot figure out what is going wrong and why it is not accepting any new OU.
Any help is appreciated.
On Mon, Apr 20, 2009 at 03:43:00PM +0200, Marcel Berteler wrote:
But when we try to get statistical data from the tree, we run into the limitations of LDAP: trying to find all users that registered last month, using a filter with two dates, is just too slow. It takes minutes to come back with a result.
(If any of you have a way of allowing us to interrogate our BDB backend with SQL-like queries that are relatively fast, then please let me know.)
I assume you are searching on createTimestamp - something like
(&(createTimestamp>=200903010000Z)(createTimestamp<=200904010000Z))
Have you indexed this attribute?
LDAP won't do generic relational operations, but it should be able to answer that sort of query very well.
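In slapd.conf terms that would be roughly the following sketch (assuming back-bdb/hdb; run slapindex(8) afterwards so existing entries are indexed, and note that in back-bdb/hdb the >= and <= filters are served from the equality index, as discussed later in the thread):

index createTimestamp eq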
Andrew
Andrew Findlay wrote the following on 2009/04/20 16:58:
(If any of you have a way of allowing us to interrogate our BDB backend with SQL-like queries that are relatively fast, then please let me know.)
I assume you are searching on createTimestamp - something like
(&(createTimestamp>=200903010000Z)(createTimestamp<=200904010000Z))
That is indeed how we use it.
Have you indexed this attribute?
I will double check, but even if the 'slowness' is taken away, I still run into the return limit that we have set. Increasing this limit will allow me to retrieve more data, but it also opens the server up to DoS problems. Is there a way of defining a limit that is user-based, so that the user running the stats searches can have a higher limit than other users?
LDAP won't do generic relational operations, but it should be able to answer that sort of query very well.
Andrew
On 21.04.2009 11:41, Marcel Berteler wrote:
Andrew Findlay wrote the following on 2009/04/20 16:58:
(If any of you have a way of allowing us to interrogate our BDB backend with SQL-like queries that are relatively fast, then please let me know.)
I assume you are searching on createTimestamp - something like
(&(createTimestamp>=200903010000Z)(createTimestamp<=200904010000Z))
That is indeed how we use it.
Have you indexed this attribute?
I will double check, but even if the 'slowness' is taken away, I still run into the return limit that we have set. Increasing this limit will allow me to retrieve more data, but it also opens the server up to DoS problems. Is there a way of defining a limit that is user-based, so that the user running the stats searches can have a higher limit than other users?
Yes, see the "limits" keyword in slapd.conf(5). You need something like:
limits dn.exact=<your stats user's DN> size=unlimited
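As a concrete sketch, with a placeholder DN for whatever account runs the statistics searches:

limits dn.exact="cn=stats,ou=people,dc=example,dc=com" size=unlimited time=unlimited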
Regards, Jonathan
We should probably look at this after upgrading to the latest version. We currently (still) use 2.2.6 and according to the online manual, the limits seem to be generic.
http://www.openldap.org/doc/admin22/slapdconfig.html#Configuration%20File%20...
marcel
Jonathan Clarke wrote the following on 2009/04/21 12:01:
Yes, see the "limits" keyword in slapd.conf(5). You need something like:
limits dn.exact=<your stats user's DN> size=unlimited
Regards, Jonathan
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Andrew Findlay wrote the following on 2009/04/20 16:58:
[...]
I will double check, but even if the 'slowness' is taken away, I still run into the return limit that we have set. Increasing this limit will allow me to retrieve more data, but it also opens the server up to DoS problems. Is there a way of defining a limit that is user-based, so that the user running the stats searches can have a higher limit than other users?
man slapd.conf(5), limits.
-Dieter
Should this directive work?
index regDate eq
Where regDate is defined as follows:
attributetype ( 1.3.6.1.4.1.22371.1.1 NAME 'regDate'
    DESC 'Registration Date'
    EQUALITY generalizedTimeMatch
    ORDERING generalizedTimeOrderingMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.24
    SINGLE-VALUE )
I cannot seem to find an index type that specifically mentions time matches.
Marcel
(&(createTimestamp>=200903010000Z)(createTimestamp<=200904010000Z))
Have you indexed this attribute?
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Should this directive work?
index regDate eq
Where regDate is defined as follows:
attributetype ( 1.3.6.1.4.1.22371.1.1 NAME 'regDate' DESC 'Registration Date' EQUALITY generalizedTimeMatch ORDERING generalizedTimeOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
I cannot seem to find an index type that specifically mentions time matches.
RFC-4517, section 3.3.13
-Dieter
Dieter Kluenter wrote the following on 2009/04/21 13:35:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Should this directive work?
index regDate eq
Where regDate is defined as follows:
attributetype ( 1.3.6.1.4.1.22371.1.1 NAME 'regDate' DESC 'Registration Date' EQUALITY generalizedTimeMatch ORDERING generalizedTimeOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
I cannot seem to find an index type that specifically mentions time matches.
RFC-4517, section 3.3.13
-Dieter
Dieter, maybe it's me, but that section does not clarify what indexing directive I can use to speed up searching on dates. It specifies the format and content of Generalized Time.
Marcel
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Dieter Kluenter wrote the following on 2009/04/21 13:35:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Should this directive work?
index regDate eq
Where regDate is defined as follows:
attributetype ( 1.3.6.1.4.1.22371.1.1 NAME 'regDate' DESC 'Registration Date' EQUALITY generalizedTimeMatch ORDERING generalizedTimeOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
I cannot seem to find an index type that specifically mentions time matches.
RFC-4517, section 3.3.13
-Dieter
Dieter, maybe it's me, but that section does not clarify what indexing directive I can use to speed up searching on dates. It specifies the format and content of Generalized Time.
Just a few lines down, see sections 4.2.16 and 4.2.17; an equality index should be sufficient.
-Dieter
Dieter Kluenter wrote:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Dieter Kluenter wrote the following on 2009/04/21 13:35:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Should this directive work?
index regDate eq
Where regDate is defined as follows:
attributetype ( 1.3.6.1.4.1.22371.1.1 NAME 'regDate' DESC 'Registration Date' EQUALITY generalizedTimeMatch ORDERING generalizedTimeOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
I cannot seem to find an index type that specifically mentions time matches.
RFC-4517, section 3.3.13
Dieter, maybe it's me, but that section does not clarify what indexing directive I can use to speed up searching on dates. It specifies the format and content of Generalized Time.
Just a few lines down, see sections 4.2.16 and 4.2.17; an equality index should be sufficient.
Server-side database indexing is implementation-specific and therefore RFC 4517 does not say anything about indexing.
For indexing searches with <= or >=, IMHO some kind of ordering index would be needed for the ORDERING matching rule. Maybe one of the developers could elaborate on how ordering matching works within slapd and whether an equality index already speeds this up. When doing it with back-sql, the PostgreSQL indexing configuration is probably relevant as well.
Ciao, Michael.
Michael Ströder michael@stroeder.com writes:
Dieter Kluenter wrote:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Dieter Kluenter wrote the following on 2009/04/21 13:35:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
[...]
Just a few lines down, see sections 4.2.16 and 4.2.17; an equality index should be sufficient.
Server-side database indexing is implementation-specific and therefore RFC 4517 does not say anything about indexing.
For indexing searches with <= or >=, IMHO some kind of ordering index would be needed for the ORDERING matching rule. Maybe one of the developers could elaborate on how ordering matching works within slapd and whether an equality index already speeds this up. When doing it with back-sql, the PostgreSQL indexing configuration is probably relevant as well.
A 'greater than or equal to' or 'less than or equal to' filter is in fact operated on an equality index database.
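So for the regDate attribute discussed above, a sketch of the relevant pieces would be (reusing the filter shape from earlier in the thread; run slapindex(8) after adding the directive so existing entries get indexed):

in slapd.conf:
index regDate eq

and the search filter:
(&(regDate>=200903010000Z)(regDate<=200904010000Z))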
-Dieter
Dieter Kluenter wrote:
Michael Ströder michael@stroeder.com writes:
Dieter Kluenter wrote:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
Dieter Kluenter wrote the following on 2009/04/21 13:35:
Marcel Berteler marcel.berteler@bdsolutions.co.za writes:
[...]
Just a few lines down, see sections 4.2.16 and 4.2.17; an equality index should be sufficient.
Server-side database indexing is implementation-specific and therefore RFC 4517 does not say anything about indexing.
For indexing searches with <= or >=, IMHO some kind of ordering index would be needed for the ORDERING matching rule. Maybe one of the developers could elaborate on how ordering matching works within slapd and whether an equality index already speeds this up. When doing it with back-sql, the PostgreSQL indexing configuration is probably relevant as well.
A 'greater than or equal to' or 'less than or equal to' filter is in fact operated on an equality index database.
That's true in back-bdb/hdb. For back-sql, everything depends on the underlying SQL database.
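For the back-sql experiment this means creating the index on the Postgres side, on whatever column regDate is mapped to. A sketch with placeholder table and column names:

-- table and column names are placeholders for whatever regDate maps to in back-sql
CREATE INDEX persons_reg_date_idx ON persons (reg_date);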
Marcel Berteler wrote:
For a project that requires a large user authentication database, we are currently using OpenLDAP with a BDB backend. We have about 150K users in the tree and all works well. Authentication and new user creation are fast, and we are happy.
But when we try to get statistical data from the tree, we run into the limitations of LDAP: trying to find all users that registered last month, using a filter with two dates, is just too slow. It takes minutes to come back with a result.
To get around this limitation, we want to experiment with a PSQL backend so we can do some comparative testing.
(If any of you have a way of allowing us to interrogate our BDB backend with SQL-like queries that are relatively fast, then please let me know.)
Our test environment:
OpenLDAP 2.4.16 with a Postgres backend. I have loaded the core schema in slapd.conf, as well as our custom schema for our users.
The only ACL in the conf is: access to * by * write
Our tree looks like this and I have loaded the data tables and meta-data tables:
dc=example,dc=com
ou=people,dc=example,dc=com
cn=user1,dc=example,dc=com
The setup is working about 60%.
With openLdapAdmin, I can see the tree and I can add users.
What I cannot do is add an OU. It gives me:
LDAP said: Server is unwilling to perform
Error number: 0x35 (LDAP_UNWILLING_TO_PERFORM)
Description: The LDAP server refused to perform the operation.
If I got this on our custom schema, I could explain it by not having the right meta-data and procedures loaded. But as organizationalUnit is part of the core schema, am I right in only adding the meta-data for OU to ldap_attr_mappings, without add or delete procedures?
No, you're not. There is no core schema mapping in back-sql; everything needs to be mapped by you, including core schema items. In fact, back-sql's logic has no notion of attributes per se, but only of attributes in some relationship with (structural) objectClasses, according to the mappings you define.
If you mapped, say, "cn" for "person", don't expect to be able to use "cn" in, say, "inetOrgPerson" or "device". You need a separate "cn" mapping for each objectClass that needs to use it.
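For illustration, the per-objectClass mappings then look roughly like this sketch. The column layout follows the back-sql sample metadata shipped with recent OpenLDAP releases (older layouts, such as the one in the HOWTO linked later in the thread, may differ slightly); the oc_map_id values, table names and columns are placeholders, not taken from this thread:

-- one 'cn' mapping per objectClass; ids, tables and columns are placeholders
insert into ldap_attr_mappings
  (id,oc_map_id,name,sel_expr,from_tbls,join_where,
   add_proc,delete_proc,param_order,expect_return)
values
  (10,2,'cn','persons.name','persons',NULL,NULL,NULL,3,0);
insert into ldap_attr_mappings
  (id,oc_map_id,name,sel_expr,from_tbls,join_where,
   add_proc,delete_proc,param_order,expect_return)
values
  (11,3,'cn','devices.name','devices',NULL,NULL,NULL,3,0);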
I have looked at the log files and outputs, but I cannot figure out what is going wrong and why it is not accepting any new OU.
Maybe if you let others look at your logs, others can figure it out for you.
Let me point out in advance that, since you're using OpenLDAP 2.2.6, there is no chance any issue will get fixed.
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
Office: +39 02 23998309
Mobile: +39 333 4963172
Fax: +39 0382 476497
Email: ando@sys-net.it
Pierangelo Masarati wrote the following on 2009/04/22 13:10:
Marcel Berteler wrote:
Our test environment:
OpenLDAP 2.4.16 with a Postgres backend. I have loaded the core schema in slapd.conf, as well as our custom schema for our users.
The only ACL in the conf is: access to * by * write
Our tree looks like this and I have loaded the data tables and meta-data tables:
dc=example,dc=com
ou=people,dc=example,dc=com
cn=user1,dc=example,dc=com
The setup is working about 60%.
With openLdapAdmin, I can see the tree and I can add users.
What I cannot do is add an OU. It gives me:
LDAP said: Server is unwilling to perform
Error number: 0x35 (LDAP_UNWILLING_TO_PERFORM)
Description: The LDAP server refused to perform the operation.
If I got this on our custom schema, I could explain it by not having the right meta-data and procedures loaded. But as organizationalUnit is part of the core schema, am I right in only adding the meta-data for OU to ldap_attr_mappings, without add or delete procedures?
No, you're not. There is no core schema mapping in back-sql; everything needs to be mapped by you, including core schema items. In fact, back-sql's logic has no notion of attributes per se, but only of attributes in some relationship with (structural) objectClasses, according to the mappings you define.
If you mapped, say, "cn" for "person", don't expect to be able to use "cn" in, say, "inetOrgPerson" or "device". You need a separate "cn" mapping for each objectClass that needs to use it.
What I do not understand, then, is that this [1] example does not define functions for editing and creating OUs. Does that mean that, if you do not define the related functions, the only way of adding an OU is by adding it directly to the SQL database? Should I also define functions to create and edit an OU if I want to edit or delete an OU via LDAP?
[1] : http://www.darold.net/projects/ldap_pg/HOWTO/x178.html
-- The organizationalUnit objectClass
insert into ldap_oc_mappings
  (id,name,keytbl,keycol,create_proc,delete_proc,expect_return)
values
  (2,'organizationalUnit','organizational_unit','id',NULL,NULL,0);
I have looked at the log files and outputs but I can not figure out what is going wrong and why it is not accepting any new OU
Maybe if you let others look at your logs, others can figure it out for you.
What log level do you recommend and what specific part of the log files are of use to 'debug' this?
Let me point out in advance that, since you're using OpenLDAP 2.2.6, there is no chance any issue will get fixed.
On our test box we don't use 2.2.6, but 2.4.16.
Marcel
Marcel Berteler wrote:
What I do not understand, then, is that this [1] example does not define functions for editing and creating OUs. Does that mean that, if you do not define the related functions, the only way of adding an OU is by adding it directly to the SQL database?
Yes.
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
Office: +39 02 23998309
Mobile: +39 333 4963172
Fax: +39 0382 476497
Email: ando@sys-net.it
Thanks for clarifying this. After adding the functions and referring to them in the oc_mappings table, this works.
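For the record, what this amounts to is roughly the sketch below. The function names, the assumed serial id and ou columns in organizational_unit, and the create_proc/delete_proc calling conventions are assumptions modelled on the person example in the HOWTO and should be checked against slapd-sql(5) and the sample metadata:

-- hypothetical helper functions; column names and conventions are assumptions
CREATE FUNCTION create_org_unit() RETURNS int AS '
  INSERT INTO organizational_unit (ou) VALUES ('''');
  SELECT max(id) FROM organizational_unit;
' LANGUAGE 'sql';

CREATE FUNCTION delete_org_unit(int) RETURNS int AS '
  DELETE FROM organizational_unit WHERE id = $1;
  SELECT $1;
' LANGUAGE 'sql';

-- then point the objectClass mapping at them
UPDATE ldap_oc_mappings
   SET create_proc = 'SELECT create_org_unit()',
       delete_proc = 'SELECT delete_org_unit(?)'
 WHERE name = 'organizationalUnit';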
My custom schema is still problematic, but at least I am one step further.
M
Pierangelo Masarati wrote the following on 2009/04/23 12:40:
Marcel Berteler wrote:
What I do not understand, then, is that this [1] example does not define functions for editing and creating OUs. Does that mean that, if you do not define the related functions, the only way of adding an OU is by adding it directly to the SQL database?
Yes.
p.