hyc(a)symas.com wrote:
> I see the same result on FreeBSD 6.2. It appears to be because libfetch was
> detected by configure and used here, and libfetch failed to open the FILE URLs
> that load the necessary schema.
The test script uses relative URLs (RFC 1808), which our liblutil code supports,
but apparently libfetch only knows how to parse absolute URLs (RFC 1738) - see
their CVS
http://www.freebsd.org/cgi/cvsweb.cgi/src/lib/libfetch/fetch.c?rev=1.38
That seems to me to be a deficiency in libfetch, but I guess we can rework the
test scripts to use absolute FILE URLs here.
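For illustration, here is a minimal sketch (in Python, not the shell the test
suite actually uses; the paths are made up) of what the rework amounts to:
resolving the relative form the scripts currently use against a base URL, per
RFC 1808-style resolution, so that libfetch only ever sees an absolute FILE URL:

```python
# Hypothetical example: libfetch rejects relative URLs, so resolve them
# to absolute file:// URLs before handing them over. The base and relative
# paths below are invented for illustration.
from urllib.parse import urljoin

base = "file:///home/build/openldap/tests/"  # assumed test directory
relative = "data/slapd.ldif"                 # relative form the scripts use

absolute = urljoin(base, relative)
print(absolute)  # file:///home/build/openldap/tests/data/slapd.ldif
```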
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
kurt(a)OpenLDAP.org wrote:
> Full_Name: Kurt Zeilenga
> Version: HEAD
> OS: MacOS X 10.4.8/FreeBSD 6.2
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (24.180.46.200)
> Submitted by: kurt
>
>
> On both MacOS X and FreeBSD, I see test047 fail trying to configure the server
> via cn=config. Both built using ./configure --enable-backends
> --enable-overlays, nothing fancy.
>
> Cleaning up test run directory leftover from previous run.
> Running ./scripts/test049-sync-config...
> running defines.sh
> Starting producer slapd on TCP/IP port 9011...
> Using ldapsearch to check that producer slapd is running...
> Inserting syncprov overlay on producer...
> Starting consumer slapd on TCP/IP port 9012...
> Using ldapsearch to check that consumer slapd is running...
> Configuring syncrepl on consumer...
> Waiting 10 seconds for syncrepl to receive changes...
> Using ldapsearch to check that syncrepl received config changes...
> Adding schema and databases on producer...
I see the same result on FreeBSD 6.2. It appears to be because libfetch was
detected by configure and used here, and libfetch failed to open the FILE URLs
that load the necessary schema.
> ldapadd failed for database config (21)!
>
> where test.out contains:
> ldap_add: Invalid syntax (21)
> additional info: olcSuffix: value #0 invalid per syntax
>
> and the LDIF was:
> dn: olcDatabase={1}bdb,cn=config
> objectClass: olcDatabaseConfig
> objectClass: olcbdbConfig
> olcDatabase: {1}bdb
> olcSuffix: dc=example,dc=com
> olcDbDirectory: ./db
> olcRootDN: cn=Manager,dc=example,dc=com
> olcRootPW: secret
> olcSyncRepl: rid=002 provider=ldap://localhost:9011/
> binddn="cn=Manager,dc=example,dc=com" bindmethod=simple
> credentials=secret searchbase="dc=example,dc=com" type=refreshOnly
> interval=00:00:00:10
> olcUpdateRef: ldap://localhost:9011/
>
> dn: olcOverlay=syncprov,olcDatabase={1}bdb,cn=config
> changetype: add
> objectClass: olcOverlayConfig
> objectClass: olcSyncProvConfig
> olcOverlay: syncprov
>
>
>
>
ando(a)sys-net.it wrote:
Ralf,
This patch should now work as expected.
<http://www.sys-net.it/~ando/Download/pcache_sizelimit.patch>
Does it harmonize with your improvements? Or, if you prefer, first
commit your changes, and I'll harmonize the rest.
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
---------------------------------------
Office: +39 02 23998309
Mobile: +39 333 4963172
Email: pierangelo.masarati(a)sys-net.it
---------------------------------------
Howard Chu wrote:
> If the client provides a sizelimit, save that away. Forward the request
> with no sizelimit, so the cache can see everything.
>
> If the forwarded request hits a sizelimit, I think we can still use the
> result. While there's no guarantee that repeated attempts to search the
> remote server would return the exact same set of entries, there's also
> no harm done if the cache does so.
>
> But if the result exceeds the cache's sizelimit, the result set must be
> uncached, same as now.
Well, I don't quite agree with this. In fact, if we know in advance
that the cache can only hold up to <entry_limit> entries, removing the
client-requested size limit might lead to a waste of resources, because a
search could then potentially return many more entries than <entry_limit>,
which would be neither cached nor returned to the client, thus defeating
the purpose of caching. So if the client requests a sizelimit <client_sl>,
we should:
if <client_sl> > <entry_limit>, leave it in place. If fewer than
<entry_limit> entries result, fine; if fewer than <client_sl> entries
result, fine but don't cache; otherwise, return LDAP_SIZELIMIT_EXCEEDED.
if <client_sl> < <entry_limit>, set the size limit to <entry_limit>. If
fewer than <client_sl> entries result, fine; if fewer than <entry_limit>
entries result, return LDAP_SIZELIMIT_EXCEEDED, but keep caching; otherwise,
don't cache at all.
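To make sure we mean the same thing, the two cases above can be sketched as a
small decision function (a sketch of my reading of the proposal, not pcache
code; `client_sl` and `entry_limit` are the names from above, and `nentries`
is taken to be the number of entries the remote search matched):

```python
LDAP_SUCCESS = 0
LDAP_SIZELIMIT_EXCEEDED = 4

def pcache_sizelimit_policy(client_sl, entry_limit, nentries):
    """Return (result code for the client, whether to cache the result)."""
    if client_sl > entry_limit:
        # Leave the client's limit in place on the forwarded search.
        if nentries < entry_limit:
            return LDAP_SUCCESS, True       # fits in the cache
        if nentries < client_sl:
            return LDAP_SUCCESS, False      # fine for the client, too big to cache
        return LDAP_SIZELIMIT_EXCEEDED, False
    else:
        # Forward with the limit raised to entry_limit.
        if nentries <= client_sl:
            return LDAP_SUCCESS, True
        if nentries < entry_limit:
            # Client sees the limit hit, but the cache keeps everything.
            return LDAP_SIZELIMIT_EXCEEDED, True
        return LDAP_SIZELIMIT_EXCEEDED, False
```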
hyc(a)symas.com wrote:
> Can you spell out again what behavior you're aiming for?
>
> I think what makes sense so far is:
> If the client provides a sizelimit, save that away. Forward the request with no
> sizelimit, so the cache can see everything.
OK.
> If the forwarded request hits a sizelimit, I think we can still use the result.
In principle, yes. But not with the current query containment, because if
the sizelimit is hit with, say, mail=*(a)example.com, then current query
containment would indicate that mail=foo(a)example.com is contained in the
sizelimit-hit query; but if mail=foo(a)example.com exists and is not in the
cached subset of mail=*(a)example.com, nothing would be returned.
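A toy model makes the hazard concrete (a sketch only; none of this is pcache's
actual data structure, and the entry names are invented): when the cached
answer set for the broad query is truncated at the sizelimit, answering a
narrower contained query from the cache silently misses entries that do exist
on the remote server.

```python
# Toy model (not pcache code): the remote server matches more entries
# than the sizelimit lets through, so the cached answer set for the
# broad query mail=*@example.com is a truncated subset.
remote = [f"user{i}@example.com" for i in range(10)]  # all matching entries
sizelimit = 5
cached_broad = remote[:sizelimit]                     # what got cached

# A narrower query for one address is judged "contained" in the broad
# one, so it is answered from the cache...
target = "user7@example.com"                          # exists remotely
answer = [e for e in cached_broad if e == target]

print(answer)            # [] -- nothing returned
print(target in remote)  # True -- yet the entry exists
```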
> While there's no guarantee that repeated attempts to search the remote server
> would return the exact same set of entries, there's also no harm done if the
> cache does so.
>
> But if the result exceeds the cache's sizelimit, the result set must be
> uncached, same as now.
Yes, but the query would be cached (as failing), and this would make
query containment behave as above. I don't yet see how query containment
can be modified so that it doesn't treat a query as contained in one that
failed because of the sizelimit.
p.
ando(a)sys-net.it wrote:
> ando(a)sys-net.it wrote:
>
>> Please disregard; there are still a couple of issues that I wasn't aware
>> of and that need to be dealt with. Actually, caching those results may
>> cause further requests that look compatible with them to erroneously use
>> that dataset.
>
> The issue here is that a search exceeding the sizelimit, if not cached,
> would destroy the cacheability of all searches contained in it. Since a
> search that could exceed the sizelimit is likely to involve substrings or
> the like, something like (mail=*domain.com) exceeding the sizelimit would
> make all searches for (mail=foo(a)domain.com) non-cacheable. I fear there's
> little to do about this, unless we want to heavily rework the query
> containment code (and I don't feel like it, as I'm having a hard time
> understanding what that code does). So for now I have a minimal fix that
> handles the sizelimit correctly, but makes all cacheable queries contained
> in the offending one no longer answerable.
Can you spell out again what behavior you're aiming for?
I think what makes sense so far is:
If the client provides a sizelimit, save that away. Forward the request with no
sizelimit, so the cache can see everything.
If the forwarded request hits a sizelimit, I think we can still use the result.
While there's no guarantee that repeated attempts to search the remote server
would return the exact same set of entries, there's also no harm done if the
cache does so.
But if the result exceeds the cache's sizelimit, the result set must be
uncached, same as now.
ando(a)sys-net.it wrote:
> Please disregard; there are still a couple of issues that I wasn't aware
> of and that need to be dealt with. Actually, caching those results may
> cause further requests that look compatible with them to erroneously use
> that dataset.
The issue here is that a search exceeding the sizelimit, if not cached,
would destroy the cacheability of all searches contained in it. Since a
search that could exceed the sizelimit is likely to involve substrings or
the like, something like (mail=*domain.com) exceeding the sizelimit would
make all searches for (mail=foo(a)domain.com) non-cacheable. I fear there's
little to do about this, unless we want to heavily rework the query
containment code (and I don't feel like it, as I'm having a hard time
understanding what that code does). So for now I have a minimal fix that
handles the sizelimit correctly, but makes all cacheable queries contained
in the offending one no longer answerable.
p.
ando(a)sys-net.it wrote:
> here's a patch that implements the above.
>
> <http://www.sys-net.it/~ando/Download/pcache_sizelimit.patch>
Please disregard; there are still a couple of issues that I wasn't aware
of and that need to be dealt with. Actually, caching those results may
cause further requests that look compatible with them to erroneously use
that dataset.
p.
ando(a)sys-net.it wrote:
> I'm getting inclined towards handling this case like a regular search,
> only replacing LDAP_SUCCESS with LDAP_SIZELIMIT_EXCEEDED. In fact, it
> would be more appropriate to make this request not cacheable, because
> there's no guarantee the server would return the same entries for
> repeated searches, but for caching purposes this shouldn't really matter.
Ralf,
here's a patch that implements the above.
<http://www.sys-net.it/~ando/Download/pcache_sizelimit.patch>
Does it harmonize with your improvements?
Cheers, p.