(ITS#7703) mdb sync() issues vs. ACID
by h.b.furuseth@usit.uio.no
Full_Name: Hallvard B Furuseth
Version: LMDB_0.9.8
OS:
URL:
Submission from: (NULL) (81.191.45.35)
Submitted by: hallvard
mdb_env_sync() uses the wrong sync method when syncing a commit
written with a different MDB_WRITEMAP setting in another MDB_env.
Two processes with MDB_NOMETASYNC, each process doing every 2nd
write txn, will sync each other's meta pages. If they have
different MDB_WRITEMAP settings, every meta page gets synced with the
wrong method. This breaks the durability guarantee of ACID.
There is a similar problem if a process crashes after writing
the meta page but before sync succeeds, and mdb_env_open() then
resets the lockfile to refer to the unsynced commit. Robust
mutexes will introduce a similar problem without mdb_env_open.
I'm not volunteering to figure out how to do this right. E.g., how do
fsync/msync/FlushFileBuffers work on various OSes when the file
descriptor or memory map is read-only? Do we need to set a "need to
sync" flag in the lockfile in that case, for the first writer or
write txn to obey?
Another fix: disable this scenario. Store the MDB_WRITEMAP setting
in the lockfile when resetting it, even with MDB_RDONLY. Obey that
flag, rather than the writemap flag passed to mdb_env_open(), when
not resetting the lockfile. However, a small program like mdb_stat
can then have a disproportionate effect on another process which
opens the env at the same time. Also, nested txns need to work with
MDB_WRITEMAP.
For the crash case above and robust mutexes:
Maybe mdb_env_open() should not modify me_txns->mti_txnid if it
refers to the oldest meta page. That way the possibly unsynced
commit will never be exposed unless the lockfile is removed.
But next write txn must then reset the "hidden" metapage and sync
before proceeding, similar to how mdb_env_write_meta() does at
failure. Otherwise removing the lockfile would expose a meta
page referring to data which may have been overwritten, e.g. by
an mdb_abort()ed commit.
Another variant would be to sync in mdb_env_open() when resetting
the lockfile, or maybe an MDB_RDONLY env must set a "sync needed"
flag.
(ITS#7702) hdb and mdb dereference aliases differently
by julien.combes@i-carre.net
Full_Name: Julien COMBES
Version: 2.4.36
OS: debian squeeze
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (212.23.175.188)
Hello,
With OpenLDAP 2.4.36, I found a case where aliases are dereferenced
differently between hdb and mdb. For a search with alias dereferencing on an
attribute that is not indexed (or on "*"), the mdb backend returns the entry
twice where the hdb backend returns it once. For example:
With a directory like this:
---------------------------------------------------------------------
dn: dc=test,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
dc: test
o: test
dn: ou=a,dc=test,dc=com
objectClass: top
objectClass: organizationalUnit
ou: a
dn: ou=b,ou=a,dc=test,dc=com
objectClass: top
objectClass: organizationalUnit
ou: b
dn: ou=c,ou=a,dc=test,dc=com
objectClass: top
objectClass: organizationalUnit
ou: c
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
objectClass: top
objectClass: person
cn: foo
sn: foo
dn: cn=bar,ou=c,ou=a,dc=test,dc=com
objectClass: top
objectClass: alias
objectClass: extensibleObject
aliasedObjectName: cn=foo,ou=b,ou=a,dc=test,dc=com
cn: bar
---------------------------------------------------------------------
I get results like these:
-> Search on mdb, cn not indexed :
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=foo dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=* dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
-> Search on mdb, cn indexed eq,sub :
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=foo dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=* dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
-> Search on hdb, cn not indexed :
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=foo dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=* dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
-> Search on hdb, cn indexed eq,sub :
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=foo dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
$ ldapsearch -x -LLL -b "ou=a,dc=test,dc=com" cn=* dn -a always
dn: cn=foo,ou=b,ou=a,dc=test,dc=com
Regards,
Julien COMBES
P.S: I first posted this message as a comment on ITS#7577 on 25 Jul 2013.
But as ITS#7577 is tagged closed and has had no answer since then, I decided
to repost it as a new report.
(ITS#7701) Deletion triggers SEGV on next access of existing cursor
by dw@botanicus.net
Full_Name: David Wilson
Version: 919a0f5b54dad3acf2f84e7993bb00e7aa098037
OS: Linux
URL: http://www.h1.botanicus.net/20130919-lmdb.trace.bz2
Submission from: (NULL) (178.238.153.20)
In some circumstances LMDB will segfault in mdb_cursor_get() on a cursor that
existed prior to a deletion that just occurred. It is apparently related to the
page the cursor points to becoming mutated by the deletion.
In LMDB 0.9.8, the issue may manifest as a NULL ptr dereference:
#0 mdb_xcursor_init1 (mc=mc@entry=0x1013da0, node=node@entry=0x7ffd76031ac2) at
lib/mdb.c:6525
#1 0x00007ffff5209b73 in mdb_cursor_next (mc=0x1013da0, key=0xce39a0,
data=data@entry=0xce3660, op=<optimized out>) at lib/mdb.c:5042
#2 0x00007ffff5208a6e in mdb_cursor_get (mc=0x1013da0, key=0xce39a0,
data=0xce3660, op=op@entry=MDB_NEXT) at lib/mdb.c:5526
#3 0x00007ffff5202a83 in _cffi_f_mdb_cursor_get (self=<optimized out>,
args=<optimized out>) at lmdb/__pycache__/lmdb_cffi.c:714
#4 0x0000000000471a8b in call_function (oparg=<optimized out>,
pp_stack=0x7fffffffcf30) at ../Python/ceval.c:4021
#5 PyEval_EvalFrameEx (
This is due to the F_DUPDATA bit being set, as tested on line 5041. Examining
the leaf structure reveals a large chunk of ASCII text (which should have been
written to the DB), interspersed with some integer values.
Versions of LMDB <= 0.9.7 exhibit similar crashes, although in a different
place: "assert(mc)" at the start of mdb_cursor_get().
Various MDB versions going back 6 months were tested, and none exhibited any
better behaviour. In the attached trace/replay, version
c0575825730dd2aab4031100b25d632a3d052447 from April exhibited the bug after only
1993 operations executed from the trace file. Any more recent version requires
the entire trace file to execute before triggering the assert or SEGV.
http://www.h1.botanicus.net/20130919-lmdb.trace.bz2 is a 33MiB trace file that
can be loaded and executed using
https://github.com/dw/acid/blob/master/misc/lmdb-replay.c
The apparent workaround is to reinitialize any existing cursors should a
deletion occur within a transaction, as demonstrated in
https://github.com/dw/acid/commit/56f183c71a668e540df58d5137e946616a446c49
AW: (ITS#7655) segfault during initial mirror of multimaster delta replication
by hans.freitag@entiretec.com
Hi,
unfortunately I was not able to reproduce the exact problem with the segfault,
but after a few updates we still have the problem that, with replication
enabled, slapd freezes during a write operation.
SETUP DESCRIPTION:
OpenLDAP version 2.4.36
Back-MDB (we have had issues for quite a while, even when we were running on BDB)
All write and read requests are directed to the active node, so the passive
node is replicating.
So, if I have not misunderstood something, there are two threads: the main
thread and the one doing the replication.
Netstat of the TCP replication connections; the second is initiated by the
passive system polling from the active:
tcp 0 53 10.169.127.13:389 10.169.126.13:43340 ESTABLISHED
tcp 1905336 0 10.169.127.13:52384 10.169.126.13:389 ESTABLISHED
top -H of the LDAP Processes:
7767 ldap 20 0 84.4g 7.1g 6.9g S 1 10.1 1:02.13 slapd
7768 ldap 20 0 84.4g 7.1g 6.9g S 0 10.1 7:54.44 slapd
8023 ldap 20 0 84.4g 7.1g 6.9g S 0 10.1 0:32.31 slapd
7766 ldap 20 0 84.4g 7.1g 6.9g S 0 10.1 0:00.00 slapd
7769 ldap 20 0 84.4g 7.1g 6.9g S 0 10.1 0:32.81 slapd
7770 ldap 20 0 84.4g 7.1g 6.9g S 0 10.1 7:44.94 slapd
8024 ldap 20 0 84.4g 7.1g 6.9g t 0 10.1 0:32.53 slapd
PASTEBIN:
I pastebinned all the backtraces at:
http://pastebin.com/vVGEqEUt
I hope this helps to track down the problem.
Kind regards - Mit freundlichen Grüßen
i.A. Hans Freitag
» Linux Administrator
ENTIRETEC AG . Pforzheimer Strasse 33 . 01189 Dresden . Germany
T: +49.351.41355.0 . M: . F: +49.351.41355.99
E: hans.freitag(a)entiretec.com
ENTIRETEC | http://www.entiretec.com
Germany | Switzerland | United Arab Emirates | Malaysia | United States of America
ENTIRETEC AG
Vorstand: Thomas Herrmann (Vorsitzender), Thomas Wetzel, Carsten Klemm . Aufsichtsratsvorsitzende: Dr. Jutta Horezky
Sitz der Gesellschaft: Dresden . Amtsgericht Dresden HRB 24915 . USt-IdNr. DE227705033
> -----Original Message-----
> From: openldap-bugs-bounces(a)OpenLDAP.org
> [mailto:openldap-bugs-bounces(a)OpenLDAP.org] On behalf of quanah(a)zimbra.com
> Sent: Monday, 5 August 2013 05:15
> To: openldap-its(a)openldap.org
> Subject: Re: (ITS#7655) segfault during initial mirror of multimaster
> delta replication
>
> --On Sunday, August 04, 2013 4:27 PM +0000 hans.freitag(a)entiretec.com
> wrote:
>
> > Full_Name: Hans Freitag
> > Version: 2.4.35 and 33
> > OS: SLES 11SP2
> > URL: ftp://ftp.openldap.org/incoming/
> > Submission from: (NULL) (193.200.138.3)
> >
> >
> > I have a multimaster delta replication setup here with bdb on an 18 GB
> > database.
> >
> > After a crash due to a full disk I made a new database on one node and
> > started over.
> >
> > The empty node started to replicate from the full one, but after a while
> > (approx. 2 GB) it crashed with a segfault:
> >
> > Aug 4 11:45:32 mhr-dd-lda-01 kernel: [52189.476209] slapd[10158]:
> > segfault at 20 ip 00007ff97ebfabc0 sp 00007ff6e57e6b38 error 4 in
> > libc-2.11.1.so[7ff97eb79000+155000]
> >
> > So I thought maybe it is not a good idea to put a package for SP2 on a
> > machine running SP1, so my first attempt at a fix was an upgrade. After
> > the upgrade I got this:
> >
> > Aug 4 12:46:29 mhr-dd-lda-01 kernel: [ 1414.757587] slapd[3704]:
> > segfault at 20 ip 00007fc82eee6182 sp 00007fc592e0acf0 error 4 in
> > slapd[7fc82ee7a000+1e6000]
> >
> > So I created a brand-new OpenLDAP 2.4.35 RPM to find out whether the
> > problem is related to the 2.4.33 version I am running. But it failed too:
> >
> > Aug 4 13:47:19 mhr-dd-lda-01 kernel: [ 5063.074410] slapd[8749]:
> > segfault at 20 ip 00007fcbc1b537dc sp 00007fc92624fb88 error 4 in
> > slapd[7fcbc1ac8000+1ea000]
> >
> > At the moment I have deactivated the accesslogging on the node, which
> > seems to work. I will know for sure in a few hours. ;-) I can try to
> > reproduce that on a backup node next week, when all the main nodes are
> > up and running again. :)
>
> I would suggest you build with debugging symbols, enable core files, and
> provide a backtrace of the problem. What you have provided does not give
> any useful information for debugging purposes. You also fail to state the
> backend you are using (back-bdb or back-hdb).
>
> For information on how to provide a backtrace:
>
> <http://www.openldap.org/faq/data/cache/59.html>
>
> Regards,
> Quanah
>
> --
>
> Quanah Gibson-Mount
> Lead Engineer
> Zimbra, Inc
> --------------------
> Zimbra :: the leader in open source messaging and collaboration
>
Re: (ITS#7698) Multiple Paged search requests on one connection fail
by michael@stroeder.com
john.unsworth(a)cp.net wrote:
> AFAIK it is not the standard LDAP changelog - let me know if I'm wrong. We
> don't want to write specific code for each LDAP server that doesn't conform
> to the LDAP (draft I know) standard.
I presume you're referring to [1] as "the standard LDAP changelog".
Note that this is an expired informational Internet draft and not a proposed
standard.
Ciao, Michael.
[1] https://tools.ietf.org/html/draft-good-ldap-changelog-04
Re: (ITS#7698) Multiple Paged search requests on one connection fail
by michael@stroeder.com
John Unsworth wrote:
> LDAP servers we connect to typically contain thousands if not millions of
> entries. Reading the lot without paging - although we can do it - is very
> inefficient both in terms of LDAP resources and application resources.
I really wonder why that is inefficient. I'm also developing custom sync
software for large data sets. If stream processing is possible you don't
need many resources. If stream processing is not possible, paging does not
help either.
> In the case of OpenLDAP there is no LDAP changelog and so the only way we
> have of discovering changes is to read the whole directory and compare with
> what was read last time.
Use slapo-accesslog, which is more powerful than the old changelog data anyway.
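For reference, a minimal slapo-accesslog setup in slapd.conf looks roughly like this (a sketch only; the suffixes and directories below are placeholders, not taken from this thread):

```
# Separate database holding the access log (placeholder path/suffix):
database mdb
suffix "cn=accesslog"
directory /var/lib/ldap/accesslog
index reqStart eq

# Primary database, with the accesslog overlay recording write operations:
database mdb
suffix "dc=example,dc=com"
directory /var/lib/ldap/data
overlay accesslog
logdb "cn=accesslog"
logops writes
logsuccess TRUE
```

A consumer can then poll the log database (e.g. filtering on reqStart) instead of rereading the whole directory.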
Ciao, Michael.
Re: (ITS#7698) Multiple Paged search requests on one connection fail
by hyc@symas.com
John Unsworth wrote:
>> There are no other directory servers in existence that can scale to the
>> level that OpenLDAP does and perform well at that scale.
>
> How do you justify/prove that statement?
Our (Symas) customers tell us which directory server they're migrating from
when they come to us. We perform benchmarks periodically as well. Nothing else
comes close. Come to LDAPCon this year, we'll be presenting our latest
benchmark results then.
> -----Original Message-----
> From: Howard Chu [mailto:hyc@symas.com]
> Sent: 17 September 2013 20:52
> To: john.unsworth(a)cp.net; openldap-its(a)openldap.org
> Subject: Re: (ITS#7698) Multiple Paged search requests on one connection
> fail
>
> john.unsworth(a)cp.net wrote:
>> Products other than browsers use LDAP servers. Our product is a meta
>> directory that watches for changes on multiple data sources (LDAP,
>> Databases, SAP, ...) and reconciles any changes across the complete
>> set of sources according to data filtering and transformation rules.
>> However sometimes it is necessary to read the whole data set - for
>> example when a new data server is added, or when changes may have been
>> missed for any reason, or when an 'audit' type operation is required
>> to ensure that all data sources are consistent. LDAP servers we
>> connect to typically contain thousands if not millions of entries.
>
> OpenLDAP is regularly used in production by large telcos with billions of
> entries. There are no other directory servers in existence that can scale to
> the level that OpenLDAP does and perform well at that scale.
>
>> Reading the lot without paging -
>> although we can do it - is very inefficient both in terms of LDAP
>> resources and application resources.
>
> That's utter nonsense. The amount of data is the same no matter how many
> pieces you slice it into. It is *more* inefficient to slice it into multiple
> requests.
>
>> The directory is read using multiple parallel paged searches across
>> different branches on a single connection. Every directory we have
>> used except OpenLDAP can manage this effectively.
>
>> In the case of OpenLDAP there is no LDAP changelog and so the only way
>> we have of discovering changes is to read the whole directory and
>> compare with what was read last time.
>
> So you're saying you're connecting to OpenLDAP servers from version 2.1 or
> older, from before syncrepl or the accesslog or changelog overlays existed.
>
>> -----Original Message-----
>> From: Michael Ströder [mailto:michael@stroeder.com]
>> Sent: 17 September 2013 20:20
>> To: john.unsworth(a)cp.net; openldap-its(a)openldap.org
>> Subject: Re: (ITS#7698) Multiple Paged search requests on one
>> connection fail
>>
>> john.unsworth(a)cp.net wrote:
>>> I would also be interested to understand why " paged results is
>>> inherently flawed".
>>
>> Although being the author of an interactive LDAP UI client I still
>> wonder why people want paged results (or tree browsing).
>>
>> If you have so many results you should narrow the search criteria.
>> Browsing/paging is pretty inefficient working style.
>>
>> Ciao, Michael.
--
Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/