Full_Name: Quanah Gibson-Mount
Version: 2.4
OS: N/A
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (47.208.128.44)
The majority of supported overlays use an objectClass of the format:
olcOVERLAYConfig
However, there are two overlays that do *not* follow this format, which is
confusing.
memberOf -> olcMemberOf
dynlist -> olcDynamicList
For 2.5, I would suggest we change these two to be consistent with all the other
overlays, and document the change in the upgrade-notes section of the Admin
Guide.
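If the rename is adopted, the converted entries could look like the following. The olcMemberOfConfig name is purely illustrative; the final 2.5 name would be whatever the project settles on:

```ldif
# 2.4 today: one of the two odd ones out
dn: olcOverlay={0}memberof,olcDatabase={1}mdb,cn=config
objectClass: olcMemberOf
olcOverlay: {0}memberof

# Hypothetical 2.5: consistent with the olcOVERLAYConfig pattern
dn: olcOverlay={0}memberof,olcDatabase={1}mdb,cn=config
objectClass: olcMemberOfConfig
olcOverlay: {0}memberof
```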
--On Tuesday, August 08, 2017 7:08 PM +0000 dhawes(a)gmail.com wrote:
> Full_Name: David Hawes
> Version: 2.4.45
> OS: Linux
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (2001:468:c80:2103:0:523:da5e:da5e)
Hi David,
I believe this was fixed with ITS#8796 (part of the 2.4.46 release). Can
you confirm?
Regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
On Mon, Jan 22, 2018 at 11:57:38PM +0000, ondra(a)mistotebe.net wrote:
> On Mon, Jan 22, 2018 at 09:59:21PM +0000, quanah(a)openldap.org wrote:
>> After doing conversion, the resulting cn=config database has *two* ldap backends
>> defined:
>>
>> dn: olcDatabase={-1}frontend,cn=config
>> dn: olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
>> dn: olcDatabase={0}ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=conf
>
> This is the catchall database used to handle referrals that are not
> handled by any other database you configure by hand. It collects all the
> chain-* settings that appear before the first chain-uri.
>
>> dn: olcDatabase={1}ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=conf
>>
>> The first instance ({0}ldap,...) isn't even valid. If you remove the entire
>> chain configuration from this database, and then attempt to import it, you get
>> the following:
>
> Yeah that is a problem.
It turns out the problem is a different one. When the overlay is started up
after its entry is added, it generates a default backend internally. When the
above backend entry is then added, the overlay thinks it already has a default
one (even though there is no entry for it yet) and rejects it.
>> This is because the first instance ({0}) is *missing* the required olcDbURI
>> attribute. In addition, it generates completely bogus attribute values (See
>> ITS#8693)
>
> Maybe we just need to inherit objectclass: olcLdapDatabase somehow in
> olcChainOverlay and keep these settings in the overlay entry, or specify
> a bogus URI to be configured there. Whatever is most useable and still
> allows for seamless expansion.
I am still considering making the overlay objectClass inherit the attributes
from olcLdapDatabase instead of creating a fake DB, but that can't be done for
2.4 and I don't yet see how to go about it properly anyway.
For 2.4 at least it might make sense to just use the flags on the
default backend to say it has no entry associated with it (yet) and:
- clear that in ldap_chain_cfadd_apply so we know it can be replaced
later
- also create the entry if slapd is just starting up (How about
cn=config replication though? Upgrades need to be planned)
- maybe only let a default backend be added if there really is no other
backend yet (including the temporary ones used during normal
operation), since those will get some defaults from it
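For reference, a chain database entry that passes schema checks must carry olcDbURI; a minimal hand-written sketch (the DN positions and the URI are illustrative):

```ldif
dn: olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcOverlayConfig
objectClass: olcChainConfig
olcOverlay: {0}chain

dn: olcDatabase={0}ldap,olcOverlay={0}chain,olcDatabase={-1}frontend,cn=config
objectClass: olcLDAPConfig
objectClass: olcChainDatabase
olcDatabase: {0}ldap
olcDbURI: "ldap://provider.example.com"
```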
--
Ondřej Kuzník
Senior Software Engineer
Symas Corporation http://www.symas.com
Packaged, certified, and supported LDAP solutions powered by OpenLDAP
--On Tuesday, May 07, 2019 10:35 AM +0000 vangelier(a)hotmail.com wrote:
> Full_Name: Victor Angelier
> Version: OpenLDAP: slapd 2.4.44
> OS: CentOS
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (78.78.44.170)
>
>
> When using OpenLDAP with NSS DB in HA setup you can not change the TLS
> certificate name through LDIF with ldapmodify.
Hello,
The MozNSS implementation was done by RedHat. It has since been abandoned
by RedHat, although they have left a MozNSS/OpenSSL bridge in place.
Any issues with MozNSS and OpenLDAP, or with OpenLDAP and RedHat's OpenSSL to
MozNSS bridge, need to be filed with RedHat, as they are not OpenLDAP issues.
If you can reproduce the problem with an OpenLDAP build that is directly
linked to OpenSSL without RedHat's MozNSS bridge, please follow up.
This ITS will be closed until any such follow up is provided.
Warm regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
Full_Name: Victor Angelier
Version: OpenLDAP: slapd 2.4.44
OS: CentOS
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (78.78.44.170)
When using OpenLDAP with NSS DB in HA setup you can not change the TLS
certificate name through LDIF with ldapmodify.
The only way to update the TLS certificate name is by editing the cn=config.ldif
file directly, which breaks its CRC32 signature.
This is an especially serious issue in an HA setup.
To reproduce: install and set up OpenLDAP in HA (I have 2 nodes), then
configure it to use NSS DB.
cn=config.ldif
cat /etc/openldap/slapd.d/cn\=config.ldif
# AUTO-GENERATED FILE - DO NOT EDIT!! Use ldapmodify.
# CRC32 13782a66
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/openldap/slapd.args
olcPidFile: /var/run/openldap/slapd.pid
olcTLSProtocolMin: 3.3
olcTLSCipherSuite: ECDHE-RSA-AES256-SHA384:AES256-SHA256:!RC4:HIGH:!MD5:!aNULL:!EDH:!EXP:!SSLV2:!eNULL:!SSLV3
olcTLSDHParamFile: /etc/openldap/ssl/dhparams
structuralObjectClass: olcGlobal
entryUUID: ef483c7c-da8d-1038-907a-df6f97fe6ec7
creatorsName: cn=config
createTimestamp: 20190314101611Z
olcTLSCACertificatePath: /etc/openldap/ssl
olcTLSCACertificateFile: /etc/pki/tls/certs/ca-bundle.crt
olcTLSCertificateFile: "Cyberdyne Security"
olcTLSCertificateKeyFile: /etc/openldap/ssl/password
olcTLSVerifyClient: allow
olcServerID: 1 ldaps://ldap-n1.cyberdynesecurity.ae
olcServerID: 2 ldaps://ldap-n2.cyberdynesecurity.ae
olcLogFile: /var/log/slapd.log
entryCSN: 20190507074650.989216Z#000000#001#000000
modifiersName: cn=Manager,dc=cyberdynesecurity,dc=ae
modifyTimestamp: 20190507074650Z
contextCSN: 20190507074650.989216Z#000000#001#000000
contextCSN: 20190402094130.452589Z#000000#002#000000
Now try to change "olcTLSCertificateFile" through LDIF:
vi change.ldif
dn: cn=config
changetype: modify
replace: olcTLSCertificateFile
olcTLSCertificateFile: "new certificate name"
ldapmodify -Y EXTERNAL -H ldapi:/// -f change.ldif -v
ldap_initialize( ldapi:///??base )
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
replace olcTLSCertificateFile:
"new certificate name"
modifying entry "cn=config"
ldap_modify: Other (e.g., implementation specific) error (80)
If "olcTLSCertificateFile" is set to an existing file like /tmp/certificate.crt
it works fine.
Sorry for my late reply.
https://www.osstech.co.jp/download/hamano/openldap/ppolicy_fix_pwdInHistory…
The attached file is derived from OpenLDAP Software. All of the modifications to OpenLDAP Software represented in the following patch(es) were developed by Open Source Solution Technology Corporation. Open Source Solution Technology Corporation has not assigned rights and/or interest in this work to any party. I, HAMANO Tsukasa am authorized by Open Source Solution Technology Corporation, my employer, to release this work under the following terms.
Open Source Solution Technology Corporation hereby place the following modifications to OpenLDAP Software (and only these modifications) into the public domain. Hence, these modifications may be freely used and/or redistributed for any purpose with or without attribution and/or other notice.
On Fri, 08 Sep 2017 01:21:42 +0900,
Quanah Gibson-Mount wrote:
>
> --On Tuesday, January 12, 2016 7:28 AM +0000 hamano(a)osstech.co.jp wrote:
>
> > Full_Name: HAMANO Tsukasa
> > Version: 2.4.43
> > OS: Linux
> > URL:
> > https://www.osstech.co.jp/download/hamano/openldap/ppolicy_fix_pwdInHisto
> > ry.patch Submission from: (NULL) (240b:10:2640:bf0:290:4cff:fe0d:f43e)
> >
> >
> > We fixed several issues around ppolicy.
> >
> > 1) reducing pwdInHistory
> > If pwdInHistory is set to 5 and later reduced to 3,
> > we expect passwords to be checked against three history entries, but
> > ppolicy checks the password against all pwdHistory attribute values.
> >
> > 2) reducing pwdInHistory to zero
> > If pwdInHistory is set to 5 and later reduced to 0,
> > we expect ppolicy password history checking to be disabled, but the
> > pwdHistory attribute values remain, so password checking is still
> > enabled.
> > We need to remove the pwdHistory attribute.
>
> Hi,
>
> I'm working on catching up on old ITS submissions. This submission is
> missing an IPR and cannot be included until it is provided. Please
> see <http://www.openldap.org/devel/contributing.html> for information
> on the IPR requirements.
>
> Thanks,
> Quanah
>
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
>
--
Open Source Solution Technology Corporation
HAMANO Tsukasa <hamano(a)osstech.co.jp>
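Until a fix along these lines lands, a manual workaround for case (2) above is to clear the stored history entry by entry; since pwdHistory is an operational attribute, this typically requires rootdn access (the DN below is illustrative):

```ldif
dn: uid=alice,ou=people,dc=example,dc=com
changetype: modify
delete: pwdHistory
```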
--On Friday, February 15, 2019 2:37 PM +0000 hyc(a)openldap.org wrote:
> Full_Name: Howard Chu
> Version: 2.5
> OS:
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (80.233.42.124)
> Submitted by: hyc
>
>
> IDLs used for index slots are hardcoded to a max of 65536 elements in the
> DB and twice that in memory. Sites with larger DBs tend to need larger
> IDLs; if they're running with sufficient memory it ought to be possible
> to configure larger IDLs.
Per the documentation, this new setting is a "global" option that affects all
MDB databases in the configuration, and it must be configured in a "backend
mdb" section.
However, there appear to be some problems with the implementation.
Issue #1: "backend mdb" requires a "directory" directive
Since this is a new "global" concept, "directory" should not be required
(and in fact seems problematic).
Issue #2: If a "directory" statement is supplied (see above), slapd core
dumps when attempting to convert this configuration to cn=config:
The slapd.conf file I'm using to test is:
include /usr/local/etc/openldap/schema/core.schema
include /usr/local/etc/openldap/schema/cosine.schema
include /usr/local/etc/openldap/schema/inetorgperson.schema
modulepath /usr/local/lib64/openldap
moduleload back_mdb
backend mdb
idlexp 32
database mdb
directory /var/openldap-data
suffix dc=example,dc=com
index default eq
index objectClass
(The directory line location was moved to the "backend" section after
initial testing due to issue #1).
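For context, the conversion step that crashes in issue #2 is the usual slapd.conf-to-cn=config conversion, e.g. (paths match the example above):

```shell
mkdir -p /usr/local/etc/openldap/slapd.d
# With "directory" present under the "backend mdb" section, this is
# where slapd dumps core while generating the cn=config database.
slaptest -f /usr/local/etc/openldap/slapd.conf -F /usr/local/etc/openldap/slapd.d
```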
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
For the sake of putting this in the email thread (other code discussion in
GitHub), here is the latest squashed commit of the proposed patch (with the
on-demand, retained overlapped array to reduce re-malloc and opening event
handles):
https://github.com/kriszyp/node-lmdb/commit/726a9156662c703bf3d453aab75ee222072b990f
Thanks,
Kris
From: Kris Zyp
Sent: April 30, 2019 12:43 PM
To: Howard Chu; openldap-its(a)OpenLDAP.org
Subject: RE: (ITS#9017) Improving performance of commit sync in Windows
> What is the point of using writemap mode if you still need to use WriteFile
> on every individual page?
As I understood from the documentation, and have observed, using writemap mode
is faster (and uses less temporary memory) because it doesn't require mallocs
to allocate pages (docs: "This is faster and uses fewer mallocs"). To be clear
though, LMDB is so incredibly fast and efficient that, in sync mode, it takes
enormous transactions before the time spent allocating and creating the dirty
pages with the updated b-tree is anywhere even remotely close to the time it
takes to wait for disk flushing, even with an SSD. But the more pertinent
question is efficiency, measuring CPU cycles rather than time spent
(efficiency is more important than just time spent). When I ran my tests this
morning of 100 (sync) transactions with 100 puts per transaction, times varied
quite a bit, but it seemed like running with writemap enabled typically
averages about 500ms of CPU, and with writemap disabled it typically averages
around 600ms. Not a huge difference, but still definitely worthwhile, I think.
Caveat emptor: measuring LMDB performance with sync interactions on Windows is
one of the most frustratingly erratic things to measure. It is sunny outside
right now; times could be different when it starts raining later. But this is
what I saw this morning...
> What is the performance difference between your patch using writemap, and just
> not using writemap in the first place?
Running 1000 sync transactions on a 3GB db with a single put per transaction,
without writemap mode and without the patch took about 60 seconds. It took
about 1 second with the patch with writemap mode enabled! (There is no
significant difference in sync times with writemap enabled or disabled with
the patch.) So the difference was huge in my test. And not only that: without
the patch, the CPU usage was actually _higher_ during those 60 seconds (close
to 100% of a core) than during the one-second execution with the patch (close
to 50%). Anyway, there are certainly tests I have run where the differences
are not as large (doing small commits on large dbs accentuates the
differences), but the patch always seems to win. It could also be that my
particular configuration causes bigger differences (on an SSD drive, and maybe
a more fragmented file?).
Anyway, I added error handling for the malloc and fixed/changed the other
things you suggested. I'd be happy to make any other changes you want. The
updated patch is here:
https://github.com/kriszyp/node-lmdb/commit/25366dea9453749cf6637f43ec17b9b62094acde
> OVERLAPPED* ov = malloc((pagecount - keep) * sizeof(OVERLAPPED));
> Probably this ought to just be pre-allocated based on the maximum number
> of dirty pages a txn allows.
I wasn't sure I understood this comment. Are you suggesting we
malloc(MDB_IDL_UM_MAX * sizeof(OVERLAPPED)) for each environment and retain it
for the life of the environment? I think that is 4MB, if my math is right,
which seems like a lot of memory to keep allocated (we usually have a lot of
open environments). If the goal is to reduce the number of mallocs, how about
we retain the OVERLAPPED array, and only free and re-malloc if the previous
allocation wasn't large enough? Then there isn't unnecessary allocation, and
we only malloc when there is a bigger transaction than any previous one. I put
this together in a separate commit, as I wasn't sure if this is what you
wanted (can squash if you prefer):
https://github.com/kriszyp/node-lmdb/commit/2fe68fb5269c843e2e789746a17a4b2adefaac40
Thank you for the review!
Thanks,
Kris
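The retain-and-grow strategy described above (keep one array per environment, reallocate only when a transaction is bigger than any previous one) can be sketched in plain C; the env_buf names are illustrative, not part of LMDB's API:

```c
#include <stdlib.h>

/* Sketch of a retained, grow-on-demand scratch array, as proposed for the
 * per-environment OVERLAPPED array. Names here are illustrative only. */
typedef struct env_buf {
    void  *ptr;  /* retained allocation, reused across transactions */
    size_t cap;  /* number of elements currently allocated */
} env_buf;

/* Return storage for at least `need` elements of `elem` bytes each,
 * reallocating only when the retained block is too small. */
static void *env_buf_reserve(env_buf *b, size_t need, size_t elem) {
    if (need > b->cap) {
        void *p = realloc(b->ptr, need * elem);
        if (!p)
            return NULL;  /* caller reports out-of-memory */
        b->ptr = p;
        b->cap = need;
    }
    return b->ptr;  /* unchanged (no allocation) when capacity suffices */
}
```

A commit needing `pagecount - keep` OVERLAPPED slots would call env_buf_reserve once; most commits after the largest one then perform no allocation at all.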
From: Howard Chu
Sent: April 30, 2019 7:12 AM
To: kriszyp(a)gmail.com; openldap-its(a)OpenLDAP.org
Subject: Re: (ITS#9017) Improving performance of commit sync in Windows
kriszyp(a)gmail.com wrote:
> Full_Name: Kristopher William Zyp
> Version: LMDB 0.9.23
> OS: Windows
> URL: https://github.com/kriszyp/node-lmdb/commit/7ff525ae57684a163d32af74a0ab9332b7fc4ce9
> Submission from: (NULL) (71.199.6.148)
>
>
> We have seen very poor performance on the sync of commits on large databases
> in Windows. On databases with 2GB of data, in writemap mode, the sync of even
> small commits is consistently well over 100ms (without writemap it is faster,
> but still slow). It is expected that a sync should take some time while
> waiting for disk confirmation of the writes, but more concerning is that
> these sync operations (in writemap mode) are instead dominated by nearly 100%
> system CPU utilization, so operations that require sub-millisecond b-tree
> updates are then dominated by very large amounts of system CPU cycles during
> the sync phase.
>
> I think that the fundamental problem is that FlushViewOfFile seems to be an
> O(n) operation, where n is the size of the file (or map). I presume that
> Windows is scanning the entire map/file for dirty pages to flush, I'm
> guessing because it doesn't have an internal index of all the dirty pages for
> every file/map-view in the OS disk cache. Therefore, this turns into an
> extremely expensive, CPU-bound operation to find the dirty pages of a large
> file and initiate their writes, which, of course, is contrary to the whole
> goal of a scalable database system. And FlushFileBuffers is relatively slow
> as well. We have attempted to batch as many operations into a single
> transaction as possible, but this is still a very large overhead.
>
> The Windows docs for FlushFileBuffers itself warn about the inefficiencies
> of this function
> (https://docs.microsoft.com/en-us/windows/desktop/api/fileapi/nf-fileapi-flushfilebuffers),
> which also points to the solution: it is much faster to write out the dirty
> pages with WriteFile through a sync file handle (FILE_FLAG_WRITE_THROUGH).
>
> The associated patch
> (https://github.com/kriszyp/node-lmdb/commit/7ff525ae57684a163d32af74a0ab9332b7fc4ce9)
> is my attempt at implementing this solution for Windows. Fortunately, with
> the design of LMDB, this is relatively straightforward. LMDB already supports
> writing out dirty pages with WriteFile calls. I added a write-through handle
> for sending these writes directly to disk. I then made that file handle
> overlapped/asynchronous, so all the writes for a commit could be started in
> overlap mode and (at least theoretically) transfer in parallel to the drive,
> and then used GetOverlappedResult to wait for the completion. So basically
> mdb_page_flush becomes the sync. I extended the writing of dirty pages
> through WriteFile to writemap mode as well (for writing meta too), so that
> WriteFile with write-through can be used to flush the data without ever
> needing to call FlushViewOfFile or FlushFileBuffers. I also implemented
> support for write gathering in writemap mode, where contiguous file positions
> imply contiguous memory (by tracking the starting position with wdp and
> writing contiguous pages in single operations). Sorting of the dirty list is
> maintained even in writemap mode for this purpose.
What is the point of using writemap mode if you still need to use WriteFile
on every individual page?
> The performance benefits of this patch, in my testing, are considerable.
> Writing out/syncing transactions is typically over 5x faster in writemap
> mode, and 2x faster in standard mode. And perhaps more importantly
> (especially in environments with many threads/processes), the efficiency
> benefits are even larger, particularly in writemap mode, where there can be a
> 50-100x reduction in system CPU usage by using this patch. This brings
> Windows performance with sync'ed transactions in LMDB back into the range of
> "lightning" performance :).
What is the performance difference between your patch using writemap, and just
not using writemap in the first place?
-- 
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
</o:p></p><p class=3DMsoNormal>What is the point of using writemap mo=
de if you still need to use WriteFile<o:p></o:p></p><p class=3DMsoNormal>on=
every individual page?<o:p></o:p></p><p class=3DMsoNormal><o:p> </o:p=
></p><p class=3DMsoNormal>> The performance benefits of this patch, in m=
y testing, are considerable. Writing<o:p></o:p></p><p class=3DMsoNormal>>=
; out/syncing transactions is typically over 5x faster in writemap mode, an=
d 2x<o:p></o:p></p><p class=3DMsoNormal>> faster in standard mode. And p=
erhaps more importantly (especially in environment<o:p></o:p></p><p class=
=3DMsoNormal>> with many threads/processes), the efficiency benefits are=
even larger,<o:p></o:p></p><p class=3DMsoNormal>> particularly in write=
map mode, where there can be a 50-100x reduction in the<o:p></o:p></p><p cl=
ass=3DMsoNormal>> system CPU usage by using this patch. This brings wind=
ows performance with<o:p></o:p></p><p class=3DMsoNormal>> sync'ed transa=
ctions in LMDB back into the range of "lightning" performance :).=
<o:p></o:p></p><p class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNorm=
al>What is the performance difference between your patch using writemap, an=
d just<o:p></o:p></p><p class=3DMsoNormal>not using writemap in the first p=
lace?<o:p></o:p></p><p class=3DMsoNormal><o:p> </o:p></p><p class=3DMs=
oNormal>-- <o:p></o:p></p><p class=3DMsoNormal> -- Howard Chu<o:=
p></o:p></p><p class=3DMsoNormal> CTO, Symas Corp. &=
nbsp; http://www.symas.com<o:p></o:p></=
p><p class=3DMsoNormal> Director, Highland Sun  =
; http://highlandsun.com/hyc/<o:p></o:p></p><p class=3DMsoNormal> Chi=
ef Architect, OpenLDAP http://www.openldap.org/project/<o:p></o:p></p=
><p class=3DMsoNormal><o:p> </o:p></p><p class=3DMsoNormal><o:p> =
</o:p></p></div></body></html>=
--_E0C027EF-451F-4EC6-B6DE-2F6B94348BB5_--
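[Editor's note: the commit-time flow described in the patch message above can be sketched roughly as follows. This is paraphrasing pseudocode, not the patch itself; identifiers such as `wthru_handle` and `ov[]` are illustrative, not the patch's actual names.]

```
// Open the data file a second time for durable writes:
//   CreateFile(..., FILE_FLAG_WRITE_THROUGH | FILE_FLAG_OVERLAPPED, ...)
//
// In mdb_page_flush (the dirty list is kept sorted, so contiguous pages
// can be coalesced into single writes, even in writemap mode):
for each run of contiguous dirty pages:
    prepare OVERLAPPED ov[i] with the run's starting file offset
    WriteFile(wthru_handle, run_start, run_bytes, NULL, &ov[i])  // start async
for each issued write i:
    GetOverlappedResult(wthru_handle, &ov[i], &written, TRUE)    // wait for it

// Because every write went through the write-through handle, the data is
// already durable once the writes complete: no O(file size)
// FlushViewOfFile/FlushFileBuffers pass is needed afterwards.
```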
--On Wednesday, April 24, 2019 9:23 PM +0000 siddjain(a)live.com wrote:
> Thank you. We tried using another OpenLDAP image and that worked, so it
> seems the problem is with the osixia docker image we were using to run
> OpenLDAP. It is based on Debian (which uses GnuTLS per your email), so
> tbh we are surprised it would have such a bug in it. The image that
> worked for us is based on Alpine.
> https://github.com/osixia/docker-light-baseimage/blob/stable/image/Dockerfile
> https://github.com/tiredofit/docker-openldap/blob/master/Dockerfile
> But back to your comment, how can one isolate what TLS/SSL library
> OpenLDAP is linked to in the environment you're using?
Use the "ldd" command or similar to see what libraries it is linked to. I
see you filed a bug against Osixia, however the bug should be filed against
GnuTLS, as that's where the issue is.
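For example (a minimal sketch; the `/usr/sbin/slapd` path is an assumption and varies by distribution and build):

```shell
# List the shared libraries slapd is linked against and filter for the
# usual TLS implementations: libssl/libcrypto indicate an OpenSSL build,
# libgnutls indicates a GnuTLS build.
ldd /usr/sbin/slapd | grep -Ei 'gnutls|libssl|libcrypto'
```

On systems without `ldd`, similar information is available from `objdump -p` or `readelf -d` by inspecting the NEEDED entries.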
This ITS will be closed as it is not OpenLDAP related.
--Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
--On Friday, May 03, 2019 5:22 PM +0000 dominiks.mail(a)gmx.net wrote:
> 2. flags=override
> After I added the flags=override setting in the idassert-bind section,
> the problem was gone and everything works for me. I don't know if it's a
> good approach, but at least it works for now with our OpenLDAP backend
> server.
Prior to OpenLDAP 2.4.33, where ITS#7403 was fixed (8829115c76), back-ldap
would always behave as if the "override" flag had been set whenever the
idassert-bind directive was configured. I.e., the previously working
configuration depended upon a bug, and that bug has since been fixed.
This ITS will be closed as back-ldap is functioning as designed and
documented in the slapd-ldap(5) man page.
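For reference, a configuration along these lines (a sketch only; the URI, DN, and credentials are placeholders) makes the override behavior explicit, per slapd-ldap(5):

```
database        ldap
uri             "ldap://backend.example.com"
idassert-bind   bindmethod=simple
                binddn="cn=proxy,dc=example,dc=com"
                credentials=secret
                mode=self
                flags=override
```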
Regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>