On Mon, Jun 03, 2019 at 09:43:11AM +0000, jpayanides(a)prosodie.com wrote:
> Hello Ondřej,
Hi Jean-Philippe,
> sorry to answer so late, I was off (sick).
>
> Answering your questions:
>
> I didn't try to reproduce the crash on a different system.
this will be the easiest way to get more information, both on whether
the crash might be coming from another part of the system and, with
newer tools, more about the crash itself.
> What is the goal of installing a new version of gdb on my system? I am
> not sure it will be easy to proceed.
From what you have provided, the version of gdb you have used seems to
have issues debugging this program; a newer tool might be able to do a
better job (or you can try remote debugging with gdbserver). But if you
can, trying to reproduce this on a different (newer) system should be
a priority.
> Following your advice, I have corrected the chain configuration by
> removing mode=self, but it didn't change anything regarding the crash.
>
> do you think it could be possible that the glibc contains bugs resulting
> that crash?
Hard to tell, your system is ancient and I have no idea how stable any
part of it was.
> I do not know what to do. Would it be relevant to upgrade the OS?
If you can upgrade the system at all, please do so, it has been about 12
years since Debian Etch was released and many things have happened
since.
Regards,
--
Ondřej Kuzník
Senior Software Engineer
Symas Corporation http://www.symas.com
Packaged, certified, and supported LDAP solutions powered by OpenLDAP
Full_Name: Alex
Version: LMDB 0.9.23
OS: iOS
URL: https://hastebin.com/raw/arexecefew
Submission from: (NULL) (2620:119:5001:3000:242c:3bea:acec:6a7d)
Hey guys,
We are using LMDB in our LRUCache implementation and are facing an issue:
when we evict old records from this cache we sometimes get an MDB_MAP_FULL
error even though record eviction via mdb_del() went fine. We were able to
isolate the issue in a unit test which runs from just one thread. Since
there are no concurrent readers in this case we expected LMDB to reclaim
free pages immediately after we commit the "deletion" transaction, but in
some rare cases it still raises MDB_MAP_FULL.
The unit test scenario is as follows:
1) start with an empty DB
2) keep inserting random records until hitting the first MDB_MAP_FULL
3) delete some old records to free some pages
4) try to insert a new record and check that it succeeded.
Step 4 fails with another MDB_MAP_FULL error.
We enabled debugging and attached the debug logs from LMDB. Hope they are
helpful.
--On Sunday, May 26, 2019 12:47 PM +0000 danybensighar(a)gmail.com wrote:
> Full_Name: Dany Bensighar
Hello,
The ITS system is for bug reports. Usage questions such as this should be
sent to the openldap-technical(a)openldap.org mailing list
(<https://www.openldap.org/lists/mm/listinfo/openldap-technical>)
This ITS will be closed.
Regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
Full_Name: Dany Bensighar
Version: 2.4.44
OS: Red Hat Enterprise Linux Server release 7.6 (Maipo)
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (212.138.78.4)
Hello,
I am setting up a master-slave OpenLDAP configuration on RHEL 7. While
adding the syncprov overlay I am getting the error below. I have checked
for stray whitespace and special characters but found nothing.
Please find the syncprov.ldif file
dn: olcOverlay=syncprov,olcDatabase={2}hdb,cn=config
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
olcSpSessionLog: 100
Please find the error
ldapadd -Y EXTERNAL -H ldapi:/// -f syncprov.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=1001+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "olcOverlay=syncprov,olcDatabase={2}hdb,cn=config"
ldap_add: Invalid syntax (21)
additional info: objectClass: value #1 invalid per syntax
Your support will be highly appreciated
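A common cause of "objectClass: value #1 invalid per syntax" on an
olcOverlayConfig entry is that the syncprov module has not been loaded
before the overlay entry is added, so slapd does not recognise
olcSyncProvConfig. A hedged LDIF sketch of loading it first — the
cn=module{0} DN and the syncprov.la module name are assumptions that
depend on the local cn=config layout and module path:

```ldif
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: syncprov.la
```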
The short answer is yes. Our mappings are now bigger than 1TiB (out of 2TiB
total available). Also, it *seems* to be that it is just way faster for
Linux to map things in order all at once. Additionally, we have some boxes
that still have spinning disks, and sequential access is a couple orders of
magnitude faster on those.
I haven't looked at the logic in the Linux pager, but it seems to be fairly
aggressive about taking memory pages back from large memory maps, even
though it doesn't strictly need to. Locking the memory is a bit of a
sledgehammer, but in our experience it is an effective one.
On Thu, May 23, 2019 at 2:23 PM Howard Chu <hyc(a)symas.com> wrote:
> github(a)nicwatson.org wrote:
> > Full_Name: Nic Watson
> > Version:
> > OS: Linux
> > URL: ftp://ftp.openldap.org/incoming/
> > Submission from: (NULL) (73.132.68.128)
> >
> >
> > Goal:
> > I'd like a clean way to get at the address of the data memory map in
> LMDB.
> >
> > MDB_envinfo.me_mapaddr only returns the map address if MAP_FIXED is used.
> >
> > Current Workarounds:
> > * Use OS-specific mechanism to retrieve all memory maps (e.g.
> > /proc/<pid>/smaps).
> > * Defeat opaque handle and reach into the MDB_env struct directly and
> grab the
> > me_map field.
> >
> > Justification:
> > In my current application, I notice a significant performance increase
> if I
> > mlock the mapfile. In order to do that cleanly, I need the address of
> the map.
>
> That sounds odd. Is there a lot of memory pressure from other processes on
> the machine?
> Where is the performance loss or gain coming from?
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
github(a)nicwatson.org wrote:
> Full_Name: Nic Watson
> Version:
> OS: Linux
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (73.132.68.128)
>
>
> Goal:
> I'd like a clean way to get at the address of the data memory map in LMDB.
>
> MDB_envinfo.me_mapaddr only returns the map address if MAP_FIXED is used.
>
> Current Workarounds:
> * Use OS-specific mechanism to retrieve all memory maps (e.g.
> /proc/<pid>/smaps).
> * Defeat opaque handle and reach into the MDB_env struct directly and grab the
> me_map field.
>
> Justification:
> In my current application, I notice a significant performance increase if I
> mlock the mapfile. In order to do that cleanly, I need the address of the map.
That sounds odd. Is there a lot of memory pressure from other processes on the machine?
Where is the performance loss or gain coming from?
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Full_Name: Nic Watson
Version:
OS: Linux
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (73.132.68.128)
Goal:
I'd like a clean way to get at the address of the data memory map in LMDB.
MDB_envinfo.me_mapaddr only returns the map address if MAP_FIXED is used.
Current Workarounds:
* Use OS-specific mechanism to retrieve all memory maps (e.g.
/proc/<pid>/smaps).
* Defeat opaque handle and reach into the MDB_env struct directly and grab the
me_map field.
Justification:
In my current application, I notice a significant performance increase if I
mlock the mapfile. In order to do that cleanly, I need the address of the map.
On Mon, May 20, 2019 at 10:29:02AM +0000, AYANIDES, JEAN-PHILIPPE wrote:
> Yes, it serves traffic. It doesn't crash until I run a bind with a
> wrong password.
>
>> (gdb) cont
>> Continuing.
>
> [LDAP search with wrong password]
>
>> [New Thread -1237541968 (LWP 29380)]
>
>> Program exited with code 0177.
>> (gdb) thread apply all bt full
>
> Yes, sorry. It runs on an old version of Debian (4.0, 32-bit) that I am
> not authorised to upgrade.
>
> It's the first time we have recorded that sort of issue on that platform…
Hi Jean-Philippe,
are you able to reproduce this on a different system? If not, can you
get a more recent gdb onto that system?
I have tried to reproduce this with a config similar to the one you
describe but didn't manage to provoke a segfault with either re24 or
master.
On a side note, I don't think running the chain overlay with mode=self
will work to propagate the ppolicy information the way you want. The
same failing credentials will be used to contact the upstream server and
so the initial bind will be rejected again, not allowing the
modification to be performed.
Regards,
--
Ondřej Kuzník
Senior Software Engineer
Symas Corporation http://www.symas.com
Packaged, certified, and supported LDAP solutions powered by OpenLDAP