Clarification of LMDB transaction behaviour
by Dorian Taylor (Lists)
Re-re-sending this because it appears not to have made it through the first two times:
I have recently taken up stewardship of the Ruby binding for LMDB. It did not take me long to find problems in its design pertaining to concurrent transactions in a multithreaded environment. I would like to fix these problems but I am afraid I still have a few questions after carefully reading the LMDB documentation.
First, I would like to confirm my understanding that there may be only one active read-write transaction per environment, irrespective of processes and/or threads attached, although this transaction may be nested.
This raises some subsidiary questions:
1) The documentation states specifically that *read-write* transactions may be nested, but what about read-only?
2) Must (read-only) transactions always be in a single hierarchy per thread or can there be many “roots” at once?
3) Given that the relevant LMDB structs appear not to discriminate between transaction types, are there consequences for opening, e.g., a read-write transaction subordinate to a read-only one?
The reason I ask is that the current (inherited) design of the binding keeps a hash table of transactions keyed by thread, does not distinguish between read-write and read-only, and affords only a single “root” transaction per thread (whether read-write or read-only). I don’t need the memory leaks, double-frees, deadlocks and other bad behaviour to infer that this structure is probably wrong.
Based on what I can glean from the LMDB documentation, I probably want to separate the read-write and read-only transactions, make the former a singleton (since there can be only one read-write transaction per environment), artificially flatten the latter (since it probably isn’t meaningful to nest a read-only transaction anyway), and then wrap the transaction code so it does the right thing. What I suppose I’m looking for here is confirmation that my assumptions are correct.
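To make the terminology concrete, here is a minimal C sketch of the calls I am reasoning about (it assumes an already-created ./testdb directory and omits most error handling); it is not meant as an answer to the questions above, only to pin down what I mean by “root”, “nested” and “read-only” transactions:

    #include <stdio.h>
    #include <lmdb.h>

    int main(void)
    {
        MDB_env *env;
        MDB_txn *rw, *rw_child, *ro;
        int rc;

        mdb_env_create(&env);
        rc = mdb_env_open(env, "./testdb", 0, 0664);
        if (rc) { fprintf(stderr, "open: %s\n", mdb_strerror(rc)); return 1; }

        /* The singleton in question: only one of these may be live per
         * environment; a second writer blocks until this one finishes. */
        mdb_txn_begin(env, NULL, 0, &rw);

        /* Read-write transactions may be nested by passing a parent. */
        mdb_txn_begin(env, rw, 0, &rw_child);
        mdb_txn_commit(rw_child);
        mdb_txn_commit(rw);

        /* Read-only transactions are begun with MDB_RDONLY; whether these
         * may (or should ever) take a parent is exactly question 1 above. */
        mdb_txn_begin(env, NULL, MDB_RDONLY, &ro);
        mdb_txn_abort(ro);

        mdb_env_close(env);
        return 0;
    }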
Thanks in advance,
--
Dorian Taylor
Make things. Make sense.
https://doriantaylor.com
Read only Replica
by Marc Franquesa
I wrongly supposed that an LDAP server configured with replication
(syncrepl) and not using the syncprov module (so it is only a consumer and not
a provider) would automatically behave as a read-only replica, since it will
sync from the other servers specified in its syncrepl settings but will not
provide deltas through the syncprov module.
I then tested the following scenario (N-way multi-master with one
'ReadReplica'):
- Servers A and B with syncprov enabled (so they are providers)
- Servers A and B both sync (syncrepl) to the other (so they are consumers)
- Added server C with syncrepl to A and B, *but not loading syncprov*, so it is
a consumer only (a 'ReadReplica')?
However, I verified that I can make changes on C and they get stored in C
(they are not replicated to A/B, as those don't sync from C).
- So how do I get C to behave like a true read-only replica (denying writes)?
- If I have to change some settings, note that I'm also replicating the
cn=config tree, so how do I get this setting applied to only one server?
Thanks for any hints or explanations regarding my doubts.
Can syncrepl do both database and acl?
by xuhua.lin@gmail.com
In a simple provider-consumer setup, I added syncprov to both the config database (config,cn=config) and the data database (mdb,cn=config) on the provider, and added the corresponding syncrepl directives on the consumer. The initial replication seems to work: entries added to the provider show up at the consumer, until an ACL rule is added on the provider. When I then check the consumer, the syncrepl directive in its (mdb,cn=config) database has been removed. I guess it is trying to replicate the provider's config, since there is no syncrepl on the provider (I also see syncprov added to the consumer). This defeats the purpose of database replication. Is this expected behavior? Did I miss anything? (In a master-master setup, every server replicates everything for the others.)
Thanks,
Xuhua
Re: [EXT] Slapd unexpectedly shutdown
by Kevin Olbrich
On Tue, 7 Apr 2020 at 23:18, Quanah Gibson-Mount
<quanah(a)symas.com> wrote:
>
>
>
> --On Tuesday, April 7, 2020 11:36 PM +0200 Kevin Olbrich <ko(a)sv01.de> wrote:
>
> > Def. ANOM_ABEND: Triggered when a process ends abnormally (with a
> > signal that could cause a core dump, if enabled).
> >
> > Looks like it's actually crashing but I don't think it's because of
> > "slap_global_control: unrecognized control: 1.3.6.1.4.1.42.2.27.8.5.1"
> > as I can see multiple lines before the crash.
> >
> > Is it possible that Slapd crashes if it tries to proxy data to a node
> > while it shuts down? Maybe a special case where the bind was successful but
> > the query fails?
>
> Hi Kevin,
>
> What OpenLDAP release are you running? I ask because there was a fix that
> went into OpenLDAP 2.4.49 specifically for a crasher with back-ldap and
> controls:
>
> <https://bugs.openldap.org/show_bug.cgi?id=9076>
>
> Regards,
> Quanah
>
My version is 2.4.49+dfsg-2~bpo10+1 (buster-backports), so the fix should be
in there, I think.
I've now included ppolicy.schema to solve the issue. Tomorrow I will
check whether the issue is still present.
Thanks for your input!
Regards
Kevin
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
Re: [EXT] Slapd unexpectedly shutdown
by Quanah Gibson-Mount
--On Tuesday, April 7, 2020 11:36 PM +0200 Kevin Olbrich <ko(a)sv01.de> wrote:
> Def. ANOM_ABEND: Triggered when a process ends abnormally (with a
> signal that could cause a core dump, if enabled).
>
> Looks like it's actually crashing but I don't think it's because of
> "slap_global_control: unrecognized control: 1.3.6.1.4.1.42.2.27.8.5.1"
> as I can see multiple lines before the crash.
>
> Is it possible that Slapd crashes if it tries to proxy data to a node
> while it shuts down? Maybe a special case where the bind was successful but
> the query fails?
Hi Kevin,
What OpenLDAP release are you running? I ask because there was a fix that
went into OpenLDAP 2.4.49 specifically for a crasher with back-ldap and
controls:
<https://bugs.openldap.org/show_bug.cgi?id=9076>
Regards,
Quanah
--
Quanah Gibson-Mount
Product Architect
Symas Corporation
Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
<http://www.symas.com>
macbookair configuration to connect to network personal folder
by Arnaud Gymnase
Hello all,
I have a project for a school to authenticate MacBooks against an OpenLDAP
server. The server part has been built using a ClearOS system (CentOS)
with OpenLDAP. I then configured the MacBook to authenticate against my
OpenLDAP server to open a session, and this works fine.
The next thing I'd like to do is configure an automatic connection to
the personal network folder. I tried to use the attribute
"NFSHomeDirectory" in several ways, but this didn't work.
Do you have an idea which attribute I could use to connect my user
(sambaHomepath?), and maybe how I could retrieve my username/password to
authenticate to it?
Thanks to all :)
Antw: [EXT] Slapd unexpectedly shutdown
by Ulrich Windl
>>> Kevin Olbrich <ko(a)sv01.de> wrote on 06.04.2020 at 23:00 in message
<15853_1586208569_5E8B9F39_15853_1729_1_CA+gLzy8cpURhV3Ti41im3Z=vRcq2VoEd0n_E+P8EKnfN4=wPQ(a)mail.gmail.com>:
> Hi!
>
> I'm experiencing an issue with Slapd 2.4.49 on Debian Buster.
> I use Slapd as a proxy / lb with two AD nodes behind it.
>
> If I reboot one of the AD nodes, everything is fine. As soon as I
> reboot the second one (while the first is back and available),
> OpenLDAP shuts down immediately:
> Apr 06 22:03:33 ldap-lb1.ldap.example.com slapd[22140]: Stopping
> OpenLDAP: slapd.
> Apr 06 22:03:33 ldap-lb1.ldap.example.com systemd[1]: slapd.service:
> Succeeded.
>
> There is no traffic during this time and it matches the exact time
> when node two is down.
>
> Debian still uses an init script for slapd but I did not find anything
> interesting in it.
>
> Is there a config setting that I missed in the docs that explains this
> behavior?
> As it's not crashing, HA in systemd won't catch this issue.
I'd suggest starting some tracing and logging. Something must be triggering that, and you must capture the "something" before you can get a solution.
>
> Kind regards
> Kevin
front end for openldap
by paulo bruck
Hi All
I have been using OpenLDAP for many years and I would like to say thanks to
all.
Is there a framework to use with OpenLDAP as a backend? Preferably based on
Python 80)
I looked at django-ldapdb, but the project is almost dead and does not have all
that I need.
Flask has only authentication... 8(
Any ideas?
Thanks in advance, and again, nice work 80))
--
Paulo Ricardo Bruck consultor
tel 011 3596-4881/4882 011 98140-9184 (TIM)
http://www.contatogs.com.br
http://www.protejasuarede.com.br
gpg AAA59989 at wwwkeys.us.pgp.net
Antw: [EXT] LMDB: sync'ing to disk and flash wear
by Ulrich Windl
>>> Alberto Mardegan <mardy(a)users.sourceforge.net> wrote on 06.04.2020 at 08:44
in message
<10444_1586183585_5E8B3DA1_10444_128_1_5863f87e-d26b-2943-9302-bf5bfc270e71@user.sourceforge.net>:
> Hi all!
> I'm trying to understand the impact that an LMDB database will have on
> the flash wear of an embedded device.
>
> In order to minimize disk writes, I'm keeping a DB open all the time,
> and I'd like to have the changes written into the physical storage only
> when mdb_env_sync() is called (near the end of my process lifetime).
> I'm opening the DB with the MDB_NOSYNC | MDB_WRITEMAP | MDB_NOMETASYNC
> flags, and if I strace the executable I see that lmdb behaves as
> expected: no writes happen until I call mdb_env_sync().
>
> However, the writes end up to the physical storage anyway, because the
> kernel reserves the right of flushing changes of an mmap'ed file to disk
> at any time (as per the documentation of MAP_SHARED). I don't exactly
> know if this happens because some other process is calling sync(), or
> which criteria the kernel follows, but as a matter of fact a write
> happens every few seconds.
>
> Looking at the code, it looks like it's impossible to instruct LMDB to
> use MAP_PRIVATE without modifying the code. But before exploring that
> solution, I'd like to be sure I understand the real behaviour of LMDB.
Doesn't MAP_PRIVATE imply that the map can't be written to disk?
>
> Do I understand correctly, that if during the lifetime of my process the
> same key is assigned different values, with mdb_txn_commit() being
> called each time, the same addresses in the shared memory area will be
> changed several times (leading to unnecessary re-writes of the same area
> of the flash storage)? Or is LMDB only operating in "append" mode, so
> that it tries to avoid writing the same area twice?
I think no DB system can implement a commit with just one write.
Why not put your DB in a RAM disk and then rsync it to flash when done?
Regards,
Ulrich
>
> Thanks in advance for any answer,
> Alberto
>
> --
> http://www.mardy.it - Geek in un lingua international
LMDB: sync'ing to disk and flash wear
by Alberto Mardegan
Hi all!
I'm trying to understand the impact that an LMDB database will have on
the flash wear of an embedded device.
In order to minimize disk writes, I'm keeping a DB open all the time,
and I'd like to have the changes written into the physical storage only
when mdb_env_sync() is called (near the end of my process lifetime).
I'm opening the DB with the MDB_NOSYNC | MDB_WRITEMAP | MDB_NOMETASYNC
flags, and if I strace the executable I see that lmdb behaves as
expected: no writes happen until I call mdb_env_sync().
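For reference, this is roughly what my test program does (the path, map size
and key/value below are placeholders, not the real application code):

    #include <stdio.h>
    #include <lmdb.h>

    int main(void)
    {
        MDB_env *env;
        MDB_dbi dbi;
        MDB_txn *txn;
        MDB_val key = { 3, "foo" }, val = { 3, "bar" };
        int rc;

        mdb_env_create(&env);
        mdb_env_set_mapsize(env, 10485760);   /* 10 MiB, placeholder */
        rc = mdb_env_open(env, "./testdb",
                          MDB_NOSYNC | MDB_WRITEMAP | MDB_NOMETASYNC, 0664);
        if (rc) { fprintf(stderr, "open: %s\n", mdb_strerror(rc)); return 1; }

        /* Commits during the process lifetime: with the flags above these
         * only touch the memory map; no explicit flush is issued here. */
        mdb_txn_begin(env, NULL, 0, &txn);
        mdb_dbi_open(txn, NULL, 0, &dbi);
        mdb_put(txn, dbi, &key, &val, 0);
        mdb_txn_commit(txn);

        /* Single explicit flush near the end of the process lifetime. */
        mdb_env_sync(env, 1);
        mdb_env_close(env);
        return 0;
    }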
However, the writes end up to the physical storage anyway, because the
kernel reserves the right of flushing changes of an mmap'ed file to disk
at any time (as per the documentation of MAP_SHARED). I don't exactly
know if this happens because some other process is calling sync(), or
which criteria the kernel follows, but as a matter of fact a write
happens every few seconds.
Looking at the code, it looks like it's impossible to instruct LMDB to
use MAP_PRIVATE without modifying the code. But before exploring that
solution, I'd like to be sure I understand the real behaviour of LMDB.
Do I understand correctly, that if during the lifetime of my process the
same key is assigned different values, with mdb_txn_commit() being
called each time, the same addresses in the shared memory area will be
changed several times (leading to unnecessary re-writes of the same area
of the flash storage)? Or is LMDB only operating in "append" mode, so
that it tries to avoid writing the same area twice?
Thanks in advance for any answer,
Alberto
--
http://www.mardy.it - Geek in un lingua international