OpenLDAP instance crashes
by Saurabh Lahoti
Dear all,
Today we received the error below in /var/log/messages when our OpenLDAP
instance crashed.
Aug 16 13:21:07 muledeer kernel: slapd[29253]: segfault at 0 ip
00007fbeddf3af09 sp 00007fb72effc480 error 4 in
uniquemember.so.0.0.0[7fbeddf3a000+2000]
Aug 16 13:21:08 muledeer abrt[24629]: Saved core dump of pid 16470
(/usr/app/ldap/openldap2.4.46/libexec/slapd) to
/var/spool/abrt/ccpp-2018-08-16-13:21:07-16470 (472973312 bytes)
Aug 16 13:21:08 muledeer abrtd: Directory 'ccpp-2018-08-16-13:21:07-16470'
creation detected
Aug 16 13:21:08 muledeer abrtd: Executable
'/usr/app/ldap/openldap2.4.46/libexec/slapd' doesn't belong to any package
and ProcessUnpackaged is set to 'no'
Aug 16 13:21:08 muledeer abrtd: 'post-create' on
'/var/spool/abrt/ccpp-2018-08-16-13:21:07-16470' exited with 1
Aug 16 13:21:08 muledeer abrtd: Deleting problem directory
'/var/spool/abrt/ccpp-2018-08-16-13:21:07-16470'
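As the log shows, abrtd threw the core dump away because slapd does not
come from a distro package and ProcessUnpackaged is set to 'no'. A
minimal sketch, assuming stock abrt paths, of how to keep the next dump
and pull a backtrace from it:

    # /etc/abrt/abrt-action-save-package-data.conf
    ProcessUnpackaged = yes

    # once a dump survives under /var/spool/abrt/ccpp-*:
    gdb /usr/app/ldap/openldap2.4.46/libexec/slapd \
        /var/spool/abrt/ccpp-<date>-<pid>/coredump
    (gdb) bt full

Since the faulting instruction pointer is inside uniquemember.so, the
backtrace should show whether the crash originates in that overlay.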
Could you please guide us in finding the probable root cause of this error?
----
Thanks & Kind Regards,
Saurabh LAHOTI.
Ideas enlighten Innovation!!!
Please consider the environment before printing this mail..!!
help to get our openldap updated and replicated
by admin@genome.arizona.edu
Hi all, I am roughly the 4th sysadmin for our organization, and our
OpenLDAP is old: version 2.4.40, the system version for CentOS 6.9.
There might also have been incorrect modifications to the slapd.d files,
since it was really difficult to update things. The olcRootDN was
somehow set to "cn=config", so I had to manually update it to the
Manager account and recompute the CRC32 and everything, but at least I
can make some updates now.
Anyway, I would like to get our installation updated to a current
version, as well as set up some sort of replication with our other
server, so that if one goes down our users can still log in and use our
applications, and I can still add and delete users. Perhaps a
multi-master config would be best? (Maybe also migrate the databases,
since they are in BDB format? Though that may just be unnecessary extra
work.) I tried to set up replication by following a guide, but was not
successful and actually made things worse for our daemon, so I had to
undo the changes for now. From what I've heard, 2.4.40 has some problems
with replication anyway.
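For reference, a minimal sketch of what a two-node multi-master (mirror
mode) setup looks like in cn=config, assuming hypothetical hostnames and
credentials and an mdb database at index {2} (adjust to your actual
backend and index). The same two stanzas go on both servers; each server
skips the stanza matching its own olcServerID URL:

    dn: cn=config
    changetype: modify
    replace: olcServerID
    olcServerID: 1 ldap://ldap1.example.com
    olcServerID: 2 ldap://ldap2.example.com

    dn: olcDatabase={2}mdb,cn=config
    changetype: modify
    add: olcSyncrepl
    olcSyncrepl: rid=001 provider=ldap://ldap1.example.com
      bindmethod=simple binddn="cn=Manager,dc=example,dc=com"
      credentials=secret searchbase="dc=example,dc=com"
      type=refreshAndPersist retry="30 5 300 3"
    olcSyncrepl: rid=002 provider=ldap://ldap2.example.com
      bindmethod=simple binddn="cn=Manager,dc=example,dc=com"
      credentials=secret searchbase="dc=example,dc=com"
      type=refreshAndPersist retry="30 5 300 3"
    -
    add: olcMirrorMode
    olcMirrorMode: TRUE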
First, to get OpenLDAP updated, would it be as simple as compiling the
new version and then updating the init script /etc/init.d/slapd to point
to the new binaries? I would stop slapd and take a backup of
/etc/openldap and /var/lib/ldap. Could I then just leave our current
config in /etc/openldap and our databases in /var/lib/ldap? I've already
built the new version and "make test" was successful, so I am ready to
proceed from there with your assistance and suggestions.
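For concreteness, a sketch of the sequence I have in mind (stock CentOS
paths; the slapcat database number may differ on your install):

    # Back up config and data as LDIF first; BDB files are not
    # guaranteed portable across OpenLDAP/BDB versions.
    slapcat -F /etc/openldap/slapd.d -n 0 -l config-backup.ldif
    slapcat -F /etc/openldap/slapd.d -n 1 -l data-backup.ldif
    service slapd stop
    cp -a /etc/openldap /etc/openldap.bak
    cp -a /var/lib/ldap /var/lib/ldap.bak
    # Edit /etc/init.d/slapd to exec the newly built slapd
    # (e.g. /usr/local/openldap/libexec/slapd), then:
    service slapd start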
Thanks,
--
Chandler
Arizona Genomics Institute
www.genome.arizona.edu
OpenLDAP Developers' Day Silver Jubilee
by Peter
Dear all,
something completely different:
Join Us Celebrating 20 Years of OpenLDAP
The community is coming together!
ON: October 8th, 2018
IN: Tübingen, Germany
LOCATION: Computing Center of the University of Tübingen
You are invited to join this forum, which is a great way to share and
discuss your ideas, achievements and visions with the community.
With this Call for Content we want to enable the community to continue
the professional discourse. We are looking for two kinds of
contributions: deployment success stories and reports on new development
activities.
This event brings together developers of OpenLDAP software, directory
researchers and other OpenLDAP community members interested in
discussing ongoing and future development efforts.
If you want to participate and present a topic, please contact
odd-silverjubilee(a)daasi.de before September 10th, 2018.
Meet the team, learn and network.
More info at:
https://daasi.de/en/2018/08/14/5th-openldap-developer-day-2018-20-years-a...
And please forward this to people connected with OpenLDAP.
Cheers,
Peter
--
_______________________________________________________________________
Peter Gietz (CEO)
DAASI International GmbH phone: +49 7071 407109-0
Europaplatz 3 Fax: +49 7071 407109-9
D-72072 Tübingen mail: peter.gietz(a)daasi.de
Germany Web: www.daasi.de
DAASI International GmbH, Tübingen
Geschäftsführer Peter Gietz, Amtsgericht Stuttgart HRB 382175
Directory Applications for Advanced Security and Information Management
_______________________________________________________________________
Re: Multiprocess LMDB transaction: resource not available
by Stefano Cossu
So, while perusing lmdb.h I saw this:
> Use an MDB_env* in the process which opened it, not after fork().
That was my problem; it had nothing to do with transactions. I need to
open the environment and hold a separate handle in each process, right?
Moving the mdb_env_create and mdb_env_open calls and the related
variables inside the get_() function got rid of the errors. In the dummy
code below that is obviously terribly expensive, but my real application
runs a pool of long-lived workers (an HTTP server), so the cost of
opening the environment is paid only once, at server start.
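In plain C the working pattern looks roughly like this - a sketch,
assuming the /tmp/test_mp environment from the script below already
exists and holds key 'a' (most error checks elided):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include "lmdb.h"

    /* Each child opens its own MDB_env *after* fork(), honoring the
     * lmdb.h rule quoted above. */
    static void worker(void)
    {
        MDB_env *env;
        MDB_txn *txn;
        MDB_dbi dbi;
        MDB_val key = { 1, "a" }, data;

        mdb_env_create(&env);
        mdb_env_open(env, "/tmp/test_mp", 0, 0644);
        mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
        mdb_dbi_open(txn, NULL, 0, &dbi);
        if (mdb_get(txn, dbi, &key, &data) == MDB_SUCCESS)
            printf("pid %d read %.*s\n", (int)getpid(),
                   (int)data.mv_size, (char *)data.mv_data);
        mdb_txn_abort(txn);   /* read-only txn: abort releases the slot */
        mdb_env_close(env);
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            if (fork() == 0) { worker(); _exit(0); }
        while (wait(NULL) > 0)
            ;
        return 0;
    }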
Thanks,
Stefano
On 08/19/2018 10:36 AM, Stefano Cossu wrote:
> Hello,
> I am writing a framework in Cython using multi-process access to an LMDB
> database. I ran into a "Resource not available" error.
>
> I isolated the problem in the following script (this is Cython calling
> the LMDB C API, so bear with the hybrid syntax):
>
>
> import time
> cimport cylmdb as lmdb # This is a Cython header mirroring lmdb.h
>
> import multiprocessing
> import threading
>
> cdef:
>     lmdb.MDB_env *env
>     lmdb.MDB_dbi dbi
>
>
> cdef void _check(int rc) except *:
>     if rc != lmdb.MDB_SUCCESS:
>         out_msg = 'LMDB Error ({}): {}'.format(
>             rc, lmdb.mdb_strerror(rc).decode())
>         raise RuntimeError(out_msg)
>
>
> cpdef void get_() except *:
>     cdef:
>         lmdb.MDB_txn *txn
>         lmdb.MDB_val key_v, data_v
>
>     _check(lmdb.mdb_txn_begin(env, NULL, lmdb.MDB_RDONLY, &txn))
>
>     key_v.mv_data = b'a'
>     key_v.mv_size = 1
>
>     _check(lmdb.mdb_get(txn, dbi, &key_v, &data_v))
>     print((<unsigned char *>data_v.mv_data)[:data_v.mv_size])
>     time.sleep(1)
>     _check(lmdb.mdb_txn_commit(txn))
>     print('Thread {} in process {} done.'.format(
>         threading.currentThread().getName(),
>         multiprocessing.current_process().name))
>
>
> def run():
>     cdef:
>         unsigned int flags = 0
>         #unsigned int flags = lmdb.MDB_NOTLS # I tried this too.
>         lmdb.MDB_txn *wtxn
>         lmdb.MDB_val key_v, data_v
>
>     # Set up environment.
>     _check(lmdb.mdb_env_create(&env))
>     _check(lmdb.mdb_env_set_maxreaders(env, 128))
>     _check(lmdb.mdb_env_open(env, '/tmp/test_mp', flags, 0o644))
>
>     # Create DB.
>     _check(lmdb.mdb_txn_begin(env, NULL, 0, &wtxn))
>     _check(lmdb.mdb_dbi_open(wtxn, NULL, lmdb.MDB_CREATE, &dbi))
>
>     # Write something.
>     key_v.mv_data = b'a'
>     key_v.mv_size = 1
>     ts = str(time.time()).encode()
>     data_v.mv_data = <unsigned char *>ts
>     data_v.mv_size = len(ts)
>     _check(lmdb.mdb_put(wtxn, dbi, &key_v, &data_v, 0))
>     _check(lmdb.mdb_txn_commit(wtxn))
>
>     print('Multiprocess jobs:')
>     for i in range(5):
>         multiprocessing.Process(target=get_).start()
>     # Env should be closed only after all processes return.
>     #lmdb.mdb_env_close(env)
>
>
> If I execute run(), the first process runs successfully, but apparently
> it's holding on to some resources that the other processes need.
>
> Multiprocess jobs:
> b'1534691578.4300401'
> Process Process-2:
> Traceback (most recent call last):
> File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
> self._target(*self._args, **self._kwargs)
> File "lakesuperior/sandbox/threading_poc.pyx", line 23, in
> lakesuperior.sandbox.threading_poc.get_
> cpdef void get_() except *:
> File "lakesuperior/sandbox/threading_poc.pyx", line 28, in
> lakesuperior.sandbox.threading_poc.get_
> _check(lmdb.mdb_txn_begin(env, NULL, lmdb.MDB_RDONLY, &txn))
> File "lakesuperior/sandbox/threading_poc.pyx", line 20, in
> lakesuperior.sandbox.threading_poc._check
> raise RuntimeError(out_msg)
> RuntimeError: LMDB Error (11): Resource temporarily unavailable
> Process Process-3:
> Traceback (most recent call last):
> File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
> self._target(*self._args, **self._kwargs)
> File "lakesuperior/sandbox/threading_poc.pyx", line 23, in
> lakesuperior.sandbox.threading_poc.get_
> cpdef void get_() except *:
> File "lakesuperior/sandbox/threading_poc.pyx", line 28, in
> lakesuperior.sandbox.threading_poc.get_
> _check(lmdb.mdb_txn_begin(env, NULL, lmdb.MDB_RDONLY, &txn))
> File "lakesuperior/sandbox/threading_poc.pyx", line 20, in
> lakesuperior.sandbox.threading_poc._check
> raise RuntimeError(out_msg)
> RuntimeError: LMDB Error (11): Resource temporarily unavailable
> Process Process-4:
>
> Process Process-5:
> Traceback (most recent call last):
> File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
> self._target(*self._args, **self._kwargs)
> File "lakesuperior/sandbox/threading_poc.pyx", line 23, in
> lakesuperior.sandbox.threading_poc.get_
> cpdef void get_() except *:
> Traceback (most recent call last):
> File "lakesuperior/sandbox/threading_poc.pyx", line 28, in
> lakesuperior.sandbox.threading_poc.get_
> _check(lmdb.mdb_txn_begin(env, NULL, lmdb.MDB_RDONLY, &txn))
> File "lakesuperior/sandbox/threading_poc.pyx", line 20, in
> lakesuperior.sandbox.threading_poc._check
> raise RuntimeError(out_msg)
> File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in
> _bootstrap
> self.run()
> File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
> self._target(*self._args, **self._kwargs)
> File "lakesuperior/sandbox/threading_poc.pyx", line 23, in
> lakesuperior.sandbox.threading_poc.get_
> cpdef void get_() except *:
> RuntimeError: LMDB Error (11): Resource temporarily unavailable
> File "lakesuperior/sandbox/threading_poc.pyx", line 28, in
> lakesuperior.sandbox.threading_poc.get_
> _check(lmdb.mdb_txn_begin(env, NULL, lmdb.MDB_RDONLY, &txn))
> File "lakesuperior/sandbox/threading_poc.pyx", line 20, in
> lakesuperior.sandbox.threading_poc._check
> raise RuntimeError(out_msg)
> RuntimeError: LMDB Error (11): Resource temporarily unavailable
>
>
> If I run the same function multiple times in the same process,
> everything is fine.
>
> Can someone point out what is wrong with this script?
>
> Thank you,
> Stefano
>
>
>
--
exitsts .not
exeunt hoall.
http://stefano.cossu.cc
Re: Possible transaction issue with LMDB
by Howard Chu
William Brown wrote:
> On Fri, 2018-08-17 at 07:33 +0100, Howard Chu wrote:
>> William Brown wrote:
>>> On Fri, 2018-08-17 at 07:06 +0100, Howard Chu wrote:
>>>>> I'm quite aware that it is COW - this issue is specific to COW trees.
>>>>> Transactions must be removed in order, they cannot be removed out of
>>>>> order. It is about how pages are reclaimed for the freelist
>>>>
>>>> Incorrect.
>>>
>>> We may have to "agree to disagree" then.
>>
>> Pure, utter nonsense. You're simply wrong, and at this point you're
>> spewing FUD.
>
> You know what - I made a mistake. I read "free list" when it should
> have been "pending free list", which makes much more sense but is
> poorly signaled in the naming conventions.
>
> I also misread your comment "Incorrect": you meant it about the
> reclaiming of the freelist, not about the ordering.
>
> Instead of trying to have a constructive discussion, I believe your
> responses here were quite hostile, and their "short" nature has led to
> further miscommunication. My mistake was answering too quickly. I have
> removed myself from the list because I really just don't want to be
> involved in this style of discussion in the future.
You claimed to have actual code showing a misbehavior in LMDB when the
described misbehavior is impossible.
That's not a misunderstanding or a mistake - that's pure fabrication.
Dishonesty is absolutely not welcome. Bye.
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Possible transaction issue with LMDB
by William Brown
Hi there,
While doing some integration testing of LMDB, I noticed that there may
be an issue with out-of-order transaction handling.
The scenario is:
Open Read TXN A
Open Write TXN X, and change values of the DB
Commit X
Open Read TXN B
Open Write TXN Y, and change values of the DB
Commit Y
Open Read TXN C
Abort/Close TXN B.
At this point, because of the page touches between A -> B and B -> C, B
now believes that the pages of A are available for the "last time", as
they were all subsequently copied for TXN C. The pages of A are then
added to the freelist when B closes. When TXN A is later read from, the
data may have been altered by subsequent writes, since LMDB attempts to
reuse previously allocated pages first.
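A minimal C sketch of that sequence (hypothetical environment path,
which must already exist as a directory; error checks elided), in case
it helps to reproduce what I am describing:

    #include <string.h>
    #include "lmdb.h"

    /* Seed or update key "k" in its own write transaction. */
    static void put_val(MDB_env *env, MDB_dbi dbi, const char *v)
    {
        MDB_txn *w;
        MDB_val key = { 1, "k" }, data = { strlen(v), (void *)v };

        mdb_txn_begin(env, NULL, 0, &w);
        mdb_put(w, dbi, &key, &data, 0);
        mdb_txn_commit(w);
    }

    int main(void)
    {
        MDB_env *env;
        MDB_txn *t, *a, *b, *c;
        MDB_dbi dbi;
        MDB_val key = { 1, "k" }, data;

        mdb_env_create(&env);
        mdb_env_open(env, "/tmp/txn_order_test", 0, 0644);
        mdb_txn_begin(env, NULL, 0, &t);
        mdb_dbi_open(t, NULL, 0, &dbi);
        mdb_txn_commit(t);
        put_val(env, dbi, "0");

        mdb_txn_begin(env, NULL, MDB_RDONLY, &a);  /* Open Read TXN A  */
        put_val(env, dbi, "x");                    /* Write TXN X      */
        mdb_txn_begin(env, NULL, MDB_RDONLY, &b);  /* Open Read TXN B  */
        put_val(env, dbi, "y");                    /* Write TXN Y      */
        mdb_txn_begin(env, NULL, MDB_RDONLY, &c);  /* Open Read TXN C  */
        mdb_txn_abort(b);                          /* Close B out of order */

        /* If the pages of A's snapshot were reclaimed when B closed,
         * this read would see altered data; A should still see "0". */
        mdb_get(a, dbi, &key, &data);
        mdb_txn_abort(a);
        mdb_txn_abort(c);
        mdb_env_close(env);
        return 0;
    }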
This situation is more likely to arise with large batch writes, but
could also manifest with smaller series of writes. It would be a silent
issue, since the overwritten pages may appear valid, and it could cause
data to "silently vanish" inside read transaction A, leading to
unpredictable results.
I hope that this report helps you to diagnose and resolve the issue.
--
Sincerely,
William
Re: [LMDB] What pointer is returned with combination of MDB_WRITEMAP and MDB_RESERVE?
by Victor Baybekov
Hello,
Working with overflow pages directly via pointers outside write
transactions works great, and it helps that they do not move "by design"
in current versions, as discussed in this thread.
I have two related scenarios that would give a substantial performance
boost in my case.
*The first one* is updating a value in place via a pointer from an
aborted write transaction. If I
1) use MDB_WRITEMAP,
2) from a **write** transaction find a record (which is small and not in
an overflow page),
3) modify a part of its value (for duplicates this part is not used in
the compare function) directly via the MDB_val data pointer (e.g. an
interlocked increment or compare-and-swap),
4) and **abort** the transaction,
then readers see the updated value via normal read transactions later.
Since I do the direct updates from inside a write transaction, all other
writers should be locked out until I exit the transaction (abort, in
this case), and no pages should move since the transaction is aborted.
Is this correct? Does this currently work "by design" or "by accident"?
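A sketch of this first scenario in C, assuming an environment already
opened with MDB_WRITEMAP and values that begin with an aligned 64-bit
counter (error checks elided):

    #include <stdatomic.h>
    #include <stdint.h>
    #include "lmdb.h"

    /* Find a record inside a write transaction, bump a counter at the
     * start of its value directly through the returned pointer, then
     * abort. The write txn locks out other writers while we poke at
     * the map, and aborting should leave all pages where they are. */
    void bump_stage(MDB_env *env, MDB_dbi dbi, MDB_val *key)
    {
        MDB_txn *txn;
        MDB_val data;

        mdb_txn_begin(env, NULL, 0, &txn);
        mdb_get(txn, dbi, key, &data);  /* points into the writable map */
        atomic_fetch_add((_Atomic uint64_t *)data.mv_data, 1);
        mdb_txn_abort(txn);             /* abort: no pages touched by LMDB */
    }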
*The second one* is about updating values in place from read
transactions. If I
1) use MDB_WRITEMAP,
2) open a database with MDB_INTEGERKEY (and could use a dedicated
environment with a single DB if that changes the answer),
3) add values to the DB *only* using MDB_APPEND | MDB_NOOVERWRITE,
4) modify a part of a value directly via the MDB_val data pointer,
is it possible that the page from the read transaction is replaced with
a new one if there is a parallel write transaction?
There is a quote from Howard on GitHub
(https://github.com/lmdbjava/benchmarks/issues/9#issuecomment-354184989):
"When we do sequential inserts using MDB_APPEND, there is *no page split
at all* - we just allocate a new page and fill it, sequentially." Does
this mean that if there are no page splits then existing pages do not
move either, and that it is "safe" to use pointers outside of write
transactions, as is the case with the overflow pages?
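And the second scenario, reusing the assumptions of the sketch above but
from a read-only transaction - whether the page behind this pointer can
move while a parallel writer runs is exactly my question:

    /* Same in-place bump, but from a read transaction. This relies
     * entirely on the assumption that MDB_APPEND-built pages are not
     * moved by concurrent writers. */
    void bump_stage_ro(MDB_env *env, MDB_dbi dbi, MDB_val *key)
    {
        MDB_txn *txn;
        MDB_val data;

        mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
        mdb_get(txn, dbi, key, &data);
        atomic_fetch_add((_Atomic uint64_t *)data.mv_data, 1);
        mdb_txn_abort(txn);
    }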
In both cases I update values of a struct that indicates e.g. some
lifecycle stage of an object the LMDB record refers to, and the stage
transitions are idempotent. If a direct pointer write doesn't make it to
disk due to a system failure, subsequent readers (workers) will see an
older stage and repeat the stage transition.
Therefore missed direct writes do not break the application logic; I
only care about physical corruption of the entire DB. If I update the
values in place inside read transactions and the page becomes stale,
this should not corrupt the DB, since the old page goes to the free list
only after the read transaction is finished, so this "hack" should not
break the DB. Missed writes would then be the norm rather than a special
case on OS failure. But if pages do not move, all these "soft" updates
could be done in parallel and be very fast.
Unfortunately I cannot answer this myself from trying to read the mdb.c
file. In the second scenario I'm specifically concerned about what
happens when the DB becomes large and the tree needs rebalancing. At
least in that case some pages need to move, but does the rebalancing
replace or split existing pages?
Thanks & best regards,
Victor
On Fri, Oct 30, 2015 at 9:26 PM, Howard Chu <hyc(a)symas.com> wrote:
> Victor Baybekov wrote:
>
>> Thanks a lot! My proof-of-concept code works OK.
>>
>> I do not understand all subtle details of mmap reliability, could you
>> please
>> help with these two:
>>
>> If I write data to a pointer to an opaque blob as discussed above, and my
>> process crashes before mdb_env_sync, but OS doesn't crash - will that
>> data be
>> secure in the mmap file?
>>
>
> Of course. The OS owns the memory, it doesn't matter if your process
> crashes.
>
>> Also, am I correct that mdb_env_sync synchronizes all dirty pages in
>> the mmap file as seen by the file system, regardless of how they were
>> modified - either via the LMDB API or via direct pointer writes?
>>
>
> Yes.
>
> As for "you could at least set a callback to notify you that a block has
>> moved" - if that is implemented, it would be nice to have a notification
>> /before/ a block is moved (with old and new address, so that right after
>> the
>> callback it is OK to use the new address), otherwise this non-intended but
>> convenient use of LMDB won't work anymore.
>>
>
> "right after the callback it is OK to use the new address" - that's the
> point of the callback, it's job is to make the new address valid. So yes,
> when it returns, you use the new address.
>
>>
>>
>> Best regards,
>> Victor
>>
>>
>>
>> On Sat, Oct 3, 2015 at 1:27 AM, Howard Chu <hyc(a)symas.com> wrote:
>>
>> Howard Chu wrote:
>>
>> Victor Baybekov wrote:
>>
>> Thank you! I understand this copy-on-write behavior, but am
>> interested if I
>> could control it a little. What if I use records that are
>> always
>> much bigger
>> than a single page, e.g. 100 kb with 4kb pages, and make sure
>> that
>> a record is
>> never updated (via LMDB means) during a lifetime of an
>> environment, - is there
>> any scenario that the location of such a big record could be
>> changed during a
>> lifetime of an environment, without updating the record?
>>
>>
>> At this point in time, no, if you don't update a large record
>> there is no
>> reason that it will move. That is not to say that this won't
>> change in the
>> future. The documentation tells you what promises we are willing
>> to make.
>> Relying on any non-documented behavior is your own responsibility.
>>
>>
>> Note that the relocation functions in LMDB are intended to accommodate
>> blocks being moved around. The actual guts of that API haven't been
>> implemented, but probably in 1.x we'll flesh them out. Given that
>> support,
>> you could at least set a callback to notify you that a block has
>> moved.
>> But currently, overflow pages don't move if they're not modified.
>>
>>
>>
>>
>> On Fri, Oct 2, 2015 at 4:38 PM, Howard Chu <hyc(a)symas.com> wrote:
>>
>> Victor Baybekov wrote:
>>
>> Hi,
>>
>> Docs for MDB_RESERVE say that a returned pointer to
>> the
>> reserved
>> space is
>> valid "before the next update operation or the
>> transaction ends." Docs
>> for MDB_WRITEMAP say that it "writes directly to the
>> mmap
>> instead of
>> using
>> malloc for pages." Does combining the two options
>> return
>> a pointer
>> directly to
>> a place in a mmap
>>
>>
>> Yes.
>>
>> so that this pointer could be used after a
>> transaction ends
>> or after the next update?
>>
>>
>> No.
>>
>> Longer answer: maybe.
>>
>> Full answer: LMDB is copy-on-write. If you update another
>> record on the
>> same page, in a later transaction, the contents of that
>> page
>> will be
>> copied to a new page and the original page will go onto
>> the
>> freelist. In
>> that case, the pointer you got must not be used again.
>>
>> If you don't directly update that page and cause it to be
>> copied, then you
>> might get lucky and be able to use the pointer for a
>> while.
>> It all depends
>> on what other modifications you do and how they affect
>> that
>> node or
>> neighboring nodes.
>>
>>
>> I have a use case where I want to somewhat abuse LMDB
>> safety for
>> convenience.
>> If I could get a pointer to a place inside a mmap I
>> could
>> work with
>> LMDB value
>> as opaque blob or as a region inside the single big
>> mmap.
>> This could
>> be more
>> convenient than creating and opening hundreds of
>> temporary memory
>> mapped files
>> and keeping open handles to them. For example, Aeron
>> terms could be
>> stored
>> like this: a stream id per an LMDB db and a term id
>> for a
>> key in the
>> db.
>>
>>
>> Thanks!
>> Victor
>>
>>
>
> --
> -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
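To make the MDB_RESERVE pattern from the quoted exchange concrete, a
minimal sketch (hypothetical key and size; error checks elided). Under
MDB_WRITEMAP the reserved pointer points directly into the mmap, but per
the answers above it is only valid until the next update operation or
the end of the transaction:

    #include <string.h>
    #include "lmdb.h"

    /* Reserve space for a ~100 KB value (forcing overflow pages) and
     * fill it in place through the pointer LMDB hands back. */
    void put_reserved(MDB_env *env, MDB_dbi dbi)
    {
        MDB_txn *txn;
        MDB_val key = { 3, "key" };
        MDB_val data;

        data.mv_size = 100 * 1024;
        mdb_txn_begin(env, NULL, 0, &txn);
        mdb_put(txn, dbi, &key, &data, MDB_RESERVE); /* sets data.mv_data */
        memset(data.mv_data, 0, data.mv_size);       /* write the blob   */
        mdb_txn_commit(txn);
    }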
permissions replication
by Miroslav Misek
Hi,
I am setting up master-slave replication for our off-site office, so
that it can authenticate against LDAP even during internet connectivity
issues.
Replication itself is working without problems, but it replicates only
the data, not the olcAccess attributes on the database, so I have to set
them manually.
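"Manually" means re-applying something along these lines on the consumer
(hypothetical database index and example rules):

    dn: olcDatabase={2}mdb,cn=config
    changetype: modify
    replace: olcAccess
    olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
    olcAccess: {1}to * by * read

loaded with, e.g., ldapmodify -Y EXTERNAL -H ldapi:/// -f acls.ldif.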
Is there any way to replicate those attributes too?
The only way I have found is master-master replication of the cn=config
database, and that is not usable in our environment: the off-site office
doesn't have a public IP, and it is better for me to keep this LDAP
instance read-only.
Thank you,
Miroslav Misek
Re: Search memberOf
by Arianna Milazzo
Ok, I understand that it isn't supported, but at the moment I can't try
other solutions.
And since, aside from that filter, the rest works, I don't want to give
up like that.
In fact, if I search for the following values (that is, on the groups):
Search base: cn=groupname,ou=group,dc=pigreco,dc=it
Filter: (member=cn=Name Surname,ou=people,dc=pigreco,dc=it)
I get a result if Name Surname is part of the groupname group
If I search
Search base: dc=pigreco,dc=it
Filter: (member=cn=Name Surname,ou=people,dc=pigreco,dc=it)
I get the list of groups that Name Surname belongs to
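As a command line, that second (working) search is roughly (hypothetical
server URI):

    ldapsearch -x -H ldap://localhost -b 'dc=pigreco,dc=it' \
        '(member=cn=Name Surname,ou=people,dc=pigreco,dc=it)' dn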
But with this (that is, on the people):
Search base: dc=pigreco,dc=it
Filter: (memberOf=cn=groupname,ou=group,dc=pigreco,dc=it)
I have no result, and in the log I read: get_ava: illegal value for
attributeType memberof
Obviously the same thing happens with:
(&
(uid=n.surname)
(memberOf=cn=groupname,ou=group,dc=pigreco,dc=it)
)
:(
On the "groups" it works! On the "people" it doesn't (get_ava: illegal
value for attributeType memberof).
It's frustrating!
2018-08-07 21:35 GMT+02:00 Quanah Gibson-Mount <quanah(a)symas.com>:
> --On Tuesday, August 07, 2018 1:30 PM -0700 Quanah Gibson-Mount <
> quanah(a)symas.com> wrote:
>
> --On Tuesday, August 07, 2018 11:23 AM +0200 Arianna Milazzo
>> <arianna(a)ariannamicrochip.it> wrote:
>> Trying to force LDAP
>> functionality with back-sql is going to work well as a path to pursue.
>>
>
> *is not going to work well.
>
> --Quanah
>
>
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
>
>