https://bugs.openldap.org/show_bug.cgi?id=8250
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
CC            |                 |scott(a)gameranger.com
--- Comment #4 from Howard Chu <hyc(a)openldap.org> ---
*** Issue 8010 has been marked as a duplicate of this issue. ***
https://bugs.openldap.org/show_bug.cgi?id=8124
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
Status        |UNCONFIRMED      |RESOLVED
Resolution    |---              |DUPLICATE
--- Comment #3 from Howard Chu <hyc(a)openldap.org> ---
*** This issue has been marked as a duplicate of issue 7980 ***
https://bugs.openldap.org/show_bug.cgi?id=7980
--- Comment #2 from Howard Chu <hyc(a)openldap.org> ---
*** Issue 8124 has been marked as a duplicate of this issue. ***
https://bugs.openldap.org/show_bug.cgi?id=7980
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
Resolution    |---              |INVALID
Status        |UNCONFIRMED      |RESOLVED
--- Comment #1 from Howard Chu <hyc(a)openldap.org> ---
Not needed. Pass additional state using the MDB_val *a pointer.
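For illustration, a minimal sketch of that suggestion, assuming the standard
MDB_cmp_func signature where *a is always the key the application passed in;
the struct and field names below are illustrative, not LMDB API:

    #include <string.h>
    #include <stdint.h>
    #include "lmdb.h"

    /* Convention for this sketch: stored keys are just the 8 key bytes,
     * while a lookup key may be the larger struct, so the comparator can
     * recover extra state from a->mv_data. */
    struct ctx_key {
        uint8_t  key[8];   /* real key material */
        void    *state;    /* extra context, present only in lookup keys */
    };

    static int ctx_cmp(const MDB_val *a, const MDB_val *b)
    {
        const struct ctx_key *ck = a->mv_data;  /* caller's key */
        /* ck->state is usable here when a->mv_size == sizeof *ck */
        return memcmp(ck->key, b->mv_data, sizeof ck->key);
    }

    /* installed with: mdb_set_compare(txn, dbi, ctx_cmp); */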
https://bugs.openldap.org/show_bug.cgi?id=7853
--- Comment #4 from Howard Chu <hyc(a)openldap.org> ---
(In reply to dw(a)hmmz.org from comment #0)
> Full_Name: David Wilson
> Version: LMDB 0.9.11
> OS:
> URL:
> https://raw.githubusercontent.com/dw/py-lmdb/master/misc/readers_mrb_env.patch
> Submission from: (NULL) (178.238.153.20)
>
>
> Currently if a user (wilfully or accidentally, say, through composition of
> third party libs) opens the same LMDB environment twice within a process, and
> maintains active read transactions at the time one mdb_env_close() is called,
> all reader slots will be deallocated in all environments due to the logic
> around line 4253 that unconditionally clears reader slots based on PID.
>
> I'd like to avoid this in py-lmdb, and it seems there are a few alternatives to
> fix it:
> 4) Modify lock.mdb to include MDB_env* address within the process, and update
> mdb_env_close() to invalidate only readers associated with the environment
> being closed. I dislike using the MDB_env* as an opaque publicly visible
> cookie, but perhaps the stored field could be reused for other things in
> future.
I was looking at this approach, which initially seems harmless. But
unfortunately it does nothing to address the fact that the fcntl lock on the
lock file will still go away on the first env_close() call. If there are
no other processes with the env open, then the next process that opens
the env will re-init the lock file, because it doesn't know about the
existing process. At that point the existing process is hosed anyway.
So the question is, if a process has made the mistake of opening the same env
twice, what is the anticipated lifetime of each env instance? If the code was
written according to doc guidelines, then both env instances should only be
closed after all processing is done and just before program exit. In that case
it doesn't really matter what happens to the reader table or the fcntl lock.
But if the two instances have drastically different lifetimes, then all bets
are off.
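For concreteness, a minimal repro of the double-open hazard under discussion
might look like the following (path and error handling elided; this is the
usage pattern the docs warn against, not an endorsement of it):

    #include "lmdb.h"

    int main(void)
    {
        MDB_env *env1, *env2;
        MDB_txn *rtxn;

        /* Two independent handles on the same environment, one process */
        mdb_env_create(&env1);
        mdb_env_open(env1, "./testdb", 0, 0664);
        mdb_env_create(&env2);
        mdb_env_open(env2, "./testdb", 0, 0664);

        /* Takes a reader slot registered under this PID */
        mdb_txn_begin(env1, NULL, MDB_RDONLY, &rtxn);

        /* Closing any handle releases this process's fcntl lock on the
         * lock file and, per the behavior described above, clears this
         * PID's reader slots, including the one rtxn still holds */
        mdb_env_close(env2);

        mdb_txn_abort(rtxn);
        mdb_env_close(env1);
        return 0;
    }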
https://bugs.openldap.org/show_bug.cgi?id=7842
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
Status        |UNCONFIRMED      |RESOLVED
Resolution    |---              |DUPLICATE
--- Comment #3 from Howard Chu <hyc(a)openldap.org> ---
*** This issue has been marked as a duplicate of issue 7969 ***
https://bugs.openldap.org/show_bug.cgi?id=7969
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
CC            |                 |rsbx(a)acm.org
--- Comment #18 from Howard Chu <hyc(a)openldap.org> ---
*** Issue 7842 has been marked as a duplicate of this issue. ***
https://bugs.openldap.org/show_bug.cgi?id=7969
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
Status        |UNCONFIRMED      |RESOLVED
Resolution    |---              |FIXED
--- Comment #17 from Howard Chu <hyc(a)openldap.org> ---
Patch committed, fix released in LMDB 0.9.15
https://bugs.openldap.org/show_bug.cgi?id=7668
Howard Chu <hyc(a)openldap.org> changed:
What          |Removed          |Added
----------------------------------------------------------------------------
Status        |UNCONFIRMED      |RESOLVED
Resolution    |---              |TEST
--- Comment #2 from Howard Chu <hyc(a)openldap.org> ---
The MDB_PREVSNAPSHOT flag was added for LMDB 1.0 to allow opening the env
with the previous meta page instead of the current one.
mdb_env_set_checksum() was added for LMDB 1.0 to allow specifying per-page
checksums.
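A hedged usage sketch of the first of these (error handling omitted;
mdb_env_set_checksum() is not shown since its exact signature isn't given
here):

    #include "lmdb.h"

    int main(void)
    {
        MDB_env *env;
        mdb_env_create(&env);
        /* MDB_PREVSNAPSHOT: start from the previous meta page rather
         * than the newest one */
        int rc = mdb_env_open(env, "./db", MDB_PREVSNAPSHOT, 0664);
        mdb_env_close(env);
        return rc;
    }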
https://bugs.openldap.org/show_bug.cgi?id=7165
--- Comment #5 from Howard Chu <hyc(a)openldap.org> ---
Most of this bug appears to be obsolete.
(In reply to Hallvard Furuseth from comment #0)
> Full_Name: Hallvard B Furuseth
> Version: master, d861910f55cb1beb5e443caa7e961ed760074352
> OS: Linux x86_64
> URL:
> Submission from: (NULL) (195.1.106.125)
> Submitted by: hallvard
>
>
> Aborting an MDB process can break MDB if another MDB process is
> running, because then locks/reader table info are not reset.
Using mdb_stat -rr will purge stale readers. I suppose we could
tweak back-mdb to also do a reader purge if it ever gets a MDB_READERS_FULL
error from mdb_txn_begin.
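A sketch of that purge-and-retry idea, using the public mdb_reader_check()
API (which is what mdb_stat -rr uses under the hood):

    #include "lmdb.h"

    static int begin_read_txn(MDB_env *env, MDB_txn **txn)
    {
        int rc = mdb_txn_begin(env, NULL, MDB_RDONLY, txn);
        if (rc == MDB_READERS_FULL) {
            int dead = 0;
            /* clears slots belonging to processes that no longer exist */
            mdb_reader_check(env, &dead);
            if (dead > 0)
                rc = mdb_txn_begin(env, NULL, MDB_RDONLY, txn);
        }
        return rc;
    }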
>
> The problems below go away when the last slap process terminates
> so the lockfile can be reset. Sometimes kill -KILL is needed.
>
>
> ==== lock.conf ====
> include servers/slapd/schema/core.schema
> database mdb
> suffix o=hi
> directory lock.dir
>
>
> (A) If a process holding a shared mutex dies, other processes' ops
> on the mutex can hang/fail:
Robust mutex support was added in a2ac10107e2fb845c4a38a339239063ec4407d84 in
2014.
>
> rm -rf lock.dir; mkdir lock.dir
> servers/slapd/slapd -f lock.conf -h ldapi://ldapi -d0 &
> MDB_KILL_R=true servers/slapd/slapd -Tcat -f lock.conf
> ldapsearch -xLLH ldapi://ldapi -b o=hi l=x 1.1
> ^C (kills ldapsearch which is waiting for results)
> kill %% (fails to kill slapd)
> kill -KILL %%
>
> The Posix fix, robust mutexes, is outlined below. With _WIN32,
> check return code WAIT_ABANDONED. __APPLE__ Posix semaphores:
> Don't know. SysV semaphores have SEM_UNDO to cancel the process'
> effect if the process dies, but the peers do not seem to be
> informed that this happened.
>
>     if ((rc = pthread_mutexattr_init( &mattr )) ||
>         (rc = pthread_mutexattr_setpshared( &mattr, PTHREAD_PROCESS_SHARED )) ||
>         (rc = pthread_mutexattr_setrobust( &mattr, PTHREAD_MUTEX_ROBUST )) ||
>         (rc = pthread_mutex_init( &mutex, &mattr )))
>         return rc;
>     ...
>     switch (pthread_mutex_lock( &mutex )) {
>     case 0:
>         break;
>     case EOWNERDEAD:
>         /* repair the possibly half-updated data protected by <mutex> */
>         if (successful repair) {
>             pthread_mutex_consistent( &mutex );
>             break;
>         }
>         /* With Posix mutexes, make future use of <mutex> return failure: */
>         pthread_mutex_unlock( &mutex );
>         /* FALLTHRU */
>     default:
>         return MDB_PANIC;
>     }
>     ...
>     pthread_mutex_unlock( &mutex );
>
>
> BTW, the above also happens with MDB_KILL_W=true slapcat followed
> by ldapadd, dunno why slapcat uses a txn without MDB_TXN_RDONLY.
slapcat already opens the env RDONLY. Commit
3e47e825fd61e1bff001c01f40d70263c89ffabd in 2012 patched it to also create
the txn RDONLY.
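The resulting pattern, for reference (a sketch; the function name and error
handling are illustrative):

    #include "lmdb.h"

    static int open_readonly(const char *path, MDB_env **envp, MDB_txn **txnp)
    {
        int rc = mdb_env_create(envp);
        if (rc == 0)
            rc = mdb_env_open(*envp, path, MDB_RDONLY, 0664); /* env RDONLY */
        if (rc == 0)
            rc = mdb_txn_begin(*envp, NULL, MDB_RDONLY, txnp); /* txn RDONLY */
        return rc;
    }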
> In this case, raise(SIGINT) is sufficient. With MDB_KILL_R, that
> does not kill slapd and we need SIGKILL - it catches SIGINT I
> guess. Don't know if that difference is intentional.
>
>
> (B1) Repeated kill -KILL slap<tool> while slapd is running can use
> up the reader table since it does not clear its table slots,
> making the db unusable.
>
> (B2) slapcat reports no error after it fails to read the db due to a
> full reader table; it exits with success status.
Unable to reproduce; dunno when this was fixed, but it returns
an error code now.
>
> (B3) After one "kill -KILL slap<tool>" while slapd is running, I
> imagine the stale slot which does not go away can prevent freelist
> reuse so the db grows quickly. But I have not seen that.
Depends on timing I suppose; it's possible that there was no
active read txn at the moment, in which case the stale reader slot
has no effect on page reuse.
>
> bash$ rm -rf lock.dir; mkdir lock.dir
> bash$ servers/slapd/slapd -f lock.conf -h ldapi://ldapi -d0 &
> bash$ for (( i=0; i<130; i++)) {
> MDB_KILL_UR=true servers/slapd/slapd -Tcat -f lock.conf
> } > /dev/null
> bash$ servers/slapd/slapd -Tcat -f lock.conf -d-1 2>cat.out
> (success result, no output, no errors in the log)
> bash$ ldapsearch -xLLH ldapi://ldapi -b o=hi l=x 1.1
> version: 1
>
> [R..RLOCKED..R1] Other (e.g., implementation specific) error (80)
> Additional information: internal error
> bash$
>
> Maybe it's easier to have a per-process reader table after all,
> and a shared table with just the oldest txn per process.
>
> Or, each transaction could have an exclusive file region lock.
> A writer could try to grab the oldest txn's region lock if that
> belongs to another process, and clear the txn if successful.
> A reader, before giving up, would just search for another pid's
> txn whose lock it can grab.
The current code uses an fcntl lock to detect process liveness.
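Roughly, the liveness test works like this (a sketch in the spirit of the
mechanism, not the actual mdb.c code; offsets and fd handling are
illustrative): each live process holds a byte lock at an offset derived from
its PID, and F_GETLK reveals whether anyone still holds it.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int pid_alive(int lockfd, pid_t pid)
    {
        struct flock lk = {0};
        lk.l_type   = F_WRLCK;
        lk.l_whence = SEEK_SET;
        lk.l_start  = pid;     /* one byte per PID */
        lk.l_len    = 1;
        if (fcntl(lockfd, F_GETLK, &lk) < 0)
            return -1;
        /* F_UNLCK means no process holds the byte: that PID is gone */
        return lk.l_type != F_UNLCK;
    }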
>
> Except since the system reuses pids, the ID would either have to
> be more exclusive (e.g. pid + mdb_env_open() time) or mdb_env_open
> must clear away reader table entries with its own pid.
>
> Speaking of which, I don't see the use of the repeated getpid()
> calls. Should be enough to store the pid in the MDB_env. fcntl
> record locks do not survive fork(), so an MDB_env doesn't either.
> And the thread ID is unused, and must be reset with memset, not
> 'mr_tid = 0' since pthread_t can be a struct.
>
The above is no longer true.
>
> (C) Mentioned earlier: Startup can fail due to a race condition,
> when another process starts MDB at the same time and then aborts:
> - Process 1 starts MDB, gets write lock in mdb_env_setup_locks().
> - Process 2 starts MDB, awaits read lock since write lock is taken.
> - Process 1 dies.
> - Process 2 gets the read lock and proceeds, but with an uninitialized
> lockfile since it expects that process 1 initialized it.
Probably still true, but already doc'd in Caveats.
>
> Fix: An extra exclusive lock around the current lock setup code.
>
> kill slapd process, if any
> rm -rf lock.dir; mkdir lock.dir
> MDB_RACE=true servers/slapd/slapd -Tcat -f lock.conf & sleep 1; \
> servers/slapd/slapd -Tcat -f lock.conf
>
> In addition, the above dumps core in mdb_env_close:
> for (i=0; i<env->me_txns->mti_numreaders; i++)
> where env->me_txns == (MDB_txninfo *) 0xffffffffffffffff.
>
>
> (D) And of course ^Z when holding a mutex can block other processes'
> threads, but I suppose BDB has the same problem.
>
> DB_SUSPEND=true servers/slapd/slapd -Tadd -f lock.conf < /dev/null
> servers/slapd/slapd -Tadd -f lock.conf < /dev/null
> (hanging...) ^C
> kill %%
>
> The fix I can see for now is "don't do that" in the doc. MDB
> could in addition use pthread_mutex_timedlock() so slapd at least
> can keep running and can be killed without kill -KILL. Final fix
> = a lock-free library, when a useful portable one emerges.
At this point the only potential action I see is to add a check for
MDB_READERS_FULL as noted at the beginning of this reply.