Howard Chu writes:
> Hallvard B Furuseth wrote:
>> hyc(a)symas.com writes:
>>> h.b.furuseth(a)usit.uio.no wrote:
>>>> * When a thread is finishing, make it go active for context_reset().
>>>> Should make slapi module destructors safer, and I don't see that it
>>>> can hurt.
>>> I don't understand this.
>>
>> Let me dig a bit back in the ITS...
>> "context_reset() can call connection_fake_destroy(), which via
>> slapi_int_free_object_extensions() calls slapi module destructors,
>> which can do who-knows-what."
> context_reset() can only be called from the main thread. It can (must)
> never be called by anything running from the pool.
It's called from pool_wrapper(). pool_wrapper() could go active so
there can be no pause while that context_reset() call is running.
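
To make "go active for context_reset()" concrete, here is a rough sketch
of the intent only - mini_pool, mini_worker_exit() etc. are invented names,
not the real libldap_r code, and a real version would also have to wait out
a pause that is already fully established before going active:

#include <pthread.h>

struct mini_pool {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int             active;   /* threads currently counted as working */
};

void mini_context_reset(void *ctx)
{
    (void) ctx;   /* would run slapi per-thread destructors - who knows what */
}

/* Tail of a worker like pool_wrapper(), about to exit. */
void mini_worker_exit(struct mini_pool *pool, void *ctx)
{
    pthread_mutex_lock(&pool->mutex);
    pool->active++;                   /* go active for context_reset() */
    pthread_mutex_unlock(&pool->mutex);

    mini_context_reset(ctx);          /* destructors run as "active" work */

    pthread_mutex_lock(&pool->mutex);
    pool->active--;                   /* only now may a pause complete */
    pthread_cond_broadcast(&pool->cond);
    pthread_mutex_unlock(&pool->mutex);
}

/* A pause waits until no thread counts as active. */
void mini_pause(struct mini_pool *pool)
{
    pthread_mutex_lock(&pool->mutex);
    while (pool->active > 0)
        pthread_cond_wait(&pool->cond, &pool->mutex);
    pthread_mutex_unlock(&pool->mutex);
}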
>>>> * Scheduling: If several threads call pool_pause(), (...)
(need to check out how it should work...)
>>>> - pool_context() breaks if several ldap_pvt_thread_t bit patterns can
>>>> represent the same thread ID: TID_HASH() would give different hashes
>>>> for the same thread, and pool_context() stops searching when it hits
>>>> a slot with ctx == NULL. (My previous bug report was a bit wrong.)
>>>
>>> How can a single thread ID be represented by multiple bit patterns?
>>
>> A struct/union with padding bytes seems the least unlikely possibility.
Note that I did say "least unlikely" rather than "most likely":
It's not that I expect to encounter it, more that I'd prefer not to be
surprised in the form of a slapd malfunction.
So in that respect, my preference (not for RE23) is to cast to an
integer for the hash, and also to add pthread_<set/get>specific() so
thread_keys[] and its hash will only be a fallback solution.
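
Roughly what I have in mind, as a sketch only - set_pool_context() and
get_pool_context() are placeholder names, not the actual code - with
thread_keys[] and TID_HASH() kept only for platforms that lack
pthread_key_create():

#include <pthread.h>

static pthread_key_t  ctx_key;
static pthread_once_t ctx_key_once = PTHREAD_ONCE_INIT;

static void ctx_key_init(void)
{
    pthread_key_create(&ctx_key, NULL);
}

/* Called where the pool hands a thread its context. */
void set_pool_context(void *ctx)
{
    pthread_once(&ctx_key_once, ctx_key_init);
    pthread_setspecific(ctx_key, ctx);
}

/* pool_context() equivalent: no thread ID bit patterns involved. */
void *get_pool_context(void)
{
    pthread_once(&ctx_key_once, ctx_key_init);
    return pthread_getspecific(ctx_key);
}

That makes the lookup independent of how an ldap_pvt_thread_t happens to
be represented.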
> Any implementation that leaves uninitialized padding bytes would be a
> bug. Tools like valgrind would barf all over them.
No. Valid compiler, valid code. Therefore valgrind is silent about
copying uninitialized values, it only complains when you use them for
something - like write() or arithmetic. So a valgrind complaint about
an uninitialized _value_ isn't necessarily at the point where the uninitialized
_variable_ is read. E.g. this complains in printf(), not in foo():
#include <stdio.h>

int foo()
{
    struct { char c; /*maybe padding here*/ int j; } s;
    s.c = 1;
    s.j = 2;
    return ((unsigned char *)&s)[1]; /* maybe padding byte */
}

int main()
{
    printf("%d\n", foo());
    return 0;
}
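
For what it's worth, saving the example as pad.c (say) and running
something like

    gcc -g pad.c -o pad && valgrind ./pad

should - if the compiler really does insert padding there - report the
uninitialized value with a stack trace inside printf()/vfprintf(), and say
nothing about the return statement in foo(). Exact output will of course
vary.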
>> A pointer in hardware where several address representations map to the
>> same physical address, and the compiler handles that by normalizing
>> pointers when they are compared/subtracted. Like DOS "huge" pointers
>> would have been if the compiler normalized them when comparing them
>> instead of when incrementing/decrementing them. 20-bit address bus,
>> 32-bit pointers, physical address = 16 * <segment half of pointer> +
>> <offset half of pointer>.
> Nobody voluntarily uses such memory models today. The very mention of "DOS"
> invalidates this discussion since there is no threading environment there.
Right, at this point I answered how it _can_ happen, not why I expect to
encounter it. I could see the demands of hardware design cause strange
memory models to proliferate again, but I'm not holding my breath.
I'd be a bit less surprised to encounter integers with padding bits
again, but not for a simple ID type, and hashing an integer instead of
its bytes would fix it anyway.
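
To spell out the aliasing with a concrete pair of huge-pointer values
(a toy program, nothing OpenLDAP-specific):

#include <stdio.h>

unsigned long physical(unsigned seg, unsigned off)
{
    return 16UL * seg + off;    /* physical address = 16 * segment + offset */
}

int main()
{
    /* Two different 32-bit segment:offset bit patterns, one physical address:
     *   0x1234:0x0005 -> 16*0x1234 + 0x0005 = 0x12345
     *   0x1230:0x0045 -> 16*0x1230 + 0x0045 = 0x12345
     * A byte-wise hash like TID_HASH() would see two different IDs here;
     * hashing the normalized integer value would see one. */
    printf("%#lx %#lx\n", physical(0x1234, 0x0005), physical(0x1230, 0x0045));
    return 0;
}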
>>> The best fix may be to just use the address of the thread's stack as the
>>> thread ID. That's actually perfect for our purpose.
>>
>> How does a function called from somewhere inside the thread know that
>> address? That's almost what the user context is, and thus what
>> thread_keys[] maps the thread ID to.
> It requires some degree of machine-dependence, which is why I never
> bothered to code it. The principle is simple though - all of our thread
> stacks are some multiple of 1MB in size. Take the address of any local
> variable, mask off the low 20 bits, and you have a unique thread ID (+/-
> some additional masking/shifting based on the actual thread stack size).
I'm not sure how to determine _which_ multiple of 1MB - start two
threads and compare their stack addresses, maybe? In any case, I
agree it sounds rather more machine-dependent than the current code.
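
For reference, the masking trick would look something like this - a sketch
that assumes each stack is exactly 1MB and 1MB-aligned, which is exactly
the part I'm unsure about (stack_thread_id and THREAD_STACK_SIZE are made
up):

#include <stdint.h>

#define THREAD_STACK_SIZE (1UL << 20)   /* assumed: 1MB, a power of two */

uintptr_t stack_thread_id(void)
{
    int local;  /* any automatic variable lives on this thread's stack */
    return (uintptr_t) &local & ~(uintptr_t) (THREAD_STACK_SIZE - 1);
}

If the stack size or alignment differs, the mask (and the uniqueness claim)
has to change with it, which is the machine-dependence in question.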
--
Regards,
Hallvard