William Brown wrote:
On Thu, 2017-11-02 at 09:08 +0100, Michael Ströder wrote:
Yes, you're right. But I'm not sure whether you really hit the GIL limit, since python-ldap releases the GIL whenever it calls libldap functions. And of course, when running a multi-threaded client, each thread should have its own LDAPObject instance. (I assume here that Python is built with thread support and python-ldap was built against libldap_r. Otherwise all calls into libldap (without _r) are serialized with a global lock.)
Yeah, the GIL isn't the issue; it's the global lock. You need to start multiple separate Python interpreters to really generate this load. We have a Python load test, but you have to start about 16 instances of it to really stress a server.
I've always wondered what the purpose of the ldap lock was, but that's a topic for its own thread I think :)
In the case of python-ldap linked against libldap_r, the global lock only serializes calls into ldap_initialize(). So if you're using a separate persistent connection per thread, it should never hit this lock again.
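For illustration, a minimal sketch (mine, not real test code) of the "separate persistent connection per thread" pattern with python-ldap: each thread keeps its own LDAPObject in threading.local storage, so ldap.initialize() -- and with it the lock around ldap_initialize() -- is only hit once per thread. URI, bind credentials and base DN are placeholders.

import threading
import ldap

_tls = threading.local()

def get_conn(uri="ldap://localhost"):
    # LDAPObject instances must not be shared across threads;
    # keep one persistent connection per thread instead.
    conn = getattr(_tls, "conn", None)
    if conn is None:
        conn = ldap.initialize(uri)   # the only call that hits the global lock
        conn.simple_bind_s("", "")    # anonymous bind, adjust as needed
        _tls.conn = conn
    return conn

def worker():
    conn = get_conn()
    conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE, "(objectClass=*)")

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()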
Hmm, to be on-topic here I'm looking at ldap_initialize(3) again:
Note: the first call into the LDAP library also initializes the global options for the library. As such the first call should be single-threaded or otherwise protected to insure that only one call is active. It is recommended that ldap_get_option() or ldap_set_option() be used in the program's main thread before any additional threads are created. See ldap_get_option(3).
@OpenLDAP developers: Does that mean it would be sufficient to have *one* serialized call into libldap (e.g. *one* call of ldap_get_option(3) when importing module "ldap") and then use no global lock afterwards?
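To make the proposal concrete, a minimal sketch at the Python level, assuming the module-level ldap.get_option() counts as that one serialized call; whether python-ldap could then really drop its internal lock around ldap_initialize() is exactly the open question above.

import threading
import ldap

# One call into libldap from the main thread, before any worker thread
# exists -- the pattern the ldap_initialize(3) note recommends.
ldap.get_option(ldap.OPT_API_INFO)

def worker(uri="ldap://localhost"):
    # Each thread still uses its own LDAPObject / connection.
    conn = ldap.initialize(uri)
    conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE, "(objectClass=*)")

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()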
Ciao, Michael.