The timeout is a per-request timeout. Isn't a paged search explicitly about
making multiple requests, possibly with semi-arbitrary client-side delays
between them, where each request therefore gets its own shot at the timeout?
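
For example, a minimal sketch with python-ldap and the simple paged results
control (the base DN, filter, page size, and credentials below are
placeholders for illustration, not anything from your setup):

import ldap
from ldap.controls import SimplePagedResultsControl

ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 30)   # per connect / network I/O
ldap.set_option(ldap.OPT_TIMEOUT, 120)          # per request

conn = ldap.initialize("ldap://server-ip")
conn.simple_bind_s("user@example.com", "secret")  # hypothetical credentials

page_ctrl = SimplePagedResultsControl(True, size=500, cookie='')
while True:
    # Each page is its own LDAP request.
    msgid = conn.search_ext("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(objectClass=*)", serverctrls=[page_ctrl])
    # result3() waits for this one page; OPT_TIMEOUT (or an explicit
    # timeout= here) bounds this request only, not the whole loop.
    rtype, rdata, rmsgid, serverctrls = conn.result3(msgid, timeout=120)
    # ... process rdata ...
    cookies = [c.cookie for c in serverctrls
               if c.controlType == SimplePagedResultsControl.controlType]
    if not cookies or not cookies[0]:
        break                      # no more pages
    page_ctrl.cookie = cookies[0]  # ask for the next page

Many pages, or a slow client between pages, can therefore push the total
wall-clock time of the search well past 120 seconds without any single
request ever hitting its timeout.
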
Philip Guenther
On Thu, 15 Apr 2021, varun mittal wrote:
> Any inputs on this one?
>
> Does the client timeout parameter apply to an individual search page, or to
> the entire duration of the search when there are multiple pages?
>
>
> On Fri, Apr 2, 2021 at 2:07 PM varun mittal <vmittal05@gmail.com> wrote:
>
> > I am using openldap-2.4.39 on CentOS 7, with the python-ldap wrapper, to
> > query my AD server.
> >
> > I set the following options:
> >
> > ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 30)
> > ldap.set_option(ldap.OPT_TIMEOUT, 120)
> > conn = ldap.initialize("ldap://server-ip")
> >
> > I am using 3 types of queries: synchronous search_s(), and asynchronous
> > search_ext() with and without paging.
> >
> > I am not passing any timeout to the search_ext() or result3() methods.
> >
> > One of my Python client LDAP searches (asynchronous with paging) took about
> > 14 minutes to complete in the customer environment. Eventually, the search
> > was successful.
> >
> > Looking at the documentation, I am not sure which timeout value would be
> > applicable here.
> >
> > I thought setting OPT_TIMEOUT should suffice for all kinds of searches.
> >
> > And the strange thing is that a similar query, but synchronous
> > (ldap_search_ext_s), from my C client failed within 120 seconds, which is
> > the default AD server time limit. The C application didn't specify any
> > timeouts.
> >
> > What am I missing here?
> >
>