Re: (ITS#6660) paged result searches fail to deallocate memory until slapd shutdown
by quanah@zimbra.com
--On Thursday, September 30, 2010 11:14 AM +0000 masarati(a)aero.polimi.it
wrote:
> There's no mention of overlays; I was wondering whether or not it could be
> related to some adverse interaction, e.g. with sssvlv, which has some
> paged-results specific code. Can you also post any overlays, either global
> or specific to "olcDatabase={2}hdb"?
There are no overlays being used. ;) As I said, a very vanilla config.
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
13 years, 2 months
RE: ITS#6661 (Was: FW: (6661))
by masarati@aero.polimi.it
Should be fine now. The whole thing originated from the fact that
be_rootdn_bind() was passed a NULL SlapReply* without handling results
accordingly. Thanks, p.
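The bug class Pierangelo describes can be shown in miniature. The following is a simplified sketch with stand-in types, names, and values, not the actual slapd code: a helper reports its outcome both via its return value and through a SlapReply* that some callers legitimately pass as NULL, so the helper must guard the dereference.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for slapd's types and result codes --
 * illustration only, not the actual OpenLDAP definitions. */
typedef struct SlapReply { int sr_err; } SlapReply;

enum { LDAP_SUCCESS = 0, LDAP_INVALID_CREDENTIALS = 49, SKETCH_CONTINUE = -1 };

/* A be_rootdn_bind()-style helper: the outcome goes into the return
 * value and, when the caller supplied one, into *rs as well. */
static int rootdn_bind_sketch(int dn_is_rootdn, int pw_matches, SlapReply *rs)
{
    int rc;

    if (!dn_is_rootdn)
        rc = SKETCH_CONTINUE;   /* not the rootdn: let the backend carry on */
    else
        rc = pw_matches ? LDAP_SUCCESS : LDAP_INVALID_CREDENTIALS;

    if (rs != NULL)             /* the guard: callers may legitimately pass NULL */
        rs->sr_err = rc;

    return rc;
}
```

The fix described above amounts to making sure every caller/callee pair agrees on who fills in the result when the reply pointer is absent.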
> Yes it is fixed,
>
> But in your fix, only the rootpw password works. If we have the rootdn
> also as a dn stored inside the ldap tree, then openldap does not try to
> bind to the dn of the tree if the rootpw is incorrect
>
> if we use the same code segment of bind.cpp written for back-bdb which is:
>
> /* allow noauth binds */
> switch ( be_rootdn_bind( op, NULL ) ) {
> case LDAP_SUCCESS:
> /* frontend will send result */
> return rs->sr_err;
> default:
> break;
> }
> And the rootpw is not matched, then slapd will continue to search the ldap
> tree and if it finds a dn and its userPassword matches, then it
> authenticates. If an appropriate dn / password is not found in the tree,
> then it throws the invalid credentials error.
>
> Maybe the back-bdb way is more correct?
>
>
13 years, 2 months
RE: ITS#6661 (Was: FW: (6661))
by gtzanetis@pylones.gr
Yes it is fixed,
But in your fix, only the rootpw password works. If we have the rootdn also as a dn stored inside the ldap tree, then openldap does not try to bind to the dn of the tree if the rootpw is incorrect
if we use the same code segment of bind.cpp written for back-bdb which is:
/* allow noauth binds */
switch ( be_rootdn_bind( op, NULL ) ) {
case LDAP_SUCCESS:
/* frontend will send result */
return rs->sr_err;
default:
break;
}
And the rootpw is not matched, then slapd will continue to search the ldap tree and if it finds a dn and its userPassword matches, then it authenticates. If an appropriate dn / password is not found in the tree, then it throws the invalid credentials error.
Maybe the back-bdb way is more correct?
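The back-bdb-style ordering requested above can be sketched as a plain decision ladder. The helper names and credentials below are hypothetical toy data, not slapd code; the point is only the fallthrough: a rootdn bind with the wrong rootpw does not short-circuit, the matching tree entry is still tried, and invalid credentials is returned only when both fail.

```c
#include <assert.h>
#include <string.h>

enum { LDAP_SUCCESS = 0, LDAP_INVALID_CREDENTIALS = 49 };

/* Hypothetical predicates standing in for "is this the configured
 * rootdn/rootpw" and "does a tree entry with this dn have a matching
 * userPassword". */
static int is_rootdn(const char *dn)
{
    return strcmp(dn, "cn=root,dc=example,dc=gr") == 0;
}

static int rootpw_matches(const char *pw)
{
    return strcmp(pw, "secret") == 0;
}

static int tree_bind_matches(const char *dn, const char *pw)
{
    /* one hard-coded "directory entry" for the sketch: the rootdn is
     * also stored in the tree, with its own userPassword */
    return strcmp(dn, "cn=root,dc=example,dc=gr") == 0
        && strcmp(pw, "treepassword") == 0;
}

/* The ordering described in the thread: rootpw first, then the tree,
 * then (and only then) invalid credentials. */
static int bind_sketch(const char *dn, const char *pw)
{
    if (is_rootdn(dn) && rootpw_matches(pw))
        return LDAP_SUCCESS;        /* rootpw bind succeeded */
    if (tree_bind_matches(dn, pw))
        return LDAP_SUCCESS;        /* fall through to the tree entry */
    return LDAP_INVALID_CREDENTIALS;
}
```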
13 years, 2 months
Re: (ITS#6660) paged result searches fail to deallocate memory until slapd shutdown
by hyc@symas.com
masarati(a)aero.polimi.it wrote:
>> --On Wednesday, September 29, 2010 8:34 PM +0200 masarati(a)aero.polimi.it
>> wrote:
>>
>>>> --On Wednesday, September 29, 2010 12:38 AM +0000 quanah(a)zimbra.com
>>>> wrote:
>>>>
>>>>> It appears to be a problem with the entry cache, which is set to
>>>>> 25,000:
>>>>
>>>> Running with a fix from Howard, the entry cache behaves correctly.
>>>> However, slapd still grows at the same rate.
>>>>
>>>> If I limit to only 10 paged results searches, slapd grows at a rate of
>>>> 300MB Virtual and 300MB Resident for every set of 10 paged results
>>>> searches
>>>> I do concurrently, up until I run slapd out of memory. There's
>>>> something
>>>> very wrong with paged results searches.
>>>
>>> Could it be configuration-specific? I tested with a plain configuration
>>> resulting from test003; maybe some player in the middle, say, is causing
>>> entries to be duplicated and leaked, or read-locks on originals are not
>>> released correctly? Can you post a configuration that shows the issue?
>>
>> Hi Pierangelo,
>>
>> My testing shows the issue is only really visible with large databases
>> that
>> return giant result sets. I don't expect you to see it with a small
>> database and test003, because the amount of "lost" memory will be a few
>> bytes at best.
>
> If it's something related to bdb's cache size I agree; if it's related to
> paged results I'd expect to notice it anyway using valgrind or so.
>
There's no actual leak, so valgrind won't point anything out. It's simply an
issue with the entry cache not running its purge in some cases where it needs
to. cn=monitor shows that the entry cache grows far beyond its configured
size. Still looking into a proper fix.
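The symptom (cn=monitor showing the entry cache far beyond its configured size) can be reduced to a toy invariant. This is a generic sketch, not back-bdb's actual cache code: every path that inserts into a size-bounded cache must also trigger the purge, otherwise the cache grows without limit even though nothing is leaked.

```c
#include <assert.h>
#include <stddef.h>

/* Toy bounded cache: a counter standing in for the entry list. */
typedef struct {
    size_t cur;      /* entries currently cached */
    size_t max;      /* configured limit (cf. olcDbCacheSize) */
    size_t purged;   /* entries evicted so far */
} toy_cache;

static void toy_purge(toy_cache *c)
{
    while (c->cur > c->max) {   /* evict (LRU in a real cache) until bounded */
        c->cur--;
        c->purged++;
    }
}

/* Correct insert path: add, then enforce the bound. */
static void toy_add(toy_cache *c)
{
    c->cur++;
    toy_purge(c);
}

/* Faulty insert path: adds without ever purging -- the memory is still
 * reachable (so valgrind reports no leak), but the cache grows until
 * the process runs out of memory. */
static void toy_add_leaky(toy_cache *c)
{
    c->cur++;
}
```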
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
13 years, 2 months
Re: (ITS#6660) paged result searches fail to deallocate memory until slapd shutdown
by masarati@aero.polimi.it
> --On Wednesday, September 29, 2010 8:34 PM +0200 masarati(a)aero.polimi.it
> wrote:
>
>>> --On Wednesday, September 29, 2010 12:38 AM +0000 quanah(a)zimbra.com
>>> wrote:
>>>
>>>> It appears to be a problem with the entry cache, which is set to
>>>> 25,000:
>>>
>>> Running with a fix from Howard, the entry cache behaves correctly.
>>> However, slapd still grows at the same rate.
>>>
>>> If I limit to only 10 paged results searches, slapd grows at a rate of
>>> 300MB Virtual and 300MB Resident for every set of 10 paged results
>>> searches
>>> I do concurrently, up until I run slapd out of memory. There's
>>> something
>>> very wrong with paged results searches.
>>
>> Could it be configuration-specific? I tested with a plain configuration
>> resulting from test003; maybe some player in the middle, say, is causing
>> entries to be duplicated and leaked, or read-locks on originals are not
>> released correctly? Can you post a configuration that shows the issue?
>
> Hi Pierangelo,
>
> My testing shows the issue is only really visible with large databases
> that
> return giant result sets. I don't expect you to see it with a small
> database and test003, because the amount of "lost" memory will be a few
> bytes at best.
If it's something related to bdb's cache size I agree; if it's related to
paged results I'd expect to notice it anyway using valgrind or so.
> My configuration itself is very minimal, but it is in cn=config format. ;)
>
> dn: olcDatabase={2}hdb
> objectClass: olcDatabaseConfig
> objectClass: olcHdbConfig
> olcDatabase: {2}hdb
> olcSuffix:
> olcAccess: {0}to attrs=userPassword by anonymous auth by dn.children="cn=admins,cn=zimbra" write
> olcAccess: {1}to dn.subtree="cn=zimbra" by dn.children="cn=admins,cn=zimbra" write
> olcAccess: {2}to attrs=zimbraZimletUserProperties,zimbraGalLdapBindPassword,zimbraGalLdapBindDn,zimbraAuthTokenKey,zimbraPreAuthKey,zimbraPasswordHistory,zimbraIsAdminAccount,zimbraAuthLdapSearchBindPassword by dn.children="cn=admins,cn=zimbra" write by * none
> olcAccess: {3}to attrs=objectclass by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by users read by * none
> olcAccess: {4}to attrs=@amavisAccount by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by * +0 break
> olcAccess: {5}to attrs=mail by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by * +0 break
> olcAccess: {6}to attrs=zimbraAllowFromAddress by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by * none
> olcAccess: {7}to filter="(!(zimbraHideInGal=TRUE))" attrs=cn,co,company,dc,displayName,givenName,gn,initials,l,mail,o,ou,physicalDeliveryOfficeName,postalCode,sn,st,street,streetAddress,telephoneNumber,title,uid,homePhone,mobile,pager by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by users read by * none
> olcAccess: {8}to attrs=zimbraId,zimbraMailAddress,zimbraMailAlias,zimbraMailCanonicalAddress,zimbraMailCatchAllAddress,zimbraMailCatchAllCanonicalAddress,zimbraMailCatchAllForwardingAddress,zimbraMailDeliveryAddress,zimbraMailForwardingAddress,zimbraPrefMailForwardingAddress,zimbraMailHost,zimbraMailStatus,zimbraMailTransport,zimbraDomainName,zimbraDomainType,zimbraPrefMailLocalDeliveryDisabled by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by * none
> olcAccess: {9}to attrs=entry by dn.children="cn=admins,cn=zimbra" write by * read
> olcLastMod: TRUE
> olcMaxDerefDepth: 15
> olcReadOnly: FALSE
> olcRootDN: cn=config
> olcSizeLimit: unlimited
> olcTimeLimit: unlimited
> olcMonitoring: TRUE
> olcDbDirectory: /opt/zimbra/data/ldap/hdb/db
> olcDbCacheSize: 25000
> olcDbCheckpoint: 64 5
> olcDbConfig: {0}#
> olcDbConfig: {1}# Set the database in memory cache size.
> olcDbConfig: {2}#
> olcDbConfig: {3}set_cachesize 0 52428800 0
> olcDbConfig: {4}
> olcDbConfig: {5}#
> olcDbConfig: {6}# Set database flags.
> olcDbConfig: {7}# Automatically remove log files that are no longer
> needed.
> olcDbConfig: {8}set_log_config DB_LOG_AUTO_REMOVE
> olcDbConfig: {9}
> olcDbConfig: {10}#
> olcDbConfig: {11}# Set log values.
> olcDbConfig: {12}#
> olcDbConfig: {13}set_lg_regionmax 262144
> olcDbConfig: {14}set_lg_max 10485760
> olcDbConfig: {15}set_lg_bsize 2097152
> olcDbConfig: {16}set_lg_dir /opt/zimbra/data/ldap/hdb/logs
> olcDbConfig: {17}# Increase locks
> olcDbConfig:: ezE4fXNldF9sa19tYXhfbG9ja3MJMzAwMA==
> olcDbConfig:: ezE5fXNldF9sa19tYXhfb2JqZWN0cwkxNTAw
> olcDbConfig:: ezIwfXNldF9sa19tYXhfbG9ja2VycwkxNTAw
> olcDbNoSync: FALSE
> olcDbDirtyRead: FALSE
> olcDbIDLcacheSize: 25000
> olcDbIndex: objectClass eq
> olcDbIndex: entryUUID eq
> olcDbIndex: entryCSN eq
> olcDbIndex: cn pres,eq,sub
> olcDbIndex: uid pres,eq
> olcDbIndex: zimbraForeignPrincipal eq
> olcDbIndex: zimbraYahooId eq
> olcDbIndex: zimbraId eq
> olcDbIndex: zimbraVirtualHostname eq
> olcDbIndex: zimbraVirtualIPAddress eq
> olcDbIndex: zimbraMailDeliveryAddress eq,sub
> olcDbIndex: zimbraAuthKerberos5Realm eq
> olcDbIndex: zimbraMailForwardingAddress eq
> olcDbIndex: zimbraMailCatchAllAddress eq,sub
> olcDbIndex: zimbraShareInfo sub
> olcDbIndex: zimbraMailTransport eq
> olcDbIndex: zimbraMailAlias eq,sub
> olcDbIndex: zimbraACE sub
> olcDbIndex: zimbraDomainName eq,sub
> olcDbIndex: mail pres,eq,sub
> olcDbIndex: zimbraCalResSite eq,sub
> olcDbIndex: givenName pres,eq,sub
> olcDbIndex: displayName pres,eq,sub
> olcDbIndex: sn pres,eq,sub
> olcDbIndex: zimbraCalResRoom eq,sub
> olcDbIndex: zimbraCalResCapacity eq
> olcDbIndex: zimbraCalResBuilding eq,sub
> olcDbIndex: zimbraCalResFloor eq,sub
> olcDbLinearIndex: FALSE
> olcDbMode: 0600
> olcDbSearchStack: 16
> olcDbShmKey: 0
> olcDbCacheFree: 1000
> olcDbDNcacheSize: 0
> structuralObjectClass: olcHdbConfig
> entryUUID: 152ab0a8-333e-102d-8700-d562901af228
> creatorsName: cn=config
> createTimestamp: 20081020215916Z
> entryCSN: 20081020215916.275992Z#000000#000#000000
> modifiersName: cn=config
> modifyTimestamp: 20081020215916Z
There's no mention of overlays; I was wondering whether or not it could be
related to some adverse interaction, e.g. with sssvlv, which has some
paged-results specific code. Can you also post any overlays, either global
or specific to "olcDatabase={2}hdb"?
p.
13 years, 2 months
ITS#6661 (Was: FW: (6661))
by masarati@aero.polimi.it
> Hi Pierangelo,
>
> I replied to the ticket's list but I forgot to include your address.
>
> Here is my reply if you care to read it,
>
> Regards,
>
> George
>
>
>
> -----Original Message-----
> From: George Tzanetis
> Sent: Thursday, September 30, 2010 10:37 AM
> To: 'openldap-its(a)openldap.org'
> Subject: (ITS#6661)
>
> Hi,
>
> I built openldap using the new code. The rootpw now works, but if a wrong
> password is given in an ldap query, then the ldap query process locks.
>
> e.g.:
> with rootdn: 'cn=root,dc=example,dc=gr'
> and rootpw: secret
>
> -when rootdn and rootpw are correct:
> ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=gr' -w 'secret'
>>dn:cn=root,dc=example,dc=gr
>
> -when rootdn is wrong:
> ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=com' -w 'secret'
>>ldap_bind: Invalid credentials (49)
>
> -when rootdn is correct and rootpw is wrong
> ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=com' -w 'secret1'
> "NO RESULT, the ldapwhoami locks"
>
>
> Here are the logs of the slapd process:
>
>
> ###################################
> #with correct rootdn & rootpw #
> ###################################
> daemon: activity on 1 descriptor
> daemon: activity on:
> slap_listener_activate(8):
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 busy
>>>> slap_listener(ldap:///)
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> daemon: listen=8, new connection on 23
> daemon: activity on 1 descriptor
> daemon: activity on: 23r
> daemon: read active on 23
> daemon: added 23r (active) listener=(nil)
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> conn=1000 fd=23 ACCEPT from IP=192.168.6.10:47722 (IP=0.0.0.0:389)
> connection_get(23)
> connection_get(23): got connid=1000
> connection_read(23): checking for input on id=1000
> ber_get_next
> ldap_read: want=8, got=8
> ldap_read: want=36, got=36
> ber_get_next: tag 0x30 len 42 contents:
> ber_dump: buf=0x1d047ee0 ptr=0x1d047ee0 end=0x1d047f0a len=42
> op tag 0x60, time 1285831215
> ber_get_next
> ldap_read: want=8 error=Resource temporarily unavailable
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> conn=1000 op=0 do_bind
> ber_scanf fmt ({imt) ber:
> ber_dump: buf=0x1d047ee0 ptr=0x1d047ee3 end=0x1d047f0a len=39
> ber_scanf fmt (m}) ber:
> ber_dump: buf=0x1d047ee0 ptr=0x1d047f01 end=0x1d047f0a len=9
>>>> dnPrettyNormal: <cn=root,dc=example,dc=gr>
> => ldap_bv2dn(cn=root,dc=example,dc=gr,0)
> <= ldap_bv2dn(cn=root,dc=example,dc=gr)=0
> => ldap_dn2bv(272)
> <= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
> => ldap_dn2bv(272)
> <= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
> <<< dnPrettyNormal: <cn=root,dc=example,dc=gr>, <cn=root,dc=example,dc=gr>
> conn=1000 op=0 BIND dn="cn=root,dc=example,dc=gr" method=128
> do_bind: version=3 dn="cn=root,dc=example,dc=gr" method=128
> ==> ndb_back_bind: dn: cn=root,dc=example,dc=gr
> conn=1000 op=0 BIND dn="cn=root,dc=example,dc=gr" mech=SIMPLE ssf=0
> do_bind: v3 bind: "cn=root,dc=example,dc=gr" to "cn=root,dc=example,dc=gr"
> send_ldap_result: conn=1000 op=0 p=3
> send_ldap_result: err=0 matched="" text=""
> send_ldap_response: msgid=1 tag=97 err=0
> ber_flush2: 14 bytes to sd 23
> ldap_write: want=14, written=14
> conn=1000 op=0 RESULT tag=97 err=0 text=
> daemon: activity on 1 descriptor
> daemon: activity on: 23r
> daemon: read active on 23
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> connection_get(23)
> connection_get(23): got connid=1000
> connection_read(23): checking for input on id=1000
> ber_get_next
> ldap_read: want=8, got=8
> ldap_read: want=24, got=24
> ber_get_next: tag 0x30 len 30 contents:
> ber_dump: buf=0x1d045c10 ptr=0x1d045c10 end=0x1d045c2e len=30
> op tag 0x77, time 1285831215
> ber_get_next
> ldap_read: want=8 error=Resource temporarily unavailable
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> conn=1000 op=1 do_extended
> ber_scanf fmt ({m) ber:
> ber_dump: buf=0x1d045c10 ptr=0x1d045c13 end=0x1d045c2e len=27
> conn=1000 op=1 EXT oid=1.3.6.1.4.1.4203.1.11.3
> do_extended: oid=1.3.6.1.4.1.4203.1.11.3
> conn=1000 op=1 WHOAMI
> send_ldap_extended: err=0 oid= len=26
> send_ldap_response: msgid=2 tag=120 err=0
> ber_flush2: 42 bytes to sd 23
> ldap_write: want=42, written=42
> conn=1000 op=1 RESULT oid= err=0 text=
> daemon: activity on 1 descriptor
> daemon: activity on: 23r
> daemon: read active on 23
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> connection_get(23)
> connection_get(23): got connid=1000
> connection_read(23): checking for input on id=1000
> ber_get_next
> ldap_read: want=8, got=7
> ber_get_next: tag 0x30 len 5 contents:
> ber_dump: buf=0x1d045c10 ptr=0x1d045c10 end=0x1d045c15 len=5
> op tag 0x42, time 1285831215
> ber_get_next
> ldap_read: want=8, got=0
>
> ber_get_next on fd 23 failed errno=0 (Success)
> connection_read(23): input error=-2 id=1000, closing.
> connection_closing: readying conn=1000 sd=23 for close
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> connection_close: deferring conn=1000 sd=23
> conn=1000 op=2 do_unbind
> conn=1000 op=2 UNBIND
> connection_resched: attempting closing conn=1000 sd=23
> connection_close: conn=1000 sd=23
> daemon: removing 23
> conn=1000 fd=23 closed
>
>
> ##########################################
> #with correct rootdn & incorrect rootpw #
> ##########################################
> daemon: activity on 1 descriptor
> daemon: activity on:
> slap_listener_activate(8):
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 busy
>>>> slap_listener(ldap:///)
> daemon: listen=8, new connection on 23
> daemon: added 23r (active) listener=(nil)
> conn=1001 fd=23 ACCEPT from IP=192.168.6.10:47723 (IP=0.0.0.0:389)
> daemon: activity on 2 descriptors
> daemon: activity on: 23r
> daemon: read active on 23
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
> connection_get(23)
> connection_get(23): got connid=1001
> connection_read(23): checking for input on id=1001
> ber_get_next
> ldap_read: want=8, got=8
> ldap_read: want=37, got=37
> ber_get_next: tag 0x30 len 43 contents:
> ber_dump: buf=0x1d0460b0 ptr=0x1d0460b0 end=0x1d0460db len=43
> op tag 0x60, time 1285831240
> ber_get_next
> ldap_read: want=8 error=Resource temporarily unavailable
> conn=1001 op=0 do_bind
> ber_scanf fmt ({imt) ber:
> ber_dump: buf=0x1d0460b0 ptr=0x1d0460b3 end=0x1d0460db len=40
> ber_scanf fmt (m}) ber:
> ber_dump: buf=0x1d0460b0 ptr=0x1d0460d1 end=0x1d0460db len=10
>>>> dnPrettyNormal: <cn=root,dc=example,dc=gr>
> => ldap_bv2dn(cn=root,dc=example,dc=gr,0)
> <= ldap_bv2dn(cn=root,dc=example,dc=gr)=0
> => ldap_dn2bv(272)
> <= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
> => ldap_dn2bv(272)
> <= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
> <<< dnPrettyNormal: <cn=root,dc=example,dc=gr>, <cn=root,dc=example,dc=gr>
> conn=1001 op=0 BIND dn="cn=root,dc=example,dc=gr" method=128
> do_bind: version=3 dn="cn=root,dc=example,dc=gr" method=128
> ==> ndb_back_bind: dn: cn=root,dc=example,dc=gr
> daemon: activity on 1 descriptor
> daemon: activity on:
> daemon: epoll: listen=7 active_threads=0 tvp=NULL
> daemon: epoll: listen=8 active_threads=0 tvp=NULL
Should be re-fixed now, sorry. Thanks for the report. p.
13 years, 2 months
(ITS#6662) ldapsearch with slapd-ndb only works when filter is a substring
by gtzanetis@pylones.gr
Full_Name: George Tzanetis
Version: 2.4.23 stable
OS: Red Hat Enterprise 5.5
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (62.169.213.126)
It seems that when using slapd-ndb, the filters in ldapsearches only work if they
are substrings, i.e. *text, text*, or te*xt, for attributes that are not defined
as indices. If the attribute is defined as an index, then the substring filter
does not work, as indicated in the manual.
The slapd.conf is as follows:
pidfile /usr/local/openldap/var/run/slapd.pid
argsfile /usr/local/openldap/var/run/slapd.args
#######################################################################
# NDB database definitions
#######################################################################
#NDB database definitions
database ndb
suffix "dc=example,dc=gr"
rootdn "cn=root,dc=example,dc=gr"
rootpw secret
dbconnect 192.168.6.11
dbhost 192.168.6.12
dbport 3306
dbname openldap
dbuser ldapUser
dbpass "1234"
dbconnections 3
dbsocket /tmp/mysql.sock
attrblob description
index uid
#######################################################################
# Monitor Database definitions
#######################################################################
database monitor
loglevel 5
The ldif of an ou:
version: 1
dn: ou=test,dc=example,dc=gr
objectClass: top
objectClass: organizationalUnit
ou: test
dn: uid=user1,ou=test,dc=example,dc=gr
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
cn: user1
gidNumber: -1
givenName: user1
homeDirectory: *
sn: user1
uid: user1
uidNumber: -1
userPassword:: 1234
dn: uid=user2,ou=test,dc=example,dc=gr
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
cn: user2
gidNumber: -1
givenName: user2
homeDirectory: *
sn: user2
uid: user2
uidNumber: -1
userPassword:: 1234
dn: uid=user3,ou=test,dc=example,dc=gr
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
cn: user3
gidNumber: -1
givenName: user3
homeDirectory: *
sn: user3
uid: user3
uidNumber: -1
userPassword:: 1234
dn: uid=user4,ou=test,dc=example,dc=gr
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
cn: user4
gidNumber: -1
givenName: user4
homeDirectory: *
sn: user4
uid: user4
uidNumber: -1
userPassword:: 1234
the ldapsearch queries:
-search with specific cn inside the ou:
---------------------------------------------------------------------
ldapsearch -h 192.168.132.177 -b 'ou=test,dc=example,dc=gr' -D
"cn=root,dc=example,dc=gr" -L -w 'secret' "cn=user1"
version: 1
#
# LDAPv3
# base <ou=test,dc=example,dc=gr> with scope subtree
# filter: cn=user1
# requesting: ALL
#
# search result
# numResponses: 1
---------------------------------------------------------------------
No result
but if we search the cn as a substring:
---------------------------------------------------------------------
ldapsearch -h 192.168.132.177 -b 'ou=test,dc=example,dc=gr' -D
"cn=root,dc=example,dc=gr" -L -w 'secret1' "cn=user1*"
version: 1
#
# LDAPv3
# base <ou=test,dc=example,dc=gr> with scope subtree
# filter: cn=user1*
# requesting: ALL
#
# user1@test, test, example.gr
dn: uid=user1@test,ou=test,dc=example,dc=gr
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
userPassword:: 1234
sn: user1
cn: user1
uid: user1@test
givenName: user1
uidNumber: -1
gidNumber: -1
homeDirectory: *
# search result
# numResponses: 2
# numEntries: 1
---------------------------------------------------------------------
Any substring will give a result, i.e. cn=*user1, cn=user1*, cn=us*er1, etc.
If we search for cn=user*, it will display all entries of the ou, as expected.
The same behavior exists if we filter using any other attribute, with the
exception of the objectClass attribute, or with the uid attribute, which is
indexed.
Is this normal?
Thank you,
George
13 years, 2 months
(ITS#6626)
by gtzanetis@pylones.gr
I wanted to add that I do not find the contextCSN attribute anywhere inside the database schema that the slapd-ndb backend creates.
Maybe that is the reason I get the segmentation fault every time the slapd process tries to update this attribute?
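A crash on updating an attribute that was never created is consistent with an unchecked failed lookup. The names below are hypothetical, not back-ndb code; the sketch only illustrates the guard whose absence would match the reported segfault.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical schema lookup: returns NULL when the backend never
 * created the attribute/column -- e.g. contextCSN in this report. */
static const char *schema_find(const char *name)
{
    static const char *known[] = { "objectClass", "cn", "entryCSN" };
    size_t i;
    for (i = 0; i < sizeof(known) / sizeof(known[0]); i++)
        if (strcmp(known[i], name) == 0)
            return known[i];
    return NULL;
}

/* Safe update path: without the NULL check, code that goes on to use
 * the lookup result unconditionally dereferences NULL and crashes. */
static int update_attr_sketch(const char *name, int *updated)
{
    const char *at = schema_find(name);
    if (at == NULL)
        return -1;   /* report "no such attribute" instead of crashing */
    *updated = 1;
    return 0;
}
```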
13 years, 2 months
(ITS#6661)
by gtzanetis@pylones.gr
Hi,
I built openldap using the new code. The rootpw now works, but if a wrong password is given in an ldap query, then the ldap query process locks.
e.g.:
with rootdn: 'cn=root,dc=example,dc=gr'
and rootpw: secret
-when rootdn and rootpw are correct:
ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=gr' -w 'secret'
>dn:cn=root,dc=example,dc=gr
-when rootdn is wrong:
ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=com' -w 'secret'
>ldap_bind: Invalid credentials (49)
-when rootdn is correct and rootpw is wrong
ldapwhoami -h 192.168.6.10 -D 'cn=root,dc=example,dc=com' -w 'secret1'
"NO RESULT, the ldapwhoami locks"
Here are the logs of the slapd process:
###################################
#with correct rootdn & rootpw #
###################################
daemon: activity on 1 descriptor
daemon: activity on:
slap_listener_activate(8):
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 busy
>>> slap_listener(ldap:///)
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
daemon: listen=8, new connection on 23
daemon: activity on 1 descriptor
daemon: activity on: 23r
daemon: read active on 23
daemon: added 23r (active) listener=(nil)
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
conn=1000 fd=23 ACCEPT from IP=192.168.6.10:47722 (IP=0.0.0.0:389)
connection_get(23)
connection_get(23): got connid=1000
connection_read(23): checking for input on id=1000
ber_get_next
ldap_read: want=8, got=8
ldap_read: want=36, got=36
ber_get_next: tag 0x30 len 42 contents:
ber_dump: buf=0x1d047ee0 ptr=0x1d047ee0 end=0x1d047f0a len=42
op tag 0x60, time 1285831215
ber_get_next
ldap_read: want=8 error=Resource temporarily unavailable
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
conn=1000 op=0 do_bind
ber_scanf fmt ({imt) ber:
ber_dump: buf=0x1d047ee0 ptr=0x1d047ee3 end=0x1d047f0a len=39
ber_scanf fmt (m}) ber:
ber_dump: buf=0x1d047ee0 ptr=0x1d047f01 end=0x1d047f0a len=9
>>> dnPrettyNormal: <cn=root,dc=example,dc=gr>
=> ldap_bv2dn(cn=root,dc=example,dc=gr,0)
<= ldap_bv2dn(cn=root,dc=example,dc=gr)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
<<< dnPrettyNormal: <cn=root,dc=example,dc=gr>, <cn=root,dc=example,dc=gr>
conn=1000 op=0 BIND dn="cn=root,dc=example,dc=gr" method=128
do_bind: version=3 dn="cn=root,dc=example,dc=gr" method=128
==> ndb_back_bind: dn: cn=root,dc=example,dc=gr
conn=1000 op=0 BIND dn="cn=root,dc=example,dc=gr" mech=SIMPLE ssf=0
do_bind: v3 bind: "cn=root,dc=example,dc=gr" to "cn=root,dc=example,dc=gr"
send_ldap_result: conn=1000 op=0 p=3
send_ldap_result: err=0 matched="" text=""
send_ldap_response: msgid=1 tag=97 err=0
ber_flush2: 14 bytes to sd 23
ldap_write: want=14, written=14
conn=1000 op=0 RESULT tag=97 err=0 text=
daemon: activity on 1 descriptor
daemon: activity on: 23r
daemon: read active on 23
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
connection_get(23)
connection_get(23): got connid=1000
connection_read(23): checking for input on id=1000
ber_get_next
ldap_read: want=8, got=8
ldap_read: want=24, got=24
ber_get_next: tag 0x30 len 30 contents:
ber_dump: buf=0x1d045c10 ptr=0x1d045c10 end=0x1d045c2e len=30
op tag 0x77, time 1285831215
ber_get_next
ldap_read: want=8 error=Resource temporarily unavailable
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
conn=1000 op=1 do_extended
ber_scanf fmt ({m) ber:
ber_dump: buf=0x1d045c10 ptr=0x1d045c13 end=0x1d045c2e len=27
conn=1000 op=1 EXT oid=1.3.6.1.4.1.4203.1.11.3
do_extended: oid=1.3.6.1.4.1.4203.1.11.3
conn=1000 op=1 WHOAMI
send_ldap_extended: err=0 oid= len=26
send_ldap_response: msgid=2 tag=120 err=0
ber_flush2: 42 bytes to sd 23
ldap_write: want=42, written=42
conn=1000 op=1 RESULT oid= err=0 text=
daemon: activity on 1 descriptor
daemon: activity on: 23r
daemon: read active on 23
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
connection_get(23)
connection_get(23): got connid=1000
connection_read(23): checking for input on id=1000
ber_get_next
ldap_read: want=8, got=7
ber_get_next: tag 0x30 len 5 contents:
ber_dump: buf=0x1d045c10 ptr=0x1d045c10 end=0x1d045c15 len=5
op tag 0x42, time 1285831215
ber_get_next
ldap_read: want=8, got=0
ber_get_next on fd 23 failed errno=0 (Success)
connection_read(23): input error=-2 id=1000, closing.
connection_closing: readying conn=1000 sd=23 for close
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
connection_close: deferring conn=1000 sd=23
conn=1000 op=2 do_unbind
conn=1000 op=2 UNBIND
connection_resched: attempting closing conn=1000 sd=23
connection_close: conn=1000 sd=23
daemon: removing 23
conn=1000 fd=23 closed
##########################################
#with correct rootdn & incorrect rootpw #
##########################################
daemon: activity on 1 descriptor
daemon: activity on:
slap_listener_activate(8):
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 busy
>>> slap_listener(ldap:///)
daemon: listen=8, new connection on 23
daemon: added 23r (active) listener=(nil)
conn=1001 fd=23 ACCEPT from IP=192.168.6.10:47723 (IP=0.0.0.0:389)
daemon: activity on 2 descriptors
daemon: activity on: 23r
daemon: read active on 23
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
connection_get(23)
connection_get(23): got connid=1001
connection_read(23): checking for input on id=1001
ber_get_next
ldap_read: want=8, got=8
ldap_read: want=37, got=37
ber_get_next: tag 0x30 len 43 contents:
ber_dump: buf=0x1d0460b0 ptr=0x1d0460b0 end=0x1d0460db len=43
op tag 0x60, time 1285831240
ber_get_next
ldap_read: want=8 error=Resource temporarily unavailable
conn=1001 op=0 do_bind
ber_scanf fmt ({imt) ber:
ber_dump: buf=0x1d0460b0 ptr=0x1d0460b3 end=0x1d0460db len=40
ber_scanf fmt (m}) ber:
ber_dump: buf=0x1d0460b0 ptr=0x1d0460d1 end=0x1d0460db len=10
>>> dnPrettyNormal: <cn=root,dc=example,dc=gr>
=> ldap_bv2dn(cn=root,dc=example,dc=gr,0)
<= ldap_bv2dn(cn=root,dc=example,dc=gr)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
=> ldap_dn2bv(272)
<= ldap_dn2bv(cn=root,dc=example,dc=gr)=0
<<< dnPrettyNormal: <cn=root,dc=example,dc=gr>, <cn=root,dc=example,dc=gr>
conn=1001 op=0 BIND dn="cn=root,dc=example,dc=gr" method=128
do_bind: version=3 dn="cn=root,dc=example,dc=gr" method=128
==> ndb_back_bind: dn: cn=root,dc=example,dc=gr
daemon: activity on 1 descriptor
daemon: activity on:
daemon: epoll: listen=7 active_threads=0 tvp=NULL
daemon: epoll: listen=8 active_threads=0 tvp=NULL
thanks,
George
13 years, 2 months
Re: (ITS#6660) paged result searches fail to deallocate memory until slapd shutdown
by quanah@zimbra.com
--On Wednesday, September 29, 2010 8:34 PM +0200 masarati(a)aero.polimi.it
wrote:
>> --On Wednesday, September 29, 2010 12:38 AM +0000 quanah(a)zimbra.com
>> wrote:
>>
>>> It appears to be a problem with the entry cache, which is set to 25,000:
>>
>> Running with a fix from Howard, the entry cache behaves correctly.
>> However, slapd still grows at the same rate.
>>
>> If I limit to only 10 paged results searches, slapd grows at a rate of
>> 300MB Virtual and 300MB Resident for every set of 10 paged results
>> searches
>> I do concurrently, up until I run slapd out of memory. There's something
>> very wrong with paged results searches.
>
> Could it be configuration-specific? I tested with a plain configuration
> resulting from test003; maybe some player in the middle, say, is causing
> entries to be duplicated and leaked, or read-locks on originals are not
> released correctly? Can you post a configuration that shows the issue?
Hi Pierangelo,
My testing shows the issue is only really visible with large databases that
return giant result sets. I don't expect you to see it with a small
database and test003, because the amount of "lost" memory will be a few
bytes at best.
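For context, a paged-results search of the kind being tested can be issued with OpenLDAP's ldapsearch via the paged results control (`-E pr=`). This is only a sketch: the URI, bind DN, search base, and page size below are illustrative placeholders, not the exact values used in this test, and it requires a running slapd with a suitably large database.

```shell
# Sketch only -- requires a live slapd; all names here are placeholders.
# 'pr=1000/noprompt' requests pages of 1000 entries without pausing between
# pages; '1.1' asks for no attributes, so only entry DNs come back.
ldapsearch -x -H ldap://localhost:389 \
    -D "uid=zimbra,cn=admins,cn=zimbra" -W \
    -b "" -E 'pr=1000/noprompt' '(objectClass=*)' 1.1
```

Running several of these concurrently while watching slapd's resident size is the kind of workload described above.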
My configuration itself is very minimal, but it is in cn=config format. ;)
dn: olcDatabase={2}hdb
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcSuffix:
olcAccess: {0}to attrs=userPassword by anonymous auth by dn.children="cn=admins,cn=zimbra" write
olcAccess: {1}to dn.subtree="cn=zimbra" by dn.children="cn=admins,cn=zimbra" write
olcAccess: {2}to attrs=zimbraZimletUserProperties,zimbraGalLdapBindPassword,zimbraGalLdapBindDn,zimbraAuthTokenKey,zimbraPreAuthKey,zimbraPasswordHistory,zimbraIsAdminAccount,zimbraAuthLdapSearchBindPassword by dn.children="cn=admins,cn=zimbra" write by * none
olcAccess: {3}to attrs=objectclass by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by users read by * none
olcAccess: {4}to attrs=@amavisAccount by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by * +0 break
olcAccess: {5}to attrs=mail by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmamavis,cn=appaccts,cn=zimbra" read by * +0 break
olcAccess: {6}to attrs=zimbraAllowFromAddress by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by * none
olcAccess: {7}to filter="(!(zimbraHideInGal=TRUE))" attrs=cn,co,company,dc,displayName,givenName,gn,initials,l,mail,o,ou,physicalDeliveryOfficeName,postalCode,sn,st,street,streetAddress,telephoneNumber,title,uid,homePhone,mobile,pager by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by users read by * none
olcAccess: {8}to attrs=zimbraId,zimbraMailAddress,zimbraMailAlias,zimbraMailCanonicalAddress,zimbraMailCatchAllAddress,zimbraMailCatchAllCanonicalAddress,zimbraMailCatchAllForwardingAddress,zimbraMailDeliveryAddress,zimbraMailForwardingAddress,zimbraPrefMailForwardingAddress,zimbraMailHost,zimbraMailStatus,zimbraMailTransport,zimbraDomainName,zimbraDomainType,zimbraPrefMailLocalDeliveryDisabled by dn.children="cn=admins,cn=zimbra" write by dn.base="uid=zmpostfix,cn=appaccts,cn=zimbra" read by * none
olcAccess: {9}to attrs=entry by dn.children="cn=admins,cn=zimbra" write by * read
olcLastMod: TRUE
olcMaxDerefDepth: 15
olcReadOnly: FALSE
olcRootDN: cn=config
olcSizeLimit: unlimited
olcTimeLimit: unlimited
olcMonitoring: TRUE
olcDbDirectory: /opt/zimbra/data/ldap/hdb/db
olcDbCacheSize: 25000
olcDbCheckpoint: 64 5
olcDbConfig: {0}#
olcDbConfig: {1}# Set the database in memory cache size.
olcDbConfig: {2}#
olcDbConfig: {3}set_cachesize 0 52428800 0
olcDbConfig: {4}
olcDbConfig: {5}#
olcDbConfig: {6}# Set database flags.
olcDbConfig: {7}# Automatically remove log files that are no longer needed.
olcDbConfig: {8}set_log_config DB_LOG_AUTO_REMOVE
olcDbConfig: {9}
olcDbConfig: {10}#
olcDbConfig: {11}# Set log values.
olcDbConfig: {12}#
olcDbConfig: {13}set_lg_regionmax 262144
olcDbConfig: {14}set_lg_max 10485760
olcDbConfig: {15}set_lg_bsize 2097152
olcDbConfig: {16}set_lg_dir /opt/zimbra/data/ldap/hdb/logs
olcDbConfig: {17}# Increase locks
olcDbConfig:: ezE4fXNldF9sa19tYXhfbG9ja3MJMzAwMA==
olcDbConfig:: ezE5fXNldF9sa19tYXhfb2JqZWN0cwkxNTAw
olcDbConfig:: ezIwfXNldF9sa19tYXhfbG9ja2VycwkxNTAw
olcDbNoSync: FALSE
olcDbDirtyRead: FALSE
olcDbIDLcacheSize: 25000
olcDbIndex: objectClass eq
olcDbIndex: entryUUID eq
olcDbIndex: entryCSN eq
olcDbIndex: cn pres,eq,sub
olcDbIndex: uid pres,eq
olcDbIndex: zimbraForeignPrincipal eq
olcDbIndex: zimbraYahooId eq
olcDbIndex: zimbraId eq
olcDbIndex: zimbraVirtualHostname eq
olcDbIndex: zimbraVirtualIPAddress eq
olcDbIndex: zimbraMailDeliveryAddress eq,sub
olcDbIndex: zimbraAuthKerberos5Realm eq
olcDbIndex: zimbraMailForwardingAddress eq
olcDbIndex: zimbraMailCatchAllAddress eq,sub
olcDbIndex: zimbraShareInfo sub
olcDbIndex: zimbraMailTransport eq
olcDbIndex: zimbraMailAlias eq,sub
olcDbIndex: zimbraACE sub
olcDbIndex: zimbraDomainName eq,sub
olcDbIndex: mail pres,eq,sub
olcDbIndex: zimbraCalResSite eq,sub
olcDbIndex: givenName pres,eq,sub
olcDbIndex: displayName pres,eq,sub
olcDbIndex: sn pres,eq,sub
olcDbIndex: zimbraCalResRoom eq,sub
olcDbIndex: zimbraCalResCapacity eq
olcDbIndex: zimbraCalResBuilding eq,sub
olcDbIndex: zimbraCalResFloor eq,sub
olcDbLinearIndex: FALSE
olcDbMode: 0600
olcDbSearchStack: 16
olcDbShmKey: 0
olcDbCacheFree: 1000
olcDbDNcacheSize: 0
structuralObjectClass: olcHdbConfig
entryUUID: 152ab0a8-333e-102d-8700-d562901af228
creatorsName: cn=config
createTimestamp: 20081020215916Z
entryCSN: 20081020215916.275992Z#000000#000#000000
modifiersName: cn=config
modifyTimestamp: 20081020215916Z
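The three `olcDbConfig::` values above are base64-encoded because LDIF base64-encodes any value containing special characters (here an embedded tab). They decode to ordinary DB_CONFIG directives:

```shell
# Decode the base64 olcDbConfig:: values from the entry above.
for v in ezE4fXNldF9sa19tYXhfbG9ja3MJMzAwMA== \
         ezE5fXNldF9sa19tYXhfb2JqZWN0cwkxNTAw \
         ezIwfXNldF9sa19tYXhfbG9ja2VycwkxNTAw
do
    printf '%s' "$v" | base64 -d   # prints e.g. {18}set_lk_max_locks<TAB>3000
    echo
done
```

They match the set_lk_max_* lines in the on-disk DB_CONFIG shown further down.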
The actual DB_CONFIG is:
zimbra@zre-ldap001:~/data/ldap/hdb/db$ cat DB_CONFIG
#
# Set the database in memory cache size.
#
set_cachesize 10 0 0
#
# Set database flags.
# Automatically remove log files that are no longer needed.
set_log_config DB_LOG_AUTO_REMOVE
#
# Set log values.
#
set_lg_regionmax 262144
set_lg_max 10485760
set_lg_bsize 2097152
set_lg_dir /opt/zimbra/data/ldap/hdb/logs
# Increase locks
set_lk_max_locks 3000
set_lk_max_objects 1500
set_lk_max_lockers 1500
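Note that the two cache settings appear to disagree: the cn=config copy carries `set_cachesize 0 52428800 0` while this on-disk DB_CONFIG, the file BDB reads when the environment is opened, carries `set_cachesize 10 0 0`. set_cachesize takes gigabytes, bytes, and a cache-count argument, so the effective sizes work out as:

```shell
# set_cachesize <gbytes> <bytes> <ncache>: effective size = gbytes*2^30 + bytes.
echo $(( 0 * 1073741824 + 52428800 ))   # olcDbConfig copy: 52428800 (50 MiB)
echo $(( 10 * 1073741824 + 0 ))         # on-disk DB_CONFIG: 10737418240 (10 GiB)
```

The 10 GiB figure is the one that actually governs the BDB page cache here.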
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration