--On Sunday, October 08, 2006 4:35 AM +0000 hyc(a)symas.com wrote:
>> Then I tried to syncrepl the entire DB, turning on the empty
>> consumers (crazy idea, I know ;) ), but the provider's allocated memory
>> again reached 4GB and... bum
>>
>> some ideas?
4GB is the per-process memory limit on 32-bit Solaris, IIRC, just like it is
2GB on 32-bit Linux. Try building everything 64-bit. Another thing to try, from
when I ran my servers on Solaris, is to raise the limits in /etc/system on the
maximum amount of memory that can be allocated; I vaguely recall having to do
this for my Solaris systems. Also, with BDB on Solaris/SPARC, you want to use
the "shm_key" directive in slapd.conf so that BDB uses shared memory segments.
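Roughly, the relevant bit of the back-hdb section of slapd.conf would look like
this (the suffix, directory, and key value are just placeholders; the key only
needs to be a non-zero integer that is unique per BDB environment on the host):

    database        hdb
    suffix          "dc=example,dc=org"
    directory       /var/openldap-data
    # use a SysV shared memory segment for the BDB environment
    # instead of memory-mapped files
    shm_key         42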
Here are the last portions of /etc/system from my old Solaris LDAP servers:
* turn off executable stacks
set noexec_user_stack = 1
set noexec_user_stack_log = 1
* increase the size of the kernel stack to 24k for Solaris 8/64-bit
set rpcmod:svc_default_stksize=0x6000
set lwp_default_stksize=0x6000
* force load the shared module kernel information.
forceload: sys/shmsys
* allow shared memory segments of up to 3 GB (default 1MB)
* See http://www.sun.com/sun-on-net/itworld/UIR960101perf.html
set shmsys:shminfo_shmmax=3221225472
* Increase memory performance when filesystem is being heavily used.
* See http://www.sun.com/sun-on-net/performance/priority_paging.html
set priority_paging=1
* Increase memory performance when filesystem is being heavily used.
* See http://www.samag.com/documents/s=1323/sam0110e/0110e.htm
set maxpgio=25468
set slowscan=500
* autoup influences how much RAM is checked by fsflush every 5 seconds.
* Default = 30. Increase autoup to decrease mem management overhead.
* See http://docs.sun.com/db/doc/806-7009/6jftnqsin?a=view
set autoup = 60
* Up tcp conn hash size. Defaults to 256.
* See http://www.deny-all.com/en/solsecu/tech/tuning.html
set tcp:tcp_conn_hash_size = 16384
* Up file descriptor limit. Defaults to 1024.
set rlim_fd_max = 16384
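For the 64-bit build mentioned above, a rough sketch with gcc (the flags and
BDB paths are placeholders; BDB itself must be built 64-bit the same way):

    env CFLAGS="-m64" CPPFLAGS="-I/usr/local/BerkeleyDB.4.2/include" \
        LDFLAGS="-m64 -L/usr/local/BerkeleyDB.4.2/lib" \
        ./configure --enable-bdb --enable-hdb
    make depend && make && make install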
--Quanah
--
Quanah Gibson-Mount
Principal Software Developer
ITS/Shared Application Services
Stanford University
GnuPG Public Key: http://www.stanford.edu/~quanah/pgp.html
PS: 4GB of RAM is probably not enough for a 20GB database; you won't be
able to configure caches large enough to get decent search performance.
I think 8GB would be a practical minimum here.
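For scale, the slapd-side cache directives in question look like this (the
numbers are purely illustrative and have to be sized against your entry count
and available RAM):

    # in the back-hdb database section of slapd.conf
    cachesize       100000    # entries kept in slapd's in-memory entry cache
    idlcachesize    300000    # IDL cache slots; ~3x cachesize is the usual advice for hdb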
Paolo.Rossi.con(a)h3g.it wrote:
> Full_Name: Paolo Rossi
> Version: 2.3.27
> OS: Solaris 8
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (88.149.168.114)
>
>
> Hi, during some tests on a very large DB, done to see how syncrepl works in
> this scenario, I've found some strange behavior:
>
> Solaris 8 on 2xUSIII+ 4GB RAM
> openLDAP 2.3.27
> BDB 4.2.52.4
>
> backend hdb
>
> 1 provider, 1 consumer, 1 consumer with filter.
>
> On a 1 million DN LDAP tree with 2 sub-DNs for each DN, all the systems work
> fine. When I tried 10 million DNs with 3 sub-DNs each (a very big LDAP, the
> openldap-data dir is about 20GB):
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
OpenLDAP Core Team http://www.openldap.org/project/
No ideas; this problem doesn't occur on my Linux system. We've slapadd'd
and slapcat'd databases of over a terabyte in size, with hundreds of
millions of entries, and never seen slapcat grow beyond the size of the
BDB cache. How large is your cache in DB_CONFIG?
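For reference, that cache is set via DB_CONFIG in the database directory,
something like this (the size is only an example and has to fit in the process
address space along with everything else):

    # 2GB BDB cache in a single segment
    set_cachesize       2 0 1
    # transaction log tuning
    set_lg_regionmax    262144
    set_lg_bsize        2097152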
Paolo.Rossi.con(a)h3g.it wrote:
> Full_Name: Paolo Rossi
> Version: 2.3.27
> OS: Solaris 8
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (88.149.168.114)
>
>
> Hi, during some tests on a very large DB, done to see how syncrepl works in
> this scenario, I've found some strange behavior:
>
> Solaris 8 on 2xUSIII+ 4GB RAM
> openLDAP 2.3.27
> BDB 4.2.52.4
>
> backend hdb
>
> 1 provider, 1 consumer, 1 consumer with filter.
>
> On a 1 million DN LDAP tree with 2 sub-DNs for each DN, all the systems work
> fine. When I tried 10 million DNs with 3 sub-DNs each (a very big LDAP, the
> openldap-data dir is about 20GB):
>
> slapadd with -w on the producer: it works.
> some ldapsearches: they work.
>
> stop the producer,
> slapcat the producer to obtain the LDIF for consumer preload and... bum
>
> after about 150 minutes of slapcat, memory was full (see the top output below)
>
>   PID | USERNAME | SIZE  | RES   | TIME   | CPU    | COMMAND
> 21495 | ldap     | 4072M | 3591M | 150:07 | 21.36% | slapd
>
> memory full, then a core dump and these console messages:
>
> ch_malloc of 16392 bytes failed
> ch_malloc.c:57: failed assertion `0'
>
> the output LDIF was about 85% of the full LDAP
>
> I tried again with the same results.
>
>
> Then I tried to syncrepl the entire DB, turning on the empty consumers
> (crazy idea, I know ;) ), but the provider's allocated memory again reached
> 4GB and... bum
>
> core dumped
>
> in the slapd.log
>
> ch_calloc of 1 elems of 80 bytes failed
>
> second try:
> ch_malloc of 16 bytes failed
>
> seems to be an issue like ITS#4010
>
> some ideas?
>
> Regards
--
-- Howard Chu
Chief Architect, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc
OpenLDAP Core Team http://www.openldap.org/project/
Full_Name: Howard Chu
Version: HEAD/re24
OS:
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (24.126.120.178)
Submitted by: hyc
If the current master commits an update but crashes before the change is
propagated to the mirror, and then new updates are subsequently committed on the
new master, the non-propagated changes will never be picked up, because the new
master's context is newer than everything on the crashed master.
The hole can be plugged by keeping a per-consumer contextCSN in addition to the
overall contextCSN, so that the old state isn't lost.
While we're at it, we should add a consumer option for specifying additional
locations in which to store contextCSNs. This would e.g. solve ITS#4626 by
allowing a consumer in a glued subordinate to keep its parent context up to
date.
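To illustrate what the extra state might look like (the attribute layout and
CSN values here are purely hypothetical, for discussion only):

    dn: dc=example,dc=org
    # overall context, reflecting the newest change committed on the new master (sid 002)
    contextCSN: 20061008120100.000000Z#000000#002#000000
    # per-consumer value preserving the last state seen from sid 001, so the
    # change it committed just before crashing is not silently skipped
    contextCSN: 20061008115900.000000Z#000000#001#000000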
> Pierangelo Masarati wrote:
>
>> I don't see an error in OpenLDAP software here. authz-regexp matching is
>> designed to succeed only if the identity is unambiguously resolved to
>> exactly one DN. I'm afraid I cannot even imagine how slapd could decide to
>> pick one out of many DNs when authenticating a user; I guess no one else
>> can.
>>
>> p.
>>
>
> Matched DNs are unique, as they describe the same entry:
>
> dn: uid=works,dc=example,dc=org
> objectClass: extensibleObject
> uid: works
>
> dn: cn=worksalso,dc=example,dc=org
> objectClass: extensibleObject
> cn: worksalso
>
> dn: uid=fails,dc=example,dc=org
> objectClass: extensibleObject
> uid: fails
> cn: fails
>
> "(|(cn=works)(uid=works))" and "(|(cn=worksalso)(uid=worksalso))" matching
> either attribute, whereas "(|(cn=works)(uid=works))" matches twice, but
> describes the same object.
>
> ldapsearching for "(|(cn=fails)(uid=fails))" will also return only the single
> unique entry "uid=fails,dc=example,dc=org"
What authz-regexp does is run an internal search. If the search returns
exactly one entry, then there's no way that entry can be, say, returned twice;
otherwise it would also be returned twice when running a regular search.
Moreover, I've recreated your scenario in 2.3.27 and HEAD, and everything
seems to work as expected in all cases. I suspect something else is wrong, for
example that the data in your DB is not what it appears to be. Guessing and
assuming are bad practice when debugging software. Please perform the
offending operations with full logs on, and check that your data is not
duplicated (for example, you might not see duplicates because they're hidden
by ACLs). Unless you can show a clear malfunction of the software (which I
don't see here), I'm inclined towards closing this ITS.
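For reference, the kind of mapping under discussion looks roughly like this
(the SASL mechanism here is arbitrary and the base DN is taken from your
entries):

    authz-regexp
        "uid=([^,]+),cn=digest-md5,cn=auth"
        "ldap:///dc=example,dc=org??sub?(|(uid=$1)(cn=$1))"

The internal search generated from that URL must return exactly one entry for
the identity mapping to succeed.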
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.n.c.
Via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
------------------------------------------
Office: +39.02.23998309
Mobile: +39.333.4963172
Email: pierangelo.masarati(a)sys-net.it
------------------------------------------
Full_Name: Paolo Rossi
Version: 2.3.27
OS: Solaris 8
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (88.149.168.114)
Hi, during some tests on a very large DB, done to see how syncrepl works in
this scenario, I've found some strange behavior:
Solaris 8 on 2xUSIII+ 4GB RAM
openLDAP 2.3.27
BDB 4.2.52.4
backend hdb
1 provider, 1 consumer, 1 consumer with filter.
On a 1 million DN LDAP tree with 2 sub-DNs for each DN, all the systems work
fine. When I tried 10 million DNs with 3 sub-DNs each (a very big LDAP, the
openldap-data dir is about 20GB):
slapadd with -w on the producer: it works.
some ldapsearches: they work.
stop the producer,
slapcat the producer to obtain the LDIF for consumer preload and... bum
after about 150 minutes of slapcat, memory was full (see the top output below)

  PID | USERNAME | SIZE  | RES   | TIME   | CPU    | COMMAND
21495 | ldap     | 4072M | 3591M | 150:07 | 21.36% | slapd

memory full, then a core dump and these console messages:
ch_malloc of 16392 bytes failed
ch_malloc.c:57: failed assertion `0'
the output LDIF was about 85% of the full LDAP
I tried again with the same results.
Then I tried to syncrepl the entire DB, turning on the empty consumers
(crazy idea, I know ;) ), but the provider's allocated memory again reached
4GB and... bum
core dumped
in the slapd.log
ch_calloc of 1 elems of 80 bytes failed
second try:
ch_malloc of 16 bytes failed
seems to be an issue like ITS#4010
some ideas?
Regards
> slapd crashed with an assertion failure at lastmod.c line 593 when using the
> slapmodrdn command. The fault is reproducible. The workaround is to remove
> the lastmod overlay from the config file.
There is no slapmodrdn tool in the OpenLDAP suite; I assume you mean
ldapmodrdn. In any case, slapo-lastmod(5) is essentially unmaintained and
basically useless. I suggest you use slapo-accesslog(5) instead.
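A minimal setup looks roughly like this (suffixes, directories, and the purge
interval are placeholders to adapt to your deployment):

    # log database, holding the change records
    database        hdb
    suffix          cn=accesslog
    directory       /var/openldap-accesslog
    rootdn          cn=accesslog
    index           reqStart eq

    # main database
    database        hdb
    suffix          "dc=example,dc=org"
    directory       /var/openldap-data
    overlay         accesslog
    logdb           cn=accesslog
    logops          writes
    logsuccess      TRUE
    # keep 7 days of log entries, purging once a day
    logpurge        07+00:00 01+00:00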
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team
SysNet s.n.c.
Via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
------------------------------------------
Office: +39.02.23998309
Mobile: +39.333.4963172
Email: pierangelo.masarati(a)sys-net.it
------------------------------------------
Full_Name: Andrew Neil Parker
Version: 2.3.27
OS: RHEL 3.2
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (194.60.106.5)
slapd crashed with an assertion failure at lastmod.c line 593 when using the
slapmodrdn command. The fault is reproducible. The workaround is to remove the
lastmod overlay from the config file.
Trace output using -d 1 is provided on ftp site.
150 Opening BINARY mode data connection for 'andy.n.parker-061006.ext'.
226 Transfer complete (unique file name:andy.n.parker-061006.ext).
3036 bytes sent in 0.00025 seconds (1.2e+04 Kbytes/s)