running slapcat on a live openldap installation?
by Adam Williams
I have two totally separate openldap 2.4 installations, both are live.
One is at work (roark) and the other is at home (missioncontrol).
On roark, when I run slapcat it fails with the errors below. Why is that?
[root@roark ~]# slapcat -v -l /root/backup.ldif -b
"dc=mdah,dc=state,dc=ms,dc=us"
bdb_db_open: database "dc=mdah,dc=state,dc=ms,dc=us": unclean shutdown
detected; attempting recovery.
bdb_db_open: database "dc=mdah,dc=state,dc=ms,dc=us": recovery skipped
in read-only mode. Run manual recovery if errors are encountered.
bdb_db_open: database "dc=mdah,dc=state,dc=ms,dc=us": alock_recover failed
bdb_db_close: database "dc=mdah,dc=state,dc=ms,dc=us": alock_close failed
backend_startup_one: bi_db_open failed! (-1)
slap_startup failed
however, it runs fine on missioncontrol:
[root@missioncontrol ~]# slapcat -v -l /root/backup.ldif -b
"dc=squeezer,dc=net"
bdb_monitor_db_open: monitoring disabled; configure monitor database to
enable
# id=00000001
# id=00000002
# id=00000003
# id=00000004
# id=00000005
# id=00000006
# id=00000007
# id=00000008
# id=00000009
# id=0000000a
# id=0000000b
# id=0000000c
# id=0000000d
# id=0000000e
# id=0000000f
# id=00000010
# id=00000011
# id=00000012
# id=00000013
# id=00000014
# id=00000015
# id=00000016
# id=00000017
# id=00000018
# id=00000019
# id=0000001a
# id=0000001b
# id=0000001c
# id=0000001d
# id=0000001e
# id=0000001f
# id=00000020
# id=00000021
# id=00000022
# id=00000023
# id=00000024
# id=00000025
# id=00000026
# id=00000027
# id=00000028
Both systems have the same configuration, except that one suffix is
dc=squeezer,dc=net and the other is dc=mdah,dc=state,dc=ms,dc=us;
otherwise their slapd.conf files are identical. On roark, I can get a
slapcat-like dump with:
ldapsearch -v -x -h roark.mdah.state.ms.us -D
"cn=Manager,dc=mdah,dc=state,dc=ms,dc=us" -w xxxxxxxxx + "*"
but I'd also like to have a slapcat dump as a secondary backup.
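A hedged sketch of the usual ways around the alock/unclean-shutdown error. Paths, the service name, and the ldap user are assumptions for a Red Hat-style layout, and the stale-lock explanation is a common-cause guess, not a diagnosis:

```shell
# Option 1 (assumption: slapd runs as user "ldap"): run slapcat as the
# same user slapd runs as, so slapcat sees consistent lock/alock state.
sudo -u ldap slapcat -l /tmp/backup.ldif -b "dc=mdah,dc=state,dc=ms,dc=us"

# Option 2: stop slapd, let Berkeley DB perform manual recovery, restart.
service ldap stop
db_recover -v -h /var/lib/ldap   # BDB utility; may be named db4.x_recover
service ldap start
slapcat -l /root/backup.ldif -b "dc=mdah,dc=state,dc=ms,dc=us"
```

If slapd itself is healthy and only slapcat reports the unclean shutdown, a stale lock left by an earlier crash or a user mismatch is the more likely cause.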
13 years, 9 months
How to use SHA-2 passwords?
by Michael Ströder
It seems support for SHA-2 passwords was added in release 2.4.14 (see
also ITS#5660). How do I make use of them, and what is the password scheme?
Ciao, Michael.
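For reference, the SHA-2 schemes from ITS#5660 live in the contrib module under contrib/slapd-modules/passwd/sha2, which must be built and loaded separately; a slapd.conf sketch (the module path and the chosen scheme are illustrative):

```conf
# Load the contrib SHA-2 password module (build it first from
# contrib/slapd-modules/passwd/sha2 in the source tree)
moduleload pw-sha2.la

# Hash newly set passwords with one of the schemes the module registers:
# {SHA256}, {SHA384} or {SHA512}
password-hash {SHA512}
```

Existing userPassword values are untouched; only passwords set after the change use the new scheme.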
13 years, 9 months
LDAP cn=config Newbie Question
by Calos Lopez
Hi there,
I'm entering the OpenLDAP world and have installed 2.4.16 from scratch.
Since version 2.3, the OpenLDAP configuration can be stored as LDIF
entries under cn=config, but many examples still assume we are working
with a slapd.conf file.
For instance, I'm trying to set up a replication scenario with syncrepl
following the tutorial at
http://www.zytrax.com/books/ldap/ch7/#ol-syncrepl, but all its examples
are based on slapd.conf files. My question: where do I enter the config
settings for syncrepl? In olcDatabase={1}bdb or in
olcDatabase={-1}frontend? What are the names of the attributes I must
insert as the equivalent of these slapd.conf lines:
syncrepl rid=000
provider=ldap://master-ldap.example.com
type=refreshAndPersist
retry="5 5 300 +"
searchbase="dc=example,dc=com"
attrs="*,+"
bindmethod=simple
binddn="cn=admin,ou=people,dc=example,dc=com"
credentials=dirtysecret
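For comparison, under cn=config the snippet above becomes an olcSyncrepl value on the database entry itself (not on the frontend); a hedged LDIF sketch, where the {1}bdb RDN is an assumption about the database's position in your config:

```ldif
dn: olcDatabase={1}bdb,cn=config
changetype: modify
add: olcSyncrepl
olcSyncrepl: rid=000
  provider=ldap://master-ldap.example.com
  type=refreshAndPersist
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="cn=admin,ou=people,dc=example,dc=com"
  credentials=dirtysecret
```

Applied with something like `ldapmodify -Y EXTERNAL -H ldapi:///` against the config database.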
I'm really a bit confused about how to configure the LDAP server under
this new cn=config paradigm. Any help would be very useful.
Best regards
13 years, 9 months
openldap 2.4.16 hangs with dncachesize smaller than max number of records
by Rodrigo Costa
OpenLdap group,
I'm seeing what may be a real problem. I have a DB with around 4 million
entrances. In my slapd.conf I use the following cache constraints:
#Cache values
cachesize 10000
dncachesize 3000000
idlcachesize 10000
cachefree 10
I'm also running the system on 2 machines in MirrorMode without problems
related to this configuration.
My DB has exactly 3,882,992 entrances, so the dncachesize is smaller
than the number of records. After I set the dncache constraint to a size
smaller than the number of records (a memory concern), I start to see
issues with ldapsearch, for example.
Once the number of entrances in the cache reaches the constraint (it
always passes it a little), the system hangs on new searches, apparently
for records not yet cached. If an uncached record is searched,
ldapsearch binds but then hangs during the search. One example can be
seen below:
[root@brtldp11 ~]# time ldapsearch -LLL -x -D
"cn=admin,ou=CONTENT,o=domain,c=fr" -w secret -b
"ou=CONTENT,o=domain,c=fr" -H ldap://10.142.15.170:389
'pnnumber=+554184011071'
real 0m40.140s
user 0m0.003s
sys 0m0.001s
The command only stopped after I pressed CTRL-C; otherwise it would stay
in this state forever. This happens after I ldapsearch the full DB and
the cache is filled.
If I then search for this same record on the mirror, the result comes
back very fast. See this example:
[root@brtldp11 ~]# time ldapsearch -LLL -x -D
"cn=admin,ou=CONTENT,o=domain,c=fr" -w secret -b
"ou=CONTENT,o=domain,c=fr" -H ldap://10.142.15.172:389
'pnnumber=+554184011071'
dn:
pnnumber=\2B554184011071,uid=1219843774965\2B554184011071,ou=REPOSITORY,ou
=CONTENT,o=domain,c=fr
subpnid: 0
pntype: 2
pncaps: 7
objectClass: phoneinfo
pnnumber: +554184011071
real 0m0.257s
user 0m0.002s
sys 0m0.003s
So this does not appear to be a record problem, since both systems are
in mirror mode and the mirror returns in a reasonable time, around 257
milliseconds; once cached it can be even faster.
Here is the number of entrances I could search before the system hangs:
[root@brtldp12 ~]# wc -l /backup/temp2.txt
3078804 /backup/temp2.txt
The count was 3,078,804 records: even though the dncache boundary is
3,000,000, it always passes it a little.
It appears that if a cache boundary is smaller than the number of DB
records, slapd can hang when searching for records not yet cached. Since
I have multiple DBs with records on this order, I needed to set this
boundary smaller than the number of records. My expectation was some
performance degradation after the cache fills, but not that the system
could hang (at least for non-cached records).
After this situation occurs, the slapd process does not end correctly;
it stops only after a kill -9.
I would like to know if anyone has already run into this situation. I
believe it can be easily reproduced with a cache configured smaller than
the number of records, and that any DB in this situation would show the
same behavior.
Any comments on whether this could be a configuration issue or something
else? Should this become an ITS?
Thanks,
Rodrigo.
13 years, 9 months
Re: openldap 2.4.16 hangs with dncachesize smaller than max number of records
by Rodrigo Costa
Howard,
Maybe I'm not understanding your explanation very well.
What I can see is that until the dncache is filled, any search is very
fast. After it is filled, even after stopping the original search and
starting a new one, the pace is very slow, with some queries taking
around 8 seconds to finish.
real 0m8.001s
user 0m0.002s
sys 0m0.006s
Once the dncache is filled, is there any configuration, such as
cachefree, that could bring searches back to the original pace?
Thanks,
Rodrigo.
Howard Chu wrote:
> Rodrigo Costa wrote:
>
>> Howard,
>>
>> The idea was exactly to use a large dncachesize so that searches and
>> random database access would not be affected.
>>
>> The issue is that after the cache is filled, searches hang from time
>> to time even though there are many entrances in the cache. I was
>> expecting some performance impact from removing or re-caching an
>> entrance, but I thought that if ldapsearch were stopped and a new one
>> started, the search would be faster, since there are millions of
>> entrances already cached.
>
> Except that there *aren't* millions of entries already cached; you've
> only cached a few thousand entries. And when you have such a tiny
> fraction of the database cached, the cache is going to be mostly
> useless for random access patterns. At this point you're making me
> repeat myself, so I'm stopping here.
>
> By the way, an "entrance" is a doorway. A directory object is an
> "entry" ...
> -- -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
>
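Howard's point, that a cache holding only a tiny fraction of the database is nearly useless for random access, can be illustrated with a short standalone simulation (not OpenLDAP code; all sizes are illustrative, matching the 4M-entry DB and the 10,000-entry cachesize from this thread):

```python
# Illustrative sketch: steady-state hit rate of an LRU cache under
# uniformly random lookups is roughly cache_size / db_size.
import random
from collections import OrderedDict

def simulate_hit_rate(db_size, cache_size, lookups, seed=42):
    rng = random.Random(seed)
    cache = OrderedDict()          # simple LRU cache of entry IDs
    hits = 0
    for _ in range(lookups):
        entry_id = rng.randrange(db_size)   # uniform random access
        if entry_id in cache:
            hits += 1
            cache.move_to_end(entry_id)     # mark as recently used
        else:
            cache[entry_id] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / lookups

# With a cache holding ~0.25% of the entries, the hit rate is ~0.25%.
print(simulate_hit_rate(db_size=4_000_000, cache_size=10_000, lookups=50_000))
```

This is why the "millions of already cached entrances" Rodrigo expected never materialize: with cachesize 10000 against 4 million entries, almost every random lookup misses the entry cache.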
13 years, 9 months
Re: openldap 2.4.16 hangs with dncachesize smaller than max number of records
by Rodrigo Costa
Howard,
The idea was exactly to use a large dncachesize so that searches and
random database access would not be affected.
The issue is that after the cache is filled, searches hang from time to
time even though there are many entrances in the cache. I was expecting
some performance impact from removing or re-caching an entrance, but I
thought that if ldapsearch were stopped and a new one started, the
search would be faster, since there are millions of entrances already
cached.
What I'm seeing is exactly the opposite: after the cache is filled, even
a new search or query is very slow. See the example below, taken after I
stopped the first ldapsearch that filled the cache and started a new one:
[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep
-e '^pnnumber' |wc -l
Wed Jun 17 21:29:10 BRT 2009
3380
Wed Jun 17 21:29:11 BRT 2009
3380
[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep
-e '^pnnumber' |wc -l
Wed Jun 17 21:29:21 BRT 2009
3514
Wed Jun 17 21:29:22 BRT 2009
3536
Note in the first two lines that slapd appears to hang: even after 1
second there is no increase in the file to which I'm dumping the
searched records. I was expecting the opposite, since cached entrances
held in memory should be returned to ldapsearch faster.
I'm not totally sure how the cache works, but I do not see any I/O, or
any other HW resource limitation, that could justify these hangs.
I think this behavior is related to dncachesize being smaller than the
maximum number of records. I'm not sure whether a look at the code would
reveal some constraint that kicks in after the cache is filled and
causes this behavior. I can attach gdb and try to gather more
information for you.
Thanks,
Rodrigo.
Howard Chu wrote:
> Rodrigo Costa wrote:
>>
>> Howard,
>>
>> I tried bigger caches, but I do not have enough memory for them. That
>> is why I only raised dncachesize, to speed up search queries.
>>
>> I also have this same database running on an old openldap version
>> (2.1.30), with even a few more records in it. In this version, as I
>> understand it, there is no cache in OpenLDAP, only in BDB.
>
> False. back-bdb was originally written without entry caching, but it
> was never released that way, and entry and IDL caching are both
> present in 2.1.30.
>
>> See how it behaves in terms of memory and CPU on openLDAP 2.1.30:
>>
>> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU
>> COMMAND
>> 3651 root 24 0 136M 135M 126M S 0.0 1.1 0:00 1 slapd
>>
>> Note the really small memory consumption and the quite reasonable
>> performance. The only issue I have with this version is the
>> replication mechanism; I would like to increase availability by using
>> syncrepl instead of slurpd.
>>
>> The problem is that for versions after 2.1, it looks like we need
>> enough memory to cache the whole database, since there are many
>> situations where slapd appears to increase CPU or memory usage and
>> performance drops considerably.
>
> Not all, but as the documentation says, the dncache needs to be that
> large. None of the other caches are as critical.
>
>> I tried removing every cache constraint from slapd.conf to see if the
>> previous version's performance, reading directly from disk, would be
>> reproduced. I saw a good start, around 500 returns per second, but
>> after some time slapd hung and did not return any more records to
>> ldapsearch. It also consumed all 4 CPUs:
>>
>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
>> COMMAND
>> 4409 ldap 15 0 400m 183m 132m S 400 1.5 27:22.70 slapd
>>
>> And even after I stop the ldapsearch, the CPU consumption continues
>> across all CPUs. I believe it entered an infinite loop.
>
> You should get a gdb snapshot of this situation so we can see where
> the loop is occurring.
>
>> I do not have a heavily loaded system, but given the number of
>> records and DBs, I have some memory constraints.
>
> Performance is directly related to memory. If the DB is too large to
> fit in memory then you're stuck with disk I/Os for most operations and
> nothing can improve that. You're being misled by your observation of
> "initially good results" - you're just seeing the filesystem cache at
> first, but when it gets exhausted then you see the true performance of
> your disks.
>
>> I also tried some smaller caches, like :
>>
>> cachesize 500000
>> dncachesize 500000
>> idlcachesize 500000
>> cachefree 10000
>>
>> But it also hangs the search after some time.
>>
>> I was wondering if there is a way to run slapd without caching,
>> reading from disk (like the first read that inserts a record into the
>> cache), which is enough for small/medium systems in terms of querying.
>> That way I could keep the 2.1.30 behavior together with the new
>> syncrepl replication.
>
> The 2.4 code is quite different from 2.1, there is no way to get the
> same behavior.
>
> -- -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
>
13 years, 9 months
Re: openldap 2.4.16 hangs with dncachesize smaller than max number of records
by Rodrigo Costa
Howard,
I tried bigger caches, but I do not have enough memory for them. That is
why I only raised dncachesize, to speed up search queries.
I also have this same database running on an old openldap version
(2.1.30), with even a few more records in it. In this version, as I
understand it, there is no cache in OpenLDAP, only in BDB. See how it
behaves in terms of memory and CPU on openLDAP 2.1.30:
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
3651 root 24 0 136M 135M 126M S 0.0 1.1 0:00 1 slapd
Note the really small memory consumption and the quite reasonable
performance. The only issue I have with this version is the replication
mechanism; I would like to increase availability by using syncrepl
instead of slurpd.
The problem is that for versions after 2.1, it looks like we need enough
memory to cache the whole database, since there are many situations
where slapd appears to increase CPU or memory usage and performance
drops considerably.
I tried removing every cache constraint from slapd.conf to see if the
previous version's performance, reading directly from disk, would be
reproduced. I saw a good start, around 500 returns per second, but after
some time slapd hung and did not return any more records to ldapsearch.
It also consumed all 4 CPUs:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
4409 ldap 15 0 400m 183m 132m S 400 1.5 27:22.70 slapd
And even after I stop the ldapsearch, the CPU consumption continues
across all CPUs. I believe it entered an infinite loop.
I do not have a heavily loaded system, but given the number of records
and DBs, I have some memory constraints.
I also tried some smaller caches, like :
cachesize 500000
dncachesize 500000
idlcachesize 500000
cachefree 10000
But it also hangs the search after some time.
I was wondering if there is a way to run slapd without caching, reading
from disk (like the first read that inserts a record into the cache),
which is enough for small/medium systems in terms of querying. That way
I could keep the 2.1.30 behavior together with the new syncrepl
replication.
Thanks a lot !
Rodrigo.
Howard Chu wrote:
> Quanah Gibson-Mount wrote:
>> --On Wednesday, June 17, 2009 5:25 AM -0700 Rodrigo Costa
>> <rlvcosta(a)yahoo.com> wrote:
>>
>>> Could this be a configuration issue? I do not think but I'm putting
>>> below my cache configuration :
>
> Clearly it is a configuration issue.
>
>>> # Cache values
>>> cachesize 10000
>>> dncachesize 3000000
>>> idlcachesize 10000
>>> cachefree 10
>>
>> These values are extremely low (except for dncachesize) for a system
>> with 4
>> million records. I'd expect something more like:
>>
>> cachesize 4000000
>> dncachesize 3000000
>
> dncachesize must always be >= cachesize
>
>> idlcachesize 8000000
>> cachefree 100000
>>
>> or something along those lines. Particularly in the case of your
>> ldapsearch, cachesize is likely the most relevant. Try playing with the
>> settings a bit more.
>
> -- -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
>
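Combining Quanah's suggested numbers with Howard's constraint (dncachesize must always be >= cachesize) gives a sketch like the following. These are illustrative starting points for a ~4M-entry DB, not settings verified on this dataset:

```conf
# Cache values (illustrative; tune against available RAM)
cachesize    4000000   # entry cache, ideally close to the entry count
dncachesize  4000000   # must always be >= cachesize
idlcachesize 8000000   # typically a small multiple of cachesize
cachefree    100000    # entries freed per purge when a cache is full
```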
13 years, 9 months
Re: openldap 2.4.16 hangs with dncachesize smaller than max number of records
by Rodrigo Costa
Howard,
I downloaded the latest HEAD and compiled it for testing.
From my tests it looks like slapd no longer hangs, but there are still
some performance issues. I kept the same 3,000,000 dncachesize boundary
and a DB with around 4,000,000 records. The ldapsearch runs fine at a
pace of more than 1500 records read per second until it reaches the
dncachesize boundary, where the pace drops to less than 20 records per
second.
For an uncached record, like one read by a first-time ldapsearch, the
limiting factor should be disk I/O, so I was expecting the pace to stay
very similar even with the dncache filled, since only disk I/O would be
limiting. I monitored system resources, and after the dncache filled I
did not see any increase in disk I/O or any other HW limiting factor
that could explain such a considerable drop.
The system took around 20 minutes to reach the 3,000,000-record mark,
but it would then take more than 10 hours to finish the remaining
1,000,000.
Stranger still, and unlike the official openLDAP 2.4.16, if I stop the
ldapsearch and start a new one, the pace stays at around 20 records per
second even though there are now records in memory.
This creates performance issues: the system seems to enter a
cache-controlled state where records are read very slowly by the cache
logic, not because of a system resource (HW) limitation. Please see
below some tests where these paces can be seen:
[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep
-e '^pnnumber' |wc -l
Wed Jun 17 00:27:14 BRT 2009
224
Wed Jun 17 00:27:15 BRT 2009
246
[root@brtldp12 ~]# date;cat /backup/test_temp_CONTENT.ldif|egrep -e
'^pnnumber' |wc -l;sleep 1;date;cat /backup/test_temp_CONTENT.ldif|egrep
-e '^pnnumber' |wc -l
Wed Jun 17 00:28:03 BRT 2009
3089
Wed Jun 17 00:28:04 BRT 2009
4700
I used the mated (replication) machine, whose cache had not filled, and
also started a new ldapsearch on the master machine to show that the
pace stays slow.
I'm not sure this is the expected behavior: after the cache fills, the
system starts to respond very slowly even though there are already
cached records that should speed things up.
Could this be a configuration issue? I don't think so, but I'm including
my cache configuration below:
#Cache values
cachesize 10000
dncachesize 3000000
idlcachesize 10000
cachefree 10
Thanks a lot!
Rodrigo.
Howard Chu wrote:
> Rodrigo Costa wrote:
>>
>> OpenLdap group,
>>
>> I'm seeing what may be a real problem. I have a DB with around 4
>> million entrances. In my slapd.conf I use the following cache
>> constraints:
>
>> Any comments if this could be a configuration issue or some other
>> related issue? Would this be a ITS?
>
> A number of dncache issues have been fixed already in CVS.
>
> -- -- Howard Chu
> CTO, Symas Corp. http://www.symas.com
> Director, Highland Sun http://highlandsun.com/hyc/
> Chief Architect, OpenLDAP http://www.openldap.org/project/
>
>
13 years, 9 months
2.4.16: sizelimit broken due to ors_slimit is set to SLAPD_DEFAULT_SIZELIMIT
by Christian Fischer
Hi all,
I've upgraded from 2.3.43 to 2.4.16 on gentoo amd64.
Syncrepl could not finish its initial replication due to a sizelimit of
500 entries.
This is a bit surprising, because I've set sizelimit to unlimited and I
had no such trouble with 2.3.43.
I've played a bit with the sizelimit statement: if sizelimit is set to a
value >= 0 and < 500 the behavior is as expected; unlimited (-1) and
values > 500 are ignored.
I've turned on args debugging to see if something differs between the
two versions.
With version 2.3.43, op->ors_slimit is set to 0 when do_search() is
executed; with version 2.4.16, op->ors_slimit is set to 500
(SLAPD_DEFAULT_SIZELIMIT). That explains the different behavior of
limits_check().
With ors_slimit set to SLAPD_DEFAULT_SIZELIMIT, execution reaches
servers/slapd/limits.c:1294, and ors_slimit is only set to
ors_limit->lms_s_soft if the value of ors_limit->lms_s_soft is between 1
and SLAPD_DEFAULT_SIZELIMIT - 1.
I don't know why ors_slimit is initialized to SLAPD_DEFAULT_SIZELIMIT,
but this breaks an unlimited sizelimit as well as size limits greater
than SLAPD_DEFAULT_SIZELIMIT. I think lms_s_soft, not ors_slimit, should
initially be set to SLAPD_DEFAULT_SIZELIMIT.
But as I said, I don't know why this was done, and you probably had good
reasons. I'm a bit surprised that nobody but me has had such limit
problems with 2.4.16 until now.
My config snippet is attached below.
bye
Christian
### conf snippet ###
loglevel none
security ssf=256
disallow bind_anon
require authc
database bdb
suffix "dc=foo,dc=bar"
rootdn "cn=Manager,dc=foo,dc=bar"
rootpw secret
directory /var/lib/openldap-data
checkpoint 32 30
sizelimit unlimited
index entryUUID eq
syncrepl rid=123
provider=ldap://isc01.foo.bar
starttls=yes
tls_reqcert=never
type=refreshAndPersist
retry="5 5 60 +"
searchbase="dc=foo,dc=bar"
scope=sub
schemachecking=on
bindmethod=simple
binddn="cn=syncrepl,ou=dsa,dc=foo,dc=bar"
credentials=secret
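Worth noting alongside the snippet: a common workaround for exactly this syncrepl symptom is to exempt the replication identity from limits on the provider side, rather than relying on the global sizelimit. A hedged sketch for the provider's database section, reusing the binddn above:

```conf
# Exempt the syncrepl consumer identity from size/time limits
limits dn.exact="cn=syncrepl,ou=dsa,dc=foo,dc=bar"
       size=unlimited time=unlimited
```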
--
"Without music to decorate it, time is just a bunch of boring production
deadlines or dates by which bills must be paid."
--- Frank Vincent Zappa
13 years, 9 months
Mirror config
by Martin Wilderoth
Hello,
I have created a mirror configuration.
What is the difference between using updateref in this config and not
using it?
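For context, per slapd.conf(5) the updateref directive only applies to a replica (shadow) database: it names the URL returned as a referral to clients that submit update requests there. In a MirrorMode setup where both nodes are writable it should normally have no effect; a sketch (hostname is illustrative):

```conf
# On mirror node 1
mirrormode on
# Referral returned for writes only if this database is acting as a
# shadow copy; no effect while it is writable in MirrorMode.
updateref ldap://mirror2.example.com
```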
/Best Regards Martin
13 years, 9 months