Multi-master syncrepl config: excluding olcSaslHost from replication
by Kyle Brantley
I'm working on a full multi-master slapd setup, where both the cn=config
database as well as the actual database are fully replicated. All access
to the slapd instances is gated with GSSAPI (the only notable exception
being the syncrepl users, which I'm working on converting over to GSSAPI
anyway).
However, I need to be able to configure olcSaslHost on a per-server
basis. Server A will have a different value for olcSaslHost than server
B will. While I want to replicate cn=config, I don't want to replicate
the olcSaslHost attribute.
I've tried two things, and neither has worked:
1) Configuring exattrs="olcSaslHost" in the olcSyncrepl statements for
cn=config.
* This allows me to configure the attribute, but as soon as there
is any other change to cn=config, it wipes the attribute out across all
of the nodes.
2) Updating the ACL for the syncrepl user to not have access to the
olcSaslHost attribute.
* Unfortunately, this has similar behavior to the above: making a
change on one node will wipe the olcSaslHost attribute out of all of the
nodes.
How best can I go about doing this? I was hoping that olcSaslRealm was
multi-value and could be configured in a manner similar to olcServerID,
but that isn't the case. I was also hoping that denying access to the
attribute (via ACL or olcSyncrepl config) would make the syncrepl engine
ignore it, but because the engine can't see the attribute on the node
where it was changed, while it can see it on the downstream nodes, it
wipes the attribute out entirely.
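For reference, attempt (1) looked roughly like this (the rid, provider URL, and credentials here are placeholders, not my real values):

```ldif
dn: olcDatabase={0}config,cn=config
changetype: modify
replace: olcSyncrepl
olcSyncrepl: rid=001 provider=ldap://server-b.example.com
  searchbase="cn=config" bindmethod=simple
  binddn="cn=replicator,cn=config" credentials=secret
  type=refreshAndPersist retry="30 +"
  exattrs="olcSaslHost"
```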
Any help / suggestions welcome.
Thanks,
--Kyle
8 years, 1 month
ClearText Passwords in slapcat: please provide some inputs
by Manuel Afonso
Hi people,
I am using ubuntu and phpldapadmin to manage openldap.
I have a big issue here: when using phpldapadmin/openldap, every
user entry ends up with a field
cleartextPassword: <cleartextpassword>
(this is seen in slapcat output).
What I want is to put in place a mechanism so that no entry in OpenLDAP
carries a plain-text password field.
I have read about the ppolicy overlay, slappasswd and so on, but so far I
have not been able to figure out how to avoid this annoying cleartext
password showing up when I do a slapcat (as root, of course).
Has anybody had such an issue?
Any ideas or links to point for a solution?
Another question:
is it possible that this cleartext password is somehow needed for the
correct operation of OpenLDAP?
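From what I've read, userPassword should hold only a hash such as slappasswd's {SSHA} output. To illustrate what I mean, the {SSHA} scheme can be sketched in Python (the helper names are mine, just for illustration; OpenLDAP itself does this in C):

```python
import base64
import hashlib
import os


def make_ssha(password, salt=None):
    """Build an {SSHA} userPassword value: SHA-1(password + salt),
    base64-encoded with the salt appended, as slappasswd -s would produce."""
    if salt is None:
        salt = os.urandom(4)
    digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
    return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")


def check_ssha(stored, password):
    """Verify a candidate password against a stored {SSHA} value."""
    raw = base64.b64decode(stored[len("{SSHA}"):])
    digest, salt = raw[:20], raw[20:]  # SHA-1 digests are 20 bytes
    return hashlib.sha1(password.encode("utf-8") + salt).digest() == digest
```

The point being that a bind can still succeed against the hash alone, so the cleartext copy shouldn't be needed for authentication.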
Thanks a lot for your time and (I hope) help.
Kind regards,
Manuel - Lisbon PT
This is what I got for the user mafonso (me) when doing slapcat >
output (as can be seen, there is a cleartextPassword: field with the
password in clear text):
dn: cn=mafonso,ou=***,dc=***,dc=***,dc=***,dc=pt
objectClass: ****Person
objectClass: mailAccount
objectClass: sambaSamAccount
objectClass: posixAccount
objectClass: top
givenName: Manuel
sn: Afonso
displayName: Manuel Afonso
cn: mafonso
mailacceptinguser: 1
maildrop: mafonso(a)***.pt
intranetRole: cn=**,ou=**,ou=**,dc=**,dc=**,dc=**,dc=pt
...
portalRole: ***
...
gidNumber: 516
sambaSID: ***
uidNumber: 1399
uid: mafonso
homeDirectory: /home/mafonso
intranetStatus: U
sambaAcctFlags: [UX]
loginShell: /bin/false
mailacceptinggeneralid: mafonso@****
mailacceptinggeneralid: ***(a)**.**.**.pt
userPassword:: e1N....
cleartextPassword: <cleartextpassword>
sambaNTPassword: D6...
sambaLMPassword: 45...
LDAP over SSL ( ldaps )
by Aneela Saleem
Hi all,
Can anyone please point me to a link for enabling "ldaps"? I have followed
many guides but keep failing to get it working. I have also tried "startTLS",
but it's not compatible with Apache Knox. Any help would be appreciated.
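For context, the kind of configuration I've been attempting looks roughly like this; a minimal sketch only, and the certificate paths are placeholders:

```
# slapd.conf, global section (paths are placeholders)
TLSCACertificateFile  /etc/openldap/certs/ca.pem
TLSCertificateFile    /etc/openldap/certs/server.pem
TLSCertificateKeyFile /etc/openldap/certs/server.key
```

together with starting slapd with an ldaps URL as well, e.g. `slapd -h "ldap:/// ldaps:///"`.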
Thanks
Re: Slapd is coming down
by Édnei Rodrigues
Hello Quanah!
Yes, I know, my OS killed slapd because it is configured to do so. But I
don't have any other service running on the server, only OpenLDAP.
Quanah, the information follows below:
- Openldap-syncrepl: 2 GB
- Openldap-translucent: 567 MB
Now, about the slapd process. The strange thing is its size in memory:
- Server 1
Tasks: 110 total, 1 running, 109 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 99.2%id, 0.2%wa, 0.0%hi, 0.0%si,
0.0%st
Mem: 12248128k total, 5686580k used, 6561548k free, 223244k buffers
Swap: 2097148k total, 88940k used, 2008208k free, 4488408k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
4965 ldap 20 0 4692m 777m 527m S 1.0 6.5 101:16.12
slapd
1460 ldap 20 0 9700m 2.1g 1.9g S 0.3 18.0 1174:58 slapd
ldap 1460 1 2 Jul12 ? 19:34:58
/usr/local/openldap/libexec/slapd -h ldap://127.0.0.1:1389 ldaps://
127.0.0.1:1636 -f /usr/local/openldap/etc/openldap/slapd-syncrepl.conf -u
ldap -g ldap -l local3
ldap 4965 1 10 Aug19 ? 01:41:19
/usr/local/openldap/libexec/slapd -h ldap://*:389 ldaps://*:636 -f
/usr/local/openldap/etc/openldap/slapd.conf -u ldap -g ldap -l local4
root 14399 14157 0 09:12 pts/0 00:00:00 grep ldap
- Server 2
Tasks: 113 total, 1 running, 112 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.2%wa, 0.0%hi, 0.0%si,
0.0%st
Mem: 12194476k total, 8111400k used, 4083076k free, 213632k buffers
Swap: 2097148k total, 7344k used, 2089804k free, 7225108k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
30907 ldap 20 0 8832m 2.3g 2.2g S 0.0 19.9 11:10.55
slapd
30956 ldap 20 0 4486m 502m 339m S 0.0 4.2 7:34.95 slapd
ldap 30907 1 1 Aug19 ? 00:11:10
/usr/local/openldap/libexec/slapd -h ldap://127.0.0.1:1389 ldaps://
127.0.0.1:1636 -f /usr/local/openldap/etc/openldap/slapd-syncrepl.conf -u
ldap -g ldap -l local3
ldap 30956 1 0 Aug19 ? 00:07:39
/usr/local/openldap/libexec/slapd -h ldap://*:389 ldaps://*:636 -f
/usr/local/openldap/etc/openldap/slapd.conf -u ldap -g ldap -l local4
Meanwhile, over a few days the slapd process increases its memory
consumption until nothing remains! And that is why the oom-killer kills
slapd: because of its bad score.
Why is slapd consuming so much memory?
Do you need more information?
Thank you!
On 19/08/2015 14:11, "Quanah Gibson-Mount" <quanah(a)zimbra.com> wrote:
> --On Wednesday, August 19, 2015 2:39 PM -0300 Édnei Rodrigues <
> ednei.felipe.rodrigues(a)gmail.com> wrote:
>
>
>> Hello Guys, how are you doing ?
>>
>
> Aug 19 09:51:44 ds1openldap2h kernel: Out of memory: Kill process 21760
>> (slapd) score 957 or sacrifice child
>> Aug 19 09:51:44 ds1openldap2h kernel: Killed process 21760, UID 55,
>> (slapd) total-vm:18314360kB, anon-rss:11646816kB, file-rss:680kB
>>
>
> Your OS killed it, slapd didn't "come down". You don't give any useful
> information, so it's hard to provide guidance. I've often seen this when
> other processes (particularly java based) are using up memory, and slapd
> goes to alloc new memory, so the OS kills it. Useful details besides your
> version (2.4.39, per the log) would be:
>
> database backend
> database size
> slapd process size after DB is fully in memory
>
> etc
>
> --Quanah
>
>
> --
>
> Quanah Gibson-Mount
> Platform Architect
> Zimbra, Inc.
> --------------------
> Zimbra :: the leader in open source messaging and collaboration
>
ACL rule: getting crazy with it.
by Simone Taliercio
Hi All,
I've Jasig CAS connected to OpenLDAP for users authentication.
My LDAP Schema is the following:
dc=com
dc=companyA,dc=com
ou=user,dc=companyA,dc=com
dc=companyB,dc=com
ou=user,dc=companyB,dc=com
I would like to give to a specific user
(cn=admin,ou=user,dc=companyB,dc=com)
the ability to create inetOrgPerson objects under ou=user,dc=companyA,dc=com
and the restriction to have only search access to
ou=user,dc=companyB,dc=com where actually some attributes should be hidden
(such as userPassword).
I have tried several ACLs, but always with one strange problem: a user is
able to log in via CAS. Then he/she logs out, and if a different account
tries to log in, LDAP returns DN_RESOLUTION_FAILURE.
That issue is occurring even with a simple ACL such as:
access to *
by self write
by anonymous auth
by users search
The only way to work around the issue is to remove all ACLs or to leave
"by users read".
As the bind DN I'm using dc=com.
Any suggestions? I can't tell whether to focus on CAS or on the LDAP ACLs
for this issue.
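For reference, the shape of ACL I have been experimenting with is roughly this (a sketch only, using the DNs above; first-match-wins ordering may well be where I'm going wrong):

```
# Order matters: the first matching "access to" clause wins.
# Protect userPassword everywhere (anonymous auth is still needed for binds):
access to attrs=userPassword
        by self write
        by anonymous auth
        by * none
# Let companyB's admin create entries under companyA's user branch:
access to dn.subtree="ou=user,dc=companyA,dc=com"
        by dn.exact="cn=admin,ou=user,dc=companyB,dc=com" write
        by users read
# Search-only view of companyB's user branch for that same admin:
access to dn.subtree="ou=user,dc=companyB,dc=com"
        by dn.exact="cn=admin,ou=user,dc=companyB,dc=com" search
        by users read
access to *
        by users read
```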
Thanks a LOT for the support!
Simone
Slapd is coming down
by Édnei Rodrigues
Hello Guys, how are you doing ?
In the last few weeks, my OpenLDAP servers have been going down frequently.
I have a peak of about 400 connections per peer (we have 2 LDAP servers).
Although I added more memory (4 GB), the problem only decreased; it was not
resolved.
So, how can I troubleshoot this?
My environment:
- Two Red Hat Enterprise Linux Server release 6.6 (Santiago)
- Two processors and 12 GB of RAM
- Both environments are virtualized.
***************************************
My slapd.conf:
moduleload back_ldap
moduleload translucent
moduleload dynlist
moduleload back_monitor
backend mdb
backend ldap
allow bind_v2
allow bind_anon_dn
database mdb
directory /usr/local/openldap/var/openldap-translucent
suffix "*****"
rootdn "*************"
rootpw ****************************
maxsize 4294967296
sizelimit 100000
overlay translucent
uri "ldap://localhost:1389/"
translucent_bind_local on
translucent_pwmod_local on
translucent_local ******************************
idassert-bind bindmethod=none
overlay dynlist
dynlist-attrset groupOfURLs memberURL member:uniqueMember
**************************
My slapd-syncrepl.conf:
allow bind_v2
allow bind_anon_dn
moduleload syncprov
moduleload dynlist
# Primary database definitions
database mdb
suffix "**"
rootdn "******"
directory /usr/local/openldap/var/openldap-syncrepl
rootpw ******
sizelimit 100000
maxsize 8589934592
overlay dynlist
dynlist-attrset groupOfURLs memberURL member:uniqueMember
loglevel sync stats
idletimeout 0
# ACLs
include /usr/local/openldap/etc/openldap/schema/sicredi.acl
overlay syncprov
# Beginning of the consumer section
index entryUUID eq
# syncrepl directives
syncrepl rid=0
provider=ldap://MASTER_Production:389
bindmethod=simple
binddn="************"
credentials=****
searchbase="****"
logbase="cn=accesslog"
logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
type=refreshAndPersist
retry="60 +"
syncdata=accesslog
# Refer updates to the master
updateref ldap://MASTER_Production:389
***** My openldap version: openldap-ltb-2.4.39-1.el6.x86_64
And What I saw in the logs:
Aug 19 09:51:43 ds1openldap2h kernel: slapd invoked oom-killer:
gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Aug 19 09:51:43 ds1openldap2h kernel: slapd cpuset=/ mems_allowed=0
Aug 19 09:51:43 ds1openldap2h kernel: Pid: 4233, comm: slapd Not tainted
2.6.32-504.23.4.el6.x86_64 #1
Aug 19 09:51:43 ds1openldap2h kernel: Call Trace:
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff810d4241>] ?
cpuset_print_task_mems_allowed+0x91/0xb0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff81127500>] ?
dump_header+0x90/0x1b0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8122ee7c>] ?
security_real_capable_noaudit+0x3c/0x70
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff81127982>] ?
oom_kill_process+0x82/0x2a0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff811278c1>] ?
select_bad_process+0xe1/0x120
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff81127dc0>] ?
out_of_memory+0x220/0x3c0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff811346ff>] ?
__alloc_pages_nodemask+0x89f/0x8d0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8116c9aa>] ?
alloc_pages_current+0xaa/0x110
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff811248f7>] ?
__page_cache_alloc+0x87/0x90
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff811242de>] ?
find_get_page+0x1e/0xa0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff81125897>] ?
filemap_fault+0x1a7/0x500
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8114ed04>] ?
__do_fault+0x54/0x530
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8114f2d7>] ?
handle_pte_fault+0xf7/0xb00
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8109ec20>] ?
autoremove_wake_function+0x0/0x40
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8114ff79>] ?
handle_mm_fault+0x299/0x3d0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8104d096>] ?
__do_page_fault+0x146/0x500
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff81529a1e>] ?
thread_return+0x4e/0x7d0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8153001e>] ?
do_page_fault+0x3e/0xa0
Aug 19 09:51:44 ds1openldap2h kernel: [<ffffffff8152d3d5>] ?
page_fault+0x25/0x30
Aug 19 09:51:44 ds1openldap2h kernel: Mem-Info:
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA per-cpu:
Aug 19 09:51:44 ds1openldap2h kernel: CPU 0: hi: 0, btch: 1 usd: 0
Aug 19 09:51:44 ds1openldap2h kernel: CPU 1: hi: 0, btch: 1 usd: 0
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA32 per-cpu:
Aug 19 09:51:44 ds1openldap2h kernel: CPU 0: hi: 186, btch: 31 usd: 0
Aug 19 09:51:44 ds1openldap2h kernel: CPU 1: hi: 186, btch: 31 usd: 0
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 Normal per-cpu:
Aug 19 09:51:44 ds1openldap2h kernel: CPU 0: hi: 186, btch: 31 usd: 10
Aug 19 09:51:44 ds1openldap2h kernel: CPU 1: hi: 186, btch: 31 usd: 0
Aug 19 09:51:44 ds1openldap2h kernel: active_anon:2571953
inactive_anon:399686 isolated_anon:0
Aug 19 09:51:44 ds1openldap2h kernel: active_file:305 inactive_file:604
isolated_file:0
Aug 19 09:51:44 ds1openldap2h kernel: unevictable:0 dirty:1 writeback:0
unstable:0
Aug 19 09:51:44 ds1openldap2h kernel: free:29811 slab_reclaimable:2307
slab_unreclaimable:7010
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA free:15276kB min:80kB
low:100kB high:120kB active_anon:0kB inactive_anon:0kB active_file:0kB
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
present:14884kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0
all_unreclaimable? yes
Aug 19 09:51:44 ds1openldap2h kernel: lowmem_reserve[]: 0 3000 12090 12090
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA32 free:53124kB min:16748kB
low:20932kB high:25120kB active_anon:2127764kB inactive_anon:559804kB
active_file:0kB inactive_file:152kB unevictable:0kB isolated(anon):0kB
isolated(file):0kB present:3072096kB mlocked:0kB dirty:0kB writeback:0kB
mapped:128kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:12kB
kernel_stack:0kB pagetables:900kB unstable:0kB bounce:0kB writeback_tmp:0kB
pages_scanned:0 all_unreclaimable? no
Aug 19 09:51:44 ds1openldap2h kernel: lowmem_reserve[]: 0 0 9090 9090
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 Normal free:51016kB
min:50752kB low:63440kB high:76128kB active_anon:8160048kB
inactive_anon:1038940kB active_file:1224kB inactive_file:2052kB
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:9308160kB
mlocked:0kB dirty:4kB writeback:0kB mapped:1288kB shmem:0kB
slab_reclaimable:9220kB slab_unreclaimable:28028kB kernel_stack:1392kB
pagetables:44504kB unstable:0kB bounce:0kB writeback_tmp:0kB
pages_scanned:0 all_unreclaimable? no
Aug 19 09:51:44 ds1openldap2h kernel: lowmem_reserve[]: 0 0 0 0
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA: 1*4kB 1*8kB 2*16kB 2*32kB
1*64kB 0*128kB 1*256kB 1*512kB 0*1024kB 1*2048kB 3*4096kB = 15276kB
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 DMA32: 2599*4kB 1163*8kB
354*16kB 75*32kB 28*64kB 15*128kB 9*256kB 12*512kB 9*1024kB 0*2048kB
1*4096kB = 53236kB
Aug 19 09:51:44 ds1openldap2h kernel: Node 0 Normal: 846*4kB 640*8kB
413*16kB 236*32kB 119*64kB 48*128kB 28*256kB 10*512kB 2*1024kB 0*2048kB
0*4096kB = 50760kB
Aug 19 09:51:44 ds1openldap2h kernel: 5598 total pagecache pages
Aug 19 09:51:44 ds1openldap2h kernel: 4605 pages in swap cache
Aug 19 09:51:44 ds1openldap2h kernel: Swap cache stats: add 1173695, delete
1169090, find 121530/123860
Aug 19 09:51:44 ds1openldap2h kernel: Free swap = 0kB
Aug 19 09:51:44 ds1openldap2h kernel: Total swap = 2097148kB
Aug 19 09:51:44 ds1openldap2h kernel: 3145712 pages RAM
Aug 19 09:51:44 ds1openldap2h kernel: 97157 pages reserved
Aug 19 09:51:44 ds1openldap2h kernel: 797 pages shared
Aug 19 09:51:44 ds1openldap2h kernel: 3014707 pages non-shared
Aug 19 09:51:44 ds1openldap2h kernel: [ pid ] uid tgid total_vm rss
cpu oom_adj oom_score_adj name
Aug 19 09:51:44 ds1openldap2h kernel: [ 441] 0 441 2795
0 0 -17 -1000 udevd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1251] 0 1251 47346
150 0 0 0 vmtoolsd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1290] 0 1290 23283
37 1 -17 -1000 auditd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1326] 65 1326 107809
573 1 0 0 nslcd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1340] 0 1340 62279
388 1 0 0 rsyslogd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1353] 0 1353 2707
46 0 0 0 irqbalance
Aug 19 09:51:44 ds1openldap2h kernel: [ 1369] 32 1369 4744
15 1 0 0 rpcbind
Aug 19 09:51:44 ds1openldap2h kernel: [ 1380] 81 1380 5881
36 1 0 0 dbus-daemon
Aug 19 09:51:44 ds1openldap2h kernel: [ 1413] 0 1413 1020
0 1 0 0 acpid
Aug 19 09:51:44 ds1openldap2h kernel: [ 1423] 68 1423 10041
151 0 0 0 hald
Aug 19 09:51:44 ds1openldap2h kernel: [ 1424] 0 1424 5100
2 1 0 0 hald-runner
Aug 19 09:51:44 ds1openldap2h kernel: [ 1456] 0 1456 5630
2 1 0 0 hald-addon-inpu
Aug 19 09:51:44 ds1openldap2h kernel: [ 1470] 68 1470 4502
2 1 0 0 hald-addon-acpi
Aug 19 09:51:44 ds1openldap2h kernel: [ 1510] 55 1510 2488944
50131 1 0 0 slapd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1572] 28 1572 241540
244 1 0 0 nscd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1597] 0 1597 16081
20 1 -17 -1000 sshd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1606] 38 1606 6566
72 1 0 0 ntpd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1629] 0 1629 28188
2 0 0 0 abrtd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1637] 0 1637 28131
23 1 0 0 abrt-dump-oops
Aug 19 09:51:44 ds1openldap2h kernel: [ 1651] 0 1651 51804
2959 1 0 0 osad
Aug 19 09:51:44 ds1openldap2h kernel: [ 1661] 0 1661 28742
18 1 0 0 crond
Aug 19 09:51:44 ds1openldap2h kernel: [ 1771] 497 1771 257947
665 1 0 0 icinga2
Aug 19 09:51:44 ds1openldap2h kernel: [ 1795] 0 1795 25232
26 1 0 0 rhnsd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1804] 0 1804 27085
23 0 0 0 rhsmcertd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1818] 0 1818 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [ 1820] 0 1820 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [ 1822] 0 1822 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [ 1824] 0 1824 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [ 1826] 0 1826 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [ 1828] 0 1828 2794
0 1 -17 -1000 udevd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1829] 0 1829 2794
0 1 -17 -1000 udevd
Aug 19 09:51:44 ds1openldap2h kernel: [ 1830] 0 1830 1016
1 0 0 0 mingetty
Aug 19 09:51:44 ds1openldap2h kernel: [21760] 55 21760 4578590
2911874 1 0 0 slapd
Aug 19 09:51:44 ds1openldap2h kernel: [16962] 0 16962 3105
113 1 0 0 nmon_x86_64_rhe
Aug 19 09:51:44 ds1openldap2h kernel: [20314] 0 20314 3059
113 0 0 0 nmon_x86_64_rhe
Aug 19 09:51:44 ds1openldap2h kernel: Out of memory: Kill process 21760
(slapd) score 957 or sacrifice child
Aug 19 09:51:44 ds1openldap2h kernel: Killed process 21760, UID 55, (slapd)
total-vm:18314360kB, anon-rss:11646816kB, file-rss:680kB
Thanks for your attention.
--
Best regards,
Édnei Rodrigues
0.9.16 build problems for android
by Kristoffer Sjögren
Hi
This is a long shot.
Just upgraded LMDB to 0.9.16 for lmdbjni (Java bindings for LMDB) and
when building for android-ndk-r10e I get a build error complaing about
undeclared PTHREAD_MUTEX_ROBUST [1].
The build procedure uses automake and is quite complex because of Java
JNI bindings but maybe the build error/output rings a bell anyway?
The build works fine with LMDB 0.9.15.
Cheers,
-Kristoffer
[1] src/mdb.c:4636:49: error: 'PTHREAD_MUTEX_ROBUST' undeclared (first
use in this function)
[2] Full build output - http://pastebin.com/JwreP0dZ
[3] agcc
#!/bin/bash
$NDK/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc
--sysroot=$NDK/platforms/android-19/arch-arm -DMDB_DSYNC=O_SYNC $@
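If it's relevant: as a local workaround I'm considering disabling robust mutexes in the agcc wrapper, assuming mdb.c honors an MDB_USE_ROBUST guard (I haven't confirmed that this knob exists in 0.9.16):

```
#!/bin/bash
# agcc with LMDB robust-mutex support disabled
# (assumes mdb.c checks MDB_USE_ROBUST -- unverified for 0.9.16)
$NDK/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gcc \
  --sysroot=$NDK/platforms/android-19/arch-arm \
  -DMDB_DSYNC=O_SYNC -DMDB_USE_ROBUST=0 "$@"
```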
Upgrading OpenLDAP from 2.4.39 to 2.4.41
by Brian Wright
We have a 2-way master production cluster actively running 2.4.39. We
want to upgrade to 2.4.41 (we haven't yet internally certified
42). What are the best practices for upgrading this cluster while
keeping at least one of the servers online? Is it replication-safe to
have one node on 2.4.39 and one on 2.4.41 for a short period?
If not, I can temporarily comment out the syncrepl statements until both
servers are on 2.4.41 and then reenable replication after both are
upgraded. We can live with a short period of inconsistency between the
servers during the upgrade, until we can resume
replication. Also, is the DB format fully compatible between these two
versions or would it be better to reload the DB new from an LDIF backup
on 2.4.41? Is there an upgrade doc somewhere that discusses these issues?
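For context, if a fresh load turns out to be recommended, the path I'd expect to use is the standard slapcat/slapadd cycle (an outline only; the config path is a placeholder assuming a source build like ours):

```
# With slapd stopped on the node being upgraded:
slapcat -f /usr/local/etc/openldap/slapd.conf -l backup.ldif
# Install 2.4.41, move the old database directory aside, then reload:
slapadd -q -f /usr/local/etc/openldap/slapd.conf -l backup.ldif
```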
Thanks.
--
Brian Wright
Sr. UNIX Systems Engineer
901 Mariners Island Blvd, Suite 200
San Mateo, CA 94404 USA
Email: brianw(a)marketo.com <mailto:brianw@marketo.com>
Phone: +1.650.539.3530
www.marketo.com <http://www.marketo.com/>
OpenLDAP proxy referral via SSL
by Declan O'Doherty
I am trying to set up an OpenLDAP server (2.40) on CentOS 6 that uses a
proxy referral to another LDAP server (OpenDJ), all via SSL/TLS.
The referral works if I do not use SSL, but when I configure slapd.conf to
require certs, I get an invalid password error:
res_errno: 49, res_error: <Invalid password.>, res_matched: <>
The proxy is configured to use SASL EXTERNAL binding but when connecting to the OpenDJ service, it binds anonymously:
ldap_back_dobind_int: DN="<certificate DN>" without creds, binding anonymously
ldap_sasl_bind
The same configuration works on Windows using OpenLDAP 2.38
Here is the configuration for the ldap backend in slapd.conf:
database ldap
suffix "ou=organization unit, o=organization"
uri "ldaps://test.ldap.com:636/"
chase-referrals yes
idassert-bind bindmethod=sasl
saslmech=EXTERNAL
binddn="O=ORG,OU=ORGUNIT,C=US"
tls_cacert="/path/to/ca cert.pem"
tls_cert="/path/to/server cert.pem"
tls_key="/path/to/server key.pem"
tls_reqcert=demand
mode=self
Thank you