slapd-meta
by Fr3ddie
Hello to the list,
I'm trying to configure the slapd-meta OpenLDAP backend on an online
cn=config configuration, with no luck so far. The slapd version is 2.4.39
(the highest I can get on the target machines when building from vanilla
source).
The documentation is clear but too concise for me, so I will explain what
I'm trying to do in the hope that somebody can help.
Currently I have 3 slapd servers that share a common root for the DIT, i.e.:
dc=loc1,dc=root
dc=loc2,dc=root
dc=loc3,dc=root
What I would like to achieve is a fourth server that contains all of the
previous trees along with its own tree, i.e. a server that contains:
dc=loc0,dc=root (locally hosted data)
dc=loc1,dc=root (coming from the first server, chasing referrals)
dc=loc2,dc=root (coming from the second server, chasing referrals)
dc=loc3,dc=root (coming from the third server, chasing referrals)
This way, all the clients connecting to this server will also be able to
retrieve data from the other three remote servers.
As far as I understand, I only need to configure the "loc0" server to
access the other three servers and fetch the data it serves to clients.
I have already configured the fourth server with its local DIT and this is
the configuration:
# cat 'cn=config.ldif'
dn: cn=config
objectClass: olcGlobal
cn: config
olcArgsFile: /var/run/slapd/slapd.args
olcPidFile: /var/run/slapd/slapd.pid
structuralObjectClass: olcGlobal
creatorsName: cn=config
olcServerID: 1
olcThreads: 32
olcToolThreads: 8
olcRequires: LDAPv3
olcConnMaxPendingAuth: 100
olcTLSCACertificateFile: /etc/ssl/certs/my_ca_cert.pem
olcTLSCertificateFile: /etc/ssl/certs/this-host_x509_cert.pem
olcTLSCertificateKeyFile: /etc/ssl/private/this-host_x509_key.key
olcTLSVerifyClient: try
olcTimeLimit: 600
olcLogLevel: stats2 sync
[...]
# cat 'cn=module{0}.ldif'
dn: cn=module{0}
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_hdb
olcModuleLoad: {1}syncprov
olcModuleLoad: {2}accesslog
structuralObjectClass: olcModuleList
[...]
Schema files are the following:
cn={0}core.ldif
cn={1}cosine.ldif
cn={2}nis.ldif
cn={3}inetorgperson.ldif
cn={4}dyngroup.ldif
cn={5}kerberos.ldif
# cat 'olcDatabase={1}hdb.ldif'
dn: olcDatabase={1}hdb
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {1}hdb
olcDbDirectory: /var/lib/ldap
olcSuffix: dc=loc0,dc=root
olcAccess: {0}to attrs=userPassword,shadowLastChange,krbPrincipalKey by
 dn="cn=admin,dc=loc0,dc=root" write by anonymous auth by self write by
 * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by dn="cn=admin,dc=loc0,dc=root" write by * read
olcLastMod: TRUE
olcRootDN: cn=admin,dc=loc0,dc=root
olcRootPW:: xxxxxxxxxxxxxxxxxxxx
olcDbCacheSize: 10000
olcDbCheckpoint: 512 10
olcDbConfig: {0}set_cachesize 0 524288000 1
olcDbConfig: {1}set_lk_max_objects 1500
olcDbConfig: {2}set_lk_max_locks 1500
olcDbConfig: {3}set_lk_max_lockers 1500
olcDbConfig: {4}set_flags DB_LOG_AUTOREMOVE
olcDbIDLcacheSize: 30000
olcDbIndex: default pres,eq
[...]
structuralObjectClass: olcHdbConfig
olcSyncrepl: {0}rid=0 provider=ldap://second-host.loc0.root
 bindmethod=simple binddn="cn=admin,dc=loc0,dc=root" credentials=xxxxxx
 searchbase="dc=loc0,dc=root" logbase="cn=accesslog"
 logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
 schemachecking=on type=refreshAndPersist retry="60 +"
 syncdata=accesslog starttls=yes
olcMirrorMode: TRUE
[...]
On top of this DB I have the "syncprov" and "accesslog" overlays configured
(these are two servers in "MirrorMode", set up following the OpenLDAP admin
documentation).
I believe this DB is the one containing the actual "loc0" DIT data...
Then I have the accesslog DB for the replica (with the syncprov overlay
on top):
# cat 'olcDatabase={2}hdb.ldif'
dn: olcDatabase={2}hdb
objectClass: olcDatabaseConfig
objectClass: olcHdbConfig
olcDatabase: {2}hdb
olcDbDirectory: /var/lib/ldap/accesslog
olcSuffix: cn=accesslog
olcRootDN: cn=admin,dc=loc0,dc=root
olcDbConfig: {0}set_cachesize 0 524288000 1
olcDbConfig: {1}set_lk_max_objects 1500
olcDbConfig: {2}set_lk_max_locks 1500
olcDbConfig: {3}set_lk_max_lockers 1500
olcDbConfig: {4}set_flags DB_LOG_AUTOREMOVE
olcDbIndex: default eq
olcDbIndex: entryCSN,objectClass,reqEnd,reqResult,reqStart
[...]
On top of this environment I start loading the needed modules with this
LDIF file:
version: 1
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: back_ldap
-
add: olcModuleLoad
olcModuleLoad: back_meta
-
add: olcModuleLoad
olcModuleLoad: rwm
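I apply this with ldapmodify over the ldapi socket, something like the
following (the LDIF file name is just what I happen to call it locally):
# ldapmodify -Y EXTERNAL -H ldapi:/// -f add-meta-modules.ldif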
The new modules seem to load into the configuration without errors,
so I obtain:
# cat 'cn=module{0}.ldif'
dn: cn=module{0}
structuralObjectClass: olcModuleList
objectClass: olcModuleList
cn: module{0}
olcModulePath: /usr/lib/ldap
olcModuleLoad: {0}back_hdb
olcModuleLoad: {1}syncprov
olcModuleLoad: {2}accesslog
olcModuleLoad: {3}back_ldap
olcModuleLoad: {4}back_meta
olcModuleLoad: {5}rwm
[...]
Now I try to load the slapd-meta directives into a new database using
this LDIF:
version: 1
dn: olcDatabase={3}meta,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMetaConfig
olcDatabase: {3}meta
olcSuffix: dc=root
olcDbURI: "ldap://server-loc1.loc1.root/dc=loc1,dc=root"
olcDbIdAssertBind: bindmethod=simple
binddn="cn=admin,dc=loc1,dc=root" credentials=xxxxxx starttls=yes
tls_reqcert=demand
olcDbURI: "ldap://server-loc2.loc2.root/dc=loc2,dc=root"
olcDbIdAssertBind: bindmethod=simple
binddn="cn=admin,dc=loc2,dc=root" credentials=xxxxxx starttls=yes
tls_reqcert=demand
olcDbURI: "ldap://server-loc3.loc3.root/dc=loc3,dc=root"
olcDbIdAssertBind: bindmethod=simple
binddn="cn=admin,dc=loc3,dc=root" credentials=xxxxxx starttls=yes
tls_reqcert=demand
but I get an error that I cannot get past, despite trying various
combinations:
# ldapadd -Y EXTERNAL -H ldapi:/// -f slapd-META-DB-CREATION.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "olcDatabase={3}meta,cn=config"
ldap_add: Object class violation (65)
additional info: attribute 'olcDbURI' not allowed
and:
# tail /var/log/openldap/slapd.log
Nov 9 19:47:17 server01 slapd[32392]: conn=1025 op=2 ENTRY
dn="dc=loc0,dc=root"
Nov 9 19:47:29 server01 slapd[32392]: conn=1052 op=2 INTERM
oid=1.3.6.1.4.1.4203.1.9.1.4
Nov 9 19:49:47 server01 slapd[32392]: conn=1327 op=2 ENTRY
dn="dc=loc0,dc=root"
Nov 9 19:52:17 server01 slapd[32392]: conn=1628 op=2 ENTRY
dn="dc=loc0,dc=root"
Nov 9 19:54:46 server01 slapd[32392]: conn=1929 op=2 ENTRY
dn="dc=loc0,dc=root"
Nov 9 19:57:07 server01 slapd[32392]: Entry
(olcDatabase={3}meta,cn=config), attribute 'olcDbURI' not allowed
The slapd-meta documentation mentions the "uri" directive, but "olcDbURI"
seems to raise a "better" error: in fact, if I modify the above LDIF file
to use "olcUri" instead, I obtain:
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
adding new entry "olcDatabase={3}meta,cn=config"
ldap_add: Undefined attribute type (17)
additional info: olcUri: attribute type undefined
Moreover, the slapd-meta docs do not state that the slapd-ldap backend is
needed by slapd-meta, but I think it is, because if I try to load back_meta
alone it raises an error (I don't remember exactly which one).
At this point I'm stuck on this error and I haven't been able to find any
hint on the web to solve it :(
The examples I was able to find all relate to the static slapd.conf
configuration; I couldn't find any "full" configuration example using
cn=config.
I'm wondering if I should first create an actual "dc=root" database and
then link the sub-DITs to it, or maybe add some other overlay... I really
can't understand how this is supposed to work :(
Can anybody please help me?
Thank you very much
6 years, 10 months
Virtual list view problem
by Venish Khant
Hi all
I am using the CPAN Net::LDAP module to access LDAP entries. I want to
search LDAP entries using the Net::LDAP search method, and I only want a
limited number of entries from the search result, so for this I am using
the Net::LDAP::Control::VLV module. But I get an error on the VLV response
control. Does anyone have an idea about this error?
Error: Died at vlv.pl line 50.
This is my example; line 50 is the one marked with asterisks below.
#!/usr/bin/perl -w
use Net::LDAP;
use Net::LDAP::Control::VLV;
use Net::LDAP::Constant qw( LDAP_CONTROL_VLVRESPONSE );
use Net::LDAP::Control::Sort;
sub procentry {
my ( $mesg, $entry) = @_;
# Return if there is no entry to process
if ( !defined($entry) ) {
return;
}
print "dn: " . $entry->dn() . "\n";
@attrs = $entry->attributes();
foreach $attr (@attrs) {
#printf("\t%s: %s\n", $attr, $entry->get_value($attr));
$attrvalue = $entry->get_value($attr,asref=>1);
#print $attr.":". $entry->get_value($attr)."\n";
foreach $value(@$attrvalue) {
print "$attr: $value\n";
}
}
$mesg->pop_entry;
print "\n";
}
$ldap = Net::LDAP->new( "localhost" );
# Get the first 20 entries
$vlv = Net::LDAP::Control::VLV->new(
before => 0, # No entries from before target entry
after => 19, # 19 entries after target entry
content => 0, # List size unknown
offset => 1, # Target entry is the first
);
my $sort = Net::LDAP::Control::Sort->new( order => 'cn' );
@args = ( base => "dc=example,dc=co,dc=in",
scope => "subtree",
filter => "(objectClass=inetOrgPerson)",
callback => \&procentry, # Call this sub for each entry
control => [ $sort, $vlv ],
);
$mesg = $ldap->search( @args );
# Get VLV response control
*($resp) = $mesg->control( LDAP_CONTROL_VLVRESPONSE ) or die;*
$vlv->response( $resp );
# Set the control to get the last 20 entries
$vlv->end;
$mesg = $ldap->search( @args );
# Get VLV response control
($resp) = $mesg->control( LDAP_CONTROL_VLVRESPONSE ) or die;
$vlv->response( $resp );
# Now get the previous page
$vlv->scroll_page( -1 );
$mesg = $ldap->search( @args );
# Get VLV response control
($resp) = $mesg->control( LDAP_CONTROL_VLVRESPONSE ) or die;
$vlv->response( $resp );
# Now page with first entry starting with "B" in the middle
$vlv->before(9); # Change page to show 9 before
$vlv->after(10); # Change page to show 10 after
$vlv->assert("B"); # assert "B"
$mesg = $ldap->search( @args );
--
Venish Khant
www.deeproot.co.in
7 years, 3 months
Growing an LMDB database after MDB_MAP_FULL
by Bruno Freudensprung
Hi,
I have a question regarding growing an LMDB database when a write transaction hits MDB_MAP_FULL.
I would like to avoid defining a high mapsize value because my application will contain many MDB_envs, and because I have Windows users (Windows allocates the whole file on the disk).
Based on the intuition that MDB_MAP_FULL should not leave the database in a weird state, I have made the following little experiment. When MDB_MAP_FULL is encountered I tried to:
* copy the current env (mdb_env_copy) into another directory (fine: it does not seem to contain uncommitted data)
* reset the transaction "error bit" (modified LMDB code to introduce a "txn->mt_flags &= ~MDB_TXN_ERROR" somewhere)
* commit the transaction
* close the database
* close the env
* reopen it with a higher mapsize value
* reopen the database
* create another transaction
* continue writing
... and it seems to be working pretty well.
Assuming I am ready to "relax" some of the ACID requirements, does it sound reasonable to think that MDB_MAP_FULL does not leave LMDB in a weird state, and that the "trick" described above should always work? By "working" I mean: the copied environment will never contain uncommitted data (so I can rely on it to implement a kind of rollback), and the reopened environment will always be valid and contain the expected data (data written before hitting MDB_MAP_FULL)?
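For comparison, the simpler path I understand from the mdb_env_set_mapsize()
documentation is to abort the failed transaction, grow the map (allowed when
no transactions are active in the process) and redo the writes, roughly like
the sketch below; in my case the write transaction is large and redoing it is
exactly what I am trying to avoid, hence the trick above.

#include <lmdb.h>

/* Sketch only: put one item, growing the map and retrying once if the
 * write transaction hits MDB_MAP_FULL. Assumes no other transactions
 * are active in this process when the map is resized. */
static int put_with_grow(MDB_env *env, MDB_dbi dbi,
                         MDB_val *key, MDB_val *data, size_t new_mapsize)
{
    MDB_txn *txn;
    int rc = mdb_txn_begin(env, NULL, 0, &txn);
    if (rc) return rc;
    rc = mdb_put(txn, dbi, key, data, 0);
    if (rc == MDB_MAP_FULL) {
        mdb_txn_abort(txn);                         /* drop the failed txn */
        rc = mdb_env_set_mapsize(env, new_mapsize); /* grow the map */
        if (rc) return rc;
        rc = mdb_txn_begin(env, NULL, 0, &txn);
        if (rc) return rc;
        rc = mdb_put(txn, dbi, key, data, 0);       /* redo the write */
    }
    if (rc) { mdb_txn_abort(txn); return rc; }
    return mdb_txn_commit(txn);
}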
Thanks in advance for any insight,
Best regards,
Bruno.
7 years, 9 months
TOTP configuration
by PRAJITH
Hi,
Could you please add more info about the TOTP module? I could not find a
single article about it.
7 years, 9 months
RE24 testing call (2.4.43)
by Quanah Gibson-Mount
If you know how to build OpenLDAP manually, and would like to participate
in testing the next set of code for the 2.4.43 release, please do so.
Generally, get the code for RE24:
<http://www.openldap.org/devel/gitweb.cgi?p=openldap.git;a=snapshot;h=refs...>
Configure & build.
Execute the test suite (via make test) after it is built.
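If you have not done this before, the usual sequence is roughly:
./configure
make depend
make
make test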
Thanks!
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
7 years, 9 months
RE: RE24 testing call #3 (2.4.43) LMDB RE0.9 testing call #3 (0.9.17)
by Sergio NNX
> What git revision of LMDB did you test?
The last tag from the git repository, 0.9.16.
Thanks.
> Subject: Re: RE24 testing call #3 (2.4.43) LMDB RE0.9 testing call #3 (0.9.17)
> To: sfhacker(a)hotmail.com
> From: hyc(a)symas.com
> Date: Wed, 25 Nov 2015 13:02:44 +0000
>
> Sergio NNX wrote:
> > Ciao,
> >
> > We are testing the latest version of Cyrus SASL against LMDB (on Windows,
> > built from source) and when we run the testsuite app, we get a runtime
> > exception shown below:
>
> Please use the -technical mailing list for LMDB discussions.
>
> >
> > ...
> > ...
> > Testing sasl_listmech()... ok
> > Testing serverstart... All memory accounted for!
> > ok
> > Testing client-first/no-server-last correctly...
> > SRP --> start
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > 0x00000000 in ?? ()
> > (gdb) bt
> > #0 0x00000000 in ?? ()
> > #1 0x00432a4d in mdb_node_search (mc=mc@entry=0x289608,
> > key=key@entry=0x289850, exactp=exactp@entry=0x289604) at mdb.c:4943
> > #2 0x004361bc in mdb_cursor_set (mc=mc@entry=0x289608,
> > key=key@entry=0x289850, data=data@entry=0x289858, op=op@entry=MDB_SET,
> > exactp=exactp@entry=0x289604) at mdb.c:5725
> > #3 0x0043667c in mdb_get (txn=0x35c7b0, dbi=1, key=0x289850, data=0x289858)
> > at mdb.c:5391
> > #4 0x00431a55 in _sasldb_getdata ()
> > #5 0x0042fda3 in sasldb_auxprop_lookup ()
> > #6 0x004119f7 in _sasl_auxprop_lookup ()
> > #7 0x00413a6c in _sasl_canon_user_lookup ()
> > #8 0x0042ea7e in srp_server_mech_step ()
> > #9 0x0040cddd in sasl_server_step ()
> > #10 0x0040d361 in sasl_server_start ()
> > #11 0x0040458f in doauth ()
> > #12 0x004060d6 in test_clientfirst ()
> > #13 0x00406405 in foreach_mechanism ()
> > #14 0x00733d1f in main ()
> > (gdb)
> >
> >
> > Building SASL against Berkeley DB does not show the same issue.
> >
> > Any pointers will be greatly appreciated.
> >
> > Thanks.
> >
> > Sergio.
7 years, 9 months
OpenLdap Clear-text Password in Debug Mode
by Rich Alford
Hi All:
I'm not sure whether this issue results from my ignorance of OpenLDAP or
from something OpenLDAP is not capable of resolving. Regardless, any
direction you can provide would be greatly appreciated:
I have a basic OpenLdap installation with TLS encryption. Passwords are
hashed in the ldap directory. The user password travels from client to
server
encrypted as it should, then gets decrypted by slapd, and IF IN DEBUG MODE
gets displayed in *clear-text*. Theoretically, the password should be
hashed on the client, sent across the network, to be compared against the
hashed passwords in the database.
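(For context, I notice this when running slapd in the foreground with a
high debug level, e.g. something like "slapd -d -1", where the connection
trace that gets printed includes the bind credentials.)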
What am I missing??
Thank you,
Rich
7 years, 9 months
playing with ldap protocol
by Friedrich Locke
I am trying to understand the LDAP protocol. I have read RFC 4511 (I
believe) about it. In order to understand it better, I wrote a program that
reads from the network and writes what it receives to an output file.
I have issued the following ldapsearch command:
ldapsearch -x -h localhost -p 2000 -D ou=ufv,dc=br -w 123456
What I got in the output file was:
sioux@scallop$ hexdump -C o
00000000 30 1e 02 01 01 60 19 02 01 03 04 0c 6f 75 3d 75
|0....`......ou=u|
00000010 66 76 2c 64 63 3d 62 72 80 06 31 32 33 34 35 36
|fv,dc=br..123456|
00000020
I have the following understanding of the protocol:
60 19 [02 01 [3] 04 0c [ou=ufv,dc=br] 80 06 [123456]]
What about the first "30 1e 02 01 01"?
Does 1e mean the size is bigger than 30 and that it is specified in 2 bytes?
Is it for the message id? And what about the remaining 27 bytes of the
message that are not accounted for?
Should the length not account for those 27 bytes?
Thanks in advance.
BTW: what is the message id for the message sent ?
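P.S. In case it helps, here is my best guess at a full byte-by-byte reading,
assuming plain BER tag/length/value encoding; corrections welcome:
30 1e           SEQUENCE (LDAPMessage), 0x1e = 30 content bytes
  02 01 01      INTEGER 1 (the messageID?)
  60 19         [APPLICATION 0] BindRequest, 0x19 = 25 content bytes
    02 01 03    INTEGER 3 (protocol version)
    04 0c ...   OCTET STRING, 12 bytes: "ou=ufv,dc=br" (bind DN)
    80 06 ...   [CONTEXT 0], 6 bytes: "123456" (simple authentication)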
7 years, 10 months
LMDB overflow pages - MDB_RESERVE
by Christian Sell
I forgot to mention: I am using MDB_RESERVE to avoid an extra memcpy. Could
it be that this is the cause of the extreme database bloat?
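For reference, my put path looks roughly like the sketch below (the key
struct is only an illustration of the [collectionId, chunkId] layout I
described; in the real code the chunk payload is produced directly into the
reserved space rather than memcpy'd as shown here):

#include <lmdb.h>
#include <stdint.h>
#include <string.h>

struct chunk_key { uint64_t collection_id; uint64_t chunk_id; };

/* Write one chunk with MDB_RESERVE: LMDB allocates len bytes in the
 * target page(s) and returns a pointer in data.mv_data, which is then
 * filled directly instead of handing LMDB a buffer to copy. */
static int put_chunk(MDB_txn *txn, MDB_dbi dbi,
                     uint64_t coll, uint64_t chunk,
                     const void *payload, size_t len)
{
    struct chunk_key k = { coll, chunk };
    MDB_val key  = { sizeof(k), &k };
    MDB_val data = { len, NULL };       /* mv_data is ignored on input */
    int rc = mdb_put(txn, dbi, &key, &data, MDB_RESERVE);
    if (rc == MDB_SUCCESS)
        memcpy(data.mv_data, payload, len);
    return rc;
}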
> Hello,
>
> something wrong with the question below?
>
> I am trying to use LMDB to store large (huge) amounts of binary data which,
> to limit the memory footprint, are split into chunks. Each chunk is
> stored under a separate key, made up of [collectionId, chunkId], so that I can
> later iterate the chunks using an LMDB cursor. Chunk size is configurable.
>
> During my tests, I encountered a strange scenario where, after inserting some
> 2000 chunks consisting of 512KB each, the database size had grown to a value
> that was roughly 135 times the calculated size of the data. I ran the stat
> utility over the db and saw that there were > 12000 overflow pages vs. approx.
> 2000 data pages. When I reduced the chunk size to 4060 bytes, the number of
> overflow pages went down to 1000, and the database size went down to the
> expected number (I experimented with different sizes, this was the best
> result).
> I did not find any documentation to explain this behaviour, or how to deal
> with
> it. Of course it makes me worry about database bloat and the consequences. Can
> anyone shed light on this?
>
> thanks,
> Christian
>
Christian Sell
7 years, 10 months