first time user
by Kaveh Ehsani
Hi everyone,
I am using this list for the first time, so if there are protocols to follow please let me know. I have a problem binding from my client to the provider, because the provider does not allow anonymous binds. I am also new to OpenLDAP, and I am on CentOS 7, which no longer uses slapd.conf. I initially used this to change the monitor ACL:
ldapmodify -H ldapi:/// -x -D "cn=config" -W <<EOF
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to *
  by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read
  by dn.base="cn=Manager,dc=${MYDOMAIN},dc=${MYTLD}" read
  by * none
EOF
That worked fine. I then tried modifying it by adding:
'by anonymous search'
and ran the same ldapmodify as:
ldapmodify -H ldapi:/// -x -D "cn=config" -W <<EOF
dn: olcDatabase={1}monitor,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to *
  by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read
  by dn.base="cn=Manager,dc=${MYDOMAIN},dc=${MYTLD}" read
  by anonymous search
EOF
and I get this error:
ldap_start_tls: Can't contact LDAP server (-1)
I think the bind configuration inside sssd.conf on the client side is incorrect for the newuser01 user I added to the LDAP server. I am using:
ldap_default_bind_dn = cn=newuser01,dc=example,dc=com
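For reference, a minimal sketch of the relevant sssd.conf fragment for an authenticated bind; the domain name, URI, and search base below are assumptions, not taken from the original post:
--- sketch: ---
[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
# bind with a real DN because the provider rejects anonymous binds
ldap_default_bind_dn = cn=newuser01,dc=example,dc=com
ldap_default_authtok_type = password
ldap_default_authtok = newuser01s_password
--- eop ---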
Thanks for all the feedback.
7 years, 5 months
Need help in design for users with multiple posixAccounts
by Bastian Tweddell
Dear all,
Currently I am rethinking the way we have structured our posixAccounts in
the LDAP DB. We have a centralized storage cluster which is attached to a
number of compute clusters, so we need to provide the same uids/gids
everywhere, whereas the home directory for a user may vary depending on the
local mount points of the shared file systems in each cluster.
Until now this problem was solved by introducing objectClasses which
basically reimplement posixAccount, one for each compute cluster we have.
Because maintaining the schema extensions in an increasingly changing
environment is tough work, I would like to get rid of these objectClass
extensions by doing the following:
- ou=TOP
- ou=users:
Keep attributes uid, uidNumber, gidNumber. They are unique for our
site and should be used everywhere.
- ou=systemA
- ou=users:
Keep attributes which extend entries from ou=users,ou=TOP by
adding homeDirectory and loginShell. They depend on the cluster
configuration.
I ended up with the idea of using the dynlist overlay, which automatically
fetches the unique attributes from entries beneath ou=users,ou=TOP and the
system-specific attributes from the entry beneath the ou=systemA branch.
--- example ldif: ---
dn: uid=user01,ou=users,ou=TOP
objectClass: posixAccount
uid: user01
uidNumber: 10000
gidNumber: 10000
cn: Some Name
homeDirectory: n/a

dn: uid=user01,ou=users,ou=systemA,ou=TOP
objectClass: posixAccount
objectClass: x-extendUnique
x-extendURI: ldap:///ou=users,ou=TOP?uid,uidNumber,gidNumber?one?(uid=user01)
homeDirectory: /local/mount/home4711/user01
loginShell: /bin/ksh
--- eop ---
--- draft slapd.conf ---
overlay dynlist
dynlist-attrset x-extendUnique x-extendURI
--- eop ---
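For completeness, a sketch of the schema definitions such a draft assumes. The OIDs are placeholders under a fictional private arc, and x-extendURI is modeled on memberURL (SUP labeledURI):
--- draft schema: ---
# placeholder OIDs; substitute your own private arc
attributetype ( 1.3.6.1.4.1.99999.1.1 NAME 'x-extendURI'
    DESC 'LDAP URI naming the entry/attributes to import via dynlist'
    SUP labeledURI )
objectclass ( 1.3.6.1.4.1.99999.2.1 NAME 'x-extendUnique'
    DESC 'entry extended with attributes fetched by dynlist'
    SUP top AUXILIARY
    MAY x-extendURI )
--- eop ---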
So the idea is that when searching for a user beneath the systemA branch,
the resulting entry is completed with the attributes from the top-level
users branch.
Before playing around with that, I would like to ask whether this is the
intended way to use dynlist.
Is there another neat way to implement this, i.e. to have a search result
entry combined from two different entries? I would really like to avoid
keeping copies of the unique attributes in many places in the tree.
How does schema checking work with dynlist?
How would dynlist handle single-valued attributes which are imported but
already present in the entry? Would it overwrite the existing value?
Do you think keeping multiple entries per user is too much overhead
compared to using a single entry with multiple objectClasses?
Many thanks,
--
Bastian Tweddell Juelich Supercomputing Centre
phone: +49 (2461) 61-6586 HPC in Neuroscience
7 years, 5 months
mdb backup via slapcat
by Bastian Tweddell
Dear all,
I am used to running slapcat to create backups of the backend database
_while_ slapd is running. Recently I migrated from bdb to mdb, and I have
now read that using slapcat on a running slapd is only safe for the bdb and
hdb backends [1].
1: http://www.openldap.org/faq/data/cache/287.html
Question:
- Is that still true?
- In which situation could data corruption occur?
- Is that supposed to change in the future?
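For context, the operations in question as a shell sketch; the paths and database number are assumptions, and mdb_copy is the tool shipped with LMDB that copies an environment from within a single read transaction:
--- sketch: ---
# LDIF export of database 1 while slapd is running (the case asked about)
slapcat -n 1 -l /backup/db1.ldif
# alternative: binary copy with the LMDB tool; -c compacts while copying
mdb_copy -c /var/lib/ldap /backup/ldap
--- eop ---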
Many thanks,
--
Bastian Tweddell Juelich Supercomputing Centre
phone: +49 (2461) 61-6586 HPCNS, HPS
7 years, 5 months
Re: LMDB data growth - overflow pages
by Christian Sell
Hi,
as far as my scenario goes, I shortly afterward detected serious errors in my
setup that were responsible to the buildup. There is an article on the symas
website
(https://symas.com/understanding-lmdb-database-file-sizes-and-memory-utili...)
that adresses the question/issue of database growth, I assume you have read
that. Crashes sould of course not happen, but what exactly do you mean by
"crash" anyway?
regards,
Christian
> salesman1 IIS Advertising <sales-iss(a)outlook.com> wrote on 26 June 2016
> at 00:59:
>
>
> Re: I HAVE THE SAME PROBLEMS, TOO !
>
>
> from your comments
>
> http://www.openldap.org/lists/openldap-technical/201511/msg00201.html
>
>
>
> I noticed the following using LMDB. The tested version is the up-to-date
> 0.9.18, from 2016.
>
>
> Database grows much larger than expected in many use cases, and
> eventually crashes; it is also not well behaved when simple, missing, or
> misbehaving client code drives it.
>
> Do you think the same?
>
>
> Maybe LMDB was fine-tuned for other specific needs, or is a toy database,
> or ours is a complicated application scenario. Unfortunately it is not
> like Redis yet: we cannot throw anything at it without some effort.
>
>
>
> We really want to use LMDB, it's fast, but we can't get it to work as
> expected. LMDB may be incomplete, or buggy for the common coder or the
> experienced coder who tries to use it for needs other than those it was
> designed for. So for now it is a dream, not easily realistic; it needs an
> expert.
>
>
>
>
> 1. Database size grows to 3x the data we put in, then stops growing, when
> records (key+data) from 410 to 4010 bytes are added.
>
> -> It consumed much more space than expected for 5000*400 bytes =
> 2,000,000 bytes.
>
>
> 2. Second, when we put key+data > 4096 bytes, the database grows much
> larger and crashes a lot of the time, with a number of overflow pages
> shown by 'mdb_dump ./testdb'.
>
>
> We made keys ("k:%d", i%5000) for (i=0; i < 1000000; i++), so there are
> 5000 distinct keys and everything after the first pass is an overwrite.
> Data is random ASCII or a fixed memset('x', 400).
>
>
> Program name: sample2
>
> We are overwriting old values, NOT appending, but the database grows
> anyway. Data larger than 4096 bytes per key-value seems impossible or
> nonsensical at this time. Are we using the wrong flags or program
> structure?
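[For reference, a minimal C sketch of the overwrite loop described above. This is a guess at what sample2 does; the map size, one-transaction-per-put structure, and minimal error handling are assumptions, not the poster's actual code.]
--- sketch (C): ---
#include <stdio.h>
#include <string.h>
#include "lmdb.h"

/* Overwrite 5000 keys repeatedly with 400-byte values.
 * The ./testdb directory must already exist. */
int main(void) {
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    char kbuf[32], vbuf[400];
    int i, rc;

    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 1UL * 1024 * 1024 * 1024); /* 1 GiB map */
    mdb_env_open(env, "./testdb", 0, 0664);

    memset(vbuf, 'x', sizeof(vbuf));
    for (i = 0; i < 1000000; i++) {
        MDB_val key, data;
        snprintf(kbuf, sizeof(kbuf), "k:%d", i % 5000);
        key.mv_size = strlen(kbuf);
        key.mv_data = kbuf;
        data.mv_size = sizeof(vbuf);
        data.mv_data = vbuf;
        mdb_txn_begin(env, NULL, 0, &txn);
        if (i == 0)
            mdb_dbi_open(txn, NULL, 0, &dbi); /* main (unnamed) DB */
        rc = mdb_put(txn, dbi, &key, &data, 0); /* flags=0: replace value */
        if (rc) {
            fprintf(stderr, "mdb_put: %s\n", mdb_strerror(rc));
            return 1;
        }
        mdb_txn_commit(txn);
    }
    mdb_env_close(env);
    return 0;
}
--- eop ---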
>
>
>
> trials - first run goes fine: #./sample2
>
>
> [root@ip-172-31-13-74 liblmdb]# ./sample2; ls ./testdb/* -l
> Start
> Init done
> written 100000 pairs of len=400 bytes
> -rw-r--r-- 1 root root 2584576 Jun 25 21:43 ./testdb/data.mdb
> -rw-r--r-- 1 root root 8192 Jun 25 21:43 ./testdb/lock.mdb
> [root@ip-172-31-13-74 liblmdb]# ./sample2; ls ./testdb/* -l
> Start
> Init done
> written 100000 pairs of len=400 bytes
> -rw-r--r-- 1 root root 5173248 Jun 25 21:43 ./testdb/data.mdb
> -rw-r--r-- 1 root root 8192 Jun 25 21:43 ./testdb/lock.mdb
> [root@ip-172-31-13-74 liblmdb]# ./sample2; ls ./testdb/* -l
> Start
> Init done
> written 100000 pairs of len=400 bytes
> -rw-r--r-- 1 root root 7761920 Jun 25 21:43 ./testdb/data.mdb
> -rw-r--r-- 1 root root 8192 Jun 25 21:43 ./testdb/lock.mdb
> [root@ip-172-31-13-74 liblmdb]#
>
>
>
> Looks like it's not replacing old keys and has a page/data size problem.
>
>
> Any suggestions ?
>
>
> REGARDS !!
>
> Fabio Martinez - Buenos Aires and São Paulo
>
> gcc developer
>
> Brazil
>
>
> --
>
>
>
>
> Hello,
>
> I am trying to use LMDB to store large (huge) amounts of binary data
> which, to limit memory footprint, are split into chunks. Each chunk is
> stored under a separate key made up of [collectionId, chunkId], so that I
> can later iterate the chunks using an LMDB cursor. Chunk size is
> configurable.
>
> During my tests, I encountered a strange scenario where, after inserting some
> 2000 chunks consisting of 512KB each, the database size had grown to a value
> that was roughly 135 times the calculated size of the data. I ran stat over
> the db and saw that there were > 12000 overflow pages vs. approx. 2000 data
> pages. When I reduced the chunk size to 4060 bytes, the number of overflow
> pages went down to 1000, and the database size went down to the expected
> number (I experimented with different sizes, this was the best result). I did
> not find any documentation that would explain this behavior, or how to deal
> with it. Of course it makes me worry about database bloat and the
> consequences. Can anyone shed light on this?
>
> thanks,
> Christian
>
7 years, 5 months
Re: Antw: Re: openldap 2.4.44 - delta-syncrepl fails on auditContext
by Frank Swasey
Today at 6:09am, Ulrich Windl wrote:
> I wonder why you configure accesslog on one node, but not on the other. Here we use same configuration on every node.
I am continuing the practice of having a master and slaves. The masters are set up with MMR and have accesslog configured on them. The slaves delta-syncrepl the main database from the masters and refer all updates to the MMR masters. I have a second set of slaves that are used for mail only; they replicate just the mail-related data from the MMR masters.
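For context, a minimal sketch of the slave-side stanza such a setup implies; the hostnames, DNs, and credentials are assumptions:
--- sketch: ---
# slave: delta-syncrepl consumer of the main database
syncrepl rid=101
    provider=ldap://master1.example.edu
    type=refreshAndPersist
    searchbase="dc=example,dc=edu"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=edu"
    credentials=secret
    logbase="cn=accesslog"
    logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
    syncdata=accesslog
# send all write attempts back to the MMR masters
updateref ldap://master1.example.edu
--- eop ---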
--
Frank Swasey | http://www.uvm.edu/~fcs
Sr Systems Administrator | Always remember: You are UNIQUE,
University of Vermont | just like everyone else.
"I am not young enough to know everything." - Oscar Wilde (1854-1900)
7 years, 5 months
openldap 2.4.44 - delta-syncrepl fails on auditContext
by Frank Swasey
Delta-syncrepl has started failing to actually replicate: a consumer
starting with an empty database fails with code 0x50 because of the
auditContext attribute that is present in the suffix entry on the master
server due to the accesslog overlay being used there.
I can make it work if I load the accesslog overlay in the replica's
configuration (without actually configuring it). This appears to be new
behavior since 2.4.42.
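For reference, the workaround described amounts to something like the following on the replica; whether a bare moduleload suffices, versus an unconfigured overlay directive, is an assumption:
--- sketch: ---
# replica: load the accesslog code so the auditContext attribute
# type is known, without pointing the overlay at a log database
moduleload accesslog.la
--- eop ---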
Is this expected, and now required with 2.4.44, or should I open an ITS?
--
Frank Swasey | http://www.uvm.edu/~fcs
Sr Systems Administrator | Always remember: You are UNIQUE,
University of Vermont | just like everyone else.
"I am not young enough to know everything." - Oscar Wilde (1854-1900)
7 years, 5 months
Schema's not importing correctly?
by Trent Dierking
Per the quickstart guide, I am trying to import an ldif file with the
following:
dn: dc=example,dc=com
objectclass: dcObject
objectclass: organization
o: Example Company
dc: example

dn: cn=Manager,dc=example,dc=com
objectclass: organizationalRole
cn: Manager
However, I receive errors indicating something is wrong with the object
class. So I tried the first two lines only, and received:
ldap_add: Object class violation (65)
additional info: no structural object class provided
The common errors FAQ indicates this is because my objectclass is not
valid. Since this is the quickstart guide, and dcObject comes from
core.schema (which is included by default), what the hell is going on? My
best guess is that the schema is not being imported correctly, but I'm not
sure why.
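For reference, the quickstart-style command assumed for the import; the file name is an assumption:
--- sketch: ---
ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f example.ldif
--- eop ---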
7 years, 5 months
Re: Odd MMR behaviour with delta-syncrepl and refreshAndPersist
by Quanah Gibson-Mount
--On Wednesday, June 15, 2016 1:59 PM +0100 Mark Cairney
<Mark.Cairney(a)ed.ac.uk> wrote:
> Hi Quanah,
>
> I can confirm I still see the issue when deleting and adding user
> objects and groups using 3-way delta-MMR.
Please keep replies on the list.
> From one of the servers receiving the change:
>
[snip]
> I spotted the reference to "cn=marksgroup2" in the log above so decided
> to try it with an objectclass that has no group memberships managed by
> the memberOf overlay (simplesecurityobject) and it worked as expected:
>
> Again from the consumer I see the following logged:
[snip]
> So it looks like there's possibly an additional effect caused by the
> memberOf overlay, but since about 90% of our LDAP writes are the
> creation/modification/deletion of users and groups, this could be a pain
> on a production system :-)
>
> Is this enough for you to go on? If there's any additional logging or
> details of my config I'm happy to pass them on.
I dropped your replication log bits in case there was anything sensitive in
there.
I agree, it looks like the memberof overlay is breaking replication in your
case. I would suggest filing an ITS with details on your setup, and the
logging you provided, obfuscated as necessary.
--Quanah
--
Quanah Gibson-Mount
Platform Architect
Manager, Systems Team
Zimbra, Inc.
--------------------
Zimbra :: the leader in open source messaging and collaboration
A division of Synacor, Inc
7 years, 5 months
Give user only access to a few entries that he "owns"
by PenguinWhispererThe .
Hi all,
I'm not very experienced with LDAP. I've been looking into the access
control documentation, but I'm unsure about the proper way to handle this.
So let me explain what I want to accomplish: a user entry (posixAccount,
password, givenName, ...) can update its own password by using the "self"
keyword. All good there. But a user also owns some assets, for example a
host (in the Common tree).
I want the user to be able to update one attribute of this host.
The "self" keyword doesn't work here because the user doesn't bind as the
host entry.
So I added an owner attribute to the host, and with that attribute I
reference the user.
Now I need some kind of "glue" to verify that the user is allowed to write
to the attribute.
Do I need a filter? Would it just select a specific attribute, or does it
only match entries against the filter? In the latter case (which seems like
the logical way for OpenLDAP to handle this) I would need:
- attrs: to select which attribute the user may modify
- filter: to apply the rule only to the user's own host
- by: a clause that applies only to the bound user
I've read about dnattr, but I'm unsure whether it accomplishes what I want.
Could anyone share an example?
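For illustration, a minimal sketch of the kind of ACL being asked about, using the dnattr keyword; the ou=hosts subtree and the description attribute are assumptions. dnattr grants the access to whatever DN is stored in the entry's owner attribute:
--- sketch: ---
# the DN listed in an entry's "owner" attribute may write that
# entry's description; everyone else may only read it
access to dn.subtree="ou=hosts,dc=example,dc=com" attrs=description
    by dnattr=owner write
    by * read
--- eop ---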
Thanks
7 years, 5 months