(ITS#6702) slappasswd prompts missing
by gessel@blackrosetech.com
Full_Name: David Gessel
Version: openldap-sasl-server-2.4.23
OS: FreeBSD 8.1
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (66.93.181.141)
Attempting to insert a hash of the root pw into slapd.conf with the command:
# slappasswd >> slapd.conf
yields a blank prompt; that is, there is no feedback to prompt the user along
the lines of:
"Enter root password"
and
"Verify root password"
Entering the root password twice with no feedback (and no echoed characters,
for security, so completely blind) nevertheless completes the command
successfully.
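If the prompts are written to stdout, the redirection itself would swallow
them, since ">>" captures everything the program prints there. The
conventional fix is to prompt on the controlling terminal (or stderr) so that
only the hash lands in the file. A minimal sketch of that pattern follows;
this is purely illustrative and not slappasswd's actual code:

/* Illustrative sketch only -- NOT slappasswd's actual implementation.
 * Prompts go to the controlling terminal (falling back to stderr), so
 * "prog >> slapd.conf" cannot swallow them; only the result is written
 * to stdout.  Echo suppression (termios) is omitted for brevity. */
#include <stdio.h>

int main(void) {
	FILE *tty = fopen("/dev/tty", "w");
	if (tty == NULL)
		tty = stderr;	/* fall back to stderr, never stdout */

	char pw[128];
	fprintf(tty, "New password: ");
	fflush(tty);
	if (fgets(pw, sizeof(pw), stdin) == NULL)
		return 1;

	/* Only the hash goes to stdout, which may be redirected: */
	printf("{SSHA}example-hash-would-go-here\n");

	if (tty != stderr)
		fclose(tty);
	return 0;
}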
(ITS#6701) Update to man8 slappasswd: English usage
by diver06@gmx.net
Full_Name: Simon Wright
Version: 2.4.23
OS: FreeBSD
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (212.98.32.54)
The man page for slappasswd says that encryption scheme names need to be
protected due to "{". This is wrong usage: scheme names may need to be
protected *by* "{".
Patch:
--- openldap-2.4.23/doc/man/man8/slappasswd.8	2010-04-13 22:22:46.000000000 +0200
+++ slappasswd.8	2010-11-11 16:36:28.000000000 +0100
@@ -100,7 +100,7 @@
 The default is
 .BR {SSHA} .
-Note that scheme names may need to be protected, due to
+Note that scheme names may need to be protected by
 .B {
 and
 .BR } ,
(ITS#6700) memberOf overlay does not handle MODRDN correctly
by raphael.ouazana@linagora.com
Full_Name: Raphael Ouazana
Version: 2.4.23
OS: Linux
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (213.41.232.151)
Hi,
When using the memberOf overlay, a MODRDN request on a member does not
correctly update the matching group, even if the memberof-refint option is
set to true.
In my limited understanding, memberof_value_modify calls op->o_bd->be_modify,
which calls back into the overlay code and so never reaches the backend.
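Schematically, the suspected pattern can be modelled with a self-contained
toy (illustrative only, not OpenLDAP code): an internal modify dispatched
through the top of the handler stack re-enters the overlay, which consumes
the operation and returns before the real backend is ever called.

/* Toy model of an overlay stack (illustrative only -- not OpenLDAP code). */
#include <stdio.h>

typedef struct Handler Handler;
struct Handler {
	const char *name;
	int (*modify)(const Handler *self, const char *dn);
	const Handler *next;	/* next handler down the stack */
};

static int backend_modify(const Handler *self, const char *dn) {
	printf("%s: modify %s (database reached)\n", self->name, dn);
	return 0;
}

static int overlay_modify(const Handler *self, const char *dn) {
	printf("%s: intercepted modify of %s\n", self->name, dn);
	/* The overlay consumes the operation and returns without calling
	 * self->next->modify(), so the backend is never reached -- the
	 * behaviour described above for the internal memberOf update. */
	return 0;
}

int main(void) {
	Handler backend = { "back-bdb", backend_modify, NULL };
	Handler overlay = { "memberof", overlay_modify, &backend };

	/* Internal update dispatched through the top of the stack: */
	overlay.modify(&overlay, "cn=group,dc=example,dc=com");

	/* What the internal update actually needs -- going straight to
	 * the backend, bypassing the overlay that generated it: */
	backend.modify(&backend, "cn=group,dc=example,dc=com");
	return 0;
}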
Regards,
Raphaël Ouazana.
RE: (ITS#6683) DDS fails with expired branches
by Petteri.Stenius@ubisecure.com
Hi,
Thank you for your reply.
My knowledge of the bdb internals is limited. I have reproduced this
issue and reduced it to a small amount of data (see below). I think I
have done everything right with regard to setting up and preparing the
database and indexes. I've used both ldapmodify and slapadd/slapindex to
prepare the db.
If you think my indexes are corrupt, can you please give me pointers on
how to verify that?
I don't think there are any test cases for the LE ("<=") or GE (">=")
operators in the source code.
Thanks,
Petteri
-----Original Message-----
From: Howard Chu [mailto:hyc@symas.com]
Sent: Tuesday, November 09, 2010 10:45 AM
To: Petteri Stenius
Cc: openldap-its@openldap.org
Subject: Re: (ITS#6683) DDS fails with expired branches
Petteri.Stenius@ubisecure.com wrote:
> Hello,
>
> Further investigation shows this issue is caused by operator LE search
> failing with indexed attributes. Also this indexed search issue is NOT
> limited to DDS.
>
> I have reproduced the issue with integerOrderingMatch and
> generalizedTimeOrderingMatch.
>
> The piece of code I find suspicious is in servers/back-bdb/idl.c,
> somewhere in the middle it reads
>
> /* skip presence key on range inequality lookups */
> while ( rc == 0 && kptr->size != len ) {
>         rc = cursor->c_get( cursor, kptr, &data, flags | DB_NEXT_NODUP );
> }
>
> If I remove this block then LE search works as expected with indexed
> attributes. The key here seems to be the DB_NEXT_NODUP flag. This flag
> causes the iterator block a few lines below to return partial matches.
That implies that there's something else corrupt in the index, because the
presence key will never be the same size as an equality key.
>
> Thanks,
> Petteri
>
> -----Original Message-----
> From: openldap-bugs-bounces@OpenLDAP.org
> [mailto:openldap-bugs-bounces@OpenLDAP.org] On Behalf Of
> petteri.stenius@ubisecure.com
> Sent: Monday, October 25, 2010 1:35 PM
> To: openldap-its@openldap.org
> Subject: (ITS#6683) DDS fails with expired branches
>
> Full_Name:
> Version: 2.4.23
> OS: Linux
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (195.197.205.34)
>
>
> Hello,
>
> I have a directory with branches of dynamicObject entries. It looks like
> if the entryExpireTimestamp value is the same on objects within a branch
> then the DDS search for expired objects will only find the top-most
> object. This results in the remove failing with the message
>
> DDS dn="cn=top,cn=root,dc=test" is non-leaf; deferring.
>
>
> To reproduce
>
> OpenLDAP 2.4.23, Berkeley DB 4.6.21
>
> Use slapadd to prepare directory with following
>
> dn: cn=Root,dc=test
> objectClass: top
> objectClass: applicationProcess
> cn: Root
>
> dn: cn=top,cn=Root,dc=test
> objectClass: top
> objectClass: device
> objectClass: dynamicObject
> entryTTL: 60
> entryExpireTimestamp: 20101024113626Z
> cn: top
>
> dn: cn=leaf1,cn=top,cn=Root,dc=test
> objectClass: top
> objectClass: device
> objectClass: dynamicObject
> entryTTL: 60
> entryExpireTimestamp: 20101024113626Z
> cn: leaf1
>
> dn: cn=leaf2,cn=top,cn=Root,dc=test
> objectClass: top
> objectClass: device
> objectClass: dynamicObject
> entryTTL: 60
> entryExpireTimestamp: 20101024113626Z
> cn: leaf2
>
> dn: cn=leaf3,cn=top,cn=Root,dc=test
> objectClass: top
> objectClass: device
> objectClass: dynamicObject
> entryTTL: 60
> entryExpireTimestamp: 20101024113626Z
> cn: leaf3
>
>
> Relevant slapd.conf entries
>
> database bdb
> suffix "cn=Root,dc=test"
> rootdn "cn=Root,dc=test"
> rootpw "password"
>
> overlay dds
> dds-default-ttl 3600
> dds-min-ttl 60
> dds-interval 60
> dds-state true
> index entryExpireTimestamp eq,pres
>
> access to dn.subtree="cn=Root,dc=test"
> by users write
> by * read
>
>
> Running "slapd -d 1 -d 256" produces following
>
> put_filter: "(&(objectClass=dynamicObject)(entryExpireTimestamp<=20101025082446Z))"
> put_filter: AND
> put_filter_list "(objectClass=dynamicObject)(entryExpireTimestamp<=20101025082446Z)"
> put_filter: "(objectClass=dynamicObject)"
> put_filter: simple
> put_simple_filter: "objectClass=dynamicObject"
> put_filter: "(entryExpireTimestamp<=20101025082446Z)"
> put_filter: simple
> put_simple_filter: "entryExpireTimestamp<=20101025082446Z"
> ber_scanf fmt ({mm}) ber:
> ber_scanf fmt ({mm}) ber:
> => bdb_search
> bdb_dn2entry("cn=root,dc=test")
> => bdb_dn2id("cn=root,dc=test")
> <= bdb_dn2id: got id=0x1
> entry_decode: "cn=Root,dc=test"
> <= entry_decode(cn=Root,dc=test)
> search_candidates: base="cn=root,dc=test" (0x00000001) scope=2
> => bdb_dn2idl("cn=root,dc=test")
> => bdb_equality_candidates (objectClass)
> => key_read
> <= bdb_index_read: failed (-30989)
> <= bdb_equality_candidates: id=0, first=0, last=0
> => bdb_equality_candidates (objectClass)
> => key_read
> <= bdb_index_read 4 candidates
> <= bdb_equality_candidates: id=4, first=2, last=5
> => bdb_inequality_candidates (entryExpireTimestamp)
> => key_read
> <= bdb_index_read 1 candidates
> => key_read
> <= bdb_index_read: failed (-30989)
> <= bdb_inequality_candidates: id=1, first=2, last=2
> bdb_search_candidates: id=1 first=2 last=2
> entry_decode: "cn=top,cn=Root,dc=test"
> <= entry_decode(cn=top,cn=Root,dc=test)
> => bdb_dn2id("cn=top,cn=root,dc=test")
> <= bdb_dn2id: got id=0x2
> send_ldap_result: conn=-1 op=0 p=0
> bdb_dn2entry("cn=top,cn=root,dc=test")
> => bdb_dn2id_children("cn=top,cn=root,dc=test")
> <= bdb_dn2id_children("cn=top,cn=root,dc=test"): (0)
> send_ldap_result: conn=-1 op=0 p=0
> DDS dn="cn=top,cn=root,dc=test" is non-leaf; deferring.
> DDS expired=0
>
>
> ldapsearch "(entryExpireTimestamp=*)" produces
>
> dn: cn=top,cn=Root,dc=test
> entryExpireTimestamp: 20101024113626Z
>
> dn: cn=leaf1,cn=top,cn=Root,dc=test
> entryExpireTimestamp: 20101024113626Z
>
> dn: cn=leaf2,cn=top,cn=Root,dc=test
> entryExpireTimestamp: 20101024113626Z
>
> dn: cn=leaf3,cn=top,cn=Root,dc=test
> entryExpireTimestamp: 20101024113626Z
>
>
> where ldapsearch "(entryExpireTimestamp<=20101024113626Z)" only finds
>
> dn: cn=top,cn=Root,dc=test
> entryExpireTimestamp: 20101024113626Z
>
>
> If I change all timestamps to distinct values then expiration of complete
> branches works as expected.
>
>
> Thanks,
> Petteri
>
>
>
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
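As background to the DB_NEXT_NODUP discussion above: with sorted duplicates,
that flag positions the cursor on the first data item of the next distinct
key, skipping the remaining duplicates of the current one. The following
self-contained Berkeley DB 4.x program demonstrates this in isolation; it is
illustrative only and says nothing about slapd's own index layout.

/* Demonstration of DB_NEXT_NODUP semantics (illustrative only; error
 * handling omitted for brevity).  With DB_DUPSORT duplicates, a plain
 * DB_NEXT walk visits all four items, but DB_NEXT_NODUP visits only the
 * first data item of each distinct key ("a" and "c" below) -- so using
 * it inside an iteration can drop duplicate matches. */
#include <stdio.h>
#include <string.h>
#include <db.h>

int main(void) {
	DB *db;
	DBC *cursor;
	DBT key, data;
	int rc;

	db_create(&db, NULL, 0);
	db->set_flags(db, DB_DUPSORT);	/* sorted duplicates, set before open */
	db->open(db, NULL, NULL, NULL, DB_BTREE, DB_CREATE, 0600);  /* in-memory */

	const char *pairs[][2] = {
		{ "key1", "a" }, { "key1", "b" }, { "key2", "c" }, { "key2", "d" },
	};
	for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
		memset(&key, 0, sizeof(key));
		memset(&data, 0, sizeof(data));
		key.data = (void *)pairs[i][0];
		key.size = strlen(pairs[i][0]);
		data.data = (void *)pairs[i][1];
		data.size = strlen(pairs[i][1]);
		db->put(db, NULL, &key, &data, 0);
	}

	db->cursor(db, NULL, &cursor, 0);
	memset(&key, 0, sizeof(key));
	memset(&data, 0, sizeof(data));

	/* Prints "key1 -> a" and "key2 -> c"; "b" and "d" are skipped. */
	while ((rc = cursor->c_get(cursor, &key, &data, DB_NEXT_NODUP)) == 0)
		printf("%.*s -> %.*s\n", (int)key.size, (char *)key.data,
		       (int)data.size, (char *)data.data);

	cursor->c_close(cursor);
	db->close(db, 0);
	return 0;
}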
(ITS#6698) Please add BDB 5.x support for SLAPD
by faessler@was.ch
Full_Name: Dominik Fässler
Version: 2.4.23
OS: FreeBSD (8.1-RELEASE)
URL:
Submission from: (NULL) (62.2.101.221)
We know that at the moment only BDB versions 4.4 - 4.8 are supported as the
backend for slapd.
We would be happy if you could include support for BDB versions 5.x as well.
(For now we are using 5.1.)
Thanks.
Re: (ITS#6696) back-sql and pagedResultsControl can be extremely heavy due to no LIMIT
by masarati@aero.polimi.it
Howard Chu wrote:
>> Using back-sql on large databases along with the pagedResults control is
>> not advisable. Limiting the number of entries returned by each query is
>> not viable either, since some entries might not match the LDAP filter,
>> or ACLs, and so on, possibly leading to fewer than pageSize entries
>> returned within one page. PagedResults could be removed from back-sql
>> and dealt with by an overlay that simply pages results returned by
>> back-sql in a single internal search; this is probably the preferable
>> approach, since it would also reduce the complexity of back-sql.
>> However, I have little interest in improving back-sql, so patches are
>> welcome, as usual...
>
> The sssvlv overlay already intercepts pagedResults requests if they
> occur in combination with the Sort control. It would be trivial to
> extend it to always intercept pagedResults, and then we can rip the
> paging support out of each of the backends. (Of course, there's a
> marginal efficiency advantage to letting back-bdb/hdb do its own paging.
> A configurable option might be best.)
That's more or less what I had in mind. I assume you merged the two
functionalities into one overlay because pagedResults needs special care
when combined with SSSVLV, and the same might be true for other
functionalities (e.g. implementing pagedResults efficiently; life would
be much better without it, since clients do not really need it while it
makes the server's life harder). With respect to conditionally exploiting
the native pagedResults capabilities of back-bdb/hdb, I only fear some
issues related to glued databases. Those could possibly be solved by
disabling native back-bdb/hdb pagedResults handling when used in glued
databases, or, even more granularly, by delegating the handling to the
overlay whenever a search spans more than one database in a glued
configuration.
p.
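For reference, this is what the control looks like from the client side:
every page is a separate search request carrying the cookie from the
previous response, which is exactly why back-sql re-runs its expensive
candidate query once per page. A minimal sketch using OpenLDAP's libldap
follows; the URL, base DN and filter are placeholders, and error handling
is omitted for brevity.

/* Client-side pagedResults loop (illustrative sketch). */
#include <stdio.h>
#include <ldap.h>

int main(void) {
	LDAP *ld;
	int version = LDAP_VERSION3;
	struct berval cookie = { 0, NULL };
	int more = 1;

	if (ldap_initialize(&ld, "ldap://localhost:389") != LDAP_SUCCESS)
		return 1;
	ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

	while (more) {
		LDAPControl *page = NULL, *reqctrls[2], **resctrls = NULL;
		LDAPMessage *res = NULL;
		int err;

		/* Attach the cookie from the previous page (none on the first). */
		ldap_create_page_control(ld, 250, cookie.bv_val ? &cookie : NULL,
		                         0, &page);
		reqctrls[0] = page;
		reqctrls[1] = NULL;

		ldap_search_ext_s(ld, "ou=People,dc=example,dc=com",
		                  LDAP_SCOPE_SUBTREE, "(objectClass=person)",
		                  NULL, 0, reqctrls, NULL, NULL, 0, &res);
		ldap_control_free(page);
		printf("page: %d entries\n", ldap_count_entries(ld, res));

		/* Pull the next cookie out of the response controls. */
		ldap_parse_result(ld, res, &err, NULL, NULL, NULL, &resctrls, 0);
		if (cookie.bv_val) {
			ber_memfree(cookie.bv_val);
			cookie.bv_val = NULL;
			cookie.bv_len = 0;
		}
		if (resctrls != NULL) {
			LDAPControl *pr = ldap_control_find(LDAP_CONTROL_PAGEDRESULTS,
			                                    resctrls, NULL);
			ber_int_t count;
			if (pr != NULL)
				ldap_parse_pageresponse_control(ld, pr, &count, &cookie);
			ldap_controls_free(resctrls);
		}
		/* An empty cookie means the server has no more pages. */
		more = cookie.bv_val != NULL && cookie.bv_len > 0;
		ldap_msgfree(res);
	}
	ldap_unbind_ext_s(ld, NULL, NULL);
	return 0;
}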
Re: (ITS#6696) back-sql and pagedResultsControl can be extremely heavy due to no LIMIT
by hyc@symas.com
masarati@aero.polimi.it wrote:
> Andrew.Gray@unlv.edu wrote:
>> Full_Name: Andrew Gray
>> Version: 2.4.17
>> OS: Debian 5.0
>> URL: ftp://ftp.openldap.org/incoming/
>> Submission from: (NULL) (131.216.14.1)
>>
>>
>> On receiving LDAP queries with a pagedResultsControl (in this case with a size
>> of 250), back-sql generates an extremely inefficient query for every iteration
>> in the form of:
>>
>> SELECT DISTINCT ldap_entries.id,people.local_id,text('UNLVexpperson') AS
>> objectClass,ldap_entries.dn AS dn
>> FROM ldap_entries,people,ldap_entry_objclasses
>> WHERE people.local_id=ldap_entries.keyval AND ldap_entries.oc_map_id=1
>> AND upper(ldap_entries.dn) LIKE upper('%'||'%OU=PEOPLE,DC=UNLV,DC=EDU')
>> AND ldap_entries.id>250 AND (2=2 OR
>> (ldap_entries.id=ldap_entry_objclasses.entry_id AND
>> ldap_entry_objclasses.oc_name='UNLVexpperson'))
>>
>> (this repeats for id>250, id>500, id>750, etc. etc.)
>>
>> Ideally (IMO) there really should be a SQL LIMIT applied here, as in this
>> case slapd gets back a few tens of thousands of rows on every iteration,
>> and its memory usage explodes until slapd eventually gets killed.
>
> Using back-sql on large databases along with the pagedResults control is
> not advisable. Limiting the number of entries returned by each query is
> not viable either, since some entries might not match the LDAP filter,
> or ACLs, and so on, possibly leading to fewer than pageSize entries
> returned within one page. PagedResults could be removed from back-sql
> and dealt with by an overlay that simply pages results returned by
> back-sql in a single internal search; this is probably the preferable
> approach, since it would also reduce the complexity of back-sql.
> However, I have little interest in improving back-sql, so patches are
> welcome, as usual...
The sssvlv overlay already intercepts pagedResults requests if they occur in
combination with the Sort control. It would be trivial to extend it to always
intercept pagedResults, and then we can rip the paging support out of each of
the backends. (Of course, there's a marginal efficiency advantage to letting
back-bdb/hdb do its own paging. A configurable option might be best.)
--
-- Howard Chu
CTO, Symas Corp. http://www.symas.com
Director, Highland Sun http://highlandsun.com/hyc/
Chief Architect, OpenLDAP http://www.openldap.org/project/
Re: (ITS#6641) Syncrepl failure with 'overlay unique'
by ondrej.kuznik@acision.com
On 09/07/2010 03:26 PM, andrew.findlay@skills-1st.co.uk wrote:
> On Tue, Sep 07, 2010 at 05:09:07AM -0700, Howard Chu wrote:
>
>>> We've talked about doing this isolation in the first refresh upon slapd
>>> startup. That might still be a good idea.
>
> It would certainly help to keep the apparent promises made by things
> like the uniqueness overlay. Alternatively you could take the view
> that the data will converge eventually and that is all that the LDAP
> standards promise.
We've hit a similar problem and decided to go this way, i.e. allowing
the replication to bypass the uniqueness constraints.
In RFC 4533, there are only two claims regarding DIT consistency:
<quote>
1.2 Intended Usage
Upon completion of each synchronization stage of the operation, all
information to construct a synchronized client copy of the content has
been provided to the client [...]. Except for transient inconsistencies
due to concurrent operation (or other) processing at the server, the
client copy is an accurate reflection of the content held by the server.
Transient inconsistencies will be resolved by subsequent synchronization
operations.
[...]
This protocol is not intended to be used in applications requiring
transactional data consistency.
</quote>
There is no claim that the DIT should stay consistent with respect to any
structural restrictions during one synchronization operation. It might be
worth noting that trying to replicate the data as one transaction, as
Andrew suggested (quoted below), would still have been impossible without
a change in the overlay to allow it to pass through.
>>> Doing it on every refresh seems far more problematic, because without some
>>> type of multi-version concurrency control, that means making the server
>>> non-responsive until the refresh completes.
>
> That may not be a problem with refresh-and-persist, as in normal
> circumstances I would expect updates to arrive at the consumer in the
> same order they hit the supplier (so this bug could not trigger). More
> difficult for scheduled refresh mode though. Could the consumer server
> simply write-lock every entry involved in the refresh while it processes
> the list, and then commit the whole lot in one DB transaction?
>
> Andrew
I have put a preliminary version of patches that modify the unique
overlay here
ftp://ftp.openldap.org/incoming/ondrej-kuznik-20101109-unique_bypass_v1.tgz
They add a new configuration attribute, olcUniqueAllowManageBypass (though
that is a prohibitively long name), which, if set to TRUE, causes the
uniqueness checks to be skipped when the operation has manage privileges
on the entry. There are three separate patches: the configuration code for
the new attribute, the checks in unique_{add,modify,modrdn}, and manpage
modifications.
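Schematically, the bypass boils down to the following self-contained toy
(my paraphrase of the idea, not the patch text; the actual patch uses
slapd's access_allowed() with ACL_MANAGE rather than a boolean flag):

/* Toy model of the proposed bypass (illustrative only). The uniqueness
 * check runs only when the bypass is disabled or the operation lacks
 * manage privilege on the entry. */
#include <stdio.h>
#include <stdbool.h>

struct op { bool has_manage_privilege; };
struct domain { bool allow_manage_bypass; };

static bool unique_check_needed(const struct domain *d, const struct op *o) {
	if (d->allow_manage_bypass && o->has_manage_privilege)
		return false;	/* e.g. syncrepl running with manage rights */
	return true;
}

int main(void) {
	struct domain d = { .allow_manage_bypass = true };
	struct op replication = { .has_manage_privilege = true };
	struct op regular     = { .has_manage_privilege = false };

	printf("replication op checked: %s\n",
	       unique_check_needed(&d, &replication) ? "yes" : "no");
	printf("regular op checked:     %s\n",
	       unique_check_needed(&d, &regular) ? "yes" : "no");
	return 0;
}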
Some things that should be sorted out before this is complete from my
point of view:
1. While there might be a way to find out whether an operation comes
from replication or is really issued by a rootdn, I have not found
one. As a workaround, I used the "manage" privilege on the entry as
the trigger, since this privilege already allows changing the
structural objectClass of an entry, an operation otherwise prohibited
by RFC 4512.
2. When performing a modifyDN operation, the privilege is checked
against the entry to be changed instead of the new entry. I found
this approach more appropriate than modifying the entry returned by
overlay_entry_get_ov as I do not yet understand the implications of
doing that and do not know whether it is possible and ok to create a
phony entry by hand just for the call to access_allowed. This is
clearly marked as a "FIXME" comment in the patch.
3. The entry attribute used for ACL checking might be too broad.
However, deriving the attributes needed for each domain seemed too
complex, at least for a first version of a patch for which I have no
indication whether even the general idea is considered worth pursuing.
4. The code is similar in all three cases, so the first attempt was to
move it into unique_search, but not enough information is available
there and it would have to be provided by each of the three functions
anyway.
Howard and/or others, do you consider this solution valid for this ITS?
If yes, could you help me address the things that should be sorted out?
Last, but not least, the IPR stuff:
The attached modifications to OpenLDAP Software are subject to the
following notice:
Copyright 2010 Acision
Redistribution and use in source and binary forms, with or without
modification, are permitted only as authorized by the OpenLDAP Public
License.
Regards,
Ondrej Kuznik
Re: (ITS#6696) back-sql and pagedResultsControl can be extremely heavy due to no LIMIT
by masarati@aero.polimi.it
Andrew.Gray@unlv.edu wrote:
> Full_Name: Andrew Gray
> Version: 2.4.17
> OS: Debian 5.0
> URL: ftp://ftp.openldap.org/incoming/
> Submission from: (NULL) (131.216.14.1)
>
>
> On receiving LDAP queries with a pagedResultsControl (in this case with a size
> of 250), back-sql generates an extremely inefficient query for every iteration
> in the form of:
>
> SELECT DISTINCT ldap_entries.id,people.local_id,text('UNLVexpperson') AS
> objectClass,ldap_entries.dn AS dn
> FROM ldap_entries,people,ldap_entry_objclasses
> WHERE people.local_id=ldap_entries.keyval AND ldap_entries.oc_map_id=1
> AND upper(ldap_entries.dn) LIKE upper('%'||'%OU=PEOPLE,DC=UNLV,DC=EDU')
> AND ldap_entries.id>250 AND (2=2 OR
> (ldap_entries.id=ldap_entry_objclasses.entry_id AND
> ldap_entry_objclasses.oc_name='UNLVexpperson'))
>
> (this repeats for id>250, id>500, id>750, etc. etc.)
>
> Ideally (IMO) there really should be a SQL LIMIT applied here, as in this
> case slapd gets back a few tens of thousands of rows on every iteration,
> and its memory usage explodes until slapd eventually gets killed.
Using back-sql on large databases along with the pagedResults control is
not advisable. Limiting the number of entries returned by each query is
not viable either, since some entries might not match the LDAP filter,
or ACLs, and so on, possibly leading to fewer than pageSize entries
returned within one page. PagedResults could be removed from back-sql
and dealt with by an overlay that simply pages results returned by
back-sql in a single internal search; this is probably the preferable
approach, since it would also reduce the complexity of back-sql.
However, I have little interest in improving back-sql, so patches are
welcome, as usual...
p.