I realize that development discussion is supposed to go to openldap-devel, but despite having subscribed to that a month ago, none of my postings have gone through. I hear they are having some technical difficulties with that list, so for the sake of this submission not being indefinitely delayed, here it is.
Attached is a proposed patch to fix ITS #7161. It uses the same method as the accesslog module to generate a subsecond generalized time, appending the o_tincr value from the operation structure as fractional seconds. The only other code that looks at the value of that attribute calls parse_time to pull the seconds out of it (ignoring the fractional part), so other than the change to the format in which the attribute is stored, I don't believe any other changes are required.
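For reference, that technique reads roughly like this - a sketch of the approach just described, not the attached patch itself (the buffer sizing follows the existing nowstr idiom in ppolicy.c):

    char buf[ LDAP_LUTIL_GENTIME_BUFSIZE + 8 ];
    struct berval ts;
    time_t t = op->o_time;

    ts.bv_val = buf;
    ts.bv_len = sizeof(buf);
    slap_timestamp( &t, &ts );          /* "YYYYmmddHHMMSSZ" */

    /* overwrite the trailing 'Z' with ".NNNNNNZ" built from o_tincr */
    snprintf( ts.bv_val + ts.bv_len - 1, sizeof(".123456Z"),
        ".%06dZ", op->o_tincr );
    ts.bv_len += STRLENOF(".123456");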
Paul B. Henson wrote:
> I realize that development discussion is supposed to go to openldap-devel, but despite having subscribed to that a month ago, none of my postings have gone through. I hear they are having some technical difficulties with that list, so for the sake of this submission not being indefinitely delayed, here it is.
> Attached is a proposed patch to fix ITS #7161. It uses the same method as the accesslog module to generate a subsecond generalized time, appending the o_tincr value from the operation structure as fractional seconds. The only other code that looks at the value of that attribute calls parse_time to pull the seconds out of it (ignoring the fractional part), so other than the change to the format in which the attribute is stored, I don't believe any other changes are required.
The patch has a couple of issues - you have completely changed the semantics of the timestamp. The code explicitly used "now" - the current time when the operation completed, and you have changed it to use op->o_time, which is the time when the operation was first received. You said all you wanted to do was add microseconds, but this patch changes much more than that.
Aside from that, there's no reason to make a second, redundant call to slap_timestamp - just copy the result from timestr to failtimestr.
Rejecting this patch.
From: Howard Chu
Sent: Friday, May 23, 2014 5:30 PM
> you have completely changed the semantics of the timestamp. The code explicitly used "now" - the current time when the operation completed, and you have changed it to use op->o_time, which is the time when the operation was first received. You said all you wanted to do was add microseconds, but this patch changes much more than that.
Yes, I almost went into more detail on that when I posted; I suppose I probably should have.
During my testing, the seconds in op->o_time never differed from what a call to time() assigned to now. Assuming an operation never takes more than one second to complete, at one-second granularity the start and stop times are roughly equivalent, never differing by more than one second (i.e., start at 10.8, stop at 11.2, use 11 rather than 10). What is the worst-case amount of time it would take to perform a bind operation? The change was made in ppolicy_bind_response, which, unless I misunderstand, is only called for the bind operation, so the changed definition of the time would only apply to binds, affecting the timestamp stored in pwdFailureTime, whether a given pwdFailureTime value is recent enough to consider, and whether a password is considered expired.
Further, while the semantics did change, is the new definition just different, or actually worse than the previous one? Was there a specific reason the time of failure was marked as the completion of the bind operation rather than its start? What benefit does it provide?
I originally did intend to add microseconds to the existing call to time(), but after reviewing other uses of fractional seconds, such as in the accesslog module, it seemed more convenient to use op->o_tincr, which, while not time-based, also lets you distinguish operations at subsecond granularity. For the purpose of keeping track of failures, it seemed less important to log the exact microsecond a failure occurred than simply to be able to distinguish multiple failures within one second. I considered keeping the call to time() and simply appending op->o_tincr to it, but then I would potentially be mixing time intervals.
If you want to keep the semantics of the time being taken at the end of the operation rather than at the start, is it acceptable to call ldap_pvt_gettime from this module, and then call lutil_tm2time to get a lutil_timet containing both seconds and microseconds? ldap_pvt_gettime involves a mutex, and seemed like it would be less efficient than the existing time and operation increment that were already present, given that a delta of a second or two didn't really seem to matter for the use being made of the time.
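For concreteness, the alternative call sequence being floated here would be something like this (a sketch only, using the liblutil/libldap interfaces named above):

    struct lutil_tm tm;
    struct lutil_timet tt;

    ldap_pvt_gettime( &tm );    /* wall-clock time, with microseconds */
    lutil_tm2time( &tm, &tt );  /* tt.tt_sec = seconds, tt.tt_usec = microseconds */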
> Aside from that, there's no reason to make a second, redundant call to slap_timestamp - just copy the result from timestr to failtimestr.
You're right, I did not consider that. I will make that change.
> Rejecting this patch.
Thank you for the feedback.
Paul B. Henson wrote:
> From: Howard Chu
> Sent: Friday, May 23, 2014 5:30 PM
> > you have completely changed the semantics of the timestamp. The code explicitly used "now" - the current time when the operation completed, and you have changed it to use op->o_time, which is the time when the operation was first received. You said all you wanted to do was add microseconds, but this patch changes much more than that.
> Yes, I almost went into more detail on that when I posted; I suppose I probably should have.
> During my testing, the seconds in op->o_time never differed from what a call to time() assigned to now. Assuming an operation never takes more than one second to complete, at one-second granularity the start and stop times are roughly equivalent, never differing by more than one second (i.e., start at 10.8, stop at 11.2, use 11 rather than 10). What is the worst-case amount of time it would take to perform a bind operation?
There is no correct answer to that question, therefore you must not assume there is one. With a proxied bind it could take several seconds to get an answer back from a remote server. Under heavy load with a CPU-intensive password hash it could also take an unpredictable amount of time.
> The change was made in ppolicy_bind_response, which, unless I misunderstand, is only called for the bind operation, so the changed definition of the time would only apply to binds, affecting the timestamp stored in pwdFailureTime, whether a given pwdFailureTime value is recent enough to consider, and whether a password is considered expired.
> Further, while the semantics did change, is the new definition just different, or actually worse than the previous one? Was there a specific reason the time of failure was marked as the completion of the bind operation rather than its start? What benefit does it provide?
The *failure* occurred at that instant, not at the instant the request was received. It is simply a matter of correctness.
> I originally did intend to add microseconds to the existing call to time(), but after reviewing other uses of fractional seconds, such as in the accesslog module, it seemed more convenient to use op->o_tincr, which, while not time-based, also lets you distinguish operations at subsecond granularity. For the purpose of keeping track of failures, it seemed less important to log the exact microsecond a failure occurred than simply to be able to distinguish multiple failures within one second. I considered keeping the call to time() and simply appending op->o_tincr to it, but then I would potentially be mixing time intervals.
You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
Hi,
On Sat, 24 May 2014, Michael Ströder wrote:
> Howard Chu wrote:
> > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> 'pwdFailureTime' gets replicated?
It does if you configure ppolicy_forward_updates (olcPPolicyForwardUpdates):

ppolicy_forward_updates
        Specify that policy state changes that result from Bind operations (such as recording failures, lockout, etc.) on a consumer should be forwarded to a master instead of being written directly into the consumer's local database. This setting is only useful on a replication consumer, and also requires the updateref setting and chain overlay to be appropriately configured.
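For illustration, a consumer configured along those lines might look roughly like this in slapd.conf - the hostnames, DNs and credentials are placeholders, not taken from any setup in this thread:

    # global section: chain overlay to follow the updateref
    overlay             chain
    chain-uri           ldap://master.example.com
    chain-idassert-bind bindmethod=simple
                        binddn="cn=chain,dc=example,dc=com"
                        credentials=secret
                        mode=self

    database            mdb
    suffix              "dc=example,dc=com"
    syncrepl            rid=001 provider=ldap://master.example.com ...
    updateref           ldap://master.example.com

    overlay             ppolicy
    ppolicy_forward_updates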
Greetings
Christian
Christian Kratzer wrote:
> Hi,
> On Sat, 24 May 2014, Michael Ströder wrote:
> > Howard Chu wrote:
> > > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> > 'pwdFailureTime' gets replicated?
> It does if you configure ppolicy_forward_updates (olcPPolicyForwardUpdates):
Yupp, chaining.
This is asking for trouble anyway as you already noticed yourself.
Ciao, Michael.
Hi,
On Sun, 25 May 2014, Michael Ströder wrote:
> > > 'pwdFailureTime' gets replicated?
> > It does if you configure ppolicy_forward_updates (olcPPolicyForwardUpdates):
> Yupp, chaining.
> This is asking for trouble anyway as you already noticed yourself.
Yes, this was for chaining. And I was bitten by some really disgusting issues.
Nevertheless, I do see the metadata from slapo-ppolicy getting replicated between masters in my current setup. I just checked, and I have forward_updates turned off.
The following is from ppolicy.c; we should go into the else block, where o_dont_replicate is only set for SLAP_SINGLE_SHADOW:
    /* If this server is a shadow and forward_updates is true,
     * use the frontend to perform this modify. That will trigger
     * the update referral, which can then be forwarded by the
     * chain overlay. Obviously the updateref and chain overlay
     * must be configured appropriately for this to be useful.
     */
    if ( SLAP_SHADOW( op->o_bd ) && pi->forward_updates ) {
        op2.o_bd = frontendDB;

        /* Must use Relax control since these are no-user-mod */
        op2.o_relax = SLAP_CONTROL_CRITICAL;
        op2.o_ctrls = ca;
        ca[0] = &c;
        ca[1] = NULL;
        BER_BVZERO( &c.ldctl_value );
        c.ldctl_iscritical = 1;
        c.ldctl_oid = LDAP_CONTROL_RELAX;
    } else {
        /* If not forwarding, don't update opattrs and don't replicate */
        if ( SLAP_SINGLE_SHADOW( op->o_bd )) {
            op2.orm_no_opattrs = 1;
            op2.o_dont_replicate = 1;
        }
        op2.o_bd->bd_info = (BackendInfo *)on->on_info;
    }
This seems to translate into ppolicy changes being replicated between masters, which is something I rely on in the current design. I am still not 100% sure about the rationale for the SLAP_SINGLE_SHADOW() check in the else path, but it obviously does what I need.
Mileage may of course vary because of scaling issues, and replicating timestamps with microsecond resolution does not make sense in this use case.
I leveraged the above code in my patch for ITS#7721 to get authTimestamp replicated between masters.
I had done that patch with chaining in mind, but now also use it with plain multimaster replication without any chaining.
Greetings
Christian
On Sat, May 24, 2014 at 10:56:50AM +0200, Michael Ströder wrote:
> 'pwdFailureTime' gets replicated?
Based on my testing, in a multimaster environment that attribute is indeed replicated. For account lockout purposes, that's required if you want the failure count to apply across all servers rather than be relative to each one...
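(This is easy to verify by hand - something along these lines against each master, with placeholder names; pwdFailureTime is operational, so it has to be requested explicitly:

    ldapsearch -H ldap://master1.example.com -x -D cn=admin,dc=example,dc=com -W \
        -b uid=someuser,dc=example,dc=com -s base pwdFailureTime
    ldapsearch -H ldap://master2.example.com -x -D cn=admin,dc=example,dc=com -W \
        -b uid=someuser,dc=example,dc=com -s base pwdFailureTime

After a few failed binds against one server, both should show the same set of values.)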
On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> The *failure* occurred at that instant, not at the instant the request was received. It is simply a matter of correctness.
For my purposes, it doesn't really matter whether the bind is considered to have failed when it was attempted or when all the processing completed, so if you prefer the latter I'll rework my patch to keep those semantics.
> You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
Ah, good point.
Thanks for the feedback...
Paul B. Henson wrote:
> On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> > The *failure* occurred at that instant, not at the instant the request was received. It is simply a matter of correctness.
> For my purposes, it doesn't really matter whether the bind is considered to have failed when it was attempted or when all the processing completed, so if you prefer the latter I'll rework my patch to keep those semantics.
> > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> Ah, good point.
But even with exact microseconds, uniqueness cannot be guaranteed in a replication scenario.
I also wonder what people who want pwdFailureTime replicated expect when bind requests are load-balanced across different replicas - not unusual.
I don't think you can meet the expectations of your IT security folks regarding an exact failure count.
Ciao, Michael.
Michael Ströder wrote:
> Paul B. Henson wrote:
> > On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> > > The *failure* occurred at that instant, not at the instant the request was received. It is simply a matter of correctness.
> > For my purposes, it doesn't really matter whether the bind is considered to have failed when it was attempted or when all the processing completed, so if you prefer the latter I'll rework my patch to keep those semantics.
> > > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> > Ah, good point.
> But even with exact microseconds, uniqueness cannot be guaranteed in a replication scenario.
True, but collisions will be extremely rare, which cannot be said for using the time-increment.
> I also wonder what people who want pwdFailureTime replicated expect when bind requests are load-balanced across different replicas - not unusual.
In the single-master case it's anybody's guess what anyone would expect. In a multi-master case it's clear that the expectation is that all servers maintain identical counts.
> I don't think you can meet the expectations of your IT security folks regarding an exact failure count.
> Ciao, Michael.
Howard Chu wrote:
> Michael Ströder wrote:
> > Paul B. Henson wrote:
> > > On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> > > > The *failure* occurred at that instant, not at the instant the request was received. It is simply a matter of correctness.
> > > For my purposes, it doesn't really matter whether the bind is considered to have failed when it was attempted or when all the processing completed, so if you prefer the latter I'll rework my patch to keep those semantics.
> > > > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> > > Ah, good point.
> > But even with exact microseconds, uniqueness cannot be guaranteed in a replication scenario.
> True, but collisions will be extremely rare, which cannot be said for using the time-increment.
> > I also wonder what people who want pwdFailureTime replicated expect when bind requests are load-balanced across different replicas - not unusual.
> In the single-master case it's anybody's guess what anyone would expect. In a multi-master case it's clear that the expectation is that all servers maintain identical counts.
> > I don't think you can meet the expectations of your IT security folks regarding an exact failure count.
I suspect that people expect the failed logins to sum up correctly. Due to replication latency, this won't work reliably at even a moderate attack rate.
Ciao, Michael.
On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
Attached is an updated patch for this ITS which uses microseconds rather than the time-increment, maintains the semantics of "now" being when the code is called rather than when the operation began, and copies the first timestamp to create a second one with microseconds rather than redundantly calling slap_timestamp.
Let me know if there's anything else that needs to be fixed or changed.
Thanks...
I haven't seen any response to this updated patch I submitted last week; is this now something that would be considered for integration, or are there any other changes you'd like to see first?
Thanks...
On Fri, May 30, 2014 at 05:09:18PM -0700, Paul B. Henson wrote:
> On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> Attached is an updated patch for this ITS which uses microseconds rather than the time-increment, maintains the semantics of "now" being when the code is called rather than when the operation began, and copies the first timestamp to create a second one with microseconds rather than redundantly calling slap_timestamp.
> Let me know if there's anything else that needs to be fixed or changed.
> Thanks...
> From 4db8660f6616a70a67feba1e07ee6f866014b1d2 Mon Sep 17 00:00:00 2001
> From: "Paul B. Henson" <henson@acm.org>
> Date: Fri, 30 May 2014 16:47:34 -0700
> Subject: [PATCH] ITS#7161 ppolicy pwdFailureTime resolution should be better than 1 second
> ---
>  servers/slapd/overlays/ppolicy.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/servers/slapd/overlays/ppolicy.c b/servers/slapd/overlays/ppolicy.c
> index 83aa099..f8b7335 100644
> --- a/servers/slapd/overlays/ppolicy.c
> +++ b/servers/slapd/overlays/ppolicy.c
> @@ -911,8 +911,11 @@ ppolicy_bind_response( Operation *op, SlapReply *rs )
>  	int ngut = -1, warn = -1, age, rc;
>  	Attribute *a;
>  	time_t now, pwtime = (time_t)-1;
> +	struct lutil_tm now_tm;
> +	struct lutil_timet now_usec;
>  	char nowstr[ LDAP_LUTIL_GENTIME_BUFSIZE ];
> -	struct berval timestamp;
> +	char nowstr_usec[ LDAP_LUTIL_GENTIME_BUFSIZE+8 ];
> +	struct berval timestamp, timestamp_usec;
>  	BackendInfo *bi = op->o_bd->bd_info;
>  	Entry *e;
> @@ -929,11 +932,20 @@ ppolicy_bind_response( Operation *op, SlapReply *rs )
>  		return SLAP_CB_CONTINUE;
>  	}
>
> -	now = slap_get_time(); /* stored for later consideration */
> +	ldap_pvt_gettime(&now_tm); /* stored for later consideration */
> +	lutil_tm2time(&now_tm, &now_usec);
> +	now = now_usec.tt_sec;
>  	timestamp.bv_val = nowstr;
>  	timestamp.bv_len = sizeof(nowstr);
>  	slap_timestamp( &now, &timestamp );
>
> +	/* Separate timestamp for pwdFailureTime with microsecond granularity */
> +	strcpy(nowstr_usec, nowstr);
> +	timestamp_usec.bv_val = nowstr_usec;
> +	timestamp_usec.bv_len = timestamp.bv_len;
> +	snprintf( timestamp_usec.bv_val + timestamp_usec.bv_len-1, sizeof(".123456Z"), ".%06dZ", now_usec.tt_usec );
> +	timestamp_usec.bv_len += STRLENOF(".123456");
> +
>  	if ( rs->sr_err == LDAP_INVALID_CREDENTIALS ) {
>  		int i = 0, fc = 0;
> @@ -946,8 +958,8 @@ ppolicy_bind_response( Operation *op, SlapReply *rs )
>  		m->sml_values = ch_calloc( sizeof(struct berval), 2 );
>  		m->sml_nvalues = ch_calloc( sizeof(struct berval), 2 );
> -		ber_dupbv( &m->sml_values[0], &timestamp );
> -		ber_dupbv( &m->sml_nvalues[0], &timestamp );
> +		ber_dupbv( &m->sml_values[0], &timestamp_usec );
> +		ber_dupbv( &m->sml_nvalues[0], &timestamp_usec );
>  		m->sml_next = mod;
>  		mod = m;
> --
> 1.8.3.2
On Fri, Jun 06, 2014 at 01:58:39PM -0700, Paul B. Henson wrote:
> I haven't seen any response to this updated patch I submitted last week; is this now something that would be considered for integration, or are there any other changes you'd like to see first?
Still looking for some feedback on this; good to go, needs work, or even just don't want this enhancement...
Thanks...
Paul B. Henson wrote:
> On Fri, Jun 06, 2014 at 01:58:39PM -0700, Paul B. Henson wrote:
> > I haven't seen any response to this updated patch I submitted last week; is this now something that would be considered for integration, or are there any other changes you'd like to see first?
> Still looking for some feedback on this; good to go, needs work, or even just don't want this enhancement...
Your posts were missed because you sent them to this discussion list, instead of to the ITS where the actual bug report resides.
Paul B. Henson wrote:
> On Fri, May 23, 2014 at 08:51:02PM -0700, Howard Chu wrote:
> > You need to actually use microseconds, since the time-increment is only unique on the local server and will not guarantee uniqueness in a replication scenario.
> Attached is an updated patch for this ITS which uses microseconds rather than the time-increment, maintains the semantics of "now" being when the code is called rather than when the operation began, and copies the first timestamp to create a second one with microseconds rather than redundantly calling slap_timestamp.
> Let me know if there's anything else that needs to be fixed or changed.
You've got the right idea now, but you're not making the best use of the APIs.
ldap_pvt_gettime() returns structured time. There is no reason to then call lutil_tm2time() to turn it into seconds, and then call slap_timestamp(), which must turn seconds back into structured time for formatting. Personally, I would just sprintf a timestamp here using the lutil_tm structure.
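What that might look like, assuming struct lutil_tm follows the struct tm conventions for the fields they share (tm_year as years since 1900, tm_mon as 0-11) - a sketch, not the committed code:

    char buf[ LDAP_LUTIL_GENTIME_BUFSIZE + 8 ];
    struct lutil_tm tm;

    ldap_pvt_gettime( &tm );
    snprintf( buf, sizeof(buf), "%04d%02d%02d%02d%02d%02d.%06dZ",
        tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
        tm.tm_hour, tm.tm_min, tm.tm_sec, tm.tm_usec );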
On Sun, Jun 15, 2014 at 06:04:20AM -0700, Howard Chu wrote:
> ldap_pvt_gettime() returns structured time. There is no reason to then call lutil_tm2time() to turn it into seconds, and then call slap_timestamp(), which must turn seconds back into structured time for formatting. Personally, I would just sprintf a timestamp here using the lutil_tm structure.
You mean you'd copy the struct lutil_tm to a struct tm and call strftime, or you'd actually duplicate the functionality of strftime yourself with a raw sprintf? It seems that if you have an API function to generate a timestamp (or one to format a time string), it's cleaner to actually use it, even if that requires swapping some types around, than to duplicate its functionality inline. But if that's what you want, I'll work on it.
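The strftime variant would presumably look something like this (again assuming lutil_tm mirrors the struct tm conventions for the shared fields - a hypothetical sketch):

    #include <time.h>

    char buf[32];
    size_t n;
    struct lutil_tm ltm;
    struct tm stm = {0};

    ldap_pvt_gettime( &ltm );
    stm.tm_year = ltm.tm_year;  stm.tm_mon  = ltm.tm_mon;
    stm.tm_mday = ltm.tm_mday;  stm.tm_hour = ltm.tm_hour;
    stm.tm_min  = ltm.tm_min;   stm.tm_sec  = ltm.tm_sec;
    n = strftime( buf, sizeof(buf), "%Y%m%d%H%M%S", &stm );      /* seconds part */
    snprintf( buf + n, sizeof(buf) - n, ".%06dZ", ltm.tm_usec ); /* add fraction */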
A time_t is still needed in other parts of the function for comparisons, so it seems lutil_tm2time would still need to be called to acquire it, even if the timestamp is generated via strftime or sprintf from structured time? Otherwise you'd have to call both ldap_pvt_gettime and time().
Thanks...
Paul B. Henson wrote:
> On Sun, Jun 15, 2014 at 06:04:20AM -0700, Howard Chu wrote:
> > ldap_pvt_gettime() returns structured time. There is no reason to then call lutil_tm2time() to turn it into seconds, and then call slap_timestamp(), which must turn seconds back into structured time for formatting. Personally, I would just sprintf a timestamp here using the lutil_tm structure.
> You mean you'd copy the struct lutil_tm to a struct tm and call strftime, or you'd actually duplicate the functionality of strftime yourself with a raw sprintf? It seems that if you have an API function to generate a timestamp (or one to format a time string), it's cleaner to actually use it, even if that requires swapping some types around, than to duplicate its functionality inline. But if that's what you want, I'll work on it.
I would have just used a raw sprintf. It's not like the provided APIs do anything special.
> A time_t is still needed in other parts of the function for comparisons, so it seems lutil_tm2time would still need to be called to acquire it, even if the timestamp is generated via strftime or sprintf from structured time? Otherwise you'd have to call both ldap_pvt_gettime and time().
No, good point, I'd overlooked the other uses of "now" in the function. In that case we may just go ahead with the patch in its current form.
Paul B. Henson wrote:
> On Sun, Jun 15, 2014 at 12:56:34PM -0700, Howard Chu wrote:
> > No, good point, I'd overlooked the other uses of "now" in the function. In that case we may just go ahead with the patch in its current form.
> Cool. Let me know either way, thanks...
Passes test022, committed to master. Thanks for the patch.
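(For anyone following along, that's the stock ppolicy regression test; assuming the usual OpenLDAP test harness layout, it can be run from a built source tree as:

    cd tests
    ./run test022-ppolicy
)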
On Sun, Jun 15, 2014 at 01:47:27PM -0700, Howard Chu wrote:
> Passes test022, committed to master.
Cool, much appreciated. Any chance of backporting it to RE24? It actually applies cleanly, with just a couple of offsets. Unless I'm not thinking of something, it should be backward compatible with the existing code.
Thanks...
Paul B. Henson wrote:
> On Mon, Jun 16, 2014 at 12:23:55PM -0700, Paul B. Henson wrote:
> > Cool, much appreciated. Any chance of backporting it to RE24?
Never mind, Quanah told me off list he'd pulled it back to RE24.
Thanks again for merging it.
Great! It works! Thanks to all for working on it.
Ciao, Michael.