Do you have any more specifics to back that up? We were running just fine until recently, when we seemed to cross a threshold (hence my interest in conn_max_pending*: all devices behind the F5, when on the same subnet as the clients using the balancer, see every connection coming from the F5's own IPs).
F5 load is low (under 18% CPU), it's a reasonably high-powered model (a 6400), and according to the traces we've sent to F5 for analysis we aren't hitting any traffic or packet limits on it.
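For reference, the change I'm contemplating is roughly the following in slapd.conf (the values below are just illustrative guesses on my part, not anything recommended by F5 or the OpenLDAP docs):

    # conn_max_pending: maximum number of pending (queued, not yet
    # processed) requests allowed on an anonymous session before slapd
    # closes it. The default is 100.
    conn_max_pending        1000

    # conn_max_pending_auth: the same limit for authenticated sessions.
    # The default is 1000.
    conn_max_pending_auth   10000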
-- David J. Andruczyk
----- Original Message -----
From: Quanah Gibson-Mount quanah@zimbra.com
To: David J. Andruczyk djandruczyk@yahoo.com; openldap-software@openldap.org
Sent: Tuesday, July 21, 2009 4:38:29 PM
Subject: Re: performance issue behind a load balancer 2.3.32
--On Tuesday, July 21, 2009 12:39 PM -0700 "David J. Andruczyk" djandruczyk@yahoo.com wrote:
This is a large production environment (several hundred servers, thousands of requests per minute), and the F5 LB is used to balance the load and to handle taking a node out of service for maintenance when needed. Unlike RR DNS, if a server is slow (for whatever reason: backups, etc.) the F5 notices and adjusts the connection distribution accordingly; RR DNS can't do that. As far as indexes go, the environment had been performing extremely well until recently, when a few hundred thousand more users were added along with significantly higher activity, at which point we began seeing issues behind the load balancers during peak times of day. The LB vendor says the issue is with OpenLDAP, and those settings, conn_max_pending/conn_max_pending_auth, were the only ones that seemed to stick out, though the documentation on them is rather ambiguous.
We've certainly seen F5 load balancers cause problems just like the ones you're seeing when used with LDAP. They just slow things down way too much to be worthwhile.
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration