Re: SSL Performance increase?

2015-02-06 Thread Klavs Klavsen

Shawn Heisey wrote on 02/06/2015 07:38 AM:
[CUT]

The current haproxy version implements almost every
performance-enhancing method mentioned in that video, as long as your
openssl is new enough.



It's on CentOS 6.. is that new enough?

We'll start testing different scenarios and see if we can find the reason 
for the problem.


Thank you for the video.

--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer




TCP Fast Open towards backend servers

2015-02-06 Thread Pavlos Parissis
Hi,

I see the tfo setting for the bind directive, but it isn't clear to me whether
HAProxy will use TCP Fast Open towards the backend servers.
Should I assume that if the client uses TCP Fast Open, HAProxy will do the
same on the server side?
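For reference, the bind-side keyword in question looks like this (a minimal sketch; the address, certificate path and backend name are hypothetical, and TFO also needs kernel support, e.g. the net.ipv4.tcp_fastopen sysctl on Linux):

```haproxy
frontend fe_https
    # "tfo" enables TCP Fast Open on the accepting side only;
    # it says nothing about the connections haproxy opens to the servers,
    # which is exactly the open question here
    bind :443 ssl crt /etc/haproxy/site.pem tfo
    mode http
    default_backend bk_app
```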

Cheers,
Pavlos





Re: nbproc > 1 and stats in ADMIN mode?

2015-02-06 Thread Klavs Klavsen

Tait Clarridge wrote on 02/05/2015 09:55 PM:
[CUT]

Just to check.. if I set nbproc to e.g. 4 - then I understand I need to
define 4x stats.. and when I visit the web interface.. I'll actually only
get stats from one of the 4 processes..

But we have ADMIN enabled for stats - so we can disable backend servers
etc.. will we have to do that on each of the 4 stats instances
before it's actually active, or is that state shared among them all?



Yes, you have to do all your admin operations on each web interface.
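A sketch of what the per-process stats setup looks like (ports and names hypothetical): each stats instance is pinned to one process with bind-process, which is why you end up with one web interface per process:

```haproxy
global
    nbproc 4

# one stats listener per process; each only sees its own process's state
listen stats_p1
    bind :8101
    bind-process 1
    mode http
    stats enable
    stats uri /
    stats admin if TRUE

listen stats_p2
    bind :8102
    bind-process 2
    mode http
    stats enable
    stats uri /
    stats admin if TRUE

# ...and likewise stats_p3/stats_p4 on :8103/:8104
```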

Cheers,
Pavlos




You can also write a small application to take the admin requests and
send them to each haproxy process' web interface, or you can set the
backend server health check to something you can dynamically change to
start failing and bring the server down gracefully that way.
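The small application Tait mentions can start out as a shell loop over the per-process admin sockets (a sketch; the socket paths and the bk_app/srv1 names are hypothetical, and the sockets must be declared with level admin for "disable server" to be accepted):

```shell
#!/bin/sh
# Fan one admin command out to every per-process haproxy admin socket.
# Each process keeps its own state, so each one must be told separately.
CMD="disable server bk_app/srv1"    # hypothetical backend/server names
SENT=0
for SOCK in /var/run/haproxy-1.sock /var/run/haproxy-2.sock \
            /var/run/haproxy-3.sock /var/run/haproxy-4.sock; do
    if [ -S "$SOCK" ]; then
        echo "$CMD" | socat stdio "$SOCK"
        SENT=$((SENT + 1))
    else
        echo "no admin socket at $SOCK, skipping"
    fi
done
echo "command sent to $SENT process(es)"
```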



So instead of actually using the nice ADMIN mode that someone authored.. 
I have to write my own, if I want to scale haproxy?


:(

I believe varnish solves this scaling problem by using threads instead - 
and hence varnishadm/stats works no matter how many threads one has.


Using a process/pre-forker model could work just as well, if one 
used shared memory or some other form of IPC..


I realize it's obviously not an itch anyone is scratching currently.. 
but is there already a feature request somewhere on this, or 
can I file one? :)


--
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

Those who do not understand Unix are condemned to reinvent it, poorly.
  --Henry Spencer




Re: SSL Performance increase?

2015-02-06 Thread Dennis Jacobfeuerborn
On 06.02.2015 18:50, Dennis Jacobfeuerborn wrote:
 On 06.02.2015 14:13, Lukas Tribus wrote:
 I tried to implement these recommendations but didn't seem to get
 results I was expecting. How exactly does one reliably test that the
 1-RTT handshake is actually working?

 Enable TFO and announce http/1.1 via NPN and ALPN, that should
 do it.

 But your client will have to support all those features as-well (for
 example TFO can't possibly work in Windows).

 You will have to capture the TLS handshake in wireshark to see
 how fast it was (in terms of time and RTT's).
 
 This is really what I'm trying to get at. What is a specific way to test
 this? Which clients do support a 1-RTT handshake and what would a
 Wireshark session look like where the 1-RTT handshake succeeds compared
 to one which doesn't.
 
 There is a lot of information about this on the internet.
 All of it extremely vague. Surely there must be a way to come up with a
 test scenario that can verify such a setup more deterministically?

Case in point: in the attached capture it looks like TLS false start is
working, yet I actually haven't activated NPN/ALPN on the haproxy side,
which means TLS false start should fail, no?

Regards,
  Dennis
No.  Time         Source        Destination   Protocol  Length  Info
  4  0.000219000  10.99.0.1     10.99.0.202   TLSv1.2   583     Client Hello
  5  0.000491     10.99.0.202   10.99.0.1     TLSv1.2   227     Server Hello, Change Cipher Spec, Encrypted Handshake Message
  7  0.001503000  10.99.0.1     10.99.0.202   TLSv1.2   141     Change Cipher Spec, Encrypted Handshake Message
  8  0.001594000  10.99.0.1     10.99.0.202   TLSv1.2   727     Application Data
 10  0.002317000  10.99.0.202   10.99.0.1     TLSv1.2   231     Application Data



Re: haproxy and multiple ports

2015-02-06 Thread PiBa-NL

Nick Couchman schreef op 6-2-2015 om 23:52:

It's hard to figure out exactly how to phrase what I'm trying to do, but I essentially 
need a configuration for HAProxy where I can pin the load-balancing of one 
front-end port to another one, so that both go to the same back-end port.  Here's what 
I'm trying to do...I'm using HAProxy to load-balance RDP connections.  I also have a 
piece of software that goes between the RDP client and the RDP server that provides USB 
pass-through.  So, the initial connection happens on port 3389, but then I need the 
client to also open a connection on another port - let's say 4000 - to the exact same 
back-end host.  Is this possible in HAProxy?

Thanks!
-Nick


This looks like a similar problem - perhaps it will work for you too?:
http://blog.haproxy.com/2011/07/14/send-users-to-the-same-server-for-imap-and-smtp/



Re: haproxy and multiple ports

2015-02-06 Thread Nick Couchman
- Original Message -
 From: PiBa-NL piba.nl@gmail.com
 To: Nick E Couchman nick.couch...@seakr.com, haproxy@formilux.org
 Sent: Friday, February 6, 2015 4:06:18 PM
 Subject: Re: haproxy and multiple ports

 Nick Couchman schreef op 6-2-2015 om 23:52:
 It's hard to figure out exactly how to phrase what I'm trying to do, but I
 essentially need a configuration for HAProxy where I can pin the
 load-balancing of one front-end port to another one, so that both go to the
 same back-end port.  Here's what I'm trying to do...I'm using HAProxy to
 load-balance RDP connections.  I also have a piece of software that goes
 between the RDP client and the RDP server that provides USB pass-through.  
 So,
 the initial connection happens on port 3389, but then I need the client to 
 also
 open a connection on another port - let's say 4000 - to the exact same 
 back-end
 host.  Is this possible in HAProxy?

 Thanks!
 -Nick

 This looks like a similar problem perhaps it will work for you to?:
 http://blog.haproxy.com/2011/07/14/send-users-to-the-same-server-for-imap-and-smtp/

Cool.  Looks like the stick match line is what I need.  I will try it out.  
Thanks!
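For the archive, the pattern from that article adapted to the RDP case might look like this (addresses hypothetical). Note that both proxies must list the servers in the same order, because the stick table stores a server ID that is looked up from both:

```haproxy
listen rdp
    bind :3389
    mode tcp
    balance leastconn
    # remember which server each client IP landed on
    stick-table type ip size 200k expire 30m
    stick on src
    server srv1 192.0.2.10:3389 check
    server srv2 192.0.2.11:3389 check

listen rdp_usb
    bind :4000
    mode tcp
    # follow the choice made on port 3389 by reusing its table
    stick on src table rdp
    server srv1 192.0.2.10:4000 check
    server srv2 192.0.2.11:4000 check
```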



haproxy and multiple ports

2015-02-06 Thread Nick Couchman
It's hard to figure out exactly how to phrase what I'm trying to do, but I 
essentially need a configuration for HAProxy where I can pin the 
load-balancing of one front-end port to another one, so that both go to the 
same back-end port.  Here's what I'm trying to do...I'm using HAProxy to 
load-balance RDP connections.  I also have a piece of software that goes 
between the RDP client and the RDP server that provides USB pass-through.  So, 
the initial connection happens on port 3389, but then I need the client to also 
open a connection on another port - let's say 4000 - to the exact same back-end 
host.  Is this possible in HAProxy?

Thanks!
-Nick



Re: HAproxy constant memory leak

2015-02-06 Thread Georges-Etienne Legendre
Hi Willy,

Yes, please send me the script.

Thanks!

-- Georges-Etienne

 On 2015-02-06 at 01:55, Willy Tarreau w...@1wt.eu wrote:
 
 Hi Georges-Etienne,
 
 On Thu, Feb 05, 2015 at 09:10:25PM -0500, Georges-Etienne Legendre wrote:
 Hi Willy,
 
 I'm not sure how to document this leak. I don't know exactly how the
 firewall SSL health check is implemented... Would the Wireshark trace be
 enough to report the issue?
 
 Yes I think it will definitely help, given that the exchange is very short,
 it basically is an SSL hello. It is important to report the strace output
 as well and to mention that haproxy is running in a chroot.
 
 If that helps you, I can send you the script I wrote from your capture.
 You could try it to confirm it has the same effect as the firewall's
 checks. Once confirmed, it will be much easier to build a bug report
 given that you'll simply have to attach the script as a reproducer.
 
 Best regards,
 Willy
 



Re: HAproxy constant memory leak

2015-02-06 Thread Pavlos Parissis
On 06/02/2015 11:19 πμ, Georges-Etienne Legendre wrote:
 Hi Willy,
 
 Yes, please send me the script.
 

Willy,
If it isn't against the policies of this ML to send attachments and the
script is only a few kilobytes in size, could you please send it to the list?

Thanks,
Pavlos






Re: tcp-response inspect-delay with WAIT_END

2015-02-06 Thread Chris
 Could you let us know why exactly you need to delay responses???

This is an API.

Unfortunately, the client behavior we are looking to address here cannot be
identified by client IP, ID, or anything else in the request.  In fact, it
cannot be identified until the server has gone through considerable work to
build up the response.

We still want to answer the client, but we know that as soon as we do, the
client will come back with other, similar requests.  We typically cannot
slow subsequent requests by IP, as this would affect large numbers of
clients behind proxy servers.  We have found that the only reasonable way to slow
the client's behavior is to slow the response.  This has proven to be quite
effective in our particular situation.

Currently the server delays the response after it is built, but instead we
would like to be able to add a magic header to the response that directs
HAProxy to delay it.

Is this possible?  Playing in a sandbox environment I have not been able to
get HAProxy to delay a response using tcp-response inspect-delay.  It
either answers immediately or the client times out.


acl trigger_delay res.hdr(response-delay) -m found
tcp-response inspect-delay 100ms
tcp-response content accept unless trigger_delay
tcp-response content accept if WAIT_END





Re: SSL Performance increase?

2015-02-06 Thread Dennis Jacobfeuerborn
On 06.02.2015 07:38, Shawn Heisey wrote:
 On 2/5/2015 5:54 AM, Klavs Klavsen wrote:
 Adding nbproc 4, improved performance of https from 511 req/s to 1296
 req/s..

 not quite linear scaling..

 We tested with 8 cores and got 1328 req/s.. so it seems we're hitting
 something else already after 2.5 cores..

 vmstat 1 - also reveals a lot of cpu-idle time..
 
 For cleartext performance I really don't know for sure what you can do,
 except maybe using bare metal rather than a virtual machine.  Other
 people have been around this community a lot longer than I have and may
 have better ideas.
 
 Getting that cleartext performance up to a reasonable level will be your
 first step.  Once that's done, there are a lot of things that will help
 with performance using SSL.  This is an AWESOME video on that subject:
 
 https://www.youtube.com/watch?v=0EB7zh_7UE4
 
 The current haproxy version implements almost every
 performance-enhancing method mentioned in that video, as long as your
 openssl is new enough.

Has anybody ever posted a working example configuration for haproxy
that applies the improvements mentioned in the video?
I tried to implement these recommendations but didn't seem to get the
results I was expecting. How exactly does one reliably test that the
1-RTT handshake is actually working?

Regards,
  Dennis




urgent ftp problem.

2015-02-06 Thread Lee Musgrave
Hi, just subscribed to the list, not had a confirmation response yet, so i
hope this gets through and i can see responses.

i've got haproxy 1.5 installed and working, and it's working well so far.

i'm setting up a few servers to handle all our ftp, but in the meantime,
i'm using haproxy to proxy ftp from specific ip's to specific servers.


typical ftp listener in my haproxy.cfg:

listen ubuntu-server-12-ftp
bind 213.146.180.246:21
mode tcp
bind-process 1
option tcplog
balance leastconn
server ubuntu-server-12 192.168.51.201:21 send-proxy check

iptables rules on the load balancers:

-A PREROUTING -d 213.146.180.246/32 -p tcp -m tcp --dport 6100:6200 -j DNAT
--to-destination 192.168.51.201
-A POSTROUTING -s 192.168.51.201/32 -j SNAT --to-source 213.146.180.246

the server on 192.168.51.201 has the gateway address 192.168.51.1, which is
the floating ip used between the two haproxy servers.

the haproxy servers' default route for 0.0.0.0/0 is our firewall

this is all working fine for external ftp access. i've configured proftpd
with the required defaultaddress, masqueradeaddress, passive port range etc.

however, i've now found out i'm also required to allow ftp use FROM this
server to external ftp servers.
however, when i try to ftp out, i can connect to a server and log in, but
can't do anything else:

Connected to ftp.*.com.
220 Welcome to the *** FTP service.
Name (ftp..com:root): p***
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
500 Illegal PORT command.
ftp: bind: Address already in use
ftp> dir
500 Illegal PORT command.
ftp> pwd
257 /
ftp>

what do i need to do to get this working?

thanks
lee.


Re: Setting uuid cookies not for sticky sessions

2015-02-06 Thread Baptiste
On Thu, Feb 5, 2015 at 5:24 PM, Alberto alberto-hap4...@ggsys.net wrote:
 I have multiple back ends using different stacks.

 All I need is to ensure that every client gets a unique
 cookie. They don't need to be used for sticky sessions.

 Pretty much all the examples I find are for hard coding,
 prefixing and/or for sticky session purposes.

 Is there a way to get haproxy to just set a simple uuid cookie if
 one isn't there?

 Thanks,

 Alberto



Hi Alberto,

You may be able to do something with the http-response set-header and
the rand fetch.
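A rough sketch of that idea (the "uid" cookie name is hypothetical, and %[rand,hex] yields a random hex string rather than a real RFC 4122 UUID; whether req.cook is still usable at response time can depend on the haproxy version, so treat this as a starting point to test):

```haproxy
frontend fe_web
    bind :80
    mode http
    acl has_uid req.cook(uid) -m found
    # hand a random id to any client that didn't present one
    http-response set-header Set-Cookie "uid=%[rand,hex]; Path=/" if !has_uid
    default_backend bk_app
```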

Baptiste



Re: tcp-response inspect-delay with WAIT_END

2015-02-06 Thread Baptiste
On Thu, Feb 5, 2015 at 10:22 PM, Chris k...@adobe.com wrote:
 Hello,

 We have some complex logic in our application that will at times determine
 that the response to a specific query should be delayed.  Currently this is
 handled in the application with a short (~100ms) sleep.

 We would like to move this delay in response to the load balancer.  I have
 tried to do this by adding a response header as a flag for HAProxy to act on
 and adding configuration like the following to the backend:

   acl trigger_delay res.hdr(response-delay) -m found
   tcp-response inspect-delay 100ms
   tcp-response content accept unless trigger_delay
   tcp-response content accept if WAIT_END

 With the above configuration, the response is delayed until the client times
 out (2 minutes) regardless of how trigger_delay evaluates.

 The following configurations exhibit the same behavior:

   tcp-response inspect-delay 100ms
   tcp-response content accept if WAIT_END

 - or -

   acl trigger_delay res.hdr(response-delay) -m found
   tcp-response inspect-delay 100ms
   tcp-response content accept unless trigger_delay


 It seems that either a header-based ACL or WAIT_END causes any tcp-response
 inspect-delay to time out.  It does not seem to matter whether the header-based
 ACL returns true or false.  Are they not compatible with a response delay?


 Ideally when we encounter the delay flag in the response of the app server,
 we would also add the src to a stick-table for reference in delaying
 subsequent incoming connections from that IP (maybe the next, say 5 minutes
 or so).  Is this possible/reasonable?

 Thank you,

 Chris


Hi Chris,

Could you let us know why exactly you need to delay responses???

Because here you propose a solution (which doesn't work) to a problem
you're facing, without explaining the problem itself.
So it's hard to help.

Baptiste



RE: SSL Performance increase?

2015-02-06 Thread Lukas Tribus
 I tried to implement these recommendations but didn't seem to get
 results I was expecting. How exactly does one reliably test that the
 1-RTT handshake is actually working?

Enable TFO and announce http/1.1 via NPN and ALPN, that should
do it.

But your client will have to support all those features as well (for
example, TFO can't possibly work on Windows).

You will have to capture the TLS handshake in wireshark to see
how fast it was (in terms of time and RTTs).


Lukas 


Re: SSL Performance increase?

2015-02-06 Thread Dennis Jacobfeuerborn
On 06.02.2015 14:13, Lukas Tribus wrote:
 I tried to implement these recommendations but didn't seem to get
 results I was expecting. How exactly does one reliably test that the
 1-RTT handshake is actually working?
 
 Enable TFO and announce http/1.1 via NPN and ALPN, that should
 do it.
 
 But your client will have to support all those features as-well (for
 example TFO can't possibly work in Windows).
 
 You will have to capture the TLS handshake in wireshark to see
 how fast it was (in terms of time and RTT's).

This is really what I'm trying to get at. What is a specific way to test
this? Which clients support a 1-RTT handshake, and what would a
Wireshark session look like where the 1-RTT handshake succeeds, compared
to one where it doesn't?

There is a lot of information about this on the internet.
All of it extremely vague. Surely there must be a way to come up with a
test scenario that can verify such a setup more deterministically?
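One client-side measurement that helps here, assuming a curl build with TLS: curl's -w write-out variables separate the TCP connect from the TLS handshake, so roughly, fewer handshake round trips show up as a smaller gap between the two numbers (the hostname below is a placeholder):

```shell
#!/bin/sh
# curl can split connection setup timings:
#   time_connect    - TCP three-way handshake complete
#   time_appconnect - TLS handshake complete
# The gap between the two is roughly the TLS handshake cost in seconds.
FMT='tcp:%{time_connect} tls:%{time_appconnect}\n'
# example.com is a placeholder; "|| true" keeps the sketch harmless on
# hosts without network access
curl -so /dev/null -w "$FMT" "https://example.com/" || true
```

Comparing this gap before and after a config change (or against a resumed session) is more deterministic than eyeballing a capture, though wireshark is still needed to see *why* a handshake took extra round trips.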

Regards,
   Dennis



Re: Possible bug with del acl

2015-02-06 Thread Thierry FOURNIER
Hello,

Thank you for the report. It is fixed. The patches will be
integrated into the 1.6 and 1.5 branches ASAP.
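For anyone verifying the fix, the runtime view of a file-backed ACL can be inspected over the same admin socket (a sketch; the socket and ACL file paths are taken from the report below and may differ on your system):

```shell
#!/bin/sh
# Ask haproxy for its runtime view of a file-backed ACL; if "del acl"
# really worked, the deleted entry should no longer be listed.
SOCK=/var/run/haproxy.sock
ACL=/etc/haproxy/acl.d/default.hosts.acl.conf
if [ -S "$SOCK" ]; then
    echo "show acl $ACL" | socat "$SOCK" stdio
else
    echo "no haproxy admin socket at $SOCK"
fi
```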

Thierry


On Tue, 3 Feb 2015 15:58:08 -0700
KJ Rasmussen kj_...@hotmail.com wrote:

 Below is my haproxy config file, the contents of
 both ACL files (empty - we are only really using one in my example), as
 well as the steps I am taking to add and remove an ACL via the socket.
 
 It appears that after the removal of the ACL haproxy still behaves as if the 
 ACL is still in place.
 
 - Start of haproxy config -
 #-
 # Global settings
 #-
 global
   # to have these messages end up in /var/log/haproxy.log you will
   # need to:
   #
   # 1) configure syslog to accept network log events.  This is done
   #by adding the '-r' option to the SYSLOGD_OPTIONS in
   #/etc/sysconfig/syslog
   #
   # 2) configure local2 events to go to the /var/log/haproxy.log
   #   file. A line like the following can be added to
   #   /etc/sysconfig/syslog
   #
   #local2.*   /var/log/haproxy.log
   #
   chroot  /var/lib/haproxy-chroot
   pidfile /var/run/haproxy.pid
   maxconn 4000
   user    haproxy
   group   haproxy
   daemon
   tune.ssl.default-dh-param 1024
  
   # turn on stats unix socket
   stats socket    /var/run/haproxy.sock mode 600 level admin
   stats timeout   5s
  
 #-
 # common defaults that all the 'listen' and 'backend' sections will
 # use if not designated in their block
 #-
 defaults
   mode    http
   log global
   option  dontlognull
   option  forwardfor
   option  http-server-close
   retries 3
   timeout queue   1m
   timeout connect 10s
   timeout client  1m
   timeout server  1m
   timeout http-keep-alive 10s
   timeout check   10s
  
 #-
 # stats web
 #-
 listen stats :8082
   mode http
   stats enable
   stats uri /
  
 #-
 # frontends/backends below here
 #-
 frontend main *:8000
  
   # ACLS
   acl bucket-none hdr(host) -i -f /etc/haproxy/acl.d/default.hosts.acl.conf
   acl bucket-main1-a hdr(host) -i -f /etc/haproxy/acl.d/main1-a.hosts.acl.conf
  
   # use backends based off ACL
   use_backend main0-a    if bucket-none
   use_backend main1-a    if bucket-main1-a
  
 backend main0-a
   server server-main0-a 127.0.0.1:9000 check
 backend main1-a
   server server-main1-a 127.0.0.1:9001 check
  
 - End of haproxy config -
  
 - Start of contents of default.hosts.acl.conf -
 - End of contents of default.hosts.acl.conf -
  
 - Start of contents of main1-a.hosts.acl.conf -
 - End of contents of main1-a.hosts.acl.conf -
  
 - start of confirmation that backends work -
 $ curl http://localhost:9000
 <html>0</html>
 $ curl http://localhost:9001
 <html>1</html>
 - end of confirmation that backends work -
  
 - Confirmation that frontend does not work before adding acl via socket 
 -
 $ curl -H 'Host: test.com' http://localhost:8000
 <html><body><h1>503 Service Unavailable</h1>
 No server is available to handle this request.
 </body></html>
 - End confirmation that frontend does not work before adding acl via 
 socket -
  
 - Adding of the test.com acl -
 $ echo "add acl /etc/haproxy/acl.d/default.hosts.acl.conf test.com" | socat 
 /var/run/haproxy.sock stdio
 Done.
 - End adding of the test.com acl -
  
 - Confirmation that the ACL additon worked -
 $ curl -H 'Host: test.com' http://localhost:8000
 <html>0</html>
 - End confirmation that the ACL additon worked -
  
 - Deletion of ACL -
 $ echo "del acl /etc/haproxy/acl.d/default.hosts.acl.conf test.com" | socat 
 /var/run/haproxy.sock stdio
 Done.
 - End deletion of ACL -
  
 - Start of Proof that for some reason haproxy behaves as if the acl still 
 exists -
 $ curl -H 'Host: test.com' http://localhost:8000
 <html>0</html>
 - End of Proof that for some reason haproxy behaves as if the acl still 
 exists -
 
 
 - Kevin