Re: could haproxy call redis for a result?
nginx would be more suitable for something like this. It even has a redis plugin: http://wiki.nginx.org/HttpRedis Perhaps you can achieve your functionality with the redis_next_upstream parameter. Sergej

On Tue, May 8, 2012 at 4:39 AM, S Ahmed sahmed1...@gmail.com wrote: I agree it will add overhead for each call. Well, would there be a way for me to somehow tell haproxy from my application to block a particular url, and then send another api call to allow traffic from that url? It would be really cool to have an API where I could do this. I know haproxy has rate limiting as per: http://blog.serverfault.com/2010/08/26/1016491873/ But I'm wondering if one could have more control over it, like say you have multiple haproxy servers and you want to sync them, or simply the application layer needs to decide when to drop a url connection and when to accept.

On Mon, May 7, 2012 at 7:39 PM, Baptiste bed...@gmail.com wrote: On Tue, May 8, 2012 at 12:26 AM, S Ahmed sahmed1...@gmail.com wrote: I'm sure this isn't possible, but it would be cool if it is. My backend services write to redis, and if a client reaches a certain threshold, I want to hard drop all further requests until x minutes have passed. Would it be possible that, for each request, haproxy performs a lookup in redis, and if a 0 is returned, drops the request completely (hard drop), and if it is 1, continues processing?

It would introduce latency in the request processing. Why would you need such a way of serving your requests? By the way, this is not doable with HAProxy. Well, at least, not out of the box :) Depending on your needs, you could hack some dirty scripts which can sync your redis DB with HAProxy server status through the stats socket. cheers
Re: ACLs that depend on cookie values
Hi Malcolm,

On Mon, May 07, 2012 at 06:19:36PM -0700, Malcolm Handley wrote: I'd like to write an ACL that compares the integer value of a cookie with a constant. (My goal is to be able to block percentiles of our users if we have more traffic than we can handle, so I want to block a request if the cookie's value is, say, less than 25.) I understand that I can do something like hdr_sub(cookie) -i regular expression, but that doesn't let me treat the value as an integer and compare it. I also know about hdr_val(header), but that gives me the entire value of the cookie header, not just the value of a particular cookie. Is there any way that I can do this?

In the next snapshot I hope to be able to push today, there is a new cookie pattern fetch method which brings a number of cook_* ACL keywords. It does not have cook_val at the moment, but I can check whether that's hard to add. In the meantime, I think that if you manage to rewrite your cookie header to replace it with a header holding only the value, it might work, though it's dirty and quite tricky. Instead, with a regex you can actually match integer expressions; it's just a bit complicated but doable. For instance, a value below 25 might be defined like this (not tested right now but you get the idea):

COOK=([0-9]|1[0-9]|2[0-4])([^0-9]|$)

I've been doing this for a long time to extract requests by response times in logs, until I got fed up and wrote halog. Willy
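Willy's regex trick might be wired into a frontend roughly like this. Treat it as an untested sketch: the cookie name (bucket) and backend name are assumptions, and the syntax targets the 1.4-era hdr_reg and block keywords:

```haproxy
frontend http-in
    bind *:80
    # Match a "bucket" cookie (name assumed) whose value is 0..24:
    # a single digit, 1 followed by a digit, or 20-24, then a
    # non-digit or end of string so that e.g. 240 does not match
    acl low_bucket hdr_reg(Cookie) bucket=([0-9]|1[0-9]|2[0-4])([^0-9]|$)
    # Hard-drop that percentile when overloaded
    block if low_bucket
    default_backend servers
```

The trailing ([^0-9]|$) group is what prevents longer numbers such as 250 from matching the 2[0-4] alternative.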
Re: could haproxy call redis for a result?
On Tue, May 8, 2012 at 4:39 AM, S Ahmed sahmed1...@gmail.com wrote: I agree it will add overhead for each call. Well, would there be a way for me to somehow tell haproxy from my application to block a particular url, and then send another api call to allow traffic from that url?

This is different. Soon, you'll be able to store a URL in a stick-table, so you'll be able to update a gpc counter by setting up a particular header on the server side which tells HAProxy to block this request. For the cancellation of this blocking system, you could request the URL with a particular header to unblock it. It might be doable with an HAProxy nightly snapshot, but you should definitely wait for Willy to provide 1.5-dev9, which allows strings in stick-tables.
Re: HAProxy Hardware LB
Hi, I'd agree with that choice. They don't look very pretty, but we have found them very reliable, especially with Intel SSDs. We have a good 500+ Loadbalancer.org customers on that platform: http://uk.loadbalancer.org/r16.php

On 8 May 2012 09:21, Timh Bergström timh.bergst...@quickvz.com wrote: Hi, I would highly recommend Supermicro's Atom boxes; they do have Intel chips (dual-gig) on-board in their mini-19 servers (if you find the right one). You can use an SSD drive and you're down to very few moving parts. Link: http://www.supermicro.com/products/nfo/atom.cfm Good luck! Timh Bergström www.quickvz.com

On Wed, May 2, 2012 at 1:07 PM, Sebastian Fohler i...@far-galaxy.de wrote: Hi, I'm trying to build a small-size loadbalancing machine that fits into a small 19" rackmountable case. Are there any experiences with some specific hardware, for example ATOM boards or something similar? Can someone recommend anything special? Best regards Sebastian

-- Regards, Malcolm Turnbull. Loadbalancer.org Ltd. Phone: +44 (0)870 443 8779 http://www.loadbalancer.org/
Re: Missing log entries
Thanks, that seems to have helped.

On 2 May 2012 23:06, Baptiste bed...@gmail.com wrote: Hi, You should enable the http-server-close option in both frontend and backend, or in the defaults section. Otherwise, only the first request is logged (tunnel mode). cheers

On Wed, May 2, 2012 at 12:53 PM, Peter Gillard-Moss pgill...@thoughtworks.com wrote: Hello, I am observing some strange behaviour with haproxy and logging on Ubuntu Oneiric. haproxy is set up to log to /dev/log and logs successfully appear in /var/log/syslog (via rsyslog). Well, some of them do. Some just don't. If I look on the servers we are proxying/load balancing, I can see requests in their logs but they aren't in the haproxy output in /var/log/syslog. I've also noticed that if I do a wget then the entries appear; however, from a browser they don't appear. I've also noticed that the entries in haproxy aren't always in the server logs and the entries in the server logs often aren't in haproxy. Any help is much appreciated. We are using HA-Proxy version 1.4.15 2011/04/08. This is our configuration:

global
    daemon
    maxconn 256
    log /dev/log local0

defaults
    mode http
    timeout connect 5000ms
    timeout client 5ms
    timeout server 5ms
    option httplog

frontend http-in
    bind *:80
    default_backend servers
    log global

backend servers
    server one one:8080
    server two two:8080

Thanks Peter

-- Peter Gillard-Moss Developer | ThoughtWorks Studios | Technical Solutions http://www.thoughtworks-studios.com
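For reference, Baptiste's suggestion applied to the defaults section might look like the sketch below. The timeout values are assumptions: the 5ms client/server timeouts in the posted config look suspiciously low and were probably intended as seconds, so 50s is used here:

```haproxy
defaults
    mode http
    option httplog
    # Close the server-side connection after each response so that
    # every request gets its own log line (instead of tunnel mode,
    # where only the first request of a connection is logged)
    option http-server-close
    timeout connect 5000ms
    timeout client  50000ms   # assumption: 5ms was likely meant as 50s
    timeout server  50000ms
```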
Re: could haproxy call redis for a result?
Ok, that sounds awesome. How will that work, though? I.e. from, say, java, how will I do that? From what you're saying, it sounds like I will just have to modify the response and add a particular header. And on the flip side, if I want to unblock, I'll make an http request with something in the header that will unblock it? When do you think this will go live?

On Tue, May 8, 2012 at 4:26 AM, Baptiste bed...@gmail.com wrote: [...]
Re: could haproxy call redis for a result?
On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote: Ok, that sounds awesome. How will that work, though? I.e. from, say, java, how will I do that? From what you're saying, it sounds like I will just have to modify the response and add a particular header. And on the flip side, if I want to unblock, I'll make an http request with something in the header that will unblock it?

That's it. You'll have to track these headers with ACLs in HAProxy and update the stick table accordingly. Then, based on the value set up in the stick table, HAProxy can decide whether to allow or reject the request.

When do you think this will go live?

In another mail, Willy said he will release 1.5-dev9 today. So I guess it won't be too long now. Worst case would be later in the week or next week. cheers
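To make the mechanism concrete, here is a purely illustrative sketch of what such a configuration might look like once string stick-tables can track URLs. Every directive combination here is an assumption rather than tested 1.5-dev9 syntax (Willy's announcement notes that making track-sc1 follow arbitrary data was in fact left for later):

```haproxy
backend servers
    # One entry per URL, with a general-purpose counter used as a block flag
    stick-table type string len 100 size 200k expire 30m store gpc0
    # Hypothetical: track each request's URL in the table
    tcp-request content track-sc1 url
    # Reject requests whose URL entry has its gpc0 flag raised
    acl url_blocked sc1_get_gpc0 gt 0
    http-request deny if url_blocked
    server app1 10.0.0.1:8080
```

The application side would then raise or clear gpc0 for a given URL, either through agreed request/response headers matched by further ACLs, or through the stats socket.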
Low performance when using mode http for Exchange-Outlook-Anywhere-RPC
Hello List, I placed haproxy in front of our exchange cluster for OutlookAnywhere clients (that's just RPC over HTTP, port 443). SSL is terminated by pound, which forwards traffic on loopback to haproxy. Everything works, but it's awfully slow when I use mode http. Requests look like this:

RPC_IN_DATA /rpc/rpcproxy.dll?[...] HTTP/1.1
HTTP/1.1 200 Success..Content-Type:application/rpc..Content-Length:1073741824
RPC_OUT_DATA /rpc/rpcproxy.dll?[..] HTTP/1.1
HTTP/1.1 200 Success..Content-Type:application/rpc..Content-Length:1073741824

(this is the nature of Microsoft RPC, I've been told; it uses two channels to make it duplex) and connections are held open in both cases (mode tcp and mode http) due to long configured timeouts (and no option httpclose in http mode). I can't see a big difference in how packets look; there's an awful lot of nearly empty packets with Syn and Push set, but that's the case in both modes. Packets reach 16k (that's the MTU of the loopback device). The only difference you can see in the Outlook Connection Info window is the response time: with mode tcp it's around 16-200ms, while in http mode it's above 800ms. Any hint? Or is mode http of no use because I'll be unable to inject stuff into the session cookie at all? Thx in advance Beni.
Re: Low performance when using mode http for Exchange-Outlook-Anywhere-RPC
Hello Benedikt,

On Tue, May 08, 2012 at 05:33:46PM +0200, Benedikt Fraunhofer wrote: [...]

For such border-line uses, you need to enable option http-no-delay. By default, haproxy tries to merge as many TCP segments as possible. But in your case, the application is abusing the HTTP protocol by expecting that incomplete data will be immediately delivered to the other end (which is wrong, but it's not the first time Microsoft does crappy things with HTTP, see NTLM). Please note that such protocols will generally not work across caches or anti-virus proxies. With option http-no-delay, haproxy refrains from merging consecutive segments and forwards data as fast as it enters.
This obviously leads to higher CPU and network usage due to the increase of small packets, but at least it will work as expected. Regards, Willy
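The resulting change is a single configuration line; the backend name, timeout, and server address below are illustrative only:

```haproxy
backend exchange-rpc
    mode http
    # Forward data as soon as it arrives instead of coalescing TCP
    # segments; needed for RPC-over-HTTP style half-duplex channels
    option http-no-delay
    timeout server 1h          # RPC_IN/RPC_OUT channels stay open a long time
    server exch1 10.0.0.10:80
```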
Re: Low performance when using mode http for Exchange-Outlook-Anywhere-RPC
Hello Willy,

2012/5/8 Willy Tarreau w...@1wt.eu: For such border-line uses, you need to enable option http-no-delay.

Great! That did it.

By default, haproxy tries to merge as many TCP segments as possible. But in your case, the application is abusing the HTTP protocol by expecting that...

Does haproxy even discard the PUSH flag on tcp packets? Or is Microsoft simply not sending it?

...wrong, but it's not the first time Microsoft does crappy things with HTTP, see NTLM.

HTTP is such a versatile protocol, and, as already being sung by some, some of them want to be abused :)

Please note that such protocols will generally not work across caches or anti-virus proxies.

Well, in this case, all proxies on the client side will only see https traffic; they should not be able to inspect that.

With option http-no-delay, haproxy refrains from merging consecutive segments and forwards data as fast as it enters. This obviously leads to higher CPU and network usage due to the increase of small packets, but at least it will work as expected.

I'm following the mailing list and saw that you did something different for web-sockets? [...]because haproxy switches to tunnel mode when it sees the WS handshake and it keeps the connection open for as long as there is traffic.[...] Or is tunnel mode something different, keeping the inner workings of assembling and merging packets in http mode?

I dunno if that's important, but maybe one should do that for Content-Type:application/rpc too. Anyhow, it's easy to throw in the option, and I'm more than happy that I can stay with my setup and have client stickiness for dryout purposes. And congrats to your new president :)

Thx again and again Beni.
Re: SPDY support?
Why never? F5 just announced support for it: http://www.slideshare.net/f5dotcom/f5-ado-slide-share I appreciate it is not a standard... yet... but never is such a strong word and seems shortsighted. Is there something I am missing, why would you say never?

On Wed, May 2, 2012 at 6:25 PM, Baptiste bed...@gmail.com wrote: Hi, As far as I know, never. :) On the other hand, HTTP 2.0 may be integrated as soon as it is proposed by the IETF. cheers

On Tue, May 1, 2012 at 4:05 AM, Joe Stein joe.st...@medialets.com wrote: Hi, I was wondering if/when SPDY support might be added to HAPROXY? Thanks! /* Joe Stein, 973-944-0094 http://www.medialets.com Twitter: @allthingshadoop */

-- /* Joe Stein, 973-944-0094 http://www.medialets.com Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop */
Re: Low performance when using mode http for Exchange-Outlook-Anywhere-RPC
On Tue, May 08, 2012 at 06:14:15PM +0200, Benedikt Fraunhofer wrote: For such border-line uses, you need to enable option http-no-delay. Great! That did it.

Fine, thanks for your feedback.

Does haproxy even discard the PUSH flag on tcp packets? Or is Microsoft simply not sending it?

It cannot know whether there was a PUSH, since only the kernel knows it. Haproxy does not receive packets but data. However, the http-no-delay option ensures that a PUSH is emitted on output segments so that none of them waits anywhere.

...wrong, but it's not the first time Microsoft does crappy things with HTTP, see NTLM. HTTP is such a versatile protocol, and, as already being sung by some, some of them want to be abused :)

:-) Anyway, not respecting standards is what causes trouble in real life.

Please note that such protocols will generally not work across caches or anti-virus proxies. Well, in this case, all proxies on the client side will only see https traffic; they should not be able to inspect that.

Some of them will still be able to do it; many proxies inspect HTTPS right now by spoofing certificates. And on the server side, you cannot easily offload SSL to a proxy or install a load balancer (you needed one with a specific option to keep the latency low).

I'm following the mailing list and saw that you did something different for web-sockets? [...]because haproxy switches to tunnel mode when it sees the WS handshake and it keeps the connection open for as long as there is traffic.[...]

Exactly.

Or is tunnel mode something different, keeping the inner workings of assembling and merging packets in http mode?

Haproxy switches to tunnel mode when it sees a CONNECT request succeed or a 101 response to an Upgrade request. When in tunnel mode, it basically falls back to TCP mode and forwards data as fast as possible between both sides without inspecting anything.
I dunno if that's important, but maybe one should do that for Content-Type:application/rpc too. Anyhow, it's easy to throw in the option, and I'm more than happy that I can stay with my setup and have client stickiness for dryout purposes.

No, I really don't want to change the TCP behaviour based on a content-type. It's not normal at all; the lower layers have no reason to adapt to contents. It's somewhat a violation of the layering model that we must not perform at all. Regards, Willy
Re: SPDY support?
Never, unless SPDY becomes the new standard for HTTP/2.0, validated by the IETF. To be honest, I talk from time to time with Willy about the SPDY protocol, and he does not want to implement a protocol which is not a standard within HAProxy. He prefers waiting for the standardized HTTP/2.0, also because some stuff in SPDY is not...

F5 is not the only one: boostedge from Activenetworks, nginx, apache (through a module), and others have implemented or are implementing SPDY. But Willy is the best person to answer you; I hope he'll answer you soon :) Note that I'm on your side: I'd be keen to have SPDY implemented in HAProxy. Unfortunately, it's a long-time job and HAProxy is missing some major features before implementing SPDY (well, that's my point of view). Cheers

On Tue, May 8, 2012 at 6:20 PM, Joe Stein joe.st...@medialets.com wrote: why never? F5 just announced support for it http://www.slideshare.net/f5dotcom/f5-ado-slide-share I appreciate it is not a standard... yet... but never is such a strong word and seems shortsighted. Is there something I am missing, why would you say never?

On Wed, May 2, 2012 at 6:25 PM, Baptiste bed...@gmail.com wrote: Hi, As far as I know, never. :) On the other hand, HTTP 2.0 may be integrated as soon as it is proposed by the IETF. cheers

On Tue, May 1, 2012 at 4:05 AM, Joe Stein joe.st...@medialets.com wrote: Hi, I was wondering if/when SPDY support might be added to HAPROXY? Thanks! /* Joe Stein, 973-944-0094 http://www.medialets.com Twitter: @allthingshadoop */

-- /* Joe Stein, 973-944-0094 http://www.medialets.com Twitter: @allthingshadoop */
Re: SPDY support?
Hi,

On Tue, May 08, 2012 at 06:57:04PM +0200, Baptiste wrote: [...]

The point is that SPDY is nice and brings a big performance boost, but at the expense of a much more complex infrastructure and more fragile handling of DoS attacks. It's around 100 times easier to DoS a SPDY server than an HTTP server, because you can force the server to parse and process large requests with very few bytes, due to the header compression. The header compression also means that double buffering becomes mandatory, which comes with a cost for intermediaries. At the moment, SPDY ensures that HTTP/1.1 can be optimized as much as possible, but there are inherent issues in HTTP/1.1 that have to be addressed in HTTP/2.0 (CRLF, long header names, folding, etc.). That's why, with the guys from Squid, Varnish and Wingate, we presented a concurrent proposal to the IETF one month ago: http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00 Right now there are 4 drafts for HTTP/2.0: SPDY, ours (which is really just a small draft and which we still need to work on), the MS guys', and hopefully Waka if Roy Fielding finds time to write it and publish it.
All of these drafts use very different concepts, and with a component such as haproxy, it can take between 3 and 6 months of work before such support is implemented, and maybe more for the most complex ones. For this reason, I don't want to implement something which is going to change soon. It's very likely that most of SPDY will be adopted in HTTP/2, but it's better to work on HTTP/2 once it takes shape than to work on SPDY right now and throw everything away just after it's finished. Hoping this clarifies the situation, Willy
Re: SPDY support?
very much so, thanks Willy

On Tue, May 8, 2012 at 2:01 PM, Willy Tarreau w...@1wt.eu wrote: [...]

-- /* Joe Stein, 973-944-0094 http://www.medialets.com Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop */
Re: TCP reverse proxy
You're right, but this works only with a single protocol managed by haproxy, doesn't it? My idea was to have an ACL for each of these standard protocols in order to have a specific backend. Regards, Emmanuel

*Adopt the eco-attitude.* Print this email only if really necessary.

2012/5/7 Willy Tarreau w...@1wt.eu: Hi Emmanuel, On Fri, Apr 20, 2012 at 09:02:07AM +0200, Emmanuel Bézagu wrote: As haproxy already accepts to reverse proxy ssl and ssh, would it be possible to support protocols such as OpenVPN, tinc or XMPP?

Haproxy will work with any TCP-based protocol which does not report addresses or ports inside the payload. For instance, it works well with SSH, SMTP, LDAP, RDP, PeSIT, SSL, etc., but not with FTP, most RPC, etc. In general, any protocol which can easily be translated will work. I think this is the case for all those above, but you might prefer testing to be sure. Regards, Willy
Re: TCP reverse proxy
On 8 May 2012 20:24, Emmanuel Bézagu emmanuel.bez...@gmail.com wrote: you're right, but this works only with a single protocol managed by haproxy, doesn't it? My idea was to have an ACL for each of these standard protocols in order to have a specific backend.

1) That's why there are different ports for different protocols; just put haproxy on each protocol's native port;
2) MY EYES THEY BURN! Seriously, Comic Sans when posting to mailing lists? Didn't your mother teach you /any/ manners? ;-)

J

2012/5/7 Willy Tarreau w...@1wt.eu: [...]

-- Jonathan Matthews Oxford, London, UK http://www.jpluscplusm.com/contact.html
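Jonathan's first point, sketched as a config: one tcp-mode proxy per protocol, each bound to that protocol's registered port. The server addresses are made up for illustration:

```haproxy
listen openvpn
    mode tcp
    bind :1194
    server vpn1 192.168.0.11:1194

listen xmpp
    mode tcp
    bind :5222
    server jabber1 192.168.0.12:5222

listen ssh
    mode tcp
    bind :2222
    server shell1 192.168.0.13:22
```

Since each frontend carries exactly one protocol, no content-based ACL is needed to tell them apart.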
Re: SPDY support?
On May 8, 2012, at 2:01 PM, Willy Tarreau wrote: That's why, with the guys from Squid, Varnish and Wingate, we presented a concurrent proposal to the IETF one month ago: http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00

I hope that HTTP 2.0 requires encryption/compression for all traffic. Also, I would hope that geographic/distributed load balancing is better addressed in the protocol. That is, any request could get forwarded to another IP immediately (along with any session data needed by the new server), with a short response back to the client (if the new server accepts the request) containing a unique request ID and the IP for the client to connect to for the response. The client would, on seeing this redirect response, connect to that IP with the request ID to get the response. Subsequent requests from the client would be made to the new IP for the given host, and could be changed again. I'm thinking this could make geographic load balancing easy without using DNS to make the geo decisions based only on source IP. And this might really help with DDoS attack mitigation, in that a server/haproxy could easily transfer authenticated users (e.g., users logged in to the site) to separate networks (that only accept authenticated requests) while severely limiting the connection rate to the domain's DNS IP. Kevin
Re: could haproxy call redis for a result?
Great. So any idea how many urls one can store in these stick tables before it becomes a problem? Would 250K be something of a concern?

On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote: [...]
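As a rough sizing sketch (not an authoritative answer): a string stick-table sized for 250K entries might be declared as below. With keys capped at 100 bytes plus per-entry bookkeeping overhead, the table should stay on the order of a few tens of megabytes, which is usually not a concern; the len, expire and backend name are assumptions:

```haproxy
backend servers
    # 250k URL entries, keys truncated to 100 bytes, entries
    # flushed after 1h of inactivity
    stick-table type string len 100 size 250k expire 1h store gpc0
```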
[ANNOUNCE] haproxy 1.5-dev9
Hi all,

I found some time to work on haproxy in the last weeks and to perform a number of fundamental changes that have been needed for a long time.

First, while working on SSL and compression at Exceliance, we found that the way the internal buffers and the HTTP message interact is really annoying. It comes from a long leftover of the migration which happened in 1.3, but it now had to come to an end. Some buffer manipulation functions have to deal with pointers that are copied into other places, and because of this, some operations such as a simple realign are not possible. So I've changed the way it works. Now a buffer has a base (or origin) pointer; everything below it is from the past and is leaving the buffer, and everything above it is new and waiting to be forwarded. And HTTP messages don't hold an absolute pointer anymore, just offsets relative to the base pointer. The change was complex, but the code is much more manageable and offers much more flexibility now.

Some of these changes conflicted with the ACL and pattern frameworks, so it was the right moment to merge them together. We now have a single sample fetch function for each type of data we want to extract, and both ACLs and patterns rely on this. The first user-visible benefit is that ACLs can now match cookies, URL parameters and arbitrary payloads. In practice, the current code is almost ready to enable session tracking on any input criteria. I thought I could make the track-sc1 and track-sc2 actions track headers, but some more changes were needed that were out of the scope of all this work, so I left them for later.

Since some ACL and pattern fetch methods supported an argument, a new argument management framework was implemented, making it very easy to declare a variable number of typed arguments for new keywords. Thanks to this extension, I could bring new optional arguments to the hdr() and cook() fetch methods to specify an occurrence number.
This allows stick-tables to extract an IP address from a precise occurrence of the X-Forwarded-For header for instance, and to write ACLs which match such headers against networks found in files. Another point which had to be addressed was to automatically type the samples. Since the pattern framework supported automatic type casts, it was easy to complete this. Thanks to these types, we now support IPv6 ACLs, and the src and dst ACLs/patterns are IPv4 or IPv6 depending on the data found. This is important because it means that it is now possible to mix v4 and v6 addresses in ACL patterns. As a side effect, the src6 and dst6 pattern fetches have been removed because they were redundant with src and dst. All these extensions required some improved parsing and error reporting. Thus I have implemented a simple and convenient error reporting framework based on a new memprintf() function which acts on a single pointer that is automatically reallocated and freed. A large number of config parsing options (specifically the ACL ones) which used to report "error at line X" are now able to say something like "occurrence -20 too negative at argument 2 of hdr_ip(), must be >= -10". I wish I'd done this earlier, it's so simple; it took far less time to implement than the time it took to design without it in the past! Along with these things, the long-awaited use-server directive was introduced. It works as an exception to load balancing and persistence. It is convenient to avoid creating many backends when you want to select a server for a specific purpose (eg: monitoring). The log framework has also learned to create, emit and log a unique request ID. Using the same syntax as log-format, it is possible to build a string which is supposed to uniquely identify a request in a given environment. This string is logged and emitted in headers so that everyone along the chain can log the same information, making it much easier to correlate events across large infrastructures. 
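To illustrate the new occurrence argument and the unique-ID features, here is a small sketch (the file path, table sizes and header name are placeholders; the format string is the sample shown in the configuration manual):

```haproxy
frontend fe_web
    bind :80
    # match the last X-Forwarded-For occurrence (the one added by the
    # closest proxy) against a file of trusted networks
    acl from_internal hdr_ip(X-Forwarded-For,-1) -f /etc/haproxy/internal.lst
    # build, log and forward a unique ID for every request
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
    default_backend bk_app

backend bk_app
    # stick on the address found in the last X-Forwarded-For occurrence
    stick-table type ip size 200k expire 30m
    stick on hdr_ip(X-Forwarded-For,-1)
    server app1 10.0.0.10:8080
```

A negative occurrence number counts from the end of the header list, so -1 designates the value appended by the most recently traversed proxy, which is usually the only one you can trust.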
The error capture system was lacking a number of important pieces of information. I discovered this while trying to track a bug I have on my server, which causes invalid contents to sometimes be emitted and blocked by haproxy, which logs them. Unfortunately, the level of information made these traces unusable. Now there are additional details such as the client's source port, all known internal flags, the position in the stream and the length of the last chunk. This will probably help when I get the error again. Another point: I found an uninitialized entry in a structure which made me waste 2 hours, because on one machine the first malloc() returned a zeroed area while on another one it was not the case. So I have added a command line option to enable memory poisoning. It immediately gave me another occurrence which I fixed :-) However I think the code is safe now. A number of other minor issues were fixed:
- balance source did not properly hash IPv6 addresses (Alex Markham)
- logformat could
Re: SPDY support?
On Tue, May 08, 2012 at 03:58:21PM -0400, KT Walrus wrote: On May 8, 2012, at 2:01 PM, Willy Tarreau wrote: That's why, with the guys from Squid, Varnish and Wingate, we presented a competing proposal to the IETF one month ago: http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00 I hope that HTTP 2.0 requires encryption/compression for all traffic. This has been discussed at great length on the http-bis WG, and I must say I'm one of those against making this mandatory, as it significantly increases costs for every hop without providing *any* benefit: - compressing videos and images is useless, which is why nobody enables TLS compression on HTTP; - encrypting everything will make it necessary to decrypt at many hops and will make it totally usual for many users to see broken sites and crypto errors everywhere, meaning that they won't care anymore. How many times did you have to click on "I accept the risks" when your browser told you a site's certificate was wrong? And as Poul-Henning Kamp of Varnish said, compressing or encrypting pink bits brings no benefit at all. Let's ensure that the new protocol makes it much easier and safer to stack components on top of each other, and to enable safer crypto (which means one which can be deciphered by proxies as an opt-in, if you don't want your children to visit porn sites at school). Anyway, this discussion is for the http-bis WG, not haproxy's ML. Also, I would hope that geographic/distributed load balancing is better addressed in the protocol. That is, any request can get forwarded to another IP immediately (along with any session data needed by the new server), with a short response back to the client (if the new server accepts the request) containing a unique request ID and the IP for the client to connect to for the response. The client would, when seeing this redirect response, connect to that IP with the request ID to get the response. 
Subsequent requests from the client should be made to the new IP for the given host and could be changed again. There is no way this could take off. Current plans for HTTP/2.0 are to reduce the number of RTTs as much as possible, and adding a preliminary request means one more RTT. You have to think about smartphones right now: they will dominate the web in a few years and they have the worst imaginable connectivity. Regards, Willy
Re: TCP reverse proxy
On Tue, May 08, 2012 at 08:35:26PM +0100, Jonathan Matthews wrote: On 8 May 2012 20:24, Emmanuel Bézagu emmanuel.bez...@gmail.com wrote: you're right but this works only with a single protocol managed by haproxy, doesn't it? My idea was to have an ACL for each of these standard protocols in order to have a specific backend. 1) That's why there are different ports for different protocols; just put haproxy on each protocol's native port. I think I understand what Emmanuel is trying to do: use a single incoming port for multiple protocols when it's not easy/possible to open more. Sometimes you really need this on home networks. But even in professional networks you might need to check that the incoming traffic is what you expect it to be. Emmanuel, with 1.5-dev9 that I just released a few minutes ago, you can have your ACLs match arbitrary payload contents. However, this means that your protocols need to talk first (eg: not like SSH/SMTP/FTP, where the server speaks first) and that you know what to check for at precise locations. 2) MY EYES THEY BURN! Seriously, Comic Sans when posting to mailing lists? Didn't your mother teach you /any/ manners? ;-) Jonathan, are you reading a mailing list in HTML? Seriously? Didn't your mother tell you that reading mails in HTML format is the best way to catch malware and to contribute to botnets, especially when these are public lists? Shame on you both then! :-) Willy
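A minimal sketch of the single-port idea Willy mentions, using content inspection (names, addresses and ports below are made up; only client-talks-first protocols can be recognized this way):

```haproxy
frontend fe_mux
    bind :8443
    mode tcp
    # wait a few seconds for the client's first bytes
    tcp-request inspect-delay 5s
    # an SSL/TLS client starts by sending a ClientHello
    acl is_ssl req_ssl_ver gt 0
    tcp-request content accept if is_ssl
    use_backend bk_ssl if is_ssl
    # other client-first protocols would need payload-based ACLs here;
    # SSH, SMTP or FTP cannot be detected since the server talks first
    default_backend bk_default

backend bk_ssl
    mode tcp
    server ssl1 192.168.0.10:443

backend bk_default
    mode tcp
    server plain1 192.168.0.11:8080
```

Note that traffic which matches no ACL only reaches the default backend once the inspect-delay expires, so the delay is a trade-off between detection reliability and added latency for unrecognized clients.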
Re: [ANNOUNCE] haproxy 1.5-dev9
I thought I could make the track-sc1 and track-sc2 actions track headers, but some more changes were needed that were out of the scope of all these changes, so I left them for later. That is really sad :) Hopefully you'll be able to add string tracking to track-sc[12] soon, because we'll be able to do great things :) cheers
Re: could haproxy call redis for a result?
Hi, Willy has just released 1.5-dev9, but unfortunately the track functions can't yet track strings (and so URLs). I'll let you know once a nightly snapshot can do it, and then we can work on a proof-of-concept configuration. Concerning 250K URLs, storing them should not be an issue at all. Maybe looking up one URL could have a performance impact; we'll see. cheers On Tue, May 8, 2012 at 10:00 PM, S Ahmed sahmed1...@gmail.com wrote: Great. So any ideas how many urls one can store in these sticky tables before it becomes a problem? Would 250K be something of a concern? On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote: On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote: Ok that sounds awesome, how will that work though? i.e. from say java, how will I do that? From what you're saying it sounds like I will just have to modify the response and add a particular header. And on the flip side, if I want to unblock I'll make an HTTP request with something in the header that will unblock it? That's it. You'll have to track these headers with ACLs in HAProxy and update the stick table accordingly. Then, based on the value set in the stick table, HAProxy can decide whether it will allow or reject the request. When do you think this will go live? In another mail, Willy said he will release 1.5-dev9 today. So I guess it won't be too long now. Worst case would be later in the week or next week. cheers
Re: [ANNOUNCE] haproxy 1.5-dev9
On Tue, May 08, 2012 at 11:38:53PM +0200, Baptiste wrote: I thought I could make the track-sc1 and track-sc2 actions track headers but some more changes were needed that were out of the scope of all these changes, so I left them for later. That is really sad :) No it's not sad, because the code is really taking shape and such features become easier to add day after day. Hopefully you'll be able to add string tracking to track-sc[12] soon, cause we'll be able to do great things :) I know, but as you're well aware, the most important thing for me is to ensure that we can work concurrently on this code. So I sometimes prefer to delay minor features to focus on architectural changes which allow multiple persons to develop in parallel. This is the most important, as I'm still too much of a bottleneck. Willy
Re: [ANNOUNCE] haproxy 1.5-dev9
I know, but as you're well aware, the most important thing for me is to ensure that we can work concurrently on this code. So I sometimes prefer to delay minor features to focus on architectural changes which allow multiple persons to develop in parallel. This is the most important, as I'm still too much of a bottleneck. Willy I know, but I was expecting that we could play with strings in stick tables with this release, so I'm just a bit disappointed :) Well, I'll wait a bit longer for it. cheers
Re: [ANNOUNCE] haproxy 1.5-dev9
Hi, On 08-05-2012 22:33, Willy Tarreau wrote: Hi all, I found some time to work on haproxy last weeks and to perform a number of fundamental changes that have been needed for a long time. [snipp] Some of these changes conflicted with the ACL and pattern frameworks, so it was the right moment to merge them together. We now have a single sample fetch function for each type of data we want to extract, and both ACLs and patterns rely on this. The first user-visible benefit from this is that ACLs can now match cookies, URL parameters and arbitrary payloads. In practice, the current code is almost ready to enable session tracking on any input criteria. I thought I could make the track-sc1 and track-sc2 actions track headers but some more changes were needed that were out of the scope of all these changes, so I left them for later. Since some ACLs and pattern fetch methods supported an argument, a new argument management framework was implemented, making it very easy to declare variable number of typed arguments for new keywords. Thanks to this extension, I could bring new optional arguments to hdr() and cook() fetch methods to specify an occurrence number. This allows stick-tables to extract an IP address from a precise occurrence of the X-Forwarded-For header for instance, and to write ACLs which match such headers against networks found in files. [snipp] After all these changes, is it still necessary to have the appsession directive in haproxy? Could it not be removed, to avoid confusion and future questions about whether appsession or *cook* should be used? Cheers Aleks
Re: could haproxy call redis for a result?
Yes, it is the lookup that I am worried about. On Tue, May 8, 2012 at 5:46 PM, Baptiste bed...@gmail.com wrote: Hi, Willy has just released 1.5-dev9, but unfortunately the track functions can't yet track strings (and so URLs). I'll let you know once a nightly snapshot can do it, and then we can work on a proof-of-concept configuration. Concerning 250K URLs, storing them should not be an issue at all. Maybe looking up one URL could have a performance impact; we'll see. cheers On Tue, May 8, 2012 at 10:00 PM, S Ahmed sahmed1...@gmail.com wrote: Great. So any ideas how many urls one can store in these sticky tables before it becomes a problem? Would 250K be something of a concern? On Tue, May 8, 2012 at 11:26 AM, Baptiste bed...@gmail.com wrote: On Tue, May 8, 2012 at 3:25 PM, S Ahmed sahmed1...@gmail.com wrote: Ok that sounds awesome, how will that work though? i.e. from say java, how will I do that? From what you're saying it sounds like I will just have to modify the response and add a particular header. And on the flip side, if I want to unblock I'll make an HTTP request with something in the header that will unblock it? That's it. You'll have to track these headers with ACLs in HAProxy and update the stick table accordingly. Then, based on the value set in the stick table, HAProxy can decide whether it will allow or reject the request. When do you think this will go live? In another mail, Willy said he will release 1.5-dev9 today. So I guess it won't be too long now. Worst case would be later in the week or next week. cheers
Re: [ANNOUNCE] haproxy 1.5-dev9
Hi, Yes, appsession has been obsoleted by the cookie and set-cookie stick-table pattern extraction (in HAProxy 1.5-dev7 as far as I remember). As an example:

    stick-table type string len 32 size 10K
    stick store-response set-cookie(PHPSESSID)
    stick on cookie(PHPSESSID)

Or, better, if your session id is also presented on the query string under the key session_id, then this would do the persistence as well:

    stick on url_param(session_id)

You can use a peers section to share the content of the table between two LBs and to recover your table after a reload of haproxy. regards
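The peers setup mentioned above might look like the following sketch (peer names, addresses and the port are placeholders; each haproxy instance must be started with -L set to its own peer name):

```haproxy
peers session_peers
    peer lb1 192.168.0.1:1024
    peer lb2 192.168.0.2:1024

backend bk_app
    # replicate the session table to the other load balancer
    stick-table type string len 32 size 10k peers session_peers
    stick store-response set-cookie(PHPSESSID)
    stick on cookie(PHPSESSID)
    server app1 10.0.0.10:8080
```

On reload, the new process learns the table contents back from its peers (including the old local process), which is what makes the table survive a restart.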