Re: HAProxy - Load Balancing + GeoIP

2014-06-29 Thread Łukasz Jagiełło
Hi,

Have you maybe thought about a CDN? You can always pass the traffic
through without caching and load balance it the way you want, pointing at
your "main HAProxy". Check https://www.fastly.com/.
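If you'd rather keep it inside HAProxy itself, here is a rough sketch of
routing by client region on the "main HAProxy" (the network list files and
backend names are only placeholders, and traffic still takes the detour
through the main node first):

#v+
frontend main
    bind :80
    # classify clients by source address against per-region
    # CIDR lists kept in plain files (placeholder paths)
    acl from_asia src -f /etc/haproxy/asia-nets.lst
    acl from_us   src -f /etc/haproxy/us-nets.lst
    use_backend asia_pop if from_asia
    use_backend us_pop   if from_us
    default_backend eu_pop
#v-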


On Sun, Jun 29, 2014 at 7:46 AM, Marius Jankunas  wrote:

> Hello,
>
> First of all, congratulations on the HAProxy 1.5.0 release; glad you
> finally finished it. :)
>
>
> If you have some free time, maybe you could advise me or give some hints
> that could help me?
> I'm interested in HAProxy and would like to know whether it is possible to
> load balance to the servers nearest to the clients. And if so, could this
> reduce latency and improve e.g. website loading speed? I tried to draw a
> diagram (see attachment) which shows how I would like to do the load
> balancing.
>
> About the diagram:
>
> For example, there are 6 users: 2 from Asia, 2 from Europe, and 2 from the
> United States.
> All 6 users connect to the main HAProxy server first, which is located in
> the EU.
>
> For Asian users, the ping to the main HAProxy server is ~175 ms.
> For European users, the ping to the main HAProxy server is ~22 ms.
> For US users, the ping to the main HAProxy server is ~76 ms.
>
> Asian users have a ping of ~15 ms to HAProxy (Asia).
> European users have a ping of ~17 ms to HAProxy (EU).
> US users have a ping of ~12 ms to HAProxy (US).
>
> All 3 HAProxy servers, (Asia), (EU) and (US), have a ping of +/-35 ms to
> the Application Server.
>
> I don't know, but I have a feeling this would only add extra latency for
> users. If so, how can we make users connect directly to the (Asia), (EU) or
> (US) HAProxy server based on their geolocation? Thank you for any reply.
>
>
> Marius,
>
> 
>



-- 
Łukasz Jagiełło
lukaszjagielloorg


sFlow patch for upstream

2014-01-16 Thread Łukasz Jagiełło
Hi,

Based on the original sFlow patch:
https://github.com/sflow/haproxy/commit/b0058a4b344bb61d05182db291f76453eaaea301

I've made a patch of that patch :) so it applies to the upstream HAProxy
version:
https://github.com/ljagiello/haproxy/commit/1a93d1ff693388dd96a0428fc28e43dae8ee4267

In case anyone else is using it.

Cheers
-- 
Łukasz Jagiełło
lukaszjagielloorg


Re: Copying a Header before Modifying it

2012-03-29 Thread Łukasz Jagiełło
On 29 March 2012 at 14:48, Łukasz Jagiełło wrote:
> From what I noticed, as long as keep-alive is working, only requests go
> to the backend from the first request. After keep-alive ends, the backend
> is again chosen by the ACL.

It should be:

"From what I noticed, as long as keep-alive is working, all requests go
to the backend from the first request. After keep-alive ends, the backend
is again chosen by the ACL."

-- 
Łukasz Jagiełło
lukaszjagielloorg



Re: Copying a Header before Modifying it

2012-03-29 Thread Łukasz Jagiełło
2012/3/29 William Lewis :
> Hi,
>
> So I use HAProxy to rewrite some URL requests in front of my java
> webservers, but I also want my java webservers to be able to issue
> redirects relative to the URL that hit HAProxy.
>
> Specifically, I want the developers who have access to the application
> platform but not to HAProxy to be able to enforce that a resource is only
> accessible over https, without me having to write a rule in the HAProxy
> config. In this case they just need to be able to get the original request
> and send back a 403 redirect with https:// on the front; of course they
> don't see the original URL, so this is a problem.
>
> I tried solving it with this rule
>
> reqirep ^((HEAD|GET|POST|PUT|DELETE|TRACE|OPTIONS|CONNECT|PATCH)\ ([^\ ]*)\
> HTTP/1.[01]) \1\nX-Original-Request:\ \3
>
> run before any of the rewrite rules
>
> e.g.
> reqrep ^([^\ \t]*[\ \t])(.*) \1/tomcatcontext\2
>
> This results in a request to the webserver which looks like
>
> GET /tomcatcontext/ HTTP/1.1
> X-Original-Request: /
> Host: example.com
> Connection: keep-alive
> ...
>
> This all works great until you then try and do some acl matching in the
> haproxy, because an acl like
>
> acl example-com hdr_end(host) -i example.com
>
> will no longer match.
>
> Looks like a bug to me but I'd be interested in hearing any other ways of
> getting the original request through to the backend or otherwise allowing
> the backend to signal the haproxy that request needs to be redirected onto
> https.

I noticed possibly the same problem with a config like this:
#v+
   acl ssl url_reg \/static\/.*
   acl static  hdr(host) -i s.example.com
   reqirep ^Host:\ example.com   Host:\ s.example.com if ssl
   reqrep  ^([^\ ]*)\ /static/(.*) \1\ /\2 if ssl

   use_backend cache if static
   default_backend default
#v-

The problem happens with "timeout http-keep-alive 1s" and without "option
httpclose". When I turn on "option httpclose" the problem is gone and
everything starts working normally.

From what I noticed, as long as keep-alive is working, only requests go
to the backend from the first request. After keep-alive ends, the backend
is again chosen by the ACL.
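
For completeness, a sketch of the variant that works for me, i.e. the same
block with the option added (exact section placement assumed):
#v+
   option  httpclose
   acl ssl url_reg \/static\/.*
   acl static  hdr(host) -i s.example.com
   reqirep ^Host:\ example.com   Host:\ s.example.com if ssl
   reqrep  ^([^\ ]*)\ /static/(.*) \1\ /\2 if ssl

   use_backend cache if static
   default_backend default
#v-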

-- 
Łukasz Jagiełło
lukaszjagielloorg



Re: Performance problems

2012-02-12 Thread Łukasz Jagiełło
2012/2/12 Sebastian Fohler :
> I've checked the values Willy posted on the haproxy page. All my hardware
> configurations should meet HAProxy's needs. Still, I have major
> performance problems. How do I best find out why? The logs tell me hardly
> anything I need to know to fix these problems. Since I use VMs to try
> HAProxy, I'm able to change some specifics in case I need to.
> The hardware assigned to the VMs is:
>
> Two cores: Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
> 512 MB Ram

Did you try increasing the memory? 512 MB for a system, even a virtual one,
isn't much nowadays.

What kind of traffic are we talking about?

-- 
Łukasz Jagiełło
lukaszjagielloorg



non-ascii characters in urls

2010-08-23 Thread Łukasz Jagiełło
Hi,

I'm wondering whether there is any solution for regex matching of
percent-encoded characters in URLs. For example, I want to block a URL like
this:

http://some.domain.com/server-info

I've got this ACL:

acl status  url_reg \/server-(status|info)(.*)?

but if someone writes the URL like this:

http://some.domain.com/%73%65%72%76%65%72%2D%69%6E%66%6F

the ACL won't catch it. I could change the ACL like this:

acl status  url_reg
\/(server|\%73\%65\%72\%76\%65\%72)(-|\%2D)(status|info|\%69\%6E\%66\%6F|\%73\%74\%61\%74\%75\%73)(.*)?

But someone can still write:

http://some.domain.com/s%65%72%76%65%72%2D%69%6E%66%6F

and will still get the server status. Is it possible to transform the URL
to plain ASCII before matching?
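
(A sketch for newer HAProxy versions: assuming a 1.6+ build, the url_dec
converter decodes the percent-encoding before the regex runs:)

#v+
    # decode %XX sequences first, then match the decoded path
    acl status path,url_dec -m reg -i ^/server-(status|info)
    http-request deny if status
#v-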

Regards
-- 
Łukasz Jagiełło
lukaszjagielloorg



Re: Some Questions

2010-07-13 Thread Łukasz Jagiełło
2010/7/13 eni-urgence :
> Hello everybody.
>
>   1) I want to use the errorfile directive in the configuration in order
> to display a custom html page (stored on the proxy's disk). Is it possible
> to include an image file in those pages? And if not, if I use an html page
> stored on a webserver, can I include images and CSS?

For images/CSS and so on, use errorloc and run a small webserver like
nginx. It works really well in my case.
I've got a configuration like this:

backend error
    option  httpchk HEAD /check.txt HTTP/1.0
    server  error   127.0.0.1:8000 check inter 3000 fall 2 rise 2

frontend some_domain.pl x.y.z.t:80
    errorloc    400 http://error.some_domain.pl/400.html
    errorloc    403 http://error.some_domain.pl/403.html
    errorloc    408 http://error.some_domain.pl/408.html
    errorloc    500 http://error.some_domain.pl/500.html
    errorloc    502 http://error.some_domain.pl/502.html
    errorloc    503 http://error.some_domain.pl/503.html
    errorloc    504 http://error.some_domain.pl/504.html
    [...]

Nginx listens for any domain and displays the correct error page; you can
also define a different error page for each domain.

-- 
Łukasz Jagiełło
lukaszjagielloorg



High number of tcp closed and timewait connections

2010-07-13 Thread Łukasz Jagiełło
Hi,

For some time now I have been seeing a high number of TCP connections in
the closed and timewait states.

#v+
lb-01 ~ # free
             total       used       free     shared    buffers     cached
Mem:       4046308    1348228    2698080          0     363184     571832
-/+ buffers/cache:     413212    3633096
Swap:      2096440          0    2096440
#v-
#v+
lb-01 ~ # ss -s
Total: 1732 (kernel 1834)
TCP:   94808 (estab 574, closed 93166, orphaned 171, synrecv 0,
timewait 93163/0), ports 5048

Transport Total     IP        IPv6
*         1834      -         -
RAW       0         0         0
UDP       13        13        0
TCP       1642      1642      0
INET      1655      1655      0
FRAG      0         0         0
#v-

My sysctl looks like this:

http://pastebin.com/DPvDv4xu

Haproxy settings:

#v+
defaults
log global
mode    http
option  httplog
option  dontlognull
option  redispatch
option  httpclose
option  srvtcpka
option  clitcpka
option  forwardfor
balance roundrobin
retries 3
maxconn 8192
timeout client  30s
timeout connect 30s
timeout server  30s
#v-

The rest of the config is just frontends and backends with ACLs, but no
other options.

I wonder whether it's possible to lower that number of closed/timewait
connections, or is this normal?

-- 
Łukasz Jagiełło
lukaszjagielloorg



Problem with HTTP error 408

2010-06-06 Thread Łukasz Jagiełło
Hi,

I wonder what can cause HTTP error 408. At the moment around 4-5k requests
end with a 408 each day. The requests are really random, and hit random
backends as well. I already tried changing the timeouts, but that doesn't
change the number at all.

These are my connections at 60-70% of max bandwidth:
#v+
~ # netstat -ntu | wc -l
67121
#v-

The most important things from haproxy.conf:
#v+
global
#   log 127.0.0.1   local0
log 127.0.0.1   local1 notice
user nobody
group nobody
daemon
#debug
#quiet
nbproc 4
pidfile /var/run/haproxy.pid
maxconn 32000

defaults
log global
mode    http
option  httplog
option  dontlognull
option  redispatch
option  httpclose
option  srvtcpka
option  forwardfor
balance roundrobin
retries 3
maxconn 8192
timeout client  30s
timeout connect 30s
timeout server  30s

then only backends and frontends sections (around 40 all)
#v-

My '/etc/sysctl.conf':
#v+
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_tw_recycle = 1
net.core.wmem_max = 33554432
net.core.netdev_max_backlog = 2000
kernel.panic = 1
net.ipv4.tcp_rmem = 16184 174760 33554432
kernel.panic_on_oops = 1
net.ipv4.conf.all.arp_ignore = 1
net.core.rmem_default = 215040
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 8192
net.core.wmem_default = 215040
net.ipv4.tcp_dsack = 0
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_wmem = 16184 174760 33554432
net.ipv4.tcp_timestamps = 0
net.core.rmem_max = 33554432
net.ipv4.tcp_sack = 0
#v-
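
(A hedged note for later readers: stray 408s like these often come from
browsers opening speculative keep-alive connections and never sending a
request. A sketch of the usual mitigation, assuming HAProxy 1.4+ for
"timeout http-request"; "option http-ignore-probes" only exists in newer
releases:)

#v+
defaults
    # bound the time a client may take to send complete request
    # headers, instead of holding the slot for the full client timeout
    timeout http-request 10s
    # newer versions can silently drop connections that never send
    # any data (browser pre-connect probes), removing spurious 408s
    option  http-ignore-probes
#v-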

Regards,
-- 
Łukasz Jagiełło



Re: acls and httpclose

2010-04-21 Thread Łukasz Jagiełło
2010/4/21 Angelo Höngens :
> Hey, I read somewhere on the list that when you use keepalives, only the
> first request in the connection is matched to an acl, and then the other
> requests in the connection are not evaluated.
>
> I noticed this behavior as well. As an experiment I set up a large
> config, where I select one out of 325 backends, based on one out of 8000
> host headers. I noticed that only the first request in a connection is
> matched to a backend, and the rest follows to the same backend, even
> though the host header is different. With the httpclose option,
> everything works as it should.
>
> My question is: is this behavior by design, or is this a work-in-progress?

From: http://haproxy.1wt.eu/download/1.4/doc/configuration.txt

"As stated in section 1, HAProxy does not yes support the HTTP keep-alive
  mode. So by default, if a client communicates with a server in this mode, it
  will only analyze, log, and process the first request of each connection. To
  workaround this limitation, it is possible to specify "option httpclose". It
  will check if a "Connection: close" header is already set in each direction,
  and will add one if missing. Each end should react to this by actively
  closing the TCP connection after each transfer, thus resulting in a switch to
  the HTTP close mode. Any "Connection" header different from "close" will also
  be removed."

So it looks like everything works as it should.

> I want to use haproxy for content switching on a large scale (lots of
> acls, lots of backends), but with httpclose haproxy uses 25% cpu, and
> without httpclose haproxy uses 5% cpu. So I'd rather not use httpclose
> if I don't have to...

That also looks OK: with httpclose HAProxy has more work to do, so the CPU
does too.
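
If the goal is per-request ACL evaluation without paying for a full close
on both sides, a sketch assuming HAProxy 1.4+, where "option
http-server-close" keeps client-side keep-alive but still analyzes every
request:

#v+
defaults
    mode    http
    # each request is processed and matched against the ACLs
    # individually; only the server-facing connection is closed
    # after each response, the client keeps its keep-alive
    option  http-server-close
#v-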


-- 
Łukasz Jagiełło
System Administrator
G-Forces Web Management Polska sp. z o.o. (www.gforces.pl)

Ul. Kruczkowskiego 12, 80-288 Gdańsk
Company registered in the KRS under no. 246596 by decision of the District Court Gdańsk-Północ



Re: Preventing bots from starving other users?

2009-11-15 Thread Łukasz Jagiełło
2009/11/15 Wout Mertens :
> I was wondering if HAProxy helps in the following situation:
>
> - We have a wiki site which is quite slow
> - Regular users don't have many problems
> - We also get crawled by a search bot, which creates many concurrent 
> connections, more than the hardware can handle
> - Therefore, service is degraded and users usually have their browsers time 
> out on them
>
> Given that we can't make the wiki faster, I was thinking that we could solve 
> this by having a per-source-IP queue, which made sure that a given source IP 
> cannot have more than e.g. 3 requests active at the same time. Requests 
> beyond that would get queued.
>
> Is this possible?

I guess so. I move traffic from crawlers to a special web backend, because
they mostly harvest during my backup window and slow everything down even
more. Adding a request limit should also be easy; just check the
documentation.
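
A hedged sketch of both ideas (names and thresholds are made up, and the
per-IP cap needs a HAProxy version with stick tables, i.e. 1.5+; note it
rejects over-limit connections rather than queuing them):

#v+
frontend web
    bind :80
    # park well-known crawlers on their own backend
    acl crawler hdr_sub(User-Agent) -i googlebot bingbot slurp
    use_backend crawlers if crawler

    # cap concurrent connections per source IP at 3
    stick-table type ip size 100k expire 60s store conn_cur
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_cur gt 3 }

    default_backend wiki

backend crawlers
    # a small maxconn queues crawler requests instead of
    # letting them starve the wiki backend
    server slow1 10.0.0.10:80 maxconn 2

backend wiki
    server web1 10.0.0.1:80
#v-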

-- 
Łukasz Jagiełło
System Administrator
G-Forces Web Management Polska sp. z o.o. (www.gforces.pl)

Ul. Kruczkowskiego 12, 80-288 Gdańsk
Company registered in the KRS under no. 246596 by decision of the District Court Gdańsk-Północ



httpclose and reconnect

2009-02-09 Thread Łukasz Jagiełło
Hi,

I use haproxy-1.3.15.7 and yesterday I noticed the following problem:

I've got 2 backends:
- apache
- squid

The apache backend has the apache servers, and the squid backend has the
squid servers. Depending on ACLs (mostly based on file type), a request goes
to apache or to squid. Normally a page request looks like this: the first
request, "GET /", goes to apache, and the following requests (gif/jpg/...
files) go to squid. Everything works fine on the first connection after
haproxy starts. When I do a hard refresh, "GET /" goes to squid, and haproxy
in debug mode doesn't even show that request.

Adding "option  httpclose" in defaults fix that problem. But should
haproxy every time check ACL and decide where put request ?

Regards
-- 
Łukasz Jagiełło
G-Forces Web Management Polska

T: +44 (0) 845 055 9040
F: +44 (0) 845 055 9038
E: lukasz.jagie...@gforces.pl
W: www.gforces.co.uk
