RE: haproxy.org bug pages broken (missing html headers and footer?)

2023-09-30 Thread Mathias Weiersmüller
Hi Willy, 

> Argh, thanks for notifying us! Haproxy dev5 crashed leaving a huge core
> that filled the FS (I hope it's complete, not checked yet), and the cron
> job that rebuilds the bugs page miserably failed as you can see :-/
> 
> That's now fixed, thank you!
> Willy

the links to the respective bugs seem to be broken too, example:
 
http://www.haproxy.org/bugs/://git.haproxy.org/?p=haproxy-2.6.git;a=commitdiff;h=dfa9730
 
it should be:
https://git.haproxy.org/?p=haproxy-2.6.git;a=commitdiff;h=dfa9730

Best regards

Matti






RE: Do `tune.rcvbuf.server` and `tune.sndbuf.server` (and their `tune.*.client` equivalents) lead to TCP fragmentation?

2018-09-30 Thread Mathias Weiersmüller
However, the bandwidth behaviour is exactly the same:
* no `tune.sndbuf.client`, bandwidth goes up to 11 MB/s for a large download;
* with `tune.sndbuf.client 16384` it goes up to ~110 KB/s;
* with `tune.sndbuf.client 131072` it goes up to ~800 KB/s;
* with `tune.sndbuf.client 262144` it goes up to ~1400 KB/s.
(These are bandwidths obtained after the TCP window has "settled".)

It seems there is a linear correlation between that tune parameter and the
bandwidth.


However, since I get the same behaviour both with and without offloading, I
wonder whether there is some "hidden" consequence of setting this
`tune.sndbuf.client` parameter.

==

Sorry for the extremely brief answer:
- you mentioned you have 160 ms latency.
- tune.sndbuf.client 16384 allows you to have 16384 bytes "on-the-fly", meaning
unacknowledged. 16384 bytes / 0.16 s = roughly 100 KB/s.
- do the math with your value of 131072 and you will get your ~800 KB/s (see the
quick calculation below).
- no hidden voodoo happening here: read about BDP (Bandwidth Delay Product).
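
For illustration, a quick back-of-the-envelope check in the shell (a sketch; the
160 ms RTT and the buffer sizes are the ones from your mails):

# throughput ceiling is roughly: send buffer size / round-trip time
# RTT = 160 ms
for buf in 16384 131072 262144; do
    echo "$buf bytes -> $(( buf * 1000 / 160 / 1024 )) KiB/s"
done

This prints roughly 100, 800 and 1600 KiB/s, in the same ballpark as the 110,
800 and 1400 KB/s you measured.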

Cheers

Matti



RE: Do `tune.rcvbuf.server` and `tune.sndbuf.server` (and their `tune.*.client` equivalents) lead to TCP fragmentation?

2018-09-30 Thread Mathias Weiersmüller
I am pretty sure you have TCP segmentation offload enabled. The TCP/IP stack
therefore sends bigger-than-allowed TCP segments towards the NIC, which in turn
takes care of the proper segmentation.

You want to check the output of "ethtool -k eth0" and the values of:
tcp-segmentation-offload
generic-segmentation-offload
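
For example (assuming the interface is eth0, adjust to yours):

# show the current offload settings
/sbin/ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

# temporarily disable them if you want the capture to show the real on-the-wire segments
sudo /sbin/ethtool -K eth0 tso off gso off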

Cheers

Mathias


-Original Message-
From: Ciprian Dorin Craciun  
Sent: Sunday, 30 September 2018 08:30
To: w...@1wt.eu
Cc: haproxy@formilux.org
Subject: Re: Do `tune.rcvbuf.server` and `tune.sndbuf.server` (and their `tune.*.client` equivalents) lead to TCP fragmentation?

On Sun, Sep 30, 2018 at 9:08 AM Willy Tarreau  wrote:
> > I've played with `tune.rcvbuf.server`, `tune.sndbuf.server`, 
> > `tune.rcvbuf.client`, and `tune.sndbuf.client` and explicitly set 
> > them to various values ranging from 4k to 256k.  Unfortunately in 
> > all cases it seems that this generates too large TCP packets (larger 
> > than the advertised and agreed MSS in both direction), which in turn 
> > leads to TCP fragmentation and reassembly.  (Both client and server 
> > are Linux
> > >4.10.  The protocol used was HTTP 1.1 over TLS 1.2.)
>
> No no no, I'm sorry but this is not possible at all. You will never 
> find a single TCP stack doing this! I'm pretty sure there is an issue 
> somewhere in your capture or analysis.
>
> [...]
>
> However, if the problem you're experiencing is only with the listening 
> side, there's an "mss" parameter that you can set on your "bind" lines 
> to enforce a lower MSS, it may be a workaround in your case. I'm 
> personally using it at home to reduce the latency over ADSL ;-)


I am also extremely skeptical that this is HAProxy's fault; however, the only
change needed to eliminate this issue was commenting out these tune arguments.
I have also explicitly set the `mss` parameter to `1400`.
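
(For reference, a minimal sketch of how the `mss` bind parameter looks in a
configuration; the frontend name, port and backend name below are placeholders,
not taken from this thread:)

frontend fe_example
    # placeholder names and port; the relevant part is "mss 1400"
    bind :443 mss 1400
    default_backend be_example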

The capture was taken directly on the server, on the public interface.

I'll try to make a fresh capture to see if I can replicate this.


> > The resulting bandwidth was around 10 MB.
>
> Please use correct units when reporting issues, in order to reduce the 
> confusion. "10 MB" is not a bandwidth but a size (10 megabytes). Most 
> likely you want to mean 10 megabytes per second (10 MB/s). But maybe 
> you even mean 10 megabits per second (10 Mb/s or 10 Mbps), which 
> equals
> 1.25 MB/s.

:)  Sorry for that.  (That's the outcome of writing emails at 3 AM after 4 hours
of poking into a production system.)  I completely agree with you about the
MB/Mb consistency, and I always hate that some providers still use MB to mean
megabits, like it's 2000.  :)

Yes, I meant 10 megabytes per second.  Sorry again.

Ciprian.



RE: transparent mode -> chksum incorrect

2018-03-22 Thread Mathias Weiersmüller
Hi Marius,

your NIC is probably doing the TCP checksum calculation (called "TCP checksum
offloading"). The TCP/IP stack therefore sends all outbound TCP packets with
the same dummy checksum (in your case: 0x2a21) to the NIC driver. This saves
some CPU cycles.

Check your TCP offloading settings using:
/sbin/ethtool -k eth0

Disable TCP Offloading using:
sudo /sbin/ethtool -K eth0 tx off rx off
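
To see just the checksum offload state (again assuming eth0), something like:

/sbin/ethtool -k eth0 | grep checksumming
# typically prints lines such as:
#   tx-checksumming: on
#   rx-checksumming: on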

In other words: you have no problem; it's just tcpdump that thinks there is a
TCP checksum problem. If you want to work around this, use the following
tcpdump option:

-K
--dont-verify-checksums
    Don't attempt to verify IP, TCP, or UDP checksums.  This is useful for
    interfaces that perform some or all of those checksum calculations in
    hardware; otherwise, all outgoing TCP checksums will be flagged as bad.
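
For example, your capture command from below with checksum verification turned off:

tcpdump -K -vv -n -i eth0 host 172.17.232.229 and host 172.17.232.233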

Cheers

Mathias

==

From: matei marius  
Sent: Thursday, 22 March 2018 11:50
To: HAproxy Mailing Lists 
Subject: transparent mode -> chksum incorrect


Hello
I'm trying to configure haproxy in transparent mode using the configuration below:

The backend servers have as default gateway the haproxy IP (172.17.232.232)

frontend fe_frontend_pool_proxy_3128
    timeout client 30m
    mode tcp
    bind 172.17.232.232:3128 transparent
    default_backend bk_pool_proxy_3128

backend bk_pool_proxy_3128
    timeout server 30m
    timeout connect 5s
    mode tcp
    balance leastconn
    default-server inter 5s fall 3 rise 2 on-marked-down shutdown-sessions
    source 0.0.0.0 usesrc clientip
    server sibipd-wcg1 172.17.232.229:3128 check port 3128 inter 3s rise 3 fall 3
    server romapd-wcg2 172.17.32.80:3128 check port 3128 backup inter 3s rise 3 fall 3 weight 10 source 0.0.0.0
    option redispatch

I have these iptables rules on the HAProxy server
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 111
iptables -t mangle -A DIVERT -j ACCEPT
ip rule add fwmark 111 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
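
A quick way to double-check that the mark-based routing is in place (a sketch;
the exact output may differ slightly):

ip rule show              # expect a rule like: from all fwmark 0x6f lookup 100
ip route show table 100   # expect something like: local default dev lo scope host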
    

This setup is working perfectly from any subnet other than 172.17.232.x.
        
When I try to access the service from the same subnet as haproxy, I see packets
with an incorrect checksum.

tcpdump -i eth0 -n host 172.17.232.229 and host 172.17.232.233 -vv
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes


12:37:21.741935 IP (tos 0x0, ttl 64, id 63601, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.232.233.34012 > 172.17.232.229.3128: Flags [S], cksum 0x2a21 (incorrect -> 0xf5a2), seq 111508051, win 29200, options [mss 1460,sackOK,TS val 573276706 ecr 0,nop,wscale 7], length 0
12:37:21.743005 IP (tos 0x0, ttl 64, id 53770, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.232.233.34014 > 172.17.232.229.3128: Flags [S], cksum 0x2a21 (incorrect -> 0xdbe0), seq 1250971688, win 29200, options [mss 1460,sackOK,TS val 573276706 ecr 0,nop,wscale 7], length 0

What am I doing wrong?    
    
Thanks
Marius