

Re: Help tracking "connection refused" under pressure on v2.9

2024-03-29 Thread Ricardo Nabinger Sanchez
Hi Willy,

On Fri, 29 Mar 2024 07:17:56 +0100
Willy Tarreau  wrote:

> > These "connection refused" errors come from our watchdog, but the effects
> > are just as perceptible from the outside.  When our watchdog hits this situation,
> > it will forcefully restart HAProxy (we have 2 instances) because there
> > will be a considerable service degradation.  If you remember, there's
> > https://github.com/haproxy/haproxy/issues/1895 and we talked briefly
> > about this in person, at HAProxyConf.  
> 
> Thanks for the context. I didn't remember about the issue. I remembered
> we discussed for a while but didn't remember about the issue in question
> obviously, given the number of issues I'm dealing with :-/
> 
> In the issue above I'm seeing an element from Felipe saying that telnet
> to port 80 can take up to 3 seconds to accept. That really makes me
> think about either the SYN queue being full, causing drops and retransmits,
> or a lack of socket memory to accept packets. That one could possibly be
> caused by tcp_mem not being large enough due to some transfers with high
> latency fast clients taking a lot of RAM, but it should not affect the
> local UNIX socket. Also, killing the process means killing all the
> associated connections and will definitely result in freeing a huge
> amount of network buffers, so it could fuel that direction. If you have
> two instances, did you notice if the two start to behave badly at the
> same time ? If that's the case, it would definitely indicate a possible
> resource-based cause like socket memory etc.

Of our 2 HAProxy instances, it is usually one (mostly the frontend one)
that exhibits this behavior.  And as it is imperative that the
corrective action be as swift as possible, all instances are terminated
(which can include older instances, from graceful reloads), and new
instances are started.  Very harsh, but at >50 Gbps, each full second
of downtime adds up considerably to network pressure.

So for context, our least capable machine has 256 GB RAM.  We have not
seen any spikes over the metrics we monitor, and this issue tends to
happen at a very stable steady-state, albeit a loaded one.  While detailed
data for those incidents is currently outside our retention range, we didn't
notice anything unusual, especially regarding memory usage, in the traps we
reported.

But of course, there could be a metric that we're not yet aware of that
correlates.  Any candidates from the dustier, darker corners that you know of?
:-)


> 
> > But this is incredibly elusive to reproduce; it comes and goes.  It
> > might happen every few minutes, or not happen at all for months.  Not
> > tied to a specific setup: different versions, kernels, machines.  In
> > fact, we do not have better ways to detect the situation, at least not
> > as fast, reactive, and resilient.  
> 
> It might be useful to take periodic snapshots of /proc/slabinfo and
> see if something jumps during such incidents (grep for TCP, net, skbuff
> there). I guess you have not noticed any "out of socket memory" nor such
> indications in your kernel logs, of course :-/

We have no indications of memory pressure related to network.  At the
peak, we usually see something like 15~22% overall active memory (memory
fails me, but it might take >70% of active memory for these machines to
actually degrade, maybe more).  As for TCP stuff, around 16~30k active
sockets, plus some 50~100k in timewait, and still not creating any
problems.


> 
> Another one that could make sense to monitor is "PoolFailed" in
> "show info". It should always remain zero.

We collect this (all available actually); I don't remember this one
ever measuring more than zero.  But we'll keep an eye on it.

Incidentally, could this be somewhat unrelated to HAProxy, i.e., maybe the
kernel?

Cheers,

-- 
Ricardo Nabinger Sanchez https://www.taghos.com.br/



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-29 Thread Willy Tarreau
Hi Ricardo,

On Thu, Mar 28, 2024 at 06:21:16PM -0300, Ricardo Nabinger Sanchez wrote:
> Hi Willy,
> 
> On Thu, 28 Mar 2024 04:37:11 +0100
> Willy Tarreau  wrote:
> 
> > Thanks guys! So there seems to be an annoying bug. However I'm not sure
> > how this is related to your "connection refused", except if you try to
> > connect at the moment the process crashes and restarts, of course. I'm
> > seeing that the bug here is stktable_requeue_exp() calling task_queue()
> > with an invalid task expiration. I'm having a look now. I'll respond in
> > the issue with what I can find, thanks for your report.
> 
> These "connection refused" errors come from our watchdog, but the effects
> are just as perceptible from the outside.  When our watchdog hits this situation,
> it will forcefully restart HAProxy (we have 2 instances) because there
> will be a considerable service degradation.  If you remember, there's
> https://github.com/haproxy/haproxy/issues/1895 and we talked briefly
> about this in person, at HAProxyConf.

Thanks for the context. I didn't remember about the issue. I remembered
we discussed for a while but didn't remember about the issue in question
obviously, given the number of issues I'm dealing with :-/

In the issue above I'm seeing an element from Felipe saying that telnet
to port 80 can take up to 3 seconds to accept. That really makes me
think about either the SYN queue being full, causing drops and retransmits,
or a lack of socket memory to accept packets. That one could possibly be
caused by tcp_mem not being large enough due to some transfers with high
latency fast clients taking a lot of RAM, but it should not affect the
local UNIX socket. Also, killing the process means killing all the
associated connections and will definitely result in freeing a huge
amount of network buffers, so it could fuel that direction. If you have
two instances, did you notice if the two start to behave badly at the
same time ? If that's the case, it would definitely indicate a possible
resource-based cause like socket memory etc.

> But this is incredibly elusive to reproduce; it comes and goes.  It
> might happen every few minutes, or not happen at all for months.  Not
> tied to a specific setup: different versions, kernels, machines.  In
> fact, we do not have better ways to detect the situation, at least not
> as fast, reactive, and resilient.

It might be useful to take periodic snapshots of /proc/slabinfo and
see if something jumps during such incidents (grep for TCP, net, skbuff
there). I guess you have not noticed any "out of socket memory" nor such
indications in your kernel logs, of course :-/
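The snapshot idea above can be sketched as a small shell helper (the grep
pattern is the one suggested; the paths in the usage comment are arbitrary
assumptions):

```shell
# snapshot_slabs SRC DEST: keep only the TCP/net/skbuff lines of a
# /proc/slabinfo-style file, so periodic snapshots stay small and diffable.
snapshot_slabs() {
    grep -E 'TCP|net|skbuff' "$1" > "$2"
}

# Periodic usage (assumption: run as root, since /proc/slabinfo is
# root-readable; output directory is arbitrary):
#   while :; do snapshot_slabs /proc/slabinfo /tmp/slab.$(date +%s); sleep 10; done
```

Diffing consecutive snapshots around an incident should make any jumping slab cache stand out.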

Another one that could make sense to monitor is "PoolFailed" in
"show info". It should always remain zero.
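A minimal sketch of such a check (the field name is the one quoted above; the
socket path in the usage comment is an assumption):

```shell
# poolfailed_from_info: read "show info" output on stdin and print the
# value of the PoolFailed counter (0 if the field is absent).
poolfailed_from_info() {
    awk -F': ' '/^PoolFailed:/ {print $2; found=1} END {if (!found) print 0}'
}

# Live usage against the stats socket (socat invocation as used elsewhere in
# this thread; the socket path is an assumption):
#   echo "show info" | socat - unix-connect:/var/run/haproxy.sock | poolfailed_from_info
```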

> > Since you were speaking about FD count and maxconn at 900k, please let
> > me take this opportunity for a few extra sanity checks. By default we
> > assign up to about 50% of the FD to pipes (i.e. up to 25% pipes compared
> > to connections), so if maxconn is 900k you can reach 1800 + 900 = 2700k
> > FD. One thing to keep in mind is that /proc/sys/fs/nr_open sets a
> > per-process hard limit and usually is set to 1M, and that
> > /proc/sys/fs/file-max sets a system-wide limit and depends on the amount
> > of RAM, so both may interact with such a large setting. We could for
> > example imagine that at ~256k connections with as many pipes you're
> > reaching around 1M FDs and that the connection from socat to the CLI
> > socket cannot be accepted and is rejected. Since you recently updated
> > your kernel, it might be worth checking if the default values are still
> > in line with your usage.
> 
> We set our defaults pretty high in anticipation:
> 
>   /proc/sys/fs/file-max = 5M;
>   /proc/sys/fs/nr_open = 3M;

OK!

> Even with our software stack, we do not reach the limits.  A long time
> ago we did hit them (the limits were lower back then) and the effects were
> devastating.

Yes, that's always the problem with certain limits, they hit you at the
worst ever moments, when the most users are counting on you to work fine
and when it's the hardest to spot anomalies.

Willy



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-28 Thread Ricardo Nabinger Sanchez
Hi Willy,

On Thu, 28 Mar 2024 04:37:11 +0100
Willy Tarreau  wrote:

> Thanks guys! So there seems to be an annoying bug. However I'm not sure
> how this is related to your "connection refused", except if you try to
> connect at the moment the process crashes and restarts, of course. I'm
> seeing that the bug here is stktable_requeue_exp() calling task_queue()
> with an invalid task expiration. I'm having a look now. I'll respond in
> the issue with what I can find, thanks for your report.

These "connection refused" errors come from our watchdog, but the effects
are just as perceptible from the outside.  When our watchdog hits this situation,
it will forcefully restart HAProxy (we have 2 instances) because there
will be a considerable service degradation.  If you remember, there's
https://github.com/haproxy/haproxy/issues/1895 and we talked briefly
about this in person, at HAProxyConf.

But this is incredibly elusive to reproduce; it comes and goes.  It
might happen every few minutes, or not happen at all for months.  Not
tied to a specific setup: different versions, kernels, machines.  In
fact, we do not have better ways to detect the situation, at least not
as fast, reactive, and resilient.


> 
> Since you were speaking about FD count and maxconn at 900k, please let
> me take this opportunity for a few extra sanity checks. By default we
> assign up to about 50% of the FD to pipes (i.e. up to 25% pipes compared
> to connections), so if maxconn is 900k you can reach 1800 + 900 = 2700k
> FD. One thing to keep in mind is that /proc/sys/fs/nr_open sets a
> per-process hard limit and usually is set to 1M, and that
> /proc/sys/fs/file-max sets a system-wide limit and depends on the amount
> of RAM, so both may interact with such a large setting. We could for
> example imagine that at ~256k connections with as many pipes you're
> reaching around 1M FDs and that the connection from socat to the CLI
> socket cannot be accepted and is rejected. Since you recently updated
> your kernel, it might be worth checking if the default values are still
> in line with your usage.

We set our defaults pretty high in anticipation:

/proc/sys/fs/file-max = 5M;
/proc/sys/fs/nr_open = 3M;

Even with our software stack, we do not reach the limits.  A long time
ago we did hit them (the limits were lower back then) and the effects were
devastating.

Cheers,

-- 
Ricardo Nabinger Sanchez https://www.taghos.com.br/



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-27 Thread Willy Tarreau
On Wed, Mar 27, 2024 at 02:26:47PM -0300, Ricardo Nabinger Sanchez wrote:
> On Wed, 27 Mar 2024 11:06:39 -0300
> Felipe Wilhelms Damasio  wrote:
> 
> > kernel: traps: haproxy[2057993] trap invalid opcode ip:5b3e26
> > sp:7fd7c002f100 error:0 in haproxy[42c000+1f7000]
> 
> We managed to get a core file, and so created ticket #2508
> (https://github.com/haproxy/haproxy/issues/2508) with more details.

Thanks guys! So there seems to be an annoying bug. However I'm not sure
how this is related to your "connection refused", except if you try to
connect at the moment the process crashes and restarts, of course. I'm
seeing that the bug here is stktable_requeue_exp() calling task_queue()
with an invalid task expiration. I'm having a look now. I'll respond in
the issue with what I can find, thanks for your report.

Since you were speaking about FD count and maxconn at 900k, please let
me take this opportunity for a few extra sanity checks. By default we
assign up to about 50% of the FD to pipes (i.e. up to 25% pipes compared
to connections), so if maxconn is 900k you can reach 1800 + 900 = 2700k
FD. One thing to keep in mind is that /proc/sys/fs/nr_open sets a
per-process hard limit and usually is set to 1M, and that
/proc/sys/fs/file-max sets a system-wide limit and depends on the amount
of RAM, so both may interact with such a large setting. We could for
example imagine that at ~256k connections with as many pipes you're
reaching around 1M FDs and that the connection from socat to the CLI
socket cannot be accepted and is rejected. Since you recently updated
your kernel, it might be worth checking if the default values are still
in line with your usage.
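The arithmetic above (2 FDs per connection plus up to 1 pipe FD per
connection, i.e. 900k maxconn -> 2700k FDs) can be wrapped in a one-liner to
compare against the kernel limits named above; a sketch:

```shell
# fd_budget MAXCONN: worst-case file-descriptor estimate from the rule of
# thumb above: 2 FDs per connection plus up to 1 pipe FD per connection,
# i.e. 3 x maxconn (so 900000 -> 2700000).
fd_budget() {
    echo $(( $1 * 3 ))
}

# Compare against the kernel limits mentioned above:
#   echo "need=$(fd_budget 900000) nr_open=$(cat /proc/sys/fs/nr_open) file-max=$(cat /proc/sys/fs/file-max)"
```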

Cheers,
Willy



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-27 Thread Ricardo Nabinger Sanchez
On Wed, 27 Mar 2024 11:06:39 -0300
Felipe Wilhelms Damasio  wrote:

> kernel: traps: haproxy[2057993] trap invalid opcode ip:5b3e26
> sp:7fd7c002f100 error:0 in haproxy[42c000+1f7000]

We managed to get a core file, and so created ticket #2508
(https://github.com/haproxy/haproxy/issues/2508) with more details.

Cheers,

-- 
Ricardo Nabinger Sanchez https://www.taghos.com.br/



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-27 Thread Ricardo Nabinger Sanchez
On Wed, 27 Mar 2024 11:06:39 -0300
Felipe Wilhelms Damasio  wrote:

> kernel: traps: haproxy[2057993] trap invalid opcode ip:5b3e26 sp:7fd7c002f100 
> error:0 in haproxy[42c000+1f7000]

In our build, this is where the instruction pointer would be:

(gdb) list *0x5b10e6
0x5b10e6 is in __task_queue (src/task.c:285).
280                    (wq != &th_ctx->timers && wq != &tg_ctx->timers));
281     #endif
282             /* if this happens the process is doomed anyway, so better catch it now
283              * so that we have the caller in the stack.
284              */
285             BUG_ON(task->expire == TICK_ETERNITY);
286
287             if (likely(task_in_wq(task)))
288                     __task_unlink_wq(task);
289

However, we can't produce a stack trace from only the instruction
pointer; at least I don't know how (but would love to learn if it is
possible).

We are trying to get a core dump, too.

Cheers,

-- 
Ricardo Nabinger Sanchez https://www.taghos.com.br/



Re: Help tracking "connection refused" under pressure on v2.9

2024-03-27 Thread Felipe Wilhelms Damasio
Hi,

We've confirmed a few findings after deliberately pouring ~75-80 Gbps of
traffic onto a single machine:
- haproxy does indeed crash;
- hence, we have no stats socket to collect a few things.

It seems that under pressure (not sure which conditions yet) the
kernel is killing it. dmesg shows:

kernel: traps: haproxy[2057993] trap invalid opcode ip:5b3e26
sp:7fd7c002f100 error:0 in haproxy[42c000+1f7000]

This is a relatively new kernel:

Linux ndt-spo-12 6.1.60 #1 SMP PREEMPT_DYNAMIC Wed Oct 25 19:17:36 UTC
2023 x86_64 Intel(R) Xeon(R) Gold 6338N CPU @ 2.20GHz GenuineIntel
GNU/Linux

And it seems to happen on different kernels.

Does anyone have any tips on how to proceed to track this down?

Before the crash, "show info" showed only around 27,000 CurConn, so
not a great deal for maxconn 900k.

Thanks!

On Tue, Mar 26, 2024 at 11:33 PM Felipe Wilhelms Damasio
 wrote:
>
> Hi,
>
> Since we don't really know how to track this one, we thought it might
> be better to reach out here to get feedback.
>
> We're using haproxy to deliver streaming files under pressure
> (80-90Gbps per machine). When using h1/http, splice-response is a
> great help to keep load under control. We use branch v2.9 at the
> moment.
>
> However, we've hit a bug with splice-response (GitHub issue created)
> and we had to run our haproxies all day without splicing.
>
> When we reach a certain load, a "connection refused" alarm started
> buzzing like crazy (2-3 times every 30 minutes). This alarm is simply
> a connect to localhost with 500ms timeout:
>
> socat /dev/null  tcp4-connect:127.0.0.1:80,connect-timeout=0.5
>
> The log file indicates the port is virtually closed:
>
> 2024/03/27 01:06:04 socat[984480] E read(6, 0xe98000, 8192): Connection 
> refused
>
> The thing is, the haproxy process is very much alive... so we just
> restart it every time this happens.
>
> What data do you suggest we collect to help track this down? Not sure
> if the stats socket is available, but we can definitely try and get
> some information.
>
> We're not running out of fds, or even connections with/without backlog
> (we have a global maxconn of 900k with ~30,000 streaming sessions
> active and we have tcp_max_syn_backlog set to 262144), we checked. But
> it seems to correlate with heavy traffic.
>
> Thanks!
>
> --
> Felipe Damasio



-- 
Felipe Damasio



Help tracking "connection refused" under pressure on v2.9

2024-03-26 Thread Felipe Wilhelms Damasio
Hi,

Since we don't really know how to track this one, we thought it might
be better to reach out here to get feedback.

We're using haproxy to deliver streaming files under pressure
(80-90Gbps per machine). When using h1/http, splice-response is a
great help to keep load under control. We use branch v2.9 at the
moment.

However, we've hit a bug with splice-response (GitHub issue created)
and we had to run our haproxies all day without splicing.

When we reach a certain load, a "connection refused" alarm started
buzzing like crazy (2-3 times every 30 minutes). This alarm is simply
a connect to localhost with 500ms timeout:

socat /dev/null  tcp4-connect:127.0.0.1:80,connect-timeout=0.5

The log file indicates the port is virtually closed:

2024/03/27 01:06:04 socat[984480] E read(6, 0xe98000, 8192): Connection refused

The thing is, the haproxy process is very much alive... so we just
restart it every time this happens.

What data do you suggest we collect to help track this down? Not sure
if the stats socket is available, but we can definitely try and get
some information.

We're not running out of fds, or even connections with/without backlog
(we have a global maxconn of 900k with ~30,000 streaming sessions
active and we have tcp_max_syn_backlog set to 262144), we checked. But
it seems to correlate with heavy traffic.
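One counter worth watching alongside the backlog setting above is the kernel's
listen-queue overflow count, which climbs exactly when accepts start being
refused or delayed; a sketch of extracting it (the `netstat -s` phrasing is
the usual Linux one but may vary by version; `nstat -az TcpExtListenOverflows`
is an alternative on newer systems):

```shell
# count_listen_overflows: read `netstat -s` text on stdin and print how many
# times the listen queue of a socket overflowed (0 if the line is absent).
count_listen_overflows() {
    awk '/times the listen queue of a socket overflowed/ {print $1; found=1}
         END {if (!found) print 0}'
}

# Live usage (assumption: net-tools installed):
#   netstat -s | count_listen_overflows
```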

Thanks!

-- 
Felipe Damasio



Help wanted: Random delays in https request processing

2023-05-30 Thread Marno Krahmer
Hey,

I noticed that I am experiencing a strange issue with HTTPS requests (both on
HTTP/1.1 and HTTP/2):

It seems like around 1 in 500 to 1 in 1000 requests gets delayed by around 60
to 90 seconds between the client and HAProxy.
All other requests work fine and are blazingly fast.

What the client application logs:
17:59:20.488006 Starting REST request
17:59:40.492774 REST request: PUT 
https://mydomain.com:443/super_fancy_url/_doc/1313409683%3A58505246%3A2023-05-30T15%3A59%3A20Z?refresh=false
 returned 0 and took 1.01ms(name_lookup_time: 0.09ms, connect_time: 0.09ms)

This error happens because the client does not receive an HTTP response within
the configured 20-second timeout.

When looking through the HAProxy logs, I find a log line for this request, but 
for whatever reason, the time logged there does not match the request time:

May 30 18:00:40 haproxy[458192]: 10.152.40.11:42054 [30/May/2023:18:00:40.964] 
HTTP_MYDOMAIN~ HTTP_MYDOMAIN/super-server.mydomain.com 0/0/0/9/9 201 494 - - 
 32852/116/0/0/0 0/0 "PUT 
/super_fancy_url/_doc/1313409683%3A58505246%3A2023-05-30T15%3A59%3A20Z?refresh=false
  HTTP/1.1"

I double-checked the system clocks on the client and the HAProxy node and can
confirm that they are in sync.

As the traffic is SSL-encrypted, I don't think I can do a useful tcpdump here.

Interestingly, the application claims to be able to connect to HAProxy at the
TCP level in less than a millisecond (plausible, as we have sub-millisecond
latency in our network), but it seems to really be the HTTP request that is
delayed. Yet HAProxy reports that the connection to / response from the
backend server took only 9 ms.
To me, that does not explain an exceeded timeout of >20 seconds.

When looking at examples (same source and destination) where the request was
not delayed or timing out, the times logged by HAProxy were correct too.
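One way to narrow down which phase such a stall sits in is curl's per-request
timers; a sketch (the URL is a placeholder; the `-w` variable names are
standard curl write-out variables):

```shell
# time_request URL: print one line with the DNS, TCP-connect, TLS, first-byte
# and total times of a single request, using curl's built-in timers.
time_request() {
    curl -o /dev/null -sS -w \
        'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
        "$1"
}

# Looped against the affected endpoint, the rare 60-90 s outlier should show
# whether it spends its time in connect, TLS, or waiting for the first byte:
#   while :; do time_request https://mydomain.com:443/health; sleep 1; done
```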

The hopefully important bits from my config:

global
  # Send logs to syslog
  log 10.12.244.22:1514 local0 notice
  log 10.12.244.22:1514 local1 info

  maxconn 100000
  ulimit-n 655360
  nbthread 64

  external-check
  spread-checks 10
  insecure-fork-wanted

  master-worker

  # Hard stop old workers after some time to prevent 
https://github.com/haproxy/haproxy/issues/1701
  hard-stop-after 576h # 24 days

  stats socket /var/run/haproxy/admin.sock mode 666 expose-fd listeners level 
admin
  stats socket /var/run/haproxy/haproxy.sock expose-fd listeners mode 666
  stats socket /var/run/haproxy.sock expose-fd listeners mode 666 level admin
  stats timeout 30s
  user haproxy
  group haproxy
  daemon

  # Generated by
  # https://ssl-config.mozilla.org/#server=haproxy&version=1.8.8&config=intermediate
  ssl-default-bind-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

  ssl-default-server-ciphers 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

  tune.ssl.default-dh-param 2048
  tune.bufsize 524288

defaults
  log global
  option  dontlognull
  option  redispatch
  option  log-health-checks
  timeout connect 1s
  timeout client  600s
  timeout server  600s
  timeout client-fin 5s
  timeout server-fin 5s
  timeout check   250
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http
  default-server maxconn 100000
  option allbackups

listen HTTP_MYDOMAIN
  mode http

  bind 10.200.4.198:80
  bind 10.200.4.198:443 ssl crt /etc/ssl/haproxy/
  option httpchk GET /_cluster/health?local=true
  option  httplog
  option forwardfor header X-Real-Ip

  http-request replace-value Upgrade (.*) websocket # 
https://bishopfox.com/blog/h2c-smuggling-request
  http-request del-header X-Forwarded-For
  http-request set-header X-Forwarded-Port 443 if { ssl_fc }
  http-request set-header X-Forwarded-Port 80 if !{ ssl_fc }
  http-request del-header X-Forwarded-Proto
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }


  timeout check 1500ms
  default-server inter 2s

  http-response set-header LB-FQDN "haproxy-1.mydomain.com"
  http-response set-header LB-Backend-Server %s
  # https://serverfault.com/questions/650588/haproxy-timing-connection-diagram
  http-response set-header LB-Times "connect: %Th ms; queue: %Tw ms; 
be-connect: %Tc ms; 



Help with a backend server configuration question

2022-11-16 Thread Yujun Wu
Hello,

Could someone help with a backend server question? Our servers are dual-stack
machines with both IPv4 and IPv6 addresses. For example, one of them has:

IPv4 address 131.225.69.84
IPv6 address 2620:6a:0:4812:f0:0:69:84

If I want the server to accept both IPv4 and IPv6 requests (some of the clients
only have IPv6 addresses), what should I put in the [IPADDRESS]?

--

backend webdav1

balance roundrobin

mode tcp

server server1 [IPADDRESS]:8000 check

--

Or do I need to have two backends, one for IPv4 (webdav1ipv4) and another for
IPv6 (webdav1ipv6)?
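A hedged sketch, under the assumption that the goal is only for *clients* of
both families to reach the service: the family a client uses is decided at the
frontend `bind` line, not in the backend, so a single backend pointing at one
address of the server is enough (the frontend name and ports below are
assumptions):

```
frontend fe_webdav
    mode tcp
    # accept both IPv4 and IPv6 clients on one v6 socket:
    bind :::8000 v4v6
    default_backend webdav1

backend webdav1
    balance roundrobin
    mode tcp
    # one address family is enough here; HAProxy bridges the client's family:
    server server1 131.225.69.84:8000 check
```

Two separate backends per family should not be required for this.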


Thanks in advance for any advice on this.


Regards,
Yujun


Re: Seeking for help about regression test

2022-01-27 Thread Willy Tarreau
Hi,

On Thu, Jan 27, 2022 at 01:08:51PM +0000, 李 景旭 wrote:
> Hi,
> 
> We are a team from China looking to use haproxy on a newly designed
> operating system and check whether haproxy runs well on it, for research
> purposes. From GitHub we can see many regression test examples; they are
> detailed and well tested. However, as we lack sufficient knowledge about
> haproxy, is there any other information related to these regression tests?
> For example, what is their usage, and how might our CI system use these
> tests to run regression tests?

It really depends on what you're trying to do. The tests were made to
cover areas that are well known to be sensitive to changes in the code.
Some tests focus on a particular feature, others on a long chain of
features. In general they're essentially aimed at the development team
so that we can detect early that we broke an assumption somewhere, or that
a code change will not work on a certain platform (missing include or
syscall etc).

Now regarding how you could include that into your CI, I really have no
idea and I doubt anyone will know better than the people who currently
work on your CI. We use them to produce this:
  https://github.com/haproxy/haproxy/actions
or this:
  https://github.com/haproxy/haproxy/runs/4968592983?check_suite_focus=true
for example, based on scripts in the .github directory.

With that in mind it will be up to you to decide if it could make any
sense for you to include this into your CI.
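For reference, a sketch of how a CI typically drives these (the `reg-tests`
make target and `VTEST_PROGRAM` variable are from the haproxy Makefile; vtest
is a separate project that must be built first):

```shell
# list_reg_tests DIR: enumerate the .vtc regression test files a CI could
# schedule individually (layout as in the haproxy source tree's reg-tests/).
list_reg_tests() {
    find "$1" -name '*.vtc' | sort
}

# Typical full run from a source checkout:
#   git clone https://github.com/vtest/VTest && make -C VTest
#   make reg-tests VTEST_PROGRAM=$PWD/VTest/vtest
```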

Regards,
Willy



Seeking for help about regression test

2022-01-27 Thread 李 景旭
Hi,

We are a team from China looking to use haproxy on a newly designed operating
system and check whether haproxy runs well on it, for research purposes. From
GitHub we can see many regression test examples; they are detailed and well
tested. However, as we lack sufficient knowledge about haproxy, is there any
other information related to these regression tests? For example, what is
their usage, and how might our CI system use these tests to run regression
tests?

Any information will be appreciated and useful to us. Thank you in advance.

Jingxu, Li
From Harbin, China





Re: Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-08 Thread Aleksandar Lazic

Hi.

Is there anyone who can help with protecting the backend with backend states?

Regards
Alex

On 05.12.21 11:42, Aleksandar Lazic wrote:


Hi.

I am trying to protect a backend server against overload within a
master/master setup.
The test setup looks like this:

lb1: 8081 \
    -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl, the "tracker/quota1" gpc is increased and the
second request is denied.
The problem is that the peer on lb2 does not get the counter data, so the
backend is not also protected on lb2.

Can anybody please help me fix my mistake and find a proper solution?
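Not having tested this exact setup, one thing stands out in the "show peers"
dumps below: both peers show `(inactive)` and the remote peer shows
`last_status=NAME`, which looks like the handshake's peer-name check failing.
A sketch of the wiring that is usually needed (peer names and ports taken
from the dumps; the `expire` value is an assumption):

```
peers tracker
    peer h1 127.0.0.1:20000
    peer h2 127.0.0.1:20001
    table quota1 type string size 100 expire 60s store gpc0
    table quota2 type string size 100 expire 60s store gpc1
```

Crucially, each process must be started with a local peer name matching its
entry, e.g. lb1 with `haproxy -L h1 -f lb1.cfg` and lb2 with `-L h2`; without
`-L` the local peer name defaults to the hostname and the remote side rejects
the handshake, which would also explain `new_conn` incrementing while nothing
is ever learned.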


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081
 > User-Agent: curl/7.68.0
 > Accept: */*
 >
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
 > GET / HTTP/1.1
 > Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=1 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:20000 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
   0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:20000 
last_status=NAME last_hdshk=5m31s
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=2 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
     Dictionary cache not dumped (use "show peers dict")
   0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
     reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
     flags=0x0
     shared tables:
   0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
   last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
   table:0x56

Help with peer setup and "srv_conn(bk_customer/haproxy-dest1)"

2021-12-05 Thread Aleksandar Lazic


Hi.

I am trying to protect a backend server against overload within a master/master 
setup.
The test setup looks like this:

lb1: 8081 \
   -hap-dest: 8080
lb2: 8082 /

When I now call lb1 with curl, the "tracker/quota1" gpc is incremented and the 
second request is denied.
The problem is that the peer on lb2 does not receive the counter data, so the 
backend on lb2 is not protected as well.

Can anybody please help me fix my mistake and find a proper solution?
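For reference, a minimal sketch of the intended replicated setup (peer names, ports, and table parameters are placeholders; each instance must be started with a matching local peer name via `-L`, and gpc0 must appear in the table's store clause for the counter to be pushed to the remote peer):

```
peers tracker
    peer lb1 127.0.0.1:20001
    peer lb2 127.0.0.1:20002

backend bk_customer
    # the table replicates only if it references the peers section and stores
    # the counters that should travel (here gpc0)
    stick-table type string size 100 expire 1m peers tracker store gpc0
```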


```
curl -v http://127.0.0.1:8081/; curl -v http://127.0.0.1:8081
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< dest_dst_conn: 1
< content-length: 0
<
* Connection #0 to host 127.0.0.1 left intact


* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081

< HTTP/1.1 403 Forbidden
< content-length: 93
< cache-control: no-cache
< content-type: text/html

```

``` lb1
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9990

0x55bb71554dc0: [05/Dec/2021:10:27:17] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x55bb71558350: id=tracker(remote,inactive) addr=127.0.0.1:20001 
last_status=NAME last_hdshk=5m36s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=1 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f1e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb71557300: id=h1(local,inactive) addr=127.0.0.1:2 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x55bb7156f230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556a50 id=tracker/quota1 update=3 localupdate=3 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x55bb7156f0e0 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x55bb71556c60 id=tracker/quota2 update=2 localupdate=2 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")

# table: tracker/quota1, type: string, size:100, used:1
0x55bb71772888: key=0 use=0 exp=53297 server_id=0 gpc0=1

# table: tracker/quota2, type: string, size:100, used:1
0x55bb71772958: key=0 use=0 exp=53297 server_id=0 gpc1=0

```

``` lb2
echo "show peers;show table tracker/quota1;show table tracker/quota2"|socat - 
tcp4-connect:127.0.0.1:9991

0x5618ae836dc0: [05/Dec/2021:10:27:12] id=tracker disabled=0 flags=0x33 
resync_timeout= task_calls=5
  0x5618ae83a350: id=tracker(remote,inactive) addr=127.0.0.1:2 
last_status=NAME last_hdshk=5m31s
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 no_hbt=0 
new_conn=2 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae8511e0 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae851090 local_id=1 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838c60 id=tracker/quota2 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae839300: id=h2(local,inactive) addr=127.0.0.1:20001 last_status=NONE 
last_hdshk=
reconnect= heartbeat= confirm=0 tx_hbt=0 rx_hbt=0 
no_hbt=0 new_conn=0 proto_err=0 coll=0
flags=0x0
shared tables:
  0x5618ae851230 local_id=2 remote_id=0 flags=0x0 remote_data=0x0
  last_acked=0 last_pushed=0 last_get=0 teaching_origin=0 update=0
  table:0x5618ae838a50 id=tracker/quota1 update=0 localupdate=0 
commitupdate=0 refcnt=1
Dictionary cache not dumped (use "show peers dict")
  0x5618ae8510e0 lo

Re: Help

2021-07-16 Thread Aleksandar Lazic

Hi.

On 16.07.21 14:34, Anilton Silva Fernandes wrote:

Hi there…

Can I get another HELP:

This time, I want to receive a request and check the URL to know which backend 
should be called.

This is my config:

frontend web_accounts
     mode tcp
     bind 10.15.1.12:443
     default_backend accounts_servers

frontend web_apimanager
     mode tcp
     bind 10.15.1.13:443

     use_backend apiservices if { path_beg /api/ }    # IF THERE’S API ON THE URL SEND TO APISERVICES
     use_backend apimanager unless { path_beg /api }  # IF THERE’S NOT API, SEND IT TO APIMANAGER


This is not possible with TCP mode.
You have to switch to HTTP mode.

This blog post documents such an example, along with more about HAProxy ACLs.

https://www.haproxy.com/blog/introduction-to-haproxy-acls/
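A sketch of the HTTP-mode variant (the certificate path is hypothetical; in HTTP mode the frontend must terminate TLS itself so that path_beg can inspect the URL):

```
frontend web_apimanager
    mode http
    bind 10.15.1.13:443 ssl crt /etc/ssl/example.pem   # placeholder certificate
    use_backend apiservices if { path_beg /api/ }
    default_backend apimanager

backend apiservices
    mode http
    balance roundrobin
    server apimanagerqa.cvt.cv 10.16.18.129:8245 check
```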


backend accounts_servers
    mode tcp
    balance roundrobin
    server  accounts1 10.16.18.128:443 check

backend apimanager
    mode tcp
    balance roundrobin
    server  apimanager1 10.16.18.129:9445 check

backend apiservices
    mode tcp
    balance roundrobin
    server  apimanagerqa.cvt.cv 10.16.18.129:8245 check

Thank you

*From:* Emerson Gomes [mailto:emerson.go...@gmail.com]
*Sent:* 7 July 2021 12:34
*To:* Anilton Silva Fernandes 
*Cc:* haproxy@formilux.org
*Subject:* Re: Help

Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the 
directory where your PEM file(s) resides.

Also, make sure that both the certificate and private key are contained within 
the same PEM file.

It should look like this:

-BEGIN CERTIFICATE-
    xxx
-END CERTIFICATE-
-BEGIN PRIVATE KEY-
   xxx
-END PRIVATE KEY-
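A combined file in that layout can be produced by simple concatenation. The sketch below generates a throwaway self-signed pair only so the commands are self-contained; in practice you would concatenate your real certificate and key:

```shell
# generate a throwaway self-signed pair (stand-ins for the real cert and key)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.test" \
    -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null
# concatenate the certificate first, then the private key, into one PEM file
cat /tmp/cert.pem /tmp/key.pem > /tmp/combined.pem
grep -c "BEGIN" /tmp/combined.pem   # → 2 (one BEGIN line per block)
```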

BR.,

Emerson

On Wed, 7 Jul 2021 at 14:47, Anilton Silva Fernandes <anilton.fernan...@cvt.cv> wrote:

Hi there.

    Can I get some help from you.

I’m configuring HAProxy as an HTTPS frontend with a certificate, and I want 
clients to be redirected to the BACKEND over HTTPS as well (443), but I want clients to 
see only the HAProxy certificate, as the backend one is not valid.

Below is the schematic of my design:

So, on

This is the configuration file I’m using:




frontend haproxy
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
    default_backend wso2

backend wso2
    mode http
    option forwardfor
    redirect scheme https if !{ ssl_fc }
    server my-api 10.16.18.128:443 check ssl verify none
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }



frontend web_accounts
    mode tcp
    bind 192.168.1.214:443
    default_backend accounts_servers

frontend web_apimanager
    mode tcp
    bind 192.168.1.215:443
    default_backend apimanager_servers

backend accounts_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check

backend apimanager_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check





The first one works, but we get SSL problems due to invalid 
certificates on the backend.

The second one is what we would like, but it does not work and reports some errors:

[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind *:443' 
: unable to load SSL private key from PEM file '/etc/ssl/cvt.cv/accounts_cvt.pem 
<http://cvt.cv/accounts_cvt.pem>'.

[ALERT] 187/114337 (7823) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg

[ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified 
for bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').

[ALERT] 187/114337 (7823) : Fatal errors found in configuration.

Errors in configuration file, check with haproxy check.

This is on CentOS 6

Thank you

Best regards,

*Anilton Fernandes | Plataformas, Sistemas e Infraestruturas*

Cabo Verde Telecom, SA

Group Cabo Verde Telecom

Rua Cabo Verde Telecom, 1, Edificio CVT

198, Praia, Santiago, República de Cabo Verde

Phone: +238 3503934 | Mobile: +238 9589123 | Email – anilton.fernan...@cvt.cv 
<mailto:anilton.fernan...@cvt.cv>







RE: Help

2021-07-16 Thread Anilton Silva Fernandes
Hi there…

Can I get another HELP:

This time, I want to receive a request and check the URL to know which backend 
should be called.

This is my config:

frontend web_accounts
mode tcp
bind 10.15.1.12:443
default_backend accounts_servers

frontend web_apimanager
mode tcp
bind 10.15.1.13:443
use_backend apiservices if { path_beg /api/ }    # IF THERE’S API ON THE URL SEND TO APISERVICES
use_backend apimanager unless { path_beg /api }  # IF THERE’S NOT API, SEND IT TO APIMANAGER


backend accounts_servers
   mode tcp
   balance roundrobin
   server  accounts1 10.16.18.128:443 check

backend apimanager
   mode tcp
   balance roundrobin
   server  apimanager1 10.16.18.129:9445 check


backend apiservices
   mode tcp
   balance roundrobin
   server  apimanagerqa.cvt.cv 10.16.18.129:8245 check


Thank you

From: Emerson Gomes [mailto:emerson.go...@gmail.com]
Sent: 7 July 2021 12:34
To: Anilton Silva Fernandes 
Cc: haproxy@formilux.org
Subject: Re: Help

Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the 
directory where your PEM file(s) resides.
Also, make sure that both the certificate and private key are contained within 
the same PEM file.

It should look like this:

-BEGIN CERTIFICATE-
   xxx
-END CERTIFICATE-
-BEGIN PRIVATE KEY-
  xxx
-END PRIVATE KEY-

BR.,
Emerson

On Wed, 7 Jul 2021 at 14:47, Anilton Silva Fernandes 
<anilton.fernan...@cvt.cv> wrote:
Hi there.

Can I get some help from you.

I’m configuring HAProxy as an HTTPS frontend with a certificate, and I want 
clients to be redirected to the BACKEND over HTTPS as well (443), but I want clients to 
see only the HAProxy certificate, as the backend one is not valid.

Below is the schematic of my design:



So, on

This is the configuration file I’m using:



frontend haproxy
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
    default_backend wso2

backend wso2
    mode http
    option forwardfor
    redirect scheme https if !{ ssl_fc }
    server my-api 10.16.18.128:443 check ssl verify none
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }


frontend web_accounts
    mode tcp
    bind 192.168.1.214:443
    default_backend accounts_servers

frontend web_apimanager
    mode tcp
    bind 192.168.1.215:443
    default_backend apimanager_servers

backend accounts_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check

backend apimanager_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check


The first one works, but we get SSL problems due to invalid certificates 
on the backend.

The second one is what we would like, but it does not work and reports some errors:
[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind 
*:443' : unable to load SSL private key from PEM file 
'/etc/ssl/cvt.cv/accounts_cvt.pem<http://cvt.cv/accounts_cvt.pem>'.
[ALERT] 187/114337 (7823) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg
[ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified for 
bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').
[ALERT] 187/114337 (7823) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.


This is on CentOS 6

Thank you




Best regards,

Anilton Fernandes | Plataformas, Sistemas e Infraestruturas
Cabo Verde Telecom, SA
Group Cabo Verde Telecom
Rua Cabo Verde Telecom, 1, Edificio CVT
198, Praia, Santiago, República de Cabo Verde
Phone: +238 3503934 | Mobile: +238 9589123 | Email – 
anilton.fernan...@cvt.cv<mailto:anilton.fernan...@cvt.cv>





Re: Help

2021-07-07 Thread Emerson Gomes
Hello Anilton,

In the "bind *:443" line, do not specify a PEM file directly, but only the
directory where your PEM file(s) resides.
Also, make sure that both the certificate and private key are contained
within the same PEM file.

It should look like this:

-BEGIN CERTIFICATE-
   xxx
-END CERTIFICATE-
-BEGIN PRIVATE KEY-
  xxx
-END PRIVATE KEY-

BR.,
Emerson

On Wed, 7 Jul 2021 at 14:47, Anilton Silva Fernandes <
anilton.fernan...@cvt.cv> wrote:

> Hi there.
>
>
>
> Can I get some help from you.
>
>
>
> I’m configuring HAProxy as an HTTPS frontend with a certificate, and I want
> clients to be redirected to the BACKEND over HTTPS as well (443), but I want clients
> to see only the HAProxy certificate, as the backend one is not valid.
>
>
>
> Below is the schematic of my design:
>
>
>
>
>
>
>
> So, on
>
>
>
> This is the configuration file I’m using:
>
>
>
> [image: frontend haproxy mode http bind *:80 bind *:443 ssl crt
> /etc/ssl/cvt.cv/accounts_cvt.pem default_backend wso2 backend wso2 mode
> http option forwardfor redirect scheme https if !{ ssl_fc } server my-api
> 10.16.18.128:443 check ssl verify none http-request set-header
> X-Forwarded-Port %[dst_port] http-request add-header X-Forwarded-Proto
> https if { ssl_fc }]
> [image: frontend web_accounts mode tcp bind 192.168.1.214:443
> default_backend accounts_servers frontend web_apimanager mode tcp bind
> 192.168.1.215:443 default_backend apimanager_servers backend
> accounts_servers balance roundrobin server accounts1 10.16.18.128:443 check
> server accounts2 10.16.18.128:443 check backend apimanager_servers balance
> roundrobin server accounts1 10.16.18.128:443 check server accounts2
> 10.16.18.128:443 check]
>
>
>
> The first one works, but we get SSL problems due to invalid
> certificates on the backend.
>
> The second one is what we would like, but it does not work and reports some
> errors:
>
> [ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind
> *:443' : unable to load SSL private key from PEM file '/etc/ssl/
> cvt.cv/accounts_cvt.pem'.
>
> [ALERT] 187/114337 (7823) : Error(s) found in configuration file :
> /etc/haproxy/haproxy.cfg
>
> [ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified
> for bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').
>
> [ALERT] 187/114337 (7823) : Fatal errors found in configuration.
>
> Errors in configuration file, check with haproxy check.
>
>
>
>
>
> This is on CentOS 6
>
>
>
> Thank you
>
>
>
>
>
>
>
>
>
> Best regards,
>
>
>
> *Anilton Fernandes | Plataformas, Sistemas e Infraestruturas*
>
> Cabo Verde Telecom, SA
>
> Group Cabo Verde Telecom
>
> Rua Cabo Verde Telecom, 1, Edificio CVT
>
> 198, Praia, Santiago, República de Cabo Verde
>
> Phone: +238 3503934 | Mobile: +238 9589123 | Email –
> anilton.fernan...@cvt.cv
>
>
>
>
>
>
>
>


Re: Help

2021-07-07 Thread Shawn Heisey

On 7/7/2021 6:45 AM, Anilton Silva Fernandes wrote:


Hi there.

Can I get some help from you.

I’m configuring HAProxy as an HTTPS frontend with a certificate, and I 
want clients to be redirected to the BACKEND over HTTPS as well (443), but I 
want clients to see only the HAProxy certificate, as the backend one is 
not valid.





The second one is what we would like, but it does not work and reports some 
errors:


[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 
'bind *:443' : unable to load SSL private key from PEM file 
'/etc/ssl/cvt.cv/accounts_cvt.pem'.



The error is shown clearly.  It's telling you that the private key is 
not contained in the file you mentioned for your certificate.


For haproxy, the certificate file must contain three things:  1) The 
server certificate.  2) Any intermediate certificates.  3) The private 
key for the server certificate.  Order is not important for 2 and 3, but 
I'm pretty sure the first certificate in the file must be the server 
cert.  The root certificate is usually not required -- the end-user's 
browser should already contain that.  I am not aware of any way to 
specify the private key in a separate file, but one might exist that I 
have never seen.


My certificate files also contain a fourth item - "DH PARAMETERS", 
generated with "openssl dhparam 2048".  Each certificate gets its own 
dhparam, and it is regenerated each time I renew the cert.
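That assembly can be sketched as follows. The self-signed pair generated here only stands in for a real server certificate, intermediates, and key, so the commands are self-contained:

```shell
# stand-in for the real server certificate and private key
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.test" \
    -keyout /tmp/k.pem -out /tmp/c.pem 2>/dev/null
# server cert first, then (any intermediates and) the private key
cat /tmp/c.pem /tmp/k.pem > /tmp/site.pem
# append DH parameters, regenerated at every renewal as described above
openssl dhparam 2048 2>/dev/null >> /tmp/site.pem
grep -q "BEGIN DH PARAMETERS" /tmp/site.pem && echo ok
```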


Thanks,
Shawn




Help

2021-07-07 Thread Anilton Silva Fernandes
Hi there.

Can I get some help from you.

I'm configuring HAProxy as an HTTPS frontend with a certificate, and I want 
clients to be redirected to the BACKEND over HTTPS as well (443), but I want clients to 
see only the HAProxy certificate, as the backend one is not valid.

Below is the schematic of my design:



So, on

This is the configuration file I'm using:


frontend haproxy
    mode http
    bind *:80
    bind *:443 ssl crt /etc/ssl/cvt.cv/accounts_cvt.pem
    default_backend wso2

backend wso2
    mode http
    option forwardfor
    redirect scheme https if !{ ssl_fc }
    server my-api 10.16.18.128:443 check ssl verify none
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
frontend web_accounts
    mode tcp
    bind 192.168.1.214:443
    default_backend accounts_servers

frontend web_apimanager
    mode tcp
    bind 192.168.1.215:443
    default_backend apimanager_servers

backend accounts_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check

backend apimanager_servers
    balance roundrobin
    server accounts1 10.16.18.128:443 check
    server accounts2 10.16.18.128:443 check


The first one works, but we get SSL problems due to invalid certificates 
on the backend.

The second one is what we would like, but it does not work and reports some errors:
[ALERT] 187/114337 (7823) : parsing [/etc/haproxy/haproxy.cfg:85] : 'bind 
*:443' : unable to load SSL private key from PEM file 
'/etc/ssl/cvt.cv/accounts_cvt.pem'.
[ALERT] 187/114337 (7823) : Error(s) found in configuration file : 
/etc/haproxy/haproxy.cfg
[ALERT] 187/114337 (7823) : Proxy 'haproxy': no SSL certificate specified for 
bind '*:443' at [/etc/haproxy/haproxy.cfg:85] (use 'crt').
[ALERT] 187/114337 (7823) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.


This is on CentOS 6

Thank you




Best regards,

Anilton Fernandes | Plataformas, Sistemas e Infraestruturas
Cabo Verde Telecom, SA
Group Cabo Verde Telecom
Rua Cabo Verde Telecom, 1, Edificio CVT
198, Praia, Santiago, República de Cabo Verde
Phone: +238 3503934 | Mobile: +238 9589123 | Email - 
anilton.fernan...@cvt.cv<mailto:anilton.fernan...@cvt.cv>







Re: help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Aleksandar Lazic

Tim,

you are great ;-)

On 08.04.21 18:14, Tim Düsterhus wrote:

Aleks,

On 4/8/21 5:07 PM, Aleksandar Lazic wrote:
http-request set-var(sess.json) %[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")] 


http-request set-var() does not expect the %[] syntax, because it always takes 
a sample. Even the following returns the same error message:


http-request set-var(sess.json) %[req.hdr(Authorization)]

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this assumption 
right?


The assumption is not correct, because you are not searching for a fetch. You 
want a converter, because you are converting an existing sample. I suggest you 
take a look at the "digest" converter. You can find it in sample.c.


Best regards
Tim Düsterhus


Thanks ,this was the hint I needed ;-)

Regards
Alex



Re: help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Tim Düsterhus

Aleks,

On 4/8/21 5:07 PM, Aleksandar Lazic wrote:
http-request set-var(sess.json) 
%[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")] 


http-request set-var() does not expect the %[] syntax, because it always 
takes a sample. Even the following returns the same error message:


http-request set-var(sess.json) %[req.hdr(Authorization)]

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this 
assumption right?


The assumption is not correct, because you are not searching for a 
fetch. You want a converter, because you are converting an existing 
sample. I suggest you take a look at the "digest" converter. You can 
find it in sample.c.
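Concretely, the failing line minus the %[] wrapper would look like this (a sketch; only the existing b64dec step is shown, since the json converter is still the work in progress discussed in this thread):

```
# set-var() takes a bare sample expression; %[...] belongs to log-format strings
http-request set-var(sess.json) req.hdr(Authorization),b64dec
```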


Best regards
Tim Düsterhus



help for implementation of first fetch function "sample_fetch_json_string"

2021-04-08 Thread Aleksandar Lazic

Hi.

I am trying to implement "sample_fetch_json_string" based on 
https://github.com/cesanta/mjson.

Since I haven't implemented a fetch function before, it would be nice if 
somebody could help me and point me in the right direction. Maybe I have 
overlooked some documentation in the doc directory.

Let's assume there are these haproxy config lines.

```
# get the namespace from a bearer token
http-request set-var(sess.json) 
%[req.hdr(Authorization),b64dec,json_string("\$.kubernetes\\.io/serviceaccount/namespace")]
http-request return status 200 content-type text/plain lf-string 
%[date,ltime(%Y-%m-%d_%H-%M-%S)] hdr x-var "val=%[var(sess.json)]"
```

When I run this I get the following message, which I also don't understand, 
because I have added "sample_fetch_json_string" to the "static struct 
sample_fetch_kw_list smp_kws = ...".

```
./haproxy -d -f ../test-haproxy.conf
[NOTICE] 097/170201 (1105377) : haproxy version is 2.4-dev15-909947-31
[NOTICE] 097/170201 (1105377) : path to executable is ./haproxy
[ALERT] 097/170201 (1105377) : parsing [../test-haproxy.conf:10] : error 
detected in frontend 'fe1' while parsing 'http-request set-var(sess.json)' rule 
: missing fetch method.
[ALERT] 097/170201 (1105377) : Error(s) found in configuration file : 
../test-haproxy.conf
```

I expect that the decoded json string is in args[0] and the 
"\$.kubernetes\\.io/serviceaccount/namespace" is in smp, is this assumption 
right?

As you can see, I have some open questions which I hope someone can answer.

That's the function signature.

https://github.com/cesanta/mjson#mjson_get_string
// s, len is a JSON string [ "abc", "de\r\n" ]
int mjson_get_string(const char *s, int len, const char *path, char *to, int 
sz);

I think that this line isn't right, but what's the right one?

rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

attached the WIP diff and the test config.

It's a similar concept to the env fetch function.
What I don't know is which struct holds what.


``` from sample.c
smp_fetch_env(const struct arg *args, struct sample *smp, const char *kw, void 
*private)

/* This sample function fetches the value from a given json string.
 * The mjson library is used to parse the json struct
*/
static int sample_fetch_json_string(const struct arg *args, struct sample *smp, 
const char *kw, void *private)
{
struct buffer *tmp;
int rc;

tmp = get_trash_chunk();
/* json string, json string length,
   search pattern, value buffer, value buffer size:
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
"$.kubernetes\\.io/serviceaccount/namespace", tmp->area, tmp->size);
*/
rc = mjson_get_string(args[0].data.str.area, args[0].data.str.data, 
args[1].data.str.area, tmp->area, tmp->size);

smp->flags |= SMP_F_CONST;
smp->data.type = SMP_T_STR;
smp->data.u.str.area = tmp->area;
smp->data.u.str.data = tmp->data;
return 1;
}
```

Regards
Alex
diff --git a/Makefile b/Makefile
index 9b22fe4be..7f6998cdc 100644
--- a/Makefile
+++ b/Makefile
@@ -883,7 +883,8 @@ OBJS += src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o\
 src/ebistree.o src/auth.o src/wdt.o src/http_acl.o \
 src/hpack-enc.o src/hpack-huff.o src/ebtree.o src/base64.o \
 src/hash.o src/dgram.o src/version.o src/fix.o src/mqtt.o src/dns.o\
-src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o
+src/server_state.o src/proto_uxdg.o src/init.o src/cfgdiag.o   \
+src/mjson.o
 
 ifneq ($(TRACE),)
 OBJS += src/calltrace.o
@@ -946,6 +947,10 @@ dev/poll/poll:
 dev/tcploop/tcploop:
 	$(Q)$(MAKE) -C dev/tcploop tcploop CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
 
+dev/json/json: dev/json/json.o dev/json/mjson/src/mjson.o src/chunk.o
+	$(cmd_LD) $(LDFLAGS) -o $@ $^ $(LDOPTS)
+	#$(Q)$(MAKE) -C dev/json json CC='$(cmd_CC)' OPTIMIZE='$(COPTS)'
+
 # rebuild it every time
 .PHONY: src/version.c
 
diff --git a/dev/json/test-data.json b/dev/json/test-data.json
new file mode 100644
index 0..fdda596e9
--- /dev/null
+++ b/dev/json/test-data.json
@@ -0,0 +1 @@
+{"iss":"kubernetes/serviceaccount","kubernetes.io/serviceaccount/namespace":"openshift-logging","kubernetes.io/serviceaccount/secret.name":"deployer-token-m98xh","kubernetes.io/serviceaccount/service-account.name":"deployer","kubernetes.io/serviceaccount/service-account.uid":"35dddefd-3b5a-11e9-947c-fa163e480910","sub":"system:serviceaccount:openshift-logging:deployer"}
\ No newline at end of file
diff --git a/dev/json/test-data.json.base64 b/dev/json/test-data.json.base64
new file mode 100644
index 0..75cddd3ac
--- /dev/null
+++ b/dev/json/test-data.json.base64
@@ -0,0 +1 @@

Re: [PATCH] MINOR: build: discard echoing in help target

2021-01-18 Thread William Lallemand
On Sun, Jan 17, 2021 at 06:47:47PM +, Bertrand Jacquin wrote:
> When V=1 is used in conjunction with help, the output becomes pretty
> difficult to read properly.
> 
>   $ make TARGET=linux-glibc V=1 help
>   ..
> DEBUG_USE_ABORT: use abort() for program termination, see 
> include/haproxy/bug.h for details
>   echo; \
>  if [ -n "" ]; then \
>if [ -n "" ]; then \
>   echo "Current TARGET: "; \
>else \
>   echo "Current TARGET:  (custom target)"; \
>fi; \
>  else \
>echo "TARGET not set, you may pass 'TARGET=xxx' to set one among :";\
>echo "  linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, 
> netbsd,"; \
>echo "  osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, 
> generic,"; \
>echo "  custom"; \
>  fi
> 
>   TARGET not set, you may pass 'TARGET=xxx' to set one among :
> linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, netbsd,
> osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,
> custom
>   echo;echo "Enabled features for TARGET '' (disable with 'USE_xxx=') :"
> 
>   Enabled features for TARGET '' (disable with 'USE_xxx=') :
>   set --POLL  ; echo "  $*" | (fmt || 
> cat) 2>/dev/null
> POLL
>   echo;echo "Disabled features for TARGET '' (enable with 'USE_xxx=1') :"
> 
>   Disabled features for TARGET '' (enable with 'USE_xxx=1') :
>   set -- EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT  PRIVATE_CACHE 
> THREAD PTHREAD_PSHARED BACKTRACE STATIC_PCRE STATIC_PCRE2 TPROXY LINUX_TPROXY 
> LINUX_SPLICE LIBCRYPT CRYPT_H GETADDRINFO OPENSSL LUA FUTEX ACCEPT4 CLOSEFROM 
> ZLIB SLZ CPU_AFFINITY TFO NS DL RT DEVICEATLAS 51DEGREES WURFL SYSTEMD 
> OBSOLETE_LINKER PRCTL THREAD_DUMP EVPORTS OT QUIC; echo "  $*" | (fmt || cat) 
> 2>/dev/null
> EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE
> 
> This commit ensures the help target always discards line echoing
> regardless of the V variable, as done for the reg-tests-help target.

Thanks, merged!

-- 
William Lallemand



[PATCH] MINOR: build: discard echoing in help target

2021-01-17 Thread Bertrand Jacquin
When V=1 is used in conjunction with help, the output becomes pretty
difficult to read properly.

  $ make TARGET=linux-glibc V=1 help
  ..
DEBUG_USE_ABORT: use abort() for program termination, see 
include/haproxy/bug.h for details
  echo; \
 if [ -n "" ]; then \
   if [ -n "" ]; then \
  echo "Current TARGET: "; \
   else \
  echo "Current TARGET:  (custom target)"; \
   fi; \
 else \
   echo "TARGET not set, you may pass 'TARGET=xxx' to set one among :";\
   echo "  linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, 
netbsd,"; \
   echo "  osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,"; 
\
   echo "  custom"; \
 fi

  TARGET not set, you may pass 'TARGET=xxx' to set one among :
linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, netbsd,
osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,
custom
  echo;echo "Enabled features for TARGET '' (disable with 'USE_xxx=') :"

  Enabled features for TARGET '' (disable with 'USE_xxx=') :
  set --POLL  ; echo "  $*" | (fmt || 
cat) 2>/dev/null
POLL
  echo;echo "Disabled features for TARGET '' (enable with 'USE_xxx=1') :"

  Disabled features for TARGET '' (enable with 'USE_xxx=1') :
  set -- EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT  PRIVATE_CACHE 
THREAD PTHREAD_PSHARED BACKTRACE STATIC_PCRE STATIC_PCRE2 TPROXY LINUX_TPROXY 
LINUX_SPLICE LIBCRYPT CRYPT_H GETADDRINFO OPENSSL LUA FUTEX ACCEPT4 CLOSEFROM 
ZLIB SLZ CPU_AFFINITY TFO NS DL RT DEVICEATLAS 51DEGREES WURFL SYSTEMD 
OBSOLETE_LINKER PRCTL THREAD_DUMP EVPORTS OT QUIC; echo "  $*" | (fmt || cat) 
2>/dev/null
EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE

This commit ensures the help target always discards line echoing
regardless of the V variable, as done for the reg-tests-help target.
---
 Makefile | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Makefile b/Makefile
index 27d56451cdd7..b0ab6bce5281 100644
--- a/Makefile
+++ b/Makefile
@@ -882,8 +882,8 @@ INCLUDES = $(wildcard include/*/*.h)
 DEP = $(INCLUDES) .build_opts
 
 help:
-   $(Q)sed -ne "/^[^#]*$$/q;s/^# \{0,1\}\(.*\)/\1/;p" Makefile
-   $(Q)echo; \
+   @sed -ne "/^[^#]*$$/q;s/^# \{0,1\}\(.*\)/\1/;p" Makefile
+   @echo; \
   if [ -n "$(TARGET)" ]; then \
 if [ -n "$(set_target_defaults)" ]; then \
echo "Current TARGET: $(TARGET)"; \
@@ -896,10 +896,10 @@ help:
 echo "  osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,"; \
 echo "  custom"; \
   fi
-   $(Q)echo;echo "Enabled features for TARGET '$(TARGET)' (disable with 'USE_xxx=') :"
-   $(Q)set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if $(USE_$(opt)),$(opt),)); echo "  $$*" | (fmt || cat) 2>/dev/null
-   $(Q)echo;echo "Disabled features for TARGET '$(TARGET)' (enable with 'USE_xxx=1') :"
-   $(Q)set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if $(USE_$(opt)),,$(opt))); echo "  $$*" | (fmt || cat) 2>/dev/null
+   @echo;echo "Enabled features for TARGET '$(TARGET)' (disable with 'USE_xxx=') :"
+   @set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if $(USE_$(opt)),$(opt),)); echo "  $$*" | (fmt || cat) 2>/dev/null
+   @echo;echo "Disabled features for TARGET '$(TARGET)' (enable with 'USE_xxx=1') :"
+   @set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if $(USE_$(opt)),,$(opt))); echo "  $$*" | (fmt || cat) 2>/dev/null
 
 # Used only to force a rebuild if some build options change, but we don't do
 # it for certain targets which take no build options
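For context on the fix above: GNU make suppresses command echoing only for recipe lines prefixed with `@`, while the `$(Q)` prefix expands to `@` only when V=0, so with V=1 the help recipe lines were printed before their own output. A minimal sketch (hypothetical file `/tmp/echo_demo.mk`, not haproxy's real Makefile) showing the difference:

```shell
# Build a tiny makefile where one recipe line uses the V-controlled $(Q)
# prefix and the other uses a hard-coded '@'.
printf 'V = 0\nifeq ($(V),1)\nQ =\nelse\nQ = @\nendif\n\nhelp:\n\t$(Q)echo "via Q"\n\t@echo "always quiet"\n' > /tmp/echo_demo.mk

make -f /tmp/echo_demo.mk help        # V=0: only the command output appears
make -f /tmp/echo_demo.mk help V=1    # V=1: the $(Q) line is echoed first, the '@' line is not
```

In the second invocation the `echo "via Q"` command itself is printed before its output, which is exactly the noise the patch removes from the help target by hard-coding `@`.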



Re: [PATCH] help coverity to detect BUG_ON as a real stop

2020-10-09 Thread Willy Tarreau
On Fri, Oct 09, 2020 at 03:09:06AM +0500, Илья Шипицин wrote:
> Hello,
> 
> I added DEBUG_STRICT=1 to the Coverity build definition.
> Hopefully, it will resolve 1 Coverity issue.

Applied, thanks Ilya!
Willy



[PATCH] help coverity to detect BUG_ON as a real stop

2020-10-08 Thread Илья Шипицин
Hello,

I added DEBUG_STRICT=1 to the Coverity build definition.
Hopefully, it will resolve 1 Coverity issue.

Cheers,
Ilya Shipitsin
From ab5ab86b0398eb063f3d6ee392207b0238e9e083 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Fri, 9 Oct 2020 03:05:11 +0500
Subject: [PATCH] CI: travis-ci: help Coverity to detect BUG_ON() as a real
 stop

---
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.travis.yml b/.travis.yml
index a8aaccba5..e73d40c33 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -55,7 +55,7 @@ matrix:
   - os: linux
 if: type == cron
 compiler: clang
-env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang DEFINE=-DDEBUG_USE_ABORT TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC"
+env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang DEFINE=-DDEBUG_USE_ABORT TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC DEBUG_STRICT=1"
 script:
   - |
 if [ ! -z ${COVERITY_SCAN_TOKEN+x} ]; then
-- 
2.26.2



Re: [PATCH] ci: travis-ci: help coverity to recognize abort

2020-09-12 Thread Willy Tarreau
On Sat, Sep 12, 2020 at 11:31:29AM +0500, Илья Шипицин wrote:
> so, it is good time to adjust .gitignore :)
> 
> I also added a commit message with an explanation. I'm OK if you modify it
> as you see fit.

That's perfect, all applied with no change now, thank you very much Ilya!
Willy



Re: [PATCH] ci: travis-ci: help coverity to recognize abort

2020-09-12 Thread Илья Шипицин
so, it is good time to adjust .gitignore :)

I also added a commit message with an explanation. I'm OK if you modify it
as you see fit.

On Thu, 10 Sep 2020 at 22:34, Willy Tarreau wrote:

> Hi Ilya,
>
> On Thu, Sep 10, 2020 at 09:45:08PM +0500, Илья Шипицин wrote:
> > ping :)
>
> Ah sorry, thanks for the reminder, I remember reading it and thought it
> was merged, but I was wrong. However I'm seeing two mistakes:
>
>   - the first patch accidentally merged a copy of your libwurfl.a. Don't
> worry, I'll edit the patch to get rid of it, that's easy.
>
>   - the second one doesn't explain what the problem was and it's hard to
> figure it out from just the patch itself. Please keep in mind that the
> purpose of the commit message is first and foremost to quickly explain
> why the patch has to exist. I can help draft a bit
> of the message if you're not comfortable with it but I need a bit
> of input.
>
> Thanks!
> Willy
>
From 8ff6d4b69e4dce1e63364a207484054a9e4c6daf Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 12 Sep 2020 11:15:43 +0500
Subject: [PATCH 1/3] CLEANUP: Update .gitignore

This makes git ignore ar archives (*.a).
---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitignore b/.gitignore
index 3a760af99..f77751a6d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -31,6 +31,7 @@
 !/reg-tests
 # Reject some generic files
 *.o
+*.a
 *~
 *.rej
 *.orig
-- 
2.26.2

From 4ede61ed91a918e715c636d46320edc3e8e82e90 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 12 Sep 2020 11:27:51 +0500
Subject: [PATCH 3/3] CI: travis-ci: help Coverity to recognize abort()

Generally haproxy uses (*(volatile int*)1=0) to abort. It is not recognized
as an abort by static analyzers such as Coverity Scan, so a fallback to
abort() was introduced in the previous commit for code-analysis purposes.
Let us explicitly use it for the Coverity build job.
---
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.travis.yml b/.travis.yml
index ca867d967..8850850ec 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -55,7 +55,7 @@ matrix:
   - os: linux
 if: type == cron
 compiler: clang
-env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC"
+env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang DEFINE=-DDEBUG_USE_ABORT TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC"
 script:
   - |
 if [ ! -z ${COVERITY_SCAN_TOKEN+x} ]; then
-- 
2.26.2

From 264c0aa16d1f1a2fcfa0d5777ed6c08921f7552f Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 12 Sep 2020 11:24:48 +0500
Subject: [PATCH 2/3] BUILD: introduce possibility to define ABORT_NOW()
 conditionally

Code analysis tools recognize abort() better, so let us introduce the
possibility to use it.
---
 Makefile  |  1 +
 include/haproxy/bug.h | 11 ---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 48c595511..c0645093c 100644
--- a/Makefile
+++ b/Makefile
@@ -106,6 +106,7 @@
 #   SUBVERS: add a sub-version (eg: platform, model, ...).
 #   VERDATE: force haproxy's release date.
 #   VTEST_PROGRAM  : location of the vtest program to run reg-tests.
+#   DEBUG_USE_ABORT: use abort() for program termination, see include/haproxy/bug.h for details
 
 # verbosity: pass V=1 for verbose shell invocation
 V = 0
diff --git a/include/haproxy/bug.h b/include/haproxy/bug.h
index a008126f5..ad2018b13 100644
--- a/include/haproxy/bug.h
+++ b/include/haproxy/bug.h
@@ -37,10 +37,15 @@
 #define DPRINTF(x...)
 #endif
 
-/* This abort is more efficient than abort() because it does not mangle the
- * stack and stops at the exact location we need.
- */
+#ifdef DEBUG_USE_ABORT
+/* abort() is better recognized by code analysis tools */
+#define ABORT_NOW() abort()
+#else
+/* More efficient than abort() because it does not mangle the
+  * stack and stops at the exact location we need.
+  */
 #define ABORT_NOW() (*(volatile int*)1=0)
+#endif
 
 /* BUG_ON: complains if  is true when DEBUG_STRICT or DEBUG_STRICT_NOCRASH
  * are set, does nothing otherwise. With DEBUG_STRICT in addition it immediately
-- 
2.26.2



Re: [PATCH] ci: travis-ci: help coverity to recognize abort

2020-09-10 Thread Willy Tarreau
Hi Ilya,

On Thu, Sep 10, 2020 at 09:45:08PM +0500, Илья Шипицин wrote:
> ping :)

Ah sorry, thanks for the reminder, I remember reading it and thought it
was merged, but I was wrong. However I'm seeing two mistakes:

  - the first patch accidentally merged a copy of your libwurfl.a. Don't
worry, I'll edit the patch to get rid of it, that's easy.

  - the second one doesn't explain what the problem was and it's hard to
figure it out from just the patch itself. Please keep in mind that the
purpose of the commit message is first and foremost to quickly explain
why the patch has to exist. I can help draft a bit
of the message if you're not comfortable with it but I need a bit
of input.

Thanks!
Willy



Re: [PATCH] ci: travis-ci: help coverity to recognize abort

2020-09-10 Thread Илья Шипицин
ping :)

On Sun, 6 Sep 2020 at 13:40, Илья Шипицин wrote:

> Hello,
>
> based on discussion https://github.com/haproxy/haproxy/issues/755
>
> cheers,
> Ilya Shipitsin
>


[PATCH] ci: travis-ci: help coverity to recognize abort

2020-09-06 Thread Илья Шипицин
Hello,

based on discussion https://github.com/haproxy/haproxy/issues/755

cheers,
Ilya Shipitsin
From f02b672daf08cb94eff13dd07f575f37ae6a Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sun, 6 Sep 2020 13:34:25 +0500
Subject: [PATCH 2/2] CI: travis-ci: help Coverity to recognize abort()

---
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.travis.yml b/.travis.yml
index ca867d967..8850850ec 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -55,7 +55,7 @@ matrix:
   - os: linux
 if: type == cron
 compiler: clang
-env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC"
+env: TARGET=linux-glibc COVERITY_SCAN_PROJECT_NAME="Haproxy" COVERITY_SCAN_BRANCH_PATTERN="*" COVERITY_SCAN_NOTIFICATION_EMAIL="chipits...@gmail.com" COVERITY_SCAN_BUILD_COMMAND="make CC=clang DEFINE=-DDEBUG_USE_ABORT TARGET=$TARGET $FLAGS 51DEGREES_SRC=$FIFTYONEDEGREES_SRC"
 script:
   - |
 if [ ! -z ${COVERITY_SCAN_TOKEN+x} ]; then
-- 
2.26.2

From 5d4f98349e9600f256c207330257fab84be72a95 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sun, 6 Sep 2020 13:32:31 +0500
Subject: [PATCH 1/2] BUILD: introduce possibility to define ABORT_NOW()
 conditionally

Code analysis tools recognize abort() better, so let us introduce the
possibility to use it.
---
 Makefile |   1 +
 contrib/wurfl/libwurfl.a | Bin 0 -> 5116 bytes
 include/haproxy/bug.h|   7 ++-
 3 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 contrib/wurfl/libwurfl.a

diff --git a/Makefile b/Makefile
index 48c595511..c0645093c 100644
--- a/Makefile
+++ b/Makefile
@@ -106,6 +106,7 @@
 #   SUBVERS: add a sub-version (eg: platform, model, ...).
 #   VERDATE: force haproxy's release date.
 #   VTEST_PROGRAM  : location of the vtest program to run reg-tests.
+#   DEBUG_USE_ABORT: use abort() for program termination, see include/haproxy/bug.h for details
 
 # verbosity: pass V=1 for verbose shell invocation
 V = 0
diff --git a/contrib/wurfl/libwurfl.a b/contrib/wurfl/libwurfl.a
new file mode 100644
index ..62b8637dbe2d5c07814ee8e3b2afb2ae555ceaac
GIT binary patch
literal 5116
zcmeI0PiP!v6u`eEwXI20<6o5KFcFm$%yxJ4kCj-q4M`T!lm-?>B9kw(lkCvloj5bQ
zCM9aHAYHu_58|Oo)DVH3xW_!NlRzyfG`zS2M5n}15#z7U@pnPDn@kAezgid
zcEPmdlAL!+PQczcob7q8XO?Z>my0|`QTk@h@q(%>?RUZ{7BDUB8sm+TNiPz
zxL#S7ob#|{F|8cml$P2aa*Lb+ZfbaLPG!NRJwRBJKK#HYnSoQbP1s{NNvETlZW~jpW@taf0wII>@=qvH*a){uhio1e9n<|N2dYX@Xu5{iN+Q-
z_r(PWkxHaeiF5)EI@DD(7%DQ1k~8l~Z^clrs==>n9s+pal#1YKw=yuX?Dzph(?HIa
z_)aQUe0$BBERj}Y6})WEvCObh7{j8;tI_mL2qzUzP=$CDV2~Ra?-`hmzi)!71jk~rPG_Mw36*sUGa6=yO18JRu2IyuN%>_^54f^hg0o~x1>RT4fr-5Og|JFZ6
zebO<47Rr}25a%W4Jb%_RP{F43=NV)1A$53SmxL9bKbNNgb3NMh8+}4#)Z4yD{(AFo
z32bNvef~4x-|YM@QbO9H0rC9Mm;L$t(5wvsM2L&@G2nS%clW}h`y8g^-&
U>+8Qi#($0euOYFsi0J-*0cf^aod5s;

literal 0
HcmV?d1

diff --git a/include/haproxy/bug.h b/include/haproxy/bug.h
index a008126f5..447b3ad22 100644
--- a/include/haproxy/bug.h
+++ b/include/haproxy/bug.h
@@ -37,10 +37,15 @@
 #define DPRINTF(x...)
 #endif
 
-/* This abort is more efficient than abort() because it does not mangle the
+#ifdef DEBUG_USE_ABORT
+/* abort() is better recognized by code analysis tools */
+#define ABORT_NOW() abort()
+#else
+/* More efficient than abort() because it does not mangle the
  * stack and stops at the exact location we need.
  */
 #define ABORT_NOW() (*(volatile int*)1=0)
+#endif
 
 /* BUG_ON: complains if  is true when DEBUG_STRICT or DEBUG_STRICT_NOCRASH
  * are set, does nothing otherwise. With DEBUG_STRICT in addition it immediately
-- 
2.26.2



Re: Can I help with the 2.1 release?

2020-08-08 Thread Willy Tarreau
On Thu, Jul 30, 2020 at 11:10:35PM +0300, Valter Jansons wrote:
> On Thu, Jul 30, 2020 at 10:37 PM Julien Pivotto wrote:
> > I'm with Lukas on this. 2.1 is a strong release, and we should be
> > grateful to everyone who is using that release, as their feedback is
> > valuable for building the next releases of HAProxy.
> 
> My apologies if the message sounded ungrateful, for rolling out new
> changes and testing. As the latest 2.2.0 release did show just
> recently, there is great benefit in people running upcoming (new)
> changes.

No offense, don't worry :-)

We usually say that since odd versions are maintained for less time, we're
allowed to take more risks with them, and we know that most of their users
are autonomous enough to roll back or switch to another version in case
of trouble. As such, the stability of an odd version can be a bit more
chaotic than that of an even one, and that's a deliberate choice to protect
the larger user base. Also I'm less reluctant to backport small features to
odd versions than to even ones (it's give and take: brave users test and
report issues, and in exchange they get a version that better suits their
needs). In other areas of the industry, the terms "early availability" and
"general deployment" designate these different stability statuses, and I
think they model what we do quite well.

Of course when a new version is issued, it needs a little bit of time to
dry up, and a few surprises are expected. But the point is that there
should be (by design) fewer risks in upgrading from 2.1.x to 2.2.x than from
2.0.x to 2.1.x two months after the new major release is emitted. Here
we're facing something unusual in that 2.1 appeared to be exceptionally
stable and 2.2 started with some rough edges, so at this point of the
cycle the difference in stability expectations might still be less visible
of course.

Anyway, the point of maintaining long term supported versions is that
anyone is free to use the one that best suits their needs. Anything
between the oldest that supports all needed features, to the latest
stable enough for the use case is fine.

As a rule of thumb, I'd say that it's probably OK to always be late by
one or two stable versions on average. This should help one figure what
branch to deploy: if the latest stable emits one version every two weeks,
it means you need to upgrade your production every two to four weeks. If
an older stable one produces one version every 6 months, it may allow you
not to care about your prod for 6 months to one year. But in any case
there is always the risk of a critical bug requiring an urgent deployment,
so you should see this as a rule of thumb only and not a strict rule.

Hoping this clarifies the process a bit.

Willy



Re: Can I help with the 2.1 release?

2020-07-30 Thread Valter Jansons
On Thu, Jul 30, 2020 at 10:37 PM Julien Pivotto  wrote:
> I'm with Lukas on this. 2.1 is a strong release, and we should be
> grateful to everyone who is using that release, as their feedback is
> valuable for building the next releases of HAProxy.

My apologies if the message sounded ungrateful, for rolling out new
changes and testing. As the latest 2.2.0 release did show just
recently, there is great benefit in people running upcoming (new)
changes.

On Thu, Jul 30, 2020 at 10:29 PM Lukas Tribus  wrote:
> 2.1 is not a technical preview, it's a proper release train with full
> support. Support for it will cease in 2021-Q1, but I don't think you
> can conclude that that means it's getting less love now.

My "technical preview" wording and the release ramp-down expectation
was somewhat based on past release lines, such as the 2.1.0 ANNOUNCE
saying "2.1 is a stable branch that will be maintained till around Q1
2021, and is mostly aimed at experienced users, just like 1.9 was" and
the 2.0.0 ANNOUNCE saying "As most of you know, 1.9 will not be
maintained for a long time and should mostly be seen as a
technological preview or technical foundation for 2.0."

I do recognize the 2021Q1 commitment of maintenance. If the release
velocity is indeed to be expected from the team for the 2.1 line then
apologies for my doubt on the priorities/time allocation.

On Thu, Jul 30, 2020 at 10:37 PM Julien Pivotto  wrote:
> I am not yet confident enough to run 2.2 in prod, but I will roll out 2.2
> in a non-prod env soon.

On Thu, Jul 30, 2020 at 10:29 PM Lukas Tribus  wrote:
> I would be reluctant to suggest upgrading mission-critical setups to
> 2.2, it's not even a month old at this point. Unless you expect to run
> into bugs and have time and resources to troubleshoot it.

Everyone should, of course, evaluate their upgrade strategies
themselves. I did not intend that to be a general advisory to "upgrade
all the things". Instead I was attempting to pose a legitimate
question out of interest as to whether there are any blockers for a
2.2 LTS migration from 2.1, considering they had already upgraded from
the 2.0 LTS.



Re: Can I help with the 2.1 release?

2020-07-30 Thread Julien Pivotto
On 30 Jul 21:29, Lukas Tribus wrote:
> Hello,
> 
> On Thu, 30 Jul 2020 at 20:49, Valter Jansons  wrote:
> >
> > On Thu, Jul 30, 2020 at 6:44 PM Harris Kaufmann wrote:
> > > my company really needs the next 2.1 release but we want to avoid
> > > deploying a custom, self compiled version.
> > >
> > > Is there something I can do to help with the release? I guess there
> > > are no blocking issues left?
> >
> > For mission-critical setups you should be running the LTS release
> > lines. The 2.1 release line was more of a technical preview line for
> > the following 2.2 LTS release, to keep changes flowing, and you should
> > not expect regular new release tags on the 2.1 line considering the
> > 2.2 line has shipped. I am not involved in the release process but I
> > would assume the team will push a new 2.1 tag some day however I do
> > not see that being a high priority for them in any way.
> >
> > As a result, I would instead rephrase the question in the other
> > direction: Are there any blockers for you to upgrade to 2.2?
> 
> I'm not sure I agree.
> 
> I would be reluctant to suggest upgrading mission-critical setups to
> 2.2, it's not even a month old at this point. Unless you expect to run
> into bugs and have time and resources to troubleshoot it.
> 
> 2.1 is not a technical preview, it's a proper release train with full
> support. Support for it will cease in 2021-Q1, but I don't think you
> can conclude that that means it's getting less love now.
> 
> 
> Lukas
> 

I'm with Lukas on this. 2.1 is a strong release, and we should be
grateful to everyone who is using that release, as their feedback is
valuable for building the next releases of HAProxy.

I am not yet confident enough to run 2.2 in prod, but I will roll out 2.2
in a non-prod env soon.

-- 
 (o-Julien Pivotto
 //\Open-Source Consultant
 V_/_   Inuits - https://www.inuits.eu




Re: Can I help with the 2.1 release?

2020-07-30 Thread Lukas Tribus
Hello,

On Thu, 30 Jul 2020 at 20:49, Valter Jansons  wrote:
>
> On Thu, Jul 30, 2020 at 6:44 PM Harris Kaufmann wrote:
> > my company really needs the next 2.1 release but we want to avoid
> > deploying a custom, self compiled version.
> >
> > Is there something I can do to help with the release? I guess there
> > are no blocking issues left?
>
> For mission-critical setups you should be running the LTS release
> lines. The 2.1 release line was more of a technical preview line for
> the following 2.2 LTS release, to keep changes flowing, and you should
> not expect regular new release tags on the 2.1 line considering the
> 2.2 line has shipped. I am not involved in the release process but I
> would assume the team will push a new 2.1 tag some day however I do
> not see that being a high priority for them in any way.
>
> As a result, I would instead rephrase the question in the other
> direction: Are there any blockers for you to upgrade to 2.2?

I'm not sure I agree.

I would be reluctant to suggest upgrading mission-critical setups to
2.2, it's not even a month old at this point. Unless you expect to run
into bugs and have time and resources to troubleshoot it.

2.1 is not a technical preview, it's a proper release train with full
support. Support for it will cease in 2021-Q1, but I don't think you
can conclude that that means it's getting less love now.


Lukas



Re: Can I help with the 2.1 release?

2020-07-30 Thread Valter Jansons
On Thu, Jul 30, 2020 at 6:44 PM Harris Kaufmann  wrote:
> my company really needs the next 2.1 release but we want to avoid
> deploying a custom, self compiled version.
>
> Is there something I can do to help with the release? I guess there
> are no blocking issues left?

For mission-critical setups you should be running the LTS release
lines. The 2.1 release line was more of a technical preview line for
the following 2.2 LTS release, to keep changes flowing, and you should
not expect regular new release tags on the 2.1 line considering the
2.2 line has shipped. I am not involved in the release process but I
would assume the team will push a new 2.1 tag some day however I do
not see that being a high priority for them in any way.

As a result, I would instead rephrase the question in the other
direction: Are there any blockers for you to upgrade to 2.2?



Can I help with the 2.1 release?

2020-07-30 Thread Harris Kaufmann
Hi,

my company really needs the next 2.1 release but we want to avoid
deploying a custom, self compiled version.

Is there something I can do to help with the release? I guess there
are no blocking issues left?

Best regards,
Harris



Re: [PATCH] DOC/MINOR: halog: Add long help info for ic flag

2020-05-18 Thread William Lallemand
On Fri, May 15, 2020 at 11:05:17PM +0200, Aleksandar Lazic wrote:
> Hi.
> 
> attached a patch for halog.
> 
> Regards
> 
> Aleks

> From 37ba93a5f29200e34cfb31aacf93ddcd80fca2ab Mon Sep 17 00:00:00 2001
> From: Aleksandar Lazić 
> Date: Fri, 15 May 2020 22:58:30 +0200
> Subject: [PATCH] DOC/MINOR: halog: Add long help info for ic flag
> 
> Add missing long help text for the ic (ip count) flag
> ---

Thanks, applied!

-- 
William Lallemand



[PATCH] DOC/MINOR: halog: Add long help info for ic flag

2020-05-15 Thread Aleksandar Lazic
Hi.

attached a patch for halog.

Regards

Aleks
From 37ba93a5f29200e34cfb31aacf93ddcd80fca2ab Mon Sep 17 00:00:00 2001
From: Aleksandar Lazić 
Date: Fri, 15 May 2020 22:58:30 +0200
Subject: [PATCH] DOC/MINOR: halog: Add long help info for ic flag

Add missing long help text for the ic (ip count) flag
---
 contrib/halog/halog.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/halog/halog.c b/contrib/halog/halog.c
index 91e2af357..3c785fc09 100644
--- a/contrib/halog/halog.c
+++ b/contrib/halog/halog.c
@@ -190,6 +190,7 @@ void help()
 	   " -cc   output number of requests per cookie code (2 chars)\n"
 	   " -tc   output number of requests per termination code (2 chars)\n"
 	   " -srv  output statistics per server (time, requests, errors)\n"
+	   " -ic   output statistics per ip count (time, requests, errors)\n"
 	   " -u*   output statistics per URL (time, requests, errors)\n"
 	   "   Additional characters indicate the output sorting key :\n"
 	   "   -u : by URL, -uc : request count, -ue : error count\n"
-- 
2.20.1



I provide backlink services to help you with your google SEO

2020-04-23 Thread Muhammad Mujtabha
Hi
I hope you are well and good
provide backlink services to help you with your google SEO rankings. I have
access to many high quality sites related to your business. Is this
something that would interest you? If so I would be delighted to provide
you with some more information.
if you want to promot your site and want high of DA,traffic,link, i am the
best person I can do it through paid sites
I look forward to hearing from you


Websites  DA PA Moz Rank  IP Address
https://www.news9.com 71 59 5.9 151.101.2.133
https://www.wfxg.com 60 49 4.9 151.101.2.133
https://www.wrcbtv.com 75 58 5.8 151.101.2.133
http://www.wfmj.com/ 69 57 5.7 151.101.2.133


Re: [PATCH] CLEANUP: h2: Help static analyzers understand the list's end marker

2020-03-19 Thread Tim Düsterhus
Willy,

Am 19.03.20 um 15:55 schrieb Willy Tarreau:
> Actually I'm pretty sure that I did it this way precisely for performance
> reasons: avoid repeatedly checking a pointer for half of the headers which
> are pseudo headers (method, scheme, authority, path just for the request).
> 
> It's perfectly possible that the difference is negligible though, but if
> it's not, I'm sorry but I'll favor performance over static analysers'
> own pleasure. So this one will definitely deserve a test.

Yes, resolving the performance <-> static analyzer trade-off in favor of
performance is acceptable to me.

Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: h2: Help static analyzers understand the list's end marker

2020-03-19 Thread Willy Tarreau
Hi Tim,

On Thu, Mar 19, 2020 at 03:15:24PM +0100, Tim Duesterhus wrote:
> Willy,
> 
> I know you dislike adjusting code to please static analyzers, but I'd argue
> that using the new IST_NULL + isttest() combination is easier to understand 
> for humans as well. A simple .ptr == NULL check might also be slightly faster
> compared to isteq() with an empty string?
> 
> I have verified that reg-tests pass, but as this is deep within the internals
> please check this carefully.
> 
> Best regards
> Tim Düsterhus
> 
> Apply with `git am --scissors` to automatically cut the commit message.
> 
> -- >8 --
> Clang Static Analyzer (scan-build) was having a hard time understanding that
> `hpack_encode_header` would never be called with a garbage value for `v`.
> 
> It failed to detect that `if (isteq(n, ist(""))) break;` would exit the
> loop in all cases. By setting `n` to `IST_NULL` and checking with
> `isttest()` it no longer complains.

Actually I'm pretty sure that I did it this way precisely for performance
reasons: avoid repeatedly checking a pointer for half of the headers which
are pseudo headers (method, scheme, authority, path just for the request).

It's perfectly possible that the difference is negligible though, but if
it's not, I'm sorry but I'll favor performance over static analysers'
own pleasure. So this one will definitely deserve a test.

Thanks,
Willy



[PATCH] CLEANUP: h2: Help static analyzers understand the list's end marker

2020-03-19 Thread Tim Duesterhus
Willy,

I know you dislike adjusting code to please static analyzers, but I'd argue
that using the new IST_NULL + isttest() combination is easier to understand 
for humans as well. A simple .ptr == NULL check might also be slightly faster
compared to isteq() with an empty string?

I have verified that reg-tests pass, but as this is deep within the internals
please check this carefully.

Best regards
Tim Düsterhus

Apply with `git am --scissors` to automatically cut the commit message.

-- >8 --
Clang Static Analyzer (scan-build) was having a hard time understanding that
`hpack_encode_header` would never be called with a garbage value for `v`.

It failed to detect that `if (isteq(n, ist(""))) break;` would exit the
loop in all cases. By setting `n` to `IST_NULL` and checking with
`isttest()` it no longer complains.

The check must be moved to the beginning of the loop to prevent a NULL
pointer dereference for the pseudo-header skip.
---
 src/mux_h2.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/src/mux_h2.c b/src/mux_h2.c
index 013ef86f8..b353c4d7c 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -4617,7 +4617,7 @@ static size_t h2s_frt_make_resp_headers(struct h2s *h2s, 
struct htx *htx)
}
 
/* marker for end of headers */
-   list[hdr].n = ist("");
+   list[hdr].n = IST_NULL;
 
if (h2s->status == 204 || h2s->status == 304) {
/* no contents, claim c-len is present and set to zero */
@@ -4660,6 +4660,9 @@ static size_t h2s_frt_make_resp_headers(struct h2s *h2s, 
struct htx *htx)
 
/* encode all headers, stop at empty name */
for (hdr = 0; hdr < sizeof(list)/sizeof(list[0]); hdr++) {
+   if (!isttest(list[hdr].n))
+   break; // end
+
/* these ones do not exist in H2 and must be dropped. */
if (isteq(list[hdr].n, ist("connection")) ||
isteq(list[hdr].n, ist("proxy-connection")) ||
@@ -4672,9 +4675,6 @@ static size_t h2s_frt_make_resp_headers(struct h2s *h2s, 
struct htx *htx)
if (*(list[hdr].n.ptr) == ':')
continue;
 
-   if (isteq(list[hdr].n, ist("")))
-   break; // end
-
if (!hpack_encode_header(, list[hdr].n, list[hdr].v)) {
/* output full */
if (b_space_wraps(mbuf))
@@ -4870,7 +4870,7 @@ static size_t h2s_bck_make_req_headers(struct h2s *h2s, 
struct htx *htx)
}
 
/* marker for end of headers */
-   list[hdr].n = ist("");
+   list[hdr].n = IST_NULL;
 
mbuf = br_tail(h2c->mbuf);
  retry:
@@ -5007,6 +5007,9 @@ static size_t h2s_bck_make_req_headers(struct h2s *h2s, 
struct htx *htx)
struct ist n = list[hdr].n;
struct ist v = list[hdr].v;
 
+   if (!isttest(n))
+   break; // end
+
/* these ones do not exist in H2 and must be dropped. */
if (isteq(n, ist("connection")) ||
(auth.len && isteq(n, ist("host"))) ||
@@ -5030,9 +5033,6 @@ static size_t h2s_bck_make_req_headers(struct h2s *h2s, 
struct htx *htx)
if (*(n.ptr) == ':')
continue;
 
-   if (isteq(n, ist("")))
-   break; // end
-
if (!hpack_encode_header(, n, v)) {
/* output full */
if (b_space_wraps(mbuf))
-- 
2.25.2




Re: haproxy.com: Let Google Help Your Customers Find You!!

2020-03-11 Thread Catherine Hough
 Hi,

How are you? Hope you are doing well.

Let me first start with your website in which I noticed something
interesting while going through it, haproxy.com.

It's obvious that you have used *Adwords marketing* to promote your
business in the past.

There is no good news if your website is currently ranked low in search
engine results because; you need to increase the *website’s rank* by
implementing a new SEO approach. The most necessary factor in SEO is search
engines’ ranking factors like *keywords & content, engagement & traffic*,
or domain-level brand metrics to be assured your website is seen as
relevant and popular by search engines.

Of course every website wants to come top on the search engines. So, as I
found a number of SEO issues on your website like broken links, page speed
issue, HTML validation errors, images with no ALT text on your website,
that's distracting your website to get any traffic.

We will be fixing those problems and promote your website, products or
services through *engaging contents* on *relevant places* on the web (read,
social media). After resolving the issues, I *guarantee* you will see a
drastic change in ranking of your search results and increase in traffics.

I want to inform you something that this payment is *one time*, no needs to
pay for the Adwords *every month*.

Let me know if you are interested in our work and offers so that I will be
sending you a no obligation *audit report* and quote.

I hope to get a positive response from you and take our partnership way
ahead to the future.

Best Regards,

*Catherine Hough * |SEO Consultant

……..

I have a well prepared free website audit report for your website. If you
are interested I can show you the report. It would be grateful for me to
send you our package, pricing and past work details, if you would like to
assess our work.


Re: [HELP] slim is being set to 1

2020-01-07 Thread Aleksandar Lazic

Hi.

On 07.01.20 07:27, Shah, Aman wrote:

Hello all,

We have an HAProxy server and we are getting some odd values. The value of slim
is reported as 1 for most of the domains. The value is not being set anywhere, and
maxconn is set to 2000. Can you please help me figure out the actual reason why
the slim value is 1? Do we have to enforce it manually? How is the slim value
actually set?


Here's the full output from haproxy -vv:

root@haproxy01:~# haproxy -vv
HA-Proxy version 1.8.8-1ubuntu0.1 2018/05/29
Copyright 2000-2018 Willy Tarreau 

Build options :
   TARGET  = linux2628
   CPU = generic
   CC  = gcc
   CFLAGS  = -g -O2 -fdebug-prefix-map=/build/haproxy-VmwZ9X/haproxy-1.8.8=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2
   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 
USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_NS=1


Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
Running on OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")

Built with network namespace support.

Available polling systems :
   epoll : pref=300,  test result OK
    poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
     [SPOE] spoe
     [COMP] compression
     [TRACE] trace


And also one of the sample output of haproxy stats for the reference:


# haproxyctl show stat
# 
pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses, 



abc,BACKEND,0,0,0,1,1,1,216,1407,0,0,,0,0,0,0,UP,1,1,0,,0,6783,0,,1,5,0,,1,,1,0,,10,1,0,0,0,01,0,0,0,0,0,0,3086,,,0,0,1,1,,http,roundrobin,,, 
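
To pick the slim value out of that CSV output programmatically, here is a small Python sketch. The header is truncated after stot for brevity (the full header continues with the remaining counters), and the row is the matching prefix of the stats line above:

```python
import csv
import io

# First columns of the "show stat" CSV header, truncated for brevity.
header = "pxname,svname,qcur,qmax,scur,smax,slim,stot".split(",")
row = "abc,BACKEND,0,0,0,1,1,1"  # matching prefix of the stats row above

# Parse the row and map column names to values.
values = next(csv.reader(io.StringIO(row)))
stats = dict(zip(header, values))
print(stats["slim"])  # -> 1
```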


Have you set maxconn or fullconn anywhere? These are the parameters which 
are used for slim.


http://git.haproxy.org/?p=haproxy-1.8.git;a=search;h=HEAD;st=grep;s=ST_F_SLIM

What's the output of `show info`?
Can you share a minimal config?
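
As a sketch of how those parameters relate to the reported limit (names and values here are illustrative, not from the poster's config):

```
# Illustrative only -- not the poster's actual config.
backend app
    # "slim" on the backend row reflects fullconn when it is set
    fullconn 1000
    # "slim" on a server row reflects that server's maxconn
    server app1 192.168.0.10:80 maxconn 200 check
```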


Thanks,
Aman


Regards
Aleks



[HELP] slim is being set to 1

2020-01-06 Thread Shah, Aman

Hello all,

We have an HAProxy server and we are seeing some odd values: slim is 
reported as 1 for most of the domains. The value is not set anywhere, 
and maxconn is set to 2000. Can you please help me figure out the 
actual reason the slim value comes out as 1? Do we have to enforce it 
manually? How is the slim value actually set?


Here's the full output from haproxy -vv:

root@haproxy01:~# haproxy -vv
HA-Proxy version 1.8.8-1ubuntu0.1 2018/05/29
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 
-fdebug-prefix-map=/build/haproxy-VmwZ9X/haproxy-1.8.8=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 
USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_NS=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 
200


Built with OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
Running on OpenSSL version : OpenSSL 1.1.0g  2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with network namespace support.

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace


And also one of the sample output of haproxy stats for the reference:


# haproxyctl show stat
# 
pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,


abc,BACKEND,0,0,0,1,1,1,216,1407,0,0,,0,0,0,0,UP,1,1,0,,0,6783,0,,1,5,0,,1,,1,0,,10,1,0,0,0,01,0,0,0,0,0,0,3086,,,0,0,1,1,,http,roundrobin,,,


Thanks,
Aman



Re: Re: Help, URL does not work with CHINESE charactor?

2019-12-25 Thread Willy Tarreau
Hi!

On Tue, Dec 24, 2019 at 07:59:25PM +0800, JWD wrote:
> It works with "option accept-invalid-http-request".
> Thanks a lot.
> 
> Yes, it does not work with IE only.
> Other web browser is fine.

I already noticed this behaviour with very old versions of IE in the past:
it would do this on redirects. If the server sent binary characters in a
redirect, IE would follow them without re-encoding them. I didn't know
that more recent versions still make this mistake.

I strongly suspect that your application is the one producing such broken
links and that IE blindly follows them. You really must have a look there
and fix the application. Each time you enable "option accept-invalid-something"
it must be understood as a final warning for something doomed to fail sooner
or later.
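
As a quick way to verify what a correctly encoded link should look like, here is a small Python sketch; the sample segment is taken from the URLs in this thread:

```python
from urllib.parse import quote, unquote

# Percent-encode a UTF-8 path segment the way a well-behaved client
# (or the application generating the links) should.
segment = "改进项目"  # sample segment from the problematic URL
encoded = quote(segment, safe="")
print(encoded.lower())  # %e6%94%b9%e8%bf%9b%e9%a1%b9%e7%9b%ae

# Decoding round-trips back to the original characters.
assert unquote(encoded) == segment
```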

Willy



Re: Re: Help, URL does not work with CHINESE charactor?

2019-12-24 Thread JWD
It works with "option accept-invalid-http-request".
Thanks a lot.

Yes, it does not work with IE only.
Other web browser is fine.




JWD

From: Lukas Tribus
Date: 2019-12-24 19:31
To: JWD
CC: Aleksandar Lazic; haproxy
Subject: Re: Re: Help, URL does not work with CHINESE charactor?
On Tue, 24 Dec 2019 at 11:46, JWD  wrote:
>
> I have tried version 1.7,1.8,2.0,2.1, all the same.
>
> Config:
> frontend www
> acl acl-app hdr(host) -i sharepoint.domain.com
> use_backend app if acl-app
> backend
> cookie HA-Server insert indirect nocache
> server app 192.168.129.66:80 cookie app check inter 30s
>
> Log:
> Dec 24 18:37:01 localhost haproxy[20108]: 192.168.134.81 - - 
> [24/Dec/2019:10:37:01 +] "" 400 0 "" "" 2423 066 "www" "www" 
> "" -1 -1 -1 -1 0 CR-- 2 2 0 0 0 0 0 "" "" "" ""
>
> # echo "show errors" | socat unix-connect:/etc/haproxy/hastats stdio
> Total events captured on [24/Dec/2019:10:13:18.909] : 3
>
> [24/Dec/2019:10:07:53.573] frontend www (#2): invalid request
>   backend  (#-1), server  (#-1), event #2
>   src 192.168.134.81:3400, session #103, session flags 0x0080
>   HTTP msg state MSG_RQURI(4), msg flags 0x, tx flags 0x
>   HTTP chunk len 0 bytes, HTTP body len 0 bytes
>   buffer flags 0x20808002, out 0 bytes, total 566 bytes
>   pending 566 bytes, wrapping at 16384, error at position 109:
>
>   0  GET 
> /CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefau
>   00070+ lt.aspx?destLink=/CorWork/ProjectShare/\xB8\xC4\xBD\xF8\xCF\xEE\xC4
>   00116+ \xBF/ECM\xD0\xC2\xB9\xA6\xC4\xDC HTTP/1.1\r\n

Those are invalid requests; the URL must be encoded. Does IE really
still send this crap after all these years?

You can try ignoring this with:
option accept-invalid-http-request

But it does not ignore everything. See:

https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-option%20accept-invalid-http-request
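
For reference, a minimal frontend sketch showing where the option goes (names are placeholders, not from the poster's config):

```
frontend www
    bind :80
    # Relax URI validation for legacy clients (e.g. IE sending raw,
    # non-percent-encoded bytes). A stop-gap only; fix the application.
    option accept-invalid-http-request
    default_backend app
```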



Lukas

Re: Re: Help, URL does not work with CHINESE charactor?

2019-12-24 Thread Lukas Tribus
On Tue, 24 Dec 2019 at 11:46, JWD  wrote:
>
> I have tried version 1.7,1.8,2.0,2.1, all the same.
>
> Config:
> frontend www
> acl acl-app hdr(host) -i sharepoint.domain.com
> use_backend app if acl-app
> backend
> cookie HA-Server insert indirect nocache
> server app 192.168.129.66:80 cookie app check inter 30s
>
> Log:
> Dec 24 18:37:01 localhost haproxy[20108]: 192.168.134.81 - - 
> [24/Dec/2019:10:37:01 +] "" 400 0 "" "" 2423 066 "www" "www" 
> "" -1 -1 -1 -1 0 CR-- 2 2 0 0 0 0 0 "" "" "" ""
>
> # echo "show errors" | socat unix-connect:/etc/haproxy/hastats stdio
> Total events captured on [24/Dec/2019:10:13:18.909] : 3
>
> [24/Dec/2019:10:07:53.573] frontend www (#2): invalid request
>   backend  (#-1), server  (#-1), event #2
>   src 192.168.134.81:3400, session #103, session flags 0x0080
>   HTTP msg state MSG_RQURI(4), msg flags 0x, tx flags 0x
>   HTTP chunk len 0 bytes, HTTP body len 0 bytes
>   buffer flags 0x20808002, out 0 bytes, total 566 bytes
>   pending 566 bytes, wrapping at 16384, error at position 109:
>
>   0  GET 
> /CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefau
>   00070+ lt.aspx?destLink=/CorWork/ProjectShare/\xB8\xC4\xBD\xF8\xCF\xEE\xC4
>   00116+ \xBF/ECM\xD0\xC2\xB9\xA6\xC4\xDC HTTP/1.1\r\n

Those are invalid requests; the URL must be encoded. Does IE really
still send this crap after all these years?

You can try ignoring this with:
option accept-invalid-http-request

But it does not ignore everything. See:

https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-option%20accept-invalid-http-request



Lukas



Re: Re: Help, URL does not work with CHINESE charactor?

2019-12-24 Thread JWD
I have tried version 1.7,1.8,2.0,2.1, all the same.

Config:
frontend www 
acl acl-app hdr(host) -i sharepoint.domain.com
use_backend app if acl-app 
backend
cookie HA-Server insert indirect nocache
server app 192.168.129.66:80 cookie app check inter 30s

Log:
Dec 24 18:37:01 localhost haproxy[20108]: 192.168.134.81 - - 
[24/Dec/2019:10:37:01 +] "" 400 0 "" "" 2423 066 "www" "www" 
"" -1 -1 -1 -1 0 CR-- 2 2 0 0 0 0 0 "" "" "" "" 

# echo "show errors" | socat unix-connect:/etc/haproxy/hastats stdio
Total events captured on [24/Dec/2019:10:13:18.909] : 3

[24/Dec/2019:10:07:53.573] frontend www (#2): invalid request
  backend  (#-1), server  (#-1), event #2
  src 192.168.134.81:3400, session #103, session flags 0x0080
  HTTP msg state MSG_RQURI(4), msg flags 0x, tx flags 0x
  HTTP chunk len 0 bytes, HTTP body len 0 bytes
  buffer flags 0x20808002, out 0 bytes, total 566 bytes
  pending 566 bytes, wrapping at 16384, error at position 109:

  0  GET /CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefau
  00070+ lt.aspx?destLink=/CorWork/ProjectShare/\xB8\xC4\xBD\xF8\xCF\xEE\xC4
  00116+ \xBF/ECM\xD0\xC2\xB9\xA6\xC4\xDC HTTP/1.1\r\n
  00138  Accept: text/html, application/xhtml+xml, image/jxr, */*\r\n
  00196  Accept-Language: zh-CN\r\n
  00220  User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; Touch; r
  00290+ v:11.0) like Gecko\r\n
  00310  Accept-Encoding: gzip, deflate\r\n
  00342  Host: app.td-tech.com\r\n
  00365  Connection: Keep-Alive\r\n
  00389  Cookie: HA-Server=app; WSS_FullScreenMode=false; stsSyncAppName=Outloo
  00459+ k; stsSyncIconPath=%2F%5Flayouts%2F15%2Fimages%2Fmenuoutl%2Egif; Ribbo
  00529+ n.Document=973830|-1|0|-533254637\r\n
  00564  \r\n





JWD

From: Aleksandar Lazic
Date: 2019-12-24 17:08
To: JWD; haproxy
Subject: Re: Help, URL does not work with CHINESE charactor?
Hi JWD.

On 24.12.19 02:53, JWD wrote:
> Hi, all
> I have a backend, which is a SharePoint website.
> If the URL includes a CHINESE character, it returns an HTTP 400 ERROR
> from IE 11 with haproxy.
> But it is OK without haproxy.
> Can anyone help me?
> Thanks.
> This cannot be accessed; it returns an HTTP 400 ERROR:
> http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment 
> /DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/改进项目/知识管理一期 
> 项目/年度评审会
> This is OK if the URL is encoded:
> http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/%e6%94%b9%e8%bf%9b%e9%a1%b9%e7%9b%ae/%e7%9f%a5%e8%af%86%e7%ae%a1%e7%90%86%e4%b8%80%e6%9c%9f%e9%a1%b9%e7%9b%ae/%e5%b9%b4%e5%ba%a6%e8%af%84%e5%ae%a1%e4%bc%9a
>  
> 

Which haproxy version do you use?
haproxy -vv

What's in your haproxy log?
What's your haproxy config, shorten for the use case.

My assumption is that you try to use something like this

`option httpchk GET \r\nHost:\ sharepoint.domain.com\r\rn... `

This option sends the URL 1:1 as written in the config; no conversion will be 
done.

Maybe in the future there will be a function `url_enc`, similar to the url_dec 
converter, but for now you will need to encode the URL as you have done.

I have created a feature request for the url_enc function.

https://www.mail-archive.com/haproxy@formilux.org/msg35783.html

> JWD

Regards
Aleks

Re: Help, URL does not work with CHINESE charactor?

2019-12-24 Thread Aleksandar Lazic

Hi JWD.

On 24.12.19 02:53, JWD wrote:

Hi, all
I have a backend, which is a SharePoint website.
If the URL includes a CHINESE character, it returns an HTTP 400 ERROR from 
IE 11 with haproxy.
But it is OK without haproxy.
Can anyone help me?
Thanks.
This cannot be accessed; it returns an HTTP 400 ERROR:
http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment 
/DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/改进项目/知识管理一期 
项目/年度评审会

This is OK if the URL is encoded:
http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/%e6%94%b9%e8%bf%9b%e9%a1%b9%e7%9b%ae/%e7%9f%a5%e8%af%86%e7%ae%a1%e7%90%86%e4%b8%80%e6%9c%9f%e9%a1%b9%e7%9b%ae/%e5%b9%b4%e5%ba%a6%e8%af%84%e5%ae%a1%e4%bc%9a 



Which haproxy version do you use?
haproxy -vv

What's in your haproxy log?
What's your haproxy config, shorten for the use case.

My assumption is that you try to use something like this

`option httpchk GET \r\nHost:\ sharepoint.domain.com\r\rn... `

This option sends the URL 1:1 as written in the config; no conversion will be 
done.

Maybe in the future there will be a function `url_enc`, similar to the url_dec 
converter, but for now you will need to encode the URL as you have done.


I have created a feature request for the url_enc function.

https://www.mail-archive.com/haproxy@formilux.org/msg35783.html


JWD


Regards
Aleks



Help, URL does not work with CHINESE charactor?

2019-12-23 Thread JWD
Hi, all

I have a backend, which is a SharePoint website.

If the URL includes a CHINESE character, it returns an HTTP 400 ERROR 
from IE 11 with haproxy.
But it is OK without haproxy.

Can anyone help me?
Thanks.

This cannot be accessed; it returns an HTTP 400 ERROR:
http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/改进项目/知识管理一期项目/年度评审会

This is OK if the URL is encoded:
http://sharepoint.domain.com/CorWork/_layouts/15/TD.ECM.DoucmentDepartment/DepartmentFileDefault.aspx?destLink=/CorWork/ProjectShare/%e6%94%b9%e8%bf%9b%e9%a1%b9%e7%9b%ae/%e7%9f%a5%e8%af%86%e7%ae%a1%e7%90%86%e4%b8%80%e6%9c%9f%e9%a1%b9%e7%9b%ae/%e5%b9%b4%e5%ba%a6%e8%af%84%e5%ae%a1%e4%bc%9a




JWD



RE: Help required ehhe

2019-08-22 Thread Andy.ANTHOINE.ext
Thanks a lot man, i'll take a look at that ! really appreciate it :) i'll come 
back to you if needed, but the help was great! :)

-Message d'origine-
De : Aleksandar Lazic [mailto:al-hapr...@none.at]
Envoyé : vendredi 23 août 2019 11:14
À : ANTHOINE Andy (EXT) 
Cc : haproxy@formilux.org
Objet : Re: Help required ehhe



Am 23-08-2019 02:04, schrieb andy.anthoine@opt.nc:
> REALLY SORRY ABOUT THAT lo
>
> I'll copy paste my bad :D
>
> admin@sld-loadb-01-prd-cit:~$ curl -v -o /dev/null --max-time 5
> --http1.0 http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time
> 5 --http1.0 http://10.154.2.29:8080/eoc/login
> -bash: curl: command not found
>
> Ok, i'll separate the commands next time :)
>
> The lb i m working on is the production one, i'll check, but can't
> really do anything like installing on it.

Okay.

> Telnet works :) if it's enough for you ?

should work.

> admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8080 Trying
> 10.154.2.29...
> Connected to 10.154.2.29.
> Escape character is '^]'.
>
>
>> So on the back ends are the check ports reachable and you get a 200
>> back.
>
> server sli-ecmapp-01-prd-cit 10.154.2.29:8443 server
> sli-ecmapp-02-prd-cit 10.154.2.31:8443
>
> admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8443 Trying
> 10.154.2.29...
> telnet: Unable to connect to remote host: No route to host

That's the problem!

The test was wrong: you telnet to 8443, but the check port is 8080. Try 
this.

echo -e 'GET /iws/ HTTP/1.0\n\r\n\r'|telnet 10.154.2.29 8080

> admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.31 8443 Trying
> 10.154.2.31...
> Connected to 10.154.2.31.
> Escape character is '^]'.
> ^CConnection closed by foreign host.

That's another one.

echo -e 'GET /iws/ HTTP/1.0\n\r\n\r'|telnet 10.154.2.31 8080
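
The same manual check can be scripted without telnet. Here is a Python sketch (host, port and path are the values from this thread) that sends a minimal HTTP/1.0 request and returns the response status line:

```python
import socket

def http10_status(host: str, port: int, path: str = "/", timeout: float = 5.0) -> str:
    """Connect, send a minimal HTTP/1.0 GET and return the response
    status line (e.g. 'HTTP/1.0 200 OK'). Raises ConnectionRefusedError
    in the 'Connection refused' case seen in the health checks."""
    request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        data = b""
        while b"\r\n" not in data:  # read until the first line is complete
            chunk = sock.recv(1024)
            if not chunk:
                break
            data += chunk
    return data.split(b"\r\n", 1)[0].decode(errors="replace")

# e.g. http10_status("10.154.2.29", 8080, "/iws/")
```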

> Seems like i got one of my answer

Yes looks like.

>> Is there any firewall in between the LB & BE?
> Yes there is one

Are both ports (8080 & 8443) open from the LB to both BEs?

Looks like not a haproxy issue.

Regards
Aleks


> -Message d'origine-
> De : Aleksandar Lazic [mailto:al-hapr...@none.at] Envoyé : vendredi 23
> août 2019 10:57 À : ANTHOINE Andy (EXT)  Cc
> : haproxy@formilux.org Objet : Re: Help required ehhe
>
> Hi.
>
> Am 23-08-2019 01:43, schrieb andy.anthoine@opt.nc:
>
>> Hi,
>>
>> I can't launch the command from the LB
>
> I love screenshots! It's so easy to copy paste from them 8-/
>
> do you have any other tool which you can use to check if the
> connection is possible from LB to Backend?
>
> nc?
> telnet?
> ...?
>
>> But from the server he is what i get
>>
>> [root@sli-ecmapp2-prd ~]# curl -v -o /dev/null --max-time 5 --http1.0
>> http://10.154.2.29:8080/iws/  curl -v -o /dev/null --max-time 5
>> --http1.0 http://10.154.2.29:8080/eoc/login
>
> Please one curl AFTER the other, the next time, just separate the
> commands with ;.
>
> curl -v -o /dev/null --max-time 5 --http1.0
> http://10.154.2.29:8080/iws/ ; curl -v -o /dev/null --max-time 5
> --http1.0 http://10.154.2.29:8080/eoc/login
>
>> * About to connect() to 10.154.2.29 port 8080 (#0)
>> *   Trying 10.154.2.29...
> ...
>> 0 00 00 0  0  0 --:--:-- --:--:-- --:--:--
>>0* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#0)
>>
>>> GET /iws/ HTTP/1.0
>>> User-Agent: curl/7.29.0
>>> Host: 10.154.2.29:8080
>>> Accept: */*
>>>
>>
>> < HTTP/1.1 200 OK
>> < Server: Apache-Coyote/1.1
>
> [snipp]
>
>> curl: (28) Resolving timed out after 5515 milliseconds
>
> That looks strange, but maybe not the issue for now.
>
>> * About to connect() to 10.154.2.29 port 8080 (#2)
>> *   Trying 10.154.2.29...
>> * Connected to 10.154.2.29 (10.154.2.29) port 8080 (#2)
>>
>>> GET /eoc/login HTTP/1.0
>>> User-Agent: curl/7.29.0
>>> Host: 10.154.2.29:8080
>>> Accept: */*
>>>
>>
>> < HTTP/1.1 200 OK
>> < Server: Apache-Coyote/1.1
>
> [snipp]
>
>> * Closing connection 2
>
> So on the back ends are the check ports reachable and you get a 200
> back.
> Is there any firewall in between the LB & BE?
>
> What's your haproxy version?
>
> haproxy -vv
>
>
>> Best regards
>>
>> Andy
>>
>> -Message d'origine-
>> De : Aleksandar Lazic [mailto:al-hapr...@none.at] Envoyé : vendredi
>> 23 août 2019 10:33 À : ANTHOINE Andy (EXT) 
>> Cc
>> : haproxy@formilux.org Objet : Re: Help required ehhe
>>
>> Am 23-08-2019 00:49, schrieb andy.anthoine@opt.nc:
>>
>>> Hi,

Re: Help required ehhe

2019-08-22 Thread Aleksandar Lazic




Am 23-08-2019 02:04, schrieb andy.anthoine@opt.nc:

REALLY SORRY ABOUT THAT lo

I'll copy paste my bad :D

admin@sld-loadb-01-prd-cit:~$ curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time
5 --http1.0 http://10.154.2.29:8080/eoc/login
-bash: curl: command not found

Ok, i'll separate the commands next time :)

The lb i m working on is the production one, i'll check, but can't
really do anything like installing on it.


Okay.


Telnet works :) if it's enough for you ?


should work.


admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8080
Trying 10.154.2.29...
Connected to 10.154.2.29.
Escape character is '^]'.


So on the back ends are the check ports reachable and you get a 200 
back.


server sli-ecmapp-01-prd-cit 10.154.2.29:8443
server sli-ecmapp-02-prd-cit 10.154.2.31:8443

admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8443
Trying 10.154.2.29...
telnet: Unable to connect to remote host: No route to host


That's the problem!

The test was wrong: you telnet to 8443, but the check port is 8080.
Try this.

echo -e 'GET /iws/ HTTP/1.0\n\r\n\r'|telnet 10.154.2.29 8080


admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.31 8443
Trying 10.154.2.31...
Connected to 10.154.2.31.
Escape character is '^]'.
^CConnection closed by foreign host.


That's another one.

echo -e 'GET /iws/ HTTP/1.0\n\r\n\r'|telnet 10.154.2.31 8080


Seems like i got one of my answer


Yes looks like.


Is there any firewall in between the LB & BE?

Yes there is one


Are both ports (8080 & 8443) open from the LB to both BEs?

Looks like not a haproxy issue.

Regards
Aleks



-Message d'origine-
De : Aleksandar Lazic [mailto:al-hapr...@none.at]
Envoyé : vendredi 23 août 2019 10:57
À : ANTHOINE Andy (EXT) 
Cc : haproxy@formilux.org
Objet : Re: Help required ehhe

Hi.

Am 23-08-2019 01:43, schrieb andy.anthoine@opt.nc:


Hi,

I can't launch the command from the LB


I love screenshots! It's so easy to copy paste from them 8-/

do you have any other tool which you can use to check if the
connection is possible from LB to Backend?

nc?
telnet?
...?


But from the server he is what i get

[root@sli-ecmapp2-prd ~]# curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/iws/  curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/eoc/login


Please one curl AFTER the other, the next time, just separate the
commands with ;.

curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/iws/ ; curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/eoc/login


* About to connect() to 10.154.2.29 port 8080 (#0)
*   Trying 10.154.2.29...

...

0 00 00 0  0  0 --:--:-- --:--:-- --:--:--
   0* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#0)


GET /iws/ HTTP/1.0
User-Agent: curl/7.29.0
Host: 10.154.2.29:8080
Accept: */*



< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1


[snipp]


curl: (28) Resolving timed out after 5515 milliseconds


That looks strange, but maybe not the issue for now.


* About to connect() to 10.154.2.29 port 8080 (#2)
*   Trying 10.154.2.29...
* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#2)


GET /eoc/login HTTP/1.0
User-Agent: curl/7.29.0
Host: 10.154.2.29:8080
Accept: */*



< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1


[snipp]


* Closing connection 2


So on the back ends are the check ports reachable and you get a 200 
back.

Is there any firewall in between the LB & BE?

What's your haproxy version?

haproxy -vv



Best regards

Andy

-Message d'origine-
De : Aleksandar Lazic [mailto:al-hapr...@none.at] Envoyé : vendredi 23
août 2019 10:33 À : ANTHOINE Andy (EXT)  Cc
: haproxy@formilux.org Objet : Re: Help required ehhe

Am 23-08-2019 00:49, schrieb andy.anthoine@opt.nc:

Hi,

Ehhe not an external ip don't worry, or i would have deleted it hehe

No change, the problem seems to be there since before i m here, and
they now need it to be fixed

I don't see anything in particular in the logs, beside that kind of
thing which is normal since the server is rebooted at this time ;)

Aug 22 05:00:23 sld-loadb-01-prd-cit local1.alert haproxy[2244]:
Server ecmapp-prd-be-8443/sli-ecmapp-01-prd-cit is DOWN, reason:
Layer4 connection problem, info: "Connection refused at step 1 of
tcp-check (connect port 8080)", check duration: 0ms. 1 active and 0
backup servers left. 62 sessions active, 0 requeued, 0 remaining in
queue.

This looks to me like the loadbalancer can't connect to the backend
check port; is something listening on the backend server on port 8080?
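
For context, the failing check in the log ("step 1 of tcp-check (connect port 8080)") corresponds to a check block along these lines; this is a sketch reconstructed from the log message, and the real config may differ:

```
backend ecmapp-prd-be-8443
    option tcp-check
    tcp-check connect port 8080
    server sli-ecmapp-01-prd-cit 10.154.2.29:8443 check
    server sli-ecmapp-02-prd-cit 10.154.2.31:8443 check
```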

Please can you try the following command from the loadbalancer.

curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/eoc/login

The same for 10.154.2.31


What do you mean by that ?


I have shorten the config to reduce the size of 

RE: Help required ehhe

2019-08-22 Thread Andy.ANTHOINE.ext
REALLY SORRY ABOUT THAT lo

I'll copy paste my bad :D

admin@sld-loadb-01-prd-cit:~$ curl -v -o /dev/null --max-time 5 --http1.0 
http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time 5 --http1.0 
http://10.154.2.29:8080/eoc/login
-bash: curl: command not found

Ok, i'll separate the commands next time :)

The lb i m working on is the production one, i'll check, but can't really do 
anything like installing on it.

Telnet works :) if it's enough for you ?

admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8080
Trying 10.154.2.29...
Connected to 10.154.2.29.
Escape character is '^]'.


So on the back ends are the check ports reachable and you get a 200 back.

server sli-ecmapp-01-prd-cit 10.154.2.29:8443
server sli-ecmapp-02-prd-cit 10.154.2.31:8443

admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.29 8443
Trying 10.154.2.29...
telnet: Unable to connect to remote host: No route to host
admin@sld-loadb-01-prd-cit:~$ telnet 10.154.2.31 8443
Trying 10.154.2.31...
Connected to 10.154.2.31.
Escape character is '^]'.
^CConnection closed by foreign host.

Seems like i got one of my answer


> Is there any firewall in between the LB & BE?
Yes there is one




-Message d'origine-
De : Aleksandar Lazic [mailto:al-hapr...@none.at]
Envoyé : vendredi 23 août 2019 10:57
À : ANTHOINE Andy (EXT) 
Cc : haproxy@formilux.org
Objet : Re: Help required ehhe

Hi.

Am 23-08-2019 01:43, schrieb andy.anthoine@opt.nc:

> Hi,
>
> I can't launch the command from the LB

I love screenshots! It's so easy to copy paste from them 8-/

do you have any other tool which you can use to check if the connection is 
possible from LB to Backend?

nc?
telnet?
...?

> But from the server he is what i get
>
> [root@sli-ecmapp2-prd ~]# curl -v -o /dev/null --max-time 5 --http1.0
> http://10.154.2.29:8080/iws/  curl -v -o /dev/null --max-time 5
> --http1.0 http://10.154.2.29:8080/eoc/login

Please one curl AFTER the other, the next time, just separate the commands with 
;.

curl -v -o /dev/null --max-time 5 --http1.0 http://10.154.2.29:8080/iws/ ; curl 
-v -o /dev/null --max-time 5 --http1.0 http://10.154.2.29:8080/eoc/login

> * About to connect() to 10.154.2.29 port 8080 (#0)
> *   Trying 10.154.2.29...
...
> 0 00 00 0  0  0 --:--:-- --:--:-- --:--:--
>0* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#0)
>
>> GET /iws/ HTTP/1.0
>> User-Agent: curl/7.29.0
>> Host: 10.154.2.29:8080
>> Accept: */*
>>
>
> < HTTP/1.1 200 OK
> < Server: Apache-Coyote/1.1

[snipp]

> curl: (28) Resolving timed out after 5515 milliseconds

That looks strange, but maybe not the issue for now.

> * About to connect() to 10.154.2.29 port 8080 (#2)
> *   Trying 10.154.2.29...
> * Connected to 10.154.2.29 (10.154.2.29) port 8080 (#2)
>
>> GET /eoc/login HTTP/1.0
>> User-Agent: curl/7.29.0
>> Host: 10.154.2.29:8080
>> Accept: */*
>>
>
> < HTTP/1.1 200 OK
> < Server: Apache-Coyote/1.1

[snipp]

> * Closing connection 2

So on the back ends are the check ports reachable and you get a 200 back.
Is there any firewall in between the LB & BE?

What's your haproxy version?

haproxy -vv


> Best regards
>
> Andy
>
> -Message d'origine-
> De : Aleksandar Lazic [mailto:al-hapr...@none.at] Envoyé : vendredi 23
> août 2019 10:33 À : ANTHOINE Andy (EXT)  Cc
> : haproxy@formilux.org Objet : Re: Help required ehhe
>
> Am 23-08-2019 00:49, schrieb andy.anthoine@opt.nc:
>
>> Hi,
>>
>> Ehhe not an external ip don't worry, or i would have deleted it hehe
>>
>> No change, the problem seems to be there since before i m here, and
>> they now need it to be fixed
>>
>> I don't see anything in particular in the logs, beside that kind of
>> thing which is normal since the server is rebooted at this time ;)
>>
>> Aug 22 05:00:23 sld-loadb-01-prd-cit local1.alert haproxy[2244]:
>> Server ecmapp-prd-be-8443/sli-ecmapp-01-prd-cit is DOWN, reason:
>> Layer4 connection problem, info: "Connection refused at step 1 of
>> tcp-check (connect port 8080)", check duration: 0ms. 1 active and 0
>> backup servers left. 62 sessions active, 0 requeued, 0 remaining in
>> queue.
>
> This looks to me like the loadbalancer can't connect to the backend
> check port; is something listening on the backend server on port 8080?
>
> Please can you try the following command from the loadbalancer.
>
> curl -v -o /dev/null --max-time 5 --http1.0
> http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time 5
> --http1.0 http://10.154.2.29:8080/eoc/login
>
> The same for 10.154.2.31
>
>> What do you mean by that ?
>
> I have shorten the config to redu

Re: Help required ehhe

2019-08-22 Thread Aleksandar Lazic

Hi.

Am 23-08-2019 01:43, schrieb andy.anthoine@opt.nc:


Hi,

I can't launch the command from the LB


I love screenshots! It's so easy to copy paste from them 8-/

do you have any other tool which you can use to check if the
connection is possible from LB to Backend?

nc?
telnet?
...?


But from the server he is what i get

[root@sli-ecmapp2-prd ~]# curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/iws/  curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/eoc/login


Please one curl AFTER the other, the next time, just separate the
commands with ;.

curl -v -o /dev/null --max-time 5 --http1.0 http://10.154.2.29:8080/iws/
; curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/eoc/login


* About to connect() to 10.154.2.29 port 8080 (#0)
*   Trying 10.154.2.29...

...

0 00 00 0  0  0 --:--:-- --:--:-- --:--:--
   0* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#0)


GET /iws/ HTTP/1.0
User-Agent: curl/7.29.0
Host: 10.154.2.29:8080
Accept: */*



< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1


[snipp]


curl: (28) Resolving timed out after 5515 milliseconds


That looks strange, but maybe not the issue for now.


* About to connect() to 10.154.2.29 port 8080 (#2)
*   Trying 10.154.2.29...
* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#2)


GET /eoc/login HTTP/1.0
User-Agent: curl/7.29.0
Host: 10.154.2.29:8080
Accept: */*



< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1


[snipp]


* Closing connection 2


So on the back ends are the check ports reachable and you get a 200
back.
Is there any firewall in between the LB & BE?

What's your haproxy version?

haproxy -vv



Best regards

Andy

-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at]
Sent: Friday, 23 August 2019 10:33
To: ANTHOINE Andy (EXT) 
Cc: haproxy@formilux.org
Subject: Re: Help required ehhe

On 23-08-2019 00:49, andy.anthoine@opt.nc wrote:


Hi,







Ehhe not an external ip don't worry, or i would have deleted it hehe

No change, the problem seems to be there since before i m here, and
they now need it to be fixed

I don't see anything in particular in the logs, beside that kind of
thing which is normal since the server is rebooted at this time ;)

Aug 22 05:00:23 sld-loadb-01-prd-cit local1.alert haproxy[2244]:
Server ecmapp-prd-be-8443/sli-ecmapp-01-prd-cit is DOWN, reason:
Layer4 connection problem, info: "Connection refused at step 1 of
tcp-check (connect port 8080)", check duration: 0ms. 1 active and 0
backup servers left. 62 sessions active, 0 requeued, 0 remaining in
queue.


This looks to me like the load balancer can't connect to the backend
check port. Is something listening on the backend server on port 8080?

Please can you try the following command from the loadbalancer.

curl -v -o /dev/null --max-time 5 --http1.0
http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time 5
--http1.0 http://10.154.2.29:8080/eoc/login

The same for 10.154.2.31


What do you mean by that ?


I have shortened the config to reduce the size of the mail.

mode http

[snip more config]







-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at]
Sent: Friday, 23 August 2019 09:41
To: ANTHOINE Andy (EXT) 
Cc: haproxy@formilux.org
Subject: Re: Help required ehhe







Hi.

On 23-08-2019 00:28, andy.anthoine@opt.nc wrote:

Hi,

I got that email on this site, not sure if it's still working etc
https://www.slideshare.net/haproxytech/haproxy-best-practice


[snipp]


What's your haproxy version?

haproxy -vv

  Application load balancing & high availability v8.5.7 (8546)


That's not the full output of the command-line call haproxy -vv.


Thanks a lot for the answer man !


Thanks !

Andy


Regards

Aleks

Ce
message et toutes les pièces jointes (ci-après le « message ») sont à
l'attention exclusive des destinataires désignés. Il peut contenir des
informations confidentielles. Si vous le recevez par erreur, merci d'en
informer immédiatement l'émetteur et de le détruire. Toute utilisation,
diffusion ou toute publication, totale ou partielle, est interdite,
sauf autorisation. Tout message électronique étant susceptible
d'altération, l'OPT NC décline toute responsabilité au titre de ce
message dans l'hypothèse où il aurait été modifié.
This message and any attachments (the «
message ») are intended solely for the addresses. It may contain
privileged information. If you receive this message in error, please
immediately notify the sender and delete it. Any use, dissemination or
disclosure, either whole or partial, is prohibited unless formal
approval. Emails are susceptible to alteration; OPT NC shall not
therefore be liable for the message if modified.

Pensez à l'environnement, n'imprimez que si nécessaire.

RE: Help required ehhe

2019-08-22 Thread Andy.ANTHOINE.ext
Hi,



I can't launch the command from the LB



[screenshot attachment: image001.png]

But from the server, here is what I get



[root@sli-ecmapp2-prd ~]# curl -v -o /dev/null --max-time 5 --http1.0 
http://10.154.2.29:8080/iws/ curl -v -o /dev/null --max-time 5 --http1.0 
http://10.154.2.29:8080/eoc/login

* About to connect() to 10.154.2.29 port 8080 (#0)

*   Trying 10.154.2.29...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#0)

> GET /iws/ HTTP/1.0

> User-Agent: curl/7.29.0

> Host: 10.154.2.29:8080

> Accept: */*

>

< HTTP/1.1 200 OK

< Server: Apache-Coyote/1.1

< Accept-Ranges: bytes

< ETag: W/"1283-1552374458000"

< Last-Modified: Tue, 12 Mar 2019 07:07:38 GMT

< Content-Type: text/html

< Content-Length: 1283

< Date: Thu, 22 Aug 2019 23:42:28 GMT

< Connection: close

<

{ [data not shown]

100  1283  100  1283    0     0   632k      0 --:--:-- --:--:-- --:--:-- 1252k

* Closing connection 0

  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0
* Resolving timed out after 5515 milliseconds

  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0

* Closing connection 1

curl: (28) Resolving timed out after 5515 milliseconds

* About to connect() to 10.154.2.29 port 8080 (#2)

*   Trying 10.154.2.29...

* Connected to 10.154.2.29 (10.154.2.29) port 8080 (#2)

> GET /eoc/login HTTP/1.0

> User-Agent: curl/7.29.0

> Host: 10.154.2.29:8080

> Accept: */*

>

< HTTP/1.1 200 OK

< Server: Apache-Coyote/1.1

< Date: Thu, 22 Aug 2019 23:42:34 GMT

< Cache-Control: no-cache

< Cache-Control: no-store

< Cache-Control: max-age=0

< X-Content-Type-Options: nosniff

< X-XSS-Protection: 1

< Set-Cookie: JSESSIONID=hr-UQa5y1CpyiQvYjONLkNyf; Path=/eoc

< Content-Type: text/html;charset=UTF-8

< Content-Length: 4398

< Connection: close

<







[snipp: HTML body of the /eoc/login page (title "Login") — ~4 kB of inline JavaScript omitted]





* Closing connection 2



Best regards



Andy



-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at]
Sent: Friday, 23 August 2019 10:33
To: ANTHOINE Andy (EXT) 
Cc: haproxy@formilux.org
Subject: Re: Help required ehhe







On 23-08-2019 00:49, andy.anthoine@opt.nc wrote:

> Hi,

>

> Ehhe not an external ip don't worry, or i would have deleted it hehe

Re: Help required ehhe

2019-08-22 Thread Aleksandar Lazic




On 23-08-2019 00:49, andy.anthoine@opt.nc wrote:

Hi,

Ehhe not an external ip don't worry, or i would have deleted it hehe

No change, the problem seems to be there since before i m here, and
they now need it to be fixed

I don't see anything in particular in the logs, beside that kind of
thing which is normal since the server is rebooted at this time ;)

Aug 22 05:00:23 sld-loadb-01-prd-cit local1.alert haproxy[2244]:
Server ecmapp-prd-be-8443/sli-ecmapp-01-prd-cit is DOWN, reason:
Layer4 connection problem, info: "Connection refused at step 1 of
tcp-check (connect port 8080)", check duration: 0ms. 1 active and 0
backup servers left. 62 sessions active, 0 requeued, 0 remaining in
queue.


This looks to me like the load balancer can't connect to the backend
check port. Is something listening on the backend server on port 8080?

Please can you try the following command from the loadbalancer.

curl -v -o /dev/null --max-time 5 --http1.0 http://10.154.2.29:8080/iws/
curl -v -o /dev/null --max-time 5 --http1.0 
http://10.154.2.29:8080/eoc/login


The same for 10.154.2.31


What do you mean by that ?


I have shortened the config to reduce the size of the mail.


mode http


[snip more config]

-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at]
Sent: Friday, 23 August 2019 09:41
To: ANTHOINE Andy (EXT) 
Cc: haproxy@formilux.org
Subject: Re: Help required ehhe

Hi.

On 23-08-2019 00:28, andy.anthoine@opt.nc wrote:


Hi,

I got that email on this site, not sure if it's still working etc
https://www.slideshare.net/haproxytech/haproxy-best-practice


[snipp]


What's your haproxy version?

haproxy -vv

  Application load balancing & high availability v8.5.7 (8546)


That's not the full output of the command-line call haproxy -vv.



Thanks a lot for the answer man !




Thanks !

Andy


Regards
Aleks



RE: Help required ehhe

2019-08-22 Thread Andy.ANTHOINE.ext
Hi,

Ehhe not an external ip don't worry, or i would have deleted it hehe

No change, the problem seems to be there since before i m here, and they now 
need it to be fixed

I don't see anything in particular in the logs, beside that kind of thing which 
is normal since the server is rebooted at this time ;)

Aug 22 05:00:23 sld-loadb-01-prd-cit local1.alert haproxy[2244]: Server 
ecmapp-prd-be-8443/sli-ecmapp-01-prd-cit is DOWN, reason: Layer4 connection 
problem, info: "Connection refused at step 1 of tcp-check (connect port 8080)", 
check duration: 0ms. 1 active and 0 backup servers left. 62 sessions active, 0 
requeued, 0 remaining in queue.

What do you mean by that ?

> mode http

[snip more config]




-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at]
Sent: Friday, 23 August 2019 09:41
To: ANTHOINE Andy (EXT) 
Cc: haproxy@formilux.org
Subject: Re: Help required ehhe

Hi.

On 23-08-2019 00:28, andy.anthoine@opt.nc wrote:

> Hi,
>
> I got that email on this site, not sure if it's still working etc
> https://www.slideshare.net/haproxytech/haproxy-best-practice
>
> I'm having some issue with a service (white page sometimes for a
> webservice), and i m not totally sure it has anything to do with issue
> on my load balancer (i'm not the one who configured it, just having to
> work on it, guy gone, no documentation left etc…)
>
> When i started looking, i noticed that only this service had so many
> warnings and errors, so i thought « maybe » it can have something to
> do with it…

Were there any changes in the setup which could be the reason for this behaviour, or 
did you just notice it and it's "normal"?
What's in the logs?

> backend ecmapp-prd-be-8443
> acl kerberos hdr_beg(Authorization) -m beg Negotiate http-request
> replace-header Authorization (.*) "Basic " if kerberos

Ahem, I hope this is not a U:P (user:password) which is externally visible

> mode http

[snip more config]

> Here is the configuration of the service; I'm able to read half of it,
> checking the documentation to find more information
>
> Thanks a lot if you can help; if not, have a great day :)

What's your haproxy version?

haproxy -vv

  Application load balancing & high availability v8.5.7 (8546)


Thanks a lot for the answer man !



> Thanks !
>
> Andy
>
> Ce
> message et toutes les pièces jointes (ci-après le « message ») sont à
> l'attention exclusive des destinataires désignés. Il peut contenir des
> informations confidentielles. Si vous le recevez par erreur, merci
> d'en informer immédiatement l'émetteur et de le détruire. Toute
> utilisation, diffusion ou toute publication, totale ou partielle, est
> interdite, sauf autorisation. Tout message électronique étant
> susceptible d'altération, l'OPT NC décline toute responsabilité au
> titre de ce message dans l'hypothèse où il aurait été modifié.
> This message and any attachments (the «
> message ») are intended solely for the addresses. It may contain
> privileged information. If you receive this message in error, please
> immediately notify the sender and delete it. Any use, dissemination or
> disclosure, either whole or partial, is prohibited unless formal
> approval. Emails are susceptible to alteration; OPT NC shall not
> therefore be liable for the message if modified.
>
> Pensez à l'environnement, n'imprimez que si nécessaire.

Always the same crap footer from the "enterprise" Companies to public mailing 
lists.

Regards
Aleks





Re: Help required ehhe

2019-08-22 Thread Aleksandar Lazic

Hi.

On 23-08-2019 00:28, andy.anthoine@opt.nc wrote:


Hi,

I got that email on this site, not sure if it's still working etc
https://www.slideshare.net/haproxytech/haproxy-best-practice

I'm having some issue with a service (white page sometimes for a
webservice), and i m not totally sure it has anything to do with issue
on my load balancer (i'm not the one who configured it, just having to
work on it, guy gone, no documentation left etc…)

When i started looking, i noticed that only this service had so many
warnings and errors, so i thought « maybe » it can have something to do
with it…


Were there any changes in the setup which could be the reason for this
behaviour, or did you just notice it and it's "normal"?
What's in the logs?


backend ecmapp-prd-be-8443
acl kerberos hdr_beg(Authorization) -m beg Negotiate
http-request replace-header Authorization (.*) "Basic " if
kerberos


Ahem, I hope this is not a U:P (user:password) which is externally visible


mode http


[snip more config]


Here is the configuration of the service; I'm able to read half of it,
checking the documentation to find more information

Thanks a lot if you can help; if not, have a great day :)


What's your haproxy version?

haproxy -vv


Thanks !

Andy

Ce
message et toutes les pièces jointes (ci-après le « message ») sont à
l'attention exclusive des destinataires désignés. Il peut contenir des
informations confidentielles. Si vous le recevez par erreur, merci d'en
informer immédiatement l'émetteur et de le détruire. Toute utilisation,
diffusion ou toute publication, totale ou partielle, est interdite,
sauf autorisation. Tout message électronique étant susceptible
d'altération, l'OPT NC décline toute responsabilité au titre de ce
message dans l'hypothèse où il aurait été modifié.
This message and any attachments (the «
message ») are intended solely for the addresses. It may contain
privileged information. If you receive this message in error, please
immediately notify the sender and delete it. Any use, dissemination or
disclosure, either whole or partial, is prohibited unless formal
approval. Emails are susceptible to alteration; OPT NC shall not
therefore be liable for the message if modified.

Pensez à l'environnement, n'imprimez que si nécessaire.


Always the same crap footer from the "enterprise" Companies to public
mailing lists.

Regards
Aleks

Help required ehhe

2019-08-22 Thread Andy.ANTHOINE.ext
Hi,

I got that email on this site, not sure if it's still working etc 
https://www.slideshare.net/haproxytech/haproxy-best-practice

I'm having some issue with a service (a white page sometimes for a webservice), 
and I'm not totally sure it has anything to do with an issue on my load balancer 
(I'm not the one who configured it, I just have to work on it; the guy is gone, no 
documentation left, etc...)

When I started looking, I noticed that only this service had so many warnings 
and errors, so I thought « maybe » it can have something to do with it...

[screenshot attachment: image001.png]
backend ecmapp-prd-be-8443
  acl kerberos hdr_beg(Authorization) -m beg Negotiate
  http-request replace-header Authorization (.*) "Basic dXBhZG1pbjp1cGFkbWlu" 
if kerberos
  mode http
  log global
  option httplog
  balance roundrobin
  option forwardfor
  #cookie SERVERID insert indirect nocache
  cookie ReqClientId prefix nocache
  option tcp-check
  tcp-check connect port 8080
  tcp-check send GET\ /iws/\ HTTP/1.0\r\n
  tcp-check send \r\n
  tcp-check expect string 200
  tcp-check connect port 8080
  tcp-check send GET\ /eoc/login\ HTTP/1.0\r\n
  tcp-check send \r\n
  tcp-check expect string 200
  timeout client 5s
  server sli-ecmapp-01-prd-cit 10.154.2.29:8443 ssl cookie s1 weight 20 check 
port 8080 inter 1
  server sli-ecmapp-02-prd-cit 10.154.2.31:8443 ssl cookie s2 weight 20 check 
port 8080 inter 1
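[Editor's note: the tcp-check sequence above connects to port 8080, sends an HTTP/1.0 GET, and looks for the string "200" anywhere in the response. A rough Python sketch of the same probe, useful for reproducing by hand what the health check sees; hosts and paths are the values from this config:]

```python
import socket

def http10_check(host: str, port: int, path: str, timeout: float = 5.0) -> bool:
    """Mimic the tcp-check sequence: connect, send an HTTP/1.0 GET,
    then look for '200' anywhere in the response."""
    req = f"GET {path} HTTP/1.0\r\n\r\n".encode()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(req)
            data = sock.recv(4096)
    except OSError:
        return False
    return b"200" in data  # as loose a match as `tcp-check expect string 200`

# The two probes from the backend section above:
# http10_check("10.154.2.29", 8080, "/iws/")
# http10_check("10.154.2.29", 8080, "/eoc/login")
```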

Here is the configuration of the service; I'm able to read half of it, 
checking the documentation to find more information.

Thanks a lot if you can help; if not, have a great day :)

Thanks !

Andy






Re: [PATCH] BUG/MINOR: Fix prometheus '# TYPE' and '# HELP' headers

2019-08-12 Thread Christopher Faulet

On 07/08/2019 at 17:45, Anthonin Bonnefoy wrote:

From: Anthonin Bonnefoy 

Prometheus protocol defines HELP and TYPE as a token after the '#' and
the space after the '#' is necessary.
This is expected in the prometheus python client for example
(https://github.com/prometheus/client_python/blob/a8f5c80f651ea570577c364203e0edbef67db727/prometheus_client/parser.py#L194)
and the missing space is breaking the parsing of metrics' type.
---
  contrib/prometheus-exporter/service-prometheus.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/prometheus-exporter/service-prometheus.c 
b/contrib/prometheus-exporter/service-prometheus.c
index 67914f602..9b9ef2ea8 100644
--- a/contrib/prometheus-exporter/service-prometheus.c
+++ b/contrib/prometheus-exporter/service-prometheus.c
@@ -1126,11 +1126,11 @@ static int promex_dump_metric_header(struct appctx 
*appctx, struct htx *htx,
types = promex_st_metric_types;
}
  
-	if (istcat(out, ist("#HELP "), max) == -1 ||

+   if (istcat(out, ist("# HELP "), max) == -1 ||
istcat(out, name, max) == -1 ||
istcat(out, ist(" "), max) == -1 ||
istcat(out, desc[appctx->st2], max) == -1 ||
-   istcat(out, ist("\n#TYPE "), max) == -1 ||
+   istcat(out, ist("\n# TYPE "), max) == -1 ||
istcat(out, name, max) == -1 ||
istcat(out, ist(" "), max) == -1 ||
istcat(out, types[appctx->st2], max) == -1 ||



Thanks, merged now.
--
Christopher Faulet
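[Editor's note: the missing space matters because text-format parsers tokenize comment lines on whitespace, so the keyword must come *after* "# ". A minimal stdlib-only sketch of that tokenization — an illustration, not the actual client code linked above:]

```python
def parse_comment(line: str):
    """Split a Prometheus text-format comment line the way typical
    parsers do: 'HELP'/'TYPE' must be a separate token after '# ',
    so '#HELP'/'#TYPE' are treated as plain comments and the
    metric metadata is silently lost."""
    if not line.startswith("#"):
        return None
    parts = line.split(None, 3)  # -> '#', keyword, metric name, remainder
    if len(parts) >= 3 and parts[0] == "#" and parts[1] in ("HELP", "TYPE"):
        return parts[1], parts[2], parts[3] if len(parts) > 3 else ""
    return None  # plain comment, e.g. the broken '#TYPE ...' form

print(parse_comment("# TYPE haproxy_up gauge"))  # metadata recognized
print(parse_comment("#TYPE haproxy_up gauge"))   # dropped as a comment
```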



[PATCH] BUG/MINOR: Fix prometheus '# TYPE' and '# HELP' headers

2019-08-07 Thread Anthonin Bonnefoy
From: Anthonin Bonnefoy 

Prometheus protocol defines HELP and TYPE as a token after the '#' and
the space after the '#' is necessary.
This is expected in the prometheus python client for example
(https://github.com/prometheus/client_python/blob/a8f5c80f651ea570577c364203e0edbef67db727/prometheus_client/parser.py#L194)
and the missing space is breaking the parsing of metrics' type.
---
 contrib/prometheus-exporter/service-prometheus.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/prometheus-exporter/service-prometheus.c 
b/contrib/prometheus-exporter/service-prometheus.c
index 67914f602..9b9ef2ea8 100644
--- a/contrib/prometheus-exporter/service-prometheus.c
+++ b/contrib/prometheus-exporter/service-prometheus.c
@@ -1126,11 +1126,11 @@ static int promex_dump_metric_header(struct appctx 
*appctx, struct htx *htx,
types = promex_st_metric_types;
}
 
-   if (istcat(out, ist("#HELP "), max) == -1 ||
+   if (istcat(out, ist("# HELP "), max) == -1 ||
istcat(out, name, max) == -1 ||
istcat(out, ist(" "), max) == -1 ||
istcat(out, desc[appctx->st2], max) == -1 ||
-   istcat(out, ist("\n#TYPE "), max) == -1 ||
+   istcat(out, ist("\n# TYPE "), max) == -1 ||
istcat(out, name, max) == -1 ||
istcat(out, ist(" "), max) == -1 ||
istcat(out, types[appctx->st2], max) == -1 ||
-- 
2.19.1




RFC: receive side scaling, need help with approach to port ranges

2019-07-16 Thread Richard Russo
Here are my current patches for comments.

-- 
  Richard Russo
  to...@enslaves.us

On Fri, Jul 5, 2019, at 12:23 PM, Richard Russo wrote:
> Hi,
> 
> I've been experimenting with Receive Side Scaling (RSS) for a tcp proxy 
> application. The basic idea with RSS is by configuring the NICs, 
> kernel, and application to use the same CPU for a given socket, cross 
> CPU locking and communication is eliminated or at least significantly 
> reduced. On my system, configuring RSS allowed me to handle about three 
> times as many sessions before reaching CPU saturation, with the 
> remaining bottleneck seeming to be kernel processing around socket 
> creation and closing which requires cross cpu coordination. 
> 
> Aligning the incoming sockets is very simple, setting a socket option 
> (IP_RSS_LISTEN_BUCKET) on the listen socket restricts the accepted 
> socket to that bucket, and that's straight forward to add to the tcp 
> listener code, and configuration.
> 
> Aligning outgoing sockets is trickier -- there's no kernel help with a 
> socket option or otherwise, an application has to run the hash 
> (toeplitz) on the 4-tuple of {local ip, local port, remote ip, remote 
> port } and only use an outgoing port if the hash matches.  I've had 
> trouble finding a good approach to handle this.
> 
> The simplest thing would be to run the hash when a port is assigned by 
> port_range and return the port if it hashes to the wrong bucket; but if 
> you've already used all the acceptable ports for that port range, you 
> spend a lot of time hashing the ports that are still in the range, 
> without making any progress.
> 
> If you have a port range per rss bucket, you could hash on port 
> assignment, and not return the ports in case they hash to a wrong 
> bucket; but in the case that the remote ip changes because you've 
> configured it to use DNS or if you change the IP via "set server addr", 
> the previously computed hashes are no longer valid -- you would really 
> want to try all the ports again.
> 
> What I ended up with was a lock on port ranges (instead of atomics as 
> used in 07425de71777b688e77a9c70a7088c13e66e41e9 BUG/MEDIUM: 
> port_range: Make the ring buffer lock-free), adding a revision counter 
> to the port range, and resetting the port range whenever the server IP 
> changed. To avoid running the hash during steady state, and because 
> checking all the ports when the range needs to be filled, I also made 
> port range filling incremental. 
> 
> This approach works, but it feels complicated, and it made my config 
> much more verbose --- I had to duplicate my frontend sections, one for 
> each RSS bucket, which sends to corresponding duplicated backends for 
> each bucket; the backends had additional configuration to indicate the 
> RSS bucket (and the number of buckets). Incidentally, because each RSS 
> bucket has a distinct set of ports, and because my use case doesn't use 
> any features which benefit from coordination within HAProxy (such as 
> stick tables etc), this makes it possible to run in process mode rather 
> than threaded mode without running into a lot of port already in use 
> warnings/errors that would happen otherwise when sharing a port range.
> 
> If it's helpful for the discussion, I can share my patches as-is, but 
> if there are better ideas on how to structure this, I'd rather try to 
> get the changes done in a nice way before sharing.
> 
> Thanks!
> 
> -- 
>   Richard Russo
>   to...@enslaves.us
> 
>

0001-Allow-for-binding-listen-sockets-to-a-provided-RSS-b.patch
Description: Binary data


0002-Revert-BUG-MEDIUM-port_range-Make-the-ring-buffer-lo.patch
Description: Binary data


0003-add-port_range-locking-to-protect-against-concurrent.patch
Description: Binary data


0004-refill-port-ranges-when-addresses-change.patch
Description: Binary data


0005-Allow-for-RSS-aligned-port-selection-for-outgoing-co.patch
Description: Binary data


Re: Help with 1.8.1/4 and spoa_server/spoa_example

2019-07-06 Thread Aleksandar Lazic
Hi Christopher.

On 05.07.2019 at 16:29, Christopher Faulet wrote:
> On 03/07/2019 at 16:16, Aleksandar Lazic wrote:
>> I know this is an old haproxy version but I don't have the option to update,
>> as it's part of a vendor product.
>>
>> I need to bring `[haproxy-2.0.git]/contrib/spoa_server/` up and running, or
>> `[haproxy-1.8.git]/contrib/spoa_example/` from 1.8.4, to be able to run a
>> python script.
>>
>> Any help is welcome and I can offer some money for the help. It's urgent so I
>> will need the help asap.
>>
>> You can contact me also off the list.
> 
> 
> Hi Aleks,
> 
> The spoa_example cannot run python scripts. But I took a look at the
> spoa_server and it seems to be usable with HAProxy 1.8 with a small patch. You must downgrade
> the SPOE version and change the encoding of the frame's flags. The example
> configuration must also be adapted because there is no debug converter in
> HAProxy 1.8.
> 
> But, as you said, HAProxy 1.8.4 is old, and many fixes were pushed on the SPOE
> since then, so you might experience some bugs. Be careful.
> 
> And I noticed a bug with the spoa_server: it seems to accept and process only
> one connection per worker because of a while loop used to read frames. I'm cc'ing
> Thierry.
> 
> I attached a quick-and-dirty patch to downgrade SPOP version of the 
> spoa_server
> to 1.0.

Thank you very much.

Best regards
Aleks



receive side scaling, need help with approach to port ranges

2019-07-05 Thread Richard Russo
Hi,

I've been experimenting with Receive Side Scaling (RSS) for a tcp proxy 
application. The basic idea with RSS is that by configuring the NICs, kernel, and 
application to use the same CPU for a given socket, cross-CPU locking and 
communication are eliminated or at least significantly reduced. On my system, 
configuring RSS allowed me to handle about three times as many sessions before 
reaching CPU saturation, with the remaining bottleneck seeming to be kernel 
processing around socket creation and closing which requires cross cpu 
coordination. 

Aligning the incoming sockets is very simple: setting a socket option 
(IP_RSS_LISTEN_BUCKET) on the listen socket restricts the accepted socket to 
that bucket, and that's straightforward to add to the tcp listener code and 
configuration.

Aligning outgoing sockets is trickier -- there's no kernel help with a socket 
option or otherwise, an application has to run the hash (toeplitz) on the 
4-tuple of {local ip, local port, remote ip, remote port } and only use an 
outgoing port if the hash matches.  I've had trouble finding a good approach to 
handle this.
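[Editor's note: for readers unfamiliar with it, the Toeplitz hash itself is easy to sketch. An illustrative stdlib-only version; the key below is the widely published Microsoft RSS example key (a real NIC uses whatever key was programmed into it), and the TCP/IPv4 field order follows the RSS spec: src addr, dst addr, src port, dst port, all in network byte order:]

```python
import socket
import struct

# Example 40-byte RSS key from Microsoft's RSS verification docs
# (assumption: your NIC's actual key will differ).
RSS_KEY = bytes([
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
])

def toeplitz(data: bytes, key: bytes = RSS_KEY) -> int:
    """Toeplitz hash: for every set bit of the input (MSB first), XOR in
    the 32-bit window of the key starting at that bit position.
    Input must be at most len(key) - 4 bytes."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i, byte in enumerate(data):
        for b in range(8):
            if byte & (0x80 >> b):
                shift = key_bits - 32 - (i * 8 + b)
                result ^= (key_int >> shift) & 0xFFFFFFFF
    return result

def rss_bucket(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
               nbuckets: int) -> int:
    """Bucket for a TCP/IPv4 4-tuple."""
    data = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
            + struct.pack(">HH", src_port, dst_port))
    return toeplitz(data) % nbuckets
```

An outgoing-port search would then keep trying local ports until `rss_bucket(...)` lands on the desired bucket, which is exactly the expensive step described above.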

The simplest thing would be to run the hash when a port is assigned by 
port_range and return the port if it hashes to the wrong bucket; but if you've 
already used all the acceptable ports for that port range, you spend a lot of 
time hashing the ports that are still in the range, without making any progress.

If you have a port range per rss bucket, you could hash on port assignment, and 
not return the ports in case they hash to a wrong bucket; but in the case that 
the remote ip changes because you've configured it to use DNS or if you change 
the IP via "set server addr", the previously computed hashes are no longer 
valid -- you would really want to try all the ports again.

What I ended up with was a lock on port ranges (instead of atomics as used in 
07425de71777b688e77a9c70a7088c13e66e41e9 BUG/MEDIUM: port_range: Make the ring 
buffer lock-free), adding a revision counter to the port range, and resetting 
the port range whenever the server IP changed. To avoid running the hash during 
steady state, and because checking all the ports when the range needs to be 
filled, I also made port range filling incremental. 

This approach works, but it feels complicated, and it made my config much more 
verbose --- I had to duplicate my frontend sections, one for each RSS bucket, 
which sends to corresponding duplicated backends for each bucket; the backends 
had additional configuration to indicate the RSS bucket (and the number of 
buckets). Incidentally, because each RSS bucket has a distinct set of ports, 
and because my use case doesn't use any features which benefit from 
coordination within HAProxy (such as stick tables etc), this makes it possible 
to run in process mode rather than threaded mode without running into a lot of 
port already in use warnings/errors that would happen otherwise when sharing a 
port range.

If it's helpful for the discussion, I can share my patches as-is, but if there 
are better ideas on how to structure this, I'd rather try to get the changes 
done in a nice way before sharing.

Thanks!

-- 
  Richard Russo
  to...@enslaves.us



Re: Help with 1.8.1/4 and spoa_server/spoa_example

2019-07-05 Thread Christopher Faulet

On 03/07/2019 at 16:16, Aleksandar Lazic wrote:

I know this is an old haproxy version but I don't have the option to update, as
it's part of a vendor product.

I need to bring `[haproxy-2.0.git]/contrib/spoa_server/` up and running, or
`[haproxy-1.8.git]/contrib/spoa_example/` from 1.8.4, to be able to run a python
script.

Any help is welcome and I can offer some money for the help. It's urgent so I
will need the help asap.

You can contact me also off the list.



Hi Aleks,

The spoa_example cannot run python scripts. But I took a look at the spoa_server 
and it seems to be usable with HAProxy 1.8 with a small patch. You must downgrade 
the SPOE version and change the encoding of the frame's flags. The example 
configuration must also be adapted because there is no debug converter in 
HAProxy 1.8.


But, as you said, HAProxy 1.8.4 is old, and many fixes have been pushed to the
SPOE since then, so you may run into some bugs. Be careful.


I also noticed a bug in the spoa_server: it seems to accept and process only one
connection per worker, because of a while loop used to read frames. I'm cc'ing
Thierry.


I attached a quick-and-dirty patch that downgrades the SPOP version of the
spoa_server to 1.0.



--
Christopher Faulet
>From 955c47ce6e8bf8a1ed644ffd353b52b54516c2fa Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Fri, 5 Jul 2019 16:25:34 +0200
Subject: [PATCH] WIP: spoa_server: Downgrade SPOP version to 1.0

---
 contrib/spoa_server/spoa.c | 6 +++---
 contrib/spoa_server/spoa.h | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/contrib/spoa_server/spoa.c b/contrib/spoa_server/spoa.c
index f36c3db90..933eb79c8 100644
--- a/contrib/spoa_server/spoa.c
+++ b/contrib/spoa_server/spoa.c
@@ -687,7 +687,7 @@ static void prepare_agentack(struct worker *w)
 	w->ack[w->ack_len++] = SPOE_FRM_T_AGENT_ACK;
 
 	/* Set flags */
-	flags |= htonl(SPOE_FRM_FL_FIN);
+	flags |= SPOE_FRM_FL_FIN;
 	memcpy(w->ack + w->ack_len, &flags, 4);
 	w->ack_len += 4;
 
@@ -949,7 +949,7 @@ prepare_agenthello(struct worker *w)
 	w->buf[idx++] = SPOE_FRM_T_AGENT_HELLO;
 
 	/* Set flags */
-	flags |= htonl(SPOE_FRM_FL_FIN);
+	flags |= SPOE_FRM_FL_FIN;
 	memcpy(w->buf+idx, &flags, 4);
 	idx += 4;
 
@@ -994,7 +994,7 @@ prepare_agentdicon(struct worker *w)
 	w->buf[idx++] = SPOE_FRM_T_AGENT_DISCON;
 
 	/* Set flags */
-	flags |= htonl(SPOE_FRM_FL_FIN);
+	flags |= SPOE_FRM_FL_FIN;
 	memcpy(w->buf+idx, &flags, 4);
 	idx += 4;
 
diff --git a/contrib/spoa_server/spoa.h b/contrib/spoa_server/spoa.h
index 8f912e435..0166c48c4 100644
--- a/contrib/spoa_server/spoa.h
+++ b/contrib/spoa_server/spoa.h
@@ -18,7 +18,7 @@
 #include 
 
 #define MAX_FRAME_SIZE    16384
-#define SPOP_VERSION  "2.0"
+#define SPOP_VERSION  "1.0"
 #define SPOA_CAPABILITIES ""
 
 /* Flags set on the SPOE frame */
-- 
2.20.1



Help with 1.8.1/4 and spoa_server/spoa_example

2019-07-03 Thread Aleksandar Lazic
Hi.

I know this is an old haproxy version, but I don't have the option to update as
it's part of a vendor product.

I need to bring `[haproxy-2.0.git]/contrib/spoa_server/` up and running, or
`[haproxy-1.8.git]/contrib/spoa_example/` from 1.8.4, to be able to run a python
script.

Any help is welcome, and I can offer some money for it. It's urgent, so I will
need the help asap.

You can contact me also off the list.

Best regards
Aleks



Re: Please help for a solution like secure_link

2019-07-01 Thread Aleksandar Lazic


Tim,

Mon Jul 01 21:36:11 GMT+02:00 2019 Tim Düsterhus :

> Aleks,
 >
 > Am 01.07.19 um 21:27 schrieb Aleksandar Lazic:
 > > Maybe it's also possible with spoe ?
 >
 > I never worked with SPOE before, but I believe it might be possible.
 > SPOE is painful and fragile with HAProxy 1.8, though, because you have
 > to spawn the SPOA manually. Also using SPOE is pretty heavy, because it
 > requires interprocess communication which is not required with Lua.

Yes, that's true.

> Ideally you would just upgrade to HAProxy 2.0 or rebuild to include Lua.

I would like to use 2.0.1 as I switched today to it for my nextcloud and xmpp 
server and it works. :-)

I will try to see what's possible.

Thank you very much for your time and solution.

> Best regards
 > Tim Düsterhus

Best regards
 Aleks

> > Lua requires a rebuild of haproxy , which I want to avoid.
 > >
 > > Mon Jul 01 21:18:42 GMT+02:00 2019 Tim Düsterhus :
 > >
 > >> Aleks,
 > >>
 > >> Am 01.07.19 um 21:16 schrieb Aleksandar Lazic:
 > >>>
 > >>> The concat isn't available in 1.8 any substitution?
 > >>
 > >> Ugh, yeah. Both concat and strcmp are 1.9+. I must've missed that
 > >> requirement. You can use Lua to add yourself a concat and strcmp
 > >> converter. Or you do everything in Lua if you need Lua anyway.
 > >>
 > >> Best regards
 > >> Tim Düsterhus
 > >>
 > >
 >
 >





Re: Please help for a solution like secure_link

2019-07-01 Thread Tim Düsterhus
Aleks,

Am 01.07.19 um 21:27 schrieb Aleksandar Lazic:
> Maybe it's also possible with spoe ?

I never worked with SPOE before, but I believe it might be possible.
SPOE is painful and fragile with HAProxy 1.8, though, because you have
to spawn the SPOA manually. Also using SPOE is pretty heavy, because it
requires interprocess communication which is not required with Lua.

Ideally you would just upgrade to HAProxy 2.0 or rebuild to include Lua.

Best regards
Tim Düsterhus

> Lua requires a rebuild of haproxy , which I want to avoid.
> 
> Mon Jul 01 21:18:42 GMT+02:00 2019 Tim Düsterhus :
> 
>> Aleks,
>>
>> Am 01.07.19 um 21:16 schrieb Aleksandar Lazic:
>>>
>>> The concat isn't available in 1.8 any substitution?
>>
>> Ugh, yeah. Both concat and strcmp are 1.9+. I must've missed that
>> requirement. You can use Lua to add yourself a concat and strcmp
>> converter. Or you do everything in Lua if you need Lua anyway.
>>
>> Best regards
>> Tim Düsterhus
>>
> 



Re: Please help for a solution like secure_link

2019-07-01 Thread Aleksandar Lazic


Thanks.

Maybe it's also possible with spoe ?

Lua requires a rebuild of haproxy , which I want to avoid.

Mon Jul 01 21:18:42 GMT+02:00 2019 Tim Düsterhus :

> Aleks,
>
> Am 01.07.19 um 21:16 schrieb Aleksandar Lazic:
> >
> > The concat isn't available in 1.8 any substitution?
>
> Ugh, yeah. Both concat and strcmp are 1.9+. I must've missed that
> requirement. You can use Lua to add yourself a concat and strcmp
> converter. Or you do everything in Lua if you need Lua anyway.
>
> Best regards
> Tim Düsterhus
>



Re: Please help for a solution like secure_link

2019-07-01 Thread Tim Düsterhus
Aleks,

Am 01.07.19 um 21:16 schrieb Aleksandar Lazic:
> 
> The concat isn't available in 1.8 any substitution?

Ugh, yeah. Both concat and strcmp are 1.9+. I must've missed that
requirement. You can use Lua to add yourself a concat and strcmp
converter. Or you do everything in Lua if you need Lua anyway.

Best regards
Tim Düsterhus



Re: Please help for a solution like secure_link

2019-07-01 Thread Aleksandar Lazic


The concat isn't available in 1.8 any substitution?

Mon Jul 01 17:56:56 GMT+02:00 2019 Aleksandar Lazic :

> Hi Tim.
>
> Am 01.07.2019 um 17:48 schrieb Tim Düsterhus:
> > Aleks,
> >
> > Am 01.07.19 um 16:16 schrieb Aleksandar Lazic:
> >> My Idea is to use something like this in haproxy but I'm not sure if 
> >> haproxy
> >> only or haproxy+lua is the way to go?
> >
> > If you are fine with sha1 then it's theoretically possible with HAProxy
> > only:
>
> Cool, that was fast, I will try it tommorw and keep you updated.
> I love this community.
>
> >> http-request set-var(txn.sha1) url_param(sha1)
> >> http-request set-var(txn.expires) url_param(expires)
> >> http-request set-var(txn.expected_hash) path,concat(,txn.expires,),sha1,hex
> >>
> >> acl hash_valid var(txn.expected_hash),strcmp(txn.sha1) -m int eq 0
> >> acl expired date,sub(txn.expires) ge 0
> >>
> >> http-response set-header Date %[date]
> >> http-response set-header Expires %[var(txn.expires)]
> >> http-response set-header Expired %[date,sub(txn.expires)] if expired
> >> http-response set-header Not-Expired %[date,sub(txn.expires)] if !expired
> >> http-response set-header Given-Hash %[var(txn.sha1)]
> >> http-response set-header Expected-Hash %[var(txn.expected_hash)]
> >> http-response set-header Hash-Valid true if hash_valid
> >> http-response set-header Hash-Valid false if !hash_valid
> >
> > Inserting a secret is left as an exercise to the reader. Properly using
> > the two ACLs to allow or deny requests is left as an exercise as well.
>
> Yep it's a good start, many thanks.
>
> > NOTE OF CAUTION: The code above is vulnerable to a timing attack,
> > because strcmp does not perform a constant time comparison. The 'hex'
> > converter is not constant time either. The correct way to add the secret
> > would be using HMAC which is not trivial to do (there is no ready
> > converter), if even possible.
>
> Thank you to raise this topic, I will keep it in mind.
>
> > Best regards
> > Tim Düsterhus
>
> Best regards
> Aleks
>
>



Re: Please help for a solution like secure_link

2019-07-01 Thread Aleksandar Lazic
Hi Tim.

Am 01.07.2019 um 17:48 schrieb Tim Düsterhus:
> Aleks,
> 
> Am 01.07.19 um 16:16 schrieb Aleksandar Lazic:
>> My Idea is to use something like this in haproxy but I'm not sure if haproxy
>> only or haproxy+lua is the way to go?
> 
> If you are fine with sha1 then it's theoretically possible with HAProxy
> only:

Cool, that was fast. I will try it tomorrow and keep you updated.
I love this community.

>>  http-request set-var(txn.sha1) url_param(sha1)
>>  http-request set-var(txn.expires) url_param(expires)
>>  http-request set-var(txn.expected_hash) 
>> path,concat(,txn.expires,),sha1,hex
>>
>>  acl hash_valid var(txn.expected_hash),strcmp(txn.sha1) -m int eq 0
>>  acl expired date,sub(txn.expires) ge 0
>>
>>  http-response set-header Date  %[date]
>>  http-response set-header Expires   %[var(txn.expires)]
>>  http-response set-header Expired   %[date,sub(txn.expires)] if  
>> expired
>>  http-response set-header Not-Expired   %[date,sub(txn.expires)] if 
>> !expired
>>  http-response set-header Given-Hash%[var(txn.sha1)]
>>  http-response set-header Expected-Hash %[var(txn.expected_hash)]
>>  http-response set-header Hash-Validtrue  if  hash_valid
>>  http-response set-header Hash-Validfalse if !hash_valid
> 
> Inserting a secret is left as an exercise to the reader. Properly using
> the two ACLs to allow or deny requests is left as an exercise as well.

Yep it's a good start, many thanks.

> NOTE OF CAUTION: The code above is vulnerable to a timing attack,
> because strcmp does not perform a constant time comparison. The 'hex'
> converter is not constant time either. The correct way to add the secret
> would be using HMAC which is not trivial to do (there is no ready
> converter), if even possible.

Thank you for raising this topic; I will keep it in mind.

> Best regards
> Tim Düsterhus

Best regards
Aleks



Re: Please help for a solution like secure_link

2019-07-01 Thread Tim Düsterhus
Aleks,

Am 01.07.19 um 16:16 schrieb Aleksandar Lazic:
> My Idea is to use something like this in haproxy but I'm not sure if haproxy
> only or haproxy+lua is the way to go?

If you are fine with sha1 then it's theoretically possible with HAProxy
only:

>   http-request set-var(txn.sha1) url_param(sha1)
>   http-request set-var(txn.expires) url_param(expires)
>   http-request set-var(txn.expected_hash) 
> path,concat(,txn.expires,),sha1,hex
> 
>   acl hash_valid var(txn.expected_hash),strcmp(txn.sha1) -m int eq 0
>   acl expired date,sub(txn.expires) ge 0
> 
>   http-response set-header Date  %[date]
>   http-response set-header Expires   %[var(txn.expires)]
>   http-response set-header Expired   %[date,sub(txn.expires)] if  
> expired
>   http-response set-header Not-Expired   %[date,sub(txn.expires)] if !expired
>   http-response set-header Given-Hash    %[var(txn.sha1)]
>   http-response set-header Expected-Hash %[var(txn.expected_hash)]
>   http-response set-header Hash-Validtrue  if  hash_valid
>   http-response set-header Hash-Validfalse if !hash_valid

Inserting a secret is left as an exercise to the reader. Properly using
the two ACLs to allow or deny requests is left as an exercise as well.

NOTE OF CAUTION: The code above is vulnerable to a timing attack,
because strcmp does not perform a constant time comparison. The 'hex'
converter is not constant time either. The correct way to add the secret
would be using HMAC which is not trivial to do (there is no ready
converter), if even possible.
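To illustrate the point outside of HAProxy: both the constant-time comparison
and the HMAC recommended above are stdlib one-liners in Python. This is only a
sketch of the scheme being discussed (the secret and values are examples), not
something HAProxy 1.8 can do natively:

```python
import hashlib
import hmac

SECRET = b"enigma"  # example secret, as in the nginx snippet elsewhere in the thread

def expected_sig(path: str, expires: str) -> str:
    # HMAC-SHA1 over path+expires: unlike sha1(secret+data), HMAC is not
    # subject to length-extension, which is why it is recommended here
    return hmac.new(SECRET, (path + expires).encode(), hashlib.sha1).hexdigest()

def check(given: str, path: str, expires: str) -> bool:
    # compare_digest takes time independent of where the inputs differ,
    # unlike a strcmp-style comparison that leaks a timing signal
    return hmac.compare_digest(given, expected_sig(path, expires))
```

The application generating the link would embed `expected_sig(...)` in the URL;
the verifier recomputes it and calls `check`.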

Best regards
Tim Düsterhus



Please help for a solution like secure_link

2019-07-01 Thread Aleksandar Lazic
Hi.

I try to implement with haproxy 1.8 the following solution.

https://aws.amazon.com/fr/blogs/networking-and-content-delivery/serving-private-content-using-amazon-cloudfront-aws-lambdaedge/

https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/
https://nginx.org/en/docs/http/ngx_http_secure_link_module.html

In short.

The URL `https://host/secure/myfile?(...&)md5=...&expires=...` should be
validated.

```
# where engima is the password.
# Make sure you keep one space between $uri and password

secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri enigma";

if ($secure_link = "") { return 403; }
if ($secure_link = "0") { return 410; }
```

It looks like similar to create a S3 download protection where the application
behind nginx/HAProxy create a MD5 URL which nginx/HAProxy needs to verify before
the client can download the file.
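For illustration, the token nginx's secure_link_md5 produces above can be
reproduced in a few lines of Python (a sketch following the nginx docs: raw MD5
digest, base64url-encoded with the '=' padding stripped; "enigma" is the example
secret from the snippet):

```python
import base64
import hashlib

def secure_link_token(uri: str, expires: str, secret: str = "enigma") -> str:
    # nginx: secure_link_md5 "$secure_link_expires$uri enigma"
    digest = hashlib.md5(f"{expires}{uri} {secret}".encode()).digest()
    # base64url alphabet, '=' padding removed (RFC 4648 section 5)
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
```

The application generating the download link and the proxy verifying it must
agree on this computation byte for byte.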

My idea is to use something like this in haproxy, but I'm not sure if
haproxy-only or haproxy+lua is the way to go.


ENV SECRET=enigma

```

http-request set-var(sess.md5) url_param(md5)
http-request set-var(sess.expires) url_param(expires)

# is there any md5 function, I haven't seen it in the doc.
acl allow -m str
%[md5(url-without-params,sess.expires,"${SECRET}"),base64,regsub(/=/,'',g),regsub(/+/,
'-',g),regsub(/\//,'_',g)] %[sess.md5]

acl expired -m int %[date(-3600)] %[sess.expires]

http-request deny deny_status 403 if ! allow ! expired
http-request deny deny_status 410 if expired  # <= this is not possible AFAIK
http-request allow if allow

```

How difficult would it be to make the base64 converter
(https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.1-base64)
compliant with RFC 4648 section 5 (https://tools.ietf.org/html/rfc4648#section-5)?

That's the code from nginx for ngx_decode_base64url.
http://hg.nginx.org/nginx/file/tip/src/core/ngx_string.c#l1228
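For reference, the RFC 4648 section 5 decoding that ngx_decode_base64url
implements amounts to restoring the stripped padding and using the '-'/'_'
alphabet; a minimal Python equivalent (illustrative only, not HAProxy's
converter):

```python
import base64

def decode_base64url(s: str) -> bytes:
    # re-add the '=' padding that URL-safe tokens usually strip, then
    # decode with the '-' and '_' alphabet of RFC 4648 section 5
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
```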

Any opinions? Thanks for the help.

Best regards
Aleks



OT: Seeking help: F5 in front of haproxy

2019-06-24 Thread Aleksandar Lazic

Dear List members.

Please accept my apologies for the question here, but as there are a lot of
people with huge knowledge on this list, I hope someone can help me.

I have the following easy requirements for the F5.

* Route and balance clients (browsers) based on the SNI header to a backend pool
* Make health checks with the SNI header set, to check availability of backends

HAProxy handles the requests properly when I make an `openssl s_client
-servername ` from the F5 shell, but the browsers are unable to complete TLS
handshakes.

Any help is very welcome, also off the list.
 Again, sorry for disturbing you, and thanks for your patience.

Very best regards
 Aleks.




Re: Need help on CVE-2019-11323

2019-05-16 Thread Willy Tarreau
Hi,

On Fri, May 17, 2019 at 02:54:05AM +, 白晨红 wrote:
> Recently I found an issue CVE-2019-11323, it already fixed in 1.9.7
> 
> But it looks like all other haproxy branches affected by this issue according 
> to the following link.
> 
> 
> https://www.cvedetails.com/cve/CVE-2019-11323/
> 
> CVE-2019-11323 : HAProxy before 1.9.7 mishandles a reload with rotated keys, 
> which triggers use of uninitialized, and very predictable, HMAC keys. This is 
> related to an include/types/ssl_sock.h error.
> 
> 
> Unfortunately I'm using haproxy 1.7.11, I don't want to upgrade 1.9 right now.
(...)

I've just checked right now and only 1.9.2 and above have the affected feature,
the version details in the CVE are thus incorrect. It was developed in 2.0-dev
and was backported to 1.9 earlier this year to adapt to newer OpenSSL versions.
So on 1.8 and earlier you're not affected.

Hoping this helps,
Willy



Need help on CVE-2019-11323

2019-05-16 Thread 白晨红
Hi guys,



I need your help.


Recently I found an issue CVE-2019-11323, it already fixed in 1.9.7

But it looks like all other haproxy branches affected by this issue according 
to the following link.


https://www.cvedetails.com/cve/CVE-2019-11323/

CVE-2019-11323 : HAProxy before 1.9.7 mishandles a reload with rotated keys, 
which triggers use of uninitialized, and very predictable, HMAC keys. This is 
related to an include/types/ssl_sock.h error.


Unfortunately I'm using haproxy 1.7.11; I don't want to upgrade to 1.9 right now.


So I checked the haproxy 1.7 releases: there is no new version, just 1.7.11.


And then I checked the code fix in 1.9 branch and compared with 1.7 branch.

https://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=8ef706502aa2000531d36e4ac56dbdc7c30f718d;hp=646b7741bc683d6c6b43342369afcbba33d7b6ec

I couldn't find the same code in the 1.7 branch; it looks like this issue only
exists in the 1.9 branch.

I don't understand why the cvedetails site lists all branches as affected.

Can somebody help confirm that CVE-2019-11323 does not affect the 1.7 branch?

Thanks,

John







Thanks


Re: HTTPS(nbproc > 1) and HTTP/2 help

2019-01-09 Thread Willy Tarreau
Hi Jarno,

On Thu, Jan 03, 2019 at 10:31:41AM +0200, Jarno Huuskonen wrote:
> Hi,
> 
> I'm trying to convert "legacy" haproxy (haproxy 1.9.0) config that has
> mode tcp https listen (bind-process 2 ...) feeding bind-process 1
> frontend via abns socket. Something like this:
> 
>   listen HTTPS_in
>   # missing bind-process etc.
>   mode tcp
>   tcp-request inspect-delay 3s
>   bind 127.0.0.1:8443 ssl crt common.pem alpn h2,http/1.1
> 
> #use-server h2 if { ssl_fc_alpn h2 }
> #use-server h1 unless { ssl_fc_alpn h2 }
>   server h1 abns@proc1 send-proxy-v2
>   #server h2 abns@proc1h2 send-proxy-v2
> 
>   frontend fe
>   mode http
>   bind abns@proc1 accept-proxy
>   bind abns@proc1h2 accept-proxy proto h2
>   tcp-request inspect-delay 5s
>   tcp-request content track-sc1 src table table1
>   
>   # sc1_http_req_cnt(table1) gt 4 || 1 are just examples
>   tcp-request content reject if { sc1_http_req_cnt(table1) gt 4 }
>   http-request deny deny_status 429 if { sc1_http_req_cnt(table1) 
> gt 1 }
> 
>   default_backend be
>   
>   backend be
>   mode http
>   http-request deny deny_status 200 # or some real servers
> 
>   backend table1
>   stick-table type ipv6 size 100 expire 120s store 
> http_req_cnt,http_req_rate(30s)
> 
> This doesn't work with alpn h2,http/1.1 (HTTP/2 doesn't work(as expected)).

I don't understand why it doesn't work. I guess what you're trying to do
is to off-load TLS on multiple front processes and process only clear
traffic on a single process, right ?

> Changing HTTPS_in to "mode http" kind of works, client gets error 400 (HTTP/2)
> or 502 (HTTP/1.1) when (tcp-request content reject) reject's the connection.

Here H2 should indeed not work since the server mode defaults to H1.
However the H1 to H1 should theorically work.

> mode tcp and use-server with ssl_fc_alpn h2 also seems to work, but can the
> client choose not use HTTP/2 with alpn h2 (at least the ssl_fc_alpn
> documentation suggests this) ? 

The client can choose to speak whatever protocol it wants, it's just an
advertisement to indicate the intent. For example you can connect using
openssl s_client -alpn h2 -connect 127.0.0.1:8443 and emit what you want.
It's the H2 mux which will try to decode this traffic due to the "proto h2"
directive which will verify if the protocol is understood or not.

> So it seems that some/best alternatives are:
> - use "mode http" and use http-request deny instead of tcp-request content
> reject (sends response instead of silently closing connection -> no error
> 400/502)
> - use nbproc 1 / nbthread > 1 and move HTTPS_in functionality to fe frontend

It's true that nbthread makes such things so much easier that it's
worth a try, especially in 1.9 where they are more robust than in the
early 1.8, and much faster! With that said, I'm perplexed about your
config, I'm interested in figuring why it doesn't work. Please test
it with 1.9.1 or 2.0-dev in case it's just caused by an already fixed
bug.

> Are there any more alternatives/tricks on using more than 1 core for
> SSL and enabling HTTP/2 ? Are there any gotchas etc. to look out for
> when converting nbproc to nbthread config ?

Turning nbproc to nbthread is rather easy since you remove some config.
This definitely is what I'd encourage you to do as it's a nice long
term investment. Even with latest 1.8, unless you really have many
threads, you should move to nbthread.
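As a minimal sketch of that conversion (illustrative values; the abns hand-off
and the bind-process lines simply disappear):

```
# before (multi-process):
#     global
#         nbproc 2
#     ...plus bind-process lines and an abns@ hand-off between processes
#
# after (single process, multiple threads):
global
    nbthread 4
```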

Regards,
Willy



HTTPS(nbproc > 1) and HTTP/2 help

2019-01-03 Thread Jarno Huuskonen
Hi,

I'm trying to convert "legacy" haproxy (haproxy 1.9.0) config that has
mode tcp https listen (bind-process 2 ...) feeding bind-process 1
frontend via abns socket. Something like this:

listen HTTPS_in
# missing bind-process etc.
mode tcp
tcp-request inspect-delay 3s
bind 127.0.0.1:8443 ssl crt common.pem alpn h2,http/1.1

#use-server h2 if { ssl_fc_alpn h2 }
#use-server h1 unless { ssl_fc_alpn h2 }
server h1 abns@proc1 send-proxy-v2
#server h2 abns@proc1h2 send-proxy-v2

frontend fe
mode http
bind abns@proc1 accept-proxy
bind abns@proc1h2 accept-proxy proto h2
tcp-request inspect-delay 5s
tcp-request content track-sc1 src table table1

# sc1_http_req_cnt(table1) gt 4 || 1 are just examples
tcp-request content reject if { sc1_http_req_cnt(table1) gt 4 }
http-request deny deny_status 429 if { sc1_http_req_cnt(table1) 
gt 1 }

default_backend be

backend be
mode http
http-request deny deny_status 200 # or some real servers

backend table1
stick-table type ipv6 size 100 expire 120s store 
http_req_cnt,http_req_rate(30s)

This doesn't work with alpn h2,http/1.1 (HTTP/2 doesn't work(as expected)).

Changing HTTPS_in to "mode http" kind of works; the client gets error 400 (HTTP/2)
or 502 (HTTP/1.1) when (tcp-request content reject) rejects the connection.

mode tcp and use-server with ssl_fc_alpn h2 also seems to work, but can the
client choose not use HTTP/2 with alpn h2 (at least the ssl_fc_alpn
documentation suggests this) ? 

So it seems that some/best alternatives are:
- use "mode http" and use http-request deny instead of tcp-request content 
reject (sends response instead of silently closing connection -> no error 
400/502)
- use nbproc 1 / nbthread > 1 and move HTTPS_in functionality to fe frontend

Are there any more alternatives/tricks on using more than 1 core for
SSL and enabling HTTP/2 ? Are there any gotchas etc. to look out for
when converting nbproc to nbthread config ?

Thanks,
-Jarno
 
-- 
Jarno Huuskonen



[PATCH 15/20] MINOR: cli: put @master @<relative pid> @!<pid> in the help

2018-10-26 Thread William Lallemand
Add help for the prefix commands of the CLI. This help is only displayed
from the CLI of the master.
---
 src/cli.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/cli.c b/src/cli.c
index 8291b2d7a..d8ae79d7b 100644
--- a/src/cli.c
+++ b/src/cli.c
@@ -2319,6 +2319,9 @@ static struct applet cli_applet = {
 
 /* register cli keywords */
 static struct cli_kw_list cli_kws = {{ },{
+   { { "@", NULL }, "@ : send a command to the 
 process", NULL, cli_io_handler_show_proc, NULL, NULL, 
ACCESS_MASTER_ONLY},
+   { { "@!", NULL }, "@!: send a command to the  
process", cli_parse_default, NULL, NULL, NULL, ACCESS_MASTER_ONLY},
+   { { "@master", NULL }, "@master: send a command to the master 
process", cli_parse_default, NULL, NULL, NULL, ACCESS_MASTER_ONLY},
{ { "help", NULL }, NULL, cli_parse_simple, NULL },
{ { "prompt", NULL }, NULL, cli_parse_simple, NULL },
{ { "quit", NULL }, NULL, cli_parse_simple, NULL },
-- 
2.16.4




Re: need help with sftp and http config on a single config file

2018-10-19 Thread Imam Toufique
Aah, I see; it's been a while since I set this up, and I vaguely remember it
now.

Yes, I have sshd running on port 22; let me try a higher port for the proxy.
But I can keep port 22 for my backend sftp servers, correct?

Thanks Jarno, I appreciate your help very much!

—imam

On Fri, Oct 19, 2018 at 12:02 AM Jarno Huuskonen 
wrote:

> Hi,
>
> On Thu, Oct 18, Imam Toufique wrote:
> > *[root@crsplabnet2 examples]# haproxy -c -V -f /etc/haproxy/haproxy.cfg*
> > *Configuration file is valid*
> >
> > *when trying to start HA proxy, i see the following:*
> >
> > *[root@crsplabnet2 examples]# haproxy -D -f /etc/haproxy/haproxy.cfg -p
> > /var/run/haproxy.pid*
> > *[ALERT] 290/234618 (5889) : Starting frontend www-ssh-proxy: cannot bind
> socket [0.0.0.0:22]*
>
> Do you have sshd already running on the haproxy server ?
> (Use netstat -tunapl / ss (something like ss -tlnp '( dport = :ssh or
> sport = :ssh )')
> to see if sshd is already listening on port 22).
>
> If you've sshd running on port 22 then you have to use different port or
> ipaddress for sshd / haproxy(www-ssh-proxy)
>
> -Jarno
>
> --
> Jarno Huuskonen
>
-- 
Regards,
*Imam Toufique*
*213-700-5485*


Re: need help with sftp and http config on a single config file

2018-10-19 Thread Jarno Huuskonen
Hi,

On Thu, Oct 18, Imam Toufique wrote:
> *[root@crsplabnet2 examples]# haproxy -c -V -f /etc/haproxy/haproxy.cfg*
> *Configuration file is valid*
> 
> *when trying to start HA proxy, i see the following:*
> 
> *[root@crsplabnet2 examples]# haproxy -D -f /etc/haproxy/haproxy.cfg -p
> /var/run/haproxy.pid*
> *[ALERT] 290/234618 (5889) : Starting frontend www-ssh-proxy: cannot bind
> socket [0.0.0.0:22 ]*

Do you have sshd already running on the haproxy server ?
(Use netstat -tunapl / ss (something like ss -tlnp '( dport = :ssh or sport = 
:ssh )')
to see if sshd is already listening on port 22).

If you've sshd running on port 22 then you have to use different port or
ipaddress for sshd / haproxy(www-ssh-proxy)

-Jarno

-- 
Jarno Huuskonen



need help with sftp and http config on a single config file

2018-10-19 Thread Imam Toufique
Hi,

I am working on a setup to host sftp and http behind the same HAProxy
frontend, and I am having trouble with it.

here is my config file:
-

global
   log /dev/log local0
   log /dev/log local1 notice
   chroot /var/lib/haproxy
   stats timeout 30s
   user haproxy
   group haproxy
   daemon

defaults
   log global
   mode http
   option tcplog
   option dontlognull
   timeout connect 5000
   timeout client 5
   timeout server 5

frontend http_front
   bind *:80
   stats uri /haproxy?stats
   default_backend http_back
   mode http
   option forwardfor   # forward IP
   http-request set-header X-Forwarded-Port %[dst_port]
   http-request add-header X-Forwarded-Proto https if { ssl_fc }

backend http_back
   balance roundrobin # roundrobin rotates clients across the backend servers
   server  web1 10.1.100.156:80 check inter 2000 cookie w1
   server  web2 10.1.100.160:80 check inter 2000 cookie w1
   timeout connect 90
   timeout server 90

frontend www-ssh-proxy
  bind *:22
  mode tcp
  default_backend www-ssh-proxy-backend

backend www-ssh-proxy-backend
   mode tcp
   balance roundrobin
   stick-table type ip size 200k expire 30m
   stick on src
   default-server inter 1s
   server web1 10.1.100.156:22 check id 1
   server web2 10.1.100.160:22 check id 2

I would like SFTP and HTTP to live happily in the same HA proxy config.
When I run the configuration check, everything seems to be fine.

*[root@crsplabnet2 examples]# haproxy -c -V -f /etc/haproxy/haproxy.cfg*
*Configuration file is valid*

*when trying to start HA proxy, i see the following:*

*[root@crsplabnet2 examples]# haproxy -D -f /etc/haproxy/haproxy.cfg -p
/var/run/haproxy.pid*
*[ALERT] 290/234618 (5889) : Starting frontend www-ssh-proxy: cannot bind
socket [0.0.0.0:22 ]*

*I am not sure what I am doing wrong here.  I have not setup sftp and
http in one system before.*

*Can you please give me a hand with this? *

*thanks a lot!*



-- 
Regards,
*Imam Toufique*
*213-700-5485*


Hoping to help you to get more ROI through your Haproxy.Org

2018-10-11 Thread David Parker
Hi *Haproxy.Org *Team,

Do you want to know why your website is not showing on Google search engine
and how you can get more business than your competitor?

Today, I went through your website *Haproxy.Org* ; you  seem to have a
great appealing website, but only the thing is search engine visitors are
already searching for your products and services, but if you don't use the
Right Keywords they're searching for on your site, it will be difficult for
them to find you on Google.

We will deliver you a huge ROI, high keyword ranking, more traffic, clicks,
page views and most importantly converting those visitors into paying
customers.

Let me know if I should share a *PLAN OF ACTION* for your website.

Is this something you are interested in?

I also prepared a* free website audit report* for your website. If you are
*interested* i can show you the report.

I'd be happy to send you our package, pricing and past work details, if
you'd like to assess our work.

Kind Regards,

*David Parker Business Development Manager*

If you want to unsubscribe please go to
*https://yet-another-mail-merge.com/unsubscribe
*


Re: Help you generate more revenue for your haproxy.com.

2018-09-05 Thread Olivier Houchard
On Wed, Sep 05, 2018 at 10:14:55PM +1000, Rob Thomas wrote:
> You gotta wonder how this guy got this mailing list.  He must have actually
> LOOKED at the website, right?
> 
> Sigh. Spammers.
> 
> For anyone who cares, I don't think it's possible for haproxy to get MORE
> exposure on google.
> 
> [image: image.png]
> 

Notice how he mentioned haproxy.com, which is only 2nd in your google search.

I think we can trust somebody named FREDDIE KIRK, after all he is SEO
Strategist AND Business Development Manager.

Regards,

Olivier

> On Wed, 5 Sep 2018 at 22:07, FREDDIE KIRK 
> wrote:
> 
> > Hi *haproxy.com ,*
> >
> > *Do you need to know how your website currently ranks on search engine
> > result pages and how you can start beating your competitors right now?*
> >
> > Today, I went through your website *haproxy.com *;
> > you seem to have a great website, but only the thing is People are already
> > searching for your products and services, but if you don't use the Right
> > Keywords they're searching for on your site, it will be difficult for them
> > to find you.
> >
> > We will deliver you a huge ROI, high ranking, more traffic, clicks, page
> > views and most importantly converting those visitors into paying customers.
> >
> > Let me know if I should share a *Plan of Action* for your website
> >
> > Kind Regards
> >
> > SEO Strategist
> > Business Development Manager
> >





Help you generate more revenue for your haproxy.com.

2018-09-05 Thread FREDDIE KIRK
Hi *haproxy.com ,*

*Do you need to know how your website currently ranks on search engine
result pages and how you can start beating your competitors right now?*

Today, I went through your website *haproxy.com *; you
seem to have a great website, but only the thing is People are already
searching for your products and services, but if you don't use the Right
Keywords they're searching for on your site, it will be difficult for them
to find you.

We will deliver you a huge ROI, high ranking, more traffic, clicks, page
views and most importantly converting those visitors into paying customers.

Let me know if I should share a *Plan of Action* for your website

Kind Regards

SEO Strategist
Business Development Manager


Re: Help with backend server sni setup

2018-07-30 Thread Aleksandar Lazic

Hi.

On 30/07/2018 16:39, Lukas Tribus wrote:

On Mon, 30 Jul 2018 at 13:30, Aleksandar Lazic  wrote:


Hi.

I have the following Setup.

APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP

The external HAProxy is configured with multiple TLS Vhost.


Never use SNI for Vhosting. It should work with the host header only.
SNI should only be used for certificate selection, otherwise
overlapping certificates will cause wrong forwarding decisions.


The openshift router, based on haproxy 1.8, looks for the sni hostname
for routing.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L198-L209

Due to this fact we *must* set the ssl hostname


I assume that when I add `server  sni appinternal.domain.com` to
the server line, the hostname field in the TLS session will be set to
this value.


No, the sni keyword expects a fetch expression.

Set it to the host header for example:
sni req.hdr(host)

Or to a static string:
sni str(www.example.com)


When I take a look into the code I see this line.

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l1255
ssl_sock_set_servername(srv_conn, smp->data.u.str.str);

and the implementation of this function is here
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/ssl_sock.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l5922

The blocks begins here.
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l1236

As far as I understand this block (and I'm not sure I have understood
it correctly), the sample fetch checks for the string, as you have
written, AND sets the hostname into the SSL/TLS extension for SNI.

Now, after looking into the code and reading the doc again, it's clear
to me.

This option sets, to cite the doc:

"the host name sent in the SNI TLS extension to the server."


My apologies for the rush and my confusion.


cheers,
lukas


Best greetings
aleks



Re: Help with backend server sni setup

2018-07-30 Thread Lukas Tribus
On Mon, 30 Jul 2018 at 13:30, Aleksandar Lazic  wrote:
>
> Hi.
>
> I have the following Setup.
>
> APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP
>
> The external HAProxy is configured with multiple TLS Vhost.

Never use SNI for Vhosting. It should work with the host header only.
SNI should only be used for certificate selection, otherwise
overlapping certificates will cause wrong forwarding decisions.



> I assume that when I add `server  sni appinternal.domain.com` to the
> server line, the hostname field in the TLS session will be set to this
> value.

No, the sni keyword expects a fetch expression.

Set it to the host header for example:
sni req.hdr(host)

Or to a static string:
sni str(www.example.com)
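
Put together, a minimal backend sketch might look like the following (the
backend name, address, and CA file path are placeholders, untested):

```haproxy
backend app_internal
    # forward the client's Host header as the SNI value to the backend server
    server app1 appinternal.domain.com:443 ssl verify required ca-file /etc/ssl/ca.pem sni req.hdr(host)
```

With `verify required`, also check which name the server certificate is
verified against (e.g. via `verifyhost`) so it matches the SNI being sent.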


cheers,
lukas



Help with backend server sni setup

2018-07-30 Thread Aleksandar Lazic

Hi.

I have the following Setup.

APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP

The external HAProxy is configured with multiple TLS Vhost.

I assume that when I add `server  sni appinternal.domain.com` to the
server line, the hostname field in the TLS session will be set to this
value.

From reading the doc, I'm not sure whether this can work.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-sni

Could this work?

Best regards
Aleks



looking for help with redirect + acl

2018-07-23 Thread James Stroehmann

I need help with a current ACL and redirect that looks like this:

acl has_statistical_uri path_beg -i /statistical
http-request redirect code 301 prefix https://statistical.example.com/statisticalinsight if has_statistical_uri

When the request like this comes in:
https://statistical.example.com/statistical/example?key=value
it gets redirected to this:
https://statistical.example.com/statisticalinsight/statistical/example?key=value

They would like it to be redirected to:
https://statistical.example.com/statisticalinsight/example?key=value
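
No reply is archived here, but one possible approach (an untested sketch
using the `regsub` converter, available since HAProxy 1.7) is to build the
Location from the rewritten URI with `location` instead of `prefix`, which
always appends the full original path:

```haproxy
acl has_statistical_uri path_beg -i /statistical
# rewrite the leading /statistical to /statisticalinsight, keeping the
# query string (url returns path plus query string for HTTP/1 requests)
http-request redirect code 301 location https://statistical.example.com%[url,regsub(^/statistical,/statisticalinsight)] if has_statistical_uri
```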




Re: Help with environment variables in config

2018-07-22 Thread Willy Tarreau
On Sat, Jul 21, 2018 at 05:44:44PM +0100, Jonathan Matthews wrote:
> No. Sudo doesn't pass envvars through to its children by default:
> https://stackoverflow.com/questions/8633461/how-to-keep-environment-variables-when-using-sudo
> 
> Read that page *and* the comments - in particular be aware that you have to
> request (at the CLI) that sudo preserve envvars, and you also have to have
> been granted permission to do this, via the sudoers config file.
> 
> If this is all sounding a bit complicated, that's because it is.
> 
> You've chosen a relatively uncommon way of running haproxy - directly, via
> sudo. Consider running via an init script or systemd unit (?) or, failing
> that, just a script which is itself the sudo target, which sets the envvars
> in the privileged environment.

Also, something that people don't necessarily know is that haproxy can
set its own variables using "setenv" in the global section. Some will
say this is stupid since you're going to use them in the same config,
but once you add the fact that multiple files can be loaded, it becomes
more obvious: it easily allows parsing an environment file first, made
of a global section with a few variables, and then the regular (possibly
shared) config. Typically it would look something like this:

$ cat /etc/haproxy/env.cfg
global
    setenv GRAPH_ADDRESS graph.server.com
    setenv GRAPH_PORT 8182

$ sudo haproxy -D -f /etc/haproxy/env.cfg -f /etc/haproxy/haproxy.cfg

Hoping this helps,
Willy



Re: Help with environment variables in config

2018-07-21 Thread Jonathan Matthews
No. Sudo doesn't pass envvars through to its children by default:
https://stackoverflow.com/questions/8633461/how-to-keep-environment-variables-when-using-sudo

Read that page *and* the comments - in particular be aware that you have to
request (at the CLI) that sudo preserve envvars, and you also have to have
been granted permission to do this, via the sudoers config file.

If this is all sounding a bit complicated, that's because it is.

You've chosen a relatively uncommon way of running haproxy - directly, via
sudo. Consider running via an init script or systemd unit (?) or, failing
that, just a script which is itself the sudo target, which sets the envvars
in the privileged environment.
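
If sudo must stay, a reasonably recent sudo can also be told to preserve
specific variables explicitly, provided the sudoers policy allows it;
something along these lines (untested):

```
$ export GRAPH_ADDRESS=graph.server.com
$ export GRAPH_PORT=8182
$ sudo --preserve-env=GRAPH_ADDRESS,GRAPH_PORT haproxy -d -V -f /etc/haproxy/haproxy.cfg
```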

J

On Sat, 21 Jul 2018 at 17:31, jdtommy  wrote:

> would this chain of calls not work?
>
> ubuntu@ip-172-31-30-4:~$ export GRAPH_ADDRESS=graph.server.com
> ubuntu@ip-172-31-30-4:~$ export GRAPH_PORT=8182
> ubuntu@ip-172-31-30-4:~$ sudo haproxy -d -V -f /etc/haproxy/haproxy.cfg
>
> On Sat, Jul 21, 2018 at 3:26 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Sat, Jul 21, 2018 at 7:12 PM, Jonathan Matthews <
>> cont...@jpluscplusm.com> wrote:
>>
>>> On Sat, 21 Jul 2018 at 09:12, jdtommy  wrote:
>>>
 I am setting them before I start haproxy in the terminal. I tried both
 starting it as a service and starting directly, but neither worked. It
 still would not forward it along.

>>>
>>> Make sure that, as well as setting them, you're *exporting* the envvars
>>> before asking a child process (i.e. haproxy) to use them.
>>>
>>> J
>>> --
>>> Jonathan Matthews
>>> London, UK
>>> http://www.jpluscplusm.com/contact.html
>>>
>>
>> As Jonathan said, plus make sure they are included/exported in the init
>> script or systemd file for the service.
>>
>>
>
> --
> Jarad Duersch
>
-- 
Jonathan Matthews
London, UK
http://www.jpluscplusm.com/contact.html


Re: Help with environment variables in config

2018-07-21 Thread jdtommy
Actually, looking at this now I realize my sudo is messing it up. I need
to set the env variables for the su environment. It works now.
Thanks for helping me get to that conclusion.

On Sat, Jul 21, 2018 at 10:31 AM jdtommy  wrote:

> would this chain of calls not work?
>
> ubuntu@ip-172-31-30-4:~$ export GRAPH_ADDRESS=graph.server.com
> ubuntu@ip-172-31-30-4:~$ export GRAPH_PORT=8182
> ubuntu@ip-172-31-30-4:~$ sudo haproxy -d -V -f /etc/haproxy/haproxy.cfg
>
> On Sat, Jul 21, 2018 at 3:26 AM Igor Cicimov <
> ig...@encompasscorporation.com> wrote:
>
>> On Sat, Jul 21, 2018 at 7:12 PM, Jonathan Matthews <
>> cont...@jpluscplusm.com> wrote:
>>
>>> On Sat, 21 Jul 2018 at 09:12, jdtommy  wrote:
>>>
 I am setting them before I start haproxy in the terminal. I tried both
 starting it as a service and starting directly, but neither worked. It
 still would not forward it along.

>>>
>>> Make sure that, as well as setting them, you're *exporting* the envvars
>>> before asking a child process (i.e. haproxy) to use them.
>>>
>>> J
>>> --
>>> Jonathan Matthews
>>> London, UK
>>> http://www.jpluscplusm.com/contact.html
>>>
>>
>> As Jonathan said, plus make sure they are included/exported in the init
>> script or systemd file for the service.
>>
>>
>
> --
> Jarad Duersch
>


-- 
Jarad Duersch


Re: Help with environment variables in config

2018-07-21 Thread jdtommy
would this chain of calls not work?

ubuntu@ip-172-31-30-4:~$ export GRAPH_ADDRESS=graph.server.com
ubuntu@ip-172-31-30-4:~$ export GRAPH_PORT=8182
ubuntu@ip-172-31-30-4:~$ sudo haproxy -d -V -f /etc/haproxy/haproxy.cfg

On Sat, Jul 21, 2018 at 3:26 AM Igor Cicimov 
wrote:

> On Sat, Jul 21, 2018 at 7:12 PM, Jonathan Matthews <
> cont...@jpluscplusm.com> wrote:
>
>> On Sat, 21 Jul 2018 at 09:12, jdtommy  wrote:
>>
>>> I am setting them before I start haproxy in the terminal. I tried both
>>> starting it as a service and starting directly, but neither worked. It
>>> still would not forward it along.
>>>
>>
>> Make sure that, as well as setting them, you're *exporting* the envvars
>> before asking a child process (i.e. haproxy) to use them.
>>
>> J
>> --
>> Jonathan Matthews
>> London, UK
>> http://www.jpluscplusm.com/contact.html
>>
>
> As Jonathan said, plus make sure they are included/exported in the init
> script or systemd file for the service.
>
>

-- 
Jarad Duersch

