Haproxy Technologies, Help with bookkeeping?

2024-04-20 Thread Jane Martin
Hello,

It’s possible that Haproxy Technologies might benefit from the bookkeeping
services we deliver to companies like yours.

Why not let us focus on the heavy lifting, so you can focus on your passion?

If this sounds good, let me know a good time to have us get in touch.


Jane Martin, Compliance Analyst | Atinesh


Sponsored article for haproxy.org

2024-01-10 Thread Charlie Martin
Hello there,

My name is Charlie Martin and I am a freelance writer. I am interested in
writing a unique article matching your blog's topics, one that will satisfy
your readers and boost traffic acquisition.

I have worked effectively with numerous websites, building solid
relationships. I am looking forward to building long-term relationships
with you, too. I would really appreciate your feedback. Hope to work well
with you.

I really appreciate any help you can provide.
Yours sincerely,
Charlie Martin


Request For Paid Link

2023-12-28 Thread Steve Martin
Hello,
Hope you are doing well,

I am looking for paid ads or a text link at https://www.haproxy.org/

I would like to place a text link on your website's home page, sidebar or
bottom area. Please share links to websites related to my niches.

How much will you charge for a text link?

Looking forward to hearing from you.

Regards,
Steve Martin


Guest Article Submission Inquiry

2023-08-21 Thread Arber Martin
Dear Administrator,

I am interested in submitting a guest article for publication on
https://www.haproxy.com/.

Could you kindly provide me with information on the submission process and
any guidelines I should follow? I am excited about the possibility of
sharing my content with your readership. I have good-quality content for
your sites. Thank you for your time, and I look forward to your response.

Best regards,


Re: [PATCH] CI: travis-ci: disable arm64 builds

2021-08-09 Thread Martin Grigorov
Hi Илья,

On Mon, Aug 9, 2021 at 12:45 PM Илья Шипицин  wrote:

> I'm using arm64 in Oracle Cloud Ampere A1 Compute | Oracle
> <https://www.oracle.com/cloud/compute/arm/>
>

Yes!
OCI has an ARM64 VM in their free tier!
They also provide easy integration with GitHub Actions -
https://blogs.oracle.com/cloud-infrastructure/post/announcing-github-actions-arm-runners-for-the-arm-compute-platform-on-oracle-cloud-infrastructure
But the issue here is that GitHub Actions self-hosted runners are not
recommended for public/OSS projects, because malicious users can execute
arbitrary code on your VM simply by opening (meaningless) Pull Requests.


>
>
> also, I've found a promising approach (using ARM on GitHub Actions):  Bump
> Bootstrap version from 5.0.2 to 5.1.0 · phpmyadmin/phpmyadmin@c90affe
> (github.com) <https://github.com/phpmyadmin/phpmyadmin/runs/3274334375>
>

This setup uses QEMU:
https://github.com/phpmyadmin/phpmyadmin/blob/c90affe793b56fb6f5e1c7eed676ec9031fb1480/.github/workflows/tests.yml#L51
It works but it is much slower than using a real ARM64 machine/VM.
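For reference, such a QEMU-based arm64 job boils down to a few lines. This is only a sketch under assumptions: the action versions, base image and the command run inside the container are illustrative and not taken from the phpMyAdmin workflow.

```yaml
# Hypothetical GitHub Actions workflow running an arm64 build under QEMU
# user-mode emulation (names and versions are assumptions for illustration)
name: arm64-qemu
on: [push]
jobs:
  build-arm64:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Registers binfmt handlers so arm64 binaries run transparently
      - uses: docker/setup-qemu-action@v3
        with:
          platforms: arm64
      # Everything inside this container executes as emulated aarch64
      - run: |
          docker run --rm --platform linux/arm64 -v "$PWD:/src" -w /src \
            ubuntu:22.04 bash -c "uname -m && make -j2 TARGET=linux-glibc"
```

The emulation overhead is exactly why this is noticeably slower than a native ARM64 machine/VM.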

Martin


> On Mon, 9 Aug 2021 at 14:29, Willy Tarreau wrote:
>
>> Hi Martin,
>>
>> On Mon, Aug 09, 2021 at 11:04:34AM +0300, Martin Grigorov wrote:
>> > TravisCI just announced some improvements related to 'arch: arm64'
>> (using
>> > Equinix Metal machines) -
>> https://blog.travis-ci.com/2021-08-06-oss-equinix.
>>
>> Thanks for the info!
>>
>> > But I also had some similar problems with them recently and replaced the
>> > config with 'arch: arm64-graviton2; group: edge; virt: vm;', i.e. AWS
>> > Graviton2 machines. In my experience they behave more stably!
>>
>> Yeah, these machines are really fantastic. The real problem anyway is
>> likely that nowadays everyone is interested in testing on arm and very
>> few have one available, let alone even a cross-compiler, so I suspect
>> that these days a lot of people enable arm builds in such CI environments
>> because it's the only way they have to make sure their code builds there
>> at all.
>>
>> Cheers,
>> Willy
>>
>


Re: [PATCH] CI: travis-ci: disable arm64 builds

2021-08-09 Thread Martin Grigorov
Hi Willy,

On Mon, Aug 9, 2021 at 12:29 PM Willy Tarreau  wrote:

> Hi Martin,
>
> On Mon, Aug 09, 2021 at 11:04:34AM +0300, Martin Grigorov wrote:
> > TravisCI just announced some improvements related to 'arch: arm64' (using
> > Equinix Metal machines) -
> https://blog.travis-ci.com/2021-08-06-oss-equinix.
>
> Thanks for the info!
>
> > But I also had some similar problems with them recently and replaced the
> > config with 'arch: arm64-graviton2; group: edge; virt: vm;', i.e. AWS
> > Graviton2 machines. In my experience they behave more stably!
>
> Yeah, these machines are really fantastic. The real problem anyway is
> likely that nowadays everyone is interested in testing on arm and very
> few have one available, let alone even a cross-compiler, so I suspect
> that these days a lot of people enable arm builds in such CI environments
> because it's the only way they have to make sure their code builds there
> at all.
>

I am not sure whether you understood me. Or maybe I didn't understand you.
Anyway, let me rephrase:
TravisCI provides two types of ARM64 VMs - arm64 (powered by Equinix Metal)
and arm64-graviton2 (by AWS Graviton2).
You, as a user, can use any or both of them in your .travis.yml.
I prefer the AWS Graviton2 instances because there are fewer issues with
them.
So, instead of disabling the ARM64 CI job I suggest trying
arm64-graviton2. Here is a sample setup for it -
https://github.com/apache/wicket/blob/270a5a43970cd975539331b21a34bd83a59c9c39/.travis.yml#L18-L22
More info about it at https://blog.travis-ci.com/2020-09-11-arm-on-aws
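A minimal sketch of such a job entry (an assumption: modeled on the Apache Wicket .travis.yml linked above and trimmed for illustration, not copied from it):

```yaml
# Hypothetical .travis.yml fragment selecting AWS Graviton2 ARM64 runners
# instead of the default Equinix Metal ones
jobs:
  include:
    - name: arm64-graviton2
      os: linux
      arch: arm64-graviton2   # AWS Graviton2 machines
      group: edge
      virt: vm
```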

Martin


>
> Cheers,
> Willy
>


Re: [PATCH] CI: travis-ci: disable arm64 builds

2021-08-09 Thread Martin Grigorov
Hi,

On Sat, Aug 7, 2021 at 8:31 AM Willy Tarreau  wrote:

> Hi Ilya,
>
> > On Tue, Aug 03, 2021 at 02:58:40PM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > it looks like "something on travis-ci side".
> >
> > CC  src/raw_sock.o
> > gcc: fatal error: Killed signal terminated program cc1
> > compilation terminated.
> >
> > let us disable arm64 for a while.
>

TravisCI just announced some improvements related to 'arch: arm64' (using
Equinix Metal machines) - https://blog.travis-ci.com/2021-08-06-oss-equinix.
But I also had some similar problems with them recently and replaced the
config with 'arch: arm64-graviton2; group: edge; virt: vm;', i.e. AWS
Graviton2 machines. In my experience they behave more stably!

Regards,
Martin


>
> Yes I noticed a few of these lately, and sometimes it even looked like
> impossible downloads. I suspect that github assigns a maximum runtime
> to these VMs and that they're simply overloaded and victims of their
> success. I could be wrong of course.
>
> Now applied, thank you!
> Willy
>
>


Re: Load testing HAProxy 2.2 on x86_64 and aarch64 VMs

2020-07-23 Thread Martin Grigorov
Hi Илья,

I didn't have much success with Google Perf Tools.
I've tried both with adding '-lprofiler' to LDFLAGS when building HAProxy
and with LD_PRELOAD.
In both cases on ARM64 it fails with: terminated by signal SIGSEGV (Address
boundary error)

On x86_64 there is no such issue but the produced result file is always
empty.

$ ldd haproxy
linux-vdso.so.1 (0x7ffcfb5cd000)
libprofiler.so.0 => /usr/local/lib/libprofiler.so.0 (0x7f2115507000) <<<<<<<<<<<<<<<<<<
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x7f21154cc000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7f21154b)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f21154aa000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f211549f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f211547c000)
libssl.so.1.1 => /home/ubuntu/opt/lib/libssl.so.1.1 (0x7f21151e6000)
libcrypto.so.1.1 => /home/ubuntu/opt/lib/libcrypto.so.1.1 (0x7f2114cf4000)
liblua5.3.so.0 => /usr/lib/x86_64-linux-gnu/liblua5.3.so.0 (0x7f2114cb9000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f2114b6a000)
libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x7f2114abd000)
libpcreposix.so.3 => /usr/lib/x86_64-linux-gnu/libpcreposix.so.3 (0x7f2114ab8000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x7f2114a43000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7f2114a28000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f2114836000)
libunwind.so.8 => /usr/lib/x86_64-linux-gnu/libunwind.so.8 (0x7f2114819000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x7f2114638000)
/lib64/ld-linux-x86-64.so.2 (0x7f211552e000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x7f211460f000)
liblz4.so.1 => /usr/lib/x86_64-linux-gnu/liblz4.so.1 (0x7f21145ee000)
libgcrypt.so.20 => /usr/lib/x86_64-linux-gnu/libgcrypt.so.20 (0x7f21144d)
libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x7f21144ad000)

CPUPROFILE=/tmp/haproxy-load-x64.prof haproxy -p pid.txt -f haproxy.cfg

The above command creates /tmp/haproxy-load-x64.prof but it is never
populated with any data. I've tried stopping HAProxy with the signals INT,
KILL and TERM.
I even tried with the env var CPUPROFILESIGNAL=12 to start/stop the
profiler manually, but again no success.
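For completeness, the two gperftools hookups described above boil down to something like this (a sketch under assumptions: the library path, config file names and the pprof invocation are illustrative; a haproxy binary and gperftools must be present):

```shell
# 1) Preload the profiler; the profile is flushed on clean process exit
LD_PRELOAD=/usr/local/lib/libprofiler.so.0 \
CPUPROFILE=/tmp/haproxy.prof haproxy -p pid.txt -f haproxy.cfg

# 2) Toggle profiling manually with a signal instead of waiting for exit
CPUPROFILE=/tmp/haproxy.prof CPUPROFILESIGNAL=12 \
haproxy -p pid.txt -f haproxy.cfg
kill -12 "$(cat pid.txt)"   # first signal starts profiling
kill -12 "$(cat pid.txt)"   # second signal stops it and flushes the file

# Inspect whatever was collected (the tool is 'google-pprof' on Debian/Ubuntu)
pprof --text "$(command -v haproxy)" /tmp/haproxy.prof
```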

I could share with you reports from Linux 'perf' command. Just let me know
which events you'd be interested in!

Regards,
Martin

On Sat, Jul 18, 2020 at 11:40 AM Илья Шипицин  wrote:

> Hello, Martin!
>
> Can you please compare load profiles using google perftools ?
>
> I never tried to use gperf on ARM64, also, my trial at Linaro is over, I
> do not have an access to any ARM64 anymore.
> in short, gperf can be found https://github.com/gperftools/gperftools
>
> please follow "CPU profiling part".
>
> it can collect cachegrind output, I attached example kcachegrind report
> (you can sort by "self" time).
> it would be interesting to compare amd64 <--> arm64
>
> [image: Screenshot from 2020-07-18 13-35-27.png]
>
>
> On Fri, 10 Jul 2020 at 19:00, Martin Grigorov wrote:
>
>> Hello HAProxy community,
>>
>> I wanted to compare how the newly released HAProxy 2.2 (Congrats!)
>> behaves under heavy load so I've run some tests on my x86_64 and aarch64
>> VMs:
>>
>>
>> https://medium.com/@martin.grigorov/compare-haproxy-performance-on-x86-64-and-arm64-cpu-architectures-bfd55d1d5566
>>
>> Without much surprise the x86_64 VM gave better results!
>>
>> It is *not* a real use case scenario: the backends serve on GET / and
>> return "Hello World", without any file/network operations.
>>
>> What is interesting though is that I can get 120-160K reqs/sec when
>> hitting directly one of the backend servers, and only 20-40K reqs/sec when
>> using HAProxy as a load balancer in front of them.
>>
>> I'd be happy to re-run the tests with any kind of improvements you may
>> have!
>>
>> Regards,
>> Martin
>>
>


Re: Load testing HAProxy 2.2 on x86_64 and aarch64 VMs

2020-07-20 Thread Martin Grigorov
Hi Илья,

I will do it sometime this week!

Regards,
Martin

On Sat, Jul 18, 2020 at 11:40 AM Илья Шипицин  wrote:

> Hello, Martin!
>
> Can you please compare load profiles using google perftools ?
>
> I never tried to use gperf on ARM64, also, my trial at Linaro is over, I
> do not have an access to any ARM64 anymore.
> in short, gperf can be found https://github.com/gperftools/gperftools
>
> please follow "CPU profiling part".
>
> it can collect cachegrind output, I attached example kcachegrind report
> (you can sort by "self" time).
> it would be interesting to compare amd64 <--> arm64
>
> [image: Screenshot from 2020-07-18 13-35-27.png]
>
>
> On Fri, 10 Jul 2020 at 19:00, Martin Grigorov wrote:
>
>> Hello HAProxy community,
>>
>> I wanted to compare how the newly released HAProxy 2.2 (Congrats!)
>> behaves under heavy load so I've run some tests on my x86_64 and aarch64
>> VMs:
>>
>>
>> https://medium.com/@martin.grigorov/compare-haproxy-performance-on-x86-64-and-arm64-cpu-architectures-bfd55d1d5566
>>
>> Without much surprise the x86_64 VM gave better results!
>>
>> It is *not* a real use case scenario: the backends serve on GET / and
>> return "Hello World", without any file/network operations.
>>
>> What is interesting though is that I can get 120-160K reqs/sec when
>> hitting directly one of the backend servers, and only 20-40K reqs/sec when
>> using HAProxy as a load balancer in front of them.
>>
>> I'd be happy to re-run the tests with any kind of improvements you may
>> have!
>>
>> Regards,
>> Martin
>>
>


Re: Log levels when logging to stdout

2020-07-16 Thread Martin Grigorov
On Thu, Jul 16, 2020 at 10:22 AM Jerome Magnin  wrote:

> Hi Martin,
>
> On Thu, Jul 16, 2020 at 10:05:40AM +0300, Martin Grigorov wrote:
> >
> > I am using such logging configuration (HAProxy built from master branch):
> >
> > global
> >   log stdout format raw local0 err
> >   ...
> > defaults
> >   log global
> >   option dontlog-normal
> >   option httplog
> >   option dontlognull
> >   ...
> >
> > But HAProxy still logs entries like the following:
> >
> > 0001:test_fe.clireq[001c:]: GET /haproxy-load HTTP/1.1
> > 0002:test_fe.clihdr[001e:]: host: 192.168.0.206:8080
> > 0006:test_fe.accept(0004)=0020 from [:::192.168.0.72:46768]
> > ALPN=
> > 0001:test_fe.clihdr[001c:]: host: 192.168.0.206:8080
> > 0002:test_fe.clihdr[001e:]: content-type: application/json
> > 0004:test_fe.accept(0004)=0024 from [:::192.168.0.72:46754]
> > ALPN=
> > 0005:test_fe.accept(0004)=001f from [:::192.168.0.72:46766]
> > ALPN=
> > 0001:test_fe.clihdr[001c:]: content-type: application/json
> > 0007:test_fe.accept(0004)=0025 from [:::192.168.0.72:46776]
> > ALPN=
> > 0003:test_fe.clireq[0022:]: GET /haproxy-load HTTP/1.1
> > 0008:test_fe.accept(0004)=001d from [:::192.168.0.72:46756]
> > ALPN=
> > 0004:test_fe.clireq[0024:]: GET /haproxy-load HTTP/1.1
> > 0003:test_fe.clihdr[0022:]: host: 192.168.0.206:8080
> > 0004:test_fe.clihdr[0024:]: host: 192.168.0.206:8080
> >
> > Those do not look like errors but in any case I tried also with 'emerg'
> log
> > level instead of 'err'
> >  and this didn't change anything.
>
> This is haproxy debug output, available when you start with haproxy -d.
>

Thanks, Jerome!
This was the problem!
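In other words, the difference is in how the process is started (a sketch; the config file name is illustrative, the -d flag is haproxy's documented debug mode):

```shell
# -d enables debug mode: haproxy stays in the foreground and dumps
# per-request clireq/clihdr traces to stdout, regardless of 'log' settings
haproxy -d -f haproxy.cfg

# Without -d, only the configured 'log' directives apply, so
# 'log stdout format raw local0 err' + 'option dontlog-normal' stays quiet
haproxy -f haproxy.cfg
```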

Martin


>
> >
> > Do I configure it in a wrong way ?
> > I want HAProxy to log only when there is a problem. Because now it logs
> few
> > GBs of those when I load it and this affects the performance.
> > https://medium.com/@martin.grigorov/hi-willy-476dee6439d3
>
> How do you start haproxy ?
>
> --
> Jérôme
>


Log levels when logging to stdout

2020-07-16 Thread Martin Grigorov
Hello HAProxy community,

I am using such logging configuration (HAProxy built from master branch):

global
  log stdout format raw local0 err
  ...
defaults
  log global
  option dontlog-normal
  option httplog
  option dontlognull
  ...

But HAProxy still logs entries like the following:

0001:test_fe.clireq[001c:]: GET /haproxy-load HTTP/1.1
0002:test_fe.clihdr[001e:]: host: 192.168.0.206:8080
0006:test_fe.accept(0004)=0020 from [:::192.168.0.72:46768]
ALPN=
0001:test_fe.clihdr[001c:]: host: 192.168.0.206:8080
0002:test_fe.clihdr[001e:]: content-type: application/json
0004:test_fe.accept(0004)=0024 from [:::192.168.0.72:46754]
ALPN=
0005:test_fe.accept(0004)=001f from [:::192.168.0.72:46766]
ALPN=
0001:test_fe.clihdr[001c:]: content-type: application/json
0007:test_fe.accept(0004)=0025 from [:::192.168.0.72:46776]
ALPN=
0003:test_fe.clireq[0022:]: GET /haproxy-load HTTP/1.1
0008:test_fe.accept(0004)=001d from [:::192.168.0.72:46756]
ALPN=
0004:test_fe.clireq[0024:]: GET /haproxy-load HTTP/1.1
0003:test_fe.clihdr[0022:]: host: 192.168.0.206:8080
0004:test_fe.clihdr[0024:]: host: 192.168.0.206:8080

Those do not look like errors, but in any case I also tried with the
'emerg' log level instead of 'err', and this didn't change anything.

Am I configuring it in a wrong way?
I want HAProxy to log only when there is a problem, because now it logs a
few GBs of those entries when I load test it, and this affects the performance.
https://medium.com/@martin.grigorov/hi-willy-476dee6439d3

Regards,
Martin


Load testing HAProxy 2.2 on x86_64 and aarch64 VMs

2020-07-10 Thread Martin Grigorov
Hello HAProxy community,

I wanted to compare how the newly released HAProxy 2.2 (Congrats!) behaves
under heavy load so I've run some tests on my x86_64 and aarch64 VMs:

https://medium.com/@martin.grigorov/compare-haproxy-performance-on-x86-64-and-arm64-cpu-architectures-bfd55d1d5566

Without much surprise the x86_64 VM gave better results!

It is *not* a real use case scenario: the backends serve on GET / and
return "Hello World", without any file/network operations.

What is interesting though is that I can get 120-160K reqs/sec when hitting
directly one of the backend servers, and only 20-40K reqs/sec when using
HAProxy as a load balancer in front of them.

I'd be happy to re-run the tests with any kind of improvements you may have!

Regards,
Martin


Re: ssl_c_sha256 ?

2020-06-29 Thread Stephane Martin (stepham2)
Perfect, thank you all. Classical choice between "upgrade" and "backport" now :-)

On 29/06/2020 at 12:59, Tim Düsterhus wrote:

Stephane,

Am 29.06.20 um 12:56 schrieb Stephane Martin (stepham2):
> Thank you for your quick answers!
> 
> So I understand that it is possible for haproxy >= 2.1. For haproxy 2.0, 
got to backport the sha2 filter, right ?

That is correct. I expect the commit I linked to apply pretty seamlessly
to HAProxy 2.0, it contains all you need.

One small note: The correct terminology for "sha2 filter" is "sha2
converter".

Best regards
Tim Düsterhus



Re: ssl_c_sha256 ?

2020-06-29 Thread Stephane Martin (stepham2)
Thank you for your quick answers!

So I understand that it is possible for haproxy >= 2.1. For haproxy 2.0, got to 
backport the sha2 filter, right ?

Stephane


On 29/06/2020 at 12:54, Tim Düsterhus wrote:

Jarno,

Am 29.06.20 um 12:46 schrieb Jarno Huuskonen:
>> The ssl_c_sha1 is simply a hash of the DER representation of the
>> certificate. So you can just hash it with the sha2 converter:
>>
>> ssl_c_sha256,sha2(256)
> 
> I think the first fetch should be ssl_c_der ?
> (ssl_c_der,sha2(256))
> 

You are right, of course.

While adjusting the example from the commit message I replaced the 'der'
instead of the 'f'.
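Putting the corrected fetch together, a pinning setup could look like this (a sketch only, untested here; the fingerprint value, port and file paths are placeholders):

```
frontend fe_mtls
    bind :8443 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/ca.pem verify required
    # hex-encoded SHA-256 of the client certificate's DER form
    acl pinned ssl_c_der,sha2(256),hex -i 9F86D081884C7D659A2FEAA0C55AD015A3BF4F1B2B0B822CD15D6C15B0F00A08
    http-request deny unless pinned
```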

Best regards
Tim Düsterhus



ssl_c_sha256 ?

2020-06-29 Thread Stephane Martin (stepham2)
Hello,

I'm trying to set up TLS mutual authentication using pinned certificates in
haproxy, i.e. only accept a precise known certificate from the peer.

It is definitely possible using an ACL and ssl_c_sha1, so that the route
will only be accessible if the peer certificate has the right SHA1
fingerprint.

But sha1 usage is strongly discouraged for compliance reasons (you can
understand why...).

In the haproxy documentation I don't see any option to work with the sha256
fingerprint of the peer certificate.

- Is there any other way to get that ?
- If it needs to be implemented in haproxy, would you have any clue where to 
start ?

Kind regards,
Stephane





Broken builds after "REORG: dgram: rename proto_udp to dgram"

2020-06-11 Thread Martin Grigorov
Hello,

Just FYI: the build is broken since this commit:
https://github.com/haproxy/haproxy/commit/7c18b54106ec21273aea3fe59ba23280e86821f5

https://travis-ci.com/github/haproxy/haproxy/builds

Regards,
Martin


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-17 Thread Martin Grigorov
Hi Willy,

On Fri, May 15, 2020 at 6:07 PM Willy Tarreau  wrote:

> Ilya,
>
> > also, I'd suggest to purge travis-ci cache (if you are build in your own
> > fork).
> > some travis related issue might be related when something is took from
> > cache (which was not supposed to happen)
>
> Could you please handle Martin's patch, possibly cut it into several
> pieces if relevant and add a commit message indicating what it does
> (and why) ? Martin is not at ease with Git (which is not a problem),
> and it seems only him and you understand how the reasons of the changes
> in his patch. At least it's totally unclear to me why there's a new
> install target for arm64 and why there's a special "make" invocation
> there.
>

Let me explain the change.
At
https://github.com/haproxy/haproxy/blob/a8dbdf3c4b463a3f3e018f0cd02fa0d8d179bc07/.travis.yml#L113-L117
you
may see the default 'install' phase.
At
https://github.com/haproxy/haproxy/blob/a8dbdf3c4b463a3f3e018f0cd02fa0d8d179bc07/.travis.yml#L12-L19
is
the default environment.
They are used by every job from the matrix (
https://github.com/haproxy/haproxy/blob/a8dbdf3c4b463a3f3e018f0cd02fa0d8d179bc07/.travis.yml#L35
).
But each job can override the default environment and any of the phases
(before_install, install, after_install, script).
For the ARM64 build I overwrote the 'install' phase by copying the default
one and removing the execution of the build_ssl() function (the one that
builds OpenSSL from source) and I also overwrote the environment to update
the values of SSL_INC and SSL_LIB variables.
'openssl' and 'libssl-dev' packages are already installed in the Ubuntu
image used by TravisCI, so there is nothing to install manually. I've added
a comment (
https://github.com/haproxy/haproxy/blob/a8dbdf3c4b463a3f3e018f0cd02fa0d8d179bc07/.travis.yml#L47)
to remind us how it works.
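Schematically, the override amounts to something like this (a heavily hedged sketch: the env values and install steps below are assumptions for illustration, simplified from the .travis.yml lines linked above rather than copied from them):

```yaml
# Hypothetical arm64 job entry: override the default environment so
# SSL_INC/SSL_LIB point at the distro's openssl, and override 'install'
# so that build_ssl() (building OpenSSL from source) is never run
- arch: arm64
  env: SSL_INC=/usr/include SSL_LIB=/usr/lib/aarch64-linux-gnu
  install:
    # same as the default install phase, minus the build_ssl call
    - git clone https://github.com/VTest/VTest.git ../vtest
```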


> Feel free to add your "purge cache" change as an extra patch if needed.
> But in any case, please make sure it's still possible to follow the
> impact of each change, because we've touched many things blindly for
> a while on this arm64 issue and most of the changes were basically
> "let's see if this helps", which is a real mess :-/
>
> Thanks!
> Willy
>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-17 Thread Martin Grigorov
Thank you for applying the patch!

The ARM64 build passed at
https://travis-ci.com/github/haproxy/haproxy/jobs/335338296 !
But it passes only for the builds which have 'haproxy-mirror' :
https://travis-ci.com/github/haproxy/haproxy/builds
I am not sure what exactly "haproxy-mirror" is in the TravisCI config.
The builds which do not have "haproxy-mirror" next to the branch name also
do not have ARM64 build at all, e.g.
https://travis-ci.com/github/haproxy/haproxy/builds/166573019.

One thing I notice is that the successful link above has *jobs* and the
failing one *builds* before the job/build id.

On Fri, May 15, 2020 at 9:53 PM Willy Tarreau  wrote:

> On Fri, May 15, 2020 at 11:44:48PM +0500, Илья Шипицин wrote:
> > commit message adjusted
>
> Many thanks for this, Ilya, now pushed!
>
> Willy
>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-15 Thread Martin Grigorov
Those are set to new values at
https://github.com/haproxy/haproxy/pull/630/files#diff-354f30a63fb0907d4ad57269548329e3R51

On Fri, May 15, 2020 at 1:11 PM Илья Шипицин  wrote:

> or we'd better move SSL_LIB, SSL_INC to build-ssl.sh script
>
> On Fri, 15 May 2020 at 15:09, Илья Шипицин wrote:
>
>> probably, you also need to unset SSL_LIB and SSL_INC
>>
>>
>>
>> btw, I got an answer how to grant travis-ci rights (for triggering build
>> manually)
>>
>> https://travis-ci.community/t/undocumented-require-admin-permissions/8530
>>
>>
>> On Fri, 15 May 2020 at 14:57, Martin Grigorov wrote:
>>
>>> Hi,
>>>
>>> I've created https://github.com/haproxy/haproxy/pull/630
>>> With this change the build passed successfully for 5 mins and 7 secs for
>>> ARM64.
>>>
>>> Please let me know if you prefer me to send it as an attached .patch
>>> file here. (I haven't used `git format-patch` before :-/).
>>>
>>> Martin
>>>
>>> On Mon, May 11, 2020 at 12:38 PM Илья Шипицин 
>>> wrote:
>>>
>>>>
>>>>
>>>>> On Sat, 9 May 2020 at 11:45, Willy Tarreau wrote:
>>>>
>>>>> On Sat, May 09, 2020 at 08:11:27AM +0200, Vincent Bernat wrote:
>>>>> > On 8 May 2020 at 14:25 +02, Willy Tarreau wrote:
>>>>> >
>>>>> > >> > Let's increase the timeout to see if it has a chance to finish,
>>>>> no ?
>>>>> > >> >
>>>>> > >>
>>>>> > >> yes
>>>>> > >
>>>>> > > OK now pushed. It's really annoying to work blindly like this. The
>>>>> > > build model Travis uses is broken by design. Requiring to commit
>>>>> > > something for testing is utterly wrong. And doing so within the
>>>>> > > project that's supposed to be tested is further wrong. We already
>>>>> > > have 44 patches only on .travis.yml! If this continues like this,
>>>>> > > I predict that a "pre-CI" solution will appear to test if your
>>>>> > > change is likely to trigger a travis error before it gets merged...
>>>>> >
>>>>> > You can push changes to a (throwable) branch instead.
>>>>>
>>>>> Good point, that can also be a solution. But it remains completely
>>>>> hackish. It's basically abusing a versioning system to use it as a
>>>>> messaging system to indicate "please build with this".
>>>>>
>>>>> Willy
>>>>>
>>>>
>>>>
>>>> I created several topics (no answer yet).
>>>>
>>>> as for travis-ci rights, it's totally undocumented. but I suspect
>>>> travis grants
>>>> rights based on github rights. i.e. github admin becomes travis admin
>>>> as well.
>>>>
>>>> https://travis-ci.community/t/arm64-fails-with-non-clear-reason/8529
>>>>
>>>>
>>>> https://travis-ci.community/t/undocumented-require-admin-permissions/8530
>>>>
>>>>
>>>> https://travis-ci.community/t/undocumented-operation-requires-create-request-access-to-repository/8528
>>>>
>>>>
>>>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-15 Thread Martin Grigorov
Hi,

I've created https://github.com/haproxy/haproxy/pull/630
With this change the build passed successfully for 5 mins and 7 secs for
ARM64.

Please let me know if you prefer me to send it as an attached .patch file
here. (I haven't used `git format-patch` before :-/).

Martin

On Mon, May 11, 2020 at 12:38 PM Илья Шипицин  wrote:

>
>
> On Sat, 9 May 2020 at 11:45, Willy Tarreau wrote:
>
>> On Sat, May 09, 2020 at 08:11:27AM +0200, Vincent Bernat wrote:
>> > On 8 May 2020 at 14:25 +02, Willy Tarreau wrote:
>> >
>> > >> > Let's increase the timeout to see if it has a chance to finish, no
>> ?
>> > >> >
>> > >>
>> > >> yes
>> > >
>> > > OK now pushed. It's really annoying to work blindly like this. The
>> > > build model Travis uses is broken by design. Requiring to commit
>> > > something for testing is utterly wrong. And doing so within the
>> > > project that's supposed to be tested is further wrong. We already
>> > > have 44 patches only on .travis.yml! If this continues like this,
>> > > I predict that a "pre-CI" solution will appear to test if your
>> > > change is likely to trigger a travis error before it gets merged...
>> >
>> > You can push changes to a (throwable) branch instead.
>>
>> Good point, that can also be a solution. But it remains completely
>> hackish. It's basically abusing a versioning system to use it as a
>> messaging system to indicate "please build with this".
>>
>> Willy
>>
>
>
> I created several topics (no answer yet).
>
> as for travis-ci rights, it's totally undocumented. but I suspect travis
> grants
> rights based on github rights. i.e. github admin becomes travis admin as
> well.
>
> https://travis-ci.community/t/arm64-fails-with-non-clear-reason/8529
>
> https://travis-ci.community/t/undocumented-require-admin-permissions/8530
>
>
> https://travis-ci.community/t/undocumented-operation-requires-create-request-access-to-repository/8528
>
>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-08 Thread Martin Grigorov
On Fri, May 8, 2020 at 10:54 AM Илья Шипицин  wrote:

>
>
> On Fri, 8 May 2020 at 12:26, Willy Tarreau wrote:
>
>> On Fri, May 08, 2020 at 09:34:32AM +0300, Martin Grigorov wrote:
>> > It must have started failing when you updated the version of OpenSSL.
>> > .travis.yml caches ~/opt folder between builds. After the update to
>> 1.1.1f
>> > the build doesn't see the OpenSSL binaries in the cache anymore and
>> tries
>> > to download it and build it.
>> > But as I've noticed in my attempt to build HAProxy with Docker+QEMU the
>> > build of OpenSSL is taking too long.
>> > The build of OpenSSL is wrapped with travis_wait to reduce the writes to
>> > stdout but the default time for travis_wait is 20 mins and this is not
>> > enough to build OpenSSL.
>>
>> That's very likely indeed.
>>
>
>
> it is not :)
> I provided link to my fork, openssl build takes 640 sec
>
>
>>
>> > Due to
>> >
>> https://travis-ci.community/t/output-is-truncated-heavily-in-arm64-when-a-command-hangs/7630
>> > TravisCI
>> > does not properly report that the problem is at build_ssl() step but
>> shows
>> > the last chunk of the buffered response and this confuses us all.
>>
>> Ah, excellent, precisely what I was looking for. And some indicate
>> that the buffering further causes issues in the build system itself!
>>
>> > I think the build will become green if we extend travis_wait to a higher
>> > value (
>> >
>> https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received
>> ).
>> > I don't remember where I have read it but I think the upper limit is 120
>> > mins.
>> > @Willy: could you please change
>> > https://github.com/haproxy/haproxy/blob/master/.travis.yml#L112 to:
>> >
>> > travis_wait 120 bash -c 'scripts/build-ssl.sh >build-ssl.log 2>&1' ||
>> (cat
>> > build-ssl.log && exit 1)
>> >
>> > i.e. add '120' after travis_wait
>>
>> We could, but 120 (2 hours) seems a bit extreme. It also means we can
>> steal
>> 2 hours of CPU there in case something goes wrong.
>>
>
> travis-ci limits build at 60 min.
>
>
>>
>> > This should give it the time to download and install OpenSSL 1.1.1f and
>> to
>> > cache it. If the build passes once then the next builds should be much
>> > faster because OpenSSL will be used from the cache.
>>
>> I'm wondering why instead we don't fallback on an already packaged version
>> of OpenSSL for this platform. I mean, sure it's convenient to test the
>> latest version but it's already tested on x86 and we could very well use
>> any other version already packaged on the distro present there. This would
>> solve the problem and even increase versions coverage.
>>
>> Ilya, what do you think ?
>>
>
> yes, there are few options to think about.
> I also provided options (numbered 1..4), Hopefully, I think on your
> suggestions, Martin will think on my suggestion ... and we will decide what
> next would be.
>

I still believe the problem is in the time needed to build OpenSSL, not in
apt[-get].

We can temporarily increase travis_wait to 60 just to cache the build
of 1.1.1f and then remove the '60' again. Although if this is the problem,
'travis_wait 60 ...' won't do any harm for the next builds, because they
will use the cache and return in a few secs.

I like the idea of installing openssl with apt, but I am not sure whether
we can do this *only* for ARM64.


>
>
>>
>> Thanks,
>> Willy
>>
>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-08 Thread Martin Grigorov
Hi,

I think I understand why it started failing.
It must have started failing when you updated the version of OpenSSL.
.travis.yml caches ~/opt folder between builds. After the update to 1.1.1f
the build doesn't see the OpenSSL binaries in the cache anymore and tries
to download it and build it.
But as I've noticed in my attempt to build HAProxy with Docker+QEMU the
build of OpenSSL is taking too long.
The build of OpenSSL is wrapped with travis_wait to reduce the writes to
stdout but the default time for travis_wait is 20 mins and this is not
enough to build OpenSSL.
Due to
https://travis-ci.community/t/output-is-truncated-heavily-in-arm64-when-a-command-hangs/7630
TravisCI
does not properly report that the problem is at build_ssl() step but shows
the last chunk of the buffered response and this confuses us all.
I think the build will become green if we extend travis_wait to a higher
value (
https://docs.travis-ci.com/user/common-build-problems/#build-times-out-because-no-output-was-received).
I don't remember where I have read it but I think the upper limit is 120
mins.
@Willy: could you please change
https://github.com/haproxy/haproxy/blob/master/.travis.yml#L112 to:

travis_wait 120 bash -c 'scripts/build-ssl.sh >build-ssl.log 2>&1' || (cat
build-ssl.log && exit 1)

i.e. add '120' after travis_wait

This should give it the time to download and install OpenSSL 1.1.1f and to
cache it. If the build passes once then the next builds should be much
faster because OpenSSL will be used from the cache.
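The cache-hit logic described above can be made explicit, so the long travis_wait is only paid on a cache miss; a sketch (the ~/opt prefix matches the cached folder mentioned earlier, the helper name is made up):

```shell
#!/usr/bin/env bash
# Return success (0) when OpenSSL still needs to be built, i.e. when the
# cached binary is missing or reports a different version than wanted.
need_ssl_build() {
  local prefix=$1 want=$2
  local bin="$prefix/bin/openssl"
  [ -x "$bin" ] || return 0                            # no cache: build
  if "$bin" version 2>/dev/null | grep -q "$want"; then
    return 1                                           # cache hit: skip
  fi
  return 0                                             # stale cache: rebuild
}

# In CI one would wrap only the slow path, e.g.:
#   if need_ssl_build "$HOME/opt" "1.1.1f"; then
#     travis_wait 60 bash -c 'scripts/build-ssl.sh >build-ssl.log 2>&1'
#   fi
```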

Regards,
Martin

On Fri, May 8, 2020 at 9:18 AM Willy Tarreau  wrote:

> Hi Martin,
>
> On Fri, May 08, 2020 at 08:56:07AM +0300, Martin Grigorov wrote:
> > Unfortunately it is not good:
> > https://travis-ci.com/github/haproxy/haproxy/jobs/329657180
>
> Indeed it's still not fixed on Travis' side. However what Ilya did
> actually worked, in that the status is not reported as a global
> build failure anymore. This allows us to continue to monitor if
> and when this issue finally resolves on the build infrastructure.
> It's also possible that they're not aware of the problem due to
> too few people using arm64. If someone has contacts there it might
> be worth checking with them. All we know for now is that it seems
> to stop moving while setting up libpcre2. Maybe there's a bug in
> a script in this package, resulting in a prompt for a question
> which never gets a response :-/  But that's something we can't
> check since we don't have access to an interactive shell there
> to diagnose.
>
> Willy
>


Re: [PATCH] enable arm64 builds in travis-ci

2020-05-07 Thread Martin Grigorov
Hi all,

On Thu, May 7, 2020 at 11:56 PM Willy Tarreau  wrote:

> Hi Ilya,
>
> On Thu, May 07, 2020 at 09:19:48PM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > let us enable arm64 builds back.
>
> Good idea, just merged now. Let's see how that ends up now.
>

Unfortunately it is not good:
https://travis-ci.com/github/haproxy/haproxy/jobs/329657180

Martin


>
> Thanks,
> Willy
>
>


Re: [PATCH] fix errored ARM64 builds in travis-ci

2020-05-06 Thread Martin Grigorov
Hi,

I've just created a PR (https://github.com/haproxy/haproxy/pull/617/files)
that introduces testing on ARM64/AARCH64 at GitHub Actions.
It almost works! There are a few tests that fail. Any help finding the
reason is very welcome!

Martin

On Mon, Mar 23, 2020 at 11:12 AM Martin Grigorov 
wrote:

> Hi Илья,
>
> On Sun, Mar 22, 2020 at 2:46 PM Илья Шипицин  wrote:
>
>> Martin,
>>
>> as the one of the most interested in ARM64 builds, I've got news for you
>>
>>
>> can you try
>>
>> travis_wait 30 bash -c 'scripts/build-ssl.sh >build-ssl.log 2>&1' || (cat
>> build-ssl.log && exit 1)
>>
>> in travis ? (please not "travis_wait 30" instead of "travis_wait")
>>
>
> it is running at the moment here:
> https://travis-ci.org/github/martin-g/haproxy/builds/665770469
>
>
>>
>>
>> also, it might be important to clear travis cache from time to time.
>> as for myself, "travis_wait 30" helped me to resolve similar issue on
>> another project (in my own fork haproxy on arm64 builds just fine)
>>
>> ср, 18 мар. 2020 г. в 23:35, Илья Шипицин :
>>
>>> well, there are several topics on travis-ci forum related to "output on
>>> ARM64 got truncated in the mid of ..."
>>> Let us disable ARM64 travis-ci builds for few months.
>>>
>>> Martin, I'll play with hosted github runner in order to find a way how
>>> we can limit its builds to allowed only.
>>>
>>> ср, 18 мар. 2020 г. в 18:57, Martin Grigorov :
>>>
>>>>
>>>> Current master's build passed the problematic point in my TravisCI
>>>> project: https://travis-ci.org/github/martin-g/haproxy/jobs/663953359
>>>> Note: I use TravisCI .org while HAProxy's official project is at .com:
>>>> https://travis-ci.com/github/haproxy/haproxy
>>>> I also think this is a problem on TravisCI's end.
>>>>
>>>> Martin
>>>>
>>>> On Wed, Mar 18, 2020 at 3:43 PM Илья Шипицин 
>>>> wrote:
>>>>
>>>>> I will disable PR builds.
>>>>>
>>>>> On Wed, Mar 18, 2020, 6:27 PM Willy Tarreau  wrote:
>>>>>
>>>>>> On Wed, Mar 18, 2020 at 06:21:15PM +0500, Илья Шипицин wrote:
>>>>>> > let us calm down a bit :)
>>>>>>
>>>>>> Agreed, especially since the build on PRs already happens and already
>>>>>> adds noise.
>>>>>>
>>>>>> > yes, I still believe it is because of buffering. I might have missed
>>>>>> > something.
>>>>>> > unless I will repair it, I'll drop arm64 support on travis (and we
>>>>>> will
>>>>>> > switch to self hosted github action runner)
>>>>>>
>>>>>> OK.
>>>>>>
>>>>>> Willy
>>>>>>
>>>>>


Re: Failing tests if USE_OPENSSL=1 is omitted in the FLAGS

2020-05-06 Thread Martin Grigorov
Hi Илья,

On Wed, May 6, 2020 at 11:59 AM Илья Шипицин  wrote:

> do you run tests on GH arm64 agents ? is it dedicated (your own) agents
> attached to your repo ? can you give a link ?
>

I use Docker + QEMU with GH hosted runner.
You can see the current diff at
https://github.com/haproxy/haproxy/compare/master...martin-g:feature/test-aarch64-on-github-actions
`windows-latest.yml` is temporarily disabled until I'm ready with my
experiments.

As explained at
https://help.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners#self-hosted-runner-security-with-public-repositories
using self-hosted runners is not secure if they are used for Pull Requests.
They can be used for pushes to the `master` branch though, since that is
supposed to be reviewed code!

But the test failures I reported below occur both on my local machine
(x86_64) and on my ARM64 VM. I do not use Docker and/or QEMU there.

Martin


> ср, 6 мая 2020 г. в 13:22, Martin Grigorov :
>
>> Hello HAProxy team,
>>
>> While working on a PR to build & test HAProxy on AARCH64 at GitHub
>> Actions I've noticed a strange behavior for some of the tests.
>>
>> To reduce the time of the build I've removed USE_OPENSSL=1 from the FLAGS
>> [1] passed to "make".
>> The build passed successfully, some of the tests are skipped because they
>> depend on SSL library, e.g.:
>>
>> ...
>> Add test: reg-tests/lua/txn_get_priv.vtc
>>   Add test: reg-tests/lua/h_txn_get_priv.vtc
>>   Skip reg-tests/ssl/wrong_ctx_storage.vtc because haproxy is not
>> compiled with the required option OPENSSL
>>   Skip reg-tests/ssl/ssl_client_auth.vtc because haproxy is not compiled
>> with the required option OPENSSL
>>   Skip reg-tests/ssl/add_ssl_crt-list.vtc because haproxy is not compiled
>> with the required option OPENSSL
>>   Skip reg-tests/ssl/set_ssl_cert.vtc because haproxy is not compiled
>> with the required option OPENSSL
>> ...
>>
>> but few tests just fail:
>>
>> Testing with haproxy version: 2.2-dev7
>> #top  TEST reg-tests/lua/txn_get_priv.vtc FAILED (5.008) exit=2
>> #top  TEST reg-tests/compression/lua_validation.vtc TIMED OUT (kill
>> -9)
>> #top  TEST reg-tests/compression/lua_validation.vtc FAILED (10.009)
>> signal=9
>> 2 tests failed, 0 tests skipped, 64 tests passed
>> ## Gathering results ##
>> ## Test case: reg-tests/compression/lua_validation.vtc ##
>> ## test results in:
>> "/tmp/haregtests-2020-05-06_09-08-49.WqBiWC/vtc.14212.7fffd08a"
>> ## Test case: reg-tests/lua/txn_get_priv.vtc ##
>> ## test results in:
>> "/tmp/haregtests-2020-05-06_09-08-49.WqBiWC/vtc.14212.065f2677"
>>  c0   HTTP rx timeout (fd:6 5000 ms)
>>  h1   Bad exit status: 0x0100 exit 0x1 signal 0 core 0
>> Makefile:995: recipe for target 'reg-tests' failed
>> make: *** [reg-tests] Error 1
>>
>> These tests fail both on AARCH64 and x86_64 consistently until I add back
>> USE_OPENSSL=1
>> to the flags.
>>
>> Is there an issue with these tests ?
>>
>> 1.
>> https://github.com/haproxy/haproxy/blob/fafa13dd6549ee431f41dc3c1857855974d79bea/.travis.yml#L14
>>
>> Regards,
>> Martin
>>
>


Failing tests if USE_OPENSSL=1 is omitted in the FLAGS

2020-05-06 Thread Martin Grigorov
Hello HAProxy team,

While working on a PR to build & test HAProxy on AARCH64 at GitHub Actions
I've noticed a strange behavior for some of the tests.

To reduce the time of the build I've removed USE_OPENSSL=1 from the FLAGS
[1] passed to "make".
The build passed successfully, some of the tests are skipped because they
depend on SSL library, e.g.:

...
Add test: reg-tests/lua/txn_get_priv.vtc
  Add test: reg-tests/lua/h_txn_get_priv.vtc
  Skip reg-tests/ssl/wrong_ctx_storage.vtc because haproxy is not compiled
with the required option OPENSSL
  Skip reg-tests/ssl/ssl_client_auth.vtc because haproxy is not compiled
with the required option OPENSSL
  Skip reg-tests/ssl/add_ssl_crt-list.vtc because haproxy is not compiled
with the required option OPENSSL
  Skip reg-tests/ssl/set_ssl_cert.vtc because haproxy is not compiled with
the required option OPENSSL
...

but a few tests just fail:

Testing with haproxy version: 2.2-dev7
#top  TEST reg-tests/lua/txn_get_priv.vtc FAILED (5.008) exit=2
#top  TEST reg-tests/compression/lua_validation.vtc TIMED OUT (kill -9)
#top  TEST reg-tests/compression/lua_validation.vtc FAILED (10.009)
signal=9
2 tests failed, 0 tests skipped, 64 tests passed
## Gathering results ##
## Test case: reg-tests/compression/lua_validation.vtc ##
## test results in:
"/tmp/haregtests-2020-05-06_09-08-49.WqBiWC/vtc.14212.7fffd08a"
## Test case: reg-tests/lua/txn_get_priv.vtc ##
## test results in:
"/tmp/haregtests-2020-05-06_09-08-49.WqBiWC/vtc.14212.065f2677"
 c0   HTTP rx timeout (fd:6 5000 ms)
 h1   Bad exit status: 0x0100 exit 0x1 signal 0 core 0
Makefile:995: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1

These tests fail both on AARCH64 and x86_64 consistently until I add back
USE_OPENSSL=1
to the flags.

Is there an issue with these tests ?

1.
https://github.com/haproxy/haproxy/blob/fafa13dd6549ee431f41dc3c1857855974d79bea/.travis.yml#L14

Regards,
Martin


Re: regtest: abns should work now :-)

2020-04-03 Thread Martin Grigorov
Hi everyone,

On Mon, Mar 23, 2020 at 11:11 AM Martin Grigorov 
wrote:

> Hi Илья,
>
> On Mon, Mar 23, 2020 at 10:52 AM Илья Шипицин 
> wrote:
>
>> well, I tried to repro abns failures on x86_64
>> I chose MS Azure VM of completely different size, both number of CPU and
>> RAM.
>> it was never reproduced, say on 1000 execution in loop.
>>
>> so, I decided "it looks like something with memory aligning".
>> also, I tried to run arm64 emulation on virtualbox. no luck yet.
>>
>
>



> Have you tried with multiarch Docker ?
>
> 1) execute
> docker run --rm --privileged multiarch/qemu-user-static:register --reset
> to register QEMU
>
> 2) create Dockerfile
> for Centos use: FROM multiarch/centos:7-aarch64-clean
> for Ubuntu use: FROM multiarch/ubuntu-core:arm64-bionic
>
> 3) enjoy :-)
>

Here is a PR for Varnish Cache project where I use Docker + QEMU to build
and package for several Linux distros and two architectures:
https://github.com/varnishcache/varnish-cache/pull/3263
They use CircleCI but I guess the same approach can be applied on GitHub
Actions.
If you are interested in this approach I could give it a try.


Regards,
Martin


>
>
>>
>> пн, 23 мар. 2020 г. в 13:43, Willy Tarreau :
>>
>>> Hi Ilya,
>>>
>>> I think this time I managed to fix the ABNS test. To make a long story
>>> short, it was by design extremely sensitive to the new process's startup
>>> time, which is increased with larger FD counts and/or less powerful VMs
>>> and/or noisy neighbors. This explains why it started to misbehave with
>>> the commit which relaxed the maxconn limitations. A starting process
>>> stealing a few ms of CPU from the old one could make its keep-alive
>>> timeout expire before it got a new request on a reused connection,
>>> resulting in an empty response as reported by the client.
>>>
>>> I'm going to issue dev5 now. s390x is currently down but all x86 ones
>>> build and run fine for now.
>>>
>>> Cheers,
>>> Willy
>>>
>>


Re: [PATCH] fix errored ARM64 builds in travis-ci

2020-03-23 Thread Martin Grigorov
Hi Илья,

On Sun, Mar 22, 2020 at 2:46 PM Илья Шипицин  wrote:

> Martin,
>
> as the one of the most interested in ARM64 builds, I've got news for you
>
>
> can you try
>
> travis_wait 30 bash -c 'scripts/build-ssl.sh >build-ssl.log 2>&1' || (cat
> build-ssl.log && exit 1)
>
> in travis ? (please not "travis_wait 30" instead of "travis_wait")
>

it is running at the moment here:
https://travis-ci.org/github/martin-g/haproxy/builds/665770469


>
>
> also, it might be important to clear travis cache from time to time.
> as for myself, "travis_wait 30" helped me to resolve similar issue on
> another project (in my own fork haproxy on arm64 builds just fine)
>
> ср, 18 мар. 2020 г. в 23:35, Илья Шипицин :
>
>> well, there are several topics on travis-ci forum related to "output on
>> ARM64 got truncated in the mid of ..."
>> Let us disable ARM64 travis-ci builds for few months.
>>
>> Martin, I'll play with hosted github runner in order to find a way how we
>> can limit its builds to allowed only.
>>
>> ср, 18 мар. 2020 г. в 18:57, Martin Grigorov :
>>
>>>
>>> Current master's build passed the problematic point in my TravisCI
>>> project: https://travis-ci.org/github/martin-g/haproxy/jobs/663953359
>>> Note: I use TravisCI .org while HAProxy's official project is at .com:
>>> https://travis-ci.com/github/haproxy/haproxy
>>> I also think this is a problem on TravisCI's end.
>>>
>>> Martin
>>>
>>> On Wed, Mar 18, 2020 at 3:43 PM Илья Шипицин 
>>> wrote:
>>>
>>>> I will disable PR builds.
>>>>
>>>> On Wed, Mar 18, 2020, 6:27 PM Willy Tarreau  wrote:
>>>>
>>>>> On Wed, Mar 18, 2020 at 06:21:15PM +0500, Илья Шипицин wrote:
>>>>> > let us calm down a bit :)
>>>>>
>>>>> Agreed, especially since the build on PRs already happens and already
>>>>> adds noise.
>>>>>
>>>>> > yes, I still believe it is because of buffering. I might have missed
>>>>> > something.
>>>>> > unless I will repair it, I'll drop arm64 support on travis (and we
>>>>> will
>>>>> > switch to self hosted github action runner)
>>>>>
>>>>> OK.
>>>>>
>>>>> Willy
>>>>>
>>>>


Re: regtest: abns should work now :-)

2020-03-23 Thread Martin Grigorov
Hi Илья,

On Mon, Mar 23, 2020 at 10:52 AM Илья Шипицин  wrote:

> well, I tried to repro abns failures on x86_64
> I chose MS Azure VM of completely different size, both number of CPU and
> RAM.
> it was never reproduced, say on 1000 execution in loop.
>
> so, I decided "it looks like something with memory aligning".
> also, I tried to run arm64 emulation on virtualbox. no luck yet.
>

Have you tried with multiarch Docker ?

1) execute
docker run --rm --privileged multiarch/qemu-user-static:register --reset
to register QEMU

2) create Dockerfile
for Centos use: FROM multiarch/centos:7-aarch64-clean
for Ubuntu use: FROM multiarch/ubuntu-core:arm64-bionic

3) enjoy :-)
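For step 2, a minimal Dockerfile along these lines can be a starting point (only the FROM line comes from the recipe above; the extra build packages and the build/run commands are assumptions):

```dockerfile
# Runs under QEMU once multiarch/qemu-user-static has been registered (step 1)
FROM multiarch/ubuntu-core:arm64-bionic
RUN apt-get update && \
    apt-get install -y build-essential git make
WORKDIR /src
# Build whatever is mounted at /src, e.g.:
#   docker build -t haproxy-arm64 .
#   docker run --rm -v "$PWD:/src" haproxy-arm64 make TARGET=linux-glibc
```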


>
> пн, 23 мар. 2020 г. в 13:43, Willy Tarreau :
>
>> Hi Ilya,
>>
>> I think this time I managed to fix the ABNS test. To make a long story
>> short, it was by design extremely sensitive to the new process's startup
>> time, which is increased with larger FD counts and/or less powerful VMs
>> and/or noisy neighbors. This explains why it started to misbehave with
>> the commit which relaxed the maxconn limitations. A starting process
>> stealing a few ms of CPU from the old one could make its keep-alive
>> timeout expire before it got a new request on a reused connection,
>> resulting in an empty response as reported by the client.
>>
>> I'm going to issue dev5 now. s390x is currently down but all x86 ones
>> build and run fine for now.
>>
>> Cheers,
>> Willy
>>
>


Re: [PATCH] fix errored ARM64 builds in travis-ci

2020-03-18 Thread Martin Grigorov
Hi,

On Wed, Mar 18, 2020 at 3:29 PM Willy Tarreau  wrote:

> On Wed, Mar 18, 2020 at 06:21:15PM +0500, Илья Шипицин wrote:
> > let us calm down a bit :)
>
> Agreed, especially since the build on PRs already happens and already
> adds noise.
>
> > yes, I still believe it is because of buffering. I might have missed
> > something.
> > unless I will repair it, I'll drop arm64 support on travis (and we will
> > switch to self hosted github action runner)
>

There is one major problem with GitHub Actions self-hosted runners at the
moment: they are not really private.
I.e. if someone forks HAProxy and pushes something to their fork, it will
trigger builds on your private node, i.e. it will consume its resources.
There is no way to say "this is my private node and I want it to build only
after commits in https://github.com/haproxy/haproxy".

If you find a way around this issue and you need an ARM64 VM then just let
me know!
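Until GitHub offers such a restriction, the exposure can at least be narrowed at the workflow level; a sketch of a workflow that never fires on pull_request events (the event syntax follows GitHub Actions conventions; the workflow name, runner labels, and build line are assumptions):

```yaml
# Run the self-hosted ARM64 job only on pushes to master, never on
# pull_request, so forks cannot schedule work on the private runner.
name: aarch64
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: [ self-hosted, linux, ARM64 ]
    steps:
      - uses: actions/checkout@v2
      - run: make -j"$(nproc)" TARGET=linux-glibc
```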

Regards,
Martin


>
> OK.
>
> Willy
>
>


Re: [PATCH] switch to clang-9 in Linux/travis-ci builds

2020-03-15 Thread Martin Grigorov
Hi,

On Sat, Mar 14, 2020 at 11:26 AM Willy Tarreau  wrote:

> Hi Ilya,
>
> On Fri, Jan 24, 2020 at 11:46:45AM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > let us use clang-9 instead of default clang-7 for linux builds.
>
> It seems I missed this one. Now applied carefully, we'll see. If it
> causes new failures, we'll adjust accordingly.
>

All looks good here on aarch64 with this change!

Martin


>
> Willy
>
>


Re: [PATCH]: BUILD link to lib atomic on ARM

2020-03-15 Thread Martin Grigorov
Hi David,

On my ARM64 VM `uname -m` returns:
$ uname -m
aarch64

Should your change take 'aarch64' into account as well ?

Martin

On Sun, Mar 15, 2020 at 3:34 PM David CARLIER  wrote:

> Oups sorry I really forgot :-)
>
> On Sun, 15 Mar 2020 at 13:32, Martin Grigorov 
> wrote:
>
>> Hi
>>
>> On Sun, Mar 15, 2020 at 3:03 PM Aleksandar Lazic 
>> wrote:
>>
>>> On 15.03.20 11:33, David CARLIER wrote:
>>> > Hi
>>> >
>>> > Here a little patch proposal to fix build on ARM.
>>> >
>>> > Regards.
>>>
>>> Ähm, maybe my mail client hide the Patch because I can't see it ;-)?
>>>
>>
>> It seems David forgot to attach it or the attachment didn't make it for
>> other reason. I also don't see it.
>>
>> Martin
>>
>>
>>>
>>> Regards
>>> Aleks
>>>
>>>


Re: [PATCH] enable DEBUG_STRICT=1 for all kind of CI builds

2020-03-15 Thread Martin Grigorov
Hello,

I've just tested the change on my ARM64 VM and the build is successful!

Regards,
Martin

On Sun, Mar 15, 2020 at 9:12 AM Илья Шипицин  wrote:

> Hello,
>
> I added DEBUG_STRICT=1 to all builds.
>
> Ilya Shipitcin
>


Re: [PATCH]: BUILD link to lib atomic on ARM

2020-03-15 Thread Martin Grigorov
Hi

On Sun, Mar 15, 2020 at 3:03 PM Aleksandar Lazic  wrote:

> On 15.03.20 11:33, David CARLIER wrote:
> > Hi
> >
> > Here a little patch proposal to fix build on ARM.
> >
> > Regards.
>
> Ähm, maybe my mail client hide the Patch because I can't see it ;-)?
>

It seems David forgot to attach it, or the attachment didn't make it
through for some other reason. I also don't see it.

Martin


>
> Regards
> Aleks
>
>


Re: Tests timeout on my ARM64 test VM

2020-03-15 Thread Martin Grigorov
Hi Willy,

On Fri, Mar 13, 2020 at 10:03 PM Willy Tarreau  wrote:

> Hi Martin,
>
> On Fri, Mar 13, 2020 at 12:35:12PM +0200, Martin Grigorov wrote:
> > Hi ,
> >
> > Suddenly today the build is again green here!
> > I didn't make any changes to my testing setup.
> > It must be something on the OS level but I wasn't able to figure out what
> > makes the HAProxy tests timeout in the last several days.
>
> We've had issues with the abns test on other platforms in the past,
> namely s390x and ppc64le. It used to occasionally break on x86_64 as
> well but far less frequently. It was affected by two bugs that were
> solved yesterday after a few days of investigation and testing. We've
> seen yet another failure again on ppc64 while it was expected not to
> fail, so I'd be careful before claiming victory. However the abns
> test is extremely time sensitive and uses short delays around 15ms to
> try to trigger the issue, and in a VM it is possible to see this
> happen from time to time due to noisy neighbors. That's why I'm
> staying extremely prudent on the verdict. The PPC64 machine I tested
> on is provided by Minicloud and is a VM running on a real CPU, so it's
> much less affected by timing issues. I've run the test several hundreds
> of times in a row and couldn't make it fail anymore.
>
> So don't worry too much if it appeared and disappeared. The change
> that emphasized it was the increase in default maxconn (304e17eb8),
> apparently just due to a slightly longer startup time! And the ones
> expected to have fixed it are between bdb00c5d and 4b3f27b included.
>
> Note that I didn't manage to make it fail on arm64 (real machine,
> SolidRun's Macchiatobin).
>
> Hoping this clarifies the situation.
>

Thank you for this explanation!

The problem here was that the tests were failing even with older commits.
I tried to git bisect the problem, but no matter how far back in Git
history I went, the problem was still there. Yet the logs from the test
runs from just a few days earlier were OK: no errors or timeouts.
Also, the ARM64 tests on TravisCI were OK, and Travis's ARM64 nodes are
less powerful than my VM.
Those are the reasons I believe the problem was in my VM.
I just needed help with reading HAProxy's test error logs; I didn't know
how to approach debugging them.
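For the record, the "test results in:" lines that the harness prints point at per-test directories under /tmp; something along these lines collects whatever a failed run left behind (the helper is made up, and the exact file names inside the vtc.* directories vary, so it simply dumps everything):

```shell
#!/usr/bin/env bash
# Dump every file the reg-test harness left in its per-test result dirs.
show_regtest_logs() {
  local base=${1:-/tmp}
  local d
  for d in "$base"/haregtests-*/vtc.*; do
    [ -d "$d" ] || continue
    echo "== $d =="
    # print each regular file at the top level of the result dir
    find "$d" -maxdepth 1 -type f -exec cat {} +
  done
}
```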

Regards,
Martin


>
> Regards,
> Willy
>


Re: Tests timeout on my ARM64 test VM

2020-03-13 Thread Martin Grigorov
Hi Илья,

Suddenly today the build is again green here!
I didn't make any changes to my testing setup.
It must be something on the OS level but I wasn't able to figure out what
makes the HAProxy tests timeout in the last several days.

Regards,
Martin

On Wed, Mar 11, 2020 at 4:13 PM Martin Grigorov 
wrote:

>
>
> On Wed, Mar 11, 2020 at 3:06 PM Илья Шипицин  wrote:
>
>> I will a look during next weekend
>>
>
> Thank you, Илья!
>
>
>>
>> BTW, I've managed to get Linaro VM :)
>>
>
> Congrats! :-)
>
>
>>
>> On Wed, Mar 11, 2020, 5:40 PM Martin Grigorov 
>> wrote:
>>
>>> Hi,
>>>
>>> On Mon, Mar 9, 2020 at 10:22 PM Martin Grigorov <
>>> martin.grigo...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am not sure what have changed on my test ARM64 VM but the reg tests
>>>> started timing out.
>>>> Everything is fine on my dev machine (x86_64) and at Travis (
>>>> https://travis-ci.com/haproxy/haproxy).
>>>> I don't think it is ARM64 related. Most probably some OS setting or
>>>> something.
>>>> I've rebooted the system just to make sure it is not some busy port or
>>>> opened file descriptor but
>>>> it still fails the same way.
>>>>
>>>> Does someone see in the attached logs what could be the problem?
>>>>
>>>
>>> Anyone can help me here ?
>>>
>>> Martin
>>>
>>>
>>>> Thank you!
>>>>
>>>> Martin
>>>>
>>>


Re: Tests timeout on my ARM64 test VM

2020-03-11 Thread Martin Grigorov
On Wed, Mar 11, 2020 at 3:06 PM Илья Шипицин  wrote:

> I will a look during next weekend
>

Thank you, Илья!


>
> BTW, I've managed to get Linaro VM :)
>

Congrats! :-)


>
> On Wed, Mar 11, 2020, 5:40 PM Martin Grigorov 
> wrote:
>
>> Hi,
>>
>> On Mon, Mar 9, 2020 at 10:22 PM Martin Grigorov <
>> martin.grigo...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I am not sure what have changed on my test ARM64 VM but the reg tests
>>> started timing out.
>>> Everything is fine on my dev machine (x86_64) and at Travis (
>>> https://travis-ci.com/haproxy/haproxy).
>>> I don't think it is ARM64 related. Most probably some OS setting or
>>> something.
>>> I've rebooted the system just to make sure it is not some busy port or
>>> opened file descriptor but
>>> it still fails the same way.
>>>
>>> Does someone see in the attached logs what could be the problem?
>>>
>>
>> Anyone can help me here ?
>>
>> Martin
>>
>>
>>> Thank you!
>>>
>>> Martin
>>>
>>


Re: Tests timeout on my ARM64 test VM

2020-03-11 Thread Martin Grigorov
Hi,

On Mon, Mar 9, 2020 at 10:22 PM Martin Grigorov 
wrote:

> Hi,
>
> I am not sure what have changed on my test ARM64 VM but the reg tests
> started timing out.
> Everything is fine on my dev machine (x86_64) and at Travis (
> https://travis-ci.com/haproxy/haproxy).
> I don't think it is ARM64 related. Most probably some OS setting or
> something.
> I've rebooted the system just to make sure it is not some busy port or
> opened file descriptor but
> it still fails the same way.
>
> Does someone see in the attached logs what could be the problem?
>

Anyone can help me here ?

Martin


> Thank you!
>
> Martin
>


Re: Lua detection on aarch64

2020-01-30 Thread Martin Grigorov
Hi Илья,

On Wed, Jan 29, 2020 at 11:34 AM Илья Шипицин  wrote:

> Hello,
>
> I started to work on rpm packages.
> I also applied at Linaro for arm64 vm (request is not completed yet).
>
>
> interesting that Lua is not detected:
>
>
> https://copr-be.cloud.fedoraproject.org/results/chipitsine/haproxy-rpm/fedora-31-aarch64/01207488-haproxy/builder-live.log.gz
>
> Martin, do you have arm64, can you check that ?
>

I've had the same problem on my VM.
I tried to debug it, and I think the problem is somewhere here:
https://github.com/haproxy/haproxy/blob/160287b6760584eb1dc20d9032d6d49ed051ca0b/Makefile#L547
Maybe the string concatenation with the quotes causes it.
On Travis it seems to work fine.
I worked around it by specifying LUA_INC=/usr/include/lua5.3/ manually:

make -j3 CC=$CC V=0 TARGET=$TARGET $FLAGS DEBUG_CFLAGS="$DEBUG_CFLAGS"
LDFLAGS="$LDFLAGS -L$SSL_LIB -Wl,-rpath,$SSL_LIB"
51DEGREES_SRC="$FIFTYONEDEGREES_SRC" EXTRA_OBJS="$EXTRA_OBJS"
LUA_INC=/usr/include/lua5.3/
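If the Makefile's detection misfires, the workaround above can be made a bit more robust by probing for lua.h before hardcoding a path; a sketch (the helper name and the candidate directories are assumptions for Debian/Ubuntu layouts):

```shell
#!/usr/bin/env bash
# Print the first candidate directory that actually contains lua.h.
lua_inc_guess() {
  local d
  for d in "$@"; do
    if [ -f "$d/lua.h" ]; then
      echo "$d"
      return 0
    fi
  done
  return 1
}

# Usage on the build machine, before invoking make:
#   LUA_INC=$(lua_inc_guess /usr/include/lua5.3 /usr/local/include/lua5.3)
#   make TARGET=linux-glibc USE_LUA=1 LUA_INC="$LUA_INC"
```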


>
> cheers,
> Ilya Shipitcin
>


Re: Disabling regtests in Travis ?

2020-01-27 Thread Martin Grigorov
On Fri, Jan 24, 2020 at 6:43 PM Willy Tarreau  wrote:

> On Fri, Jan 24, 2020 at 09:12:58PM +0500, Илья Шипицин wrote:
> > >> +  - make reg-tests VTEST_PROGRAM=../vtest/vtest
> > >> REGTESTS_TYPES=default,bug,devel
> > >>
> > >
> > > let us try that.
>
> OK, now pushed.
>
> > > I will have a look at "racy" tests.
> > > Maybe we'll enable them on Github Actions.
> > >
> > >
> > the good thing about Github Actions, it is possible to attach own build
> > agents. So, if we
> > have dedicated hardware and we not want to depend on travis-ci
> neighbours,
> > it might be an option.
>
> That's good to know, even if I doubt we'd need it, at least it
> opens possibilities.
>

The regtests run fine on my ARM64 VM. I run them daily.
If HAProxy team decides to move to GitHub Actions and to use an external
build agent for ARM64 then just ping me!

Regards,
Martin


>
> Willy
>
>


Re: [PATCH] introduce ARM64 travis-ci builds

2020-01-20 Thread Martin Grigorov
Thank you, Илья!

On Sun, Jan 19, 2020 at 9:20 AM Илья Шипицин  wrote:

> hello,
>
> sometimes arm64 builds fails, I think it is good chance to introduce
> regular builds
> and fix them.
>
> also, few small improvements.
>
> cheers,
> Ilya Shipicin
>


Re: ARM(64) builds

2020-01-19 Thread Martin Grigorov
Hi,

On Sat, Jan 18, 2020, 22:10 Илья Шипицин  wrote:

> tests on ARM64 randomly fail
> https://travis-ci.com/chipitsine/haproxy/jobs/277236120
>
> (after restart there's a good chance to success)
>

I have the same observation on TravisCI. I think the reason is that their
arm64 instances are less powerful than the amd64 ones.
At https://docs.travis-ci.com/user/multi-cpu-architectures it is said:
"While available to all Open Source repositories, the concurrency available
for multiple CPU arch-based jobs is limited during the alpha period."
Not sure how limited it is, though.

Today this test failed once on my VM:

#top  TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (1.111)
exit=2
1 tests failed, 0 tests skipped, 34 tests passed
## Gathering results ##
## Test case: reg-tests/seamless-reload/abns_socket.vtc ##
## test results in:
"/tmp/haregtests-2020-01-19_08-06-39.45Hchw/vtc.8496.328d7f95"
 c1   HTTP rx failed (fd:6 read: Connection reset by peer)
Makefile:964: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1

but the next 4 runs were successful.
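Given such intermittent failures, a blunt but effective workaround is to retry the run a couple of times before calling it a failure; a generic sketch (the retry helper is made up; the make invocation in the comment mirrors the project's reg-test target):

```shell
#!/usr/bin/env bash
# Run a command up to $1 times, stopping at the first success.
retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    echo "attempt $i/$attempts failed: $*" >&2
  done
  return 1
}

# e.g. retry the flaky test three times:
#   retry 3 make reg-tests REGTESTS=reg-tests/seamless-reload/abns_socket.vtc
```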


> сб, 18 янв. 2020 г. в 09:52, Martin Grigorov :
>
>>
>>
>> On Fri, Jan 17, 2020 at 11:17 PM Martin Grigorov <
>> martin.grigo...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Jan 17, 2020, 23:12 William Lallemand 
>>> wrote:
>>>
>>>> On Fri, Jan 17, 2020 at 08:50:27PM +0200, Martin Grigorov wrote:
>>>> > Testing with haproxy version: 2.2-dev0-70c5b0-123
>>>>
>>>> This binary was built with code from 1 week ago, it's normal that the
>>>> test does
>>>> not work since the fix was made this week.
>>>>
>>>
>>> I'm using the same steps to build HAProxy as from .travis-ci.yml
>>> I guess I have to add "make clean" in the beginning.
>>> I'll try it tomorrow! Thanks!
>>>
>>
>> That was it!
>> Everything is back to normal now!
>> Thank you, William!
>>
>>
>>>
>>>
>>>> --
>>>> William Lallemand
>>>>
>>>


Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
On Fri, Jan 17, 2020 at 11:17 PM Martin Grigorov 
wrote:

>
>
> On Fri, Jan 17, 2020, 23:12 William Lallemand 
> wrote:
>
>> On Fri, Jan 17, 2020 at 08:50:27PM +0200, Martin Grigorov wrote:
>> > Testing with haproxy version: 2.2-dev0-70c5b0-123
>>
>> This binary was built with code from 1 week ago, it's normal that the
>> test does
>> not work since the fix was made this week.
>>
>
> I'm using the same steps to build HAProxy as from .travis-ci.yml
> I guess I have to add "make clean" in the beginning.
> I'll try it tomorrow! Thanks!
>

That was it!
Everything is back to normal now!
Thank you, William!


>
>
>> --
>> William Lallemand
>>
>


Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
On Fri, Jan 17, 2020, 23:12 William Lallemand 
wrote:

> On Fri, Jan 17, 2020 at 08:50:27PM +0200, Martin Grigorov wrote:
> > Testing with haproxy version: 2.2-dev0-70c5b0-123
>
> This binary was built with code from 1 week ago, it's normal that the test
> does
> not work since the fix was made this week.
>

I'm using the same steps to build HAProxy as in .travis.yml.
I guess I have to add "make clean" at the beginning.
I'll try it tomorrow! Thanks!


> --
> William Lallemand
>


Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
Hi Илья,

On Fri, Jan 17, 2020 at 5:43 PM Илья Шипицин  wrote:

>
>
> пт, 17 янв. 2020 г. в 19:33, Martin Grigorov :
>
>>
>>
>> On Fri, Jan 17, 2020 at 4:13 PM Martin Grigorov <
>> martin.grigo...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> Today's build consistently fails on my ARM64 VM:
>>>
>>> ## Starting vtest ##
>>> Testing with haproxy version: 2.2-dev0-70c5b0-123
>>> #top  TEST reg-tests/mcli/mcli_start_progs.vtc FAILED (3.004) exit=2
>>> 1 tests failed, 0 tests skipped, 34 tests passed
>>> ## Gathering results ##
>>> ## Test case: reg-tests/mcli/mcli_start_progs.vtc ##
>>> ## test results in:
>>> "/tmp/haregtests-2020-01-17_14-01-45.SGkYcJ/vtc.12807.6adfff44"
>>>  h1   haproxy h1 PID file check failed:
>>>  h1   Bad exit status: 0x0100 exit 0x1 signal 0 core 0
>>> Makefile:964: recipe for target 'reg-tests' failed
>>> make: *** [reg-tests] Error 1
>>>
>>
>> git bisect blames this commit:
>>
>> 25b569302167e71b32e569a2366027e8e320e80a is the first bad commit
>> commit 25b569302167e71b32e569a2366027e8e320e80a
>> Author: William Lallemand 
>> Date:   Tue Jan 14 15:38:43 2020 +0100
>>
>> REGTEST: mcli/mcli_start_progs: start 2 programs
>>
>> This regtest tests the issue #446 by starting 2 programs and checking
>> if
>> they exist in the "show proc" of the master CLI.
>>
>> Should be backported as far as 2.0.
>>
>>
>> https://travis-ci.com/haproxy/haproxy is green
>> https://cirrus-ci.com/github/haproxy/haproxy is green
>> and what is even more interesting is that
>> https://travis-ci.org/martin-g/haproxy/builds (my fork with enabled
>> ARM64 testing on TravisCI) also just passed (after few failures due to
>> timing issues (I guess)
>>
>
>
> timing out might happen because of
> https://github.com/haproxy/haproxy/commit/ac8147446c7a3d1aa607042bc782095b03bc8dc4
> your fork is 16 commits behind current master. try to rebase to master
>

Thanks!
I've updated HAProxy on my ARM64 VM but didn't update my fork at GitHub,
so it was behind.
I've just done it and the build passed successfully again:
https://travis-ci.org/martin-g/haproxy/builds/638568217


>
>
>>
>>
>>> Regards,
>>> Martin
>>>
>>> On Fri, Jan 17, 2020 at 9:22 AM Илья Шипицин 
>>> wrote:
>>>
>>>> привет!
>>>>
>>>> пт, 17 янв. 2020 г. в 11:42, Martin Grigorov >>> >:
>>>>
>>>>> Привет Илья,
>>>>>
>>>>> On Thu, Jan 16, 2020 at 10:37 AM Илья Шипицин 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> чт, 16 янв. 2020 г. в 13:26, Martin Grigorov <
>>>>>> martin.grigo...@gmail.com>:
>>>>>>
>>>>>>> Hi Илья,
>>>>>>>
>>>>>>> On Thu, Jan 16, 2020 at 10:19 AM Илья Шипицин 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Hello, Martin!
>>>>>>>>
>>>>>>>> btw, just curious, how is Apache Foundation (or you personally)
>>>>>>>> interested in all that ?
>>>>>>>> please do not blame me, I really like to know.
>>>>>>>>
>>>>>>>
>>>>>> ok, so you work in some company that is interested in haproxy on
>>>>>> ARM64.
>>>>>> you do not want to tell its name, at least is it legal ? is it
>>>>>> related to some government ?
>>>>>> if "no" and "no", I guess most people won't ask any more questions :)
>>>>>>
>>>>>
>>>>> It is legal and I do not work for a government of any country!
>>>>>
>>>>>
>>>>>>
>>>>>> personally, I do not work at Haproxy Inc, I use haproxy, sometimes I
>>>>>> contribute to it.
>>>>>> Please do not consider me as an "official representative".
>>>>>>
>>>>>>
>>>>>> I'm interested in testing haproxy on ARM64, I planned to do so. I can
>>>>>> prioritize it according to your interest in it.
>>>>>> and yes, I can accept hardware donati

Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
Hi William,

On Fri, Jan 17, 2020 at 4:46 PM William Lallemand 
wrote:

> On Fri, Jan 17, 2020 at 04:33:22PM +0200, Martin Grigorov wrote:
> >
> > git bisect blames this commit:
> >
> > 25b569302167e71b32e569a2366027e8e320e80a is the first bad commit
> > commit 25b569302167e71b32e569a2366027e8e320e80a
> > Author: William Lallemand 
> > Date:   Tue Jan 14 15:38:43 2020 +0100
> >
> > REGTEST: mcli/mcli_start_progs: start 2 programs
>
> Well that's the commit which introduces the vtc file so that's normal.
>
> >
> > https://travis-ci.com/haproxy/haproxy is green
> > https://cirrus-ci.com/github/haproxy/haproxy is green
> > and what is even more interesting is that
> > https://travis-ci.org/martin-g/haproxy/builds (my fork with enabled
> ARM64
> > testing on TravisCI) also just passed (after few failures due to timing
> > issues (I guess)
>
> I don't see anything regarding mcli_start_progs.vtc in this links, can you
> provide the output of
> `make reg-tests -- --debug reg-tests/mcli/mcli_start_progs.vtc` ?
>

Please find attached the output.

These two lines look bad in it:

***  h1   debug|[ALERT] 016/184150 (7188) : parsing [cur--1:0] : proxy
'MASTER', another server named 'cur--1' was already defined at line 0,
please use distinct names.
***  h1   debug|[ALERT] 016/184150 (7188) : Fatal errors found in
configuration.

$ grep -rnH 'cur'
/tmp/haregtests-2020-01-17_18-41-50.A6zVLb/vtc.7182.700b0bcd/
/tmp/haregtests-2020-01-17_18-41-50.A6zVLb/vtc.7182.700b0bcd/LOG:71:***  h1
  debug|[ALERT] 016/184150 (7188) : parsing [cur--1:0] : proxy 'MASTER',
another server named 'cur--1' was already defined at line 0, please use
distinct names.


   │ File:
/tmp/haregtests-2020-01-17_18-41-50.A6zVLb/vtc.7182.700b0bcd/h1/cfg
───┼─
   1   │ global
   2   │ stats socket
"/tmp/haregtests-2020-01-17_18-41-50.A6zVLb/vtc.7182.700b0bcd/h1/stats.sock"
level admin mode 600
   3   │ stats socket "fd@${cli}" level admin
   4   │
   5   │ global
   6   │ nbproc 1
   7   │ defaults
   8   │ mode http
   9   │  option http-use-htx
  10   │ timeout connect 1s
  11   │ timeout client  1s
  12   │ timeout server  1s
  13   │
  14   │ frontend myfrontend
  15   │ bind "fd@${my_fe}"
  16   │ default_backend test
  17   │
  18   │ backend test
  19   │ server www1 127.0.0.1:39247
  20   │
  21   │ program foo
  22   │ command sleep 10
  23   │
  24   │ program bar
  25   │ command sleep 10

Please let me know if you need more information!


Thanks
>
> --
> William Lallemand
>
env VTEST_PROGRAM=../vtest/vtest make reg-tests -- --debug 
reg-tests/mcli/mcli_start_progs.vtc

## Preparing to run tests ##
Testing with haproxy version: 2.2-dev0-70c5b0-123
Target : linux-glibc
Options : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE -PCRE_JIT -PCRE2 
-PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM -STATIC_PCRE 
-STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL 
+GETADDRINFO -OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY 
+TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER 
+PRCTL +THREAD_DUMP -EVPORTS
## Gathering tests to run ##
  Add test: reg-tests/mcli/mcli_start_progs.vtc
## Starting vtest ##
Testing with haproxy version: 2.2-dev0-70c5b0-123
 dT   0.000
*top  TEST reg-tests/mcli/mcli_start_progs.vtc starting
 top  extmacro def pwd=/home/ubuntu/git/haproxy/haproxy
 top  extmacro def no-htx=
 top  extmacro def localhost=127.0.0.1
 top  extmacro def bad_backend=127.0.0.1 45249
 top  extmacro def bad_ip=192.0.2.255
 top  macro def testdir=/home/ubuntu/git/haproxy/haproxy/reg-tests/mcli
 top  macro def 
tmpdir=/tmp/haregtests-2020-01-17_18-41-50.A6zVLb/vtc.7182.700b0bcd
**   top  === varnishtest "Try to start a master CLI with 2 programs"
*top  VTEST Try to start a master CLI with 2 programs
**   top  === feature ignore_unknown_macro
**   top  === server s1 {
**   s1   Starting server
 s1   macro def s1_addr=127.0.0.1
 s1   macro def s1_port=39247
 s1   macro def s1_sock=127.0.0.1 39247
*s1   Listen on 127.0.0.1 39247
**   top  === haproxy h1 -W -S -conf {
**   s1   Started on 127.0.0.1 39247 (1 iterations)
 h1   macro def h1_closed_sock=127.0.0.1 33279
 h1   macro def h1_closed_addr=127.0.0.1
 h1   macro def h1_closed_port=33279
 dT   

Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
On Fri, Jan 17, 2020 at 4:13 PM Martin Grigorov 
wrote:

> Hi all,
>
> Today's build consistently fails on my ARM64 VM:
>
> ## Starting vtest ##
> Testing with haproxy version: 2.2-dev0-70c5b0-123
> #top  TEST reg-tests/mcli/mcli_start_progs.vtc FAILED (3.004) exit=2
> 1 tests failed, 0 tests skipped, 34 tests passed
> ## Gathering results ##
> ## Test case: reg-tests/mcli/mcli_start_progs.vtc ##
> ## test results in:
> "/tmp/haregtests-2020-01-17_14-01-45.SGkYcJ/vtc.12807.6adfff44"
>  h1   haproxy h1 PID file check failed:
>  h1   Bad exit status: 0x0100 exit 0x1 signal 0 core 0
> Makefile:964: recipe for target 'reg-tests' failed
> make: *** [reg-tests] Error 1
>

git bisect blames this commit:

25b569302167e71b32e569a2366027e8e320e80a is the first bad commit
commit 25b569302167e71b32e569a2366027e8e320e80a
Author: William Lallemand 
Date:   Tue Jan 14 15:38:43 2020 +0100

REGTEST: mcli/mcli_start_progs: start 2 programs

This regtest tests the issue #446 by starting 2 programs and checking if
they exist in the "show proc" of the master CLI.

Should be backported as far as 2.0.


https://travis-ci.com/haproxy/haproxy is green
https://cirrus-ci.com/github/haproxy/haproxy is green
and what is even more interesting is that
https://travis-ci.org/martin-g/haproxy/builds (my fork with enabled ARM64
testing on TravisCI) also just passed (after few failures due to timing
issues (I guess)
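As an aside, the `git bisect` workflow used above can also be driven automatically with `git bisect run`, which marks each commit good or bad from the exit code of a test command. A hedged, self-contained sketch on a throwaway repository (commit names `c1`..`c4` and the `check` file are made up for illustration, not from the thread):

```shell
# Build a toy repo in which commit "c4" introduces a regression,
# then let `git bisect run` locate the first bad commit automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name  "demo"
for i in 1 2 3 4 5; do
    # from c4 on, the tree is "broken": the check file no longer says PASS
    if [ "$i" -ge 4 ]; then echo FAIL > check; else echo PASS > check; fi
    git add check
    git commit -q --allow-empty -m "c$i"
done
git bisect start HEAD HEAD~4 > /dev/null             # bad = c5, good = c1
git bisect run grep -q PASS check > /dev/null 2>&1   # exit 0 => good commit
bad=$(git show -s --format=%s refs/bisect/bad)       # subject of first bad commit
git bisect reset > /dev/null 2>&1
echo "first bad commit: $bad"
```

In the real case above, the run command would be something like `make reg-tests -- reg-tests/mcli/mcli_start_progs.vtc` instead of the `grep`.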


> Regards,
> Martin
>
> On Fri, Jan 17, 2020 at 9:22 AM Илья Шипицин  wrote:
>
>> Hi!
>>
>> On Fri, Jan 17, 2020 at 11:42, Martin Grigorov :
>>
>>> Hi Илья,
>>>
>>> On Thu, Jan 16, 2020 at 10:37 AM Илья Шипицин 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Jan 16, 2020 at 13:26, Martin Grigorov :
>>>>
>>>>> Hi Илья,
>>>>>
>>>>> On Thu, Jan 16, 2020 at 10:19 AM Илья Шипицин 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> Hello, Martin!
>>>>>>
>>>>>> btw, just curious, how is Apache Foundation (or you personally)
>>>>>> interested in all that ?
>>>>>> please do not blame me, I really like to know.
>>>>>>
>>>>>
>>>> ok, so you work in some company that is interested in haproxy on ARM64.
>>>> you do not want to tell its name, at least is it legal ? is it related
>>>> to some government ?
>>>> if "no" and "no", I guess most people won't ask any more questions :)
>>>>
>>>
>>> It is legal and I do not work for a government of any country!
>>>
>>>
>>>>
>>>> personally, I do not work at Haproxy Inc, I use haproxy, sometimes I
>>>> contribute to it.
>>>> Please do not consider me as an "official representative".
>>>>
>>>>
>>>> I'm interested in testing haproxy on ARM64, I planned to do so. I can
>>>> prioritize it according to your interest in it.
>>>> and yes, I can accept hardware donation (even considering I'm not part
>>>> of Haproxy Inc).
>>>>
>>>> also, from my point of view, what would be really useful in your case
>>>> is testing not just official reg-tests, but also
>>>> your configs. reg-tests cover only partially. If you enable clang asan
>>>> in your own workload there's a chance to catch
>>>> something interesting (or, to make sure your own workload is ok). I can
>>>> help with that as well.
>>>>
>>>
>>> Thanks for the offer!
>>> I've discussed it with my managers.
>>> Our offer is to donate a VM that could be used as an official CI agent
>>> for the HAProxy project long term.
>>>
>>
>> I'd split this into short term and long term approach.
>>
>> if you need to start any time soon, I'd focus on your own workload
>> testing first. I would build stand which emulates
>> your production workload, compile haproxy using clang address sanitizer
>> and give it a try (functional testing, load testing, ...)
>>
>> I can help with that.
>>
>> As for long term solution, currently haproxy simply cannot attach any
>> dedicated build agent to their CI. travis-ci does not allow
>> attaching dedicated agents. And haproxy team is very conservative in
>> adding new CI servers.
>>
>> I think, I will add arm64 (most prob

Re: ARM(64) builds

2020-01-17 Thread Martin Grigorov
Hi all,

Today's build consistently fails on my ARM64 VM:

## Starting vtest ##
Testing with haproxy version: 2.2-dev0-70c5b0-123
#top  TEST reg-tests/mcli/mcli_start_progs.vtc FAILED (3.004) exit=2
1 tests failed, 0 tests skipped, 34 tests passed
## Gathering results ##
## Test case: reg-tests/mcli/mcli_start_progs.vtc ##
## test results in:
"/tmp/haregtests-2020-01-17_14-01-45.SGkYcJ/vtc.12807.6adfff44"
 h1   haproxy h1 PID file check failed:
 h1   Bad exit status: 0x0100 exit 0x1 signal 0 core 0
Makefile:964: recipe for target 'reg-tests' failed
make: *** [reg-tests] Error 1

Regards,
Martin

On Fri, Jan 17, 2020 at 9:22 AM Илья Шипицин  wrote:

> Hi!
>
> On Fri, Jan 17, 2020 at 11:42, Martin Grigorov :
>
>> Hi Илья,
>>
>> On Thu, Jan 16, 2020 at 10:37 AM Илья Шипицин 
>> wrote:
>>
>>>
>>>
>>> On Thu, Jan 16, 2020 at 13:26, Martin Grigorov :
>>>
>>>> Hi Илья,
>>>>
>>>> On Thu, Jan 16, 2020 at 10:19 AM Илья Шипицин 
>>>> wrote:
>>>>
>>>>>
>>>>> Hello, Martin!
>>>>>
>>>>> btw, just curious, how is Apache Foundation (or you personally)
>>>>> interested in all that ?
>>>>> please do not blame me, I really like to know.
>>>>>
>>>>
>>> ok, so you work in some company that is interested in haproxy on ARM64.
>>> you do not want to tell its name, at least is it legal ? is it related
>>> to some government ?
>>> if "no" and "no", I guess most people won't ask any more questions :)
>>>
>>
>> It is legal and I do not work for a government of any country!
>>
>>
>>>
>>> personally, I do not work at Haproxy Inc, I use haproxy, sometimes I
>>> contribute to it.
>>> Please do not consider me as an "official representative".
>>>
>>>
>>> I'm interested in testing haproxy on ARM64, I planned to do so. I can
>>> prioritize it according to your interest in it.
>>> and yes, I can accept hardware donation (even considering I'm not part
>>> of Haproxy Inc).
>>>
>>> also, from my point of view, what would be really useful in your case is
>>> testing not just official reg-tests, but also
>>> your configs. reg-tests cover only partially. If you enable clang asan
>>> in your own workload there's a chance to catch
>>> something interesting (or, to make sure your own workload is ok). I can
>>> help with that as well.
>>>
>>
>> Thanks for the offer!
>> I've discussed it with my managers.
>> Our offer is to donate a VM that could be used as an official CI agent
>> for the HAProxy project long term.
>>
>
> I'd split this into short term and long term approach.
>
> if you need to start any time soon, I'd focus on your own workload testing
> first. I would build a stand which emulates
> your production workload, compile haproxy using clang address sanitizer
> and give it a try (functional testing, load testing, ...)
>
> I can help with that.
>
> As for long term solution, currently haproxy simply cannot attach any
> dedicated build agent to their CI. travis-ci does not allow
> attaching dedicated agents. And haproxy team is very conservative in
> adding new CI servers.
>
> I think, I will add arm64 (most probably openssl-1.1.1 only for now) soon.
> Also, I'm going to investigate your libressl failures.
>
> so, a dedicated VM will definitely help in troubleshooting issues and for
> manual builds. It would save a bunch of time. I do not mind if you
> add my SSH key to that VM.
>
>
> also, I requested access to Linaro.
>
>
>>
>> You can use Linaro for short term testing though.
>> https://www.linaro.cloud/
>> Here you can request free VM for short periods:
>> https://servicedesk.linaro.org/servicedesk/customer/portal/19/create/265
>> P.S. Linaro is not my employer!
>>
>>
>> Regards,
>> Martin
>>
>>
>>>
>>>>
>>>>>
>>>> @apache.org is just one of my several emails. And it is the default
>>>> one in my email client.
>>>> ASF is not related anyhow to my participation here.
>>>> If I used my work email then it might look like some kind of
>>>> advertisement. I'd like to avoid that!
>>>> Next time I will use my @gmail.com one, as more neutral. Actually I've
>>>> used the GMail one when registering to 

Re: ARM(64) builds

2020-01-16 Thread Martin Grigorov
Hi Илья,

On Thu, Jan 16, 2020 at 10:37 AM Илья Шипицин  wrote:

>
>
> On Thu, Jan 16, 2020 at 13:26, Martin Grigorov :
>
>> Hi Илья,
>>
>> On Thu, Jan 16, 2020 at 10:19 AM Илья Шипицин 
>> wrote:
>>
>>>
>>> Hello, Martin!
>>>
>>> btw, just curious, how is Apache Foundation (or you personally)
>>> interested in all that ?
>>> please do not blame me, I really like to know.
>>>
>>
> ok, so you work in some company that is interested in haproxy on ARM64.
> you do not want to tell its name, at least is it legal ? is it related to
> some government ?
> if "no" and "no", I guess most people won't ask any more questions :)
>

It is legal and I do not work for a government of any country!


>
> personally, I do not work at Haproxy Inc, I use haproxy, sometimes I
> contribute to it.
> Please do not consider me as an "official representative".
>
>
> I'm interested in testing haproxy on ARM64, I planned to do so. I can
> prioritize it according to your interest in it.
> and yes, I can accept hardware donation (even considering I'm not part of
> Haproxy Inc).
>
> also, from my point of view, what would be really useful in your case is
> testing not just official reg-tests, but also
> your configs. reg-tests cover only partially. If you enable clang asan in
> your own workload there's a chance to catch
> something interesting (or, to make sure your own workload is ok). I can
> help with that as well.
>

Thanks for the offer!
I've discussed it with my managers.
Our offer is to donate a VM that could be used as an official CI agent for
the HAProxy project long term.

You can use Linaro for short term testing though.
https://www.linaro.cloud/
Here you can request free VM for short periods:
https://servicedesk.linaro.org/servicedesk/customer/portal/19/create/265
P.S. Linaro is not my employer!


Regards,
Martin


>
>>
>>>
>> @apache.org is just one of my several emails. And it is the default one
>> in my email client.
>> ASF is not related anyhow to my participation here.
>> If I used my work email then it might look like some kind of
>> advertisement. I'd like to avoid that!
>> Next time I will use my @gmail.com one, as more neutral. Actually I've
>> used the GMail one when registering to this mailing list, so probably the
>> post from @apache has been moderated. I'll be more careful next time!
>>
>> Thanks, Илья!
>>
>> Regards,
>> Martin
>>
>>
>>>
>>> On Thu, Jan 16, 2020 at 12:32, Martin Grigorov :
>>>
>>>> Hello HAProxy developers,
>>>>
>>>> At work we are going to use more and more ARM64 based servers and
>>>> HAProxy is one of the main products we rely on.
>>>> So I went checking whether HAProxy project has a CI environment for
>>>> testing on ARM architecture.
>>>>
>>>
>>> we are looking towards
>>> https://docs.travis-ci.com/user/multi-cpu-architectures
>>>
>>>
>>>> I've found this recent discussion:
>>>> https://www.mail-archive.com/haproxy@formilux.org/msg35302.html  (I
>>>> didn't find a way to continue on the same mail thread, so I'm starting
>>>> a new one. Apologies for that!).
>>>>
>>>
>>> I played with arm64 for a while, the issue I encountered is travis-ci
>>> cache key, i.e. we cache openssl builds between our builds.
>>> so travis used the same cache key for both amd64 and arm64 builds (this
>>> might have changed recently, I did not check yet)
>>>
>>> arm64 is in my queue (as well as recent s390x arch from travis), hope to
>>> get back to it within a month or so.
>>>
>>>
>>>> From this discussion and from
>>>> https://github.com/haproxy/haproxy/blob/master/.travis.yml I
>>>> understand that there is no public CI in use (i.e. TravisCI or CirrusCI)
>>>> but some of the developers run some tests locally regularly.
>>>>
>>>
>>> it is not completely true.
>>> there's public CI. we do not use github PR machinery, so sometimes tests
>>> fail after push to master branch. it is considered as ok, failures are
>>> fixed pretty fast.
>>> for example, see
>>> https://www.mail-archive.com/haproxy@formilux.org/msg35910.html
>>> it was just perfect, failure detected using CI and fixed within few
>>> days. no customers affected.
>>>
>>>
>>>> I've forked the project and tested on TravisCI (
>>>> https://travis-ci

Re: ARM(64) builds

2020-01-16 Thread Martin Grigorov
Hi Илья,

On Thu, Jan 16, 2020 at 10:19 AM Илья Шипицин  wrote:

>
> Hello, Martin!
>
> btw, just curious, how is Apache Foundation (or you personally) interested
> in all that ?
> please do not blame me, I really like to know.
>
>
@apache.org is just one of my several emails. And it is the default one in
my email client.
ASF is not in any way related to my participation here.
If I used my work email then it might look like some kind of advertisement.
I'd like to avoid that!
Next time I will use my @gmail.com one, as more neutral. Actually I've used
the GMail one when registering to this mailing list, so probably the post
from @apache has been moderated. I'll be more careful next time!

Thanks, Илья!

Regards,
Martin


>
> On Thu, Jan 16, 2020 at 12:32, Martin Grigorov :
>
>> Hello HAProxy developers,
>>
>> At work we are going to use more and more ARM64 based servers and HAProxy
>> is one of the main products we rely on.
>> So I went checking whether HAProxy project has a CI environment for
>> testing on ARM architecture.
>>
>
> we are looking towards
> https://docs.travis-ci.com/user/multi-cpu-architectures
>
>
>> I've found this recent discussion:
>> https://www.mail-archive.com/haproxy@formilux.org/msg35302.html  (I
>> didn't find a way to continue on the same mail thread, so I'm starting
>> a new one. Apologies for that!).
>>
>
> I played with arm64 for a while, the issue I encountered is travis-ci
> cache key, i.e. we cache openssl builds between our builds.
> so travis used the same cache key for both amd64 and arm64 builds (this
> might have changed recently, I did not check yet)
>
> arm64 is in my queue (as well as recent s390x arch from travis), hope to
> get back to it within a month or so.
>
>
>> From this discussion and from
>> https://github.com/haproxy/haproxy/blob/master/.travis.yml I understand
>> that there is no public CI in use (i.e. TravisCI or CirrusCI) but some of
>> the developers run some tests locally regularly.
>>
>
> it is not completely true.
> there's public CI. we do not use github PR machinery, so sometimes tests
> fail after push to master branch. it is considered as ok, failures are
> fixed pretty fast.
> for example, see
> https://www.mail-archive.com/haproxy@formilux.org/msg35910.html
> it was just perfect, failure detected using CI and fixed within few days.
> no customers affected.
>
>
>> I've forked the project and tested on TravisCI (
>> https://travis-ci.org/martin-g/haproxy/builds) but unfortunately the
>> builds were not very stable:
>> 1) some tests fail sometimes. I guess it is because of some timing issues
>> For example:
>> - https://travis-ci.org/martin-g/haproxy/jobs/636745241
>> - https://travis-ci.org/martin-g/haproxy/jobs/636750676
>> - https://travis-ci.org/martin-g/haproxy/jobs/636763346
>>
>
> that's very interesting. I'll have a look.
>
>
>
>> 2) There was some weird issue on testing with LibreSSL
>> The output redirect at
>> https://github.com/haproxy/haproxy/blob/bb9da0b8e23c46caab34fe6005b66fa8065fe3ac/.travis.yml#L96
>> for
>> some reason got the build stuck. I've temporarily removed the output
>> redirects and then it passed. So, it looks like some issue with the TravisCI
>> environment.
>>
>
> arm64 is slower, I guess we should add "travis_wait 30" to the
> build-ssl.sh script
> thanks for the hint
>
>
>>
>> In addition I've run the build and tests on one of our machines and all
>> was OK!
>>
>> My question to you is: Are you happy with your current way of testing ARM
>> architectures or you want to add more ?
>> Here are some options:
>> 1) enable TravisCI
>>
>
> already done
>
> https://travis-ci.com/haproxy/haproxy
>
>
>> 2) my company is willing to donate an ARM64 based VM, if you are
>> interested.
>>
>
> I do not work at Haproxy Inc :)
> Willy ?
>
>
>> You will have SSH access and a user with sudo permissions to install
>> anything that is needed.
>> The spec is aarch64 8 cores CPU 2GHz (Kunpeng), 16GB RAM, 500G disk space
>> and 5M network bandwidth. The OS could be any of CentOS 7.4/7.5/7.6,
>> EulerOS 2.8, Fedora 29, OpenSuse 15.0 and Ubuntu 16.04/18.04.
>>
>> In both cases it will be ARM64. From the earlier mail discussion I
>> understand you would prefer ARM32.
>>
>
> as for myself, I prefer both arm64 and arm32.
> however, both AMD64 and ARM64 are the same 64 bits. both of them are
> little-endian. but you mentioned at least 3 builds with ARM64 failing (we
> have those tests passing on AMD64)!
>
>
>
>>
>> Kind regards,
>> Martin
>>
>


ARM(64) builds

2020-01-15 Thread Martin Grigorov
Hello HAProxy developers,

At work we are going to use more and more ARM64 based servers and HAProxy
is one of the main products we rely on.
So I went checking whether HAProxy project has a CI environment for testing
on ARM architecture.
I've found this recent discussion:
https://www.mail-archive.com/haproxy@formilux.org/msg35302.html  (I didn't
find a way to continue on the same mail thread, so I'm starting a new
one. Apologies for that!).
From this discussion and from
https://github.com/haproxy/haproxy/blob/master/.travis.yml I understand
that there is no public CI in use (i.e. TravisCI or CirrusCI) but some of
the developers run some tests locally regularly.
I've forked the project and tested on TravisCI (
https://travis-ci.org/martin-g/haproxy/builds) but unfortunately the builds
were not very stable:
1) some tests fail sometimes. I guess it is because of some timing issues
For example:
- https://travis-ci.org/martin-g/haproxy/jobs/636745241
- https://travis-ci.org/martin-g/haproxy/jobs/636750676
- https://travis-ci.org/martin-g/haproxy/jobs/636763346
2) There was some weird issue on testing with LibreSSL
The output redirect at
https://github.com/haproxy/haproxy/blob/bb9da0b8e23c46caab34fe6005b66fa8065fe3ac/.travis.yml#L96
for
some reason got the build stuck. I've temporarily removed the output
redirects and then it passed. So, it looks like some issue with the TravisCI
environment.

In addition I've run the build and tests on one of our machines and all was
OK!

My question to you is: Are you happy with your current way of testing ARM
architectures or you want to add more ?
Here are some options:
1) enable TravisCI
2) my company is willing to donate an ARM64 based VM, if you are interested.
You will have SSH access and a user with sudo permissions to install
anything that is needed.
The spec is aarch64 8 cores CPU 2GHz (Kunpeng), 16GB RAM, 500G disk space
and 5M network bandwidth. The OS could be any of CentOS 7.4/7.5/7.6,
EulerOS 2.8, Fedora 29, OpenSuse 15.0 and Ubuntu 16.04/18.04.

In both cases it will be ARM64. From the earlier mail discussion I
understand you would prefer ARM32.

Kind regards,
Martin


Cannot enable a config "disabled" frontend via socket command

2019-07-24 Thread Martin van Es
Exactly this problem:
https://www.mail-archive.com/haproxy@formilux.org/msg19356.html

is still true for frontends, so I can't start a frontend in disabled mode and 
later on enable it via socket.

Tested version: 1.8.19 in Debian buster.

Best regards,
Martin





Re: [PATCH] BUG/CRITICAL: SIGBUS crash on aarch64

2018-11-15 Thread Paul Martin
On Wed, Nov 14, 2018 at 06:05:00PM +0100, Olivier Houchard wrote:

> Oops, you're right indeed.
> I'm not sure I'm a big fan of special-casing STD_T_UINT. For example,
> STD_T_FRQP is probably 12 bytes too, so it'd be a problem.
> Can you test the (untested, but hopefully right) patch attached ?

Yes, your patch works on aarch64 too.

-- 
Paul Martin http://www.codethink.co.uk/
Senior Software Developer, Codethink Ltd.



[PATCH] BUG/CRITICAL: SIGBUS crash on aarch64

2018-11-14 Thread Paul Martin
Atomic operations on aarch64 (arm64) have to be aligned to 8 byte
boundaries (same size as a pointer type), otherwise a SIGBUS is raised.

Because the variable ts here isn't guaranteed to be aligned due to the
various data_size adjustments, make sure that data_size is always
incremented by a minimum of sizeof(int *) rather than sizeof(int).

Program received signal SIGBUS, Bus error.
0xaab1176c in process_store_rules (s=s@entry=0xaae01060,
rep=0xaae010d0, rep=0xaae010d0, an_bit=8388608)
at src/stream.c:1609
1609        HA_RWLOCK_WRLOCK(STK_SESS_LOCK, &ts->lock);
(gdb) bt
#0  0xaab1176c in process_store_rules (s=s@entry=0xaae01060,
rep=0xaae010d0, rep=0xaae010d0, an_bit=8388608)
at src/stream.c:1609
#1  0xaab18898 in process_stream (t=<optimized out>,
context=0xaae01060, state=<optimized out>) at src/stream.c:2054
#2  0xaabb0220 in process_runnable_tasks () at src/task.c:421
#3  0xaab51b40 in run_poll_loop () at src/haproxy.c:2609
#4  run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:2674
#5  0xaaac715c in main (argc=<optimized out>, argv=0xf290)
at src/haproxy.c:3286
---
 include/proto/stick_table.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/proto/stick_table.h b/include/proto/stick_table.h
index 40bb8ca6..6e39ad47 100644
--- a/include/proto/stick_table.h
+++ b/include/proto/stick_table.h
@@ -64,7 +64,7 @@ static inline int stktable_type_size(int type)
switch(type) {
case STD_T_SINT:
case STD_T_UINT:
-   return sizeof(int);
+   return sizeof(int *);
case STD_T_ULL:
return sizeof(unsigned long long);
case STD_T_FRQP:
-- 
2.19.1



Proxified TCP connections with no applicative test possible.

2018-07-20 Thread Thomas Martin
Hello,

I'm trying to setup haproxy for a kind a of weird situation.

Here is my architecture:
- Server S0 and S1 can connect to our client services (which we want
to be proxified)
- Server C0 is in a dedicated network and can't access our client FIX
servers directly. It needs to use the proxies (S0 and S1).
- Our client's services:
-- are not HTTP services,
-- do not allow more than one connection at a time: if another
simultaneous connection is opened, it will be established (TCP-wise) but
none of its application requests will be processed.


So my goal is to have my application connected to C0's haproxy, which
connect itself to S0 or S1 haproxy.


Here is my setup:
- On S0 and S1:
frontend f_FIX_SERVER
bind 10.10.10.{X,Y}:1
mode tcp
default_backend b_FIX_SERVER
backend b_FIX_SERVER
mode tcp
server fix_1 10.11.11.11:3129 check
server fix_2 10.11.11.11:3130 check backup

- On C0
frontend f_FIX_CLIENT
bind 127.0.0.1:2
mode tcp
default_backend b_FIX_CLIENT
backend b_FIX_CLIENT
mode tcp
#S0
server fix 10.10.10.X:1 check
#S1
server fix 10.10.10.Y:1 check backup


In case of S0 going DOWN, C0 will use S1 as expected (which is perfect).

But if my client's services goes unreachable on S0, while S0 is still
running, haproxy on C0 will NOT use S1 as haproxy on S0 is still
responding correctly (from C0 haproxy point of view).

I tried to play with "tcp-request connection reject" (and acl with
"nbsrv") on S0 and S1 but without any success.


Do you have any advice to help me?
Am I missing something obvious ?


Thanks for reading.

Regards,
Thomas



RE: TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-16 Thread Martin RADEL
Hi Lukas,

Right, "verify required ssl verifyhost www.ham.eggs" fails now as expected.

My initial report that it doesn't work with the "verifyhost" option was not
completely right,
because in fact we never tried what would happen if we set a non-matching 
pattern in the "verifyhost" directive.

We tested without verifyhost because we did not know that it would
be mandatory. Then it always worked (even with the name mismatch),
And we tested with verifyhost directive, but only with a matching pattern, also 
always worked.

Now with the new information, it's clear why this always worked and what we 
have to do to achieve a correct haproxy config.
-> always use "verify required" *together* with "verifyhost"

I would also vote to change the HAProxy default behavior to be more
security-oriented when some directives are not passed.
This would on the one hand generate more questions "why is it not working?",
but on the other hand would give stronger security out of the box.

Thanks for your help!

BR
Martin

-Original Message-
From: lu...@ltri.eu [mailto:lu...@ltri.eu]
Sent: Montag, 16. Juli 2018 14:11
To: Martin RADEL 
Cc: haproxy@formilux.org; w...@1wt.eu; m...@gandi.net
Subject: Re: TLS handshake works with certificate name mismatch using "verify 
required" and "verifyhost"

On Mon, 16 Jul 2018 at 11:57, Martin RADEL 
<mailto:martin.ra...@rbinternational.com> wrote:
>
> Hi,
>
> I think we found the issue:
> Seems that there was a misunderstanding from us regarding the haproxy 
> documentation with the "verifyhost" option.
>
> If I get it right, the documentation says that if we have a haproxy
> config that
> - Has "verify required"
> - Does not use SNI
> - Has no "verifyhost"
> Then HAProxy will simply ignore whatever hostname the server sends back in 
> its certificate and the handshake will be OK.

Yes, that is correct, also see the verify docs:
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-verify

Not sure how we ended up in this situation though. I remember there was a vivid 
discussion about whether "verify" should default to none or required. We opted 
for "required", to be "secure by default", but this is totally useless given 
that it requires verifyhost or sni, and will silently disable cert verification 
when those option are not given. That's probably the worst thing we can do in 
this case; this configuration should be rejected, imho. People that don't care 
about cert verification should simply set "verify none". But here we are now, 
and this is documented behavior :(

I think this was introduced in 2ab88675, maybe we can change this in 1.9.



> Please can you confirm that our understanding of HAProxy documentation is 
> correct?
> If so, then we could mark this topic as "solved" :-)

Yes, but I don't understand, you reported that verification is not happening 
*with* verifyhost:

> the connection to the backend works all the time, even when there is a name 
> mismatch and even if we use the “verify required” option together with 
> “verifyhost”.


"verify required ssl verifyhost www.ham.eggs" fails as expected for you now, 
correct?



Thanks,
Lukas
This message and any attachment ("the Message") are confidential. If you have 
received the Message in error, please notify the sender immediately and delete 
the Message from your system, any use of the Message is forbidden. 
Correspondence via e-mail is primarily for information purposes. RBI neither 
makes nor accepts legally binding statements via e-mail unless explicitly 
agreed otherwise. Information pursuant to § 14 Austrian Companies Code: 
Raiffeisen Bank International AG; Registered Office: Am Stadtpark 9, 1030 
Vienna,Austria; Company Register Number: FN 122119m at the Commercial Court of 
Vienna (Handelsgericht Wien).


RE: TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-16 Thread Martin RADEL
Hi,

I think we found the issue:
It seems there was a misunderstanding on our part regarding the haproxy 
documentation for the "verifyhost" option.

If I get it right, the documentation says that if we have a haproxy config that
- Has "verify required"
- Does not use SNI
- Has no "verifyhost"
Then HAProxy will simply ignore whatever hostname the server sends back in its 
certificate and the handshake will be OK.

If the "verifyhost" option is set and it does match the pattern, SSL handshake 
will also be OK.
If the "verifyhost" option is set and it does not match the pattern, SSL 
handshake will fail.
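
The three rules above can be modelled in a few lines (an illustrative sketch of the 
documented behaviour only, not HAProxy's implementation):

```python
def name_check_outcome(verify, sni_hostname, verifyhost, cert_matches):
    """Illustrative model of the documented behaviour, not HAProxy code.

    cert_matches: whether the server certificate would match the name
    being checked (the SNI value or the verifyhost pattern).
    """
    if verify != "required":
        return "ok"                  # "verify none": nothing is checked
    name_to_check = verifyhost or sni_hostname
    if name_to_check is None:
        return "ok"                  # no name available: check silently skipped
    return "ok" if cert_matches else "fail"

# verify required + no SNI + no verifyhost: mismatch is ignored, handshake OK
assert name_check_outcome("required", None, None, cert_matches=False) == "ok"
# verifyhost matching the certificate: OK
assert name_check_outcome("required", None, "www.foo.bar", True) == "ok"
# verifyhost not matching (www.ham.eggs vs *.foo.bar): handshake fails
assert name_check_outcome("required", None, "www.ham.eggs", False) == "fail"
```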


We tested this with two different HAProxy configs now and I can confirm that 
it's exactly like that.
(Server is always presenting the same certificate with "*.foo.bar" in its 
common name / subject)

TESTBACKEND1 config (WORKING) looks like this:
# --- TESTBACKEND1
backend TESTBACKEND1
option  forwardfor except 127.0.0.0/8
server TESTBACKEND1-server 10.1.1.1:443 check inter 30s  verify required 
ssl verifyhost www.foo.bar ca-file 
/etc/haproxy/certs/backend-ca-certificates.crt crt 
/etc/haproxy/certs/frontend-server-certificate.pem


TESTBACKEND2 config (NOT WORKING) looks like this:
# --- TESTBACKEND2
backend TESTBACKEND2
option  forwardfor except 127.0.0.0/8
server TESTBACKEND2-server 10.1.1.1:443 check inter 30s  verify required 
ssl verifyhost www.ham.eggs ca-file 
/etc/haproxy/certs/backend-ca-certificates.crt crt 
/etc/haproxy/certs/frontend-server-certificate.pem


Please can you confirm that our understanding of HAProxy documentation is 
correct?
If so, then we could mark this topic as "solved" :-)


BR
Martin


-Original Message-
From: lu...@ltri.eu [mailto:lu...@ltri.eu]
Sent: Samstag, 14. Juli 2018 11:35
To: Martin RADEL 
Cc: haproxy@formilux.org
Subject: Re: TLS handshake works with certificate name mismatch using "verify 
required" and "verifyhost"

Hello Martin,


> we have a strange situation with our HAProxy, running on Version 1.8.8 with 
> OpenSSL.

Please share the output of haproxy -vv. Did you build openssl yourself or is 
this a distribution provided openssl lib? I am asking because build issues can 
lead to very strange behavior.



> server BACKEND1-server 10.1.1.1:443 check inter 30s  verify required
> ssl verifyhost *.foo.bar

*.foo.bar is not a valid hostname. It is a valid wildcard representation in a 
cert's SAN, yes, but not a hostname. Use a real hostname for verifyhost instead, 
like www.foo.bar

Also, let's confirm the backend is really configured as per expectations, by 
running requests via curl from the haproxy box:

This should work:
curl -v --cacert /etc/haproxy/certs/backend-ca-certificates.crt
--resolve www.foo.bar:443:10.1.1.1 https://www.foo.bar/

This should fail:
curl -v --cacert /etc/haproxy/certs/backend-ca-certificates.crt
--resolve www.foo.fail:443:10.1.1.1 https://www.foo.bar/
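
As a cross-check against another TLS stack: Python's ssl module exposes the same two 
independent knobs, chain verification and hostname checking, which is a loose analogy 
to "verify" and "verifyhost". This is an analogy only, not a statement about HAProxy 
internals:

```python
import ssl

# Roughly "verify required" with a name to check: the certificate chain
# *and* the hostname are both verified.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname

# Roughly "verify required" without SNI or verifyhost: the chain is still
# verified, but the certificate's name is never compared to anything.
ctx.check_hostname = False
assert ctx.verify_mode == ssl.CERT_REQUIRED and not ctx.check_hostname
```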



cheers,
lukas


RE: TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-16 Thread Martin RADEL
Hi,

The certificate subject and subject alternate name are set to “*.foo.bar” (I’m 
replacing real DNS name here with foo.bar here because of security reasons).
There is no IP address included in the server’s certificate.

We are not using SNI on our clients.

BR
Martin


From: ig...@encompasscorporation.com
Sent: Friday, 13 July 2018 03:27
To: Martin RADEL <martin.ra...@rbinternational.com>
Cc: haproxy@formilux.org
Subject: Re: TLS handshake works with certificate name mismatch using "verify 
required" and "verifyhost"

On Fri, Jul 13, 2018 at 11:08 AM, Igor Cicimov 
<ig...@encompasscorporation.com> wrote:
Hi Martin,

On Thu, Jul 12, 2018 at 6:55 PM, Martin RADEL 
<martin.ra...@rbinternational.com> wrote:
Hi all,

we have a strange situation with our HAProxy, running on Version 1.8.8 with 
OpenSSL.
(See the details in the setup listed below - some lines are omitted 
intentionally. It’s a config snippet with just the interesting parts mentioned)

Initial situation:
We run a HAProxy instance which enforces mutual TLS on the frontend, allowing 
clients to connect to it only when they present a specific certificate.
The HAProxy also does mutual TLS to the backend, presenting its frontend server 
certificate to the backend as a client certificate.
The backend only allows connections when the HAProxy’s certificate is presented 
to it.
To have a proper TLS handshake to the backend, and to be able to identify a 
man-in-the-middle scenario, we use the “verify required” directive together 
with the “verifyhost” directive.

The HAProxy is not able to resolve the backend’s real DNS-hostname, so it’s 
using the IP of the server instead (10.1.1.1)
The backend is presenting a wildcard server certificate with a DNS-hostname 
looking like “*.foo.bar”


In this configuration, one could assume that there is always a certificate name 
mismatch with the TLS handshake:
Backend server will present its server certificate with a proper DNS hostname 
in it, and the HAProxy will find out that it doesn’t match the initially used 
connection name “10.1.1.1”.


Just checking if the IP hasn't by any chance been included in the certificate's 
subjectAltName?

Issue:
In fact the connection to the backend works all the time, even when there is a 
name mismatch and even if we use the “verify required” option together with 
“verifyhost”.
It seems as if HAProxy completely ignores the mismatch, as if we had used the 
option “verify none”.


According to HAProxy documentation, this is clearly a not-expected behavior:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-verify


Can somebody please share some knowledge why this is working, or can confirm 
that this is a bug?


#-
# Global settings
#-
global
log /dev/log local2
pidfile /run/haproxy/haproxy.pid
maxconn 2
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!RC4
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
stats socket /var/lib/haproxy/stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode    http
log global
option  http-server-close
option  redispatch
retries 3
maxconn 2
errorfile 503   /etc/haproxy/errorpage.html
default-server  init-addr last,libc,none

# 
#  HAPROXY CONFIG WITH WILDCARD CERTIFICATE ON BACKEND
# 
# --- FRONTEND1 (TLS with mutual authentication) ---
frontend FRONTEND1
option  forwardfor except 127.0.0.0/8
acl authorizedClient ssl_c_s_dn(cn) -m str -f 
/etc/haproxy/authorized_clients.cfg
bind *:443 

TLS handshake works with certificate name mismatch using "verify required" and "verifyhost"

2018-07-12 Thread Martin RADEL
Hi all,

we have a strange situation with our HAProxy, running on Version 1.8.8 with 
OpenSSL.
(See the details in the setup listed below - some lines are omitted 
intentionally. It's a config snippet with just the interesting parts mentioned)

Initial situation:
We run a HAProxy instance which enforces mutual TLS on the frontend, allowing 
clients to connect to it only when they present a specific certificate.
The HAProxy also does mutual TLS to the backend, presenting its frontend server 
certificate to the backend as a client certificate.
The backend only allows connections when the HAProxy's certificate is presented 
to it.
To have a proper TLS handshake to the backend, and to be able to identify a 
man-in-the-middle scenario, we use the "verify required" directive together 
with the "verifyhost" directive.

The HAProxy is not able to resolve the backend's real DNS-hostname, so it's 
using the IP of the server instead (10.1.1.1)
The backend is presenting a wildcard server certificate with a DNS-hostname 
looking like "*.foo.bar"


In this configuration, one could assume that there is always a certificate name 
mismatch with the TLS handshake:
Backend server will present its server certificate with a proper DNS hostname 
in it, and the HAProxy will find out that it doesn't match the initially used 
connection name "10.1.1.1".
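
(A side note on why that assumption is sound: per RFC 6125, a wildcard dNSName such 
as "*.foo.bar" covers only the left-most DNS label, and an IP address is matched 
against iPAddress SAN entries, never against a dNSName. A deliberately simplified 
matcher, far less strict than any real TLS library:)

```python
def wildcard_matches(pattern, name):
    """Toy RFC 6125-style matcher: '*' may stand in for the left-most
    label only, and an IP literal is never matched by a dNSName."""
    if name.replace(".", "").isdigit():
        return False                   # IPv4 literal: dNSName never applies
    p_labels, n_labels = pattern.split("."), name.split(".")
    if len(p_labels) != len(n_labels):
        return False
    if p_labels[0] not in ("*", n_labels[0]):
        return False
    return p_labels[1:] == n_labels[1:]

assert wildcard_matches("*.foo.bar", "www.foo.bar")
assert not wildcard_matches("*.foo.bar", "www.other.bar")
assert not wildcard_matches("*.foo.bar", "10.1.1.1")   # IP can never match
```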


Issue:
In fact the connection to the backend works all the time, even when there is a 
name mismatch and even if we use the "verify required" option together with 
"verifyhost".
It seems as if HAProxy completely ignores the mismatch, as if we had used the 
option "verify none".


According to HAProxy documentation, this is clearly a not-expected behavior:
http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-verify


Can somebody please share some knowledge why this is working, or can confirm 
that this is a bug?


#-
# Global settings
#-
global
log /dev/log local2
pidfile /run/haproxy/haproxy.pid
maxconn 2
ssl-default-bind-ciphers 
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!RC4
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
stats socket /var/lib/haproxy/stats

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode    http
log global
option  http-server-close
option  redispatch
retries 3
maxconn 2
errorfile 503   /etc/haproxy/errorpage.html
default-server  init-addr last,libc,none

# 
#  HAPROXY CONFIG WITH WILDCARD CERTIFICATE ON BACKEND
# 
# --- FRONTEND1 (TLS with mutual authentication) ---
frontend FRONTEND1
option  forwardfor except 127.0.0.0/8
acl authorizedClient ssl_c_s_dn(cn) -m str -f 
/etc/haproxy/authorized_clients.cfg
bind *:443 ssl crt /etc/haproxy/certs/frontend-server-certificate.pem 
ca-file /etc/haproxy/certs/frontend-ca-certificates.crt verify required
use_backend BACKEND1 if authorizedClient frontend

# --- BACKEND1
backend BACKEND1
option  forwardfor except 127.0.0.0/8
server BACKEND1-server 10.1.1.1:443 check inter 30s  verify required ssl 
verifyhost *.foo.bar ca-file 
/etc/haproxy/certs/backend-ca-certificates.crt crt 
/etc/haproxy/certs/frontend-server-certificate.pem









Re: Compression issues with http-server-close/httpclose

2018-02-01 Thread Martin Goldstone
Yes, the exact same configuration file, no caching or threads. We've not
tried HTTP/2 just yet, I'll have a look at this when I'm in the office
tomorrow.

Thanks

On 1 Feb 2018 19:31, "Willy Tarreau" <w...@1wt.eu> wrote:

Hi guys,

On Thu, Feb 01, 2018 at 06:16:32PM +0100, Lukas Tribus wrote:
> Hello Martin,
>
> On 1 February 2018 at 17:18, Martin Goldstone <m.j.goldst...@keele.ac.uk>
wrote:
> > Hi,
> >
> > We've been using haproxy in docker for quite some time to provide reverse
> > proxy facilities for many and varied application servers. Typically, we've
> > always used option http-server-close in the config, except for rare
> > occasions where we might need http-keep-alive (eg ntlm authentication). We
> > also have the following in our front end configs:
> >
> > compression algo gzip
> > compression type text/html application/x-javascript text/css
> > application/javascript text/javascript text/plain text/xml application/json
> > application/vnd.ms-fontobject application/x-font-opentype
> > application/x-font-truetype application/x-font-ttf application/xml font/eot
> > font/opentype font/otf image/svg+xml image/vnd.microsoft.icon
> >
> > These have been working fine up to and including the most recent release of
> > 1.7. We've recently begun re-engineering our reverse proxy setup, and as
> > part of this we want to move to 1.8. However, we've discovered that some
> > resources requested by web pages (mainly javascript and css) don't load
> > properly when using haproxy 1.8 with these options. Basically, the page
> > doesn't display and the developer toolbar in Chrome gives an error of
> > net::ERR_INCOMPLETE_CHUNKED_ENCODING or net::ERR_INVALID_CHUNKED_ENCODING
> > against the resources that failed to load in the network pane. I haven't
> > been able to determine yet why this doesn't apply to every resource the page
> > attempts to load, but in this case out of 20 javascript and css files, 5
> > fail to load.
> >
> > [...]
> > Can anyone offer any advice?
>
> Ok, so this is clearly a bug that has to be fixed.

I really don't see anything changing in the compression area between 1.7
and 1.8, so I'm afraid we might have been breaking something somewhere
else :-(  Just to be sure Martin, are you using the same configuration ?
No threads nor caching for example (just trying to narrow down the problem)?

> > We've noticed that the problem goes away when using option http-keep-alive
> > and option http-pretend-keepalive, but as the documentation suggests that
> > http-server-close is the preferred option, we'd prefer to stick with that.

That's a very very useful element. So there might be some breakage in the
way we manage the analysers or filters at the end of the request. I think
we had some changes in this area during 1.8.

> Yeah, that comment in the documentation stems from the introduction of
> keepalive support 8 years ago (commit 16bfb021 "MINOR: config: add
> option http-keep-alive"), I think it should be removed now. We made
> http-keep-alive the default mode 4 years ago (commit 70dffdaa "MAJOR:
> http: switch to keep-alive mode by default") and everyone is using it
> nowadays, which is why you are seeing this unfixed bug in those old
> close modes as opposed to http-keep-alive.

I agree this definitely makes sense. I'm wondering whether it surfaces
when using HTTP/2 (haproxy.org runs with both H2 and compression and
we didn't get any such report yet though).

> Really http-keep-alive is what you should use in haproxy 1.8 (and
> 1.7), as it gets all the testing (filters, compression, HTTP/2) and
> everyone is using that (default) mode. Keep-alive is safe as it does
> not reuse session between different sessions by default:
> http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-http-reuse
>
>
> If there is agreement (Willy?) I can send a doc patch removing that paragraph.

Oh yes please, thanks for the offer Lukas!

Willy


Compression issues with http-server-close/httpclose

2018-02-01 Thread Martin Goldstone
Hi,

We've been using haproxy in docker for quite some time to provide reverse
proxy facilities for many and varied application servers.  Typically, we've
always used option http-server-close in the config, except for rare
occasions where we might need http-keep-alive (eg ntlm authentication). We
also have the following in our front end configs:

compression algo gzip
compression type text/html application/x-javascript text/css
application/javascript text/javascript text/plain text/xml application/json
application/vnd.ms-fontobject application/x-font-opentype
application/x-font-truetype application/x-font-ttf application/xml font/eot
font/opentype font/otf image/svg+xml image/vnd.microsoft.icon

These have been working fine up to and including the most recent release of
1.7. We've recently begun re-engineering our reverse proxy setup, and as
part of this we want to move to 1.8. However, we've discovered that some
resources requested by web pages (mainly javascript and css) don't load
properly when using haproxy 1.8 with these options. Basically, the page
doesn't display and the developer toolbar in Chrome gives an error of
net::ERR_INCOMPLETE_CHUNKED_ENCODING or net::ERR_INVALID_CHUNKED_ENCODING
against the resources that failed to load in the network pane. I haven't
been able to determine yet why this doesn't apply to every resource the
page attempts to load, but in this case out of 20 javascript and css files,
5 fail to load.
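
(For readers wondering what the browser is complaining about: with 
Transfer-Encoding: chunked, the body must end with a zero-length chunk, and 
ERR_INCOMPLETE_CHUNKED_ENCODING typically means the connection closed before that 
terminator arrived. A minimal illustration of the wire format, not HAProxy output:)

```python
def chunk(body, size=8):
    """Encode `body` with HTTP/1.1 chunked transfer-encoding."""
    out = b""
    for i in range(0, len(body), size):
        piece = body[i:i + size]
        out += b"%x\r\n" % len(piece) + piece + b"\r\n"
    return out + b"0\r\n\r\n"   # terminating zero-length chunk

stream = chunk(b"hello chunked world")
assert stream.endswith(b"0\r\n\r\n")        # complete: browser is happy

truncated = stream[:-5]                      # connection closed too early
assert not truncated.endswith(b"0\r\n\r\n")  # -> ERR_INCOMPLETE_CHUNKED_ENCODING
```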

We've noticed that the problem goes away when using option http-keep-alive
and option http-pretend-keepalive, but as the documentation suggests that
http-server-close is the preferred option, we'd prefer to stick with that.

Can anyone offer any advice?

Thanks

-- 
Martin Goldstone
IT Systems Administrator
IT Services, Innovation Centre 1 (IC1)
Keele University, Keele, Staffordshire, United Kingdom, ST5 5NB
Telephone: +44 1782 734457
G+: http://google.com/+MartinGoldstoneKeele


haproxy - namespace implementation and usage

2016-09-17 Thread Martin Tóth
Hi fellow haproxy users,

I just wanted to ask if the new namespace support in haproxy (implemented in v. 1.6.9) 
can work like this. I have a Zabbix proxy daemon running inside a network 
namespace in Linux; let’s say the namespace is named “customer”.
I want to run the haproxy daemon in the default Linux namespace and be able 
to connect with haproxy to the Zabbix proxy daemon running inside its own namespace. Is 
this possible?

My config :

namespace_list
namespace customer

frontend customer
mode tcp
bind 10.0.0.2:10001 accept-proxy # this is the IP and port on the host 
(10.0.0.2 - linux server IP) where I should connect when I want to reach the 
customer Zabbix proxy daemon
default_backend serverlist

backend serverlist
mode tcp
server s1 10.8.1.4:10050 namespace customer # this is the zabbix proxy 
daemon

I did not find any related configuration example, or more than one page of 
documentation. 

Thanks a lot for any reply.

Regards.

Martin


HAProxy description server status

2016-07-25 Thread Martin Šindler
Hi,
I hope that I'm writing to the correct person; if not, please forward this
question to the correct one.

We are trying to implement HAProxy and are performing some tests. One of
our requirements is automatic management of HAProxy during server
maintenance; this needs reading the status of the servers and putting nodes
into the appropriate state.

I'm facing a problem regarding some missing information in the documentation
(maybe this information is present but I can't find it).

*Problem:*
when I issue the command: *show servers state Test-cluster*

It returns output which can be reformatted to something like:

be_id  : 5
be_name: Test-cluster
srv_id : 2
srv_name   : clstrnod2
srv_addr   : 192.168.10.6
srv_op_state   : *2*
srv_admin_state: 0
srv_uweight: 8
srv_iweight: 10
srv_time_since_last_change : 264872
srv_check_status   : 9
srv_check_result   : 3
srv_check_health   : 7
srv_check_state: 6
srv_agent_state: 22
bk_f_forced_id : 0
srv_f_forced_id: 0
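
For what it's worth, these integers map to enums in the HAProxy sources; my reading of 
the 1.6 tree is that srv_op_state follows enum srv_state, where 2 is SRV_ST_RUNNING, 
i.e. UP. Treat the mapping below as an assumption to be checked against 
doc/management.txt of the version in use, not as authoritative. A decoding sketch:

```python
# Assumed mapping, taken from enum srv_state in the HAProxy 1.6 sources;
# verify against doc/management.txt of your own HAProxy version.
SRV_OP_STATE = {0: "STOPPED", 1: "STARTING", 2: "RUNNING (UP)", 3: "STOPPING"}

def decode_op_state(fields):
    """fields: dict built from one reformatted 'show servers state' record."""
    return SRV_OP_STATE.get(int(fields["srv_op_state"]), "UNKNOWN")

# The record above has srv_op_state = 2, i.e. the server is up and running.
assert decode_op_state({"srv_op_state": "2"}) == "RUNNING (UP)"
```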

I found in the documentation that, for example:

srv_op_state: Server operational state (UP/DOWN/...).

But there is no clue as to what the INTEGER value 2 means. I have this
problem for all the other status values as well. Could you please help me
find the documentation where it is written what the INTEGER values represent?

Thank you very much

Martin Sindler


Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-18 Thread David Martin
On Mon, Apr 18, 2016 at 3:02 PM, Janusz Dziemidowicz
<rrapt...@nails.eu.org> wrote:
> 2016-04-15 16:50 GMT+02:00 David Martin <dmart...@gmail.com>:
>> I have tested the current patch with the HAProxy default, a list of curves,
>> a single curve and also an incorrect curve.  All seem to behave correctly.
>> The conditional should only skip calling ecdh_auto() if curves_list()
>> returns 0 in which case HAProxy exits anyway.
>>
>> Maybe I'm missing something obvious, this has been a learning experience for
>> me.
>
> You are correct. I guess I shouldn't have been looking at patches
> during a break at a day work;)
> Seems ok for me now. Apart from the missing documentation changes;)
>
> --
> Janusz Dziemidowicz

Added doc changes :)
From f54632ab99e526ddb6d6acc26f6c1cb74b3c647d Mon Sep 17 00:00:00 2001
From: David Martin <dmart...@gmail.com>
Date: Mon, 18 Apr 2016 16:10:13 -0500
Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
allows the server to negotiate ECDH curves much like it does ciphers.
Prefered curves can be specified using the existing ecdhe bind options
(ecdhe secp384r1:prime256v1)
---
 doc/configuration.txt |  6 ++++--
 src/ssl_sock.c        | 16 +++++++++++++++-
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6b80158..be1f06f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -9625,8 +9625,10 @@ backlog 
 
 ecdhe 
   This setting is only available when support for OpenSSL was built in. It sets
-  the named curve (RFC 4492) used to generate ECDH ephemeral keys. By default,
-  used named curve is prime256v1.
+  the named curve (RFC 4492) used to generate ECDH ephemeral keys. OpenSSL
+  1.0.2 and newer support a list of curves that are negotiated during SSL/TLS
+  handshake such as  "prime256v1:secp384r1" (without quotes). By default, used
+  named curve is prime256v1.
 
 ca-file 
   This setting is only available when support for OpenSSL was built in. It
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 0d35c29..a5d9408 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2756,7 +2756,20 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 	SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_cbk);
 	SSL_CTX_set_tlsext_servername_arg(ctx, bind_conf);
 #endif
-#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+#if !defined(OPENSSL_NO_ECDH)
+#if defined(SSL_CTX_set_ecdh_auto)
+	{
+		const char *ecdhe = (bind_conf->ecdhe ? bind_conf->ecdhe : ECDHE_DEFAULT_CURVE);
+		if (!SSL_CTX_set1_curves_list(ctx, ecdhe)) {
+			Alert("Proxy '%s': unable to set elliptic curve list to '%s' for bind '%s' at [%s:%d].\n",
+curproxy->id, ecdhe, bind_conf->arg, bind_conf->file, bind_conf->line);
+			cfgerr++;
+		}
+		else {
+			SSL_CTX_set_ecdh_auto(ctx, 1);
+		}
+	}
+#elif defined(SSL_CTX_set_tmp_ecdh)
 	{
 		int i;
 		EC_KEY  *ecdh;
@@ -2774,6 +2787,7 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 		}
 	}
 #endif
+#endif
 
 	return cfgerr;
 }
-- 
1.9.1



Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-15 Thread David Martin
On Apr 15, 2016 4:24 AM, "Janusz Dziemidowicz" <rrapt...@nails.eu.org>
wrote:
>
> 2016-04-14 17:39 GMT+02:00 David Martin <dmart...@gmail.com>:
> > Here's a revised patch, it throws a fatal config error if
> > SSL_CTX_set1_curves_list() fails.  The default echde option is used so
> > current configurations should not be impacted.
> >
> > Sorry Janusz, forgot the list on my reply.
>
> I believe that now it is wrong as SSL_CTX_set_ecdh_auto works
> differently than this code implies.
> From what I was able to tell from OpenSSL code (always a pleasure
> looking at) it works as follows:
> - SSL_CTX_set_ecdh_auto turns on negotiation of curves, without this
> no curves will be negotiated (and only one configured curve will be
> used, "the old way")
> - the list of curves that are considered during negotiation contain
> all of the OpenSSL supported curves
> - unless you also call SSL_CTX_set1_curves_list() and narrow it down
> to the list you prefer
>
> Right now you patch either calls SSL_CTX_set_ecdh_auto or
> SSL_CTX_set1_curves_list, but not both. Unless I'm mistaken, this
> kinda is not how it is supposed to be used.
> Have you tested behavior of the server with any command line client?

I have tested the current patch with the HAProxy default, a list of curves,
a single curve and also an incorrect curve.  All seem to behave correctly.
The conditional should only skip calling ecdh_auto() if curves_list()
returns 0 in which case HAProxy exits anyway.

Maybe I'm missing something obvious, this has been a learning experience
for me.

>
> I believe this should be something like:
> #if new OpenSSL
>SSL_CTX_set_ecdh_auto(... 1)
>SSL_CTX_set1_curves_list() with user supplied ecdhe or
> ECDHE_DEFAULT_CURVE by default
> #elif ...
>SSL_CTX_set_tmp_ecdh() with user supplied ecdhe or
> ECDHE_DEFAULT_CURVE by default
> #endif
>
> This way haproxy behaves exactly the same with default configuration
> and any version of OpenSSL. User can configure multiple curves if
> there is sufficiently new OpenSSL.
>
> Changes to the documentation would also be nice in the patch :)
>
> --
> Janusz Dziemidowicz

Just to be clear I have no intention of running anything other than
prime256v1 on my systems nor do I think anyone else should. I had x25519 in
mind when looking over the code.

Perhaps something like this should wait until haproxy is ready for openssl
1.1.0 as it has little value without an alternative curve.

Love the project by the way, thanks for all the awesome work.


Re: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-14 Thread David Martin
Here's a revised patch, it throws a fatal config error if
SSL_CTX_set1_curves_list() fails.  The default echde option is used so
current configurations should not be impacted.

Sorry Janusz, forgot the list on my reply.

On Thu, Apr 14, 2016 at 10:37 AM, David Martin <dmart...@gmail.com> wrote:
> Here's a revised patch, it throws a fatal config error if
> SSL_CTX_set1_curves_list() fails.  The default echde option is used so
> current configurations should not be impacted.
>
> On Thu, Apr 14, 2016 at 7:22 AM, Janusz Dziemidowicz
> <rrapt...@nails.eu.org> wrote:
>> 2016-04-14 12:05 GMT+02:00 Willy Tarreau <w...@1wt.eu>:
>>> Hi David,
>>>
>>> On Wed, Apr 13, 2016 at 03:19:45PM -0500, David Martin wrote:
>>>> This is my first attempt at a patch, I'd love to get some feedback on this.
>>>>
>>>> Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2.
>>>
>>>> From 05bee3e95e5969294998fb9e2794ef65ce5a6c1f Mon Sep 17 00:00:00 2001
>>>> From: David Martin <dmart...@gmail.com>
>>>> Date: Wed, 13 Apr 2016 15:09:35 -0500
>>>> Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection
>>>>
>>>> Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
>>>> allows the server to negotiate ECDH curves much like it does ciphers.
>>>> Prefered curves can be specified using the existing ecdhe bind options
>>>> (ecdhe secp384r1:prime256v1)
>>>
>>> Could it have a performance impact ? I mean, may this allow a client to
>>> force the server to use curves that imply harder computations for example ?
>>> I'm asking because some people got seriously hit by the move from dhparm
>>> 1024 to 2048, so if this can come with a performance impact we possibly want
>>> to let the user configure it.
>>
>> Switching ECDHE curves can have performance impact, for example result
>> of openssl speed on my laptop:
>>  256 bit ecdh (nistp256)   0.0003s   2935.3
>>  384 bit ecdh (nistp384)   0.0027s    364.9
>>  521 bit ecdh (nistp521)   0.0016s    623.2
>> The difference is so high for nistp256 because OpenSSL has heavily
>> optimized implementation
>> (https://www.imperialviolet.org/2010/12/04/ecc.html).
>>
>> Apart from calling SSL_CTX_set_ecdh_auto() this patch also takes into
>> account user supplied curve list, so users can customize this as
>> needed (currently haproxy only allows to select one curve, which is a
>> limitation of older OpenSSL versions).
>>
>> However, this patch reuses bind option 'ecdhe'. Currently it is
>> documented to accept only one curve. I believe it should be at least
>> updated to state that multiple curves can be used with sufficiently
>> new OpenSSL.
>> Also, I'm not sure what will happen when SSL_CTX_set1_curves_list() is
>> called with NULL (no ecdhe bind option). Even if it is accepted by
>> OpenSSL it will silently change haproxy default, before this patch it
>> was only prime256v1 (as defined in ECDHE_DEFAULT_CURVE), afterward it
>> will default to all curves supported by OpenSSL. Probably the best
>> would be to keep current default, so it all works consistently in
>> default configuration, regardless of version of haproxy and OpenSSL.
>>
>> --
>> Janusz Dziemidowicz
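
The negotiation described above — SSL_CTX_set_ecdh_auto enables curve negotiation 
over all OpenSSL-supported curves, and SSL_CTX_set1_curves_list narrows that set to a 
server preference list — boils down to an intersection taken in server-preference 
order. A conceptual sketch, not OpenSSL's actual code:

```python
def negotiate_curve(server_prefs, client_supported):
    """Pick the first curve in the server's preference list that the
    client also advertised; None means no shared curve (no ECDHE)."""
    client = set(client_supported)
    return next((c for c in server_prefs if c in client), None)

# ecdhe prime256v1:secp384r1 -> the server's first preference wins
assert negotiate_curve(["prime256v1", "secp384r1"],
                       ["secp384r1", "prime256v1"]) == "prime256v1"
# client only offers a curve the server did not list -> no ECDHE curve agreed
assert negotiate_curve(["prime256v1"], ["secp521r1"]) is None
```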
From a8b607bed1d787fdec67ad5234b678e4c1fbae72 Mon Sep 17 00:00:00 2001
From: David Martin <dmart...@gmail.com>
Date: Thu, 14 Apr 2016 10:24:40 -0500
Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
allows the server to negotiate ECDH curves much like it does ciphers.
Prefered curves can be specified using the existing ecdhe bind options
(ecdhe secp384r1:prime256v1)
---
 src/ssl_sock.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 0d35c29..a5d9408 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2756,7 +2756,20 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 	SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_cbk);
 	SSL_CTX_set_tlsext_servername_arg(ctx, bind_conf);
 #endif
-#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+#if !defined(OPENSSL_NO_ECDH)
+#if defined(SSL_CTX_set_ecdh_auto)
+	{
+		const char *ecdhe = (bind_conf->ecdhe ? bind_conf->ecdhe : ECDHE_DEFAULT_CURVE);
+		if (!SSL_CTX_set1_curves_list(ctx, ecdhe)) {
+			Alert("Proxy '%s': unable to set elliptic curve list to '%s' for bind '%s' at [%s:%d].\n",
+curproxy->id, ecdhe, bind_conf->arg, bind_conf->file, bind_conf->line);
+			cfgerr++;
+		}
+		else {
+			SSL_CTX_set_ecdh_auto(ctx, 1);
+		}
+	}
+#elif defined(SSL_CTX_set_tmp_ecdh)
 	{
 		int i;
 		EC_KEY  *ecdh;
@@ -2774,6 +2787,7 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 		}
 	}
 #endif
+#endif
 
 	return cfgerr;
 }
-- 
1.9.1



[PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-13 Thread David Martin
This is my first attempt at a patch, I'd love to get some feedback on this.

Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2.
From 05bee3e95e5969294998fb9e2794ef65ce5a6c1f Mon Sep 17 00:00:00 2001
From: David Martin <dmart...@gmail.com>
Date: Wed, 13 Apr 2016 15:09:35 -0500
Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
allows the server to negotiate ECDH curves much like it does ciphers.
Preferred curves can be specified using the existing ecdhe bind options
(ecdhe secp384r1:prime256v1)
---
 src/ssl_sock.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 0d35c29..a1af8cd 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2756,7 +2756,13 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 	SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_cbk);
 	SSL_CTX_set_tlsext_servername_arg(ctx, bind_conf);
 #endif
-#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+#if !defined(OPENSSL_NO_ECDH)
+#if defined(SSL_CTX_set_ecdh_auto)
+	{
+		SSL_CTX_set1_curves_list(ctx, bind_conf->ecdhe);
+		SSL_CTX_set_ecdh_auto(ctx, 1);
+	}
+#elif defined(SSL_CTX_set_tmp_ecdh)
 	{
 		int i;
 		EC_KEY  *ecdh;
@@ -2774,6 +2780,7 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 		}
 	}
 #endif
+#endif
 
 	return cfgerr;
 }
-- 
1.9.1



Re: Reloading haproxy without dropping connections

2016-01-22 Thread David Martin
We use the iptables SYN drop method, and it works fine; the additional one
second of response time for the tiny number of new connections doesn't bother
us, as we are not restarting multiple times per hour.

On Fri, Jan 22, 2016 at 11:01 AM, CJ Ess  wrote:
> The yelp solution I can't do because it requires a newer kernel then I have
> access to, but the unbounce solution is interesting, I may be able to work
> up something around that.
>
>
>
> On Fri, Jan 22, 2016 at 4:07 AM, Pedro Mata-Mouros
>  wrote:
>>
>> Hi,
>>
>> Haven’t had the chance to implement this yet, but maybe these links can
>> get you started:
>>
>>
>> http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
>> http://inside.unbounce.com/product-dev/haproxy-reloads/
>>
>> It’d be cool to have a sort of “officially endorsed” way of achieving
>> this.
>>
>> Best,
>>
>> Pedro.
>>
>>
>>
>> On 22 Jan 2016, at 00:38, CJ Ess  wrote:
>>
>> One of our sore points with HAProxy has been that when we do a reload
>> there is a ~100ms gap where neither the old nor the new HAProxy process accepts
>> any requests. See attached graphs. I assume that during this time any
>> connections received to the port are dropped. Is there anything we can do so
>> that the old process keeps accepting requests until the new process is
>> completely initialized and starts accepting connections on its own?
>>
>> I've looked into fencing the restart with iptable commands to blackhole
>> TCP SYNs, and I've looked into the huptime utility though I'm not sure
>> overloading libc functions is the best approach long term. Any other
>> solutions?
>>
>>
>> 
>> 
>>
>>
>>
>
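The iptables "SYN drop" fence discussed in this thread usually looks like the sketch below (ops sketch only, must run as root; the port, config path, and one-second window are assumptions, not tested values). Dropped SYNs are simply retransmitted by the client about a second later, which is where the small latency bump mentioned above comes from:

```shell
# briefly blackhole new connections so none land in the reload gap
iptables -I INPUT -p tcp --dport 80 --syn -j DROP

# graceful reload: the new process takes over the listening sockets,
# the old one (-sf) finishes its in-flight connections and exits
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)

# give the new process time to bind, then lift the fence;
# clients retransmit the dropped SYNs and connect normally
sleep 1
iptables -D INPUT -p tcp --dport 80 --syn -j DROP
```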



responses from disabled servers

2015-10-15 Thread David Martin
I just want to say first of all that haproxy is incredibly useful and
I've enjoyed working with it tremendously.  Thank you!

My question is: if a server is disabled because of a failed HTTP health
check and there are requests in flight, will the responses from the
disabled server still be returned to the client?  We artificially mark
servers as down when a server is going into maintenance
mode and are trying to avoid losing any requests.
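For the maintenance scenario described above, one common pattern is worth noting (hedged sketch; backend, URL, and addresses are made up): by default haproxy does not kill established sessions when a server transitions to down, and `http-check disable-on-404` lets the application itself signal a soft-stop, where the server stops receiving new connections but in-flight requests complete:

```
backend app
    option httpchk GET /health
    # a 404 from /health puts the server in "drain"-like state:
    # no new traffic, but requests already in flight still finish
    http-check disable-on-404
    server app1 10.0.0.11:8080 check
    # only with this extra option would haproxy actively kill live
    # sessions when the server is marked down:
    # server app2 10.0.0.12:8080 check on-marked-down shutdown-sessions
```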



Re: Accepting both, SSL- and non-SSL connections when acting as SSL end point

2015-09-15 Thread Martin Schmid

Hello piba, hello list

I am just overwhelmed by the possibilities that haproxy is offering!

If someone else needs a protocol switch as described, look at the 
appended configuration.


Following piba's idea and since apache does not provide native support 
for the proxy-protocol (unfortunately), I implemented a tcp protocol 
switch that directs openvpn connections to the server port 10443 and the 
SSL connection to the second frontend listening to SSL on port 60443.
The proxy protocol is used between protocol switch and SSL termination, 
configured by send-proxy and accept-proxy, respectively. Thus, the 
client's IP can be added later using the x-forwarded-for header via the 
http backend.


Maybe this could be improved more but all this is working perfectly now.

Thank you very much!

--

global
maxconn 4096
tune.ssl.default-dh-param 2048
debug
daemon
log 127.0.0.1local0

defaults
modehttp
option  httplog
log global
timeout connect 5000ms
timeout client 5ms
timeout server 5ms

frontend unsecured
bind 0.0.0.0:50080
timeout client 24h
reqadd X-Forwarded-Proto:\ http
default_backend www_backend

frontend ssl_terminal
mode tcp
option tcplog
bind /var/run/haproxy_ssl.sock ssl crt ssl.pem accept-proxy
timeout client 24h
default_backend www_backend

frontend switch
mode tcp
option tcplog
bind 0.0.0.0:443
tcp-request inspect-delay 5s
acl traffic_is_ssl req_ssl_ver  gt 0
acl enough_non_ssl_bytes   req_len  ge 22
tcp-request content accept if traffic_is_ssl   # accept SSL
tcp-request content accept if enough_non_ssl_bytes # accept non-SSL
# at this point we have something valid in the buffer
use_backend ssl_backend if traffic_is_ssl
default_backend ovpn_backend

backend ssl_backend
mode tcp
option tcplog
server httpsd /var/run/haproxy_ssl.sock send-proxy

backend www_backend
reqadd X-Forwarded-Proto:\ https
mode http
option httplog
option forwardfor
server httpd :80

backend ovpn_backend
mode tcp
option tcplog
server ovpnd :10443

listen stats *:20078
stats enable
stats uri /




Am 14.09.2015 um 15:31 schrieb PiBa-NL:

Op 14-9-2015 om 14:32 schreef Martin Schmid:

Hello list

I'm quite new to haproxy, and I've managed to use it with SSL
passthru and as SSL termination.
I've also started looking into the code to find the answers or
solutions to what I want to achieve.

I have OpenVPN and HTTPS running on the same port. This can be done
with several setups whereof using the openvpn port sharing feature is
the easiest.

But now I need to know the remote IP addresses in order to be able to
lock out abusive access to the web server. Https used to be unharmed
by exploitative access, but now it's getting a problem. With http, I
can reduce the traffic by locking out IP addresses using fail2ban.
With https, I cannot see the IP address, so there is no way to lock
them out selectively.
Any tool that does the backend switching cannot add an
x-forwarded-for http header and be the SSL end point at the same
time. Haproxy seems to be the only tool that might be able to handle
both.

Looking at the code of haproxy, it seems to me that once I configure
a bind with ssl, it just drops all connections that do not begin with
an SSL handshake.
However, it seems to be feasible to alter the code in order to fall
back to a non-ssl connection if the handshake fails.

Has someone of you already tried to accomplish such, or am I missing
a detail that makes this impossible?


Regards

Martin



Hi Martin,

Not sure if this will work with openvpn, but you could try it..
This mail might interest you:
http://marc.info/?l=haproxy&m=132375969032305&w=2

First split out TCP traffic to different backends depending on data
send from the client.
Then possibly feed it from a backend server back to a second frontend
where you handle the ssl-offloading if desired, while using proxy
protocol to keep client-ip information, and namespaces or unixsockets
for the connection between the two.

Again, i have not tested it, but this seems like it could be a way to
configure it with current options..

Regards,
PiBa-NL





Accepting both, SSL- and non-SSL connections when acting as SSL end point

2015-09-14 Thread Martin Schmid

Hello list

I'm quite new to haproxy, and I've managed to use it with SSL passthru 
and as SSL termination.
I've also started looking into the code to find the answers or solutions 
to what I want to achieve.


I have OpenVPN and HTTPS running on the same port. This can be done with 
several setups whereof using the openvpn port sharing feature is the 
easiest.


But now I need to know the remote IP addresses in order to be able to 
lock out abusive access to the web server. Https used to be unharmed by 
exploitative access, but now it's getting a problem. With http, I can 
reduce the traffic by locking out IP addresses using fail2ban. With 
https, I cannot see the IP address, so there is no way to lock them out 
selectively.
Any tool that does the backend switching cannot add an x-forwarded-for 
http header and be the SSL end point at the same time. Haproxy seems to 
be the only tool that might be able to handle both.


Looking at the code of haproxy, it seems to me that once I configure a 
bind with ssl, it just drops all connections that do not begin with an SSL 
handshake.
However, it seems to be feasible to alter the code in order to fall back 
to a non-ssl connection if the handshake fails.


Has someone of you already tried to accomplish such, or am I missing a 
detail that makes this impossible?



Regards

Martin




subscribe

2015-09-13 Thread Martin Schmid


--
Martin Schmid
Wolfwilerstrasse 57
CH-4626 Niederbuchsiten
www.haeschmi.ch




[no subject]

2015-09-13 Thread Martin Schmid


--
Martin Schmid
Wolfwilerstrasse 57
CH-4626 Niederbuchsiten
www.haeschmi.ch




garment supplier hope to cooperation with you

2015-07-22 Thread martin

DearSir/miss,
Ningbo kunchang  garment co., ltd . We are privately owned knitwear 
manufacturer locates in Ningbo , China ,Near shanghai city ,  with 500 
employees . We are a large-scale vertical garment company including printing 
factory , cutting and sewing factory all in house . We have monthly production 
capacity of 400,000pcs , and yearly turnover of us 9 million ,
The Garment produced by our cover women’s ,men’s and children’s knitwear 
,including t-shirt , polo , rugby , sweatshirt , knitted pants and knitted 
dresses, The main export markets are Europe , united states , Canada , 
Australia ect , Among the valued customers are a numbers of leading 
international retailers and brands , such as golf , Pierre cardin , gintonic , 
kitaro , signum , lerros ,   befree , top secrect , roxy , foxect .  
International customers value the high quality of our production . We focus on 
using top quality yarn , excellent knitting and dying , finished fabric 
inspection ,and controlling , every step reflects the keen pursuit of quality 
standards by everyone at our ,  
We have a strong sampling department , with computerized CAD systems, 
Three dedicated sampling lines , and an extensive fabric warehouse . This make 
it possible for our to deliver high quality salesman sample and develop sample 
in the shortest possible time . 
 Best Regards 
Martin 
General manager 
ningbo kunchang fashion co., ltd 
tel:0086-574-88361859
add: Room605, no.68 gongmao road , gu'an cun, lianfeng road, yinzhou ,ningbo , 
china 

using backend node details in acls/response manipulation

2015-03-24 Thread Martin Nikolov
Hi guys,
I'm wondering if it is possible to use things like selected backend node's
ip, name or port as variables. My goal is to set a header in the http
response with the selected backend's details to a certain set of source ip
addresses (hence the acl, which is the easy part). I searched in the
documentation, but was not able to find a solution.

Thanks in advance.
Regards.
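One way to achieve what's asked above (hedged sketch; names and addresses are hypothetical, and it needs haproxy 1.5+ where `http-response set-header` is available): the header value is a log-format string, so the log variables `%s` (server name) and `%si`/`%sp` (server address/port) can be embedded, and the rule can be guarded by a source-address ACL:

```
backend app
    acl trusted_src src 10.0.0.0/8
    # %s = selected server's name, %si:%sp = its address:port
    # (log-format variables, evaluated per response)
    http-response set-header X-Served-By %s if trusted_src
    http-response set-header X-Server-Addr %si:%sp if trusted_src
    server app1 192.168.1.10:8080 check
    server app2 192.168.1.11:8080 check
```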


Re: Client side ssl certificates for specific location

2015-02-24 Thread Martin
Remy van Elst relst@... writes:

 
 Lukas Tribus schreef op 09/01/14 00:08:
  Hi,
 
 
  $ openssl s_client -state -quiet -connect xx.xx.xx.xx:443
 
  SSL_connect:before/connect initialization
  SSL_connect:SSLv2/v3 write client hello A
  SSL_connect:SSLv3 read server hello A
  depth=4 /C=NL/O=xxx/CN=xxx
  verify error:num=19:self signed certificate in certificate chain
  verify return:0
  SSL_connect:SSLv3 read server certificate A
  SSL_connect:SSLv3 read server key exchange A
  SSL_connect:SSLv3 read server done A
  SSL_connect:SSLv3 write client key exchange A
  SSL_connect:SSLv3 write change cipher spec A
  SSL_connect:SSLv3 write finished A
  SSL_connect:SSLv3 flush data
  SSL_connect:SSLv3 read finished A
  GET /admin/ HTTP/1.0
 
  SSL_connect:SSL renegotiate ciphers
  SSL_connect:SSLv3 write client hello A
  SSL_connect:SSLv3 read server hello A
  depth=4 /C=NL/O=xxx/CN=xxx
  verify error:num=19:self signed certificate in certificate chain
  verify return:0
  SSL_connect:SSLv3 read server certificate A
  SSL_connect:SSLv3 read server key exchange A
  SSL_connect:SSLv3 read server certificate request A
  SSL_connect:SSLv3 read server done A
  SSL_connect:SSLv3 write client certificate A
  SSL_connect:SSLv3 write client key exchange A
  SSL_connect:SSLv3 write change cipher spec A
  SSL_connect:SSLv3 write finished A
  SSL_connect:SSLv3 flush data
  SSL3 alert read:fatal:handshake failure
  Ok, its clear what Apache is doing. After matching the /admin/ path,
  Apache triggers a SSL renegotiation and within that renegotiation 
requests
  the client certificate from the browser.
 
  Because this all happens with a renegotiation, this is probably 
valid
  behavior from a SSL/TLS perspective.
 
  It is still a layering violation imho, because we trigger layer 5
  renegotiation by matching a layer 7 event and I would rather avoid 
that.
 
 
 
  In the testing phase (now), some clients have issues with the app
  (because they have a certificate in their browser). On Apache/F5 
they
  are not asked for a certificate in using the app, only admins are.
  I understand this now; your actual customers are seeing this, if 
they
  have at least 1 client certificate in the browser, even though they
  don't use, care and know about the /admin/ path, which you are only
  using internally.
 
  I see why this sucks.
 
 
 
  If it is not possible to do this with haproxy (because it is 
invalid
  according to the spec) then I'll probably suggest they move the 
admin
  interface out of the user-facing part of the app.
  Yes, I suggest you do this. You will need to do this with a second 
bind
  statement, which means either moving it to a different port or to a
  different IP address.
 
  What would make sense in HAproxy however is the possibility to set 
the
  verify command based on SNI values and I think there are some very 
valid
  use cases that could benefit from it (SNI based virtual hosting with
  different client-cert settings or like in your case a dedicated 
admin
  interface).
 
  We already have a snifilter for crt-list [1], extending verify 
with a
  snifilter is probably doable.
 
 
 
  Regards,
 
  Lukas
 
 
  [1] http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.1-crt-list
 Thank you again for the great explanation both. It is indeed just the 
 user facing part, functional wise the client certificate just works. I 
 think I like the setup with two apps better than what we have now, 
 should also make it easier to firewall the admin part better.
 
 Cheers,
 
 
 Attachment (smime.p7s): application/pkcs7-signature, 3722 bytes

Hi,

There are some valid use cases for requiring client certificate for a 
specific URI and not asking for it otherwise.

For example, the Estonian ID card (I'm sure there are others) - a smart card 
is inserted into a reader and the person's certificate can be read from 
it (after PIN validation). This is a common secure 2-factor 
authentication mechanism here. The basic use case goes like this:
* user is on site (doesn't matter if on http or https)
* some action requires user to be securely identified
* user is redirected to an URI that requires client certificate
* user enters his PIN and the certificate is sent to the server, the server 
validates it and considers the user securely authorized/authenticated
* user continues on https

In use cases like this, it is essential that we don't ask (or even 
accept) client certificate on any other pages, just on the 
authentication page

Creating a subdomain and load-balancing it separately, just for this one 
page, seems a huge overkill (the authentication page doesn't need to do 
anything; we'd be fine if the LB could forward the cert in HTTP headers, but 
it's imperative that we only require/accept the cert on one URI).

IIS 8.0 seems to do the same trick as apache (you can set SSL client 
certificate requirements per-page in IIS as well), SSL dump: 
openssl s_client -state -quiet -connect xxx.xxx.xxx.xxx:443


Apache Benchmark Failed requests

2015-01-20 Thread Martin van Diemen
Hi All,

I'm using HAProxy as load balancer for load balancing websites and tomcat
servers. I'm currently testing the performance with Apache Benchmark and I
get the following result:

$ ab -n 500 -c 100 https://domain.tld/instance/

...

Document Length:2950 bytes

Concurrency Level:  100
Time taken for tests:   18.723 seconds
Complete requests:  500
Failed requests:247
   (Connect: 0, Receive: 0, Length: 247, Exceptions: 0)
Non-2xx responses:  500
Total transferred:  1709018 bytes
HTML transferred:   1473518 bytes
Requests per second:26.70 [#/sec] (mean)
Time per request:   3744.689 [ms] (mean)
Time per request:   37.447 [ms] (mean, across all concurrent requests)
Transfer rate:  89.14 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      483 2564  886.7   2647   5032
Processing:    78  974  711.0    785   3233
Waiting:       36  824  653.3    709   3225
Total:       1190 3537  921.2   3343   6370

Percentage of the requests served within a certain time (ms)
  50%   3343
  66%   3601
  75%   3662
  80%   4458
  90%   4464
  95%   5357
  98%   6111
  99%   6230
 100%   6370 (longest request)

As you can see, I'm receiving a lot of failed requests:
Failed requests:247
   (Connect: 0, Receive: 0, Length: 247, Exceptions: 0)
Non-2xx responses:  500

These failed requests only appear when having more than 1 tomcat server
configured in haproxy.cfg. HAProxy.cfg has the following configuration for
the tomcat servers:

backend tomcat_servers
# Enable HTTP connection closing on the server side
option http-server-close

# This value will be checked in incoming requests, and the first operational
# server possessing the same value will be selected.
cookie JSESSIONID prefix

# Servers
server tomcat1 192.168.1.1:8080 cookie tomcat1 check
server tomcat2 192.168.1.2:8080 cookie tomcat2 check

I've done some testing and I'm also seeing a big difference in requests
per second when running ab via HAProxy (25.58 [#/sec]) or directly to the
server (466.68 [#/sec]).

HAProxy uses less than 50 percent CPU when running ab.

Hope someone can help me figure out what is causing this issue.

Thanks!

Martin


Re: Client Certificate

2014-07-03 Thread Martin van Diemen
Hi Lukas,

Thank you for making this clear. I ended up adding another public IP
just for SSL client certificate authentication.

Groeten,

Martin


On Tue, Jul 1, 2014 at 3:17 PM, Lukas Tribus luky...@hotmail.com wrote:

 Hi Martin,


  Hi,
 
  I'm trying to configure HAProxy so that on one specific domain users
  authenticate with a SSL Client certificate.
 
  The Load Balancer has one public IP address and has a frontend
  configured which is bind to port 443:
  bind *:443 ssl crt ./haproxy/
 
  I selected the correct backend as follows:
  use_backend secure_servers if { ssl_fc_sni secure.domain.tld
 ssl_fc_has_crt }
 
  default_backend default_servers
 
  When changing bind to verify the ssl certificate, all other ssl traffic is
  no longer allowed:
  bind *:443 ssl crt ./haproxy/ ca-file ./ca.pem verify required
 
  A solution would be to create another frontend with an additional
  public IP address but I want to prevent this if possible.
 
  How can I only require a SSL Client certificate on the
 secure.domain.tld?

 You cannot, this is not currently supported.


 The only workaround here is to put another proxying layer in tcp mode in
 front of your current deployment, enabling you to switch to a different
 backend -- second layer frontend combination according to the SNI value
 (req.ssl_sni [1] in this case, since you are not using SSL termination on
 the
 first proxy tier).

 (and you could use the recently implemented abstract namespaces for 1st
 tier
 backend - 2nd tier frontend connection).





 Regards,

 Lukas



 [1]
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.5-req.ssl_sni
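The two-tier workaround Lukas describes can be sketched as follows (assumption-heavy example: socket paths, certificate paths, and the backend names reuse those from the original question). A tcp-mode first tier inspects the SNI without terminating TLS, and only the secure host is routed to a second frontend that binds with `verify required`:

```
frontend tier1
    mode tcp
    bind *:443
    tcp-request inspect-delay 5s
    # wait until a TLS ClientHello is buffered before routing
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend to_secure if { req.ssl_sni -i secure.domain.tld }
    default_backend to_default

backend to_secure
    mode tcp
    server loop1 unix@/var/run/hap_secure.sock send-proxy

backend to_default
    mode tcp
    server loop2 unix@/var/run/hap_default.sock send-proxy

# second tier: only this frontend demands a client certificate
frontend tier2_secure
    bind unix@/var/run/hap_secure.sock accept-proxy ssl crt ./haproxy/ ca-file ./ca.pem verify required
    default_backend secure_servers

frontend tier2_default
    bind unix@/var/run/hap_default.sock accept-proxy ssl crt ./haproxy/
    default_backend default_servers
```

The proxy protocol (`send-proxy`/`accept-proxy`) preserves the client IP across the internal hop, as suggested in the reply above.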



Client Certificate

2014-07-01 Thread Martin van Diemen
Hi,

I'm trying to configure HAProxy so that on one specific domain users
authenticate with a SSL Client certificate.

The Load Balancer has one public IP address and has a frontend configured
which is bind to port 443:
bind *:443 ssl crt ./haproxy/

I selected the correct backend as follows:
use_backend secure_servers if { ssl_fc_sni secure.domain.tld ssl_fc_has_crt
}

default_backend default_servers
When changing bind to verify the ssl certificate, all other ssl traffic is no
longer allowed:
bind *:443 ssl crt ./haproxy/ ca-file ./ca.pem verify required

A solution would be to create another frontend with an additional public IP
address but I want to prevent this if possible.

How can I only require a SSL Client certificate on the secure.domain.tld?

Many thanks!

Martin


unsubscribe

2014-04-08 Thread Martin Karbon





Re: Haproxy plus ices protocol (99.999% similar to HTTP)

2012-06-24 Thread Martin Konecny
Hi Willy,

Here is my attempt at a patch using the changes you suggested. I'm not a C
programmer (I work with interpreted languages), so please forgive any
mistakes. I tried to stay as close as possible to the coding style of your
existing code.

This patch was diff'ed against the latest commit in the master branch of
haproxy-1.4

I've tried this code, and it fixes my problem. Icecast clients that use
ICE/1.0 in their Request header are successfully forwarded to their
destination.

Martin

On Fri, Jun 22, 2012 at 11:21 AM, Willy Tarreau w...@1wt.eu wrote:

 On Fri, Jun 22, 2012 at 11:03:09AM -0400, Martin Konecny wrote:
  Hi Willy,
 
  Interesting patch I found. Seems another user scratched his own itch.
 
 
 https://github.com/kjwierenga/haproxy-icey/commit/b56a3ead05fa6704ad88ccfc88053e9dac6c3ac7
 
  It doesn't seem to be as comprehensive as the one you suggested though.

 Indeed, it does the first half of the job, but the version parsers later
 will
 randomly work or fail if you don't replace the protocol.

  I'll take a look into both solutions and get back to you.

 OK.

 Regards,
 Willy




0001-Allow-icecast-to-work-with-HAProxy.patch
Description: Binary data


Re: Haproxy plus ices protocol (99.999% similar to HTTP)

2012-06-24 Thread Martin Konecny
I wouldn't mind fixing it up with the suggestions you made. It gives me a
good excuse to brush up my C skills :). Give me a few days :)

Martin

On Sun, Jun 24, 2012 at 12:25 PM, Willy Tarreau w...@1wt.eu wrote:

 Hi Martin,

 On Sun, Jun 24, 2012 at 11:58:41AM -0400, Martin Konecny wrote:
  Hi Willy,
 
  Here is my attempt at a patch using the changes you suggested. I'm not a
 C
  programmer (I work with interpreted languages), so please forgive any
  mistakes.

 No problem !

   I tried to stay as close as possible to the coding style of your
   existing code.

 Very much appreciated, thanks (and I know what it's like to adapt to
 someone
 else's style) !

  This patch was diff'ed against the latest commit in the master branch of
  haproxy-1.4

 OK anyway I'll only apply the change to 1.5-dev. 1.4 is in maintenance and
 should not receive such new features.

  I've tried this code, and it fixes my problem. Icecast clients that use
  ICE/1.0 in their Request header are successfully forwarded to their
  destination.

 OK and I can imagine the server responds correctly as well ?

 It's nice then, I think we should not apply the fix systematically but we
 should rely on an option for this so that existing setups are not suddenly
 exposed to ICE/* requests that were previously not expected to pass
 through. But that's just a minor issue. I think we could have an option
 such as option http-replace-ice or something like this, valid in both
 the frontend and the backend so that whichever detects it first will apply
 it.

 Also, now we know the hack works, we need to check for the protocol
 matching
 ICE/ before performing the transformation, so the following part :

/* 4b. We may have to convert ICE/1.0 requests to HTTP/1.0 */
if (unlikely(msg->sl.rq.v_l == 7) &&
    !http_upgrade_ice10_to_httpv10(req, msg, txn))
        goto return_bad_req;

 will become :

/* 4b. We may have to convert ICE/1.0 requests to HTTP/1.0 */
 if (unlikely(msg->sl.rq.v_l == 7) &&
     memcmp(&msg->sol[msg->sl.rq.v], "ICE/", 4) == 0 &&
     !http_upgrade_ice10_to_httpv10(req, msg, txn))
         goto return_bad_req;

 What would be nice is to have a few sentences to put into the doc to
 explain
 when this change is useful (eg: between which client and which server it
 was
 seen as necessary and was validated).

 I can help you finish the patch if you don't totally feel at ease with it,
 but till now it's good work.

 Cheers,
 Willy




Re: Haproxy plus ices protocol (99.999% similar to HTTP)

2012-06-22 Thread Martin Konecny
Hi Willy,

Interesting patch I found. Seems another user scratched his own itch.

https://github.com/kjwierenga/haproxy-icey/commit/b56a3ead05fa6704ad88ccfc88053e9dac6c3ac7

It doesn't seem to be as comprehensive as the one you suggested though.
I'll take a look into both solutions and get back to you.

Martin

On Fri, Jun 22, 2012 at 2:12 AM, Willy Tarreau w...@1wt.eu wrote:

 On Fri, Jun 22, 2012 at 01:51:12AM -0400, Martin Konecny wrote:
  Hi Willy,
 
  I can only answer your question by saying that other clients that use
 this
  protocol but replace ICE/1.0 with HTTP/1.0 have no problem with HAProxy.
 
  It seems that those other clients realized it wasn't a good idea to
 change
  that part for no good reason :).
 
 
  I thought I had found a solution when I started playing with reqrep
  (replace ICE with HTTP) but then I noticed only valid HTTP requests
 were
  being passed through this operator.

 exactly.

  Any other ideas? There is no official documentation, but this post
  http://stackoverflow.com/a/9985297/276949 should give you a brief
 overview.

 Thanks, that's useful information. From the info there and on the forum
 linked to from there, there are incompatibilities. The link above suggests
 that the server responds with HTTP/1.0 200 OK. The other link says it
 responds with ICY 200 OK. None of the links suggest any form of
 keep-alive
 either. So I think that some experimentation is required.

 If you're willing to make a few changes to the code, here's what I'm
 suggesting :

 1) add the I, C and E letters to http_is_ver_token[] in proto_http.c, so
 that
   the protocol is not rejected anymore in requests nor responses.

 2) in http_wait_for_request(), after the HTTP/0.9 to 1.0 conversion,
   add this to convert from ICE/1.0 to HTTP/1.0 :

/* 4. We may have to convert ICE/1.0 requests to HTTP/1.0 */
if (unlikely(msg->sl.rq.v_l == 7) &&
    !http_upgrade_ice10_to_httpv10(txn))
        goto return_bad_req;

 3) duplicate http_upgrade_v09_to_v10() and call the new one
   http_upgrade_ice10_to_httpv10(). Make it transform only the version
   tag from ICE/1.0 to HTTP/1.0.

 The rest of the processing would then remain unaffected since the request
 would have been turned very early into HTTP/1.0.

 If this works, we'll look how to more reliably implement this.

 Regards,
 Willy




Haproxy plus ices protocol (99.999% similar to HTTP)

2012-06-21 Thread Martin Konecny
Hello,

The ices protocol is based on HTTP and is used for online streaming. In the
early days the specification for this protocol was to use

 1 GET /serv/login.php?lang=en&profile=2 *ICE*/1.0
 2 Host: www.mydomain.com
 3 User-agent: my small browser
 4 Accept: image/jpeg, image/gif
 5 Accept: image/png

As you can see, the bolded part is the only difference between this and
HTTP. I guess the designer realized this was a bone headed move and almost
every modern icecast client today uses HTTP in its request header. However
I need it to work for older clients as well.

My question is, how can I get haproxy to accept this protocol? option
accept-invalid-http-request doesn't seem to be what I want, and I tried
using mode tcp until I realized I couldn't because my config file uses

use_backend <backend> if <condition> (which requires mode http).

My config file is fairly simple if this helps:

frontend http-in-8001
bind *:8001

acl martin11_master_acl url -i /martin11_master
use_backend martin11_master if martin11_master_acl

backend martin11_master
#mode tcp
server martin11_master 192.168.1.147:8002


Martin


Re: Haproxy plus ices protocol (99.999% similar to HTTP)

2012-06-21 Thread Martin Konecny
Hi Willy,

I can only answer your question by saying that other clients that use this
protocol but replace ICE/1.0 with HTTP/1.0 have no problem with HAProxy.

It seems that those other clients realized it wasn't a good idea to change
that part for no good reason :).


I thought I had found a solution when I started playing with reqrep
(replace ICE with HTTP) but then I noticed only valid HTTP requests were
being passed through this operator.

Any other ideas? There is no official documentation, but this post
http://stackoverflow.com/a/9985297/276949 should give you a brief overview.

Martin


On Fri, Jun 22, 2012 at 1:37 AM, Willy Tarreau w...@1wt.eu wrote:

 Hello Martin,

 On Thu, Jun 21, 2012 at 07:49:13PM -0400, Martin Konecny wrote:
  Hello,
 
  The ices protocol is based on HTTP and is used for online streaming. In
 the
  early days the specification for this protocol was to use
 
   1 GET /serv/login.php?lang=en&profile=2 *ICE*/1.0
   2 Host: www.mydomain.com
   3 User-agent: my small browser
   4 Accept: image/jpeg, image/gif
   5 Accept: image/png
 
  As you can see, the bolded part is the only difference between this and
  HTTP. I guess the designer realized this was a bone headed move and
 almost
  every modern icecast client today uses HTTP in its request header.
 However
  I need it to work for older clients as well.
 
  My question is, how can I get haproxy to accept this protocol? option
  accept-invalid-http-request doesn't seem to be what I want, and I tried
  using mode tcp until I realized I couldn't because my config file uses
 
  use_backend <backend> if <condition> (which requires mode http).
 
  My config file is fairly simple if this helps:
 
  frontend http-in-8001
  bind *:8001
 
  acl martin11_master_acl url -i /martin11_master
  use_backend martin11_master if martin11_master_acl
 
  backend martin11_master
  #mode tcp
  server martin11_master 192.168.1.147:8002

 You won't manage to make it pass through in HTTP mode without modifying
 the code. I have a few other questions, what does the response look like ?
 Also, does the protocol support content-length, transfer-encoding and
 content-encoding ? Does it use the same status codes ? Does it support
 metadata only requests (HEAD) ? Does it support metadata only responses
 (204, 304) ? Does it support interim responses (100, 101) ? Does it
 support any form of keep-alive ? Is it compliant to HTTP/1.0 ? To HTTP/1.1
 ?

 All these questions are very important, because the apparent syntax does
 not make a protocol compatible with HTTP. The semantics are much stronger
 than that.

 Regards,
 Willy




Re: loop in 1.5-dev4 ?

2011-03-22 Thread Martin Kofahl
Thank you very much for fixing this!

Martin

 Original-Nachricht 
 Datum: Tue, 22 Mar 2011 11:57:24 +0100
 Von: David du Colombier dducolomb...@exceliance.fr
 An: Martin Kofahl m.kof...@gmx.net
 CC: haproxy@formilux.org
 Betreff: Re: loop in 1.5-dev4 ?

 Hi,
 
 On Tue, 22 Mar 2011 07:36:17 +0100
 Willy Tarreau w...@1wt.eu wrote:
 
  On Mon, Mar 21, 2011 at 12:52:23PM +0100, Martin Kofahl wrote:
   With 1.4 the backend works fine, but with 1.5-dev4 haproxy connects
    itself instead of the backend. Can you please verify this behavior?
  
  Sounds like the server's address was resolved as 0.0.0.0 and that
  the system is connecting to itself :-(
  
  Could you please run that through strace and send the output. The
  only possible thing I see was that I broke something when merging
  the IPv6 work, maybe sometimes an address is not filled before a
  connect().
 
 I can confirm this behavior.
 
 This bug was indeed caused by the recent IPv6 change.
 
  In backend.c, in the function assign_server_address, if a server has no
  address, it is replaced by the address the client asked for.
 
 Since the IPv6 introduction, I added a function is_addr which returns
 1 when an address is not empty.
 
  A mistake in this function caused the return value for IPv4 addresses
  to be inverted. Therefore the server address was incorrectly replaced
  by the address requested by the client.
 
 In your case, you observed a loop because your server used the same
 port as HAProxy.
 
  Please find attached the small patch which fixes this problem.
 
 -- 
 David du Colombier
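
(Archive note: the inverted test is easier to picture outside C -- a tiny
Python sketch of the logic David describes, with hypothetical helper names,
not the actual haproxy source.)

```python
# Simplified illustration of the inverted test described above
# (hypothetical names -- not the actual haproxy source).

def is_addr_buggy(addr: str) -> bool:
    """BUG: inverted -- reports a configured address as empty and vice versa."""
    return addr == ""

def is_addr_fixed(addr: str) -> bool:
    """Correct: a non-empty address means the server has one configured."""
    return addr != ""

def pick_destination(server_addr: str, client_dest: str, is_addr) -> str:
    # Mirrors what assign_server_address() does in backend.c: when the server
    # has no address of its own, fall back to the address the client asked for.
    return server_addr if is_addr(server_addr) else client_dest

FRONTEND = "10.a.b.c"  # the address haproxy itself listens on
SERVER = "10.d.e.f"    # the address on the 'server DEF' line

# With the fix, the configured address wins; with the bug, haproxy
# falls back to its own frontend address and connects to itself.
print(pick_destination(SERVER, FRONTEND, is_addr_fixed))  # 10.d.e.f
print(pick_destination(SERVER, FRONTEND, is_addr_buggy))  # 10.a.b.c (loop)
```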



loop in 1.5-dev4 ?

2011-03-21 Thread Martin Kofahl
Hi,
I tried the following (shortened) configuration with the latest snapshot 
1.5-dev4-20110317 and plain 1.5-dev4.

frontend FRONT
mode http
bind 10.a.b.c:80
default_backend BACK

backend BACK
mode http
server DEF 10.d.e.f:80


With 1.4 the backend works fine, but with 1.5-dev4 haproxy connects itself in 
stead of the backend. Can you please verify this behavior?


Debug log (shortened) haproxy 1.4.4
:FRONT.accept(0005)=0007 from [10.x.y.z:2028]
:FRONT.clireq[0007:]: GET /fff HTTP/1.1
:FRONT.clihdr[0007:]: Host: 10.a.b.c
:BACK.srvrep[0007:0008]: HTTP/1.1 404 Not Found

Debug log (shortened) haproxy 1.5-dev4
0001:FRONT.accept(0005)=0007 from [10.x.y.z:2034]
0001:FRONT.clireq[0007:]: GET /fff HTTP/1.1
0001:FRONT.clihdr[0007:]: Host: 10.a.b.c
0002:FRONT.accept(0005)=0009 from [10.a.b.c:38839] --- wrong!!
0002:FRONT.clireq[0009:]: GET /fff HTTP/1.1
0002:FRONT.clihdr[0009:]: Host: FRONT
0002:FRONT.clihdr[0009:]: X-Forwarded-For: 10.x.y.z
0002:FRONT.clihdr[0009:]: Connection: close
0003:FRONT.accept(0005)=000b from [10.a.b.c:38840]
0003:FRONT.clireq[000b:]: GET /fff HTTP/1.1
0003:FRONT.clihdr[000b:]: Connection: close
0003:FRONT.clihdr[000b:]: Host: FRONT
0003:FRONT.clihdr[000b:]: X-Forwarded-For: 10.a.b.c
...


Kind regards
Martin



Re: proper way to use an acl + stick-table to filter based on conn_cur

2011-03-17 Thread Martin Kofahl
Do you know what the differences are between having the stick-table on 
the frontend and on the backend?


Am I right in assuming that unused keep-alive connections would be 
counted on the frontend only, and in-use connections only if the 
stick-table is on the backend (option http-server-close)?


Martin

On 15.03.2011 23:44, Cory Forsyth wrote:

Interesting...

I was able to get it to work using a stick-table on the front-end, as 
bartavelle mentioned from this URL:

http://tehlose.wordpress.com/2010/12/15/fun-stuff-with-latest-haproxy-version/

I don't know enough C to dig into the code to check on that, though.

On Tue, Mar 15, 2011 at 4:34 PM, Cyril Bonté cyril.bo...@free.fr wrote:


Hi Willy and Cory,

Le mardi 15 mars 2011 22:17:50, Willy Tarreau a écrit :
  Whether I use src_conn_cur or sc1_conn_cur, with or without
the table
  argument, this does not work. No matter how many concurrent
connections
  per ip in the stick table, they never get denied.
 
  Any suggestions?

 At first glance, I cannot spot anything wrong.

I think there's a bug in the function acl_fetch_src_conn_cur() :
its code contains return acl_fetch_conn_cnt(...)
where it probably should be return acl_fetch_conn_cur(...)

Sorry, I can't test it tonight but maybe this can help you.

--
Cyril Bonté
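
(Archive note: for readers arriving from search, a minimal frontend-side
sketch of the conn_cur pattern this thread discusses, in 1.5-dev-era syntax --
the table size, expiry and the threshold of 20 are arbitrary; check the
documentation for your version.)

```
frontend www
    bind *:80
    stick-table type ip size 200k expire 30s store conn_cur
    tcp-request connection track-sc1 src
    tcp-request connection reject if { src_conn_cur gt 20 }
    default_backend app
```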






unsubscribe

2010-08-18 Thread Martin Korte




share number of backend-server-connections among backend configurations

2010-06-16 Thread Martin Kofahl

Hi,
I have some backend servers (e.g. A and B) in multiple backends (B runs some 
special sites, that's why). The algorithm is least connection. But the 
information about the number of active connections is not shared between the 
backend configurations, even though the servers have the same name. So if A has 10 
connections in backend TWO, backend ONE will still see A as unused with 0 
connections. Using the least connection algorithm I would expect connection 
numbers to be counted overall.


Sample configuration

frontend myfrontend *:80
  acl acl_site1 url_sub 
  use_backend TWO if acl_site1 
  default_backend ONE


backend ONE
  server A ...
  server B ...

backend TWO
  server A ...


What can I do to use the least conn algorithm in this setup?
Thank you! Martin
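
(Archive note: one later-version direction for this, sketched only -- haproxy
1.5+ has a use-server directive that pins matching requests to one server
inside a single backend, so leastconn counts every connection in one place.
The addresses and the ACL value are hypothetical.)

```
backend ALL
    mode http
    balance leastconn
    acl acl_site1 url_sub site1
    use-server A if acl_site1
    server A 10.0.0.1:80 check
    server B 10.0.0.2:80 check
```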




share number of server-connections among backends

2010-06-11 Thread Martin Kofahl
Hi,
I have some backend servers (e.g. A and B) in multiple backends (B runs some 
special sites, that's why). The algorithm is least connection. But this 
information is not shared between the backends, even though the servers have the 
same name. So if A has 10 connections in backend TWO, backend ONE will still 
see A as unused with 0 connections. Using the least connection algorithm I 
would expect connection numbers to be counted overall.


Sample configuration

frontend myfrontend *:80
   acl acl_site1 url_sub 
   use_backend TWO if acl_site1 
   default_backend ONE

backend ONE
   server A ...
   server B ...

backend TWO
   server A ...


What can I do to use the least conn algorithm in this setup?
Thank you! Martin



Fw: Notification of Protection for Google Search Right About formilux

2010-03-18 Thread Martin
Dear , 


I got your email address from my colleague, saying that you are the person in 
charge. We are a professional internet service provider organization in Asia, 
having the business around the world. Recently we have received an application 
from one of our customer Loquen LTD, who have been claiming  to register 
formilux as their company's Google's Trademark Keyworld which can be used in 
google searching service. The successful registration will directly affect the 
search result on Google. And we know that your company is the owner of this 
keyword, so we need to contact you to see if you have any relationship with 
them or you consigned them to use this keyword. If it is true, we will then 
complete their registration within 3 workdays, if you do not have any 
relationship with them, please let us know ASAP in order to protect your 
interests.


Best Regards,


Martin
Auditing Department Director
HongKong Net 


Tel: 00852-3060 6608 
Fax: 00852 - 3072 3949 
Email mar...@hknetos.com  
Web: http://www.hknet.com 


Re: Being sticky to a host, but not to a port

2010-02-25 Thread Martin Aspeli

Willy Tarreau wrote:

Hi Martin,

On Thu, Feb 25, 2010 at 11:53:16AM +0800, Martin Aspeli wrote:

Hi,

We're contemplating a design where we'll have two servers, A and B. On
each, we'll have 8 instances of our application running, on ports
8081-8088. This is mainly to effectively use multiple cores.

The application uses cookie-based sessions. We are able to share session
among the 8 instances on each machine, but not across the two machines.

A is the master server, where an HAProxy instance will run. B is a
slave. Eventually, we'll want to be able to fail HAProxy over to B, but
let's not worry about that yet.

So, what we'd like is this:

  - A request comes in to HAProxy on A

  - If it's an unknown user, HAProxy picks the least busy of the 16
instances on either A or B. We'd probably use prefixed cookies, so the
user would be unknown until they logged in.

  - If it's a known user that's previously been to A, HAProxy picks the
least busy of the 8 instances on A

  - Ditto, if it's a known user that's previously been to B, HAProxy
picks the least busy of the 8 instances on B

Is this possible? I could imagine doing it with three HAProxy instances
(so the first one picks the server and the second one picks the node),
but that feels overly complex.


Well, your request is already quite complex, which is proven by the number
of rules you gave above to explain what you want :-)


I could think of more complex rules if you'd like. ;-)


I see 3 backends in your description because you enumerate 3 algorithms :
   - the least busy of the 16
   - the least busy of the 8 A
   - the least busy of the 8 B


Okay, that makes sense.


The idea would then be to have a default backend which gets requests
without cookies, and one backend per other group. I don't know if you
consider that all 8 instances of one host are always in the same state
(up/down) or if they can be independent. Let's consider them independent
for now.


They would be independent, i.e. we may take one down for maintenance or 
a rolling release (or it could crash).



Also you must be very careful with your cookie in prefix mode : we have
to remove it in the two other backends before passing the request to the
server. However we don't want to stick on it otherwise the first server
of each farm would always get the connections. Thus, the idea is to set
the cookie in prefix mode but not assign any cookie to the servers. That
way no server will be found with that cookie value and the load balancing
will happen. However you must ensure that your servers will not set the
cookie again later, otherwise it will be sent without any prefix to the
client and the stickiness will be lost.


That's clever.

When is the prefix applied and stripped in this case?


Another solution would be to simply use cookie insertion mode.


Which option are you illustrating in the sample config below? I'm not 
sure I fully understand the implications of using prefix vs. insertion 
mode here.


 That way

you don't have to worry whether your application will set the cookie
again or not.


So, in fact most users will be anonymous (no session) and don't strictly 
need to be sticky to a server, even. When they log in, a cookie 
(beaker.session) is set by the application. The application won't trip 
up on other cookie values.



frontend www
acl a_is_ok nbsrv(bk_a) gt 0
acl b_is_ok nbsrv(bk_b) gt 0
acl cook_a  hdr_sub(cookie) SRV=A
acl cook_b  hdr_sub(cookie) SRV=B
use_backend bk_a if a_is_ok cook_a
use_backend bk_b if b_is_ok cook_b
default_backend bk_all

backend bk_all
# cookie-less LB, or catch-all for dead servers
balance leastconn
cookie SRV prefix
# or use this one : cookie SRV insert indirect nocache
option redispatch
server srv_a_1 1.1.1.1:8081 cookie A track bk_a/srv_a_1
...
server srv_a_8 1.1.1.1:8088 cookie A track bk_a/srv_a_8
server srv_b_1 1.1.1.2:8081 cookie B track bk_b/srv_b_1
...
server srv_b_8 1.1.1.2:8088 cookie B track bk_b/srv_b_8

backend bk_a
# Cookie: SRV=A
balance leastconn
option redispatch
server srv_a_1 1.1.1.1:8081 check
...
server srv_a_8 1.1.1.1:8088 check

backend bk_b
# Cookie: SRV=B
balance leastconn
option redispatch
server srv_b_1 1.1.1.2:8081 check
...
server srv_b_8 1.1.1.2:8088 check

I strongly suggest enabling the stats page on such a config,
because it will not be easy to understand what is happening
during the first tests.


Thanks for the suggestions!

Martin

--
Author of `Professional Plone Development`, a book for developers who
want to work with Plone. See http://martinaspeli.net/plone-book
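
(Archive note: the stats page Willy recommends can be enabled with a few
lines -- a sketch using a dedicated listener; the port and URI are arbitrary.)

```
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s
```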




Re: Being sticky to a host, but not to a port

2010-02-25 Thread Martin Aspeli

Hi,

Willy Tarreau wrote:


The prefix is :
   - always stripped from any request by a backend which declares
 cookie XXX prefix

   - always applied to any response by a backend which declares
 cookie XXX prefix when a server returns such a cookie AND
 has a cookie value assigned.

The issue can come from servers receiving an expired session and
wanting to reinitialise a new one. They will then emit a new
set-cookie with a new cookie value which will not be prefixed,
and the stickiness will be lost.


So, if the user's session expired, that's probably OK. Their session 
will be destroyed anyway, so if they go to a new backend, that won't matter.


Since the prefix/cookie in your example below is set on the way in, I 
assume that on the next request (after the session has been re-created), 
they'd go to the same backend?



Another solution would be to simply use cookie insertion mode.

Which option are you illustrating in the sample config below? I'm not
sure I fully understand the implications of using prefix vs. insertion
mode here.


Below is the prefix mode. If you want to work in insertion mode,
remove all cookie XXX prefix lines and uncomment the
cookie XXX insert line.


Right, thanks!


That way
you don't have to worry whether your application will set the cookie
again or not.

So, in fact most users will be anonymous (no session) and don't strictly
need to be sticky to a server, even. When they log in, a cookie
(beaker.session) is set by the application. The application won't trip
up on other cookie values.


I see, but given that you already want to stick to a farm anyway, you
have to set that. It should not be much trouble, because anonymous
users will just stick to a farm, not to a server.


Yes, that was my simplification. :)


You could even improve
by using cookie XXX insert indirect postonly. It will only add a cookie
on responses to POST requests, which most often are the first login
request.


Good plan.

Thanks a lot!

Martin


--
Author of `Professional Plone Development`, a book for developers who
want to work with Plone. See http://martinaspeli.net/plone-book




Re: Load balacing based on XML tag

2010-02-23 Thread Martin Kofahl
Hi,
you can try using balance url_param <param> check_post. The check_post 
parameter lets haproxy inspect the request body, too. You may have to specify 
how many bytes haproxy will read. However, performance will suffer.

Martin
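
(Archive note: a sketch of what Martin describes -- the parameter name
session_id and the byte count are made up; see the balance url_param
documentation for your version.)

```
backend ws
    mode http
    balance url_param session_id check_post 8192
    server ws1 10.0.0.11:80 check
    server ws2 10.0.0.12:80 check
```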

 Original-Nachricht 
 Datum: Wed, 24 Feb 2010 17:44:14 +1100
 Von: Dan Nguyen dan.ngu...@salmat.com.au
 An: haproxy@formilux.org
 Betreff: Load balacing based on XML tag

 Hi All,
 
  
 
   I have a question. Can HAProxy load balance based on a XML tag inside
 a web services request?
 
  
 
 Thanks,
 
 Dan.
 
 
 
 



haproxy opens random udp port

2010-01-25 Thread Martin Kofahl
Hi,
I'm testing haproxy 1.4-dev6 and wonder about a randomly chosen (but above 
32000?) UDP port opened by haproxy. Haproxy binds to tcp/80 and has syslog 
logging to 127.0.0.1:514 enabled. What is this port for?

# lsof -iUDP -P
haproxy   19534  haproxy5u  IPv4 4962049   UDP *:32773

Martin



session taking long

2009-03-10 Thread Martin Karbon

Hi
I use haproxy to connect to two webservers with cookies. The connection 
works really fast and the application loads its container window, but to 
get the login screen clients usually have to wait 30-40 seconds; after 
that everything works fast again. If I connect to the application 
directly it takes at most 10 seconds. I tried to fiddle with the 
configuration but still no result.

Anyone an idea ?




stats socket problem

2009-01-21 Thread Martin Karbon

Hi
I am relatively new to this great software and I am having problems  
with the stats socket feature: it won't write the haproxy.stat file no  
matter what.
I wanted to try to write some remote script that checks the  
connection distribution every n seconds...

my configuration file is as follows
r...@balancix1:~# cat /etc/haproxy.cfg
global
log 127.0.0.1   syslog
log 127.0.0.1   kern notice
stats socket /var/run/haproxy.stat mode 600
user haproxy
group haproxy
maxconn 4096
defaults
log global
mode http
option httplog
option dontlognull
retries 3
redispatch
maxconn 2000
contimeout 5000
clitimeout 5
srvtimeout 5

listen ias 192.168.0.250:80
mode http
stats enable
balance roundrobin
option httpclose
server ias1 192.168.0.201:80 check
server ias2 192.168.0.202:80 check

What am I doing wrong? Thanking you in advance.

Martin
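
(Archive note: a common confusion -- the path given to stats socket is a
UNIX socket that haproxy listens on, not a file it writes. You query it with
a tool such as socat, assuming one is installed and haproxy is running:)

```
echo "show stat" | socat stdio unix-connect:/var/run/haproxy.stat
```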


This message was sent using IMP, the Internet Messaging Program.